\section{Introduction}
FU Ori objects were originally identified as systems in star-forming regions that exhibit large outbursts at optical wavelengths \citep{Ambart1971, Herbig1977, Hartmann1996, Audard2014}. These outbursts have been explained by the onset of rapid mass accretion, rising from quiescent states of 10$^{-8}$--10$^{-7}$ M$_{\odot}$ yr$^{-1}$ to as much as 10$^{-4}$ M$_{\odot}$ yr$^{-1}$ \citep{Hartmann1996}. In their high states, FUors can also exhibit low-amplitude photometric/spectroscopic variability on $\sim$weekly timescales \citep{Siwak2018, Takagi2018}. Such high accretion rates, which can last for decades, may explain how stars accrete much of their mass, up to 10$^{-2}$ M$_{\odot}$ for a single outburst \citep{Hartmann1996}.
The causes of these outbursts are not clearly understood. Several explanations, including perturbations due to nearby companions, thermal instabilities, and gravitational instabilities, have all been proposed \citep{B&B1992, Clarke2005, PP6, Hartmann1996}. Companion perturbations, while an attractive solution for particular systems with specific orbital parameters, fail to explain many FUor events for isolated stars. Thermal and/or gravitational instabilities are more attractive alternatives for the underlying mechanism(s) behind FUor outbursts.
Thermal instabilities can occur when high disk opacities trap heat, leading to a runaway temperature increase within the disk. This heightened temperature then increases the effective disk viscosity, which in turn leads to high accretion rates \citep{Bell1994}. Such an instability is considered necessary to explain the fast rise times of FU Ori and V1057 Cyg during their initial outbursts \citep{Audard2014}. However, thermal instabilities depend significantly
on the disk viscosity and temperature, which are generally difficult to determine \citep{Vorobyov2005}.
Gravitational instabilities are thought to occur when mass infalling from the surrounding envelope builds up in the disk, becoming gravitationally unstable. These instabilities can manifest in several ways, including magnetorotational instabilities \citep[e.g.][]{Armitage2001, Zhu2010} and/or clumps of material that then accrete onto the host star \citep[e.g.][]{Vorobyov2005}. If the envelope continues to dump mass into the disk over time, outbursts can be repetitive, something thought to be generally true of FUor objects \citep{Hartmann1985}.
Despite the above caveats, it is possible that both thermal and gravitational instabilities work in concert to produce FUor events. However, both explanations require disk masses large enough to sustain high mass accretion rates (10$^{-4}$ M$_\odot$ yr$^{-1}$) for decades, or even hundreds of years, and to trigger the gravitational instability \citep[M$_{disk}$/M$_*\gtrsim0.1$, see][]{Hartmann1996, Liu2016, Cieza2018}. Some measurements of FUor disk masses have called into question whether disks are, in general, massive enough for these instabilities to fully explain the observed outbursts. For example, \citet{Dunham2012} and \citet{Kospal2016} found upper limits on the disk mass of the FUor object HBC 722 to be 0.02 M$_{\odot}$ and 0.01 M$_\odot$, respectively, likely too small for gravitational instabilities to explain its outburst \citep{Audard2014}. \citet[][hereafter F17]{Feher2017} also found that the disk masses of several FUor objects are quite low (e.g., 0.04 M$_{\odot}$ and $<$0.05 M$_{\odot}$ for V1515 Cyg and V710 Cas, respectively).
In order to gauge the viability of gravitational and/or thermal instabilities to explain FUor outbursts, accurate estimates of disk masses need to be made. This is not simple given that recent observations of FUor objects have shown that there may be up to 25--60\% variability at 1.3 mm \citep{Liu2018}. \citet{Liu2018} pointed to two possible explanations for the variability, including the perturbations of the thermal or density structure in the disk, as well as increased irradiation from the host star.
Coordinated millimeter and optical/NIR observations can provide insight into the underlying mechanism behind the observed millimeter variability. Most of the millimeter emission from FUor objects is thought to originate from the outermost regions of the disk, and traces cool, optically thin, millimeter-sized dust grains. The shorter-wavelength optical/NIR emission, by contrast, originates from the innermost, hottest regions of the disk. A simultaneous change at millimeter and optical/NIR wavelengths would indicate that such variation is likely due to temperature changes in the disk, and hence thermal instabilities. This would further imply that future millimeter observations of FUor objects could, in general, benefit from coordinated optical/IR observations so as to better constrain the disk's properties \citep{PP6}. On the other hand, observing solely a change in millimeter flux density would indicate that the millimeter emission is disconnected from perturbations in the thermal structure of the disk, perhaps pointing towards gravitational instabilities or other alternative mechanisms.
In the following, we present millimeter observations of six FUor objects and coordinated millimeter and optical observations for a subset of three of these objects in order to determine the manner in which they vary over time. Details of our observations, as well as our data reduction techniques can be found in Section \ref{sec:obs}. Section \ref{sec: results&analysis} describes our analysis of these observations and the results. We discuss our findings in Section \ref{sec:disc}.
\section{Observations and Data Reduction} \label{sec:obs}
\subsection{Sample}
Our entire sample consists of six known FUor objects (V1735 Cyg, V2494 Cyg, V2495 Cyg, V1057 Cyg, V1515 Cyg, and V733 Cep). All six targets had one observing run at 2.7 mm in May--June 2017. In this paper, we focus primarily on V1735 Cyg, V2494 Cyg, and V2495 Cyg, which were selected for two follow-up observing runs including both millimeter and optical observations in June and August 2018. V2494 Cyg and V2495 Cyg were chosen for follow-up observations because \citet{Liu2018} found tentative evidence for millimeter variability in these objects at 1.3 mm. V1735 Cyg was chosen because our 2017 data displayed variability relative to 2014 observations taken by F17. Our 2017 observations of V1057 Cyg, V1515 Cyg, and V733 Cep were consistent with 2014 flux densities reported by F17, so we elected not to carry out follow-up observations of these targets.
\subsection{Millimeter Observations} \label{sec: mm obs}
\begin{deluxetable*}{c c c c c c}[htp]
\setlength{\tabcolsep}{10pt}
\tablecaption{NOEMA Observation Log \label{tab:NOEMAObs}}
\centering
\tablehead{
\colhead{Object} & \colhead{RA/Dec} & \colhead{Date} & \colhead{Array Config.} & \colhead{Central Frequency} & \colhead{Distance} \\
\colhead{} & \colhead{(J2000)} & \colhead{} & \colhead{(No. of Antennas)} & \colhead{(GHz)} & \colhead{(pc)}
}
\startdata
V1735 Cyg & 21:47:20.66$^a$ & April 5, 2014 & 6Cq (6) & 109.918 & 616$^c$ \\
& +47:32:03.6 & June 11/12/14, 2017 & 8D-E10 (7)/8D (8)/8D-W12N13 (6) & 108.502 & \\
& & June 9/10, 2018 & 8ant-Special (8) & 106.744 & \\
& & August 14/15, 2018 & 8D-W05 (7) & 106.744 & \\
\hline
V2494 Cyg & 20:58:21.09$^a$ & May 31/June 11, 2017 & 8D-N09 (7)/8D-E10 (7) & 108.502 & 800$^c$ \\
& +52:29:27.7 & June 10, 2018 & 8ant-Special (8) & 106.744 & \\
& & August 14/15, 2018 & 8D-W05 (7) & 106.744 & \\
\hline
V2495 Cyg & 21:00:25.38$^b$ & June 10, 2017 & 8D (8) & 108.502 & 800$^d$ \\
& +52:30:15.5 & June 10, 2018 & 8ant-Special (8) & 106.744 & \\
& & August 14/15, 2018 & 8D-W05 (7) & 106.744 & \\
\hline
V1057 Cyg & 20:58:53.73$^a$ & May 31/June 2, 2017 & 8D-N09 (7) & 108.502 & 898$^c$ \\
& +44:15:28.38 & & & & \\
\hline
V1515 Cyg & 20:23:48.02$^a$ & June 3/4/7, 2017 & 8D-N09 (7) & 108.502 & 981$^c$ \\
& +42:12:25.78 & & & & \\
\hline
V733 Cep & 22:53:33.26$^a$ & June 5/8, 2017 & 8D-N09 (7)/8D (8) & 108.502 & 661$^c$ \\
& +62:32:23.63 & & & &
\enddata
\tablenotetext{}{$^a$\citet{Gaia}, $^b$\citet{Cutri2012}, $^c$\citet{Bailer2018}, $^d$\citet{Magakian2013}}
\end{deluxetable*}
Our millimeter observations were carried out using the NOrthern Extended Millimeter Array (NOEMA) on the Plateau de Bure, France, in May and/or June 2017 for all six objects in our sample. Three objects (V1735 Cyg, V2494 Cyg, and V2495 Cyg) were observed twice more, in June 2018 and August 2018, giving time baselines of one year (2017 to 2018) and two months (June to August 2018). We also used the April 2014 observations of V1735 Cyg from F17, which give a longer, three-year baseline for that object. Observing conditions were generally good throughout each observation, with stable precipitable water vapor typically below 10 mm; conditions during our 2017 observations were worse, with precipitable water vapor levels sometimes above 10 mm. More details of all observations can be found in Table \ref{tab:NOEMAObs}.
All continuum observations reported here were centered near 108 GHz ($\sim$2.8 mm), while the 2014 observations of V1735 Cyg from F17 were centered near 110 GHz ($\sim$2.7 mm). The 2017 observations used NOEMA's WideX correlator tuned from 106.743--110.261 GHz, which covered the $^{13}$CO(1--0) (110.201 GHz) and C$^{18}$O(1--0) (109.782 GHz) lines at 78.118 kHz resolution. Our 2018 observations used the new PolyFix correlator tuned from 87.399 to 95.117 GHz (lower sideband) and 102.886 to 110.603 GHz (upper sideband). The two frequency ranges allowed us to measure the continuum emission at two different wavelengths, 2.81 mm and 3.29 mm. These frequencies covered the lines mentioned above, as well as the $^{13}$CS(2--1) (92.494 GHz) and HCO$^+$(1--0) (89.189 GHz) lines, at a spectral resolution of 62.495 kHz.
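As a consistency check, the quoted channel widths translate directly into velocity resolution via $\Delta v = c\,\Delta\nu/\nu$. The short sketch below evaluates this at the line frequencies listed above; the per-channel values of $\sim$0.21 km/s are consistent with the $\sim$0.25 km/s resolution of the final data cubes (Section \ref{sec:obs}) after modest binning.

```python
# Channel width -> velocity resolution: dv = c * dnu / nu.
# Frequencies and channel widths are the values quoted in the text;
# this is a consistency check, not part of the reduction pipeline.
C_KMS = 299792.458  # speed of light [km/s]

def velocity_resolution(dnu_hz, nu_hz):
    """Velocity width (km/s) of one spectral channel at frequency nu."""
    return C_KMS * dnu_hz / nu_hz

# WideX (2017): 78.118 kHz channels at the 13CO(1-0) frequency
dv_widex = velocity_resolution(78.118e3, 110.201e9)   # ~0.21 km/s
# PolyFix (2018): 62.495 kHz channels at the HCO+(1-0) frequency
dv_polyfix = velocity_resolution(62.495e3, 89.189e9)  # ~0.21 km/s
print(f"WideX: {dv_widex:.3f} km/s, PolyFix: {dv_polyfix:.3f} km/s")
```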
\begin{deluxetable}{c c c}[h!]
\setlength{\tabcolsep}{15pt}
\centering
\tablecaption{Summary of Observed Molecular Lines \label{tab:MolInfo} }
\tablehead{
\colhead{Molecule} & \colhead{Line Frequency} & \colhead{Transition}
}
\startdata
$^{13}$CO & 110.201 GHz & J=1-0 \\
C$^{18}$O & 109.782 GHz & J=1-0 \\
$^{13}$CS & 92.494 GHz & J=2-1 \\
HCO$^+$ & 89.189 GHz & J=1-0
\enddata
\end{deluxetable}
Data reduction was done in GILDAS \citep{Pety2005, Gildas2013} using the NOEMA pipeline in the CLIC package. Minimal flagging was required to remove spurious data. The calibrated data were then transferred to CASA 5.3.0 \citep{McMullin2007} for cleaning and further analysis. The flux calibrator MWC 349 was used for all observations. J2037+511 was the phase calibrator for V2494 Cyg and V2495 Cyg, while J2201+508 was used for V1735 Cyg. MWC 349 is time variable at 3 mm by $<$10\%. Taking this and other factors into account, the nominal absolute flux calibration uncertainty of NOEMA at 2.7 mm is estimated to be 10\%.
We note that we cannot exclude the possibility that unaccounted-for instrumental errors (e.g., antenna pointing errors) also affected the flux calibration. We expect any such effect to have a lower impact than the nominal 10\% absolute flux calibration uncertainty; however, their impact is extremely difficult to quantify, and they could mean that all of our cited flux calibration uncertainties are underestimated.
Continuum data of V1735 Cyg, V2494 Cyg, and V2495 Cyg were first restricted to a \textit{uv} range of 30--65 k$\lambda$ in order to mitigate the effects of the different \textit{uv} coverages and ensure that the amount of missing flux is the same between epochs. This also has the effect of removing extended emission, thereby ensuring that the measured flux densities are solely of the central, compact source. These continuum data were imaged using the \textit{clean} task with natural weighting and then convolved to a $4\arcsec\times4\arcsec$ beam. After cleaning, but before the \textit{uv} restriction and beam convolution, our angular resolution was about 3--4$\arcsec$; after restricting the \textit{uv} range to 30--65 k$\lambda$, it improved to about 2--3$\arcsec$; and after beam convolution, the resolution was a constant 4$\arcsec$.
The resulting continuum images show compact emission for all the sources, and the morphology of this continuum emission remained unchanged between epochs. No source was resolved, either fully or marginally, at 2.7 mm, before or after the \textit{uv} restriction or beam convolution. Continuum data of V1057 Cyg, V1515 Cyg, and V733 Cep were cleaned in the same manner, but were neither \textit{uv}-restricted nor convolved to a $4\arcsec\times4\arcsec$ beam. The line data were also processed using the \textit{clean} task and natural weighting, but were corrected using \textit{imcontsub} to remove continuum emission. The spectral resolution for all data cubes was about 0.25 km/s. Continuum rms values were obtained from emission-free regions, whereas line rms values were determined using emission-free channels. Note that the continuum rms values therefore include the effects of thermal and atmospheric phase noise, as well as some contribution from the imaging reconstruction process due to the limited \textit{uv} coverage, whereas the line rms values do not include the latter effect.
\begin{deluxetable*}{ccccc}[h!]
\setlength{\tabcolsep}{15pt}
\tablecaption{Millimeter Continuum Flux Densities \label{tab:ContFluxes} }
\centering
\tablehead{
\colhead{Object} & \colhead{Date} & \colhead{I$_{\nu,Peak}$} & \colhead{S$_{\nu,Gauss}$} & \colhead{rms} \\
\colhead{} & \colhead{} & \colhead{(mJy/beam)} & \colhead{(mJy)} & \colhead{(mJy/beam)}
}
\startdata
V1735 Cyg & April 2014 & 1.63 $\pm$ 0.16$^a$ & 1.33 $\pm$ 0.16$^a$ & 0.10 \\
& June 2017 & 2.27 $\pm$ 0.34$^a$ & 2.44 $\pm$ 0.39$^a$ & 0.07 \\
& June 2018 & 1.94 $\pm$ 0.19 & 1.84 $\pm$ 0.20 & 0.08 \\
& August 2018 & 1.68 $\pm$ 0.17 & 1.60 $\pm$ 0.19 & 0.09 \\
\hline
V2494 Cyg & May/June 2017 & 16.7 $\pm$ 2.5$^a$ & 16.6 $\pm$ 2.5$^a$ & 0.16 \\
& June 2018 & 16.0 $\pm$ 1.6 & 15.8 $\pm$ 1.6 & 0.07 \\
& August 2018 & 13.8 $\pm$ 1.4 & 12.5 $\pm$ 1.4 & 0.40 \\
\hline
V2495 Cyg & June 2017 & 14.6 $\pm$ 2.2$^a$ & 13.9 $\pm$ 2.2$^a$ & 0.17 \\
& June 2018 & 14.0 $\pm$ 1.4 & 13.8 $\pm$ 1.4 & 0.07 \\
& August 2018 & 12.9 $\pm$ 1.3 & 12.2 $\pm$ 1.2 & 0.24 \\
\hline
V1057 Cyg & May/June 2017 & 3.97 $\pm$ 0.60 & 5.40 $\pm$ 0.81 & 0.08 \\
\hline
V1515 Cyg & June 2017 & 0.66 $\pm$ 0.10 & 0.84 $\pm$ 0.13 & 0.04 \\
\hline
V733 Cep & June 2017 & 0.54 $\pm$ 0.08 & 1.50 $\pm$ 0.23 & 0.05
\enddata
\tablenotetext{a}{Corrected for frequency discrepancy, using a spectral index of 2.5}
\tablecomments{ I$_{\nu,Peak}$ and S$_{\nu,Gauss}$ are obtained from Gaussian fitting and the rms is obtained from an emission-free region.}
\end{deluxetable*}
\begin{deluxetable*}{cccccccc}[htp]
\setlength{\tabcolsep}{7.5pt}
\tablecaption{$^{13}$CO, C$^{18}$O Fluxes \label{tab:13COC18OFluxes} }
\centering
\tablehead{ \colhead{Object} & \colhead{Date} & \colhead{I$_{\nu,13CO}$} & \colhead{rms$_{13CO}$} & \colhead{$\Delta$v$_{13CO}$} & \colhead{I$_{\nu,C18O}$} & \colhead{rms$_{C18O}$} & \colhead{$\Delta$v$_{C18O}$} \\
\colhead{} & \colhead{} & \colhead{(Jy km/s)} & \colhead{(Jy/beam km/s)} & \colhead{(km/s)} & \colhead{(Jy km/s)} & \colhead{(Jy/beam km/s)} & \colhead{(km/s)}
}
\startdata
V1735 Cyg & April 2014 & 0.7 $\pm$ 0.2 & 0.12 & --4.43, +5.87 & 0.21 $\pm$ 0.05 & 0.03 & --2.73, +1.52 \\
& June 2017 & 0.7 $\pm$ 0.2 & 0.12 & --4.50, +6.00 & 0.22 $\pm$ 0.06 & 0.03 & --2.75, +1.50 \\
& June 2018 & 1.0 $\pm$ 0.3 & 0.11 & --4.50, +6.00 & 0.21 $\pm$ 0.05 & 0.04 & --2.75, +1.50 \\
& August 2018 & 1.3 $\pm$ 0.3 & 0.17 & --4.50, +6.00 & 0.39 $\pm$ 0.10 & 0.06 & --2.75, +1.50 \\
\hline
V2494 Cyg & May/June 2017 & 1.2 $\pm$ 0.3 & 0.09 & --2.75, +2.25 & 0.40 $\pm$ 0.10 & 0.05 & --3.00, +1.50 \\
& June 2018 & 1.2 $\pm$ 0.3 & 0.08 & --2.75, +2.25 & 0.40 $\pm$ 0.10 & 0.04 & --3.00, +1.50 \\
& August 2018 & 1.2 $\pm$ 0.3 & 0.11 & --2.75, +2.25 & 0.38 $\pm$ 0.10 & 0.06 & --3.00, +1.50 \\
\hline
V2495 Cyg & June 2017 & 0.19 $\pm$ 0.05 & 0.03 & --2.00, +3.75 & 0.22 $\pm$ 0.06 & 0.03 & --2.00, +2.50 \\
& June 2018 & 0.23 $\pm$ 0.06 & 0.05 & --2.00, +3.75 & 0.28 $\pm$ 0.07 & 0.04 & --2.00, +2.50 \\
& August 2018 & 0.31 $\pm$ 0.08 & 0.03 & --2.00, +3.75 & 0.27 $\pm$ 0.07 & 0.05 & --2.00, +2.50 \\
\hline
V1057 Cyg & May/June 2017 & 2.2 $\pm$ 0.6 & 0.15 & --4.32, +1.84 & 0.49 $\pm$ 0.12 & 0.03 & --2.12, +1.08 \\
\hline
V1515 Cyg$^a$ & June 2017 & -- & -- & -- & -- & -- & -- \\
\hline
V733 Cep$^a$ & June 2017 & -- & -- & -- & -- & -- & --
\enddata
\tablenotetext{a}{ No data due to poor quality}
\tablecomments{ I$_{\nu}$ is the flux obtained from a $5.66\arcsec\times5.66\arcsec$ aperture on the object location. rms$_{13CO}$ and rms$_{C18O}$ are the background rms obtained from emission-free regions. $\Delta$v is the integrated velocity range.}
\end{deluxetable*}
\begin{deluxetable*}{c c c c c c c c}[htp]
\setlength{\tabcolsep}{7.5pt}
\tablecaption{$^{13}$CS, HCO$^+$ Fluxes \label{tab:13CSHCO+Fluxes} }
\centering
\tablehead{ \colhead{Object} & \colhead{Date} & \colhead{I$_{\nu,13CS}$} & \colhead{rms$_{13CS}$} & \colhead{$\Delta$v$_{13CS}$} & \colhead{I$_{\nu,HCO^+}$} & \colhead{rms$_{HCO^+}$} & \colhead{$\Delta$v$_{HCO^+}$} \\
\colhead{} & \colhead{} & \colhead{(Jy km/s)} & \colhead{(Jy/beam km/s)} & \colhead{(km/s)} & \colhead{(Jy km/s)} & \colhead{(Jy/beam km/s)} & \colhead{(km/s)}
}
\startdata
V1735 Cyg & June 2018 & 0.033 $\pm$ 0.009 & 0.008 & --1.00, +0.50 & 1.14 $\pm$ 0.29 & 0.07 & --1.00, +7.50 \\
& August 2018 & 0.040 $\pm$ 0.010 & 0.014 & --1.00, +0.50 & 1.35 $\pm$ 0.34 & 0.11 & --1.00, +7.50 \\
\hline
V2494 Cyg & June 2018 & -- & 0.004 & --1.50, --0.50 & 0.18 $\pm$ 0.05 & 0.01 & --2.25, +0.25 \\
& August 2018 & -- & 0.007 & --1.50, --0.50 & 0.17 $\pm$ 0.04 & 0.03 & --2.25, +0.25 \\
\hline
V2495 Cyg & June 2018 & -- & 0.01 & --1.25, +3.25 & -- & 0.02 & --2.00, +1.00 \\
& August 2018 & -- & 0.03 & --1.25, +3.25 & -- & 0.03 & --2.00, +1.00
\enddata
\tablecomments{ Our 2017 correlator setup did not include these lines. Therefore, the 2017 observations are not included in this table. I$_{\nu}$ is the flux obtained from a $5.66\arcsec\times5.66\arcsec$ aperture on the object location. rms$_{13CS}$ and rms$_{HCO^+}$ are the background rms obtained from emission-free regions. $\Delta$v is the integrated velocity range. We do not detect $^{13}$CS emission from V2494 Cyg or V2495 Cyg. We do not detect HCO$^+$ emission from V2495 Cyg.}
\end{deluxetable*}
\subsection{Optical Observations} \label{sec:opt obs}
\begin{deluxetable*}{c c c c c c}[htp]
\setlength{\tabcolsep}{15pt}
\tablecaption{LDT Photometry \label{tab:OptObsMags} }
\centering
\tablehead{
\colhead{Object} & \colhead{Date} & \colhead{Start Time (UT)} & \colhead{V} & \colhead{R} & \colhead{I} }
\startdata
V1735 Cyg & June 11, 2018 & 10:44 & 19.02 $\pm$ 0.09 & 16.61 $\pm$ 0.06 & 14.24 $\pm$ 0.04 \\
& June 12, 2018 & 10:45 & 19.07 $\pm$ 0.09 & 16.61 $\pm$ 0.06 & 14.24 $\pm$ 0.04 \\
& August 13, 2018 & 11:10 & 19.13 $\pm$ 0.09 & 16.62 $\pm$ 0.07 & 14.28 $\pm$ 0.05 \\
\hline
V2494 Cyg & June 11, 2018 & 10:51 & 18.47 $\pm$ 0.09 & 16.65 $\pm$ 0.07 & 14.79 $\pm$ 0.04 \\
& June 12, 2018 & 10:53 & 18.47 $\pm$ 0.09 & 16.63 $\pm$ 0.06 & 14.80 $\pm$ 0.04 \\
& August 13, 2018 & 11:30 & 18.38 $\pm$ 0.09 & 16.55 $\pm$ 0.07 & 14.73 $\pm$ 0.05 \\
\enddata
\end{deluxetable*}
Optical observations of V1735 Cyg, V2494 Cyg, and V2495 Cyg were carried out using the Lowell Discovery Telescope's (LDT) Large Monolithic Imager (LMI) in Happy Jack, AZ \citep{Bida2014}, in June and August 2018. These observations were coordinated with our 2018 NOEMA observations. Our June millimeter and optical observations were separated by about 24 hours, whereas our August observations were separated by about 36 hours. Details of these observations can be found in Table \ref{tab:OptObsMags}.
All observations were performed in photometric conditions using three optical filters: Johnson V, R, and I, with central wavelengths of 551 nm, 658 nm, and 806 nm, respectively \citep{Johnson1966}. The FOV of the LMI is 12.5$\arcmin$ $\times$ 12.5$\arcmin$ (0.12$\arcsec$ per unbinned pixel). We used 2 $\times$ 2 pixel binning for these observations (0.24$\arcsec$ per pixel). The bias, flat-field, and overscan calibration of the CCD images was performed using a custom Python routine built on the Astropy package \citep{Astropy2018}. The photometric calibration of all images was also carried out interactively, using another custom Astropy-based Python routine. Exposure times varied among the targets, depending on seeing conditions and object brightness. V2495 Cyg was too faint to detect in any of our observations, so we were unable to extract optical magnitudes for this object.
Using the standard stars SA92 253 and GD 246 \citep{Landolt2009}, we obtained V, R, and I magnitudes for our targets in August 2018. To obtain magnitudes for our June 2018 observations, we used differential photometry: we normalized the flux density of our target to the total brightness of several background stars (which fluctuated by less than 3\% throughout the observations), compared the normalized June 2018 flux densities directly to those of August 2018, and used that ratio to convert the calibrated August magnitudes into June 2018 magnitudes. Magnitudes of our sample are listed in Table \ref{tab:OptObsMags}.
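The differential-photometry step described above can be sketched as follows. This is a minimal illustration, not our reduction code; the function name and all input numbers are hypothetical, and the real routine works on ensemble fluxes of several comparison stars.

```python
import math

def differential_magnitude(m_ref_epoch, f_target_epoch1, f_comp_epoch1,
                           f_target_epoch2, f_comp_epoch2):
    """Tie an uncalibrated epoch (1) to a calibrated epoch (2).

    Each epoch's target flux is normalized by the summed flux of the
    comparison stars; the ratio of normalized fluxes is then converted
    to a magnitude offset from the calibrated magnitude m_ref_epoch.
    """
    ratio = (f_target_epoch1 / f_comp_epoch1) / (f_target_epoch2 / f_comp_epoch2)
    return m_ref_epoch - 2.5 * math.log10(ratio)

# Illustrative only: if the normalized target flux is identical in both
# epochs, the inferred June magnitude equals the calibrated August one.
m_june = differential_magnitude(16.61, 100.0, 1000.0, 50.0, 500.0)  # 16.61
```

A 10\% higher normalized flux in the uncalibrated epoch would correspond to a magnitude brighter by $2.5\log_{10}(1.1)\approx0.10$ mag.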
\section{Results and Analysis} \label{sec: results&analysis}
In the following, we present millimeter flux densities for all six FUors in our sample. We then search for variability in the 2.7 mm flux densities; $^{13}$CO, C$^{18}$O, $^{13}$CS, and HCO$^+$ molecular line fluxes; and optical magnitudes of V1735 Cyg, V2494 Cyg, and V2495 Cyg. We then describe the variability that is seen in V1735 Cyg for the 2.7 mm continuum.
\subsection{Millimeter Continuum} \label{sec:mmresults}
Continuum flux densities were measured by 2D Gaussian fits within a $5.66\arcsec\times5.66\arcsec$ circular region, twice the convolved beam area. These results can be found in Table \ref{tab:ContFluxes}. To account for the frequency discrepancy, we adjusted our 2014 and 2017 measured flux densities using a spectral index of 2.5. The uncertainties stated in Table \ref{tab:ContFluxes} were obtained via the root sum square of the 10\% absolute flux calibration uncertainty of NOEMA (15\% in the case of June 2017 observations) and the Gaussian fit uncertainties (typically 1--5\%).
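The two adjustments described above can be sketched numerically. The frequency correction assumes $S_\nu\propto\nu^{\alpha}$ with $\alpha=2.5$, and the error budget is the quadrature sum of the calibration and fit uncertainties; the 3\% fit fraction below is an illustrative value within the 1--5\% range quoted in the text.

```python
import math

def scale_flux(s_mjy, nu_obs_ghz, nu_ref_ghz, alpha=2.5):
    """Scale a flux density from nu_obs to nu_ref assuming S_nu ~ nu**alpha."""
    return s_mjy * (nu_ref_ghz / nu_obs_ghz) ** alpha

def flux_uncertainty(s_mjy, cal_frac=0.10, fit_frac=0.03):
    """Root sum square of calibration and Gaussian-fit fractional errors."""
    return s_mjy * math.sqrt(cal_frac**2 + fit_frac**2)

# Scaling the April 2014 (109.918 GHz) data to the 2018 frequency
# (106.744 GHz) corresponds to a ~7% downward correction:
factor = scale_flux(1.0, 109.918, 106.744)   # ~0.93
# A 10% calibration error with a 3% fit error gives ~10.4% total:
sigma = flux_uncertainty(10.0)               # ~1.04 mJy on a 10 mJy source
```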
Figure \ref{fig:ContFigs} shows the continuum maps of all six targets. While flux densities did vary somewhat, the intensity distributions remained unchanged, so we show only one epoch. The maps of V1735 Cyg, V2494 Cyg, and V2495 Cyg were made using the restricted \textit{uv} coverage, whereas the rest were not. We note that, despite the steps taken to mitigate the differing \textit{uv} coverages, a portion of the variability reported may still be due to remaining differences.
Also of note, our measured flux density for V1057 Cyg ($5.4\pm0.8$ mJy) agrees with that of F17 ($4.9\pm0.2$ mJy). V1515 Cyg and V733 Cep were only weakly detected by F17, who estimated peak intensities of 0.18 $\pm$ 0.03 and 0.38 $\pm$ 0.10 mJy/beam, respectively, but could not estimate flux densities. The F17 peak intensities are somewhat lower than what we report here for V1515 Cyg and V733 Cep: 0.66 $\pm$ 0.10 and 0.54 $\pm$ 0.08 mJy/beam, respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{ContinuumMaps.pdf}
\caption{Continuum maps of all six sources. Solid contours denote positive 6-, 12-, 24-, 48-, 96-, 192-$\sigma$ levels. $\sigma$ levels are rms values noted in Table \ref{tab:ContFluxes}. The central `+' denotes object position (see Table \ref{tab:NOEMAObs}).}
\label{fig:ContFigs}
\end{figure*}
For V1735 Cyg, V2494 Cyg, and V2495 Cyg, we show the continuum flux densities and optical magnitudes for the two epochs in 2018 in Figures \ref{fig:V1735mmVtime}, \ref{fig:V2494mmVtime}, and \ref{fig:V2495mmVtime}, respectively. The only object to display any millimeter variability in our sample, V1735 Cyg, exhibited an $\sim80$\% increase in flux density from 2014 to 2017. This flux density increase falls outside our stated uncertainties, thus, we conclude that the observed variability is intrinsic to the source.
\begin{figure}[ht]
\centering
\includegraphics[width=.45\textwidth]{V1735CYG_mmDCT_v_time.png}
\caption{Top: V1735 Cyg millimeter continuum flux density vs.\ time. Error bars are the uncertainties and are listed in Table \ref{tab:ContFluxes}. Bottom: V1735 Cyg VRI magnitudes vs.\ time. Error bars are roughly the size of the points.}
\label{fig:V1735mmVtime}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.45\textwidth]{V2494CYG_mmDCT_v_time.png}
\caption{Top: V2494 Cyg millimeter continuum flux density vs.\ time. Error bars are the uncertainties and are listed in Table \ref{tab:ContFluxes}. Bottom: V2494 Cyg VRI magnitudes vs.\ time. Error bars are roughly the size of the points.}
\label{fig:V2494mmVtime}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.45\textwidth]{V2495CYG_mmDCT_v_time.png}
\caption{V2495 Cyg millimeter continuum flux density vs.\ time. Error bars are the uncertainties and are listed in Table \ref{tab:ContFluxes}.}
\label{fig:V2495mmVtime}
\end{figure}
We also see that following its rise from 2014 to 2017, V1735 Cyg possibly dimmed in June 2018 and then again in August 2018. This may be a sign that it is returning to some quiescent state, from some ``burstlike," heightened state. However, this downward trend from 2017 to 2018 is within or close to the flux uncertainties of NOEMA, and is also seen in both V2494 Cyg and V2495 Cyg. More observations are needed to confirm if V1735 Cyg's flux density at 3 mm has truly decreased since June 2017. No other objects in our sample, over any time period, show signs of millimeter variability outside of the measurement and flux calibration uncertainties.
\subsection{Molecular Lines}
$^{13}$CO, C$^{18}$O, $^{13}$CS, and HCO$^+$ line fluxes for V1735 Cyg, V2494 Cyg, and V2495 Cyg were extracted from velocity-integrated spectral cubes after continuum subtraction and cleaning. The velocity range used for integration varied per object and per species, but was always centered on the channels that contained emission. In all cases, the same $5.66\arcsec\times5.66\arcsec$ circular aperture (centered on the primary source of emission) was used to measure the flux. Given the extended and asymmetric morphology of the line emission, we chose not to use 2D Gaussian fits to obtain line fluxes. These results can be found in Tables \ref{tab:13COC18OFluxes} and \ref{tab:13CSHCO+Fluxes}. Due to the variable \textit{uv} coverages, lower SNR, extension of the emission, and possible contamination from the surrounding envelope, we assume uncertainties of 25\% for the line fluxes. In one case (V2495 Cyg), $^{13}$CS emission was detected not on or near the target's location, but $\sim15\arcsec$ away from it; there, we measure 0.039 and 0.051 Jy km/s in June and August 2018, respectively.
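The velocity-integrated aperture measurement just described can be sketched as follows. This is a simplified illustration, not our actual measurement code: it assumes the continuum-subtracted cube is already in Jy per pixel, so real data would additionally need a beam-area conversion from Jy/beam.

```python
import numpy as np

def aperture_line_flux(cube, velocities, v_min, v_max,
                       x0, y0, radius_pix, dv):
    """Integrated line flux (Jy km/s) in a circular aperture.

    cube: (nchan, ny, nx) continuum-subtracted intensities [Jy/pixel].
    Channels with v_min <= v <= v_max are summed into a moment-0 map,
    which is then summed inside a circle of radius_pix around (x0, y0).
    dv is the channel width in km/s.
    """
    in_range = (velocities >= v_min) & (velocities <= v_max)
    mom0 = cube[in_range].sum(axis=0) * dv          # Jy/pixel km/s
    ny, nx = mom0.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius_pix ** 2
    return float(mom0[mask].sum())

# Synthetic check: one bright pixel, five 0.5 km/s channels in range
cube = np.zeros((10, 8, 8))
cube[:, 4, 4] = 1.0
v = np.arange(10) * 0.5 - 2.0                       # -2.0 ... +2.5 km/s
flux = aperture_line_flux(cube, v, -1.0, 1.0, 4, 4, 2.0, 0.5)  # 2.5 Jy km/s
```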
$^{13}$CO and C$^{18}$O fluxes (Table \ref{tab:13COC18OFluxes}) were generally consistent for all objects across all epochs. The only exception may be the C$^{18}$O emission of V1735 Cyg from June 2018 to August 2018 (see Figure \ref{V1735C18OVtime}). The flux appears to have risen by about 86\%, though we emphasize that the line fluxes are highly uncertain because of the lack of short baselines to recover the extended emission. Additionally, the slightly different \textit{uv} coverages between epochs also result in artificial differences in the morphology of the extended line emission. Regardless, these differences have a relatively small (yet hard to quantify) effect on our flux measurements since we focus only on the compact line emission at the position of each object. This is partly shown by the fact that the 2014 observations (which included IRAM 30m observations to cover short \textit{uv} spacings) display similar fluxes to our observations.
$^{13}$CS and HCO$^+$ were observed in V1735 Cyg, V2494 Cyg, and V2495 Cyg in June and August of 2018 (Table \ref{tab:13CSHCO+Fluxes}). These lines show no signs of variability, either in flux or in spatial morphology, and we did not detect $^{13}$CS or HCO$^+$ toward V2495 Cyg itself. The emission morphology of $^{13}$CS and HCO$^+$ for all objects was consistent throughout 2018, so we display only one epoch in Figures \ref{fig:V1735LineMaps} and \ref{fig:V2494V2495LineMaps}.
\begin{figure}[h!]
\centering
\includegraphics[width=.45\textwidth]{V1735CYG_C18O_v_time.png}
\caption{V1735 Cyg C$^{18}$O flux vs.\ time. Error bars are the uncertainties and are listed in Table \ref{tab:13COC18OFluxes}.}
\label{V1735C18OVtime}
\end{figure}
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{V1735CYG_LineMaps.pdf}
\caption{Moment-0 molecular line maps of V1735 Cyg. Top: $^{13}$CO. Middle: C$^{18}$O. Bottom: $^{13}$CS (left) and HCO$^+$ (right). Solid (dashed) gray contours denote positive (negative) 3-, 6-, 12-, 24-, 48-, 96-, and 192-$\sigma$ levels. $\sigma$ for each epoch is equivalent to the rms of each image, which can be found in Table \ref{tab:13COC18OFluxes}. The central `+' denotes the target's location (see Table \ref{tab:NOEMAObs}). The `$\times$' denotes the location of V1735 Cyg SM1 \citep{Harvey2008}. Morphological differences between April 2014 and other epochs is due to differing \textit{uv} coverages. Note that we do not include the $^{13}$CS and HCO$^+$ line maps from August 2018 since they are very similar to those of June 2018 shown here.}
\label{fig:V1735LineMaps}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{V2494CYG-V2495CYG_LineMaps.pdf}
\caption{Moment-0 molecular line maps of V2494 Cyg (top) and V2495 Cyg (bottom). From left to right: $^{13}$CO, C$^{18}$O, $^{13}$CS, and HCO$^+$. Solid (dashed) gray contours denote positive (negative) 3-, 6-, 12-, 24-, 48-, 96-, and 192-$\sigma$ levels. $\sigma$ for each epoch is equivalent to the rms of each image, which can be found in Table \ref{tab:13COC18OFluxes}. The central `+' denotes the source's location (see Table \ref{tab:NOEMAObs}). Note that here we only show $^{13}$CO and C$^{18}$O line maps from June 2018 since the maps from June 2017 and August 2018 are very similar. Likewise, we only show line maps of $^{13}$CS and HCO$^+$ from June 2018 since the maps from August 2018 are very similar.}
\label{fig:V2494V2495LineMaps}
\end{figure*}
\subsection{LDT-LMI Photometry}
Optical photometry taken in June and August 2018 for V1735 Cyg and V2494 Cyg shows no variability (Table \ref{tab:OptObsMags}) and is consistent with previous observations. Our measurements of V1735 Cyg generally agree with those of \citet{Peneva2009} from 2003 to 2009: they measured V $\sim18.9$ and R $\sim16.6$, but found I $\sim13.8$, about half a magnitude brighter than reported here. Our measurements of V2494 Cyg agree in R and I with those of \citet{Magakian2013} from 2003 to 2010, who found R $\sim16.4$ and I $\sim14.7$ but did not report V. We note that V2495 Cyg was observed, but not detected (see Section \ref{sec:opt obs}).
\section{Discussion} \label{sec:disc}
Prior to our observations, V2494 Cyg and V2495 Cyg were the only two FUor objects thought to be variable at millimeter wavelengths, displaying 1.3 mm flux density changes of $\sim$25--60\% on a timescale of about one year \citep{Liu2018}. Here we report that V1735 Cyg has also exhibited variability in the millimeter, but at 2.7 mm, and over a timescale of $\sim3$ years, from 2014 to 2017. We discuss here possible underlying mechanisms for this variability.
\subsection{Variable Disk Heating} \label{sec:heating}
In FUors, the inner disk is significantly heated by viscous heating from the accretion process and produces strong optical/IR emission \citep{Hartmann1996} and possibly millimeter emission as well \citep{Takami2019}. This hot inner disk irradiates the outer disk. Therefore, changes in the temperature of the inner disk may lead to changes in the heating of the outer disk, which we can trace with millimeter emission.
If temperature changes in the inner disk were the cause of the millimeter variability we see in V1735 Cyg, we would also expect a corresponding increase in its optical and/or IR flux. To the best of our knowledge, no optical data of V1735 Cyg were taken close in time to the 2014 NOEMA data, so we cannot test this with our 2018 optical data. However, archival WISE data of V1735 Cyg exist at 3.4 $\mu$m and 4.6 $\mu$m from 2014 through 2020 (Figure \ref{fig:WISE}), with data from June 2014 ($\sim$2 months after the April 2014 NOEMA observations) and June 2017 (taken within a week of the June 2017 NOEMA observations). The WISE photometry displays no significant variability, indicating that the disk irradiation may have remained relatively constant during that time and, therefore, that the millimeter variability is not tied to disk temperature changes. Moreover, the $\sim$doubling of the millimeter flux in V1735 Cyg would imply an equivalent $\sim$doubling in disk irradiation, which is unlikely.
\begin{figure}[h!]
\centering
\includegraphics[width=.5\textwidth]{V1735Cyg_WISEPhotometry.png}
\caption{WISE photometry of V1735 Cyg from 2014--2019. Red circles are Band 1 (3.4 $\mu$m). Blue squares are Band 2 (4.6 $\mu$m). Dashed grey bars indicate dates of NOEMA observations.}
\label{fig:WISE}
\end{figure}
\subsection{Gas and Dust Buildup in the Disk} \label{sec:buildup}
Because we see millimeter variability but no optical changes, one may speculate that material from the envelope is building up in the disk. Using the equation
\begin{equation} \label{equ:Mcont}
M_{cont} = \frac{gS_{\nu}d^2}{\kappa_{\nu}B_{\nu}(T)} \\
\end{equation}
\noindent (which assumes an optically thin disk), where M$_{cont}$ is the continuum mass, $g=100$ is the gas-to-dust ratio, $S_\nu$ is the measured flux density at 2.7 mm, $d$ is the distance, $\kappa_{\nu}=0.2$ cm$^2$ g$^{-1}$ is the dust opacity coefficient at 2.7 mm, and $B_{\nu}(T)$ is the Planck function for a blackbody with a temperature of $T=30$ K, we find that the disk mass of V1735 Cyg \citep[at a distance $d=616$ pc;][]{Bailer2018} must have increased from 0.13 to 0.21 M$_{\odot}$ from 2014 to 2017. We note that this is consistent with a previous disk mass (0.20 M$_{\odot}$) estimated with SED modeling \citep{Gramajo2014}. Our measured disk mass change would correspond to a mass infall rate of 0.027 M$_{\odot}$ yr$^{-1}$ from the envelope, which is highly unlikely \citep{Ohtani2013, White2019}. Therefore, given the degree to which the continuum flux density changes, an unrealistic rate of mass infall would be necessary to account for the magnitude of the millimeter variability seen in V1735 Cyg. In addition, the (likely optically thin) C$^{18}$O emission was relatively constant from 2014 to 2017, implying that no C$^{18}$O has built up during that time. Thus, material buildup does not seem to be the source of the variability of V1735 Cyg at 2.7 mm.
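As an illustrative check of Equation \ref{equ:Mcont}, the following Python sketch evaluates $M_{cont}$ in cgs units under the stated assumptions ($g=100$, $\kappa_{\nu}=0.2$ cm$^2$ g$^{-1}$, $T=30$ K, $d=616$ pc). The 2.7 mm flux densities used below ($\sim$1.5 and $\sim$2.4 mJy) are hypothetical values chosen to reproduce the quoted masses, not measurements restated from this work.

```python
import math

# cgs constants
H = 6.62607e-27      # Planck constant [erg s]
K_B = 1.38065e-16    # Boltzmann constant [erg / K]
C = 2.99792458e10    # speed of light [cm / s]
M_SUN = 1.989e33     # solar mass [g]
PC = 3.085677e18     # parsec [cm]
JY = 1.0e-23         # 1 Jy [erg / s / cm^2 / Hz]

def planck(nu, temp):
    """Planck function B_nu(T) [erg / s / cm^2 / Hz / sr]."""
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (K_B * temp))

def continuum_mass(flux_mjy, dist_pc, wavelength_cm=0.27,
                   g=100.0, kappa=0.2, temp=30.0):
    """Optically thin mass M_cont = g S_nu d^2 / (kappa_nu B_nu(T)), in M_sun."""
    nu = C / wavelength_cm
    s_nu = flux_mjy * 1.0e-3 * JY
    d = dist_pc * PC
    return g * s_nu * d**2 / (kappa * planck(nu, temp)) / M_SUN

# hypothetical 2.7 mm flux densities bracketing the 2014 and 2017 epochs
m_2014 = continuum_mass(1.5, 616)        # ~0.13 M_sun
m_2017 = continuum_mass(2.4, 616)        # ~0.21 M_sun
infall_rate = (m_2017 - m_2014) / 3.0    # implied envelope-to-disk rate over ~3 yr
```

Because the optically thin mass scales linearly with $S_\nu$, the fractional change in flux density translates directly into the fractional change in inferred mass, which is how the unrealistically large infall rate follows from the observed flux increase.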
\subsection{Free-Free Emission}
One other potential source of millimeter variability is a change in the free-free emission of the system. Free-free emission can be identified by analysis of the spectral index, $\alpha$, of the millimeter emission. The more significant the free-free emission, the shallower the spectral index, down to $\alpha=-0.1$--$0.6$ for purely free-free emission \citep{Reynolds1986}. Scattering in an optically thick disk can act to lower the spectral index as well, though this effect is generally strongest in the innermost regions of the disk \citep{Zhu2019, Liu2019}.
The change in $\alpha$ of V1735 Cyg can be measured using existing data from June 2013, April 2014, June 2018, and August 2018. \citet{Liu2018} weakly detected V1735 Cyg at 1.3 mm in June 2013. Given their 3-$\sigma$ upper limits, and using flux density estimates from April 2014 (F17), \citet{Liu2018} determined an upper limit on $\alpha$ of 1.7--2.0. Using the upper (106.7 GHz) and lower (91.3 GHz) sidebands described in Section \ref{sec:obs}, we are able to determine spectral indices of our June and August 2018 observations. We find tentative evidence of shallower slopes than \citet{Liu2018}, $\alpha=1.4\pm0.4$ in June 2018 and $\alpha=1.3\pm0.7$ in August 2018. These slopes are somewhat lower than the expected spectral index of most circumstellar disks, where generally $\alpha=2$--3 \citep{Beckwith1991, Ubach2012, Liu2018}, and are consistent with free-free emission. We note that the spectral indices we measure with our NOEMA data in V2494 Cyg ($\alpha=2.5$--2.6) and V2495 Cyg ($\alpha=2.3$--2.5) are in line with those of most circumstellar disks, thus free-free emission was likely not a significant contributor during those observations.
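The sideband spectral index follows from assuming a power law $S_\nu \propto \nu^{\alpha}$ between the two sidebands, so that $\alpha = \ln(S_{\nu_1}/S_{\nu_2})/\ln(\nu_1/\nu_2)$. A minimal Python sketch, using hypothetical sideband flux densities (the measured values are not restated here):

```python
import math

def spectral_index(s1, s2, nu1=106.7, nu2=91.3, sigma1=None, sigma2=None):
    """alpha from S_nu ~ nu^alpha between two sidebands (fluxes in any common
    unit, frequencies in GHz); optionally propagate the flux uncertainties."""
    dln_nu = math.log(nu1 / nu2)
    alpha = math.log(s1 / s2) / dln_nu
    if sigma1 is None or sigma2 is None:
        return alpha
    err = math.sqrt((sigma1 / s1) ** 2 + (sigma2 / s2) ** 2) / dln_nu
    return alpha, err

# hypothetical sideband fluxes giving a free-free-like slope
alpha, err = spectral_index(1.244, 1.0, sigma1=0.1, sigma2=0.1)  # alpha ~ 1.4
```

Note that the narrow fractional bandwidth between the two sidebands ($\ln(\nu_1/\nu_2)\approx0.16$) strongly amplifies the flux uncertainties, consistent with the sizeable error bars quoted above.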
These possibly shallower spectral indices suggest that the slope of the millimeter emission of V1735 Cyg decreased from 2014 to 2017 even as the millimeter flux increased, and may indicate that the free-free emission of V1735 Cyg has grown to become a significant contributor to the overall SED near 2.7 mm. Free-free emission has been tied to ionized jets/winds in objects with disks \citep[e.g.,][]{Macias2016, Ubach2017, Espaillat2019}, and these jets/winds are linked to accretion \citep{Frank2014}. One would then expect to see signatures of accretion variability in V1735 Cyg, which may be traced in the IR. However, the WISE photometry shows no significant variability between 2014 and 2017 (Figure \ref{fig:WISE}). It may be the case that the IR emission is variable due to accretion, but was not detected with the cadence of WISE.
\citet{Liu2018} note that free-free emission is not thought to be significant in FUor objects based on previous observations \citep[see][]{Rodriguez1990, Liu2014, Dzib2015, Liu2017}. This may indeed be the case for certain objects and/or during quiescent states without enhanced accretion, but free-free emission may become significant following an accretion event. As such, future observations of FUor objects would benefit not only from multi-epoch observations, but also from multiwavelength millimeter and centimeter observations \citep{Liu2017}. This will help inform how significant, if at all, free-free emission is for a given object. If significant, free-free emission may lead to overestimated disk masses.
\section{Summary} \label{sec:summary}
We observed six FUor objects (V1735 Cyg, V2494 Cyg, V2495 Cyg, V1057 Cyg, V1515 Cyg, and V733 Cep) in 2017 at 2.7 mm. Motivated by comparison to previously published works, we then followed up with coordinated 2.7 mm and optical (V, R, I) observations for three objects (V1735 Cyg, V2494 Cyg, and V2495 Cyg) to probe for flux variability. We did not see variability outside our stated uncertainties ($\sim$10--15\%) from 2017 to 2018 in either our millimeter or optical observations. However, we do see a $\sim80$\% increase in the 2.7 mm flux density of V1735 Cyg in our June 2017 data relative to archival April 2014 data from F17. Although we took steps to mitigate the effect of differing \textit{uv} coverages for each observation, it should be noted that they may still have affected our measurements.
We can likely rule out thermal changes in the disk as the source of millimeter variability in V1735 Cyg since 3.4 and 4.6 $\mu$m WISE photometry from 2014 to 2017 displayed no signs of corresponding variability, indicating that the millimeter variability is not related to temperature changes in the inner disk. Gas and dust buildup in the disk is also unlikely to be the sole mechanism behind the observed millimeter variability given that the mass transfer rate from the envelope to the disk necessary to account for the continuum flux density changes we see would be unreasonably large ($\sim0.027$ M$_{\odot}$ yr$^{-1}$).
We find that the spectral slope of V1735 Cyg is shallower than expected for pure thermal dust emission at 3 mm, which may indicate a significant contribution from free-free emission. We also find that the 3 mm spectral index may have decreased since 2014, indicating a significant increase in the free-free emission. We hypothesize that V1735 Cyg may have experienced a small accretion event, leading to the ionization of ejected material, increasing the free-free emission and leading to the observed millimeter variability. If confirmed, this could imply that previously reported disk masses of FUor objects measured during enhanced accretion activity may be overestimated. Future study of FUor objects will benefit from both multi-epoch and multiwavelength observations to disentangle the free-free component from that of thermal dust emission and allow for more accurate disk mass estimates, which will help constrain what role thermal/gravitational instabilities have in triggering FUor outbursts.
\acknowledgements
We thank the anonymous referee for a careful review and suggestions that greatly improved this paper. JW, CCE, and EM acknowledge support from the National Science Foundation under CAREER grant AST-1455042. \'AK acknowledges funding from the European Research Council under the European Union's Horizon 2020 research and innovation program under grant agreement 716155 (SACCRED). This work is based on observations carried out under project number SX17AG and S18AX with the IRAM NOEMA interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). These results made use of the Lowell Discovery Telescope (formerly Discovery Channel Telescope) at Lowell Observatory. Lowell is a private, nonprofit institution dedicated to astrophysical research and public appreciation of astronomy and operates the LDT in partnership with Boston University, the University of Maryland, the University of Toledo, Northern Arizona University, and Yale University. The Large Monolithic Imager was built by Lowell Observatory using funds provided by the National Science Foundation (AST-1005313). This publication makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration. This research made use of APLpy, an open-source plotting package for Python \citep{Aplpy}.
\vspace{5mm}
\facilities{IRAM:NOEMA, LDT}
\software{CASA, Python, Astropy, APLpy}
\section{The Kontsevich Integral}
The Kontsevich integral \cite{K} is a functional on knots that can be seen as a generalization of the Gauss integral. It is a graded sum of chord diagrams times coefficients that are essentially integrals of powers of log differentials. Chord diagrams are the knots used as an argument in $Z$ with horizontal dashed chords stretching between their strands. The Kontsevich integral $Z$ is highly dependent on a choice of time axis. Further, while it is invariant under horizontal deformations that keep the local extrema of the argument knot fixed, it is not invariant under translations that move such extrema. Thus $Z$ depends on the path along which graphs are translated, as well as on rotations, presenting the Kontsevich integral as a map on what appear to be Riemann surfaces, whose exact structure we will study in this paper.\\
Before introducing this integral, we define the algebra $\mathcal{A}$ ~\cite{K} in which it takes its values. For a singular oriented knot whose only singularities are transversal self-intersections, the preimage of each singular crossing under the embedding map defining the knot yields a pair of distinct points on $S^1$. Each singular point in the image therefore yields a pair of points on $S^1$ that are conventionally connected by a chord for bookkeeping purposes. A knot with $m$ singular points will yield $m$ distinct chords on $S^1$. One refers to such a circle with $m$ chords on it as a chord diagram of degree $m$, the degree being the number of chords. The support of the graph is an oriented $S^1$, and it is regarded up to orientation preserving diffeomorphisms of the circle. More generally, for a singular oriented link all of whose singularities are double-crossings, preimages of each singular crossing under the embedding map defining the link yield pairs of distinct points on possibly different circles depending on whether the double crossing was on a same component or between different components of the link. One also connects points making a pair by a chord. A link with $m$ singular points will yield $m$ chords on $\coprod S^1$. One calls such a graph a chord diagram. The support is $\coprod S^1$ regarded up to orientation preserving diffeomorphism of each $S^1$.\\
One denotes by $\mathcal{D}$ the complex vector space spanned by chord diagrams with support $S^1$. There is a grading on $\mathcal{D}$ given by the number of chords featured in a diagram. If $\mathcal{D}^{(m)}$ denotes the subspace of chord diagrams of degree $m$, then one writes:
\beq
\mathcal{D}=\oplus_{m\geq 0} \mathcal{D}^{(m)}
\eeq
One quotients this space by the 4-T relation which locally looks like:\\
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-1,-0.5)
\multiput(0,0.75)(0.2,0){5}{\line(1,0){0.1}}
\multiput(1,0.25)(0.2,0){5}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(2.5,0.5){$+$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-3,-0.5)
\multiput(0,0.75)(0.2,0){5}{\line(1,0){0.1}}
\multiput(0,0.25)(0.2,0){10}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(2.5,0.5){$=$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-5,-0.5)
\multiput(0,0.25)(0.2,0){5}{\line(1,0){0.1}}
\multiput(1,0.75)(0.2,0){5}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(2.5,0.5){$+$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-7,-0.5)
\multiput(0,0.25)(0.2,0){5}{\line(1,0){0.1}}
\multiput(0,0.75)(0.2,0){10}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\end{picture}\\ \\
where solid lines are intervals on $S^1$ on which a chord foot rests, and arrows indicate the orientation of each strand. One further quotients this space by the framing independence relation: if a chord diagram has a chord forming an arc on $S^1$ with no other chord ending in between its feet, then the chord diagram is set to zero. The resulting quotient space is the complex vector space generated by chord diagrams mod the 4-T relation and framing independence and is denoted by $\mathcal{A}$. The grading of $\mathcal{D}$ is preserved by the quotient, inducing a grading on $\mathcal{A}$:
\beq
\mathcal{A}=\oplus_{m \geq 0}\mathcal{A}^{(m)}
\eeq
where $\mathcal{A}^{(m)}$ is obtained from $\mathcal{D}^{(m)}$ upon modding out by the 4-T and the framing independence relations. All this carries over to the case of links by formally extending the 4-T relation to $q$ disjoint copies of the circle for a $q$-component link, and the resulting $\mathbb{C}$-vector space will be denoted $\mathcal{A}(\coprod_q S^1)$.\\
The connected sum of circles can be extended to chorded circles, thereby defining a product on $\mathcal{A}$, making it into an associative and commutative algebra. The Kontsevich integral will be valued in the graded completion $\overline{\mathcal{A}}=\prod_{m \geq 0}\mathcal{A}^{(m)}$ of the algebra $\mathcal{A}$.\\
As far as knots are concerned, one works with Morse knots, geometric tangles and graphs whose vertices are curved lines. We distinguish graphs that are initially given in the argument of $Z$ from those that result from the gluing of two distinct graphs. The reason for this distinction is that initial graphs will be univalent, trivalent (y or $\lambda$-shaped) or 4-valent (X-shaped) so that their corresponding Kontsevich integral is non-singular. However, graphs that result from the gluing of two graphs may have vertices that do not fall in any of these categories, which may result in the Kontsevich integral of such a graph being singular, as we will see later. Having said that, one considers all such geometric pictures as being embedded in $\mathbb{R}^3$, a decomposition of which can be given as the product of the complex plane and the real line: $\mathbb{R}^3=\mathbb{R}^2\times \mathbb{R} \simeq \mathbb{C}\times \mathbb{R}$, with local coordinates $z$ on the complex plane and $t$ on the real line for time. A Morse knot $K$ is such that $t\circ K$ is a Morse function on $S^1$. An acceptable graph for our purposes is an initial graph as defined above, so that after a possible rotation none of its edges end up being a horizontal edge, something that could happen should one of its edges be a straight line. Denoting by $Z$ the Kontsevich integral functional on knots, for a Morse knot $K$ one defines ~\cite{K}, ~\cite{ChDu}:
\beq
Z(K):=\sum_{m\geq 0} \frac{1}{(2 \pi i)^m}\int_{t_{min}< t_1<...<t_m<t_{max}}\sum_{P\; applicable}(-1)^{\varepsilon(P)}D_P\prod_{1\leq i \leq m}\dlog \vartriangle \!\!z[P_i] \label{IK}
\eeq
where $t_{min}$ and $t_{max}$ are the min and max values of $t$ on $K$ respectively, $P$ is an $m$-vector each entry of which is a pair of points on the image of the knot $K$, $P=(P_1,...,P_m)$, where the $i$-th entry $P_i$ corresponds to a pair of points on the knot. One refers to such $P$'s as pairings. If one further situates these paired points at some height $t_i$ and denotes these two points by $z_i$ and $z'_i$, then we define $\vartriangle \!\! z[P_i]:=z_i-z'_i$. One denotes by $K_P$ the knot $K$ with $m$ pairs of points placed on it following the prescription given by $P$, along with chords connecting such points at a same height. A pairing is said to be applicable if each entry is a pair of two distinct points on the knot, at the same height ~\cite{ChDu}. We will assume that all chords are horizontal on knots and will drop the adjective applicable, simply referring to $P$'s as pairings. One denotes by $\varepsilon(P)$ the number of those points ending on portions of $K$ that are locally oriented down. For example if $P=(z(t),z'(t))$ and $K$ is decreasing at $z(t)$, then it will contribute 1 to $\varepsilon(P)$. One also defines the length of $P$ to be $|P|$, the number of pairings it is a combination of. If we denote by $\iota_K$ the embedding defining the knot $K$ then $D_P$ is defined to be the chord diagram one obtains by taking the inverse image of $K_P$ under $\iota_K$, that is $D_P=\iota_K^{-1} K_P$. This generalizes immediately to the case of Morse links, and in this case the geometric coefficient will not be an element of $\barA$ but will be an element of $\barA (\coprod_q S^1)$ if the argument of $Z$ is a $q$-component link. Observe that $Z(L) \in \overline{\mathcal{A}}(\amalg_q S^1)$ is known once $L_P$ is known for all $P$'s.
This is what is referred to as a tangle chord diagram \cite{ChDu} and sometimes the Kontsevich integral is given not as an element of $\overline{\mathcal{A}}(\amalg S^1)$ but as an element of $\overline{\mathcal{A}}(L)$ and is written instead exactly as in \eqref{IK} except that instead of using chord diagrams $D_P$ tangle chord diagrams $L_P$ are used. This generalizes to the case of a geometric braid or even a graph $\Gamma$ by using $\Gamma_P$ instead of $L_P$.\\
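For a two-strand braid, the degree-one term of \eqref{IK} reduces to $\frac{1}{2 \pi i}\int_0^1 \dlog(z_1(t)-z_2(t))$, which counts the winding of one strand around the other. The following Python sketch (our own illustration, not code from the literature) evaluates this coefficient numerically:

```python
import cmath

def degree_one_coefficient(z1, z2, steps=20000):
    """Numerically evaluate (1 / 2 pi i) * int_0^1 dlog(z1(t) - z2(t)),
    the coefficient of the single chord |12> for a two-strand braid."""
    total = 0.0 + 0.0j
    prev = z1(0.0) - z2(0.0)
    for k in range(1, steps + 1):
        cur = z1(k / steps) - z2(k / steps)
        total += cmath.log(cur / prev)  # principal-branch increment of log
        prev = cur
    return total / (2j * cmath.pi)

# strand 1 winds n times around a stationary strand 2
n = 3
coeff = degree_one_coefficient(lambda t: cmath.exp(2j * cmath.pi * n * t),
                               lambda t: 0.0)   # coeff ~ n
```

For $n$ full windings the coefficient is $n$: the real part of each log increment telescopes away while the imaginary parts accumulate the total winding angle, recovering the Gauss linking number as the degree-one part of $Z$.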
In section 2, we introduce the configuration space of $N$ unordered points in the complex plane. In section 3 we present the general notion of chord diagrams. In section 4 we present a minimal Kontsevich integral that we regard as an equivalence between objects in $\overline{\mathcal{A}}$ and links, but which can also be used to generate the original Kontsevich integral. In section 5 we show that we can make the Kontsevich integral more dynamic by making it time dependent as well as dependent on rotations, thus introducing cylinders in the picture on which $Z$ is defined.\\
\section{The configuration space of $N$ points in the plane}
A link in $S^3$ is ambient isotopic to a closed braid ~\cite{A} ~\cite{JB}, so that one can deform a link into a braid part, outside of which all its strands are parallel. For a given link, let $N$ be the number of strands of its braid part. $N$ will depend on the link we have chosen. The transversal intersection of these $N$ strands with the complex plane will yield a set of $N$ distinct points, each point resulting from the intersection of one strand with this plane. It is natural then to study, for any given $N$, the space $X_N$ defined as the configuration space of $N$ distinct unordered points in the complex plane:
\beq
X_N:=\{(z_1,...,z_N) \in \mathbb{C}^N | z_i=z_j \Rightarrow i=j\}/ S_N=(\mathbb{C}^N-\Delta)/S_N
\eeq
where $S_N$ is the permutation group on $N$ elements and $\Delta$ is the big diagonal in $\mathbb{C}^N$. The labeling of points of $X_N$ is not induced by any ordering on $\mathbb{C}^N$ but rather is a way to locate the $N$ points in the complex plane whose collection defines a single point of $X_N$. We will sometimes write $\sum_{1 \leq i \leq N}[z_i]$ instead of $\{z_1,...,z_N\}$ to represent points in configuration space. The points $z_1,...,z_N$ of the complex plane defining a point $Z=\sum_{1 \leq i \leq N}[z_i]$ of $X_N$ will be referred to as the $N$ defining points of $Z$. We consider the topology $\tau$ on $X_N$ generated by open sets of the form $U=\{U_1,...,U_N\}$ where the $U_i,\,1 \leq i \leq N$ are non-overlapping open sets in the complex plane. We will also refer to those open sets $U_1,...,U_N$ as the $N$ defining open sets of the open set $U$ of $X_N$. \\
We review the basic terminology pertaining to braids as presented in ~\cite{JB} since we will work with braids in what follows. The pure braid group of $\mathbb{C}^N$ is defined to be $\pi_1 (\mathbb{C}^N - \Delta)$, and the braid group of $\mathbb{C}^N$ is defined to be $\pi_1 (X_N)$. A braid is an element of this latter group. If $q$ denotes the regular projection map from $\mathbb{C}^N - \Delta$ to $X_N$, $Z=(z_1,...,z_N) \in \mathbb{C}^N - \Delta$, $qZ \in X_N$, then $\gamma \in \pi_1 (X_N,qZ)$ based at $qZ$ is given by a loop $\gamma =\{\gamma_1,...,\gamma_N \}$ which lifts uniquely to a path in $\mathbb{C}^N - \Delta$ based at $Z$ that without loss of generality we will denote by the same letter $\gamma$. Then we have $\gamma =(\gamma_1 ,...,\gamma_N )$. The graph of the $i$-th coordinate of $\gamma$ is defined to be $\Gamma_i := \{(\gamma_i (t),t)\;|\;t \in I \}$, $1 \leq i \leq N$. Each such graph $\Gamma_i$ defines an arc $\tilde{\gamma}_i \in \mathbb{C} \times I$ and $\tilde{\gamma}:=\cup_{1 \leq i \leq N} \tilde{\gamma}_i \in \mathbb{C}\times I $ is called a geometric braid, which we will refer to as the lift of $\gamma$. As such it is open, and its closure is a closed braid.
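As a concrete illustration of the quotient by $S_N$, a point of $X_N$ can be modeled as an unordered collection of distinct complex numbers; the following minimal Python sketch (our own illustration) canonicalizes the labeling by sorting:

```python
def config_point(*zs):
    """A point of X_N: N distinct unordered points of C. Sorting on the
    (real, imaginary) parts forgets the labeling, implementing the
    quotient by the permutation group S_N."""
    zs = [complex(z) for z in zs]
    if len(set(zs)) != len(zs):
        raise ValueError("defining points must be distinct (off the big diagonal)")
    return tuple(sorted(zs, key=lambda z: (z.real, z.imag)))

# two labelings of the same three defining points give the same point of X_3
p = config_point(1 + 2j, 0, -1j)
q = config_point(-1j, 1 + 2j, 0)
```

Equality of `p` and `q` reflects the fact that the labeling of the defining points carries no information: only the underlying set of $N$ points of the complex plane determines a point of $X_N$.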
\section{Chord diagrams}
We will be interested in considering chord diagrams with support graphs in $\mathbb{C} \times I$, so for that purpose one considers a more general definition of chord diagrams than the one presented in the introduction which was sufficient to discuss the Kontsevich integral of knots.
\begin{ceedee}[\cite{LM}]
Let $X$ be a one dimensional, compact, oriented, smooth manifold with corners with numbered components. A chord diagram with support on $X$ is a set of finitely many unordered pairs of distinct non-boundary points on $X$ defined modulo orientation and component preserving homeomorphisms. One realizes each pair geometrically by drawing a dashed line, or chord, stretching from one point to the other. One denotes by $\mathcal{A}(X)$ the $\mathbb{C}$-vector space spanned by chord diagrams with support on $X$ modulo the framing independence relation as well as the 4-T relation: if $i$, $j$ and $k$ are indices for components of $X$ on which chords are ending, then locally the 4-T relation can be written:
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(0,-0.5)
\multiput(0,0.75)(0.2,0){5}{\line(1,0){0.1}}
\multiput(1,0.25)(0.2,0){5}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(0.1,-0.2){$i$}
\put(1.1,-0.2){$j$}
\put(2.1,-0.2){$k$}
\put(2.5,0.5){$+$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-2,-0.5)
\multiput(0,0.75)(0.2,0){5}{\line(1,0){0.1}}
\multiput(0,0.25)(0.2,0){10}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(0.1,-0.2){$i$}
\put(1.1,-0.2){$j$}
\put(2.1,-0.2){$k$}
\put(2.5,0.5){$=$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-4,-0.5)
\multiput(0,0.25)(0.2,0){5}{\line(1,0){0.1}}
\multiput(1,0.75)(0.2,0){5}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(0.1,-0.2){$i$}
\put(1.1,-0.2){$j$}
\put(2.1,-0.2){$k$}
\put(2.5,0.5){$+$}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(1,2)(-6,-0.5)
\multiput(0,0.25)(0.2,0){5}{\line(1,0){0.1}}
\multiput(0,0.75)(0.2,0){10}{\line(1,0){0.1}}
\put(0,0.9){\vector(0,1){0.2}}
\put(1,0.9){\vector(0,1){0.2}}
\put(2,0.9){\vector(0,1){0.2}}
\linethickness{0.3mm}
\put(0,0){\line(0,1){1}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(0.1,-0.2){$i$}
\put(1.1,-0.2){$j$}
\put(2.1,-0.2){$k$}
\end{picture}
One defines the degree of a chord diagram to be the number of chords a chord diagram has, and we call it the chord degree. This induces a graded decomposition of the space $\mathcal{A}(X)$:
\beq
\mathcal{A}(X)=\bigoplus_{m\geq 0}\mathcal{A}^{(m)}(X)
\eeq
where $\mathcal{A}^{(m)}(X)$ is the $\mathbb{C}$-vector space of chord diagrams of degree $m$ with support on $X$. One writes $\overline{\mathcal{A}}(X)$ for the graded completion of $\mathcal{A}(X)$.
\end{ceedee}
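To make the framing independence relation concrete, a degree-$m$ chord diagram on $S^1$ can be encoded by $m$ disjoint pairs of the $2m$ chord endpoints, numbered $0,\dots,2m-1$ along the circle's orientation; a chord is then isolated, and the diagram vanishes, precisely when its feet are cyclically adjacent. A minimal Python sketch of this bookkeeping (our own illustration):

```python
def is_isolated(chord, total):
    """True if the chord's feet are cyclically adjacent among the 2m
    endpoints, i.e. no other chord ends between its feet."""
    a, b = chord
    return (a - b) % total in (1, total - 1)

def vanishes_by_framing_independence(diagram):
    """diagram: m disjoint pairs from {0, ..., 2m-1} on an oriented S^1.
    The diagram is set to zero if any of its chords is isolated."""
    total = 2 * len(diagram)
    return any(is_isolated(chord, total) for chord in diagram)

crossing = [(0, 2), (1, 3)]   # two crossing chords: survives the quotient
adjacent = [(0, 1), (2, 3)]   # chord (0, 1) is isolated: diagram vanishes
```

Since all $2m$ marked points are chord endpoints, "no other chord ending between the feet" reduces to cyclic adjacency of the two feet, which is what `is_isolated` tests.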
We will initially be interested in the case where $X$ is a geometric braid $\tilde{\gamma} \in \mathbb{C} \times I$ corresponding to some loop $\gamma$ in $X_N$. The strands are oriented up, $t=0$ being the bottom plane of the space $\mathbb{C}\times I$ in which the braid is embedded, $t=1$ corresponding to the top plane. Since indices for pairings match those for the times at which they are located, chords will be ordered from the bottom up. For $m=1$, a chord will stretch between two strands, say the strands indexed by $i$ and $j$, and we will denote such a chord diagram by $|ij\rangle \in \mathcal{A}(\tilde{\gamma})$, corresponding to the pairing $(ij)$ in this case. If we want to insist that the skeleton of the chord diagram is a given geometric braid $\tilde{\gamma}$ then we write $|ij\rangle (\tilde{\gamma})$. In certain situations it will be necessary to also indicate at which point along the braid is the chord situated for location purposes. Once we have $|ij\rangle (\tilde{\gamma})$, it is sufficient to have the height $t \in I$ at which we have to place the chord $|ij\rangle$ on $\tilde{\gamma}$ and $|ij\rangle (\tilde{\gamma})(t)$ is defined to be a chord between the $i$-th and $j$-th strands of $\tilde{\gamma}$ at height $t$, or equivalently a chord between $(\gamma_i (t),t)$ and $(\gamma_j (t),t)$. In that case we work with a representative of the class defining the chord diagram $|ij\rangle (\tilde{\gamma})$. \\
For the purpose of reconstructing links from chord diagrams, we will be interested in chord diagrams supported at a point of $X_N$. For a point $Z=\{z_1,...,z_N\} \in X_N$, some $P=(k,l)$, $1 \leq k \neq l \leq N$, $|P\rangle(Z) \in \mathcal{A}(Z)$ is a chord between $z_k$ and $z_l$ in $X_N$. We denote by $\mathcal{A}(X_N)$ the complex vector space spanned by all such elements, and by $\overline{\mathcal{A}}(X_N)$ its graded completion.\\
We will also be interested in working with elements of $\mathcal{A}^{(1)}(X) \otimes \Omega^1 (\log \mathbb{C})$ with $X$ to be determined, that we denote by $|ij\rangle \dlog (z_i-z_j)$. In this notation if $\tilde{\gamma} \in \mathbb{C} \times I$ is a geometric braid obtained from lifting a loop $\gamma$ in $X_N$, if we arbitrarily index the $N$ strands of $\tilde{\gamma}$, then the $k$-th strand is obtained from lifting a path in the complex plane given by some function $z(t)$, $t \in I$. For a chord $|ij\rangle$ between the $i$-th and the $j$-th strands which are the respective lifts of paths $\gamma_i$ and $\gamma_j$ in the complex plane given by functions $z_i(t)$ and $z_j(t)$, $t\in I$, then $z_i-z_j$ is the difference of two such functions. This leads us to defining the subspace $\Omega^1 (\log \!\vartriangle \! \!\mathbb{C})$ of log differential functionals on $\mathbb{C}$, defined by $\dlog(\vartriangle \!\! z[z_1, z_2])=\dlog(z_1-z_2)$. We have a projection:
\begin{align}
\Omega^1(\log \!\vartriangle \!\!\mathbb{C}) & \xrightarrow{p_2} (\mathbb{C}^2-\Delta)/S_2 \\
\dlog(z_i-z_j) &\mapsto \{z_i,z_j\}
\end{align}
On the other hand, $|ij\rangle$ represents a chord stretching between the $i$-th and $j$-th strand of a given braid. We define a projection:
\begin{align}
\mathcal{A}^{(1)}(\text{braid}) &\xrightarrow{p_1} (\mathbb{C}^2-\Delta)/S_2 \\
|ij\rangle(Z) &\mapsto \{z_i,z_j\}
\end{align}
It follows that we must have $|ij\rangle \dlog(z_i-z_j) \in \mathcal{A}^{(1)}(\text{braid})\times_{X_2} \Omega^1 (\log \!\vartriangle \!\!\mathbb{C})$.\\
\section{The Kontsevich integral as a generator}
Any given link $L$ can be put in braid form \cite{A}, a geometric braid $\tilde{\gamma} \in \mathbb{C} \times I$ whose closure yields back the link we started with. Thus we regard links as being equivalent to their geometric braids. We regard a two strands geometric braid in $\mathbb{C} \times I$ with 4 boundary points, two of which are in the plane $\mathbb{C} \times \{0\}$, the other remaining two in the plane $\mathbb{C} \times \{1\}$, as the boundary of a ribbon of $\mathbb{C} \times I$ whose intersections with the planes $\mathbb{C} \times \{0\}$ and $\mathbb{C} \times \{1\}$ are exactly those 4 points. We refer to such a ribbon as being the ribbon associated to that particular geometric braid. Thus we regard two-strands geometric braids as being equivalent to their associated ribbons, since we can go from one to the other. Consequently, given a ribbon in $\mathbb{C} \times I$, we can also refer to its boundary strands as its two associated strands. We also regard a unique strand as being associated to a ribbon of vanishing width in an obvious way. This generalizes easily to geometric braids with $N$ strands; of the $(N-1)!$ associated ribbons connecting them, a smaller number is necessary to fully recover the $N$ strands of the braid, as well as to position the strands with respect to one another. In doing so we keep in mind that a geometric braid $\tilde{\gamma}$ is the lift of some loop $\gamma$ in configuration space $X_N$. To give the positioning of two strands by means of their associated ribbon is an equivalence relation, and by transitivity it follows that all we need is $(N-1)$ of those $(N-1)!$ ribbons. We now regard a given ribbon in $\mathbb{C} \times I$ associated to two strands of a geometric braid as the closure of the set of horizontal chords from one point of either of its associated strands to the point on its other associated strand. Such a closure defines a ruled surface whose underlying ribbon is none other than the ribbon we started with. 
This generalizes easily to the presence of $N-1$ ribbons. One may therefore think that to recover a link $L$ it is sufficient to have $N-1$ well-chosen ribbons, or equivalently the closures of $N-1$ sets of chords between their boundaries. What the Kontsevich integral does is a lot more. The Kontsevich integral is valued in the graded algebra $\mathcal{A}$ of chord diagrams. The metric aspect necessary for locating geometric objects with respect to one another is encoded in the local coordinates on $\mathbb{C} \times I$ for such objects. For instance two strands of a geometric braid are viewed as the lift of two paths in $X_2$ given by two functions $z_1(t)$ and $z_2(t)$, $t \in I$. Ultimately the Kontsevich integral is invariant under horizontal deformations, so what is of most interest to us is the winding of strands around one another, and thus we are led to considering not differences $z_1 - z_2$ but logarithms of such differences, as those pick up crossings between strands. Thus a horizontal chord based at two points of two different strands, along with the logarithm of the difference of the two complex variables locating these two points, is sufficient. For $P$ an applicable pairing, $Z=\{z_1,z_2\}$ a point of $X_2$, $|P\rangle (Z)$ the chord between the two points $z_1$ and $z_2$, the object we are looking at is $(|P\rangle (Z), \frac{1}{2 \pi i}\log (z_1-z_2))$. Given a two-strand geometric braid $\tilde{\gamma} = \{\tilde{\gamma}_1 , \tilde{\gamma}_2 \}$ with $\tilde{\gamma}_i$ the arc corresponding to the graph $\Gamma_i=\{(z_i (t), t) | t \in I \}$, obtained from lifting $\gamma_i = \{z_i(t) | t \in I \} $ in $X_2$, $i=1,2$, we regard its associated ribbon as the surface underlying the ruled surface obtained as the closure of the set $\{|P\rangle (z_1(t), z_2(t)) | t \in I \}$. While it is clear how to reproduce the ribbon, it is not clear, however, how to implement such a closure. 
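As a toy check that such logarithms indeed record winding rather than mere distance, suppose the two strands wind around one another $n$ times, say $z_1(t)-z_2(t)=r e^{2 \pi i n t}$ for some fixed $r>0$ (an illustrative choice, not a restriction); then
\beq
\frac{1}{2 \pi i}\int_0^1 \dlog (z_1(t)-z_2(t))=\frac{1}{2 \pi i}\int_0^1 2 \pi i \, n \, dt=n
\eeq
so the coefficient attached to a single chord between the two strands counts the winding number of one strand about the other.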
A first step towards achieving this in Kontsevich integral computations is to fatten chords and to consider germs of chords based at small neighborhoods of points at which they are located, which are given as the intersections of small open balls in $\mathbb{C} \times I$ centered about those points with the strands on which the points are located. We denote by $\delta$ such an operation, and by $\delta Z$ such a neighborhood. We write:
\beq
\delta |P\rangle (Z)=|P\rangle (\delta Z)
\eeq
where $|P\rangle(Z)$ is a chord with support the point $Z$ while $|P\rangle(\delta Z)$ is the same chord with support in a neighborhood $\delta Z$ of the point $Z$, and with its feet located at $Z$. Once such an object is defined we can define differentials in such a neighborhood $\delta Z$, and thus we can consider the log differential $\dlog(z_1-z_2)$. This leads to considering densities defined as follows:
\beq
\delta (|P\rangle (Z), \log (z_1-z_2))= |P\rangle (\delta Z) \dlog (z_1-z_2)
\eeq
Further, while $\overline{\cup _{Z \in \tilde{\gamma}}|P\rangle (Z)}$ does reproduce a ruled surface, we cannot easily incorporate the logarithms in such a closure. Summing over densities is possible however, and this is done via an integration. At the level of chord diagrams, what used to be a simple union of chord diagrams based at a point can now be implemented by taking the concatenation of chord diagrams based in a neighborhood of points of $\tilde{\gamma}$, as such neighborhoods can be concatenated. This leads us to defining the following product. For two applicable pairings $P$ and $P'$ of degree one, with $|P\rangle (Z_a)$ and $|P'\rangle (Z_b)$ based at two different points distant from one another, we define:
\begin{align}
|P\rangle &(\delta Z_a) \dlog (z_{a,1} - z_{a,2}) \cdot |P'\rangle (\delta Z_b) \dlog (z_{b,1}-z_{b,2}) \nonumber \\
&=|P\rangle (\delta Z_a)|P'\rangle (\delta Z_b)\dlog (z_{a,1} - z_{a,2})\dlog (z_{b,1}-z_{b,2})
\end{align}
If the two points $Z_a$ and $Z_b$ are close together, then we can regard $\delta Z_a$ and $\delta Z_b$ as being essentially the same neighborhood and we define the product $|P\rangle (\delta Z_a) |P'\rangle(\delta Z_b)$ as being a concatenation strand-wise:
\beq
|P\rangle (\delta Z_a) |P'\rangle(\delta Z_b)=|P,P'\rangle (\delta Z_{\Lambda})
\eeq
$\Lambda$ being either of $a$ or $b$. In this situation, this leads to defining:
\begin{align}
|P\rangle &(\delta Z_a) \dlog (z_{a,1} - z_{a,2}) \cdot |P'\rangle (\delta Z_b) \dlog (z_{b,1}-z_{b,2})\nonumber \\
&=|P,P'\rangle (\delta Z_{\Lambda})\dlog (z_{a,1} - z_{a,2})\dlog (z_{b,1}-z_{b,2})
\end{align}
We generalize this easily to the product of more than two chord diagram valued log differentials. This gives rise to the non-commutative graded algebra $\delta \mathcal{A}(X_N \times I)$ with graded completion $\delta \overline{\mathcal{A}}(X_N \times I)$. Observe that the support of such chord diagrams was the braid itself in the original definition of the Kontsevich integral. Thus if we define $Z(t)=\tilde{\gamma} \cap \mathbb{C} \times \{t\}$, then this defines a function $Z$ on $I$. Then for $t$ fixed, $P$ fixed, $|P|=1$, the notation $|P\rangle (Z(t))$ makes sense. What we have is a minimal such definition where chords are not tangle chord diagrams per se \cite{ChDu} but rather are merely chord diagrams defined only locally. It is then easy to sum over such densities: for $m \geq 0$ fixed, for $P=(P_1,\cdots , P_m)$ fixed, we sum over all terms of the form $(\frac{1}{2 \pi i})^m\prod _{1 \leq i \leq m}|P_i\rangle (\delta Z(t_i)) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (Z(t_i)))$ for $0 < t_1 < \cdots < t_m < 1$. We then sum over all such choices of $P$'s for which $|P|=m$, and then finally sum over all $m \geq 0$:
\beq
\sum _{m \geq 0} \sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m}\prod _{1 \leq i \leq m}|P_i\rangle (\delta Z(t_i)) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (Z(t_i)))
\eeq
We can easily generalize this to the case of a geometric braid with $N$ strands and we obtain the minimal Kontsevich integral $\Lambda$:
\beq
\Lambda=\sum _{m \geq 0} \sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m}\prod _{1 \leq i \leq m}|P_i\rangle (\delta Z(t_i)) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (Z(t_i)))
\eeq
If we define:
\begin{align}
\Lambda_M&=\nonumber \\
&\sum _{0 \leq m \leq M} \sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m}\prod _{1 \leq i \leq m}|P_i\rangle (\delta Z(t_i)) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (Z(t_i)))
\end{align}
then we can write:
\beq
\Lambda=\lim_{M \rightarrow \infty} \Lambda_M
\eeq
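For concreteness, for a two-strand geometric braid the first truncation reads
\beq
\Lambda_1=1+\frac{1}{2 \pi i}\int_0^1 |P\rangle (\delta Z(t)) \, \dlog (z_1(t)-z_2(t))
\eeq
where $1$ stands for the empty diagram, the $m=0$ term, and $P$ is the unique applicable pairing of degree one between the two strands.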
If we assume that the geometric braids we work with are smooth enough, then for $M$ large, $\Lambda_M$ is sufficient to geometrically produce ruled surfaces whose boundaries form the geometric braid we are seeking. We define the depth of a geometric braid to be the smallest value of $M$ for which we can recover the braid from studying the coefficients of $\Lambda_M$. From \cite{RG1} we know such a value is 1. The definition of depth will be most useful later when we generalize the Kontsevich integral to more complicated objects than simple geometric braids. Note however that the minimal Kontsevich integral is not the integral as it was initially defined \cite{K}. Kontsevich used the skeleton as a support of the chord diagrams, and instead of considering chord diagrams defined locally, or equivalently germs of chord diagrams, one considers tangle chord diagrams with support on a geometric braid. This can easily be implemented by putting the chords of $\Lambda$ on the geometric braid $\tilde{\gamma}$. To do this we define an action of the graded completion of the non-commutative graded algebra $\oplus_{n \geq 0} \delta \mathcal{A}^{(n)}( X_N \times I) \times ( \Omega^1 (\log \vartriangle \!\mathbb{C}))^n$ on braids by recursion. If $|P|=1$, $Z \in X_N \times \{t\}$, $t \in I$, $1 \leq i \neq j \leq N$ are given, with $\tilde{\gamma}_h$, $h=i,j$, the strands of $\tilde{\gamma}$ on which $|P\rangle $ is supported, then we define $|P\rangle (\delta Z) \cdot \tilde{\gamma}=|P\rangle (\tilde{\gamma}(t))$ if $\tilde{\gamma}(t)=Z$, and $\tilde{\gamma}$ otherwise. Thus here recovering the geometric braid is not the point of computing the Kontsevich integral. We have:
\begin{align}
\Lambda \cdot \tilde{\gamma}&= \nonumber \\
\sum _{m \geq 0} &\sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m}\prod _{1 \leq i \leq m}|P_i\rangle (\delta \tilde{\gamma}(t_i)) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i))) \cdot \tilde{\gamma}\\
=\sum _{m \geq 0} &\sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m}\prod _{1 \leq i \leq m}|P_i\rangle (\delta \tilde{\gamma}(t_i)) \cdot \tilde{\gamma}\prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i))) \\
=\sum _{m \geq 0} &\sum_{|P|=m} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} |P\rangle (\tilde{\gamma}T) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))\\
&=Z(\tilde{\gamma})
\end{align}
where we have used the notation $T=(t_1,\cdots, t_m)$. This simplifies as follows:
\begin{align}
\sum _{m \geq 0} \sum_{|P|=m} &\int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} |P\rangle (\tilde{\gamma}T) \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))= \nonumber \\
&\sum _{m \geq 0} \sum_{|P|=m} |P\rangle (\tilde{\gamma}) \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))
\end{align}
as all the representatives $|P\rangle (\tilde{\gamma}T)$ are elements of the homeomorphism class $|P\rangle (\tilde{\gamma})$, which we can factor out of the integral. This shows that the Kontsevich integral is a sum over all degrees of chords; for each degree, a sum over all possible homeomorphism classes of chords of that particular degree; and for each such class, a sum over all representatives, given by chords supported on the braid times the integral of an appropriate power of the log differentials, which are none other than the densities necessary for performing such a sum. We can simplify this sum even further by defining an equivalence relation on homeomorphism classes of tangle chord diagrams: define two pairings $P$ and $P'$ to be equivalent relative to $\tilde{\gamma}$ if one can go from one pairing to the other by sliding the chords of one pairing along $\tilde{\gamma}$ to obtain the chords of the other. If we close the geometric braid into a link, this is what we would obtain as the chords circle the link. Thus this equivalence relation becomes manifest once each tangle chord diagram is closed into a link. The resulting Kontsevich integral we denote by $\complement Z(\tilde{\gamma})$. We also write $\complement |P\rangle (\tilde{\gamma})=L_P$ if the geometric braid $\tilde{\gamma}$ closes into a link $L$. Then we can write:
\begin{align}
\complement Z(\tilde{\gamma})&=\complement \sum _{m \geq 0} \sum_{|P|=m} |P\rangle (\tilde{\gamma}) \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))\\
&=\sum _{m \geq 0} \sum_{|P|=m} \complement |P\rangle (\tilde{\gamma}) \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))\\
&=\sum _{m \geq 0} \sum_{|P|=m} L_P \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P_i] (\tilde{\gamma}(t_i)))\\
&=\sum _{m \geq 0} \sum_{\substack{[P] \\ |P|=m}}\sum _{P' \in [P]} L_{P'} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P'_i] (\tilde{\gamma}(t_i)))\\
&=\sum _{m \geq 0} \sum_{\substack{[P] \\ |P|=m}} L_P \Big(\sum _{P' \in [P]} \int_{0 < t_1< \cdots < t_m <1} \frac{1}{(2 \pi i)^m} \prod_{1 \leq i \leq m} \dlog (\vartriangle \!\! z [P'_i] (\tilde{\gamma}(t_i))) \Big)
\end{align}
\section{The Kontsevich integral as a map from orbifolded cylinders}
We have the Kontsevich integral of geometric braids $\tilde{\gamma}$ embedded in $\mathbb{C} \times I$. One can of course consider the integral of links as well, the only addition being an overall sign for each chord diagram as follows:
\beq
Z(L)=\sum_{m \geq 0} \sum_{|P|=m} \frac{1}{(2 \pi i)^m} \int _{0 < t_1 < \cdots < t_m < 1} (-1)^{\epsilon (P)}L_P \prod_{ 1 \leq i \leq m} \dlog (\vartriangle \!\! z[P_i](Z(t_i)))
\eeq
where $Z(t)$ are local coordinates on $L$ and, for each chord, $\epsilon(P)$ counts the number of its feet ending on strands that are locally oriented downwards. We can easily generalize this integral to more general pictures such as oriented graphs whose vertices are univalent, trivalent (y or $\lambda$-shaped) or 4-valent (X-shaped). It is worth recalling at this point that the Kontsevich integral is defined for Morse links, and correspondingly we will assume that we do not have graphs with straight edges if we know that after rotation those edges may end up being horizontal. If $\{ G_i | 1 \leq i \leq q \}$ is a collection of graphs of $\mathbb{C} \times I$, we can compute $Z( \amalg _i G_i)$. In doing such a computation, we can study the behavior of $Z$ as graphs are moved in $\mathbb{C} \times I$, thereby introducing a time dependence $\tau$ in the computation of $Z$. Such a dynamic picture can be implemented as follows: for a graph $G$ and a path $\alpha$ in $\mathbb{C} \times I$ inducing a tangent vector field $X$, moving $G$ along $\alpha$ means that at time $\tau$ each point of $G$ moves in the direction given by the vector $X(\tau)$, and this for all $\tau \in [0,1]$. In other words, moving a graph along a path means moving it as a single block. Observe that $Z(G)$ of a single graph $G$ is time independent, as the movement of the graph $G$ along any path will not alter $Z(G)$. As soon as we consider two or more graphs, however, $Z(\amalg_{i \geq 2} G_i)$ becomes non-trivial as at least one of the graphs is moved relative to the others. A trivial example is provided by two non-parallel strands with the same orientation, with highest and lowest points in the same respective planes $\mathbb{C} \times \{2/3\}$ and $\mathbb{C} \times \{1/3\}$, with a separation of $a$ at the top and a separation of $b$ at the bottom. 
The degree one term of the Kontsevich integral of such a picture is $\log(a/b)/2 \pi i$, while if we move either strand up by a third of a unit, the resulting Kontsevich integral is trivial as there are no longer any chords between the two strands. We also consider the rotation of graphs with respect to a point. For a graph $G$ and a fixed point $p$ of $\mathbb{C} \times I$, we can rotate $G$ with respect to that point. The resulting Kontsevich integral will not be invariant under such a rotation as it is known that $Z$ depends on a choice of time axis \cite{K}. One can easily convince oneself of this fact: the Kontsevich integral of the U-shaped unknot is non-trivial, it is commonly denoted $\nu^{-1}$, whereas the Kontsevich integral of the same unknot rotated sideways by ninety degrees is trivial. Since we consider moving graphs, the point $p$ will not be fixed throughout but will change with time, so we consider another path $\beta$ such that $\beta (\tau)$ will be the desired point at time $\tau$ with respect to which a graph is rotated.\\
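The degree one coefficient quoted above for the two non-parallel strands follows directly from the definition: assuming coordinates in which the difference $z_1(t)-z_2(t)$ remains real and positive, so that no winding contributes, we get
\beq
\frac{1}{2 \pi i}\int_{1/3}^{2/3} \dlog (z_1(t)-z_2(t))=\frac{1}{2 \pi i}\Bigl[\log (z_1(t)-z_2(t))\Bigr]_{t=1/3}^{t=2/3}=\frac{1}{2 \pi i}\log \frac{a}{b}
\eeq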
Each graph $G_i$, $1 \leq i \leq q$ moves along a particular path $\alpha_i$ and has a particular curve of center of rotation points $\beta_i$. Thus what we have is a functional $Z(\amalg_i G_i)[\alpha_1][\beta_1][\cdots][\alpha_q][\beta_q]$ and we have that the Kontsevich integral appears as a map from $q$ cylinders $S^1 \times I$ to $\overline{\mathcal{A}}(\amalg_i G_i)$:
\beq
Z(\amalg_i G_i)[\alpha_1][\beta_1][\cdots][\alpha_q][\beta_q]: \otimes ^q S^1 \times I \rightarrow \overline{\mathcal{A}}(\amalg_i G_i)
\eeq
Now as graphs move towards one another, the presence of logarithmic differentials in the expression for $Z$ may lead to singularities. As points connected by a chord get closer together, the corresponding log differentials give rise to coefficients that increase in value. If contact occurs we distinguish two cases. If at the point of contact we have a vertex that is not y, $\lambda$ or X-shaped, we do have an infinite result. If the point of contact is trivalent (y or $\lambda$-shaped) or 4-valent (X-shaped), we have what we call a vanishing singularity, for then the coefficient of such a resulting graph is finite by virtue of the framing independence relation. A first remark is that as the number of components is reduced we either have singularities or vanishing singularities, which points to the fact that the Kontsevich integral may be ultimately defined on a stratified space, something we will go into in a forthcoming paper. Observe that if we rotate graphs, what appear to be singularities may disappear altogether. For illustrative purposes, consider a circle and a strand at an angle that moves towards the circle and touches it, say, at the point of intersection with the horizontal line going through the center of the circle. This is a vanishing singularity. If however we move this strand around the circle in such a manner that it touches the circle on the vertical line going through the center of the circle, then there never was a singularity. \\
Thus two graphs $G_1$ and $G_2$ can be brought into contact:
\begin{itemize}
\item[-] At some points $(\sigma_1, \tau_1)$ and $(\sigma_2, \tau_2)$, which are therefore identified. Thus $Z$ is defined on $S^1 \times I \otimes S^1 \times I/\{(\sigma_1, \tau_1) \sim (\sigma_2, \tau_2)\}$, where $Z(G_1 \amalg G_2)$ is defined away from the singular point on the quotient and $Z(G_1 \cup G_2)$ is defined exactly at the point where the two graphs are brought into contact to form what we call $G_1 \cup G_2$. If this results in a singularity of $Z$, we mark this point by an ``X'', and by a point otherwise. The point of contact depends on the choice of $G=\{G_1, \cdots, G_q\}$, $\alpha =\{ \alpha_1, \cdots, \alpha_q \}$ and $\beta=\{ \beta_1, \cdots, \beta_q \}$; thus we will denote by $\sim_{G \alpha \beta}$ such an identification and by $S^1 \times I \otimes S^1 \times I /\sim_{G \alpha \beta}$ the resulting space on which $Z$ is defined. In so doing we adopt the Knot Theory point of view that tensor products can be represented by objects side by side. In this manner the identification can be easily visualized as being a simple gluing between cylinders, leading to a singular space that we will refer to as an identifold. We will refer to those glued cylinders as id-folded cylinders for short.
\item[-] The two graphs $G_1$ and $G_2$ can be brought into contact and $G_1 \cup G_2$ exists along some path $(\alpha, \beta)$ which can either be given by $(\alpha_1, \beta_1)$ or $(\alpha_2, \beta_2)$. This corresponds to some values $(\sigma_i, \tau_i) \in [\theta_{i1}, \theta_{i2}]\times [a_i, b_i]$, $i=1,2$ on their respective cylinders being identified, leading to a common arc. Such an identification is taken into account by saying that $Z$ is a map on $S^1 \times I \otimes S^1 \times I /\sim_{G \alpha \beta}$. Subarcs of the arc of contact are drawn as a solid line if along such subarcs $Z$ is singular. Subarcs on which $Z$ is well-defined are drawn as a dashed line. Now along the arc of contact, $G_1 \cup G_2$ may be brought into contact with other graphs. The two above steps can then be repeated, leading to a second identification of points or subarcs of this arc with points from a third cylinder. All of this is still taken into account by working with the quotient $\otimes ^3 S^1 \times I/ \sim_{G \alpha \beta}$.
\item[-] The two graphs $G_1$ and $G_2$ are brought into contact along some area. One instance where this happens is in the event that we have two circles of radius $0.5$ units centered at $(0,0)$ and $(1,0)$ respectively, each moving straight up, the circle on the left rotating counterclockwise as it moves, the one on the right rotating clockwise. Those two graphs are in contact for all times and angles. In that situation the two cylinders corresponding to those two circles are identified.
In the event that contact occurs only for areas $\Sigma_1$ and $\Sigma_2$ possibly ending on either or both boundaries of $S^1 \times I$, we identify such areas to yield a common area $\Sigma$ in $S^1 \times I \otimes S^1 \times I / \sim_{G \alpha \beta}$. Subareas of $\Sigma$ over which contact between $G_1$ and $G_2$ results in a singularity for $Z$ are delimited by a solid line, a dashed line otherwise.
The resulting graph $G_1 \cup G_2$ can further be brought into contact with other graphs, resulting in the area of contact in $\otimes^3 S^1 \times I / \sim_{G \alpha \beta}$ having points, arcs or subareas being identified with points from a third cylinder.
\end{itemize}
This has been done for two or three graphs being brought into contact but can easily be generalized to $q$ graphs $G_i$ being brought into contact, the geometry of contact still being taken into account by working with $\otimes ^q S^1 \times I / \sim_{G \alpha \beta}$, on which $Z$ is defined. We denote such an identifold by $IdX(S^1 \times I, G, \alpha, \beta)$ and by $IdX_{S^1 \times I}$ the set of all such identifolds.
More generally, if $\Gamma(\mathbb{C} \times I)$ denotes the set of graphs embedded in $\mathbb{C} \times I$, then $Z$ is an element of $F( \Gamma (\mathbb{C} \times I)^q, F( (P ( \mathbb{C} \times I ))^{2q}, F( IdX_{S^1 \times I}, \overline{\mathcal{A}}( \Gamma ( \mathbb{C} \times I))^q)))$. In stages:
\begin{align}
Z: \Gamma (\mathbb{C} \times I)^q &\rightarrow F( (P ( \mathbb{C} \times I ))^{2q}, F(IdX_{S^1 \times I}, \overline{\mathcal{A}}( \Gamma ( \mathbb{C} \times I))^q)) \nonumber \\
\amalg_{1 \leq i \leq q} G_i &\mapsto Z(\amalg_{1 \leq i \leq q} G_i)
\end{align}
To those $q$ graphs in $\mathbb{C} \times I$, we can associate curves for translations as well as curves for rotations as follows:
\begin{align}
Z(\amalg_{1 \leq i \leq q} G_i): (P ( \mathbb{C} \times I ))^{2q} &\rightarrow F(IdX_{S^1 \times I}, \overline{\mathcal{A}}( \Gamma ( \mathbb{C} \times I))^q) \nonumber \\
\times_{1 \leq i \leq q} (\alpha_i, \beta_i) &\mapsto Z(\amalg_i G_i, \times_i (\alpha_i, \beta_i))
\end{align}
Once these paths are defined, the resulting Kontsevich integral can be seen as being a map of towers to the graded algebra of chord diagrams with support the $q$ diagrams that were initially chosen:
\begin{align}
Z(\amalg_i G_i, \times_i (\alpha_i, \beta_i)): IdX_{S^1 \times I} &\rightarrow \overline{\mathcal{A}}(\amalg_i G_i) \nonumber \\
\otimes^q S^1 \times I / \sim_{G, \alpha \beta} &\mapsto Z(\amalg_i G_i, \times_i (\alpha_i, \beta_i))
\end{align}
At the second stage above, we can study the deformations of $Z$ under deformations in the space of paths in $\mathbb{C}\times I$. For a graph $G$ in $\Gamma ( \mathbb{C} \times I)$ which is the result of gluing $N(G)$ arcs $\tilde{\gamma}_i$ in $\mathbb{C} \times I$, with associated tangent vectors $X_i$, $ 1 \leq i \leq N(G)$, we define the tangent space to $\Gamma(\mathbb{C} \times I)$ at $G$ to be given by:
\beq
T_G \Gamma(\mathbb{C} \times I)=\{X_i | 1 \leq i \leq N(G) \}
\eeq
Such arcs $\tilde{\gamma}_i$ are lifts of paths $\gamma_i$ in the complex plane and thus are given by $(\gamma_i(t), t)$ for $t \in I$, which we denote by $\tilde{\gamma_i}(t)$. We denote by $d \tilde{\gamma}_i$ a differential along such an arc $\tilde{\gamma}_i$, dual to the vector $X_i$. Then the cotangent space at a graph $G$ is defined to be:
\beq
T^*_G \Gamma(\mathbb{C} \times I) = \{d \tilde{\gamma}_i | 1 \leq i \leq N(G) \}/{G' \amalg G''=G}
\eeq
We write:
\beq
\delta G =\sum_{1 \leq i \leq N(G)}\lambda_i d\tilde{\gamma}_i
\eeq
the formal deformation of $G$ where the $\lambda_i$'s are coefficients. We can also deform the paths $\alpha$ and $\beta$ which leads to defining the tangent space:
\begin{align}
T_{\alpha, \beta}P(\mathbb{C}\times I)^q &=\{\delta(\alpha, \beta)\} \\
&=\{((d\alpha_1, d\beta_1), \cdots, (d\alpha_q, d\beta_q))\}
\end{align}
All such deformations induce deformations of $\otimes^q S^1 \times I / \sim_{G \alpha \beta}$. Observe that absent any knowledge of $G$, $\alpha$ or $\beta$, knowing this quotient space we can determine when divergences for $Z$ arise, thereby presenting such identifolds as a blueprint for studying the singularities of the Kontsevich integral.
\section{Introduction}
Human translators exhibit remarkable generalization capabilities and are able to translate even rare inflections they may have never seen before. Indeed, this skill is necessary for translation since language follows a Zipfian distribution \cite{zipf1949human}: a large number of the tokens in a translated text will come from rare types, including rare inflections of common lexemes. For instance, a Spanish translator will most certainly know the verb \word{hablar} ``to speak'', but they will only have seen the less frequent, first-person plural future form \word{hablaremos} a few times. Nevertheless, they would have no problem translating the latter.
In this paper we ask whether current methods for bilingual lexicon induction (BLI) generalize morphologically as humans do. Generalization to rare and novel words is arguably the main point of BLI as a task---most frequent translation pairs are already contained in digital dictionaries. Modern word embeddings encode character-level knowledge \cite{bojanowski2017enriching}, which should---in principle---enable the models to learn this behaviour; but morphological generalization has never been directly tested.\looseness=-1
\begin{figure}
\hspace*{-0.63cm}
\centering
\includegraphics[width=1.1\linewidth]{test_muse_vs_ruder.pdf}
\caption{The relation between the BLI performance and the frequency of source words in the test dictionary. The graph presents results for the model of \newcite{ruder2018discriminative} evaluated on both the MUSE dictionary \cite{conneau2018word} and our morphologically complete dictionary, which contains many rare morphological variants of words. The numbers above the bars correspond to the number of translated source words (a hyphen represents an empty dictionary).}
\label{fig:first-page}
\vspace*{-0.4cm}
\end{figure}
Most existing dictionaries used for BLI evaluation do not account for the full spectrum of linguistic properties of language. Specifically, as we demonstrate in \cref{sec:dictionaries}, they omit \emph{most morphological inflections} of even common lexemes.
To enable a more thorough evaluation we introduce a new resource: 40 \defn{morphologically complete} dictionaries for 5 Slavic and 5 Romance languages, which contain the inflectional paradigm of every word they hold. Much like with a human translator, we expect a BLI model to competently translate full paradigms of lexical items. {Throughout this work we place our focus on genetically-related language pairs. This not only allows us to cleanly map one morphological inflection onto another, but also provides an upper bound for the performance on the generalization task; if the models are not able to generalize for closely related languages they would most certainly be unable to generalize when translating between unrelated languages.}
We use our dictionaries to train and evaluate three of the best performing BLI models \cite{artetxe2016learning,artetxe2017learning,ruder2018discriminative} on all 40 language pairs. To paint a complete picture of the models' generalization ability we propose a new experimental paradigm in which we independently control for four different variables: the word form's frequency, morphology, the lexeme frequency and the lexeme (a total of 480 experiments). Our comprehensive analysis reveals that BLI models can generalize for frequent morphosyntactic categories, even of infrequent lexemes, but fail to generalize for the more rare categories. This yields a more nuanced picture of the known deficiency of word embeddings to underperform on infrequent words \cite{Gong2018}. Our findings also contradict the strong empirical claims made elsewhere in the literature \cite{artetxe2017learning, conneau2018word, ruder2018discriminative, grave2018unsupervised}, as we observe that performance severely degrades when the evaluation includes rare morphological variants of a word and infrequent lexemes. We picture this general trend in Figure \ref{fig:first-page}, which also highlights the skew of existing dictionaries towards more frequent words.\footnote{For MUSE, we only evaluate on forms that are not present in our training dictionary (970 out of 1500 source words).} As our final contribution, we demonstrate that better encoding of morphology is indeed beneficial: enforcing a simple morphological constraint yields consistent performance improvements for all Romance language pairs and many of the Slavic language pairs.\looseness=-1
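The frequency-bucketed evaluation underlying Figure \ref{fig:first-page} can be sketched as follows. This is an illustrative sketch only: the bucket edges, function names and data structures are simplifying assumptions for exposition, not the exact experimental code.

```python
from collections import defaultdict

# Hypothetical frequency buckets; the cut-offs are illustrative assumptions.
BUCKET_EDGES = [(10_000, "0-10k"), (50_000, "10k-50k"), (100_000, "50k-100k")]

def bucket(freq_rank):
    """Map a source word's frequency rank to a coarse bucket label."""
    for upper, label in BUCKET_EDGES:
        if freq_rank < upper:
            return label
    return "100k+"

def precision_at_1_by_bucket(gold, predictions, ranks):
    """Compute precision@1 separately for each source-frequency bucket.

    gold: source word -> set of acceptable target words
    predictions: source word -> model's top-1 translation
    ranks: source word -> frequency rank in the source corpus
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for src, targets in gold.items():
        b = bucket(ranks[src])
        totals[b] += 1
        if predictions.get(src) in targets:
            hits[b] += 1
    return {b: hits[b] / totals[b] for b in totals}
```

Reporting one score per bucket, rather than a single aggregate, is what exposes the degradation on rare word forms pictured in Figure \ref{fig:first-page}.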
\section{Morphological Dictionaries}\label{sec:dictionaries}
\subsection{Existing Dictionaries} \label{sec:existing_dicts}
Frequent word forms can often be found in human-curated dictionaries. Thus, the practical purpose of training a BLI model should be to create translations of new and less common forms, not present in the existing resources. In spite of this, most ground truth lexica used for BLI evaluation contain mainly frequent word forms.
Many available resources are restricted to the top 200k most frequent words; this applies to the English--Italian dictionary of \citet{dinu2015improving}, the English--German and English--Finnish dictionaries of \citet{artetxe2017learning}, and \citet{artetxe2018generalizing}'s English--Spanish resource. The dictionaries of \citet{irvine2017comprehensive} contain only the top most frequent 10k words for each language. \citet{zhang2017adversarial} extracted their Spanish--English and Italian--English lexica from Open Multilingual WordNet \cite{bond2012survey, miller1998wordnet}, a resource which only yields high frequency, lemma level mappings. Another example is the recent MUSE dataset \cite{conneau2018word}, which was generated using an ``internal translation tool'', and in which the majority of word pairs consist of forms ranked in the top 10k of the vocabularies of their respective languages.
Another problem associated with existing resources is `semantic leakage' between train and evaluation sets. As we demonstrate in \textsection \ref{sec:muse}, it is common for a single lexeme to appear in both train and test dictionary---in the form of different word inflections. This circumstance is undesirable in evaluation settings as it can lead to performance overstatements---a model can `memorize' the corresponding target lemma, which ultimately reduces the translation task to a much easier task of finding the most appropriate inflection. Finally, most of the available BLI resources include English in each language pair and, given how morphologically impoverished English is, those resources are unsuitable for analysis of morphological generalization.
\begin{table}
\footnotesize
\centering
\begin{adjustbox}{width=\columnwidth}
\begin{tabular}{l l l l l}
\toprule
\makecell{Source\\Word} & \makecell{Target\\Word} & \makecell{Source\\Lemma} & \makecell{Target\\Lemma} & Tag\\
\midrule
morza & mo\v{r}e & morze & mo\v{r}e & \T{N;NOM;PL}\\
morzu &mo\v{r}i & morze & mo\v{r}e & \T{N;DAT;SG} \\
morze & mo\v{r}e & morze & mo\v{r}e & \T{N;NOM;SG} \\
morzami & mo\v{r}i & morze & mo\v{r}e & \T{N;INS;PL} \\
m\'{o}rz & mo\v{r}í & morze & mo\v{r}e & \T{N;GEN;PL} \\
morzu & mo\v{r}i & morze & mo\v{r}e & \T{N;ESS;SG} \\
morzom & mo\v{r}\'{i}m & morze & mo\v{r}e & \T{N;DAT;PL} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{An example extract from our morphologically complete Polish--Czech dictionary.}
\label{t:dictexample}
\end{table}
\begin{table*}
\renewcommand\arraystretch{1.2}
\footnotesize
\centering
\begin{tabular}{l l l l l | l l l l l }
\toprule
\textbf{Slavic} & \lang{Czech} & \lang{Russian} & \lang{Slovak} & \lang{Ukrainian} &
\textbf{Romance} & \lang{Spanish} & \lang{Catalan} & \lang{Portuguese} & \lang{Italian} \\
\midrule
\lang{Polish} & 53,353 & 128,638 & 14,517 & 12,361 & \lang{French} & 686,139 & 381,825 & 486,575 & 705,800\\
\lang{Czech} & - & 65,123 & 10,817 & 8,194 &\lang{Spanish} & - & 343,780 & 476,543 & 619,174 \\
\lang{Russian} & & - & 128,638 & 10,554 & \lang{Catalan} & & - & 261,016 & 351,609 \\
\lang{Slovak} & & & - & 3,434 & \lang{Portuguese} & & & - & 468,945 \\
\bottomrule
\end{tabular}
\caption{The sizes of our morphologically complete dictionaries for Slavic and Romance language families. We present the sizes for 20 \emph{base} dictionaries. We further split those to obtain 40 train, development and test dictionaries---one for each mapping direction to ensure the correct source language lemma separation.}
\label{t:dictsize}
\end{table*}
\subsection{Our Dictionaries}
To address the shortcomings of the existing evaluation, we built 40 new \defn{morphologically complete} dictionaries, which cover most of the inflectional paradigm of every word they contain. This enables a more thorough evaluation and makes the task much more challenging than traditional evaluation sets. In contrast to the existing resources, our dictionaries contain many rare forms, some of which are out-of-vocabulary for large-scale word embeddings such as \model{fastText}. Notably, this makes them the only resource of this kind that enables evaluating \emph{open-vocabulary} BLI.
We focus on pairs of genetically-related languages for which we can cleanly map one morphological inflection onto another.\footnote{One may translate \word{talked}, the past tense of \word{talk}, into many different Spanish forms, but the Portuguese \word{falavam} has, arguably, only one best Spanish translation: \word{hablaban}.} We selected 5 languages from the Slavic family: Polish, Czech, Russian, Slovak and Ukrainian, and 5 Romance languages: French, Spanish, Italian, Portuguese and Catalan. Table \ref{t:dictexample} presents an example extract from our resource; every source--target pair is followed by their corresponding lemmata and a shared tag.
We generated our dictionaries automatically based on openly available resources: {Open Multilingual WordNet} \cite{bond2012survey} and {Extended Open Multilingual WordNet}\footnote{We used the union of the extended and the original version of the multilingual WordNet since the latter included entries that were not present in the extended version.} \cite{bond2013linking}, both of which are collections of lexical databases which group words into sets of synonyms (synsets), and {UniMorph}\footnote{\url{https://unimorph.github.io/}} \cite{kirov2016very}---a resource comprised of inflectional word paradigms for 107 languages, extracted from Wiktionary\footnote{Wiktionary (\url{https://en.wiktionary.org/wiki/Wiktionary:Main_Page}) is a large-scale, free content multilingual dictionary. (CC BY-SA 3.0); \url{https://creativecommons.org/licenses/by-sa/3.0/}.} and annotated according to the UniMorph schema \cite{sylak2016composition}. For each language pair $(L1, L2)$ we first generated lemma translation pairs by mapping all $L1$ lemmata to all $L2$ lemmata for each synset that appeared in both $L1$ and $L2$ WordNets.\footnote{Note that with this many-to-many mapping we allow for many translations of a single word.} We then filtered out the pairs which contained lemmata not present in UniMorph and generated inflected entries from the remaining pairs: one entry for each tag that appears in the UniMorph paradigms of both lemmata.\footnote{As UniMorph annotations are not consistent across different languages (e.g. in Czech and Polish resources verbs are marked for animacy, while for other Slavic resources they lack such markings) we performed minor tag processing in order to make the tags more compatible. We also discovered that for some languages, the UniMorph resource was either incomplete (resources for Romance languages do not contain adjectives or nouns) or incorrect (Czech verb inflections), in which cases we personally scraped lemma inflections directly from Wiktionary.}
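The generation procedure above can be sketched in a few lines, assuming the WordNet synsets and UniMorph paradigms have already been parsed into plain dictionaries (the data structures and the function name here are illustrative, not the actual pipeline):

```python
def build_dictionary(synsets_l1, synsets_l2, paradigms_l1, paradigms_l2):
    """synsets_*: dict synset_id -> set of lemmata for that language.
    paradigms_*: dict lemma -> dict tag -> inflected form (UniMorph-style).
    Returns (src_form, tgt_form, src_lemma, tgt_lemma, tag) entries."""
    entries = []
    for sid in synsets_l1.keys() & synsets_l2.keys():
        # Many-to-many lemma mapping within each shared synset.
        for lem1 in synsets_l1[sid]:
            for lem2 in synsets_l2[sid]:
                if lem1 not in paradigms_l1 or lem2 not in paradigms_l2:
                    continue  # lemma absent from UniMorph: drop the pair
                # One inflected entry per tag present in both paradigms.
                shared = paradigms_l1[lem1].keys() & paradigms_l2[lem2].keys()
                for tag in shared:
                    entries.append((paradigms_l1[lem1][tag],
                                    paradigms_l2[lem2][tag],
                                    lem1, lem2, tag))
    return entries
```

The tag-compatibility preprocessing described in the footnote would happen before this step.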
The sizes of dictionaries vary across different language pairs and so does the POS distribution. In particular, while Slavic dictionaries are dominated by nouns and adjectives, verbs constitute the majority of pairs in Romance dictionaries. We report the sizes of the dictionaries in Table \ref{t:dictsize}. In order to prevent semantic leakage, discussed in \textsection \ref{sec:existing_dicts}, for each language pair we split the initial dictionary into train, development and test splits so that each sub-dictionary has its own, independent set of lemmata. In our split, the train dictionary contains 60\% of all lemmata, while the development and test dictionaries each have 20\% of the lemmata.
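A lemma-disjoint 60/20/20 split of this kind can be produced as below (a sketch; the exact split procedure and random seed we used are not specified here):

```python
import random

def lemma_disjoint_split(entries, seed=0):
    """Split (src, tgt, src_lemma, tgt_lemma, tag) entries into
    train/dev/test so that no source lemma crosses split boundaries."""
    lemmata = sorted({e[2] for e in entries})
    random.Random(seed).shuffle(lemmata)
    n_train = int(0.6 * len(lemmata))
    n_dev = int(0.2 * len(lemmata))
    train = set(lemmata[:n_train])
    dev = set(lemmata[n_train:n_train + n_dev])
    split = {"train": [], "dev": [], "test": []}
    for e in entries:
        name = "train" if e[2] in train else "dev" if e[2] in dev else "test"
        split[name].append(e)
    return split
```

Because entire paradigms travel with their lemma, no inflection of a test lemma can appear in training.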
\subsection{Comparison with MUSE} \label{sec:muse}
In this section we briefly outline important differences between our resource and the MUSE dictionaries \cite{conneau2018word} for Portuguese, Italian, Spanish, and French (12 dictionaries in total). We focus on MUSE as it is one of the few openly available resources that covers genetically-related language pairs.\looseness=-1
\paragraph{Word Frequency}
The first and most prominent difference lies in the skew towards frequent word forms in MUSE evaluation. While our test dictionaries contain a representative sample of forms in lower frequency bins, the majority of forms present in MUSE are ranked in the top 10k in their respective language vocabularies. This is clearly presented in Figure \ref{fig:first-page} for the French--Spanish resource and also holds for the remaining 11 dictionaries.
\paragraph{Morphological Diversity} Another difference lies in the morphological diversity of both dictionaries. The average proportion of paradigm covered for lemmata present in MUSE test dictionaries is 53\% for nouns, 37\% for adjectives and only 3\% for verbs. We generally observe that for most lemmata the dictionaries contain \emph{only one} inflection. In contrast, {for our test dictionaries we get 97\% coverage for nouns, 98\% for adjectives and 67\% for verbs.} Note that we do not get 100\% coverage as we are limited by the compatibility of source language and target language UniMorph resources.
\paragraph{Train--test Paradigm Leakage} Finally, we carefully analyze the magnitude of the train--test paradigm leakage. We found that, on average, 20\% (299 out of 1500) of source words in MUSE test dictionaries share their lemma with a word in the corresponding train dictionary. For example, the French--Spanish test set includes the form \word{perdent}---a third-person plural present indicative of \word{perdre} (to lose), which is itself present in the train set.
Note that the splits we provide for our dictionaries do not suffer from any leakage as we ensure that each dictionary contains the full paradigm of every lemma.
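Measuring this leakage only requires a lemmatization lookup for the source language (`lemma_of` below is such an assumed lookup table; word pairs are (source, target) tuples):

```python
def leakage_rate(train_pairs, test_pairs, lemma_of):
    """Fraction of test source words whose lemma also occurs, under some
    inflection, among the training source words."""
    train_lemmata = {lemma_of[s] for s, _ in train_pairs if s in lemma_of}
    leaked = sum(1 for s, _ in test_pairs
                 if lemma_of.get(s) in train_lemmata)
    return leaked / len(test_pairs)
```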
\begin{figure*}
\begin{subfigure}{.5\textwidth}
\hspace*{-5mm}
\centering
\includegraphics[width=1.05\linewidth]{slav_multilang_test_ruder.pdf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\hspace*{-0.25cm}
\centering
\includegraphics[width=1.05\linewidth]{rom_multilang_test_ruder.pdf}
\end{subfigure}
\caption{The relation between performance and the frequency of source words in the test dictionary for four example language pairs on the standard BLI task. The numbers above the bars correspond to the dictionary sizes.}
\label{fig:frequency-bars}
\end{figure*}
\section{Bilingual Lexicon Induction} \label{sec:exp1}
The task of bilingual lexicon induction is well established in the community \cite{Vulic2013a,Gouws2015} and is the current standard choice for evaluating cross-lingual word embedding models. Given a list of $N$ source language word forms $x_1, \ldots, x_N$, the goal is to determine the most appropriate translation $t_i$ for each query form $x_i$. In the context of cross-lingual embeddings, this is commonly accomplished by finding the target language word that is most similar to $x_i$ in the shared semantic space, where similarity is usually computed as the cosine between word embeddings.
The resulting set of $(x_i, t_i)$ pairs is then compared to the gold standard and evaluated using the precision at $k$ (P@$k$) metric, where $k$ is typically set to 1, 5 or 10.\footnote{Precision at $k$ represents how many times the correct translation of a source word is returned as one of its $k$ nearest neighbours in the target language.} Throughout our evaluation we use P@1, which is equivalent to accuracy.
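With unit-normalized embeddings, cosine-nearest-neighbour retrieval and P@1 reduce to an argmax over dot products. A minimal sketch (illustrative, not the evaluation code we used):

```python
import numpy as np

def p_at_1(src_vecs, tgt_matrix, tgt_words, gold_sets):
    """P@1 for BLI: the cosine-nearest target word must be in the gold set.
    gold_sets[i] holds the accepted translations of source query i."""
    # Normalize rows so a dot product equals cosine similarity.
    tgt = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    hits = 0
    for vec, gold in zip(src_vecs, gold_sets):
        v = vec / np.linalg.norm(vec)
        hits += tgt_words[int(np.argmax(tgt @ v))] in gold
    return hits / len(gold_sets)
```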
In our work, we focus on the supervised and semi-supervised settings in which the goal is to automatically generate a dictionary given only monolingual word embeddings and some initial, seed translations.
For our experiments we selected the models of \citet{artetxe2016learning}, \citet{artetxe2017learning} and \citet{ruder2018discriminative}---three of the best performing BLI models, which induce a shared cross-lingual embedding space by learning an orthogonal transformation from one monolingual space to another (model descriptions are given in the supplementary material).\footnote{We also used our dictionaries to train and test the more recent model of \citet{artetxe2018robust} on a handful of languages and observed the same general trends.} In particular, the last two employ a self-learning method in which they alternate between a mapping step and a word alignment (dictionary induction) step in an iterative manner. As we observed the same general trends across all models, in the body of the paper we only report the results for the best performing model of \citet{ruder2018discriminative}. We present the complete set of results in the supplementary material.
\paragraph{Experimental Setup} We trained and evaluated all models using the Wikipedia \model{fastText} embeddings \cite{grave2018learning}. Following the existing work, for training we only used the most frequent 200k words in both source and target vocabularies. To allow for evaluation on less frequent words, in all our experiments the models search through the whole target embedding matrix at evaluation (not just the top 200k words, as is common in the literature). This makes the task more challenging, but also gives a more accurate picture of performance.
To enable evaluation on the unseen word forms we generated a \model{fastText} embedding for every out-of-vocabulary (OOV) inflection of every word in WordNet that also appears in UniMorph. We built those embeddings by summing the vectors of all $n$-grams that constitute an OOV form.\footnote{We did this within the \model{fastText} framework, using the trained $.bin$ models for each of our 10 languages.}
In the OOV evaluation we append the resulting vectors to the original embedding matrices.\looseness=-1
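The back-off construction can be sketched as follows. The boundary markers and the 3--6 character n-gram range follow fastText's defaults; real fastText additionally hashes n-grams into a fixed bucket table, which this sketch omits:

```python
import numpy as np

def oov_embedding(word, ngram_vecs, n_min=3, n_max=6, dim=2):
    """Embed an out-of-vocabulary form by summing the vectors of its
    character n-grams. ngram_vecs maps n-gram strings to vectors."""
    token = f"<{word}>"  # fastText pads words with boundary markers
    vec = np.zeros(dim)
    for n in range(n_min, n_max + 1):
        for i in range(len(token) - n + 1):
            # Unknown n-grams contribute nothing (adding scalar 0.0).
            vec += ngram_vecs.get(token[i:i + n], 0.0)
    return vec
```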
\section{Morphological Generalization} \label{sec:morph_gen}
We propose a novel \defn{quadripartite analysis} of the BLI models, in which we independently control for four different variables: (i) word form frequency, (ii) morphology, (iii) lexeme frequency and (iv) lexeme. We provide detailed descriptions for each of those conditions in the following sections. For each condition, we analyzed all 40 language pairs for each of our selected models---a total of 480 experiments. In the body of the paper we only present \emph{a small representative subset} of our results.
\subsection{Controlling for Word Frequency} \label{sec:freq_control}
For highly inflectional languages, many of the infrequent types are rare forms of otherwise common lexemes and, given the morphological regularity of less frequent forms, a model that generalizes well should be able to translate those capably. Thus, to gain insight into the models' generalization ability we first examine the relation between their performance and the frequency of words in the test set.
We split each test dictionary into 9 frequency bins, based on the relative frequencies of words in the original training corpus for the word embeddings (Wikipedia in the case of \model{fastText}). More specifically, a pair appears in a frequency bin if its source word belongs to that bin, according to its rank in the respective vocabulary.
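Bin assignment is a straightforward bucketing by rank; the edges below mirror the ones we report in our tables (the labelling scheme is illustrative):

```python
BIN_EDGES = [10_000, 50_000, 100_000, 200_000,
             300_000, 400_000, 500_000, 600_000]

def frequency_bin(src_word, rank):
    """Return the bin label for a test pair's source word, given its
    frequency rank in the embedding training corpus; forms unseen in the
    corpus go to a separate OOV bin."""
    r = rank.get(src_word)
    if r is None:
        return "OOV"
    for edge in BIN_EDGES:
        if r <= edge:
            return f"top-{edge}"
    return "tail"
```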
We also considered unseen words that appear in the test portion of our dictionaries, but do not occur in the training corpus for the embeddings. This is a fair experimental setting since most of those OOV words are associated with known lemmata. Note that it bears a resemblance to the classic Wug Test \cite{berko1958child} in which a child is introduced to a single instance of a fictitious object---`a wug'---and is asked to name two instances of the same object---`wugs'. However, in contrast to the original setup, we are interested in making sure the unseen inflection of a known lexeme is properly translated.
Figure \ref{fig:frequency-bars} presents the results on the BLI task for four example language pairs: two from the Slavic and two from the Romance language family. The left-hand side of the plots shows the performance for the full dictionaries (with and without OOVs), while the right-hand side demonstrates how the performance changes as the words in the evaluation set become less frequent. The general trend we observe across all language pairs is an acute drop in accuracy for infrequent word forms---e.g. for Catalan--Portuguese the performance falls {from 83\% for pairs containing only the top 10k most frequent words to 40\% for pairs, which contain source words ranked between 200k and 300k}.
\begin{table*}[h]
\footnotesize
\begin{tabular}{l | ll | rrrrrrrrrr}
\toprule
\multirow{2}{*}{Tag} & \multicolumn{2}{c |}{Accuracy} & \multicolumn{10}{c }{Distribution Across Frequency Bins} \\
& In vocab & All & 10k & 50k & 100k & 200k & 300k & 400k & 500k & 600k & Tail & OOVs \\
\midrule
\multicolumn{13}{c}{French--Spanish} \\
\midrule
\T{N;SG} & 56.7 & 55.1 & 45\% & 33\% & 10\% & 6\% & 1\% & 1\% & 0\% & 0\% & 1\% & 2\% \\
\T{N;PL} & 53.6 & 49.9 & 31\% & 29\% & 14\% & 9\% & 4\% & 3\% & 2\% & 1\% & 3\% & 4\% \\
\T{ADJ;MASC;SG} & 62.7 & 60.9 & 29\% & 40\% & 16\% & 7\% & 3\% & 1\% & 1\% & 0\% & 1\% & 2\% \\
\T{NFIN;V} & 50.6 & 49.0 & 35\% & 37\% & 13\% & 7\% & 2\% & 1\% & 1\% & 1\% & 2\% & 2\% \\
\T{2;IMP;PL;POS;V} & 1.5 & 2.0 & 3\% & 4\% & 16\% & 14\% & 12\% & 4\% & 5\% & 3\% & 12\% & 25\% \\
\T{3;FUT;PL;V} & 34.7 & 23.3 & 0\% & 4\% & 8\% & 14\% & 9\% & 8\% & 8\% & 5\% & 16\% & 27\% \\
\T{3;COND;SG;V} & 41.0 & 30.6 & 0\% & 5\% & 14\% & 14\% & 9\% & 9\% & 3\% & 6\% & 15\% & 23\% \\
\midrule
\multicolumn{13}{c}{Polish--Czech} \\
\midrule
\T{INS;N;PL} & 29.5 & 23.8 & 1\% & 17\% & 16\% & 21\% & 10\% & 7\% & 7\% & 2\% & 9\% & 10\% \\
\T{DAT;N;SG} & 26.1 & 18.7 & 13\% & 15\% & 10\% & 12\% & 7\% & 4\% & 3\% & 3\% & 8\% & 24\% \\
\T{ACC;ADJ;MASC;SG} & 47.8 & 47.8 & 10\% & 28\% & 27\% & 16\% & 11\% & 4\% & 1\% & 1\% & 1\% & 0\% \\
\T{ACC;ADJ;FEM;SG} & 47.0 & 46.4 & 4\% & 29\% & 27\% & 25\% & 6\% & 4\% & 3\% & 1\% & 0\% & 0\% \\
\T{NFIN;V} & 47.4 & 45.0 & 13\% & 42\% & 18\% & 15\% & 4\% & 4\% & 0\% & 0\% & 0\% & 5\% \\
\T{3;MASC;PL;PST;V} & 59.4 & 51.4 & 2\% & 23\% & 16\% & 21\% & 9\% & 12\% & 5\% & 5\% & 0\% & 7\% \\
\T{2;IMP;PL;V} & 11.1 & 8.1 & 0\% & 0\% & 0\% & 14\% & 0\% & 7\% & 14\% & 7\% & 3\% & 55\% \\
\bottomrule
\end{tabular}
\caption{ BLI results for word pairs that have a specific morphosyntactic category (left) and a distribution of those forms across different frequency bins (right).}
\label{tab:morph_dist}
\end{table*}
\subsection{Controlling for {Morphology}} \label{sec:morph_control}
From the results of the previous section, it is not clear whether the models perform badly on inflections of generally \emph{infrequent lemmata} or whether they fail on \emph{infrequent morphosyntactic categories}, independently of the lexeme frequency. Indeed, the frequency of different morphosyntactic categories is far from uniform. To shed more light on the underlying cause of the performance drop in \cref{sec:freq_control}, we first analyze the differences in the models' performance as they translate forms belonging to different categories and, next, look at the distribution of these categories across the frequency bins.
In Table \ref{tab:morph_dist} we present our findings for a representative sample of morphosyntactic categories for one Slavic and one Romance language pair (we present the results for all models and all language pairs in the supplementary material).\footnote{We selected the morphosyntactic categories independently for each language family based on how well the models translate words of such morphology--`the good, the bad and the ugly'.} It illustrates the great variability across different paradigm slots---both in terms of their frequency and the difficulty of their translation.
As expected, the performance is best for the slots belonging to the highest frequency bins and forms residing in the rarer slots prove to be more challenging. For example, for French--Spanish the performance on \T{2;IMP;PL;POS;V}, \T{3;FUT;PL;V} and \T{3;COND;SG;V}\footnote{2nd person imperative plural form of a verb, 3rd person future plural form of a verb and 3rd person conditional singular form of a verb.} is notably lower than that for the remaining categories. For both language pairs, the accuracy for the second-person plural imperative (\T{2;IMP;PL;V}) is particularly low: {1.5\% accuracy for French--Spanish and 11.1\% for Polish--Czech} in the in-vocabulary setting. Note that it is unsurprising for an imperative form, expressing an order or command, to be infrequent in the Wikipedia corpora (the resource our monolingual embeddings were trained on). {The complex distribution of the French \T{2;IMP;PL;POS;V} across the frequency bins is likely due to syncretism---the \T{2;IMP;PL;POS;V} paradigm slot shares a form with the 2nd person plural present slot, \T{2;PL;PRS;V}. Our hypothesis is that syncretism may have an effect on the quality of the monolingual embeddings. To our knowledge, the effect of syncretism on embeddings has not yet been systematically investigated.
}
\begin{figure*}[h]
\begin{subfigure}{.5\textwidth}
\hspace*{-0.6cm}
\centering
\includegraphics[width=1.06\linewidth]{pol-ces-lemma-steps.pdf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\hspace*{-0.26cm}
\centering
\includegraphics[width=1.04\linewidth]{fra-spa-lemma-steps.pdf}
\end{subfigure}
\caption{The performance on the standard BLI task (left side of the graphs) and the controlled for lexeme BLI (right side) for words pairs belonging to the most frequent paradigms and the infrequent paradigms. The numbers above the bars are dictionary sizes and the number of out-of-vocabulary forms in each dictionary (bracketed).}
\label{fig:morph-steps}
\end{figure*}
\subsection{Controlling for Lexeme Frequency} \label{sec:par_control}
To get an even more complete picture, we inspect how the performance on translating inflections of \emph{common lemmata} differs from that on forms coming from \emph{less frequent paradigms} by controlling for the frequency of the lexeme.
We separated our dictionaries into two bins based on the relative frequency of the source lexeme. We approximated the frequency of a lexeme using the rank of its most common inflection: our first bin contained lexemes whose most common inflection is ranked in the top 20k forms of its respective vocabulary, while the second bin consisted of lexemes whose most common inflection is ranked lower than 60k.\footnote{Future work could operate on actual word counts rather than relative frequencies of words.}
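The proxy can be written down directly (thresholds as in the text; returning None models the lexemes excluded from both bins):

```python
def lexeme_bin(paradigm_forms, rank, hi=20_000, lo=60_000):
    """Bin a lexeme by the rank of its most common (best-ranked)
    inflection: 'frequent' if within the top hi forms, 'infrequent'
    if ranked below lo, otherwise excluded."""
    ranks = [rank[f] for f in paradigm_forms if f in rank]
    if not ranks:
        return None  # no inflection seen in the corpus at all
    best = min(ranks)
    if best <= hi:
        return "frequent"
    if best > lo:
        return "infrequent"
    return None
```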
We present the results for the same morphosyntactic categories as in \textsection \ref{sec:morph_control} on the left side of the graphs in Figure \ref{fig:morph-steps}. As anticipated, in the case of less frequent lexemes the performance is generally worse than for frequent ones. However, perhaps more surprisingly, we discover that some morphosyntactic categories prove to be problematic even for the most frequent lexemes. Some examples include the previously mentioned imperative verb form or, for Slavic languages, singular dative nouns (\T{DAT;N;SG}).
\begin{table*}[htbp]
\centering
\small
\renewcommand\arraystretch{1.2}
\begin{tabular}{ l || c c | c c || c c | c c || c c}
\toprule
& \multicolumn{4}{c ||}{Normal} & \multicolumn{4}{c ||}{Lexeme} & \multicolumn{2}{c }{Dictionary Sizes} \\
& \multicolumn{2}{c}{In vocab} & \multicolumn{2}{ c || }{+OOVs} & \multicolumn{2}{ c}{In vocab} & \multicolumn{2}{c ||}{+OOVs} & \multirow{ 2}{*}{In vocab} & \multirow{ 2}{*}{+OOVs}\\
\multirow{3}{*}{}Constraint & \xmark & \cmark & \xmark & \cmark & \xmark & \cmark & \xmark & \cmark \\
\hline
Ukrainian--Russian & \B{68.4} & 61.1 & \B{63.7} & 56.1 & \B{89.9} & 89.1 & \B{88.6} & 87.6 & 786 & 933 \\
Russian--Slovak & \B{25.7} & 21.1 & \B{20.9} & 17.0 & \B{79.3} & 76.8 & \B{76.0} & 74.2 & 1610 & 2150 \\
Polish--Czech & 42.0 & \B{44.4} & 34.8 & \B{36.7} & 80.6 & \B{81.1} & 75.3 & \B{75.9} & 4043 & 5332 \\
Russian--Polish & 39.8 & \B{41.2} & 34.8 & \B{36.1} & 80.8 & \B{82.6} & 77.7 & \B{80.2} & 9183 & 11697 \\
\hline
Catalan--Portuguese & 62.8 & \B{64.2} & 41.1 & \B{42.4} & 83.1 & \B{84.3} & 57.7 & \B{59.0} & 5418 & 10759 \\
French--Spanish & 47.8 & \B{50.2} & 26.7 & \B{28.9} & 78.0 & \B{81.4} & 47.9 & \B{52.2} & 9770 & 21087 \\
Portuguese--Spanish & 60.2 & \B{61.1} & 36.8 & \B{37.6} & 84.7 & \B{85.4} & 57.1 & \B{58.2} & 9275 & 22638\\
Italian--Spanish & 42.7 & \B{43.8} & 21.1 & \B{22.1} & 76.4 & \B{77.6} & 47.6 & \B{49.6} & 11685 & 30686\\
\hline
Polish--Spanish & \B{36.1} & 32.1 & \B{28.0} & 25.0 & \B{78.1} & 77.7 & \B{68.6} & 68.4 & 8964 & 12759\\
Spanish--Polish & 28.1 & \B{30.9} & 21.0 & \B{23.2} & 81.2 & \B{82.0} & 64.2 & \B{65.8} & 4270 & 6095 \\
\end{tabular}
\caption{The results on the standard BLI task and BLI controlled for lexeme for the original model of \citet{ruder2018discriminative} (\xmark) and the same model trained with a morphological constraint (\cmark) (discussed in \S\ref{sec:exp3}).}
\label{t:all_res}
\end{table*}
\subsection{Controlling for Lexeme}\label{sec:exp2}
We are, in principle, interested in the ability of the models to generalize \emph{morphologically}. In the preceding sections we focused on the standard BLI evaluation, which given our objective is somewhat unfair to the models---they are additionally punished for not capturing lexical semantics. To gain more direct insight into the models' generalization abilities we develop a novel experiment in which the lexeme is controlled for. At test time, the BLI model is given a set of candidate translations, all of which belong to the same paradigm, and is asked to select the most suitable form. Note that the model only requires morphological knowledge to successfully complete the task---no lexical semantics is required. When mapping between closely related languages this task is particularly straightforward, and especially so in the case of \model{fastText} where a single $n$-gram, e.g. the suffix \word{-ing} in English as in the noun \word{running}, can be highly indicative of the inflectional morphology of the word.
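Operationally, the controlled-for-lexeme evaluation just restricts the nearest-neighbour search to the forms of the gold target paradigm (a sketch):

```python
import numpy as np

def pick_inflection(src_vec, paradigm_forms, tgt_vecs):
    """Return the form of the gold target paradigm whose embedding is
    cosine-nearest to the (mapped) source vector."""
    v = src_vec / np.linalg.norm(src_vec)
    sims = {f: float(v @ (tgt_vecs[f] / np.linalg.norm(tgt_vecs[f])))
            for f in paradigm_forms}
    return max(sims, key=sims.get)
```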
We present results on 8 representative language pairs in Table \ref{t:all_res} (column Lexeme). We report the accuracy on the in-vocabulary pairs as well as all the pairs in the dictionary, including OOVs. As expected, compared to standard BLI this task is much easier for the models---the performance is generally high. For Slavic languages numbers remain high even in the open-vocabulary setup, which suggests that the models can generalize morphologically. On the other hand, for Romance languages we observe a visible drop in performance.
We hypothesize that this difference is due to the large quantities of verbs in Romance dictionaries; in both Slavic and Romance languages verbs have substantial paradigms, often of more than 60 forms, which makes identifying the correct form more difficult. In contrast, most words in our Slavic dictionaries are nouns and adjectives with much smaller paradigms.
Following our analysis in \cref{sec:par_control}, we also examine how the performance on this new task differs for less and more frequent paradigms, as well as across different morphosyntactic categories. Here, we exhibit an unexpected result, which we present in the two right-hand side graphs of Figure \ref{fig:morph-steps}: the state-of-the-art BLI models \emph{do} generalize morphologically for frequent slots, but \emph{do not} generalize for infrequent slots.
{For instance, for the Polish--Czech pair, the models achieve 100\% accuracy on identifying the correct inflection when this inflection is {\T{ACC;ADJ;MASC;SG}, \T{ACC;ADJ;FEM;SG}, \T{3;MASC;PL;PST;V} or \T{NFIN;V}}\footnote{Masculine accusative singular form of an adjective, feminine accusative singular form of an adjective, 3rd person masculine plural past form of a verb and a verbal infinitive.} for frequent and, for the first two categories, also the \emph{infrequent} lexemes; all of which are common morphosyntactic categories (see Table \ref{tab:morph_dist}). The results from Figure \ref{fig:morph-steps} also demonstrate that the worst performing forms for the French--Spanish language pair are indeed the infrequent verbal inflections.}
\subsection{Experiments on an Unrelated Language Pair}\label{sec:unrelated}
So far, in our evaluation we have focused on pairs of genetically-related languages, which provided an upper bound for morphological generalization in BLI.
But our experimental paradigm is not limited to related language pairs. We demonstrate this by experimenting on two example pairs of one Slavic and one Romance language: Polish--Spanish and Spanish--Polish. To construct the dictionaries we followed the procedure discussed in \textsection \ref{sec:dictionaries}, but matched the tags based only on the features exhibited in both languages (e.g. Polish \T{DAT;N;SG} can be mapped to \T{N;SG} in Spanish, as Spanish nouns are not declined for case). Note that mapping between morphosyntactic categories of two unrelated languages is a challenging task \cite{haspelmath2010comparative}, but we did our best to address the issues specific to translation between Polish and Spanish. E.g. we ensured that Spanish imperfective/perfective verb forms can only be translated to Polish forms of imperfective/perfective verbs.
The results of our experiments are presented in the last two rows of Table \ref{t:all_res} and, for Polish--Spanish, also in Figure \ref{fig:morph-unrel}. As expected, the BLI results on unrelated languages are generally, but not uniformly, worse than those on related language pairs. { The accuracy for Spanish--Polish is particularly low, at 28\% (for in-vocabulary pairs).} { We see large variation in performance across morphosyntactic categories and across more and less frequent lexemes, similar to that observed for related language pairs. In particular, we observe that \T{2;IMP;PL;V}---the category that proved difficult for Polish--Czech BLI---is also among the most challenging for Polish--Spanish. However, one of the highest performing categories for Polish--Czech, \T{3;MASC;PL;PST;V}, yields much worse accuracy for Polish--Spanish.}
\begin{figure}
\hspace*{-0.45cm}
\centering
\includegraphics[width=1.08\linewidth]{pol-spa-lemma-steps.pdf}
\caption{The results of the experiments on a pair of unrelated languages---Polish and Spanish---on the standard BLI task (left side) and the controlled for lexeme BLI (right side) for word pairs belonging to the most frequent and the infrequent paradigms.}
\label{fig:morph-unrel}
\vspace{-0.2cm}
\end{figure}
\subsection{Adding a Morphological Constraint}\label{sec:exp3}
In our final experiment we demonstrate that improving morphological generalization has the potential to improve BLI results. We show that enforcing a simple, hard morphological constraint \emph{at training time} can lead to performance improvements at test time---both on the standard BLI task and the controlled for lexeme BLI. We adapt the self-learning models of \citet{artetxe2017learning} and \citet{ruder2018discriminative} so that at each iteration they can align two words only if they share the same morphosyntactic category. Note that this limits the training data only to word forms present in UniMorph, as those are the only ones for which we have a gold tag.\footnote{We also experimented with more relaxed versions of the constraint, where we only used a subset of the features, but we found that for most languages the constraint worked better with more features.} The results, a subset of which we present in Table \ref{t:all_res}, show that the constraint, despite its simplicity and being trained on less data, leads to performance improvements for \emph{every} Romance language pair and many of the Slavic language pairs. We take this as evidence that properly modelling morphology will have a role to play in BLI.
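The constraint itself is a filter inside the dictionary-induction step: a source word may only be aligned to target words carrying the same morphosyntactic tag. A sketch over a precomputed similarity matrix (the actual self-learning models of course interleave this with the mapping step):

```python
import numpy as np

def constrained_alignment(sim, src_tags, tgt_tags):
    """One induction step with the hard morphological constraint: for each
    source word, take the nearest target among those sharing its tag.
    sim is a (num_src x num_tgt) similarity matrix."""
    tgt_tags = np.asarray(tgt_tags)
    pairs = []
    for i, tag in enumerate(src_tags):
        mask = tgt_tags == tag
        if not mask.any():
            continue  # no same-tag candidate: this word is not aligned
        scores = np.where(mask, sim[i], -np.inf)
        pairs.append((i, int(np.argmax(scores))))
    return pairs
```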
\section{Discussion and Conclusion}
We conducted a large-scale evaluation of the generalization ability of the state-of-the-art bilingual lexicon inducers.
To enable our analysis we created 40 morphologically complete dictionaries for 5 Slavic and 5 Romance languages and proposed a novel experimental paradigm in which we independently control for four different variables.
Our study is the first to examine morphological generalization in BLI and it reveals a nuanced picture of the interplay between performance, the word's frequency and morphology. We observe that the performance degrades when models are evaluated on less common words---even for the infrequent forms of common lexemes. Our results from the controlled for lexeme experiments suggest that models are able to generalize well for more frequent morphosyntactic categories and for part-of-speech with smaller paradigms. However, their ability to generalize decreases as the slots get less frequent and/or the paradigms get larger. Finally, we proposed a simple method to inject morphological knowledge and demonstrated that making models more morphologically aware can lead to general performance improvements.
\section{Prefetcher architecture}
\label{sec:arch}
\begin{figure}[t!]
\centering
\fbox{\includegraphics[width=0.95\columnwidth]{block_diagram.PNG}}
\caption{Semantic prefetcher block diagram (existing core blocks in gray).}
\label{fig:block_diagram}
\vspace{3mm}
\centering
\fbox{\includegraphics[width=0.95\columnwidth]{slice_gen.PNG}}
\caption{PIE lifetime flow chart.}
\label{fig:slice_gen}
\vspace{-2mm}
\end{figure}
In this section we describe the architecture of the semantic prefetcher. Figure~\ref{fig:block_diagram} shows the high level block diagram of the components:
1) The \textit{flakiness detector} for tracking recurring loads;
2) A cyclic \textit{History queue}, tracking retired code flow;
3) The \textit{prefetch injection entries (PIE) array}, storing slices;
4) Several \textit{walker} FSMs, generating and validating slices;
5) The \textit{slice injector};
and 6) the \textit{prefetch queue}, which tracks usefulness and provides feedback.
\subsection{Flaky load detection}
The first component is the \textit{flakiness detector}, which is responsible for isolating loads that have both high recurrence and miss rates. The unit identifies and tracks load context by a combination of its instruction pointer (IP) and a branch history register (BHR).
Our BHR tracks up to 6 recent branches. Each branch is represented by the lowest 4 bits of its IP, with the least significant bit XOR-ed with the binary outcome of that branch (taken or not). Together, the load IP and the BHR represent an instance of load within a specific program context. This method can distinguish between different occurrences within nested loops or complex control flows, which may affect the access pattern and the generated slice. The IP and BHR are concatenated and hashed to create an index that would identify the load throughout the prefetcher mechanisms.
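A behavioural sketch of the index computation follows. The paper fixes only the 6-branch window, the 4 low IP bits per branch, the outcome XOR on the least significant bit, and the concatenate-then-hash step; the register widths and the final hash function below are our assumptions:

```python
def context_index(load_ip, branch_history, index_bits=10):
    """branch_history: (branch_ip, taken) tuples for retired branches,
    oldest first. Returns the hashed index identifying this load context."""
    bhr = 0
    for ip, taken in branch_history[-6:]:          # 6 most recent branches
        nibble = (ip & 0xF) ^ (1 if taken else 0)  # XOR outcome into LSB
        bhr = ((bhr << 4) | nibble) & 0xFF_FFFF    # keep 6 x 4 = 24 bits
    key = ((load_ip & 0xFFFF_FFFF) << 24) | bhr    # concatenate IP and BHR
    # Fold down to index_bits with a simple multiplicative hash (assumed).
    return (key * 0x9E37_79B1) % (1 << index_bits)
```

Because the branch outcome is folded into the history, the same static load reached through different control-flow paths maps to different PIE entries.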
For each load missing the L1, the prefetcher allocates a \textit{prefetch injection entry} (PIE) in the PIE array. These entries serve to track potential loads (within some context) and, if considered useful, construct a slice for them and store it for prefetching. Figure~\ref{fig:slice_gen} describes the life cycle of a single PIE.
Once allocated, the entry starts at the "Active" state. The flakiness detector tracks recurrence and miss rate for each of the active loads.
Once a PIE has been qualified as flaky (above-threshold miss rate) and hot (high recurrence over a time window) its state switches to "Gen" and it is assigned a walker to construct its slice of code (a PIE slice).
\subsection{PIE slice generation}
The slice that generates the load address consists of a subset of the code path prior to that load. The prefetcher tracks the program code flow at retirement using a cyclic history queue, although in modern processors this can be replaced with existing debug features such as Intel's Real-Time Instruction Tracing (RTIT)~\cite{RTIT}. Once a PIE is switched to "Gen" state and needs to construct a slice it is assigned one of the free walker finite-state-machines (FSMs). The walker traverses the history queue from the youngest instruction (the flaky load itself) to the oldest and constructs the PIE slice.
To track data dependency, the walker uses 1) a \textit{source bitmap} which assigns one bit per register to track the active sources (only general-purpose and flags registers); 2) a \textit{renaming cache} that tracks memory operations for potential memory renaming~\cite{MRN}; 3) a \textit{Temporary register map} that tracks architectural registers replaced with temporary ones.
The walker also has storage for 16 operations that serves as the local copy of the slice during construction. Finally, the walker has an index pointing to the history queue (for the traversal), an index for the local slice (for construction), and a counter of temporary registers used for memory renaming.
The walker first sets the bits representing the load sources, and then traverses the history queue backwards (from youngest to oldest instruction).
On each instruction that writes to a register in the bitmap, the walker does the following:
\begin{itemize}
\itemsep0em
\item Pushes the instruction to its local code slice (using an index that starts at the last entry of the slice and moves backwards).
\item Clears the destination register from the sources bitmap marking that its producer has been added.
\item Sets all the registers corresponding with the current instruction sources. This ensures older operations producing these sources will also be added.
\item Records the destination value. This will be checked for constants or strides during the next phases.
\end{itemize}
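The backward walk above can be sketched as a simplified software model. The instruction record and its fields are hypothetical, and memory renaming, flags tracking, and the complexity limits are omitted:

```python
class Inst:
    """Hypothetical retired-instruction record in the history queue."""
    def __init__(self, dest, srcs, value=0):
        self.dest, self.srcs, self.value = dest, srcs, value

def build_slice(history, load):
    """Walk the history queue youngest-to-oldest and collect the
    dependency chain producing the load's sources."""
    live = set(load.srcs)           # the source bitmap
    slice_ops = []
    for inst in reversed(history):  # history is ordered oldest -> youngest
        if not live:                # stop: no live sources remain
            break
        if inst.dest in live:
            slice_ops.insert(0, inst)   # push to the local slice
            live.discard(inst.dest)     # producer found, clear its bit
            live.update(inst.srcs)      # its sources are now needed
    return slice_ops + [load]
```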
Loads that are added to the slice record their address and their index within the local slice in the rename cache. This structure can host 16 addresses in the form of a set-associative cache. The walker then performs memory renaming whenever an older store is observed (further along the walk) that matches an address in the rename cache. The renaming is done by extracting the index of the matching load from the structure and replacing both the store and the load operations in the slice with a move to and from (respectively) an available temporary register.
It should be noted that reducing the store/load pair further by moving the store data directly to the load destination is not possible, since the load destination register may be reused between the store and the load (and therefore override the data).
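The renaming step can be modeled as follows (a plain dict stands in for the 16-entry set-associative rename cache, and the op encoding is invented for the sketch):

```python
# When an older store matches an address recorded by a load already in
# the slice, both become moves through a fresh temporary register.
def rename_pair(slice_ops, rename_cache, store_addr, store_src, next_temp):
    """rename_cache maps a load address -> that load's index in the slice.
    Returns the replacement op for the store, or None if no match."""
    if store_addr not in rename_cache:
        return None
    idx = rename_cache.pop(store_addr)
    tmp = "t%d" % next_temp
    load = slice_ops[idx]
    slice_ops[idx] = {"op": "mov", "dest": load["dest"], "src": tmp}
    return {"op": "mov", "dest": tmp, "src": store_src}  # replaces the store
```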
The walker completes the traversal upon 1) reaching the tail of the cyclic history queue; 2) when there are no longer valid sources marked in the source bitmap; or 3) when the loop completes a round-trip and the same load within the same BHR context is encountered.
Upon successful completion, the walker switches the PIE to "Validate" phase. When the same load context is encountered again, the prefetcher assigns a walker to perform the walk once more to validate that the code slice did not change. The PIE remains in validation phase for several encounters to ensure the code is stable and to identify constant/strided values (the strides themselves may be caused by code beyond the scope of the slice).
The prefetcher performs three validation rounds. Other values (up to seven) were tested, indicating that prime numbers work better, especially when no BHR context is used, as they may avoid some loop patterns from confusing the validation process. However, a lower value was chosen as overall performance benefits more from the speed of generating new slices than from the additional accuracy that may accompany longer validation.
After finishing all validation rounds the entry is switched to the "Trim" phase. The "Trim" phase is the only step allowed to change the PIE slice since it was first generated. It performs the same walk, but stops tracking sources when reaching constants or strides that were discovered during the validation passes and replaces them with a simple immediate move or add/sub. As a result, some branches of the data dependency flow may be removed from the PIE slice.
Another change performed during trimming is renaming the destinations to temporary registers to avoid any side effects. The walker performs a forward traversal over the constructed slice and converts each destination register to the next available temp register. The conversions are recorded in the temporary register map.
During the following traversal steps, all younger slice instructions rename any matching sources to read from the corresponding temporary register.
After trimming is done, the entry is switched to "Armed" state.
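A hedged sketch of the trim pass follows (the op encoding and constants map are invented). Producers of values found constant during validation are replaced with immediate moves, and every destination is renamed to the next temporary register; younger operations then read from the renamed temps. The real trim also drops the now-dead dependency branches, which this toy version skips:

```python
def trim(slice_ops, constants):
    """slice_ops: list of {"op", "dest", "srcs"}; constants: {reg: value}."""
    trimmed, temp_map, n = [], {}, 0
    for op in slice_ops:
        tmp = "t%d" % n
        n += 1
        if op["dest"] in constants:
            # Replace the whole producer with an immediate move.
            trimmed.append({"op": "mov_imm", "dest": tmp,
                            "imm": constants[op["dest"]]})
        else:
            trimmed.append({"op": op["op"], "dest": tmp,
                            "srcs": [temp_map.get(s, s) for s in op["srcs"]]})
        temp_map[op["dest"]] = tmp   # record the destination renaming
    return trimmed
```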
We assume that the walker FSM can handle up to 8 instructions per cycle without exceeding timing restrictions (based on similar existing mechanisms like branch recovery walks), so the full history walk should take up to 16 cycles. However, to ensure feasibility and allow larger history queue sizes, our evaluation assumes that a walk may take up to 64 cycles. Since the prefetcher may encounter additional loads during that time, it may use several parallel walkers, assigned to generation or validation phases based on availability.
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=1\columnwidth]{gen_example.PNG}}
\caption{Slice generation example. The history queue holds all instructions in the dynamic flow and is walked backwards from the triggering load. The sources bitmap is shown on the right during the walk (steps 1 through 5). After the walk we receive an intermediate PIE slice (6) including only the dependency chain (R9's multiply was dropped). We then populate the stride/const values during the validation steps (7). The final PIE slice shows the post-TRIM slice (8), in which RDX was discovered as constant and allowed eliminating the load fetching it.}
\label{fig:slice_gen2}
\end{figure}
Figure~\ref{fig:slice_gen2} shows an example of slice generation over code striding across an array that requires double dereference (since, e.g., it was passed by pointer). The dependency chain is discovered by walking the history queue as shown on the right hand side (removing the unrelated multiply operation but identifying all other operations as part of the dependency chain). The intermediate slice remains consistent during several validation phase iterations. During that phase the add operation is detected as a stride of one and the two middle loads are identified as constants.
Since RBX is constant, the Trim phase replaces it with a move and stops processing its dependencies (thereby also eliminating the RDX load). The final slice is therefore only three operations long.
\subsection{Slice injection}
Once a slice has been armed, each encounter with its load context (i.e., hitting the same IP while having the same branch history) triggers the PIE slice injection. The allocation stops immediately prior to the triggering load (thus preserving the same register roles and meaning as seen during construction). The slice operations are then injected in order, with no lingering side effects as the temporary registers used are not part of the architectural state. Any memory operation is allowed to lookup the TLBs and caches and, if needed, perform page walks and allocate line fill buffers. These accesses may, by themselves, act as prefetches.
The injected operations may be executed by the normal machine out-of-order resources.
However, this may incur a substantial cost to the actual program performance due to added stress over critical execution resources. Instead, an internal execution engine was added to perform the arithmetic operations without interfering with the normal core activity (other than stalling allocation). We evaluate both the shared-resources and the private-resources modes in Section~\ref{sec:evaluation}.
During the injection, sources marked as constants use the recorded constant value, but operations marked as having a stride are adjusted by having their stride value multiplied by a dynamic lookahead factor.
The dynamic lookahead is initialized for each slice to one, meaning that by default the prefetcher injects the PIE slice as-observed, with no extrapolation (thereby performing the address computation of the next iteration). However, if the PIE is eventually detected as non-timely (as explained in the next section) the lookahead factor will increase gradually up to a maximum of 64 (chosen to allow prefetching far enough ahead of time, but not too far as to exceed the cache lifetime). All strides within a slice are always multiplied by the same lookahead factor so that the ratios between strides are always kept as they were detected over a single iteration.
The final operation in the slice is a copy of the original load that the slice was constructed from, but since any strided sources were enhanced to apply a lookahead over their strides, the load address would belong to some future iteration. This becomes the final prefetch address and is sent to the memory unit as a prefetch. In parallel, it is also pushed to the \textit{prefetch queue} along with its predicting PIE-id for usefulness tracking.
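The injection semantics can be illustrated with a toy interpreter for an armed three-op slice like the one in the figure: an immediate move (a trimmed constant), a strided add scaled by the dynamic lookahead factor, and a copy of the original load whose address becomes the prefetch target. The encoding is our own invention:

```python
def run_slice(ops, lookahead):
    t = {}                                      # temporary register file
    for op in ops:
        if op["op"] == "mov_imm":
            t[op["dest"]] = op["imm"]
        elif op["op"] == "add_stride":          # stride * dynamic lookahead
            t[op["dest"]] = t[op["src"]] + op["stride"] * lookahead
        elif op["op"] == "prefetch_load":       # copy of the original load:
            return t[op["src"]]                 # its address is the prefetch
    return None

# With lookahead 1 the slice computes the next iteration's address;
# larger factors reach further ahead while keeping stride ratios intact.
```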
\subsection{Usefulness tracking}
\label{sec:usefulness}
The generated prefetches must be correct and timely (i.e., the address should be used later by a demand, and do so within a sufficiently short time period as to avoid being flushed from the cache). We solve both requirements by tracking the prefetches in the prefetch queue. Each demand address is checked against the queue to find the first (most recent) matching prefetch, and the entry is marked as hit. If the hit is within useful distance (determined by a reward function as in the context-RL prefetcher~\cite{context_pref}), the PIE receives confidence upgrade based on the reward score. On the other hand, if a prefetch entry reaches the end of the prefetch queue without ever being hit, it is considered useless.
The number of sent and useless prefetches is tracked in the PIE (both counters are right-shifted whenever either is about to overflow, in order to preserve their ratio). When a PIE goes below the usefulness threshold (we used 10\% in our experiments), it is reset, but allowed to regenerate the slice in case the current code flow has changed compared to when it was originally constructed. If a PIE is reset more than 25 times, it is considered a stale PIE, and its state becomes \textit{Disabled}, preventing it from reconstructing.
Another form of filtering is tracking recurring addresses. The last address emitted by each slice is saved, and if the slice generates it again multiple times in a row, the slice is reset due to low usefulness.
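This filter can be sketched as follows; the repeat limit of 4 is our assumption, as the text does not specify it:

```python
# Recurring-address filter: the last emitted address is saved, and a
# slice emitting it several times in a row is reset for low usefulness.
def repeated_address(last_addr, new_addr, repeats, limit=4):
    """Returns (updated repeat count, whether the slice should reset)."""
    repeats = repeats + 1 if new_addr == last_addr else 0
    return repeats, repeats >= limit
```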
\subsection{Dropping PIE slices}
Multiple issues could stop a slice construction process or reset an already constructed one.
Construction can be aborted due to the following reasons:
\begin{itemize}
\itemsep0em
\item Slice is inconsistent during validation.
This may indicate insufficient context length,
the code having no useful recurrence, or a complex control flow.
\item Timeout while waiting for another validation pass, which may indicate the load is not as hot as predicted.
\item Slice is too long (over 16 operations).
\item Complex instruction (e.g., one with non-trivial side effects).
\item Too many temporary registers (over 8) are needed.
\end{itemize}
Resets during slice construction are considered transient, meaning that a later attempt may still construct a useful slice.
Conversely, slices can also be reset during run-time.
If too many prefetches fall off the prefetch queue without ever being hit by demands, the slice may have failed capturing the semantic relation. The minimal usefulness ratio is configurable and by default a threshold of 10\% is used. The failure rate is tracked using 2 counters: Failures and sent-prefetches. Both counters saturate at 64, and both shift right by 1 whenever any of them reaches that limit. If, after the counters reach steady-state, the ratio between them drops below the usefulness threshold, the prefetcher resets the entry.
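The counter scheme can be modeled as below. The saturation limit of 64 and the 10\% threshold come from the text; the exact steady-state condition is our own reading:

```python
class UsefulnessTracker:
    """Failure/sent counters that saturate at 64 and halve together."""
    LIMIT = 64

    def __init__(self, min_useful=0.10):
        self.sent = 0
        self.failed = 0
        self.min_useful = min_useful

    def record(self, was_useful: bool) -> bool:
        """Track one prefetch outcome; returns True if the PIE should reset."""
        self.sent += 1
        if not was_useful:
            self.failed += 1
        if self.sent >= self.LIMIT or self.failed >= self.LIMIT:
            self.sent >>= 1          # shift both to preserve the ratio
            self.failed >>= 1
        useful_ratio = 1 - self.failed / max(self.sent, 1)
        steady = self.sent >= self.LIMIT // 2   # assumed steady-state test
        return steady and useful_ratio < self.min_useful
```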
Alternatively, if the same code slice produces the exact same address over and over, the slice is no longer considered meaningful.
This may occur when the history limit is reached during construction, leaving out a source that is actually being changed.
Aborting a slice at run-time provides information that triggering a prefetch on the initiating context might harm performance.
Therefore, the PIE array records these resets and keeps them
(unless the PIE is overridden by another load context). If too many run-time resets occur, the PIE switches to~\textit{disabled} state and no longer accepts re-construction attempts for that context.
\subsection{Prefetcher area}
The parameters used for the prefetcher are summarized in Table~\ref{table:pie_params}. For the sake of this paper, each stored micro-operation is assumed to be represented with 64 bits, including all data required for reproduction. The history queue entry also has to store the resulting 64-bit value (for const/stride detection), and therefore requires 128 bits. A 128-entry history queue thus requires 2kB.
Each PIE slice requires 16 operations, a context tag for indexing, a walker ID (used during generation and validation), and additional bits for state and reset tracking. The overall size is 140 Bytes. Since the PIEs are relatively large, the PIE array holds only 16 entries, with a total size of $\sim$2.25kB. This is sufficient for most of the applications since the number of slices presented in Figure~\ref{fig:slice_count} refers to the entire lifetime of the application, but at any given program phase only a few slices are actively used. This is demonstrated later in Section~\ref{sec:resources}.
Future work may find ways to reduce the size of each entry (for example by compressing identical operations, as some slices may share parts of their history). The storage size is therefore not a fundamental issue.
The slice generation FSM (walker) requires a source bitmap (32 bits), a memory renaming cache (16 entries with a 64b tag + 4b index each = 136 bytes), and a temporary registers map (40 bits). Each FSM also has a slice storage for the construction process, reaching a total of $\sim$280B. Having 2 parallel walkers would therefore require $\sim$0.6kB.
Power considerations are reviewed in Section~\ref{sec:evaluation}.
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{| l | l |}
\hline
History queue & 128 instructions, 2kB \\
\hline
BHR size & 24 bit (4b $\times$ 6 last branches)\\
\hline
Mem. renaming cache & 16 $\times$ (64 + 4) bits = 136B\\
\hline
Walkers & 2 $\times$ 280B = 0.6kB\\
\hline
PIE array size & 16, 2.25kB\\
\hline
Total size (kB) & 6kB \\
\hline
hot/flaky thresholds & 2 appearances / 1 miss\\
\hline
\end{tabular}
\vspace{2mm}
\caption{Prefetcher parameters}
\label{table:pie_params}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
This paper presents the semantic prefetcher. The prefetcher is designed to utilize the most generalized form of locality: \textit{semantic locality}, which is not limited to spatial or temporal artifacts of the memory address sequence, but instead attempts to extract the code slices responsible for invoking consequential memory accesses along the data structure traversal path or algorithmic flow. We then combine and manipulate these slices into forecast slices by utilizing locality artifacts within their code to create new functionality that can predict the memory behavior of future iterations.
While some existing prefetchers attempt to capture access patterns that belong to specific use-cases, whether spatio-temporal relations, temporal correlations, or even irregular and data-dependent patterns, there is currently no generalized technique that can capture all such cases.
The semantic prefetcher attempts to solve that by observing the code path directly, imitating any program functionality and extending it to create lookahead functionality.
Based on that technique, the semantic prefetcher can extend the coverage provided by existing prefetchers (including irregular ones) through inherently supporting complex address generation flows and multiple memory dereferences.
The semantic prefetcher provides a speedup of 24.5\% over SPEC-2006 and 16\% over SPEC-2017, exceeding gains from other prefetchers by over 2$\times$.
\section{Evaluation}
\label{sec:evaluation}
\begin{figure}[t]
\includegraphics[width=1.03\linewidth]{slice_coverage_spec06.PNG}
\\~
\includegraphics[width=1.03\linewidth]{slice_coverage_spec17.PNG}
\caption{Slice coverage over SPEC-2006/2017. The left Y-axis measures the normalized portion of each outcome of slice construction. The right Y-axis (and overlaid line) show the absolute count of successfully armed slices.}
\label{fig:coverage}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{context.PNG}
\caption{Number of slices failing validation per each context length. Longer context contributes to the consistency of slices (since iso-context slices are more likely to share the same locality behavior) }
\label{fig:context}
\end{figure}
\begin{figure}[ht]
\hspace{-5mm}
\includegraphics[width=1.05\linewidth]{resets.PNG}
\caption{Breakdown of reset causes.}
\label{fig:resets}
\end{figure}
\begin{figure}[ht]
\hspace{-5mm}
\includegraphics[width=1.03\linewidth]{speedup_spec06_min7.PNG}
\hspace{-5mm}
\includegraphics[width=1.0\linewidth]{speedup_spec17.PNG}
\caption{Semantic prefetcher speedup vs competition. Showing workloads with any prefetcher gaining over 7\% (average is over full SPEC Int/FP).}
\label{fig:speedup}
\vspace{-2mm}
\end{figure}
To evaluate the benefits of the semantic prefetcher, we first need to determine its ability to cover enough performance-critical loads within common workloads. Figure~\ref{fig:coverage} shows the coverage of different SPEC 2006/2017 workloads: each application shows the number of dynamic loads analyzed by the semantic prefetcher, and the break-down by analysis outcome. The \textit{Non-Flaky} component counts loads not deemed hot enough or not having enough misses to be interesting. The \textit{Timing} component shows slices failing during the validation period due to low rate of recurrence or hash conflicts.
The \textit{Non-Stable} component shows slices failing validation due to variability of code path (results are shown with zero context, increasing the context is shown later to reduce this component as the code paths are more consistent when compared across recurrences with the same branch history).
The remaining component at the top shows the number of armed slices per workload.
The absolute number of slice validation failures is shown in Figure~\ref{fig:context} for the 7 SPEC workloads that have the highest failure rate when using no context (all having over 40\% of their slices reset during slice generation). The number of failures is compared when using between 0 and 24 context bits (i.e., indexing loads based on the history of the last 0 to 6 branches and their resolutions). Adding the full context length reduced the failures by between 30\% (gcc) and 98\% (milc), indicating that recurrences with the same branch history are more likely to have consistent code slice behavior.
The overall breakdown of reset causes appears in Figure~\ref{fig:resets}. The first element is failures due to hash collisions: new dynamic loads matching the PIE index of an existing slice that is under construction, causing it to drop (armed slices are protected from overwrite and can only be reset due to low usefulness). The second element is variance in code flow during slice generation. The third is timeout during the construction: a slice that was not armed within 100k cycles is reset due to low relevance.
The last reset cause, Too-Many-Failures, is a run-time reset cause, occurring after a slice was validated and armed, as explained in Section~\ref{sec:usefulness}.
It should be noted that the various reset thresholds and parameters have shown very little sensitivity to tuning. This happens because most slice stabilization and resets occur during warmup and are therefore negligible on longer runs.
On the other hand, flakiness and usefulness parameters are more sensitive to tuning, and show better performance the more aggressively they are dialed (i.e., building and maintaining slices for more loads). However, optimizing with a higher cost of slice injection (especially without dedicated execution) would likely lead to more conservative thresholds.
The speedup of the semantic prefetcher (with 0 and 16 bits context) is shown in Figure~\ref{fig:speedup} across SPEC 2006/2017 benchmarks. Several competing prefetchers are also shown.
The overall gain is significantly higher on SPEC 2006 (24.5\%), mostly due to the lack of software prefetching by the older compiler, but the improvement exists also on SPEC 2017.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{uops_added.PNG}
\caption{Ratio of injected operations out of the overall, compared to speedup. Showing applications with speedup $\geq$ 15\%.}
\label{fig:uops_added}
\end{figure}
Slice injection also adds computation work. Figure~\ref{fig:uops_added} shows the fraction of injected instructions out of all instructions executed, compared with the performance gain. In most cases there is good correlation (i.e., the performance gain is proportional to the added work), but some applications with relatively simple slices are able to gain significantly more than their overhead. If we assume the prefetcher's steady-state power cost (disregarding slice generation) is equivalent to the added operations, then the power/performance score has on average 2.5$\times$ more IPC gain than power cost.
Finally, the semantic prefetcher also improves performance on multi-threaded runs. Figure~\ref{fig:mt_speedup} shows the speedup over SPEC-rate simulation (4 traces from the same application over 4 physical cores). In some cases prefetching provides a higher gain than on a single thread (in h264, for example).
There is no sharing of generated slices across physical cores, so the only gain comes from increasing the effective prefetching depth. On MT runs the system becomes more saturated and memory bandwidth usually becomes a more critical bottleneck compared to memory latency. This may reduce the efficiency of prefetching (or even the chances of issuing the prefetches). On the other hand, prefetches may also serve as cache hints that increase the lifetime of required lines, thereby reducing overall bandwidth and improving MT performance. On the highest MT gainer (libquantum), the prefetcher reduced L1 MPKI by 40\%.
\begin{figure}[]
\includegraphics[width=0.9\linewidth]{speedup_mt.PNG}
\caption{Semantic prefetcher speedup over MT SPEC06 workload combinations (4 cores).}
\label{fig:mt_speedup}
\vspace{-3mm}
\end{figure}
\subsection{Comparison with other prefetchers}
We compare the speedup of the semantic prefetcher with other prefetchers that take different approaches and provide different coverage. The semantic prefetcher wins over most SPEC workloads, scoring on average more than twice the speedup of the next best prefetcher. However, on some workloads the semantic prefetcher loses to one of the competing prefetchers.
In gemsFDTD, a simple stride prefetcher is able to gain almost 60\% speedup while the semantic prefetcher gains only $\sim$17\% at best. The reason for that is short nested loops, where the inner recurrence is too long to fit in the context history length, but too short to allow the chance for re-learning the inner loop on every outer iteration. This control flow gives an advantage to simple and fast prefetchers that need only a few loop recurrences to learn and trigger (the stride prefetcher can start issuing prefetches after the 3rd inner iteration). The semantic prefetcher can still learn the code pattern given sufficient context, and in fact begins to gain performance by covering at least some of the cases with a context of 32 bits and above, but such context length begins to stress the physical design and does not solve the general case where inner loops can be much longer. This can be solved by having the context support a compressed representation of loop patterns.
Another competing prefetcher that gains over the semantic prefetcher over some workloads is the Best-Offset prefetcher. BOP has several outliers in its favor, most notably zeusMP. Unlike GHB and other fast-learning prefetches, BOP also takes a while to discover the optimal depth, but once it does, it has a throttling effect where it eliminates unnecessary prefetches (at sub-optimal offsets). The gains in zeusMP (and to a lesser extent also in dealII, sphinx3 and cactusADM) are mostly through reduction of BW from excessive prefetching. For the same reason adding context to the semantic prefetcher also helps on zeusMP by eliminating badly constructed slices emitting some useless prefetches.
Finally, IMP presents an interesting point: within the SPEC applications it wins only on WRF (and only by 8\%), but the graph500 example had array dereferences that make IMP quite useful, except on the longest ones.
\subsection{Lookahead method}
The lookahead multiplier (the number of strides we prefetch ahead) plays a key role in the prefetcher speedup. However, when applying a fixed multiplier we noticed that different workloads favored different values, so finding the best setting required dynamic tuning. We implemented the following approaches:
\begin{itemize}
\item Constant lookahead: we set a constant value and always perform the lookahead according to it.
\item Hit-depth normalization: we measure the average depth of hits within the prefetch queue, which indicates the actual distance between a prefetch and the demand access using it. We then increase or decrease the lookahead value to normalize this hit depth to the desired range (if we hit too early, around the beginning of the prefetch queue, we need to extend our lookahead, and vice versa).
\end{itemize}
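The hit-depth normalization policy can be sketched as a simple controller. The target band and the doubling/halving steps are our own assumptions; the text only specifies the 1--64 lookahead range:

```python
def adjust_lookahead(lookahead, avg_hit_depth, queue_len):
    """Steer the lookahead multiplier toward a target hit-depth window."""
    early, late = 0.25 * queue_len, 0.75 * queue_len
    if avg_hit_depth < early:                 # hits near the queue head:
        lookahead = min(lookahead * 2, 64)    # prefetching too late, extend
    elif avg_hit_depth > late:                # hits near the tail: too early
        lookahead = max(lookahead // 2, 1)
    return lookahead
```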
The difference between the policies is shown in Figure~\ref{fig:lookaheads}. Lookahead 1 and 16 dynamically adjust the lookahead distance, starting from a multiplier of 1 or 16 iterations ahead (respectively) and increasing from there. The fixed lookahead policy (always 32 iterations ahead) has some minor gains in cases where it starts from a more effective prefetching depth while the dynamic policy takes a while to get there, but it is ultimately inferior on most runs, where the dynamic approach is more adaptable.
\subsection{Execution resources}
\label{sec:resources}
The semantic prefetcher uses a dedicated generic ALU as the execution engine of the PIE slices. However, if the slice execution latency becomes a critical factor, the area addition is too high, and the overall execution bandwidth is sufficient, we may choose to simply inject the slice code into the main out-of-order engine and let the existing resources do the work for us. Figure~\ref{fig:core_resources} shows the penalty of dropping the dedicated HW and sharing the core execution resources.
Another tradeoff is the number of walkers performing the slice generation and validation. Figure~\ref{fig:num_walkers} shows how many parallel walks (traversals over the history queue) and PIE entries (slices tracked for construction) are needed, using cumulative time histograms. Overall, cycles with any active walk consume only 0.5\% of the run-time (also indicating that the power consumption of the walk itself is negligible). The results indicate that two walkers are sufficient. In the same way, 16 PIE entries are enough to cover 99.4\% of the run.
\begin{figure}[]
\includegraphics[width=0.95\linewidth]{lookaheads.PNG}
\vspace{-2mm}
\caption{Gain with different lookahead policies.}
\label{fig:lookaheads}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=1\linewidth]{using_core_resources.PNG} %
\vspace{-6mm}
\caption{Using existing vs. dedicated ALU.}
\label{fig:core_resources}
\end{figure}
\vspace{-2mm}
\begin{figure}[]
\centering
\includegraphics[width=0.8\linewidth]{walkers_and_pies.PNG}
\caption{Amount of parallel PIEs and active walks required per cycle.}
\label{fig:num_walkers}
\end{figure}
\section{Introduction}
Existing prefetchers are designed to analyze the memory access stream and identify specific types of access patterns, ranging from sequential and strided ones to traversals over linked data structures.
Most of them target \emph{spatio-temporal locality} and \emph{temporal correlation} between addresses or address-space artifacts (e.g., address deltas), based on the observation that temporally or spatially adjacent accesses tend to repeat~\cite{PrimerHWPref}.
Many applications, however, make use of data structures and algorithms whose physical layout and data access patterns are not plainly observable in the memory address-space domain (e.g., linked lists, arrays of pointers, sparse graphs, cross-indexed tables) and require deeper analysis in order to understand the causal relations
between accesses to objects in memory. These causal relations
may involve complicated arithmetic computations or a chain of memory dereferences.
Such relations between accesses exhibit \textit{semantic locality}~\cite{context_pref}
if they represent consequential steps along a data structure or an algorithm. These steps are characterized by the existence of some program code flow that traverses from one data object to the next
\footnote{Spatio-temporal locality represents a specific case where the relations are purely arithmetic and can be detected through address comparison, but semantic locality encompasses all forms of algorithmic and data structural relations (such as proximity within linked data structures or connectivity in cross-indexed tables).}.
The set of all semantic relations within a program can be said to \textit{span} its data structures, describing all the steps that the program may employ to walk through them.
In this paper we argue that the semantic relations between memory accesses can be represented through the code segments (referred to as code slices) that generate the memory traversals.
The set of all slices effectively forms an abstract guide to the program's data layout, but we can further combine or extrapolate these flows to create \textit{forecast slices} with more complex ``lookahead'' semantics that can predict program behavior over longer periods.
Following our observation, we present the \textit{semantic prefetcher} that dynamically constructs and injects prefetching code for arbitrary memory traversals. The prefetcher analyzes program code at run-time, identifies the dependency chains forming all address calculations, and detects locality artifacts within that code based on contextual similarities. The prefetcher then generates compact and optimized forecast slices which are
code constructs that did not exist in the original program
and enhance the code to generate longer memory traversal steps
capable of reaching future iterations.
The semantic prefetcher generates the forecast slices using hardware-managed binary optimization. The slices are constructed to have no lingering side effects.
Once the prefetcher reaches sufficient confidence in their correctness and structural stability, it injects them at certain interception points to trigger prefetches.
The semantic prefetcher is fundamentally different from previous prefetchers that aim to reconstruct address relations or code sequences such as temporal-correlation prefetchers~\cite{GHB, SMS2, STMS, Domino} and runahead-based prefetchers~\cite{scouting, slice_proc, precompute, precompute2, runahead_ooo, continuous_runahead, runahead_smt}.
Unlike temporal correlation prefetchers, which detect correlations between addresses, the semantic prefetcher correlates program states (specific code locations with specific history and context) with the generated code slices.
Similarly, unlike runahead-based prefetchers that run the program (or its address generation code) in parallel to reach future iterations earlier
(but are ultimately constrained by finite out-of-order depths), the semantic prefetcher can peek into future iso-context iterations without having to execute everything in the middle.
The semantic prefetcher was implemented on an industrial-grade, cycle-accurate x86 simulator that represents a modern micro-architecture. It provides a 24\% IPC speedup on average over SPEC 2006 (outliers of up to 3.7$\times$), and 16\% on average over SPEC 2017 (outliers of up to 85\%).
Our contributions in this paper are as follows:
\begin{itemize}
\item
We present a novel scheme of prefetching using forecast slices. We utilize internal locality artifacts
to extrapolate the code slices and create new functional behavior with lookahead semantics.
\item
We present the design of the semantic prefetcher that injects forecast slices directly into the execution stream. We describe its architecture: flaky load detection, slice generation, binary optimization, and dynamic prefetching depth control.
\item
We demonstrate how the forecast slices can reproduce complex patterns prevalent in common applications, and show that these patterns are not addressed by existing prefetchers.
\item
We model the semantic prefetcher using a cycle-accurate simulator.
We show that it outperforms five competing state-of-the-art prefetchers, some of which target irregular access patterns.
\end{itemize}
The remainder of this paper is organized as follows: Section~\ref{sec:semantic} discusses semantic locality and its manifestation in forecast slices. Section~\ref{sec:arch} presents the semantic prefetcher and its architecture. Section~\ref{sec:methodology} explains the experimental methodology. Section~\ref{sec:evaluation} shows the evaluation results and discussion. Section~\ref{sec:related} describes related work. We conclude in Section~\ref{sec:conclusions}.
\section{Methodology}
\label{sec:methodology}
\begin{table}[t]
\scriptsize
\centering
\begin{tabular}{| p{50pt} | p{120pt} |}
\hline
Core type & OoO, 4-wide fetch, 3.2GHz \\
\hline
Queue sizes & 224 ROB, 97 RS, 180/168 int/FP regs,\\
& 72 load buffer, 56 store buffer \\
\hline
MSHRs & (estimated) 10 L1, 20 L2\\
\hline
L1 cache & 32kB Data + 32kB Code, 8 way, 2 cycles\\
\hline
L2 cache & 256kB, 4 ways\\
\hline
L3 cache & 4MB, 16 ways $\times$ 2 slices\\
\hline
Memory & LPDDR3, 2-channel, 8GB, 1600MHz\\
\hline
\multicolumn{2}{c}{Competing prefetchers} \\
\hline
GHB (all)~\cite{GHB} & GHB size: 2K, History length: 3 \\
& Prefetch degree: 3, Overall size: 32kB \\
\hline
SMS~\cite{SMS} & PHT size: 2K, AGT size: 32, Filter: 32 \\
& Regions size: 2kB, Overall size: 20kB \\
\hline
VLDP~\cite{VLDP} & 3 DPTs $\times$ 64 entries\\
\hline
Context RL~\cite{context_pref} & CST size: 2K entries $\times$ 4 links (18kB), \\
& Reducer: 16K entries (12kB) \\
\hline
\end{tabular}
\vspace{2mm}
\caption{Simulator parameters based on Skylake}
\label{table:params}
\end{table}
The semantic prefetcher was implemented in a proprietary cycle-accurate x86 simulator configured to match the Skylake micro-architecture~\cite{skylake} and validated against real hardware over a wide range of applications (showing an average error margin within 2\%). Table~\ref{table:params} specifies the parameters used. All prefetchers support L1 triggering and virtual addresses and can use TLBs/page walks when needed.
The prefetcher was tested over the SPEC 2006 and 2017 benchmark suites, compiled on Linux with ICC 14 and 16 (respectively). Each application had 5-8 different traces chosen by a SimPoint-equivalent tool based on workload characterization. The traces are weighted to represent overall application performance while measuring 5M instructions each (following a warmup of $\sim$1B memory accesses and $\sim$20-50M actual instructions).
To test multithreaded performance we also run combinations of workloads on different threads, although we do not implement the ability to share the learnings across threads.
For that purpose, we use combinations of traces from the same applications to measure SPEC-rate behavior (where several copies of the same application are being run independently). We run the traces with separate physical memory address ranges to avoid data collisions. We also offset the run phases by a few million instructions to ensure some heterogeneity.
\section{Related work}
\label{sec:related}
\subsection{Program semantics}
Multiple research efforts attempt to automate the analysis and understanding of software applications. \textit{Shape analysis}~\cite{shape_analysis} attempts to build a set of properties and rules representing the program's data sets in order to facilitate program verification (mostly of memory object allocation and management, bounds/coherence checking and functional correctness).
Newer approaches attempt to represent programs in abstract forms derived from their code and behavioral analysis~\cite{code_vectors,inst2vec}, in order to find similarities for code suggestion/completion, anti-plagiarism, or algorithm identification.
These approaches may be useful in high level dynamic tuning (adjusting HW properties such as the type of prefetcher used, or optimal aggressiveness), but they do not yet assist in the analysis of the access pattern or address generation.
\subsection{Using code slices}
Collecting and using slices of actual code has already been proposed for various purposes. Trace cache~\cite{trace_cache} is a form of constructing and efficiently caching selective code paths based on run-time analysis.
In the realm of branch prediction, Zilles et al.~\cite{slice_prediction} proposed using code slices to execute ahead the predicted code path to resolve hard-to-predict branches. A similar approach suggested by Peled et al.~\cite{FBPQ} is based on injecting code slices to resolve data-dependent branches by prefetching the data, using it to resolve the branch condition, and queuing the resolution for overriding the prediction.
This method was also proposed for memory latency mitigation. Carlson et al.~\cite{load_slice} proposed a similar mechanism that dynamically learns load dependency chains in order to expedite their execution. However, their approach was based on in-order cores, and was motivated to extract a small portion of the ILP available to full out-of-order cores by executing only load-address dependencies out of order.
Prefetching can also be achieved by executing actual code (or even just critical subsets of it) ahead of time as proposed by Mutlu and Hashemi et al. in their set of Runahead techniques~\cite{runahead_ooo,continuous_runahead, filtered_runahead}, and by Collins et al. in their speculative precomputation technique~\cite{precompute} (extended by Atta et al.~\cite{precompute2} to also include limited control flow). That work relied on continued execution of the same program context past memory stalls, and focused on managing a complicated speculative execution state for that purpose. It did not modify the executed code, although most approaches did filter out code not required for memory address calculation. It also performed no extrapolation, and thus was limited in range to what could fit in the enhanced out-of-order window.
Prefetching based on actual code slices can also be done by helper threads (Runahead flavor by Xekalakis et al.~\cite{runahead_smt} and slice-based processors by Moshovos et al.~\cite{slice_proc}). Similar decoupled approaches were also used for branch prediction by Chappell et al.~\cite{SSMT, SSMT2} and Farcy et al.~\cite{farcy1998dataflow}. However, this form is asynchronous with the progress of the main thread and cannot ensure a fixed (or even positive) prefetching distance.
Another form of expedited execution through other threads is \textit{hardware scouting}~\cite{scouting}, which is intended for highly multithreaded machines and uses other threads to run ahead of execution. However, this approach aims to optimize MT throughput rather than single-threaded performance.
\subsection{Prefetching techniques}
Falsafi and Wenisch classified prefetching techniques into the following groups~\cite{PrimerHWPref}:
\begin{itemize}
\itemsep0em
\item \textbf{Stream/stride prefetchers} utilize \textit{spatial locality}, by detecting constant stride patterns. The sandbox prefetcher~\cite{sandbox},
the best-offset prefetcher (BOP)~\cite{BOP},
and Access Map Pattern Matching (AMPM)~\cite{AMPM},
proposed various methods of testing different strides and choosing the optimal one, thereby covering complex flows through common recurrence deltas. Other prefetchers such as the variable length delta prefetcher (VLDP)~\cite{VLDP}
enhanced that ability to varying stride patterns.
\item \textbf{Address-correlating prefetchers} detect correlation within sequences of recurring accesses.
This form of locality has the ability to cover some semantic relations, but is ultimately limited to the storage capacity of correlated addresses.
Examples include the Markov predictor~\cite{MarkovPredictors}, the Global History Buffer Address-Correlation flavors (GHB/AC) \authorhide{by Nesbit and Smith}~\cite{GHB},
and prefetchers targeting linked data structures through partial memoization, such as that by Roth, Moshovos and Sohi~\cite{Roth98,Roth99}, and Bekerman et al.~\cite{Bekerman99}.
An extension of address-correlation is context-correlation~\cite{context_pref,nnpref}, which seeks to correlate a larger context vector with future addresses.
\item \textbf{Spatially-correlated prefetchers} use an extension of temporal locality that correlates between spatial patterns instead of absolute addresses. These prefetchers seek out recurring spatial patterns that are not part of a long consecutive sequence but may repeat locally, such as accesses to the same fields of a structure across different instances. Examples of this family are Spatial Memory Streaming (SMS)\authorhide{by Somogyi et al.}~\cite{SMS} and the DC flavors of GHB~\cite{GHB}.
\item \textbf{Irregular data pattern prefetchers} target specific data structures that do not have spatio-temporal locality. IMP\authorhide{by Yu et al.}~\cite{IMP} prefetches future elements within an array of indexes (A[B[i]]).
Other data-driven prefetchers include the Irregular Stream Buffer (ISB)\authorhide{by Jain and Lin}~\cite{ISB}, which restructures the dataset spatially.
Another form of irregular prefetching based on context is B-Fetch~\cite{BFetch} by Kadjo et al., which uses branch history to detect strides in registers used for address generation.
However, since it does not execute actual code, it is limited to simple register strides and cannot reconstruct complex value manipulations or see through memory dereferences.
\end{itemize}
\section{Extracting Semantic Locality from memory access patterns}
\label{sec:semantic}
Existing memory prefetchers scan the stream of memory accesses
and extract spatio-temporal correlations in order to identify patterns and predict future memory accesses~\cite{PrimerHWPref}. Some prefetchers~\cite{GHB, Baer} also associate memory accesses with program context (e.g., instruction pointer) to further refine their predictions.
However, basing predictions solely on the stream of memory accesses that the memory unit emits makes prefetchers oblivious to the underlying program code semantics.
Indeed, most existing prefetchers ignore the data and control flows that generate the memory access sequences they are meant to detect.
A small number of exceptional prefetchers
capable of detecting more elaborate or irregular relations focus only on specific access patterns such as indirect accesses (e.g., A[B[i]])~\cite{IMP} and linked data structures~\cite{Roth98,Roth99,Bekerman99}.
In this section we argue that a more fundamental form of locality can be extracted even when no spatio-temporal locality is present. \textit{Semantic locality}~\cite{context_pref, nnpref} correlates memory accesses through their dependency within the program's abstract data layout and usage flow, such as being adjacent steps on a data structure traversal path or being consequential steps in the execution of an algorithm. These accesses do not necessarily exhibit any spatio-temporal correlation.
While prior work attempted to approximate semantic locality through memoization and correlative program context cues,
we show that extracting this form of locality requires following the set of operations that constitutes the relation between two memory addresses.
To this end, we define a \textit{code slice} as the minimal subset of the dynamic code preceding a certain memory operation that is required to generate its memory address. Notably, this subset can be described through the data dependency chain that starts with the address calculation, and goes backwards through all relevant sources at each step.
As semantic locality usually describes program constructs such as data structures or algorithms, the relations it captures often have strong recurrence. Extracting that form of locality can therefore be achieved by analysis of the address-generation code slice between two recurring \textit{consequential} loads.
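The backward walk that recovers such an address-generation slice can be sketched in a few lines of Python. This is purely illustrative: the instruction encoding and register names below are assumptions for the example, not the hardware representation used by the prefetcher.

```python
# Illustrative sketch: walk a dynamic instruction trace backwards from a
# load, collecting only the operations that feed its address calculation.
def extract_slice(trace, load_idx):
    needed = set(trace[load_idx]["srcs"])   # registers the address depends on
    slice_idxs = [load_idx]
    for i in range(load_idx - 1, -1, -1):
        ins = trace[i]
        if ins["dest"] in needed:
            slice_idxs.append(i)
            needed.discard(ins["dest"])
            needed.update(ins["srcs"])      # keep following the chain backwards
    return sorted(slice_idxs)

# Toy trace for the pattern r3 = A[B[i]]; the mul is unrelated work.
trace = [
    {"op": "add",  "dest": "r1", "srcs": ["r1"]},   # i advances
    {"op": "mul",  "dest": "r9", "srcs": ["r8"]},   # not part of the slice
    {"op": "load", "dest": "r2", "srcs": ["r1"]},   # r2 = B[i]
    {"op": "load", "dest": "r3", "srcs": ["r2"]},   # r3 = A[r2]
]
print(extract_slice(trace, 3))   # -> [0, 2, 3]
```

Note that the unrelated multiply is excluded, which is what keeps slices compact even in dense code.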
The remainder of this section demonstrates how program introspection can generate short, explicit code slices that can be replayed to generate memory accesses, or manipulated to create forecast slices that generate future accesses at an arbitrary distance. These code slices can be injected into the code sequence at run time to issue memory accesses ahead of time. Finally, the memory access stream of typical programs is shown to be adequately covered by a small number of distinct code slices.
\begin{figure}[t]
\centering
\includegraphics[width=1.05\columnwidth]{graph500_ex3.PNG}
\caption{Critical BFS loop in graph500
showing a 4-level indirection. The top box shows the source code, the middle shows the pseudo code subset that comprises the slice. The bottom box shows the actual slice generated at run-time.}
\label{fig:graph500}
\end{figure}
\subsection{How code slices describe dependency chains}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{graph500_ex4.PNG}
\caption{Dynamic flow of graph500 (Figure~\ref{fig:graph500}) unrolled to show a possible lookahead prefetch based on the forecast slice. Changing the stride at the beginning of the slice can predict accesses on far iterations. Furthermore, iterations with similar contexts (such as branch history) will invoke their future counterparts.}
\label{fig:graph500_flat}
\end{figure}
Memory access patterns are often so tightly interleaved that it is difficult for a simple address scan to distinguish between them without understanding program semantics.
Figure~\ref{fig:graph500} demonstrates this over a breadth-first search (BFS) code taken from the Graph500 benchmark. The main \textit{while} loop traverses an array of vertices. An internal loop then scans each vertex's outgoing edges to find its neighboring vertices and check their BFS depth.
Notably, the top level access pattern (array ``vlist'') is sequential, but the deeper levels are accessed through data dependent patterns (the edge loop is also sequential but relatively short, making the first edge of each vertex the critical element). These accesses have no spatial locality and very little temporal reuse. Even contextual cues such as the program counter do not help in correlating the accesses.
However, the figure shows that the dependency chain within each iteration, whose accesses are increasingly critical to program performance, can be represented using a short code slice.
The use of the extracted code slice is demonstrated in Figure~\ref{fig:graph500_flat}. Thanks to the sequential nature of the top loop that exposes spatial locality within the first load in the slice, a simple change in the stride delta can create a forecast slice that predicts accesses in the next iterations at the top loop.
Overall, the example detailed in Figures~\ref{fig:graph500} and~\ref{fig:graph500_flat} demonstrates how code slices can represent the dependency chain of irregular data structures, and how these slices can generate lookahead semantics within the algorithm.
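To make the dependency chain concrete, the loop's critical loads can be written out schematically. The following is a Python reconstruction for illustration only; the array names follow the CSR-style layout sketched in the figure and are not the benchmark's exact code.

```python
# Schematic of the graph500 BFS indirection chain: each address depends on
# the value loaded in the previous step, so only the top-level index is
# strided. Array names (vlist, xoff, xadj, depth) are assumptions here.
def bfs_step(vlist, xoff, xadj, depth, i):
    v     = vlist[i]          # level 1: sequential over the vertex list
    first = xoff[v]           # level 2: edge-offset lookup, data dependent
    n     = xadj[first]       # level 3: first neighbour of v
    return depth[n]           # level 4: neighbour's BFS depth

# Four dependent loads per iteration; a forecast slice replaying this chain
# with i replaced by i + lookahead can fetch future iterations early.
vlist, xoff, depth = [2, 0, 1], [5, 9, 7, 0], [3, 4, 5]
xadj = [0] * 10
xadj[7] = 1
print(bfs_step(vlist, xoff, xadj, depth, 0))   # -> 4
```

Only the first step (`vlist[i]`) has spatial locality; the remaining three loads are reachable solely by executing the chain, which is why address-based prefetchers miss them.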
\subsection{Forecast slice creation}
Tracking all dependency chains for a given load would construct a graph of instructions that may span back to the beginning of the program. To generate concise and useful slices, history tracking is limited by breaking the dependency chain in the following cases:
\begin{itemize}
\item \emph{Constant values} remaining
static during analysis.
\item \emph{Strided values} that were shown to have a constant stride or are produced by a simple add/sub operation with constant or immediate sources.
\item When a loop wraps around to the same operation where the analysis started, the dependency chain can usually stop, as it would otherwise repeat itself. Linked data structures may be iterated a few times to create a deeper chain.
\end{itemize}
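A minimal sketch of the first two chain-breaking tests follows, assuming source values are tracked as short per-register value histories (an assumption for illustration; the hardware encoding differs). The third rule, stopping when the walk wraps back to its own starting operation, is a property of the walk itself rather than of a source value.

```python
# Classify a source register's recent value history to decide whether the
# backward dependency walk can stop at it.
def chain_breaker(history):
    if len(set(history)) == 1:
        return "constant"                       # static during analysis
    deltas = {b - a for a, b in zip(history, history[1:])}
    if len(deltas) == 1:
        return "strided"                        # constant stride detected
    return "follow"                             # keep walking the chain

print(chain_breaker([40, 40, 40]))      # -> constant
print(chain_breaker([8, 16, 24, 32]))   # -> strided
print(chain_breaker([3, 7, 4]))         # -> follow
```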
Before the code slice can be used to produce future accesses, it needs to be free of any side effects. The code is sanitized by performing two final steps: first, the destination registers are replaced with temporary ones (which are guaranteed not to be used by the original code) and their occurrences as sources within the slice are renamed accordingly. Second, all memory writes are eliminated from the slice. Since the code was generated through a dependency chain, all writes to memory were added to resolve younger loads reading from the same address. Therefore, a simple memory renaming may be performed to replace such store-load pairs with move operations to a reserved register. For the sake of simplicity, partial address overlaps are ignored.
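The two sanitization passes can be sketched as follows. The instruction encoding and the reserved temporary register names (`t0`, `t1`, `t_mem`) are assumptions for illustration; the load side of the memory renaming is elided to keep the sketch short.

```python
# Sanitize a slice: rename destinations to reserved temporaries so no
# architectural state is clobbered, and turn stores into moves to a
# reserved register (memory renaming) so the slice performs no writes.
def sanitize(slice_ops):
    rename, out, t = {}, [], 0
    for op in slice_ops:
        srcs = [rename.get(s, s) for s in op["srcs"]]
        if op["op"] == "store":                 # store feeding a later load
            out.append({"op": "mov", "dest": "t_mem", "srcs": srcs})
            continue                            # no real memory write remains
        dest = f"t{t}"; t += 1                  # reserved temporary register
        rename[op["dest"]] = dest
        out.append({"op": op["op"], "dest": dest, "srcs": srcs})
    return out

ops = [{"op": "add",  "dest": "r1", "srcs": ["r1"]},
       {"op": "load", "dest": "r2", "srcs": ["r1"]}]
print([o["dest"] for o in sanitize(ops)])   # -> ['t0', 't1']
```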
When the base slices are ready, they may be converted into forecast slices.
To this end, any detected stride is extended by the \textit{lookahead} factor: If a certain operation in the slice was detected to induce a stride of $N$, that stride is replaced by $L\times N$ where $L$ is the current lookahead applied for that slice. This lookahead variable is initialized to point a few iterations ahead, but its value dynamically changes to allow further lookahead based on the average hit depth for that slice. The hit depth is updated dynamically as explained in Section~\ref{sec:arch}.
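The stride-scaling step that turns a base slice into a forecast slice is simple to sketch. The op encoding below is an assumption; the transformation itself (stride $N$ replaced by $L\times N$) follows the text.

```python
# Convert a base slice into a forecast slice by scaling every detected
# stride immediate by the lookahead factor L.
def make_forecast(slice_ops, lookahead):
    out = []
    for op in slice_ops:
        op = dict(op)                           # do not mutate the base slice
        if op.get("stride") is not None:
            op["stride"] *= lookahead           # jump L iterations ahead
        out.append(op)
    return out

base = [{"op": "add",  "dest": "r1", "srcs": ["r1"], "stride": 8},
        {"op": "load", "dest": "r2", "srcs": ["r1"]}]
print(make_forecast(base, 4)[0]["stride"])   # -> 32
```

In the prefetcher, $L$ additionally adapts at run time based on the average hit depth, as described in Section~\ref{sec:arch}; the sketch above shows only the static scaling rule.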
\subsection{Data-structure spanning code slices}
Code slice generation is flexible and generalized enough to cover whole applications
efficiently, with a relatively small number of code slices.
Any given data structure has a set of operations that define all forms of traversals across it within a given program. We define this set of operations as \textit{spanning} the data structure. Some examples are shown in Figure~\ref{fig:data_structs}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\columnwidth]{data_structs.PNG}
\caption{Example data structures and some of their spanning slices in pseudo arithmetic.}
\label{fig:data_structs}
\vspace{-1mm}
\end{figure}
A linked list, for example, is spanned by the action of dereferencing its next elements pointer. A tree is spanned by the actions of descending from a node to any given child.
The semantic relation must capture all data structures required to complete any recurring traversal step.
\begin{figure}[t]
\centering
\includegraphics[width=1.03\columnwidth]{slice_count.PNG}
\caption{Number of unique slices sampled in SPEC 2006/2017. Collected over slices with at least 1k prefetches sent. Since any recurring load would attempt constructing a slice, this represents the number of slices required for coverage of all recurring loads.}
\label{fig:slice_count}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{slice_metrics_front.PNG}
\caption{Breakdown of slices in Figure~\ref{fig:slice_count} according to their load-chain depth and number of arithmetic operations, measured over SPEC 2006/2017. The front line marks the rough range covered by existing prefetchers.}
\label{fig:slice_metrics}
\end{figure}
We demonstrate the effectiveness of code slices in Figure~\ref{fig:slice_count}, which shows the number of unique slices needed to cover all accesses to the main data structures in the
SPEC 2006 and 2017 benchmarks (sampling methodology is explained in Section~\ref{sec:methodology}). The results were obtained by running the construction flow on each load that has a sufficient level of recurrence (recurring at least three times and passing seven validation phases to confirm that its slice is invariant). We filtered out low-usage slices (below 1k actual hits on a generated slice).
Figure~\ref{fig:slice_count} demonstrates the efficiency of memory access coverage of code slices. Specifically, 39 of the 46 benchmarks require only up to $\sim$100 slices to cover all recurring loads, and only one benchmark requires more than 300 slices. This indicates that a prefetcher constructing code slices can cover a large code base with reasonable storage requirements.
The average slice size is 6.6 operations, and the median is 3.5 operations.
Detecting semantic locality through code slices generalizes the existing paradigms of data locality. Figure~\ref{fig:slice_metrics} classifies the code slices (observed in Figure~\ref{fig:slice_count}) according to their memory dereference depth (longest dependent load chain within the slice) and the number of arithmetic operations. The circle sizes indicate the relative number of slices within each bucket.
The special case of (1,1) represents slices that have a single arithmetic operation and a single load based on it, which for the most part conform to the common stride pattern. Another interesting case is the (2,1) and (2,2) data points (two loads and one or two arithmetic operations), which includes most examples of arrays-of-arrays/pointers (A[B[i]] accesses): one index stride, one internal array reference, possible index/pointer math, and the outer array access. These are potentially covered by IMP~\cite{IMP} or similar dereference-based prefetchers such as jump pointers~\cite{Roth99}.
Notably, while the largest data points pertain to loads that are addressed by existing prefetchers (37\% of all loads in the figure fall under the stride pattern; 13\% are within the two simple single-dereference patterns), there are still many cases left outside that are not targeted by any existing prefetcher.
Semantic analysis can cover all these cases using the same mechanism, thereby generalizing existing prefetchers without having to manually design for each use-case.
\begin{abstract}
The United States is currently experiencing an unprecedented opioid crisis, and opioid overdose has become a leading cause of injury and death. Effective opioid addiction recovery calls for not only medical treatments, but also behavior therapy such as support from families, medical professionals and communities. In this paper, we study the behavior patterns of people with opioid use disorder (OUD) from social media, with the goal of learning more about characteristics of their addiction as well as drug abuse patterns. This study takes a multi-disciplinary perspective to characterize opioid addiction patterns by analyzing opioid groups from Reddit.com - including modeling online discussion topics, analyzing text co-occurrence and correlations, identifying emotional states of persons with OUD, and discovering social network interactions. We conclude that people with OUD have different degrees of motivation for recovery and varied subscription preferences. Also, people with OUD make positive comments on addiction treatment medicines, and people who relapsed show a range of similar emotions such as `joy' and `negative' before relapse. The quantitative analyses presented in this paper are of practical importance, since their findings can be incorporated into strategies for supporting practitioners working on opioid relapse prevention and addiction treatment.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} Opioid Crisis, Opioid Use Disorder, Addiction Patterns}
\end{abstract}
\section{Introduction}
Opioid overdoses are now causing more deaths than car crashes, prompting the current President to declare the opioid crisis a national public health emergency in October 2017. According to the latest statistics from the National Institute on Drug Abuse (NIDA), more than 115 Americans die after overdosing on opioids on a daily basis, and nearly 64,000 people died of drug overdoses in the US in 2016, the most lethal year of the drug overdose epidemic~\cite{holly2017drug}. Moreover, millions of Americans have been impacted by opioid-related problems. It is estimated that 2.1 million people suffer from substance use disorders related to prescription opioid pain relievers in the United States alone~\cite{drugabuse2017}. Additionally, the opioid crisis has social impacts beyond the increased death toll. Other consequences include a rise in the number of infants born dependent on opioids as well as a spread of infectious diseases such as HIV and hepatitis C. The status of the opioid crisis in the U.S. is shown in Figure~\ref{fig:map} and Figure~\ref{fig:overdoes_by_age} from different perspectives. As revealed in a recent paper published in Science~\cite{jalal2018changing}, the drug epidemic currently sweeping across the U.S. has deteriorated into a full-scale national pandemic, leading to national concern because of its negative impacts on health, social security and economics.
Current knowledge of the opioid crisis~\cite{jalal2018changing} has been mostly limited to scattered statistics that only reveal macro aspects of the crisis, such as the nationwide/state-level overdose death tolls and the overdose death toll in specific time periods, of specific races, or about specific drugs.
However, detailed analysis of user-specific knowledge, such as social media behavioral patterns associated with addiction and relapse, has not been studied. Therefore, to facilitate a better understanding of the opioid epidemic or even promote rehabilitation treatment, such individualized, user-specific analysis is urgently needed. This manuscript tackles this complex problem from a multi-disciplinary perspective that includes text analysis, topic modeling, sentiment analysis, social network analysis, and preference analysis.
\section{Opioid Communities on Social Media}
With social media growing profoundly and becoming widely intertwined with people's everyday lives, social media contains more and more biomedical information about users. As a result, social media represents an increasingly important data source for discovering biomedical knowledge that can have various applications. Reddit.com is an online community and social media platform, which allows users to form groups/subreddits to discuss a specific topic. Groups such as `OpioidRecovery' and `Opiates' aim to provide people suffering from opioid use disorder (OUD) with social and psychological support during the recovery process. This function spurs people with OUD to turn to social media for help, posting their various problems and confusion. In this way, social media has become a widely utilized form of social support, which is an important recovery resource that can buffer negative effects of stress on the quality of life of those with OUD~\cite{laudet2008socialsupport}. Indeed, many people with OUD-related problems who have managed to successfully abstain from opioid use are often willing to share their experience with others online. For instance, a user posted on Reddit that “I was the same way, I only got clean cuz I didn't wanna go back to jail tbh, and had to go cold turkey in rehab”. This study is based on data collected from subreddit groups such as ``opioid'', ``opioid recovery'', and ``drugs'' from Reddit.com in 2018.
\section{Users Classification}
One interesting finding is that different users have different degrees of willingness to stay clean, which is consistent with the literature~\cite{bradshaw2013readinessfactors}. More specifically, there are two types of people with OUD, as shown in Figure~\ref{fig:stats}: the first type includes those who are in recovery, or at least struggling to recover, whom we termed ``recovering OUD''. The other type includes those who are neither in recovery nor seeking recovery, whom we termed ``non-recovering OUD''.
We labeled each user by manually reading his/her post history in the dataset we collected, relying on self-reports. A user was labeled with OUD only if he/she admitted to an OUD problem, mentioned a fondness for drugs, or mentioned having been taking drugs; otherwise, the user was labeled Non-OUD. For instance, an OUD user posted ``Late Christmas Gift for myself 5x 40mg Oxycodone (Toroxycon)''. A user was labeled as recovering OUD if he/she was undergoing treatment, seeking treatment, or at least struggling to recover. For instance, a recovering OUD user posted ``what to say when returning to work from drug treatment''; otherwise, the user was labeled non-recovering OUD. Similarly, a recovering OUD user was labeled as relapsed if he/she used opioids again; otherwise, we assume the user has stayed clean. For instance, a relapsed user posted ``45 Days clean and now I'm drunk and high''. We manually labeled 3,157 users, and the structure of the labeled dataset is shown in Figure~\ref{fig:stats}. As we can see, 64.9\% of these Reddit users have OUD problems, and only 10.4\% of these people with OUD are actively seeking remission and/or recovery, while the other 89.6\% show no signs of seeking remission/recovery.
\section{Topic Modeling}
To discover the hidden semantic structures (i.e., ``topics'') that occur in the extensive collection of posts we collected from Reddit.com, we first built a topic model to analyze those posts~\cite{fei2005bayesian}. Given a document, a topic model expects particular words to appear in the document more or less frequently. For instance, by specifying two topics, we found the corresponding topics of ``people take opioids because of pain'' and ``people take dope and feels''. Those topics are indicated by the words ``people'' and ``pain'', and ``people'' and ``dope'', respectively, since they appear more frequently in those posts than other words.
Since posts typically concern numerous topics, we specified the existence of multiple topics to further explore any additional potential topics that might emerge. As shown in Figure~\ref{fig:lda_eight_topics}, we present the results of an eight-topic model. These topics are ``hard day, don't do opiates'', ``don't dope heroin'', ``people feel love to feel clean'', ``taking drugs or opioids for days'', ``people feel, heroin'', ``people pain, taking opiates or drugs'', ``for pain, don't do oxycodone'', and ``post about heroin'', respectively. In brief, the general topics include 1). attempts to dissuade people from taking highly addictive opioids (e.g., heroin and oxycodone); 2). information on how to stay clean; 3). how people feel when they stay clean for an extended period of time; and 4). advice about treatment and how to stay clean.
Even though topic modeling can reveal the general topics in certain social communities, individualized user-specific information remains hidden. To expose more details about user-specific patterns for people with OUD who utilize social media, we implemented text co-occurrence and correlation analysis to further uncover behavior patterns on social media.
\section{Text co-occurrence and Correlation}
Figure~\ref{fig:cor_big-grams} shows the word correlations with annotations. The word correlations and bigrams illustrate the social interactions between different users. We generated word correlation graphs to obtain a fuller picture of their conversations. A textual analysis was implemented to capture the relationships between words, with the objective of examining words that tend to immediately follow each other, or tend to co-occur within the same posts. Sentences were tokenized into consecutive sequences of words. By checking how often a word $X$ was followed by a word $Y$, the relationship between words is measured by the Phi coefficient, defined as
\begin{equation}
\phi = \frac{{{n_{11}}{n_{00}} - {n_{10}}{n_{01}}}}{{\sqrt {{n_{1.}}{n_{0.}}{n_{.0}}{n_{.1}}} }},
\label{equation:phi}
\end{equation}
where $n_{11}$ represents the number of posts in which both $X$ and $Y$ appear, $n_{00}$ the number in which neither appears, and $n_{10}$ (resp.\ $n_{01}$) the number in which $X$ (resp.\ $Y$) appears without the other; $n_{1.}$, $n_{0.}$, $n_{.1}$, and $n_{.0}$ are the corresponding marginal totals.
A pair of words with a higher Phi coefficient thus co-occurs more strongly than other word pairs do.
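As a minimal sketch (in Python, with hypothetical toy data; not the authors' implementation), the Phi coefficient above can be computed directly from the four contingency counts:

```python
from math import sqrt

def phi_coefficient(posts, x, y):
    """Phi coefficient for the word pair (x, y) over a collection of posts.

    Each post is an iterable of tokens; the toy data in the tests is
    hypothetical, not the study's corpus.
    """
    n11 = n10 = n01 = n00 = 0
    for post in posts:
        tokens = set(post)
        has_x, has_y = x in tokens, y in tokens
        if has_x and has_y:
            n11 += 1
        elif has_x:
            n10 += 1
        elif has_y:
            n01 += 1
        else:
            n00 += 1
    # Marginals: posts with/without x (n1., n0.) and with/without y (n.1, n.0).
    n1_, n0_ = n11 + n10, n01 + n00
    n_1, n_0 = n11 + n01, n10 + n00
    denom = sqrt(n1_ * n0_ * n_1 * n_0)
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0
```

Pairs that always appear together approach $\phi = 1$, while pairs that never co-occur take negative values.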
By gradually increasing the Phi-coefficient threshold, the words most likely to appear with each other were detected, as shown in Figure~\ref{fig:cor_big-grams}. After analyzing a large number of posts from Reddit, the relationships were visualized in Figure~\ref{fig:word_cluster}. By analyzing and annotating these keywords, some frequent topics are summarized as follows:
\begin{itemize}
\item Talking about detoxification processes and assisting medications such as `naloxone' and `bupe', in conjunction with `doctor' and `detox'. Participants of these discussion topics are users who are motivated and willing to find recovery from their addiction and manage to stay clean. Also, some of these users who have been clean for a longer period of time participate in those discussions and share their experiences about how to stay clean.
\item Describing pains and prescriptions from doctors, as indicated by keywords such as `pain', `doctor', and `painkiller'. These topics are consistent with the fact that opioids are often originally used as a prescribed painkiller. Some users do have bodily and/or chronic pain problems and concerns. Though opioids are an effective painkiller and have curative effects, after months or years of taking opioids, these users often develop an opioid dependency.
\item Sharing of withdrawal symptoms, which fall into two categories. One category involves mental/psychological symptoms such as anxiety, insomnia, and social isolation. The other involves physical symptoms, including headaches and dizziness. Both categories of withdrawal symptoms are very uncomfortable and may lead people with OUD to a compulsive relapse if they do not have proper guidance and support.
\item Talking about which opioids they take, how it feels, and the dosages they take. These topics are indicated by words such as ``high'', ``enjoy'', ``mg (the dosage of opioids)'', etc. For these topics, users often share their experiences about which opioids they are taking, have been taking, or used to take. But they try to be cautious about dosage because of the risk of death from overdose.
\end{itemize}
Also, since treatment is critical for people with OUD, figuring out how they evaluate treatment-assisting medications is very important. By specifying treatment-assisting medications such as `buprenorphine', `methadone', `naloxone', and `suboxone', we detected and filtered out the words associated with those medicines in these communities. Figure~\ref{fig:medicine_related_words} shows the top six words with the highest correlation coefficient for each medication. As shown in this figure, most of the words related to these medications are positive, representing positive feedback from those with OUD. Many dosage-related words, such as `dose' and `mgs', are also used. These words indicate that when they use medications, they seem to pay close attention to dosage, perhaps because high dosages may have lethal rather than curative effects.
\section{Emotion of people with OUD}
On Reddit.com, people with OUD are free to express their joy and sadness, trust and disgust, anticipation and anger in posts or comments. These sentiments, along with their other online behaviors, could disclose information about their remission/recovery status and thus serve as indicators for potential relapse intervention. We therefore sought to capture subtle sentiments of those with OUD, such as fear, joy, sadness, or anger. We then associated these sentiments with other behavioral patterns of people with OUD based on their social media posts. By studying posts from those with OUD as well as their interactions with other Reddit users, we can better understand their remission/recovery status and transform the sentiments and behavior patterns that emerge from the posts into possible indicators of the need for some type of intervention that can promote recovery.
A word-emotion association lexicon~\cite{mohammad2013crowdsourcing} was utilized to study the emotional content of each comment from a Reddit user. In the sentiment analysis, we focus on dominant rather than submerged emotions: a user may write more than one post or comment every day, and each post can carry complex emotion that is a combination of several basic emotions.
Ten categories of emotion are identified, as shown in Figure~\ref{fig:wordcloud}: anger, anticipation, disgust, fear, joy, sadness, surprise, trust, negative, and positive. Each of these categories is associated with an emotion count variable. Consistent with the addiction literature~\cite{harris2013processmodel,randles2013shamerelapse}, we found that people with OUD can be highly influenced by their emotions. Words that express emotions or feelings, such as ``feeling'', ``bad'', and ``high'', are used repeatedly. Among 670 persons with OUD who had relapsed, 72\% showed the predominant emotions of `negative' and `joy'. We conclude that relapse is highly related to more extreme emotions, regardless of valence, such as `negative' and `joy'.
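The lexicon-based counting described above can be sketched as follows (a Python illustration with a tiny stand-in lexicon; the actual word-emotion association lexicon is far larger, and the category names follow the ten listed above):

```python
from collections import Counter

# Tiny stand-in for the word-emotion association lexicon; the real
# lexicon maps thousands of words to the ten categories named above.
LEXICON = {
    "clean": {"joy", "positive"},
    "relapse": {"sadness", "fear", "negative"},
    "high": {"joy"},
    "bad": {"sadness", "negative"},
}

def dominant_emotions(comments, top_k=2):
    """Tally lexicon categories over a user's comments; return the dominant ones."""
    counts = Counter()
    for comment in comments:
        for token in comment.lower().split():
            counts.update(LEXICON.get(token, ()))
    return [emotion for emotion, _ in counts.most_common(top_k)]
```

With the stand-in lexicon, a comment mentioning both being clean and getting high would be dominated by `joy', matching the mixed-valence pattern noted above.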
\section{Social Network Interactions}
Users in the same discussion group on social media may influence each other, whether that influence is negative or positive. Figure~\ref{fig:network} shows the network of users built from the data collected from Reddit.com. Each dot in the network represents a user, and the labels represent user IDs. An edge between two nodes indicates an interaction between two users: for example, if a user comments on a post and another user replies, we connect those two users with an edge. If a comment is unilateral, there is no edge; that is, a comment with no reply forms a single node. The red dots represent posts or comments with no reply, and this type of discussion makes up 23\% of all the discussions. Different colors indicate the size variations of discussions. As shown in Figure~\ref{fig:network}, 79\% of the discussions have fewer than 10 participants. Less than 3\% of the discussions have more than 20 participants, and about 18\% have a group size between 10 and 19.
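The construction of per-discussion group sizes from the interaction records can be sketched as follows (Python, with hypothetical reply records; the record format is our assumption, and the real analysis parsed Reddit comment threads):

```python
from collections import defaultdict

def discussion_sizes(interactions):
    """Participant count per discussion from (post_id, commenter, replier) records.

    replier is None when a comment received no reply (an isolated node in the
    network). The record format here is an assumption for illustration.
    """
    participants = defaultdict(set)
    for post_id, commenter, replier in interactions:
        participants[post_id].add(commenter)
        if replier is not None:
            participants[post_id].add(replier)
    return {post_id: len(users) for post_id, users in participants.items()}
```

Bucketing the resulting sizes (fewer than 10, 10--19, more than 20) yields the distribution reported above.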
By analyzing the posts we collected from Reddit.com, we show in Figure~\ref{fig:subreddit} the subreddit subscription preferences of people with OUD who seem to stay clean vs.\ those who seem to have relapsed. For each user, we show the most frequently visited subreddits. For instance, the first user posted 580 posts in the subreddit `Opiates', which is 96\% of his/her total posts. An interesting discovery is that people who appear to stay clean have different subreddit subscription preferences compared to those who appear to have relapsed. Relapsed persons post more comments in the ``Opiates'' subreddit, while those seemingly staying clean engage more in the ``opiatesRecovery'' group. People who seem to stay clean may also join subreddits that may more effectively support remission and recovery, such as ``Christianity'', ``Quitingkratom'', etc. The findings we show in Figure~\ref{fig:subreddit} generalize to other users. Thus, we conclude that 1) people who appear to have stayed clean have different subscription preferences compared to those who appear to have relapsed; and 2) more specifically, those relapsing or continuing to use appear to focus more on subreddits such as `Opiates' and `Drugs', while those demonstrating more effective remission/recovery post more in subreddits such as `OpiatesRecovery'. This phenomenon may very likely be associated with the well-established varying stages of motivation or personal readiness for change~\cite{prochaska1992ttm}, which influence the mindset, attention, and associated behaviors of those seeking remission and/or recovery.
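The per-user subscription-preference computation can be sketched in a few lines (Python; the record format of (subreddit, text) pairs is our assumption for illustration):

```python
from collections import Counter

def subreddit_preferences(posts):
    """Share of a Redditor's posts per subreddit, from (subreddit, text) records."""
    counts = Counter(subreddit for subreddit, _ in posts)
    total = sum(counts.values())
    # most_common() orders subreddits by descending share.
    return {sub: n / total for sub, n in counts.most_common()}
```

Applying this to each labeled user produces the per-user bars shown in the figure, e.g.\ a user with 96\% of posts in `Opiates'.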
\section{Discussion}
The opioid crisis is one of the leading threats to public health in the U.S., and a thorough understanding of the social media behavior patterns that associate with remission/recovery outcomes can inform potential supportive and even treatment-related interventions. In this paper, we draw a series of conclusions by analyzing posts and comments collected from Reddit.com. First, only 10.4\% of those identifiable with OUD actively seek remission/recovery, while the other 89.6\% show no indication of seeking recovery. This suggests that the majority of those with OUD may have yet to progress through the motivational stages that yield a greater readiness for change~\cite{prochaska1992ttm}, and therefore only a few may be actively seeking help. Second, people with OUD who are seeking remission/recovery generally have positive judgments of medications such as Buprenorphine, Methadone, Naloxone, and Suboxone, and they pay close attention to dosage. This finding supports that medication-assisted treatment (MAT; the use of medications such as Buprenorphine and Methadone) may be helpful to those seeking recovery from OUD. Third, 72\% of those who sought recovery from OUD but appear to have relapsed showed both dominant `negative' and `joy' emotions. We therefore infer that people with OUD may experience a wide range of emotions and emotion shifts, and a lack of emotional support that aids in the regulation of emotion may put them at a higher risk of relapse. Fourth, 79\% of the interactions between people with OUD occur within small groups of less than 10 people; thus, small-group discussion is the main communication mode on Reddit.com for opioid-related subreddits. Fifth, people with OUD who seek remission and seem to stay clean have different subreddit subscription preferences compared to those who appear to have relapsed.
Thus, subscription preference may serve as a potential risk marker for relapse and may also be associated with and/or reflect people's readiness to change. In sum, the multi-disciplinary analyses conducted in this paper help disclose several behavioral patterns and characteristic features of persons with OUD, making it more possible to detect and identify precursors of relapse; this may in turn assist in the development and implementation of personalized OUD-based interventions and effective recovery support.
\section{Conflict of interest statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section{Author contribution statement}
All authors contributed to all aspects of the preparation and the writing of the manuscript.
\bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction}\label{sec:introduction}
Opioid overdoses are now causing more deaths than car crashes, prompting the current U.S. President to declare the opioid crisis a national public health emergency in October 2017. According to the latest statistics from the National Institute on Drug Abuse (NIDA), more than 115 Americans die daily after overdosing on opioids, and nearly 64,000 people died of drug overdoses in the US in 2016, the most lethal year of the drug overdose epidemic~\cite{cdc_drugoverdose}. Moreover, millions of Americans have been impacted by opioid-related problems. It is estimated that 2.1 million people suffer from substance use disorders related to prescription opioid pain relievers in the United States alone~\cite{drugabusegov}. Additionally, the opioid crisis has social impacts beyond an increased death toll. Other consequences include a rise in the number of infants born dependent on opioids as well as a spread of infectious diseases such as HIV and hepatitis C~\cite{yang2019asonam}. The status of the opioid crisis in the U.S. is shown in Figure~\ref{fig:map_stats} (a) and Figure~\ref{fig:map_stats} (b) from different perspectives. As revealed in a recent paper published in Science~\cite{jalal2018changing}, the drug epidemic currently sweeping across the U.S. has deteriorated into a full-scale national pandemic, leading to national concern because of its negative impacts on health, social security and economics.
\begin{figure*}[t]
\centering
\subfigure[]{
\includegraphics[width=0.45\linewidth]{figure/countyleveldeath_blue.jpg}
}
\subfigure[]{
\includegraphics[width=0.45\linewidth]{figure/overdose_death.jpg}
}
\caption{\textbf{(a) Drug overdose death toll by county in 2017~\cite{Death_rates}; (b) drug overdose death toll by drug type from 1996 to 2017 in the U.S.}}
\label{fig:map_stats}
\end{figure*}
Current knowledge of the opioid crisis has been mostly limited to scattered statistics that only reveal macro aspects of the crisis, such as nationwide/state-level overdose death tolls and overdose death tolls for certain time periods, races, and/or specific drugs. The crisis has led to a rapid increase in OUD treatment admissions, including a $53.5\%$ increase during 2013--2015, with adult treatment for heroin use more than doubling~\cite{Huhn2018TreatmentAdmission}. While treatment is important and leads to effective outcomes and greater health for some, the post-treatment relapse rate for substance use disorders is relatively high, ranging from $40$--$60\%$~\cite{McLellan2000ChronicIllness}, with potential relapse being difficult to predict. Therefore, research that can enhance relapse prediction using data from everyday, common situations is needed.
One such resource may be the behavioral data available from social media, a forum now used by $88\%$ of persons in the U.S. ages 18--29, $78\%$ of those ages 30--49, $64\%$ of those ages 50--64, and $37\%$ of those ages 65 and older~\cite{Smith2018SocialMediaUse}. Despite its popularity, detailed analyses of user-specific knowledge, such as the social media behavioral patterns that associate with substance use and relapse, have not been conducted. Therefore, to facilitate a better understanding of the opioid epidemic that may ultimately lead to better prediction of relapse, and in turn to responses and interventions that prevent relapse and promote remission, individualized user-specific analysis is urgently needed. This manuscript begins to tackle this complex problem from a multi-disciplinary perspective, including text analysis, topic modeling, and sentiment analysis.
\section{Data Analysis and Results}
\subsection{Opioid Communities on Social Media}
As social media grows ever more intertwined with people's everyday lives, it contains an increasing amount of biomedical and psychological information about its users. As a result, social media represents an increasingly important data source for discovering knowledge with a variety of applications. Reddit.com is an online community and social media platform that allows Redditors (registered users of Reddit.com) to form groups (subreddits) to discuss specific topics. Groups such as \textit{``r/OpioidRecovery''}, \textit{``r/Opiates''}, and \textit{``r/drugs''} aim to provide people suffering from OUD and seeking remission with psychological support. This function spurs people with OUD to turn to social media for help, posting about various problems and confusions. In this way, social media has become a widely utilized form of social support, an important recovery resource that can buffer the negative effects of stress on the quality of life of those with OUD~\cite{laudet2008socialsupport}. Indeed, many people with OUD-related problems who have managed to abstain from opioid use are often willing to share their experience with others online. For instance, a Redditor posted on Reddit that ``I was the same way, I only got clean cuz I didn't wanna go back to jail tbh, and had to go cold turkey in rehab''. This study is based on data collected in 2018 from subreddit groups on Reddit.com such as \textit{``r/opioid''}, \textit{``r/opioidrecovery''}, and \textit{``r/drugs''}.
\subsection{Dataset}
The dataset used in this paper was collected from Reddit.com, a social news aggregation, web content rating, and discussion website. We developed a web crawling tool using PRAW (Python Reddit API Wrapper) to collect data from three subreddits (subreddits are subsidiary threads or categories within the Reddit website): \textit{``r/opiates''}, \textit{``r/opiatesRecovery''}, and \textit{``r/drugs''}. In a subreddit, there is a series of posts, each consisting of a post topic and a collection of related comments.
The dataset consists of two parts. The first consists of 3,000 posts: the top 1,000 posts (collected in April 2018) from each subreddit, since 1,000 is the maximum number allowed by the PRAW API. This first part includes posts and the interactions within posts. After collecting the subreddit data, we extracted a list of the Redditor IDs of the participants. The second part of the dataset consists of personal data, which was collected based on the extracted Redditor list.
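The Redditor-ID extraction step can be sketched as follows (Python; this operates on already-collected records mocked as dictionaries, and the field name `author' is our assumption for illustration, not the exact PRAW attribute layout):

```python
def extract_redditor_ids(items):
    """Unique author IDs from collected posts/comments, in first-seen order.

    Each record is mocked as a dict; deleted accounts carry author=None and
    are skipped. The field name "author" is an assumption for illustration.
    """
    seen, ids = set(), []
    for item in items:
        author = item.get("author")
        if author and author not in seen:
            seen.add(author)
            ids.append(author)
    return ids
```

The resulting ID list then drives the second crawl of each Redditor's personal post history.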
Data labeling then involved manually reading the posts/comments and categorizing each Redditor into a corresponding group. With the intent of implementing binary classification, each Redditor was marked with exactly one label, and different labels were mutually exclusive, ensuring that each participant was assigned one and only one label.
More specifically, in the group classification process, we labeled a Redditor ID by manually reading his/her post history in the collected dataset. This labeling was done by several health science researchers, and labeling agreement among the researchers was required to reduce labeling errors that might otherwise have resulted from different labeling metrics. For the OUD classifier, we followed the DSM-5 criteria~\cite{tarrahi2015latent}, in which a Redditor is labeled with OUD if at least two of the listed criteria were met within the past 12 months. For instance, a Redditor was labeled with OUD if he/she admitted to often taking opioids in larger amounts and/or over a longer period than intended, and also mentioned a persistent desire or unsuccessful efforts to cut down or control use, or had strong cravings to use. If fewer than two symptoms were detected, a Redditor was classified into the ``Non-OUD'' group. Redditors who met at least two DSM-5 criteria but showed no intention of limiting or abstaining from use were classified into the ``OUD'' group. For instance, an OUD Redditor posted ``Late Christmas Gift for myself 5x 40mg Oxycodone (Toroxycon)''. A Redditor was classified into the ``positive recovering OUD'' group if he/she was attempting and/or struggling to seek treatments that reduced symptoms associated with previously met DSM-5 criteria in the last 12 months. For instance, a positive recovering OUD Redditor posted ``what to say when returning to work from drug treatment''. If there was no evidence that a Redditor was seeking treatment, he/she was classified into the ``non-positive recovering OUD'' group. Similarly, Redditors were classified into the ``OUD relapse'' group if they were first classified as in recovery but indicated that they had used opioids again (no matter how many times) in the fifty days preceding their latest post/comment. For instance, a relapsed Redditor posted ``45 Days clean and now I'm drunk and high''.
Otherwise, Redditors in recovery were assumed to have stayed clean.
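The two-criteria DSM-5 threshold and the recovery split can be expressed as a simple rule (a Python sketch; the criteria matching itself was done manually by the labelers, so both inputs below stand for hypothetical labeler judgments, not automated detections):

```python
def label_redditor(matched_criteria, seeking_treatment):
    """Apply the DSM-5 two-criteria threshold and the recovery split.

    matched_criteria: the set of DSM-5 criteria a labeler judged as met in
    the past 12 months; seeking_treatment: the labeler's judgment of whether
    the Redditor is seeking/undergoing treatment. Both are manual inputs.
    """
    if len(matched_criteria) < 2:
        return "Non-OUD"
    if seeking_treatment:
        return "positive recovering OUD"
    return "non-positive recovering OUD"
```

Encoding the rule this way makes the mutual exclusivity of the labels explicit: every combination of inputs maps to exactly one group.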
When classifying participants into the ``positive recovering OUD'' and ``non-positive recovering OUD'' groups, one interesting finding was that participants with OUD had varying degrees of willingness to stay clean, which is consistent with the literature~\cite{bradshaw2013readinessfactors}. That is, ``positive recovering OUD'' group members showed the intention of reducing symptoms associated with the DSM-5 criteria, while ``non-positive recovering OUD'' group members showed less intention of such symptom reduction.
In summary, we collected 1,000 Reddit posts each from \textit{``r/opiates''}, \textit{``r/opiatesRecovery''}, and \textit{``r/drugs''}. Utilizing the Redditor IDs extracted from these posts, we also collected the latest 1,000 comments/posts for each Redditor ID (the Reddit API restricts the number of posts that can be collected to 1,000), and thereby obtained a personal post dataset.
\subsection{Redditor Classification}
Redditor classification is based on text features. Given a set of Redditors $A=\{a_1, a_2, a_3, \dots\}$, each with a corresponding feature vector $F_j=\langle X_1,\dots,X_m\rangle$ extracted from text and a status label $Y_j \in \{0,1\}$, the Redditor classification task is to learn a function $f:F_j \to Y_j$ such that the task errors are minimized. Formally, we have
\begin{equation}
\arg \mathop {\min }\limits_\Psi \sum\limits_{j = 1}^{|A|} {\left\| {f({F_j}) - {Y_j}} \right\|_2},
\end{equation}
where $F_j=\langle X_1,\dots,X_m\rangle$ and $Y_j$ are the feature vector and label of Redditor $a_j$, respectively, and $\Psi$ denotes the model parameters learned from training.
Two classifiers were designed and implemented to filter out the research targets: an OUD classifier and a recovering classifier. The first differentiates between OUD and non-OUD, while the second, ``within-OUD'' classifier separates those who are in positive recovery from those who had not shown any evidence of being in a positive recovery process, denoted as ``positive recovering OUD'' and ``non-positive recovering OUD'' group members, respectively. For the OUD classifier, we employed a Support Vector Machine (SVM) that transformed posts/comments into term-vector features to determine whether participants had an OUD problem.
The dataset for this classifier included 1,000 Redditors (419 OUDs and 581 non-OUDs). Of the labeled dataset, $70\%$ of the samples were used for training, and the rest were used for testing.
Once the classifier had been trained, we applied it to the unlabeled dataset. The data of the identified OUDs were then fed into the next classifier to determine whether they were positive recovering OUDs or non-positive recovering OUDs. This second classifier was designed to identify those with OUD who had shown positive attitudes/acts toward seeking recovery. It took another four days to label 1,000 people for this task (375 positive recovering OUDs and 625 non-positive recovering OUDs). Of this labeled data, $70\%$ was used for training and the rest for testing. We show the performance of the two classifiers in Table~\ref{tab:classification}.
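For illustration, an equivalent term-vector SVM pipeline can be sketched in scikit-learn (a stand-in, not the authors' LibShortText setup; the toy texts and labels below are ours):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in data; the real training set held 1,000 labeled Redditors.
texts = [
    "late christmas gift for myself 40mg oxycodone",
    "scored some dope today feeling high",
    "what to say when returning to work from drug treatment",
    "30 days clean thanks to my detox doctor",
]
labels = ["OUD", "OUD", "recovering", "recovering"]

# Term vectors feed a linear SVM, mirroring the two-stage design above.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
pred = clf.predict(["got high on oxycodone again"])[0]
```

In the two-stage design, one such pipeline would first separate OUD from non-OUD, and a second would split the predicted OUDs into the recovering groups.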
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrr|rrrr}
\toprule
\multicolumn{5}{c|}{OUD Classif.} & \multicolumn{4}{c}{Recovering Classif.} \\
\midrule
& \multicolumn{1}{l}{Acc.} & \multicolumn{1}{l}{Rec. } & \multicolumn{1}{l}{Prec.} & \multicolumn{1}{l|}{F1} & \multicolumn{1}{l}{Acc.} & \multicolumn{1}{l}{Rec. } & \multicolumn{1}{l}{Prec.} & \multicolumn{1}{l}{F1} \\
LG. & 0.82 & 0.83 & 0.82 & 0.82 & 0.74 & 0.78 & 0.73 & 0.76 \\
KNN & 0.70 & 0.81 & 0.76 & 0.79 & 0.75 & 0.81 & 0.73 & 0.77 \\
SVM & \textbf{0.92} & \textbf{0.94} & \textbf{0.93} & \textbf{0.93} & \textbf{0.88} & \textbf{0.93} & \textbf{0.84} & \textbf{0.88} \\
\bottomrule
\end{tabular}%
\caption{\textbf{Classification results for Logistic Regression, KNN, and SVM.}}
\label{tab:classification}%
\end{table}%
The SVM classifier was implemented using the ``LibShortText'' library. The dataset was preprocessed by converting to lower case, removing punctuation and stop words, tokenizing, stemming, and lemmatizing. As shown in Table~\ref{tab:classification}, the SVM classifier outperforms logistic regression and KNN in both cases. Moreover, the accuracy of the first classifier was $0.917$, better than that of the second classifier, so the first classifier does not become a performance bottleneck when its output is fed into the second classifier.
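The preprocessing steps listed above, minus stemming and lemmatization (which would require an NLP library such as NLTK), can be sketched with the Python standard library (the stop-word list here is a tiny placeholder):

```python
import string

# Placeholder stop-word list; real pipelines use a full list (e.g. NLTK's).
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "i", "my"}

def preprocess(text):
    """Lower-case, strip punctuation, tokenize, and drop stop words.

    Stemming and lemmatization (also applied in the paper) are omitted
    from this stdlib-only sketch.
    """
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [tok for tok in text.split() if tok not in STOP_WORDS]
```

The resulting token lists are what the term-vector featurization consumes.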
We manually classified 3,157 people, with 64.9\% of them having OUD problems and 35.1\% being non-OUD. However, only 10.4\% of the people with OUD had shown that they were positively seeking remission and/or recovery, while the other 89.6\% showed no signs of seeking remission and/or recovery according to our dataset records. Of the recovering Redditors with OUD, 89.3\% relapsed within fifty days.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\linewidth]{figure/lda.jpg}
\caption{\textbf{Topic modeling results. The number of topics was set to eight, with eight top keywords per topic.}}
\label{fig:topic-map}
\end{figure*}
\subsection{Topic Modeling}
To discover the hidden semantic structures (i.e., ``topics'') that occur in the extensive collection of posts collected from Reddit.com, we first built a topic model to analyze those posts. Given a document, a topic model expects particular words to appear in the document more or less frequently. For instance, by specifying two topics, we found the corresponding topics of ``people take opioids because of pain'' and ``people take dope and feels''. Those topics are indicated by the words ``people'' and ``pain'', and ``people'' and ``dope'', respectively, since they appear more frequently in those posts than other words.
According to the Deveaud2014 metric~\cite{deveaud2014accurate}, the best number of topics lies in the range $[4, 11]$, where the model achieves its extremum. Since posts typically concern numerous topics, we specified multiple topics to further explore any additional topics that might emerge. As shown in Figure~\ref{fig:topic-map}, we present the results of an eight-topic model. These topics are ``hard day, don't do opiates'', ``don't dope heroin'', ``people feel love to feel clean'', ``taking drugs or opioids for days'', ``people feel, heroin'', ``people pain, taking opiates or drugs'', ``for pain, don't do oxycodone'', and ``post about heroin'', respectively. In brief, the general topics included 1) attempts to dissuade people from taking highly addictive opioids (e.g., heroin and oxycodone); 2) information on how to stay clean; 3) how people feel when they stay clean for an extended period; and 4) advice about treatment and how to stay clean.
Even though topic modeling reveals the general topics in certain social communities, individualized user-specific information remains hidden. To expose more details about user-specific patterns for people with OUD who utilize social media, we implemented text co-occurrence and correlation analysis to further uncover behavior patterns on social media.
\subsection{Text Co-occurrence and Correlation}
Figure~\ref{fig:word_corr_cluster} (a) shows the word correlations with annotations. The word correlations and bigrams illustrate the social interactions between different Redditors. We generated word correlation graphs to obtain a full picture of their conversations. A textual analysis was implemented to capture the relationships between words, examining words that tend to follow each other immediately, or tend to co-occur within the same posts. Sentences were tokenized into sequences of words. By checking how often a word $X$ was followed by a word $Y$, the relationship between words is measured by the Phi coefficient, defined as
\begin{equation}
\phi = \frac{{{n_{11}}{n_{00}} - {n_{10}}{n_{01}}}}{{\sqrt {{n_{1.}}{n_{0.}}{n_{.0}}{n_{.1}}} }},
\label{equation:phi}
\end{equation}
where $n_{11}$ represents the number of posts in which both $X$ and $Y$ appear, $n_{00}$ the number in which neither appears, and $n_{10}$ (resp.\ $n_{01}$) the number in which $X$ (resp.\ $Y$) appears without the other; $n_{1.}$, $n_{0.}$, $n_{.1}$, and $n_{.0}$ are the corresponding marginal totals.
In this case, a pair of words with higher phi coefficiency indicates that this word pair appears more frequently than other word pairs do.
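For concreteness, the phi coefficient of equation~\ref{equation:phi} can be evaluated directly from per-post word-presence counts. The following sketch (using hypothetical toy posts, not data from this study) builds the $2\times 2$ contingency table and its marginals:

```python
import math

def phi_coefficient(posts, x, y):
    """Phi coefficient for words x, y over a list of posts.

    Each post is a set of tokens. n11/n10/n01/n00 are the cells of the
    2x2 presence/absence table; n1., n0., n.1, n.0 are its marginals,
    as in the text.
    """
    n11 = sum(1 for p in posts if x in p and y in p)
    n10 = sum(1 for p in posts if x in p and y not in p)
    n01 = sum(1 for p in posts if x not in p and y in p)
    n00 = sum(1 for p in posts if x not in p and y not in p)
    n1_, n0_ = n11 + n10, n01 + n00          # marginals for x
    n_1, n_0 = n11 + n01, n10 + n00          # marginals for y
    denom = math.sqrt(n1_ * n0_ * n_1 * n_0)
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Hypothetical toy posts: 'clean' and 'detox' tend to co-occur.
posts = [{"clean", "detox"}, {"clean", "detox"}, {"clean"}, {"pain"}]
print(round(phi_coefficient(posts, "clean", "detox"), 4))  # 0.5774
```

Because the phi coefficient equals the Pearson correlation of the two binary presence indicators, it is symmetric in $X$ and $Y$.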
\begin{figure*}[t]
\centering
\subfigure[]{
\includegraphics[width= 0.99\textwidth]{figure/word_corr_words.jpg}
}
\subfigure[]{
\includegraphics[width= 0.3\textwidth]{figure/wordpairs150.jpg}
}
\subfigure[]{
\includegraphics[width= 0.3\textwidth]{figure/wordpairs200.jpg}
}
\subfigure[]{
\includegraphics[width= 0.3\textwidth]{figure/wordpairs250.jpg}
}
\caption{\textbf{(a): Word correlation with phi coefficient $\geq$ 0.2. (b): Word bigrams with co-occurrence count $\geq$ 150. (c): Word bigrams with co-occurrence count $\geq$ 200. (d): Word bigrams with co-occurrence count $\geq$ 250.}}
\label{fig:word_corr_cluster}
\end{figure*}
By gradually increasing the phi coefficient threshold, the words that are most likely to appear with each other were detected, as shown in Figure~\ref{fig:word_corr_cluster} (b), (c), and (d). After analyzing a large number of posts from Reddit, the relationships were visualized in Figure~\ref{fig:word_corr_cluster}. By analyzing and annotating these keywords, some frequent topics are summarized as follows:
\begin{itemize}
\item Talking about detoxification processes and assisting medications such as `naloxone' and `bupe', in conjunction with `doctor' and `detox'. Participants of these discussion topics are Redditors who are motivated and willing to find recovery from their addiction and manage to stay clean. Also, some of these Redditors who have been clean for a longer period participate in those discussions and share their experiences about how to stay clean.
\item Describing pains and prescriptions from doctors, as indicated by keywords such as `pain', `doctor', and `painkiller'. These topics are consistent with the fact that opioids are often originally used as a prescribed painkiller. Some Redditors do have bodily and/or chronic pain problems and concerns. Though opioids are an effective painkiller and have curative effects, after months or years of taking opioids, these Redditors often develop an opioid dependency.
\item Sharing of withdrawal symptoms, which fall into two categories. One category involves mental/psychological symptoms such as anxiety, insomnia, and social isolation. The other involves physical symptoms, including headaches and dizziness. Both categories of withdrawal symptoms are very uncomfortable and may lead people with OUD to a compulsive relapse if they do not have proper guidance and support.
\item Talking about what kinds of opioids they take, how it feels, and the dosage they take. These topics are indicated by words such as ``high'', ``enjoy'', ``mg (the dosage of opioids)'', etc. For these topics, Redditors often share their experiences about the kinds of opioids they are taking, have been taking, or used to take. But they try to be cautious about the dosage because of the risk of death caused by overdose.
\end{itemize}
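The bigram counts behind Figure~\ref{fig:word_corr_cluster}(b)-(d) reduce to counting adjacent token pairs across posts and keeping those above a threshold. A minimal sketch of this filtering step, using hypothetical tokenized posts rather than the actual Reddit corpus:

```python
from collections import Counter

def frequent_bigrams(token_lists, min_count):
    """Count adjacent word pairs across posts and keep those whose
    co-occurrence count reaches min_count."""
    counts = Counter()
    for tokens in token_lists:
        counts.update(zip(tokens, tokens[1:]))  # adjacent pairs only
    return {pair: c for pair, c in counts.items() if c >= min_count}

# Hypothetical tokenized posts.
posts = [
    ["stay", "clean", "today"],
    ["stay", "clean", "forever"],
    ["feel", "clean"],
]
print(frequent_bigrams(posts, 2))  # {('stay', 'clean'): 2}
```

Raising `min_count` (150, 200, 250 in the figure) progressively prunes the graph down to the most strongly linked word pairs.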
Also, since treatment is critical for people with OUD, figuring out how those with OUD evaluate treatment-assisting medications is very important. By specifying treatment-assisting medications such as `buprenorphine', `methadone', `naloxone', and `suboxone', we detected and extracted the words that were associated with those medications in these communities. In Figure~\ref{fig:medical} A, we visualize the word correlation on posts that are about treatments. Figure~\ref{fig:medical} B shows the top six words with the highest correlation coefficient for each medication. As shown in this figure, most of the words related to these medications are neutral, representing neutral feedback from those with OUD. For instance, Buprenorphine is associated with ``people'', ``months'', ``clean'', ``benzos'' and ``love''. Also, many words about dosage are used, such as `dose' and `mgs'. These words indicate that when they use medications, they seem to pay much attention to dosage, perhaps because high dosages may have lethal rather than curative effects.
\begin{figure*}[hbpt]
\centering
\subfigure[]{
\includegraphics[width=0.48\linewidth]{figure/word_cluster.jpg}
}
\subfigure[]{
\includegraphics[width=0.48\linewidth]{figure/MedicineRelatedWordsWithMedicineTweets.jpg}
}
\caption{\textbf{A: Word correlation (within the same posts) with correlation coefficient $\geq$ 0.17. B: Top words used when talking about opioid-related medications.}}
\label{fig:medical}
\end{figure*}
\subsection{Emotion of People with OUD}
Relapses for persons with OUD are associated with younger age, heavy use before treatment, a history of injecting, and not following up with aftercare~\cite{smyth2010lapse}. They are also frequently associated with extreme emotions and other stressors~\cite{elster2009strong}.
Specifically, people seeking remission or rehabilitation can experience extreme emotions that range from great highs to great lows, and sometimes they feel like their emotions are out of control. This is important as the well-accepted Self-Medication Hypothesis~\cite{Khantzian1985smh} proposes that people often use substances to self-medicate the negative and painful emotions associated with trauma, stress, and/or comorbid mental health disorders~\cite{Tronnier2015regulationsmh}. To explore emotions that are associated with relapse, an emotion analysis was implemented to show the observational results.
On Reddit.com, people with OUD are free to express their joy and sadness, trust and disgust, anticipation and anger, etc. in their posts or comments. These sentiments, along with their other online behaviors, could disclose information about their remission/recovery status and thus serve as indicators of relapse risk, and therefore also of the need for interventions that might assist with relapse prevention~\cite{yang2019sstd}. We therefore sought to capture subtle emotional sentiments from those with OUD and associate these sentiments with OUD-associated behaviors such as staying clean or relapsing. By studying Redditor posts as well as their interactions with other Redditors, it was hoped that the identified emotions could increase understanding of remission/recovery status. Such understanding may ultimately lead to transforming the sentiments and behavior patterns that emerge from Redditor posts into possible indicators of relapse risk and of the need for some type of intervention to prevent relapse.
A word-emotion association lexicon~\cite{mohammad2013crowdsourcing} was utilized to study the emotional content of each comment from a Redditor. In the sentiment analysis, we focused on dominant rather than submerged emotions. For instance, a post may carry a complex emotion that is a combination of several emotions. In this case, a comment is labeled with the emotion that has the highest score, as defined in equations~\ref{emotionCount}, \ref{emotionCountNorm} and \ref{emotionWord}.
In particular, suppose $L_i$ is the subset of the lexicon which contains the words associated with emotion $i$. The emotion count $emotion\_count_i$ and normalized count $n\_emotion\_count_i$ for emotion $i$ of a comment with words $word_j$ are:
\begin{equation}
\label{emotionCount}
emotion\_count_i = \sum\limits_j{word_{ij}}
\end{equation}
\begin{equation}
\label{emotionCountNorm}
n\_emotion\_count_i = \frac{emotion\_count_i}{\max\limits_k \left( emotion\_count_k \right)}
\end{equation}
where:
\begin{equation}
\label{emotionWord}
word_{ij} =
\begin{cases}
1 & \text{if $word_j$ in $L_i$} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
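Labeling a comment with its dominant emotion under these definitions amounts to counting lexicon hits per emotion, normalizing by the maximum count, and taking the argmax. The sketch below uses a hypothetical miniature lexicon in place of the full NRC word-emotion lexicon used in the analysis:

```python
# Hypothetical miniature lexicon: emotion i -> word subset L_i.
LEXICON = {
    "joy":     {"clean", "love", "happy"},
    "sadness": {"relapse", "pain", "alone"},
    "fear":    {"withdrawal", "overdose"},
}

def emotion_scores(tokens, lexicon):
    """Raw counts (emotion_count_i) normalized by their maximum
    (n_emotion_count_i) for one tokenized comment."""
    counts = {e: sum(1 for w in tokens if w in words)
              for e, words in lexicon.items()}
    top = max(counts.values()) or 1  # avoid division by zero
    return {e: c / top for e, c in counts.items()}

def dominant_emotion(tokens, lexicon):
    scores = emotion_scores(tokens, lexicon)
    return max(scores, key=scores.get)

comment = "so happy to be clean but the withdrawal pain is real".split()
print(dominant_emotion(comment, LEXICON))  # joy
```

Here ``joy'' wins with two lexicon hits (``happy'', ``clean'') against one each for ``sadness'' and ``fear''.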
\begin{figure*}[htp]
\centering
\includegraphics[width=0.9\linewidth]{figure/wordcloud.jpg}
\caption{\textbf{Word cloud with ten different emotions. The drug names such as Heroin, OxyContin and Vicodin are not the dominant words in the wordcloud, since people talk more about their feelings instead of the drug itself. Also, stopwords are removed from the wordcloud.}}
\label{fig:wordcloud}
\end{figure*}
Ten categories of emotion were identified, as shown in Figure~\ref{fig:wordcloud}: anger, anticipation, disgust, fear, joy, sadness, surprise, trust, negative, and positive. Also consistent with the addiction literature~\cite{randles2013shamerelapse}, we found that people with OUD can be highly influenced by their emotions. Words that express emotions or feelings, such as ``feeling'', ``bad'', and ``high'', are repeatedly used. Among the 670 persons with OUD who had relapsed, 72\% showed `negative' or `joy' as their predominant emotion. This observation shows that relapse is highly related to more extreme emotions such as `negative'- and `joy'-based emotions.
\section{Discussion}
The opioid crisis is one of the leading threats to public health in the U.S., and treatment admissions are therefore rising. Despite the positive benefits of treatment, many persons seeking remission relapse, and such relapse is difficult to predict. Nevertheless, a thorough understanding of the social media behavior patterns that associate with remission can promote potentially supportive and even relapse prevention-based interventions. In this paper, we conducted a series of experiments and observed significant opioid addiction patterns by analyzing posts and comments collected from Reddit.com. These analyses demonstrate 1) how information from social media posts/discussions can be used to more accurately predict potential relapse, 2) how discussions might be discovered around various topics that associate with use, treatment, and remission, and 3) how emotional/affective sentiments from these discussions may be used to predict relapse risk.
First, only 10.4\% of those identified with OUD actively seek remission/recovery, while the remaining 89.6\% show no indication of seeking recovery. This demonstrates the important need for the majority of those with OUD to progress through various motivational stages that may yield a greater readiness for change. Additionally, the machine-learning based classifiers used in this study showed high accuracy and precision in predicting group classification outcomes.
Second, people with OUD who are seeking remission/recovery generally have positive judgments regarding medications such as Buprenorphine, Methadone, Naloxone, and Suboxone, and they pay close attention to dosage. Thus, this finding supports that medication-assisted treatment (MAT; the use of medications such as Buprenorphine and Methadone) may be helpful to those seeking recovery from OUD. This is important as MAT is often a controversial topic and a widely underutilized resource in substance use disorder treatment~\cite{Knudsen2011MATImplementation}. Third, 72\% of those seeking recovery from OUD who appear to have relapsed showed dominant `negative' as well as `joy' based emotions that associated with relapse. Therefore, we infer that people with OUD may experience a wide range of emotions and emotional shifts, and that without support that aids emotional regulation they have a higher relapse risk.
In sum, these multi-disciplinary analyses conducted in this paper help disclose several behavioral patterns and characteristic features of persons with OUD, making it more possible to detect and identify precursors of relapse, increasing assessment and prediction of relapse risk. Ultimately, this may lead to the development and implementation of personalized OUD-based interventions that enhance relapse prevention and OUD remission.
\bibliographystyle{IEEEtran}
\section{Appendix}
\begin{table*}[htbp]
\centering
\begin{tabular}{|p{0.25em}ll|}
\toprule
\multicolumn{3}{|p{60.92em}|}{An opioid use disorder is defined as a problematic pattern of opioid use that leads to serious impairment or distress. Doctors use a specific set of criteria to determine if a person has a substance use problem. To be diagnosed with an opioid use disorder, a person must have 2 or more of the following symptoms within a 12-month period of time. } \\
\midrule
\multicolumn{3}{|p{60.92em}|}{\textbf{Loss of Control}} \\
\midrule
\multicolumn{1}{|l|}{1} & \multicolumn{1}{p{23em}|}{{Substance taken in larger amounts or for a longer time than intended}} & \multicolumn{1}{p{28em}|}{{I did not mean to start using so much.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{2}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Persistent desire or unsuccessful effort to cut down or control use of a substance}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I have tried to stop a few times before, but I start using this drug again every time.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{3}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Great deal of time spent obtaining, using, or recovering from substance use}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{Everything I do revolves around using this drug. (In severe cases, most/all of a person's daily activities may revolve around substance use.)}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{4}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Craving (a strong desire or urge) to use opioids}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I wanted to use so badly, I couldn't think of anything else.}} \\
\midrule
\multicolumn{3}{|l|}{\textbf{Social Problems}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{5}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Continued opioid use that causes failures to fulfill major obligations at work, school, or home}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I keep having trouble at work/have lost the trust of friends and family because of using this drug.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{6}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Continued opioid use despite causing recurrent social or personal problems}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I can not stop using, even though it's causing problems with my friends/family/boss/landlord.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{7}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Important social, occupational, or recreational activities are reduced because of opioid use}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I have stopped seeing my friends and family, and have given up my favorite hobby because of drugs.}} \\
\midrule
\multicolumn{1}{|l|}{\textbf{Risky Use}} & \multicolumn{1}{l|}{} & \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{8}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Recurrent opioid use in dangerous situations}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I keep doing things that I know are risky and dangerous to buy or use this drug.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{9}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Continued opioid use despite related physical or psychological problems}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I know that using this drug causes me to feel badly/messes with my mind, but I still use anyway.}} \\
\midrule
\multicolumn{3}{|p{35.92em}|}{\textbf{Pharmacological Problems}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{10}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Tolerance (the need to take higher doses of a drug to feel the same effects, or a reduced effect from the same amount)}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{I have to take more and more of the drug to feel the same high.}} \\
\midrule
\multicolumn{1}{|l|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{11}}} & \multicolumn{1}{p{23em}|}{\textcolor[rgb]{ .18, .176, .196}{Withdrawal (the experience of pain or other uncomfortable symptoms in the absence of a drug)}} & \multicolumn{1}{p{28em}|}{\textcolor[rgb]{ .18, .176, .196}{When I stop using the drug for a while, I am in a lot of pain.}} \\
\midrule
\multicolumn{3}{|p{35.92em}|}{\textcolor[rgb]{ .18, .176, .196}{\textbf{Summary: OUD: $N_{symptoms} \ge 2$, \; Non-OUD:$N_{symptoms} < 2$ }}} \\
\bottomrule
\end{tabular}%
\caption{Labeling criteria and text examples from DSM-5. A Redditor is labeled with OUD if at least two symptoms are observed. These criteria are used for labeling OUD vs. non-OUD, and positive-recovering OUD vs. non-positive-recovering OUD.}
\label{tab:addlabel}%
\end{table*}%
\end{document} |
It is well established on theoretical grounds that close
binary stars can have a significant impact on dynamical
evolution of globular clusters (Hut et al. 1992). At the same
time, the theory predicts that large numbers of white/red dwarf
binaries should form in the cores of globular clusters via
two-body tidal capture or three-body exchange capture (Fabian,
Pringle \& Rees 1975; Hut \& Paczy\'nski 1984). According to
expectations, some of these systems would be in orbits
sufficiently tight to become cataclysmic variables (CVs) at
some point in their evolution. For example, di~Stefano and
Rappaport (1994) estimate that there should be several thousand
CVs in the galactic globular clusters (GCs). Numerous ground-based
surveys for CVs in GCs yielded the identification of
surprisingly few objects (e.g. Shara et al. 1994). Besides the two
classical novae (Sawyer 1938; Hogg \& Wehlau 1964) observed in M14
and M80, a dwarf nova was detected in M5 (Oosterhoff 1941; Margon,
Downes \&Gunn 1981). Several dozen candidate CVs have been reported
over the last few years in observations collected with the {\it Chandra}
and {\it XMM-Newton} observatories (Grindlay et al. 2001; Gendre,
Barret \& Webb 2003) and with the {\it Hubble Space Telescope}
(e.g. Edmonds et al. 1999; Knigge et al. 2003). However, at present
there are just a few spectroscopically confirmed CVs in GCs
(Margon et al. 1981; Grindlay et al. 1995; Edmonds et al. 1999;
Knigge et al. 2003). Similarly, the list of objects with observed
dwarf nova type outburst is rather short: V101 in M5 (Shara,
Potter \& Moffat 1987), DN in NGC 6624 (Shara, Zurek \& Rich
1996), CV1 in M22 (Anderson, Cool \& King 2003), V1 in M15
(Charles et al. 2002), CV1 in M15 (Shara et al. 2004) and V2 in 47 Tuc
(Paresce \& de Marchi 1994).
We note parenthetically that at least 3 CVs have been identified so
far in open clusters, including one certain DN (Kaluzny et al.
1997; Gilliland et al. 1991). Recently
Mochejska et al. (2004) detected a very likely CV in the field of
the open cluster NGC~2158.
All 4 objects were discovered by
chance and one may expect the detection of more CVs, even in close
open clusters, if a systematic survey is undertaken.
We have started a systematic search for erupting dwarf novae in
GCs by taking advantage of the rich photometric data base collected
by the CASE collaboration. The CASE project is focused on
the determination of ages and distances to nearby GCs using
observations of detached eclipsing binaries (Kaluzny et al. 2005).
To identify suitable binaries we conducted an extensive
photometric survey of a dozen GCs. The data obtained during the
survey can be used to search for various types of variable objects
and, in particular, for optical transients in the fields of the clusters
being monitored. This paper presents results obtained for M55,
which was the first cluster searched by us for the presence of
possible dwarf novae.
\section{Observations}\label{s2}
The CASE project is conducted at Las Campanas Observatory. For the
survey we used the 1.0-m Swope telescope, equipped with a
$2048\times 3150$ pixel SITE3 CCD camera\footnote{The actual size
of the CCD is $2048\times 4096$ pixels, but rows above 3150 are
unusable for photometry.}. With a scale of 0.435 arcsec/pixel, the
usual field of view is about $14.8\times 23$~arcmin$^{2}$. However, a
large fraction of images for the M55 cluster were taken with a
smaller subraster covering a field of $14.8\times 11.6$~arcmin$^{2}$
(longer axis aligned in the E-W direction). In the present search for
CVs all of the analysed frames were trimmed to this smaller size.
The cluster core was located approximately at the centre of the
subraster field.
The cluster M55 was monitored during 8 observing seasons spanning
years 1997-2004. A total of 3795 images were taken through the $V$
filter, with exposure times ranging from 100s to 480s, while
exposure times for a further 313 images taken in the $B$ filter ranged from 100s
to 420s. The number of $V$-band frames taken per night ranged from
a few to 90. The median seeing
was 1.44 and 1.51 arcsec for the $V$ and $B$ bands,
respectively.
The cluster was also observed during the 1997 and 2003 seasons
with the TEK5 direct CCD camera attached to the 2.5-m du~Pont
telescope. The field of view was $8.8\times 8.8$~arcmin$^{2}$ at
a scale of 0.259 arcsec/pixel. The time series photometry
through $BV$ filters was obtained on a total of 6 nights in
May-June 1997 and on 3 nights in May 2003. In addition, exposures
with $UBVI_{\rm C}$ filters were collected on the
nights of 1997 June 2 and 2003 May 11. The journal of
observations used to extract $UBVI_{\rm C}$ photometry discussed
in Sec. \ref{f4} is listed in Table \ref{t1}.
\begin{table}
\centering
\begin{minipage}{200mm}
\caption{$UBVI_{\rm C}$ observations of M55.\label{t1}}
{\small
\begin{tabular}{lcrr}
\hline
Date & Filter & Exposure & Seeing \\
& & (s) & (arcsec)\\
\hline
1997 June 2 & V & 5$\times$ 45 & 0.75 \\
1997 June 2 & B & 6$\times$ 65 & 0.82 \\
1997 June 2 & V & 7$\times$ 35 & 0.73 \\
1997 June 2 & I & 3$\times$ 25 & 0.61 \\
1997 June 2 & U & 5$\times$ 360 & 0.81 \\
1997 June 2 & V & 8$\times$ 35 & 0.81 \\
2003 May 11 & V & 2 $\times$ 30 & 0.99 \\
2003 May 11 & I & 2 $\times$ 20 & 0.83 \\
2003 May 11 & U & 2 $\times$ 120 & 1.20 \\
2003 May 11 & B & 2 $\times$ 50 & 1.13 \\
2003 May 11 & V & 2 $\times$ 35 & 1.25 \\
\hline
\end{tabular}}
\end{minipage}
\end{table}
\section{Search for Dwarf Novae}\label{s3}
The search for possible DNe in M55 was conducted on the $V$ filter
images collected with the Swope telescope. We used a slightly
modified version of the \textsc{isis-2.1} image subtraction package (Alard
\& Lupton 1998; Alard 2000) to detect the variable objects and to
extract their photometry. Our procedure followed that described in
detail by Mochejska et al. (2002). A reference image was
constructed by combining 19 individual frames with $T_{exp}=120{\rm s}$
taken during dark time on the night of 2001 July 12/13. The
seeing for the resultant reference image was $FWHM=1.00$~arcsec
and the limiting magnitude corresponding to a $3\sigma$
detection level was $V\approx 23.0$.\footnote{This
limiting magnitude only applies to the least crowded part of the
analysed field.} Subsequently we selected the nights for which at
least two $V$ images with seeing better than 1.6 arcsec were
available. There were 145 such nights and for 113
of them it was possible to select at least 5 images fulfilling
the above condition. The data sets consisting of 2-5 images were
then combined to form an average image for each of the 145
nights. Use of the combined frames to search for erupting
variables is advantageous not only because of the S/N issue but
also because the combined images are free from defects caused by
cosmic rays. The combined images were remapped to the reference
image coordinate system and subtracted from the
point spread function (PSF) convolved reference image using programs
from the \textsc{isis} package. The resultant frames were searched
with \textsc{daophot} (Stetson 1987) for presence of any subtraction
residuals with stellar PSF profiles. We omitted from the search regions
which corresponded to the location of saturated stars or to
known variables (Clement et al. 2001; Pych et al. 2001; a more
extended list based on CASE results will be published elsewhere).
In addition, to avoid too many false alarms
we set a high detection threshold, at a total residual flux
equivalent to that of a constant star with $V=20.5$. In other
words, such a star would be marginally detected if it doubled its
flux. The apparent distance modulus of M55 is
$(m-M)_{V}=13.87$ (Harris 1996). At this distance our variability limit in terms of
excess flux corresponds to the constant flux produced by a star
of $M_{V}=6.6$.
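This conversion is simply the distance-modulus relation $M_V = V - (m-M)_V$; a quick check of the quoted numbers:

```python
def absolute_mag(apparent_mag, distance_modulus):
    """Distance-modulus relation: M = m - (m - M)."""
    return apparent_mag - distance_modulus

DM_M55 = 13.87                      # (m-M)_V for M55 (Harris 1996)
limit = absolute_mag(20.5, DM_M55)  # V = 20.5 variability threshold
print(round(limit, 1))  # 6.6
```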
Our analysis yielded the identification of just one certain erupting
object, which we shall call CV1. Its equatorial
coordinates are: $\alpha_{J2000}=19^{h}40^{m}08.^{s}59$,
$\delta_{J2000}=-30^{\degr} 58^{\arcmin} 51.^{\arcsec}1$. The
external accuracy of these coordinates is about 1.0\arcsec.
For further analysis, the $BV$ light curves of the variable were
extracted with \textsc{isis}, using individual images rather than the
combined frames. In the case of the $B$-band photometry, the
reference image was constructed by combining the 15 best
seeing exposures from the 1998 season. The reference
image has $FWHM=1.1$~arcsec and the limiting magnitude is
$B\approx 23.2$. The instrumental photometry was transformed to the
standard $BV$ system using a set of secondary standards present in
the M55 field (Stetson 2000). Specifically we used a total of 49
standards with $0.07<B-V<1.04$.
\section{Properties of CV1}\label{s4}
Our observations of the variable CV1 recorded 6 outbursts. The
combined light curve in the $V$-band is presented in Fig.~\ref{f1},
while Fig.~\ref{f2} shows two selected outburst light curves. On
individual out-of-outburst images collected with the 1.0-m Swope
telescope, the CV1 variable is, in fact, below or close to the
detection limit. In Figs. \ref{f1} and \ref{f2} the data points
corresponding to the out-of-outburst images are plotted assuming
$V=22$.
\begin{figure}
\begin{center}
\vspace{15.9cm}
\special{psfile=fig1.ps hoffset=-60 voffset= -215 vscale=100 hscale=100}
\end{center}
\caption{\label{f1} $V$-band light curve of CV1 for observing
seasons from 1997 (top) to 2004 (bottom). Each panel covers the
period May-September for a given year.}
\end{figure}
\begin{figure}
\begin{center}
\vspace{7.0cm}
\special{psfile=fig2.ps hoffset=-60 voffset=-500 vscale=100 hscale=100}
\end{center}
\caption{\label{f2} $V$-band light curve of CV1 covering outburst
observed in 1997 (top) and 1999 (bottom) seasons. }
\end{figure}
\begin{figure}
\begin{center}
\vspace{5.0cm}
\special{psfile=fig3.ps hoffset=0 voffset=0 vscale=39 hscale=39}
\end{center}
\caption{\label{f3} Finding charts for CV1 showing the variable in
quiescence (left) and in outburst (right). North is up, and east
is to the left. The field of view is $26~\arcsec$ on a side.}
\end{figure}
The variable in its low state is still detectable on the
reference images, but even then the exact measurement of its
magnitude is difficult because of crowding problems and relatively
poor sampling of the PSF. Photometry of the variable in quiescence
is much easier on the images collected with the du~Pont telescope.
We have used the data listed in Table 1 to measure magnitudes and
colours of CV1 in its low state (1997 June 2) and on the rising
branch of an outburst (2003 May 11).\footnote{Observations
collected with the du~Pont telescope show the variable at $V\approx
20.7$ on 2003 May 4 and at $V\approx 19.9$ on 2003 May 10.}
Figure \ref{f3} shows images of the field of the variable for
these two nights. The close companion visible south-east of CV1 is
located at an angular distance of 0.94~arcsec from the variable
and has $V=20.15$ and $B-V=0.465$. The following magnitude and
colours of the variable were derived from the images taken on June
2, 1997: $V=21.88\pm 0.06$, $B-V=0.63\pm 0.08$, $U-B=-0.83\pm
0.09$ and $V-I=1.18\pm 0.09$. For the night of May 11, 2003 we
obtained: $V=18.98\pm 0.02$, $B-V=0.13\pm 0.02$, $U-B=-0.66\pm
0.03$ and $V-I=0.26\pm 0.03$. The instrumental photometry was
transformed to the standard system using observations of
several Landolt (1992) fields. The 1$\sigma$ uncertainties quoted
only reflect internal errors and do not include the uncertainties of
the zero points of the photometry. To check the external errors
we have compared our photometry against standard stars visible
in the cluster field (Stetson 2000). For 28 stars found in common the
average differences for $V$, $B-V$ and $V-I$ amount to 0.022,
0.022 and 0.033 mag, respectively. No such comparison was possible
for the $U-B$ colour as no Stetson photometry is available for the
$U$ band. The data obtained with the Swope telescope give the
median colour of the variable during outburst as $B-V=0.12$.
\begin{figure*}
\begin{center}
\vspace{8.3cm}
\special{psfile=fig4.ps hoffset=-15 voffset=-55 vscale=90 hscale=90}
\end{center}
\caption{\label{f4} Location of CV1 on colour-magnitude diagrams
of M55, in quiescence ($V=21.8$; 1997 June 2), on the rise to
maximum ($V=19.0$; 2003 May 11) and at maximum ($V=17.9$; 1998 Aug
19).}
\end{figure*}
\begin{figure*}
\begin{center}
\vspace{8.0cm}
\special{psfile=fig5.ps hoffset=-15 voffset=-65 vscale=90 hscale=90}
\end{center}
\caption{\label{f5} Colour-magnitude diagrams of M55. Stars with $U-B<-0.6$
are marked with triangles. The open symbol corresponds to the variable M55-B1.}
\end{figure*}
The nature of the variability of CV1 along with its observed colours
indicates that it is a cataclysmic variable of dwarf nova type.
According to Warner (1976), most DNe at maximum outburst have
unreddened colours in the range $(B-V)_{0}=0.0\pm 0.10$ and
$(U-B)_{0}=-0.80\pm 0.15$. It appears that the colours displayed by CV1
during outburst fall within the above ranges, especially if we
take into account the reddening of M55, which amounts to $E(B-V)=0.08$
(Harris 1996).
In Fig. \ref{f4}, we show the location of CV1 in three different
luminosity stages on the colour-magnitude diagram of the cluster.
It is worth noting that in the low state the variable is located
close to the cluster main sequence on the $V/B-V$ and
$V/V-I$ planes, although it is still very blue in the $U-B$ colour.
There are several examples of DNe from GCs and open clusters which
show relatively red optical colours in quiescence (Kaluzny \&
Thompson 2003; Kaluzny et al. 1997; Anderson, Cool \& King 2003).
Hence, CV1 is hardly exceptional in that respect (see also Bruch \& Engel 1994).
Out of a total of 193 nights, CV1 was seen in outburst on 23 nights,
yielding an average duty cycle of $\rho= 23/193 = 0.119\pm0.025$. Due to
telescope scheduling and weather, the available nights were not distributed
randomly and tend to clump; however, both the numerator and denominator
should be affected in a similar way, without any systematic effect
on the duty cycle. The best-covered outburst lasted $t \approx 10$ days.
Each outburst was on average covered by 4, not necessarily consecutive, nights.
Hence we estimate that, to within 30 percent accuracy, the
above duration $t$ is typical for all outbursts. This yields the
average outburst cycle length $T = t/\rho \approx 84$ days with a
similar 30 percent error. These results might be used to deduce
the average accretion rate and evolution time of the binary, a
subject outside our present scope.
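The duty-cycle arithmetic can be verified directly from the quoted counts; here the quoted uncertainty on $\rho$ is assumed to be the Poisson counting error on the 23 outburst nights:

```python
import math

outburst_nights, total_nights = 23, 193
rho = outburst_nights / total_nights                 # duty cycle
rho_err = math.sqrt(outburst_nights) / total_nights  # Poisson error (assumed)
t = 10.0                                             # outburst duration [days]
cycle = t / rho                                      # mean outburst cycle length

print(f"rho = {rho:.3f} +/- {rho_err:.3f}")  # rho = 0.119 +/- 0.025
print(f"T ~ {cycle:.0f} days")               # T ~ 84 days
```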
We are not in a position to determine from our observations the total
number of close binary stars in the cluster core, an important
number for its dynamical evolution. However, we can try to
estimate the maximum number of DNe in the cluster which would still be consistent
with our observations. Let us assume that in our field there is
another DN with properties identical to CV1. For as small a duty
cycle as $\rho=0.12$, the number of its detected outbursts should, to a good
approximation, obey a Poisson distribution $P(r\leq k;\lambda)$, with the
mean number of outbursts $\lambda = 6$ taken from CV1. However, we
did not observe any outbursts from a star other than CV1 $(k=0)$, an
occurrence of probability $P(r=0; 6) = 0.0025$ or $2.4\sigma$
significance. As a conservative estimate, we assume
that a single outburst per star could be misinterpreted as an
artefact and overlooked; thus we obtain a probability $P(r\leq 1; 6)^2
= 0.0003$ for the presence of as many as 2
undetected DNe in our field. Therefore, the hypothesis that M55 contains as many as 3 DNe
including CV1 has to be rejected at the $3.7\sigma$ confidence level.
Furthermore, the peak outburst magnitude of CV1 is around $V=18.0$. However, in our
photometry, outbursts as faint as $V\approx19.8$ mag would stand out. Hence
we can be confident of our conclusion on the absence of as many as 3 DNe in the
cluster with outburst magnitudes $M_V<5.9$.
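The quoted Poisson probabilities can be checked directly. A minimal sketch using only the standard library (the helper name is ours):

```python
import math

def poisson_pmf(k, lam):
    """P(r = k) for a Poisson distribution with mean lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 6  # expected number of outbursts, scaled from CV1

# Probability that one hypothetical DN shows no outbursts at all
p_none = poisson_pmf(0, lam)                                   # ~0.0025

# Probability that two such DNe each show at most one (overlooked) outburst
p_two_hidden = (poisson_pmf(0, lam) + poisson_pmf(1, lam))**2  # ~0.0003
```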
The available data do not allow us to establish with confidence the
cluster membership status of CV1. However, we note that the range
of observed luminosities of the variable, $18<V<21.8$, is
consistent with the assumption that it is a dwarf nova belonging to
M55. The cluster has an apparent distance modulus $(m-M)_{V}=
13.87$ (Harris 1996) which yields a variability range in absolute
magnitudes of $4.1<M_{V}<7.9$ for CV1 under the assumption of cluster
membership. Such a range would be normal for a bright,
non-magnetic DN.
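The conversion is simply $M_V = V - (m-M)_V$. A one-line sketch with the Harris (1996) modulus (the function name is ours):

```python
dm_V = 13.87  # apparent distance modulus of M55 (Harris 1996)

def absolute_mag(V, dm=dm_V):
    """Absolute magnitude assuming cluster membership."""
    return V - dm

# Observed variability range of CV1
mv_bright = absolute_mag(18.0)  # ~4.1
mv_faint = absolute_mag(21.8)   # ~7.9
```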
The X-ray observations conducted with the $ROSAT$ PSPC detector
led to the detection of 18 sources in the field of M55 (Johnston,
Verbunt \& Hasinger 1996) of which one, namely object \#9, is a very
likely counterpart of CV1. For that object the {\it ROSAT Source
Catalog of Pointed Observations with the HRI} (ROSAT Team, 2000)
gives $\alpha_{2000}=19^{h} 40^{m} 08.4^{s}$ and $\delta_{2000}
=-30\degr 58\arcmin 52\arcsec$ with a positional error of 2~arcsec.
The optical coordinates of CV1 listed above fall within the error
circle of the $ROSAT$ source M55-\#9. The X-ray to optical
luminosity ratio $L_x/L_o \approx 0.3$ would be higher than
average for DNe but consistent with that for SS Cyg, assuming CV1
was in quiescence during $ROSAT$ observations.
\section{Blue stars in the cluster field}\label{s5}
There are several classes of CVs which do not show any pronounced
outbursts on a time-scale of years or even decades. For example, old
novae or AM CVn stars typically show variability on the level of a few tenths
of a magnitude in the optical domain and on time-scales ranging from
seconds to years. However, most CVs show very blue
$U-B$ and $U-V$ colours (Bruch \& Engel 1994). Hence, one may search for candidate
CVs in globular clusters by looking for blue stars located below
the horizontal branch on the $U-B/V$ or $U-V/V$ colour-magnitude diagram.
We have constructed such diagrams for M55 using the data collected with the
du~Pont telescope on the night of 1997 June 2. The photometry was extracted
from the following combined images: $U$~$5\times 360s$, $B$~$6\times 65s$,
$V$~$40\times 35s$, $V$~$5\times 10s$. The resultant colour-magnitude diagrams
are shown in Fig. \ref{f5}. We considered seven blue objects with measured $U-B<-0.6$
to be candidate CVs (CV1 was dropped from the list). Not one of the X-ray sources detected
in the M55 field by the $ROSAT$ PSPC detector (ROSAT Team, 2000) is located within
10 arcsec of any of these blue stars. Examination of time series photometry based on
the data from the Swope telescope allowed us to detect variability for only one of the candidates.
This object, which we call M55-B1, is seen at $V=20.40$ and $U-B=-0.65$ in
Fig. \ref{f5} and its equatorial coordinates are
$\alpha_{J2000}=19^{h} 39^{m} 49.81^{s}$,
$\delta_{J2000}=-30\degr 53\arcmin 19.6\arcsec$.
It exhibits season-to-season
changes of the mean $V$ luminosity at a level of a few tenths of a magnitude. The light
curve for the period 1997-2004 is shown in Fig. \ref{f6}.
In order to search for possible short term variability of blue star candidates for CVs
we have examined their time series photometry extracted with \textsc{isis} 2.1 (Alard 2000)
from the data collected with the du~Pont telescope in 1997. A total of 592
$V$ images with exposure times ranging from 30~s to 120~s were taken during the
period 1997 May 31 -- 1997 June 7.
Time series photometry was extracted from individual images as well as
from 64 combined images formed by averaging 5-10 consecutive frames.
We failed to detect any significant variability for any of the blue candidates marked
in Fig. \ref{f5}. In particular, the light curve of variable M55-B1
based on combined images
was flat, with $\langle V \rangle=20.40$ and an rms of 0.014 mag. We conclude that none of the selected blue
stars is a likely candidate for a CV. The blue colours and long time-scale, low-amplitude
variability of M55-B1 make it a likely candidate for a quasar.
We conclude this section by noting that a large fraction of the stars with $V>18.5$ visible
to the blue of or below the main sequence of M55 in Fig. \ref{f5} are members of the Sagittarius
dwarf galaxy, which is located in the background of the cluster (Mateo et al. 1996). It is
plausible that some of the blue stars considered above belong to the extended blue horizontal
branch of the Sagittarius dwarf.
\begin{figure}
\begin{center}
\vspace{5.0cm}
\special{psfile=fig6.ps hoffset=-60 voffset=-580 vscale=100 hscale=100}
\end{center}
\caption{\label{f6} $V$-band light curve of blue star M55-B1.}
\end{figure}
\section{Summary}\label{s6}
We report the results of a search for DN outbursts in an extensive
photometric survey of the globular cluster M55 obtained within the
scope of the CASE collaboration. Using a total of 3795 $V$-band images
taken over 8 seasons centred on the cluster core we were only able to
detect 6 outbursts, all from the newly discovered DN star
CV1. As we relied on a median combination of several images and on
subtraction of PSF-matched templates using the \textsc{isis} technique,
our survey is quite deep and sensitive down to the very crowded
cluster core. Our outburst statistics are consistent with the
absence of any further undetected DNe similar to CV1 in the
investigated field, and we reject at the $3.7\sigma$ confidence
level the hypothesis that there are 2 additional undetected DNe with
outbursts similar to CV1 or fainter, down to $M_V\approx 5.9$. While most
bright field DNe located at the distance of M55 would be detected
in our survey, we caution that DNe are in general a heterogeneous
class of objects and cluster CVs differ in metallicity, so that
some rather exotic faint DNe and/or with rare outbursts could
perhaps have escaped our scrutiny. However, the deficit of DNe in
globular clusters appears to be real. Generally, any dynamical evolutionary
scenario of cluster cores would thus have to address the issue of the slow
creation of DNe and/or destruction of the existing ones.
The outbursts of CV1 last about 10 days and recur every 84 days, on
average. Its position appears to coincide with a $ROSAT$ point
source. Although we do not offer proof of the cluster membership of CV1,
its characteristics -- position in quiescence on the
colour-magnitude diagram and X-ray flux -- are entirely consistent
with a fairly bright dwarf nova $M_{V}(min) = 7.9$ located at the
distance of M55.
In addition, we searched our deep multicolour photometry obtained
with the du~Pont telescope for the presence of any blue objects. Apart
from CV1, we discovered another blue object showing variability between
seasons with no evidence of short time scale variability. Its properties suggest
that it is quite likely to be a background quasar.
\section*{Acknowledgments}
PP was supported by the grant 1~P03D 024 26 from the
State Committee for Scientific Research, Poland. JK \& GS were
supported by the grant 1~P03D 024 26 from the
Ministry of Scientific Research and Informational Technology, Poland.
\section{Introduction}
The long-term orbital evolution of debris in geosynchronous earth orbit (GEO) has been studied extensively over the past 50 years \cite{allan1964, schildknecht2007, rosengren2019}. Lacking atmospheric drag and other natural de-orbit mechanisms, GEO debris will remain on orbit indefinitely \citep{rosengren2019}. On the other hand, less work has been done to understand the attitude dynamics of this debris. Many GEO debris objects are retired and otherwise defunct satellites and rocket bodies. The spin rates of these large debris objects are diverse and evolve over time \citep{papushev,cognion,earl,benson2018a}. Understanding their attitude evolution will benefit orbit prediction since attitude-dependent solar radiation pressure (SRP) is the largest non-gravitational perturbation at GEO. Also, spin rate knowledge for these large objects will help predict debris shedding. Most high area-to-mass ratio GEO debris is thought to be multi-layer insulation (MLI) from defunct satellites and rocket bodies \citep{liou}. Finally, given the growing debris population and the large cost to construct, launch, and operate GEO satellites, many organizations are developing active debris removal (ADR) and satellite servicing missions. To grapple and de-spin large, potentially non-cooperative satellites, spin state predictions are vital. With variable spin rates, forecasting windows of slow rotation will reduce collision risk as well as time and energy needed to de-spin. Also, understanding how end of life satellite configurations (e.g. solar array orientation) affect long-term spin state evolution is important. Improved knowledge could help inform decommission procedures to minimize post-disposal spin rates and variability, further facilitating ADR and servicing missions.
Leveraging studies of asteroid dynamics, Albuja et al.~\cite{albuja2015,albuja2018} investigated the influence of solar radiation and thermal re-emission torques on defunct satellite spin states. The combined influence of these torques on a body's spin state is called the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect \citep{rubincam}. Albuja et al.~\cite{albuja2018} found that the YORP effect could explain the observed spin rate evolution of the defunct GOES 8 and 10 weather satellites. The authors closely predicted the rapid observed spin down of GOES 8 in 2014 and its subsequent transition from uniform rotation to non-principal axis tumbling \cite{albuja2018}. Benson et al. \cite{benson2020b} found that solar array orientation greatly impacts YORP-driven uniform spin state evolution, consistent with the dramatically different observed evolution of GOES 8 and 10. This demonstrated the potential to dictate post-disposal spin state evolution with proper end of life configurations. Propagating the GOES dynamics into the tumbling regime, Benson et al.~\cite{benson2020b} found that the satellite's rotational angular momentum vector tends to track the time-varying sun direction. Further exploration has uncovered cyclic behavior where the satellite transitions repeatedly between uniform rotation and tumbling as well as tumbling period resonances. Additional work is needed to understand these behaviors. All study of tumbling YORP for defunct satellites has considered the full dynamics (i.e. Euler's equations of motion) \cite{albuja2018,benson2020b}. These equations are not amenable to long-term numerical propagation as they require short integration time steps to maintain solution accuracy. Furthermore, Euler's equations are expressed in terms of fast variables (i.e. attitude and angular velocity). Since we are interested in studying changes over long periods of time, slowly varying osculating elements (e.g. 
the rotational angular momentum vector and kinetic energy) are more appropriate. This is directly comparable to orbital dynamics, where the averaged Lagrange and Gauss planetary equations, written in terms of osculating orbital elements, have been used extensively to study long-term orbital evolution \cite{vallado}. This success motivates development of analogous tumbling-averaged dynamical equations for osculating rotational elements, namely the rotational angular momentum vector and kinetic energy.
A number of authors have investigated spin-averaged attitude dynamics models. Albuja et al. \cite{albuja2015} extended the uniform spin-averaged asteroidal YORP work of Scheeres \cite{scheeres2007} to defunct satellites. These models are not applicable to tumbling satellites as the motion is driven by two generally incommensurate periods rather than one for the uniform case \citep{sa1991}. Several tumbling-averaged YORP models have been developed for asteroids \citep{cicalo,breiter2011}. These asteroidal models average over the spin state and heliocentric orbit given the slow spin state evolution. Orbit averaging is not appropriate for defunct satellites due to the possibility for angular momentum sun-tracking. Also, these models only account for diffuse reflections which is insufficient for defunct satellites since many surfaces are dominated by specular reflections.
In this paper we develop a fast, semi-analytical tumbling-averaged attitude dynamics model that accounts for specular and diffuse reflections as well as absorption and instantaneous thermal re-emission of solar radiation. To allow for analytical averaging, we approximate the facet illumination function with its second order Fourier series expansion. For the time-being, we neglect all other perturbations including gravitational/magnetic torques and internal energy dissipation. First we describe relevant frames, dynamics, and the radiation torque equations in Section II. In Section III, we illustrate the YORP-driven tumbling behavior of the full model. Motivated by these results, we then derive the semi-analytical averaged dynamics in Section IV, leaving details for the appendices. Here, we also validate and explore the averaged model. We finish by discussing implications of the findings and providing conclusions.
\section{Preliminaries}
\subsection{Frames}
For this paper we will assume the satellite is in a circular heliocentric orbit at 1 astronomical unit (AU), neglecting its much smaller earth orbit. This approximation was validated by Albuja et al. \cite{albuja2018} for the GOES 8 and 10 satellites. The rotating orbit frame is denoted by $\mathcal{O}$:$\{\bm{\hat{X}}$,$\bm{\hat{Y}}$,$\bm{\hat{Z}}\}$. This frame is centered at the satellite with $\bm{\hat{X}}$ along the orbit angular momentum direction, $\bm{\hat{Z}}$ pointed towards the sun, and $\bm{\hat{Y}}$ in the orbital velocity direction (see Figure~\ref{fig:frames}a). The angular velocity of $\mathcal{O}$ with respect to the inertial frame $\mathcal{N}$ is $\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}=n\bm{\hat{X}}$ where $n$ is the heliocentric mean motion. The next frame is the angular momentum frame $\mathcal{H}$:$\{\bm{\hat{x}}$,$\bm{\hat{y}}$,$\bm{\hat{z}}\}$. Here $\bm{\hat{z}}$ is along the satellite's rotational angular momentum vector $\bm{H}$. Rotation from $\mathcal{O}$ to $\mathcal{H}$ is given by the rotation matrix $HO=R_2(\beta)R_3(\alpha)$. $R_i$ denotes a principal rotation about the $i$th axis \citep{schaub}. Consulting Figure~\ref{fig:frames}a, the "clocking" angle $\alpha$ and "coning" angle $\beta$ are the spherical coordinates of $\bm{\hat{H}}$ in the $\mathcal{O}$ frame.
The final relevant frame is the satellite body frame $\mathcal{B}$:$\{\bm{\hat{b}}_1$,$\bm{\hat{b}}_2$,$\bm{\hat{b}}_3\}$. Rotation from $\mathcal{H}$ to $\mathcal{B}$, shown in Figure~\ref{fig:frames}b, is given by (3-1-3) ($\phi$-$\theta$-$\psi$) Euler angles with the rotation matrix $BH$ \cite{schaub},
\small
\begin{singlespace}
\begin{equation}
BH =
\begin{bmatrix}
\cos\phi\cos\psi - \cos\theta\sin\phi\sin\psi & \cos\psi\sin\phi + \cos\phi\cos\theta\sin\psi & \sin\psi\sin\theta \\
- \cos\phi\sin\psi - \cos\psi\cos\theta\sin\phi & \cos\phi\cos\psi\cos\theta - \sin\phi\sin\psi & \cos\psi\sin\theta\\
\sin\phi\sin\theta & -\cos\phi\sin\theta & \cos\theta
\end{bmatrix}
=
\begin{bmatrix}
a_{x1} & a_{y1} & a_{z1} \\
a_{x2} & a_{y2} & a_{z2} \\
a_{x3} & a_{y3} & a_{z3} \\
\end{bmatrix}
\label{eq:BH}
\end{equation}
\end{singlespace}
\normalsize
So an arbitrary vector $\bm{f}$ is expressed in the $\mathcal{H}$ frame, in matrix form, as,
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
f_x \\
f_y \\
f_z \\
\end{bmatrix}
=
\begin{bmatrix}
a_{x1} & a_{x2} & a_{x3} \\
a_{y1} & a_{y2} & a_{y3} \\
a_{z1} & a_{z2} & a_{z3} \\
\end{bmatrix}
\begin{bmatrix}
f_1 \\
f_2 \\
f_3 \\
\end{bmatrix}
\label{eq:Hvec}
\end{equation}
\end{singlespace}
where $f_1$, $f_2$, and $f_3$ are the $\mathcal{B}$ frame components.
\begin{figure}[h]
\centering
\subcaptionbox{$\mathcal{O}$ and $\mathcal{H}$ frames}{\includegraphics[width=3in]{orbit_frame_xy.pdf}}
\subcaptionbox{$\mathcal{H}$ and $\mathcal{B}$ frames}{\includegraphics[width=2in]{goes_313_euler_angles.pdf}}
\caption{Relevant frames and rotations.}
\label{fig:frames}
\end{figure}
\subsection{Osculating Elements}
Given the sun-tracking behavior observed in the full dynamical simulations, we are interested in developing our equations in the rotating $\mathcal{O}$ frame. Using the transport theorem, a method to calculate time derivatives in a rotating frame \citep{schaub}, we find the time derivative of $\bm{H}$ with respect to the $\mathcal{O}$ frame,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=-\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}\times\bm{H}+\bm{M}
\label{eq:Horbdot}
\end{equation}
where $\bm{M}=\dot{\bm{H}}$ is the net external torque. Then, expressing $\bm{H}$ in the $\mathcal{O}$ frame, we have
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
H_X\\
H_Y\\
H_Z\\
\end{bmatrix}=
\begin{bmatrix}
H\cos{\alpha}\sin{\beta}\\
H\sin{\alpha}\sin{\beta}\\
H\cos{\beta}\\
\end{bmatrix}
\label{eq:Horb}
\end{equation}
\end{singlespace}
\noindent where ($H_X$, $H_Y$, $H_Z$) are the $\mathcal{O}$ frame components and $H=|\bm{H}|$. Taking the time derivative of Eq.~\ref{eq:Horb}, solving for $\dot{\alpha}$, $\dot{\beta}$, and $\dot{H}$, and substituting the results from Eq.~\ref{eq:Horbdot}, we ultimately obtain,
\begin{equation}
\dot{\alpha}=\frac{M_y+Hn\cos{\alpha}\cos{\beta}}{H\sin{\beta}}
\label{eq:alphadot}
\end{equation}
\begin{equation}
\dot{\beta}=\frac{M_x+Hn\sin{\alpha}}{H}
\label{eq:betadot}
\end{equation}
\begin{equation}
\dot{H}=M_z
\label{eq:Hdot}
\end{equation}
where ($M_x$, $M_y$, $M_z$) denote the torque components in the angular momentum frame. Note that $\dot{\alpha}$ is singular for $\beta=$ 0$^{\circ}$ and 180$^{\circ}$ due to $\sin{\beta}$ in the denominator of Eq.~\ref{eq:alphadot}. While not implemented in our model, one could replace $\alpha$ and $\beta$ with the alternate coordinates $v=\sin{\alpha}\sin{\beta}$ and $w=\cos{\alpha}\sin{\beta}$ when $\bm{H}$ is very near the sun/anti-sun line. These coordinates were simply obtained by finding expressions that cancel $\sin{\beta}$ in the denominator of Eq.~\ref{eq:alphadot}. This alternate set will instead have a $\beta$ ambiguity since $\sin{\beta}$ is symmetric about $\beta=$ 90$^{\circ}$.
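As a numerical check of Eqs.~\ref{eq:alphadot}--\ref{eq:Hdot}, the element rates can be evaluated directly. The sketch below (Python/NumPy; the function name and argument conventions are ours) also exercises the torque-free limit, in which $\dot{H}=0$ and the apparent pole motion comes entirely from the rotating frame:

```python
import numpy as np

def element_rates(alpha, beta, H, M_H, n):
    """Rates of the osculating elements (Eqs. alphadot, betadot, Hdot).

    alpha, beta : pole angles in the O frame [rad]
    H           : rotational angular momentum magnitude
    M_H         : net torque components (Mx, My, Mz) in the H frame
    n           : heliocentric mean motion [rad/s]
    """
    Mx, My, Mz = M_H
    alpha_dot = (My + H * n * np.cos(alpha) * np.cos(beta)) / (H * np.sin(beta))
    beta_dot = (Mx + H * n * np.sin(alpha)) / H
    H_dot = Mz
    return alpha_dot, beta_dot, H_dot

# Torque-free case: H is constant and the frame rotation alone drives alpha, beta
ad, bd, hd = element_rates(alpha=0.0, beta=np.pi / 4, H=100.0,
                           M_H=(0.0, 0.0, 0.0), n=1.991e-7)
```

With $\alpha=0$ and $\beta=45^{\circ}$ the torque-free rates reduce to $\dot{\alpha}=n$, $\dot{\beta}=0$, $\dot{H}=0$, as expected from the frame rotation alone.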
Another quantity of interest, the dynamic moment of inertia $I_d$, is given by $I_d=H^2/2T$ where $\bm{H}=[I]\boldsymbol{\omega}$ and the rotational kinetic energy $T=\frac{1}{2}\boldsymbol{\omega}{\cdot}[I]\boldsymbol{\omega}$. $[I]$ and $\boldsymbol{\omega}$ are the body's inertia tensor and inertial angular velocity of the $\mathcal{B}$ frame respectively. With principal inertias $I_s\;{\geq}\;I_i\;{\geq}\;I_l$, we will assume the long axis convention with $[I]=\mathrm{diag}([I_i,I_s,I_l])$ \cite{sa1991}. For torque-free rigid body rotation, $I_d$ defines the closed path that $\boldsymbol{\omega}$ takes through the body frame, known as a polhode \cite{landau}. $I_d$ is constrained to $[I_l,I_s]$ since $T$ is bounded for a given $H$. When $I_l<I_d<I_i$, the satellite is said to be in a long axis mode (LAM) because $\boldsymbol{\omega}$ circulates about the satellite's long axis ($\bm{\hat{b}}_3$) \cite{landau}. When $I_i<I_d<I_s$, the satellite is in a short axis mode (SAM) where $\boldsymbol{\omega}$ instead circulates about the short axis ($\bm{\hat{b}}_2$). $I_d=I_l$ and $I_d=I_s$ correspond to principal axis rotation about $\bm{\hat{b}}_3$ and $\bm{\hat{b}}_2$ respectively. Finally $I_d=I_i$ denotes motion along the separatrix between LAMs and SAMs or uniform rotation about the intermediate axis, both of which are unstable. Various polhodes are illustrated in Figure~\ref{fig:polhode} for the GOES 8 satellite assuming constant $H$. Here, the separatrices are shown in black.
Taking the time derivative of $I_d$, we ultimately obtain,
\begin{equation}
\dot{I}_d=-\frac{2I_d}{H}\Bigg[\frac{I_d-I_i}{I_i}a_{z1}M_1+\frac{I_d-I_s}{I_s}a_{z2}M_2+\frac{I_d-I_l}{I_l}a_{z3}M_3\Bigg]
\label{eq:Iddot2}
\end{equation}
where ($M_1$, $M_2$, $M_3$) denote the net torque components in the body frame. Complementing $I_d$ is another fundamental quantity called the effective spin rate $\omega_e=H/I_d$, which is proportional to $\boldsymbol{\omega}$ (see Appendix A). Analogous to osculating orbital elements that define an instantaneous unperturbed two-body (Keplerian) orbit \cite{vallado}, $\alpha$, $\beta$, $I_d$, and $H$ (or $\omega_e$) define the instantaneous unperturbed rotation state, which changes slowly over time due solar radiation torques and/or other perturbations.
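The mapping from $I_d$ to rotation mode reduces to a pair of inequalities. A sketch using the GOES 8 end of life principal inertias used in this paper (the function name and boundary handling are ours):

```python
def spin_mode(Id, Il, Ii, Is):
    """Classify the torque-free rotation state from the dynamic moment of inertia."""
    if Il <= Id < Ii:
        return "LAM"         # angular velocity circulates about the long axis b3
    if Ii < Id <= Is:
        return "SAM"         # angular velocity circulates about the short axis b2
    return "separatrix"      # Id = Ii: separatrix / intermediate-axis rotation

# GOES 8 end of life principal inertias [kg m^2]
Il, Ii, Is = 980.5, 3432.1, 3570.0

mode_tumbling = spin_mode(2000.0, Il, Ii, Is)  # long axis mode
mode_flat = spin_mode(3500.0, Il, Ii, Is)      # short axis mode

# Effective spin rate for a given angular momentum magnitude (hypothetical H)
omega_e = 600.0 / 2000.0  # omega_e = H / Id
```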
\begin{figure}[h]
\centering
\includegraphics[height=2in]{goes8_polhode_L_convention_flat.png}
\caption{Angular velocity curves for long (LAM) and short (SAM) axis modes.}
\label{fig:polhode}
\end{figure}
\subsection{Full Dynamics}
For the full dynamics, the body frame angular velocity $\boldsymbol{\omega}$ evolution is given by Euler's equations,
\begin{equation}
[I]\dot{\boldsymbol{\omega}}=-[\tilde{\boldsymbol{\omega}}][I]\boldsymbol{\omega}+\bm{M}
\label{eq:wdot}
\end{equation}
where $[\tilde{\;\;\;}]$ is the skew-symmetric cross product operator.
The body's inertial attitude is tracked using quaternions \citep{schaub},
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
\dot{\beta}_0 \\
\dot{\beta}_1 \\
\dot{\beta}_2 \\
\dot{\beta}_3 \\
\end{bmatrix}
=
\frac{1}{2}\begin{bmatrix}
-\beta_1 & -\beta_2 & -\beta_3\\
\beta_0 & -\beta_3 & \beta_2\\
\beta_3 & \beta_0 & -\beta_1\\
-\beta_2 & \beta_1 & \beta_0\\
\end{bmatrix}
\begin{bmatrix}
\omega_1\\
\omega_2\\
\omega_3\\
\end{bmatrix}
\label{eq:quatkde}
\end{equation}
\end{singlespace}
\noindent where $\beta_0$ is the scalar component. In this paper, the full dynamics are propagated with MATLAB's ode113 numerical integrator with 1e-12 absolute and relative tolerances.
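The full dynamics (Eqs.~\ref{eq:wdot} and \ref{eq:quatkde}) can be sketched with a simple fixed-step integrator. The version below (Python/NumPy with a hand-coded RK4 rather than MATLAB's ode113; torque-free for brevity) conserves $H$ and $T$ to high precision, a useful sanity check on any implementation:

```python
import numpy as np

I = np.diag([3432.1, 3570.0, 980.5])  # GOES 8 principal inertias [kg m^2]
I_inv = np.linalg.inv(I)

def rates(state, M=np.zeros(3)):
    """Euler's equations plus the quaternion kinematics (scalar-first)."""
    w, q = state[:3], state[3:]
    w_dot = I_inv @ (-np.cross(w, I @ w) + M)
    b0, b1, b2, b3 = q
    Bmat = 0.5 * np.array([[-b1, -b2, -b3],
                           [ b0, -b3,  b2],
                           [ b3,  b0, -b1],
                           [-b2,  b1,  b0]])
    return np.concatenate([w_dot, Bmat @ w])

def rk4_step(state, dt):
    k1 = rates(state)
    k2 = rates(state + 0.5 * dt * k1)
    k3 = rates(state + 0.5 * dt * k2)
    k4 = rates(state + dt * k3)
    out = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    out[3:] /= np.linalg.norm(out[3:])  # re-normalize the quaternion
    return out

# Torque-free tumble: H and T should be conserved by the integration
state = np.array([0.01, 0.002, 0.03, 1.0, 0.0, 0.0, 0.0])
H0 = np.linalg.norm(I @ state[:3])
T0 = 0.5 * state[:3] @ I @ state[:3]
for _ in range(1000):
    state = rk4_step(state, 0.5)
H1 = np.linalg.norm(I @ state[:3])
T1 = 0.5 * state[:3] @ I @ state[:3]
```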
\subsection{Solar Torque Model}
For this work, the faceted solar radiation force model provided by Scheeres \cite{scheeres2007} is used. This model accounts for absorption, specular reflection, and Lambertian diffuse reflection and re-emission. The satellite is assumed to be in thermal equilibrium, so all absorbed radiation is immediately re-emitted. The solar radiation force acting on the $i$th satellite facet is given by,
\begin{equation}
\bm{f}_i=-P_{SRP}\Big[\{{\rho_i}s_i(2\bm{\hat{n}}_{i}\bm{\hat{n}}_{i}-U)+U\}\cdot\bm{\hat{u}}\\
+c_{di}\bm{\hat{n}}_{i}\Big]A_i\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)
\label{eq:srpforce}
\end{equation}
Here, $P_{SRP}$ is the solar radiation pressure (nominally 4.56${\times}10^{-6}\;\mathrm{N/m^2}$ at 1 AU), $\rho_i$ is the total facet reflectivity, $s_i$ is the fraction of total reflectivity that is specular, $\bm{\hat{n}}_i$ is the facet unit normal vector, $U$ is the 3$\times$3 identity matrix, $\bm{\hat{u}}$ is the satellite to sun unit vector (equivalent to $\bm{\hat{Z}}$), $A_i$ is the facet area, and $c_{di}=B(1-s_i)\rho_i+B(1-\rho_i)$ where $B$ is the scattering coefficient (2/3 for Lambertian reflection). The operation $\bm{\hat{n}}_i\bm{\hat{n}}_i$ represents a matrix outer product. The illumination function $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ ensures that only illuminated facets contribute. Self-shadowing by other facets and multiple reflections are not currently considered.
The solar radiation torque acting on the faceted satellite model can then be calculated as,
\begin{equation}
\bm{M}={\sum_{i=1}^{n_f}}{\bm{r}_i}\times\bm{f}_i
\label{eq:M}
\end{equation}
where $\bm{r}_i$ is the center of mass to the facet centroid position vector and $n_f$ is the number of facets.
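Equations~\ref{eq:srpforce} and \ref{eq:M} translate directly into code. The sketch below (Python/NumPy; facet geometry and function names are ours) reproduces two limiting cases: a perfectly specular sun-facing facet, for which $\bm{f}=-2P_{SRP}A\bm{\hat{u}}$, and a facet facing away from the sun, which contributes nothing:

```python
import numpy as np

P_SRP = 4.56e-6  # solar radiation pressure at 1 AU [N/m^2]
B = 2.0 / 3.0    # Lambertian scattering coefficient

def facet_force(u_hat, n_hat, A, rho, s):
    """Solar radiation force on one facet (Eq. srpforce)."""
    illum = max(0.0, float(u_hat @ n_hat))
    if illum == 0.0:
        return np.zeros(3)  # facet not illuminated
    c_d = B * (1.0 - s) * rho + B * (1.0 - rho)
    U = np.eye(3)
    refl = rho * s * (2.0 * np.outer(n_hat, n_hat) - U) + U
    return -P_SRP * (refl @ u_hat + c_d * n_hat) * A * illum

def net_torque(u_hat, r_list, n_list, A_list, rho_list, s_list):
    """Net solar radiation torque (Eq. M), summed over all facets."""
    M = np.zeros(3)
    for r, n, A, rho, s in zip(r_list, n_list, A_list, rho_list, s_list):
        M += np.cross(r, facet_force(u_hat, n, A, rho, s))
    return M

u = np.array([0.0, 0.0, 1.0])  # sun direction (Z_hat)

# Perfect specular mirror facing the sun: f = -2 * P_SRP * A * u
f_mirror = facet_force(u, np.array([0.0, 0.0, 1.0]), A=1.0, rho=1.0, s=1.0)

# Facet facing away from the sun contributes nothing
f_dark = facet_force(u, np.array([0.0, 0.0, -1.0]), A=1.0, rho=1.0, s=1.0)
```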
\subsection{GOES Model}
We will now briefly discuss the GOES model used to explore the YORP dynamics in this paper. GOES 8-12 are a family of five retired GEO weather satellites. They are notable for their asymmetry and well-documented dimensions \citep{databook}. When uncontrolled, this asymmetry provides the opportunity for large net solar torques. The 26 facet GOES shape model used for this work is provided in Figure~\ref{fig:goes_baxes} with GOES 8's approximate end of life principal axes and solar array angle $\theta_{sa}$ of 17$^{\circ}$ \citep{benson2020b}. For GOES 8 with a dry mass of 972 kg, the end of life principal inertias are $I_l =$ 980.5, $I_i =$ 3432.1, and $I_s =$ 3570.0 kg${\cdot}$m$^2$ \citep{benson2020b}. Also, $\theta_{sa}$ is measured positive around $-\bm{\hat{b}}_3$, and $\theta_{sa}=$ 0$^{\circ}$ when the solar array sun-side and $+\bm{\hat{b}}_2$ face are parallel. See Ref. \cite{benson2020b} for how $\theta_{sa}$ impacts the model inertias. Table~\ref{tab:goesoptical} provides the optical properties assumed for the various GOES model components \citep{benson2020b}. Note that most of the materials are MLI or aluminized tape which provide almost exclusively specular reflections.
\begin{figure}[H]
\centering
\includegraphics[width=4in]{goes8_shape_model_paxes_long_axis_convention_labeled.png}
\caption{GOES 8 shape model with principal axes and major components labeled.}
\label{fig:goes_baxes}
\end{figure}
\begin{singlespace}
\begin{table}[H]
\centering
\caption{GOES Model Optical Properties}
\begin{tabular}{llll}
\hline
Component & Material & $\rho_i$ & $s_i$ \\
\hline
Bus & MLI & 0.60 & 1 \\
Solar Array front & Solar cell & 0.27 & 1 \\
Solar Array back & Graphite & 0.07 & 0 \\
Trim Tab front & Al tape & 0.83 & 1 \\
Trim Tab back & Graphite & 0.07 & 0 \\
Solar Sail sides/top & Al Kapton & 0.66 & 1 \\
Solar Sail base & Al tape & 0.83 & 1 \\
\hline
\end{tabular}
\label{tab:goesoptical}
\end{table}
\end{singlespace}
\section{Full YORP Dynamics}
We will now provide simulation results from the full dynamics model (Eqs.~\ref{eq:wdot} - \ref{eq:M}) to illustrate the complex, yet structured YORP-driven dynamical evolution. This will motivate our development of the tumbling-averaged model in Section IV. Again, we neglect the satellite's geosynchronous orbit and assume that the sun rotates in the inertial frame at earth's mean motion $n$ (${\sim}$0.986$^{\circ}$/day). The GOES 8 shape model and mass parameters given above are utilized. We will discuss two simulation runs, Run 1 and Run 2. Run 1 demonstrates uniform to tumbling transition, spin-orbit coupling, and tumbling cycles. Run 2 demonstrates these behaviors in addition to tumbling period resonances. Starting with Run 1, the satellite is placed in uniform rotation about $+\bm{\hat{b}}_2$ with $P_e=2\pi/{\omega_e}=$ 20 min and a pole direction with $\alpha_o=$ 202$^{\circ}$ and $\beta_o=$ 77$^{\circ}$. The initial evolution is provided in Figure~\ref{fig:run1_evol_zoom}. Starting in uniform rotation, Figure~\ref{fig:run1_evol_zoom}a shows that $\omega_e$ decreases rapidly over the first four days as the satellite spins down. During this initial spin down, Figure~\ref{fig:run1_evol_zoom}d shows that $\beta$ decreases as the pole moves towards the sun-line. Once $\omega_e$ reaches a sufficiently small value, the satellite transitions to non-principal axis rotation, apparent in Figure~\ref{fig:run1_evol_zoom}b. Here, $I_d$ decreases as the rotation moves from uniform rotation to SAM to LAM, crossing the separatrix denoted by the dashed line. From approximately five days onward, $\omega_e$ increases and $I_d$ decreases as the satellite spins up further about $+\bm{\hat{b}}_3$, the minimum inertia axis. During this time, $\alpha$ and $\beta$ increase as the pole begins precessing about the sun-line with $\alpha$ taking roughly five days to complete a cycle.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_we_zoom.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_id_zoom.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_alpha_zoom.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_beta_zoom.pdf}}
\caption{Run 1 - transition from uniform rotation to tumbling.}
\label{fig:run1_evol_zoom}
\end{figure}
Proceeding further in time, Figure~\ref{fig:run1_evol} shows evolution of the Run 1 solution over three years. On this timescale, we see that the satellite continues in this long axis spin up state until around 160 days when $\beta$ reaches 90$^{\circ}$. At this point, $\omega_e$ decreases and $I_d$ increases as the satellite moves back towards uniform rotation. This trend continues until 285 days when the satellite is finally rotating slowly in near-uniform rotation with $\beta$ approaching 180$^{\circ}$. Given the small $\omega_e$, $\beta$ decreases rapidly towards 0$^{\circ}$. During this time, $\omega_e$ briefly increases, then decreases with $I_d\;{\approx}\;I_s$. Once $\beta$ nears 0$^{\circ}$, the satellite again spins up about $+\bm{\hat{b}}_3$ and enters a second, much longer, tumbling cycle.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_beta.pdf}}
\caption{Run 1 - long-term dynamical evolution.}
\label{fig:run1_evol}
\end{figure}
To better visualize the pole evolution during these tumbling cycles, Figure~\ref{fig:run1_hvecorb} shows the evolution of $\bm{H}$ in the $\mathcal{O}$ frame over the first tumbling cycle in Run 1 (from 0 to 309 days in Figure~\ref{fig:run1_evol}). The green section is the initial uniform spin down from 0 to 4 days as $\omega_e$ decreases and $\bm{H}$ moves towards the sun-line ($\bm{\hat{Z}}$). The blue tumbling segment from 4 days to 305 days, shows $\bm{H}$ precess about $\bm{\hat{Z}}$ while slowly moving in the $-\bm{\hat{Z}}$ direction. The near-uniform return from $\beta$ near 180$^{\circ}$ to 0$^{\circ}$ is shown in red. The second tumbling cycle is not shown for clarity but follows this same general behavior.
\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_hvecorb.pdf}
\caption{Run 1 - $\bm{H}$ evolution in $\mathcal{O}$ frame over the first tumbling cycle (0 to 308 days). The gray lines are projections of this evolution on the three orthogonal planes.}
\label{fig:run1_hvecorb}
\end{figure}
For Run 2, we illustrate tumbling period resonances. The satellite is again placed in uniform rotation with $P_e=$ 20 min but now with a pole given by $\alpha_o=$ 202$^{\circ}$ and $\beta_o=$ 17$^{\circ}$. The resulting long-term evolution is provided in Figure~\ref{fig:run2_evol}. As with Run 1, $\omega_e$ decreases rapidly, the satellite transitions to tumbling, and it proceeds through a tumbling cycle. This first cycle is followed by a second, shorter cycle. After this tumbling cycle, the satellite again spins up about the minimum inertia axis but this time is captured in a $P_\psi/P_{\bar{\phi}}=$ 1 tumbling period resonance at roughly 290 days rather than entering another cycle. $P_{\bar{\phi}}$ is the average precession period of the satellite's long axis ($\bm{\hat{b}}_3$) about $\bm{H}$ and $P_\psi$ is the rotation period about $\bm{\hat{b}}_3$ itself. See Appendix A for the fundamental period expressions. Given the nearly axisymmetric mass distribution of GOES 8 ($I_s\;\approx\;I_i>I_l$), $\dot{\phi}$ is nearly constant and the average precession period $P_{\bar{\phi}}$ is essentially equal to the true precession period. So at this 1:1 resonance, the satellite returns to the same inertial attitude at multiples of $P_{\bar{\phi}}$ and $P_\psi$. Figure~\ref{fig:run2_evol} shows that $\omega_e$ increases steadily while $P_{\bar{\phi}}$ and $P_\psi$ remain in lock step with one another. Since the period ratio $P_\psi/P_{\bar{\phi}}$ is only a function of $I_l$, $I_i$, $I_s$, and $I_d$, constant $P_\psi/P_{\bar{\phi}}$ requires that $I_d$ be constant as well. While in this resonance, $\beta$ oscillates between 40$^{\circ}$ and 70$^{\circ}$ with a slight secular increase over time. Carefully examining Figure~\ref{fig:run2_evol}c, the satellite's long axis spin up is briefly perturbed when passing through the 1:1 period resonance near 11 days. Also, the period ratio over the second tumbling cycle (from 260 to 285 days) oscillates around a 2:1 ratio. 
Tumbling period resonances were frequently observed in other simulation runs, with 1:1 and 2:1 resonances being the most common. Higher-order resonances were occasionally observed as well.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_id.pdf}}
\subcaptionbox{Ratio of Fundamental Tumbling Periods}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_pratio.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_beta.pdf}}
\caption{Run 2 - long-term dynamical evolution.}
\label{fig:run2_evol}
\end{figure}
\section{Averaged YORP Dynamics}
To better understand the behavior illustrated by the full dynamics model, we will now develop, validate, and explore the semi-analytical tumbling-averaged model. For this paper, we will assume the tumbling periods are non-resonant to simplify the averaging. Analysis of specific tumbling resonances and their stability will feature in a follow-up paper.
\subsection{Averaging Approach}
Following Cicalo and Scheeres \cite{cicalo}, we aim to average Eqs.~\ref{eq:alphadot} - \ref{eq:Iddot2} over the satellite's tumbling motion. For this approach, we assume that the variables $\alpha$, $\beta$, $H$, and $I_d$ vary slowly compared to the satellite's intrinsic rotation. We also assume that the solar radiation torque is a relatively small perturbation on the satellite's torque-free motion. We therefore average over the torque-free motion (i.e. with respect to $\phi$, $\theta$, and $\psi$) assuming constant values for the average parameters $\overline{\alpha}$, $\overline{\beta}$, $\overline{H}$, and $\overline{I}_d$.
Torque-free rigid body rotation is defined by two fundamental tumbling periods $P_{\bar{\phi}}$ and $P_\psi$ \citep{sa1991,bensonda14}. Again, $P_{\bar{\phi}}$ is the average precession period of the satellite's minimum inertia axis ($\bm{\hat{b}}_3$) about $\bm{H}$ and $P_\psi$ is the rotation period about $\bm{\hat{b}}_3$ itself. $P_\theta$ is proportional to $P_\psi$ and is therefore not independent. In general, $\dot{\phi}$ varies over a precession cycle, so the time needed for $\phi$ to increase by $2\pi$ is not constant. Nevertheless, we will assume constant $\dot{\phi}$ to greatly simplify the averaging process. Fortunately, $\dot{\phi}$ is essentially constant for bodies with roughly axisymmetric inertia tensors, making this a good approximation for many GEO satellites and rocket bodies. Furthermore, assuming $P_{\bar{\phi}}$ and $P_\psi$ are non-resonant, we can separately average over the independent precession ($\phi$) and coupled nutation ($\theta$) and rotation ($\psi$) motions. Expressing this mathematically for the general variable $F$, we have,
\begin{equation}
{\langle\dot{F}\rangle}_\phi=\frac{1}{2\pi}\int_0^{2\pi}{\dot{F}(\phi,\theta,\psi)}d{\phi}
\label{eq:phiavg}
\end{equation}
and
\begin{equation}
\dot{\overline{F}}=\frac{1}{P_\psi}\int_0^{P_\psi}{{\langle\dot{F}\rangle}_\phi}\Big(\theta(t),\psi(t)\Big)dt
\label{eq:psiavg}
\end{equation}
To evaluate Eq.~\ref{eq:psiavg}, we leverage the complete elliptic integral of the first kind $K$ (see Appendix A) \citep{landau,numericalrecipes}. Rewriting Eq.~\ref{eq:psiavg} with the linearly scaled time variable $\tau$, noting that ${\Delta}t=P_{\psi}$ corresponds to ${\Delta}\tau=4K$,
\begin{equation}
\dot{\overline{F}}=\frac{1}{4K}\int_0^{4K}{{\langle\dot{F}\rangle}_\phi}\Big(\theta(\tau),\psi(\tau)\Big)d\tau
\label{eq:psiavgK}
\end{equation}
Averaging over $\tau$ involves the Jacobi elliptic functions $\cn\tau$, $\sn\tau$, and $\dn\tau$ (see the Appendices).
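As a numerical illustration of this averaging step (not part of the paper's derivation), the sketch below evaluates the $\tau$-average of Eq.~\ref{eq:psiavgK} by quadrature using SciPy's implementations of $K$ and the Jacobi elliptic functions. The elliptic parameter $m=0.3$ is arbitrary; in practice $m$ is set by $I_l$, $I_i$, $I_s$, and $\overline{I}_d$.

```python
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.integrate import trapezoid

def tumbling_average(f, m, n_steps=20001):
    """Numerically evaluate the tau-average of f(sn, cn, dn) over one full
    period Delta-tau = 4K, mirroring Eq. (psiavgK)."""
    K = ellipk(m)                      # complete elliptic integral, first kind
    tau = np.linspace(0.0, 4.0 * K, n_steps)
    sn, cn, dn, _ = ellipj(tau, m)     # Jacobi elliptic functions
    return trapezoid(f(sn, cn, dn), tau) / (4.0 * K)

m = 0.3                                # arbitrary elliptic parameter for the demo
K = ellipk(m)
# Closed-form check: the average of dn over a full period is pi / (2K),
# since the integral of dn is the amplitude function am, and am(4K) = 2*pi.
avg_dn = tumbling_average(lambda sn, cn, dn: dn, m)
```

Averages of $\mathrm{sn}\,\tau$ and $\mathrm{cn}\,\tau$ vanish over the full period, which is why only even products survive the averaging.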
The tumbling-averaged equations of motion are then given by,
\begin{equation}
\dot{\overline{\alpha}}=\frac{\overline{M_y}+\overline{H}n\cos{\overline{\alpha}}\cos{\overline{\beta}}}{\overline{H}\sin{\overline{\beta}}}
\label{eq:alphadotavg}
\end{equation}
\begin{equation}
\dot{\overline{\beta}}=\frac{\overline{M_x}+\overline{H}n\sin{\overline{\alpha}}}{\overline{H}}
\label{eq:betadotavg}
\end{equation}
\begin{equation}
\dot{\overline{H}}=\overline{M_z}
\label{eq:Hdotavg}
\end{equation}
\begin{equation}
\dot{\overline{I}}_d=-\frac{2\overline{I}_d}{\overline{H}}\Bigg[\frac{\overline{I}_d-I_i}{I_i}\overline{a_{z1}M_1}+\frac{\overline{I}_d-I_s}{I_s}\overline{a_{z2}M_2}+\frac{\overline{I}_d-I_l}{I_l}\overline{a_{z3}M_3}\Bigg]
\label{eq:Iddotavg}
\end{equation}
\begin{equation}
\dot{\overline{\omega}}_e=\frac{1}{\overline{I_d}}\Bigg[\overline{M_z}-\frac{\overline{H}}{\overline{I_d}}\dot{\overline{I}}_d\Bigg]
\label{eq:wedotavg}
\end{equation}
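To show how Eqs.~\ref{eq:alphadotavg} - \ref{eq:Iddotavg} can be propagated in practice, the sketch below integrates them with placeholder torques and inertias. All numerical values are assumed for illustration only; the actual averaged torque components are the facet sums developed in the next section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# All numbers below are placeholders for illustration -- NOT the GOES 8 model.
n = 2.0 * np.pi / (365.25 * 86400.0)       # sun-frame rotation rate [rad/s]
I_l, I_i, I_s = 1000.0, 2900.0, 3100.0     # principal inertias [kg m^2]

def averaged_torques(beta, I_d, scale):
    """Placeholder averaged torques; the real components are facet sums
    that depend on beta and I_d."""
    Mx = scale * np.sin(beta)                  # pushes H away from the sun line
    My = 2.0 * scale * np.sin(beta)            # drives precession about the sun line
    Mz = -scale * np.cos(beta)                 # sign change at beta = 90 deg
    azM = (0.1 * scale, -0.1 * scale, 0.2 * scale)   # placeholder a_z* M_* terms
    return Mx, My, Mz, azM

def averaged_eom(t, y, scale):
    """Eqs. (alphadotavg)-(Iddotavg) for the state [alpha, beta, H, I_d]."""
    alpha, beta, H, I_d = y
    Mx, My, Mz, (a1M1, a2M2, a3M3) = averaged_torques(beta, I_d, scale)
    alpha_dot = (My + H * n * np.cos(alpha) * np.cos(beta)) / (H * np.sin(beta))
    beta_dot = (Mx + H * n * np.sin(alpha)) / H
    H_dot = Mz
    I_d_dot = -2.0 * I_d / H * ((I_d - I_i) / I_i * a1M1
                                + (I_d - I_s) / I_s * a2M2
                                + (I_d - I_l) / I_l * a3M3)
    return [alpha_dot, beta_dot, H_dot, I_d_dot]

y0 = [0.0, np.radians(15.0), 3000.0 * 2.0 * np.pi / 7200.0, 3000.0]
t_span = (0.0, 30.0 * 86400.0)                           # 30 days
sol = solve_ivp(averaged_eom, t_span, y0, args=(1e-7,),  # placeholder torques on
                rtol=1e-10, atol=1e-12)
sol0 = solve_ivp(averaged_eom, t_span, y0, args=(0.0,),  # torque-free check
                 rtol=1e-10, atol=1e-12)
```

With the torques zeroed, $\overline{H}$ and $\overline{I}_d$ are conserved and only the sun-frame kinematics (the $n$ terms) act, which provides a quick sanity check on an implementation.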
\section{Non-Resonant Averaged YORP}
We must evaluate the six averaged torque components $\overline{M_x}$, $\overline{M_y}$, $\overline{M_z}$, $\overline{a_{z1}M_1}$, $\overline{a_{z2}M_2}$, and $\overline{a_{z3}M_3}$. To facilitate the analytical averaging, we follow Ref. \cite{cicalo} and approximate $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ using its second order Fourier series expansion,
\begin{equation}
\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)\;{\approx}\; g_i = \frac{1}{3\pi}+\frac{1}{2}(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)+\frac{4}{3\pi}(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)^2
\label{eq:illuminationfunction}
\end{equation}
where, given our frame definitions, $u_x = -\sin{\beta}$, $u_y = 0$, and $u_z=\cos{\beta}$. So $\bm{\hat{u}}\cdot\bm{\hat{n}} = {u_x}n_x+{u_z}n_z$.
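The quality of this approximation can be checked directly. The second order Fourier series of $\max(0,\cos\theta)$ is $1/\pi + \frac{1}{2}\cos\theta + \frac{2}{3\pi}\cos{2\theta}$, which reduces to Eq.~\ref{eq:illuminationfunction} via $\cos{2\theta}=2\cos^2\theta-1$. The short sketch below compares $g_i$ against the exact illumination function:

```python
import numpy as np

def g(x):
    """Second order Fourier approximation of max(0, x), Eq. (illuminationfunction)."""
    return 1.0 / (3.0 * np.pi) + 0.5 * x + 4.0 / (3.0 * np.pi) * x**2

x = np.linspace(-1.0, 1.0, 2001)          # x = u-hat dot n-hat
err = g(x) - np.maximum(0.0, x)           # approximation error
```

The worst error, about 0.11, occurs at the terminator ($\bm{\hat{u}}\cdot\bm{\hat{n}}_i=0$), while fully illuminated ($g(1)\approx1.03$) and fully shadowed ($g(-1)\approx0.03$) orientations are reproduced closely.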
With this approximation,
\begin{singlespace}
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
\overline{M_x} \\ \overline{M_y} \\ \overline{M_z}
\end{bmatrix} = -P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[c_{si}\overline{(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)g_i\bm{d}_i}+c_{ai}\overline{g_i\bm{r}_i\times\bm{\hat{u}}}+c_{di}\overline{g_i\bm{d}_i}\Bigg]A_i
\label{eq:HM}
\end{equation}
\end{singlespace}
\noindent where $\bm{d}_i = \bm{r}_i\times\bm{\hat{n}}_i$ and the constants $c_{si}=2{\rho_i}s_i$ and $c_{ai} = (1-{\rho_i}s_i)$.
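For reference, the quantity being averaged in Eq.~\ref{eq:HM} is the facet-summed SRP torque with the exact illumination function. A minimal sketch is below; the diffuse coefficient convention $c_{d}=\frac{2}{3}\rho(1-s)$ is an assumption here (the paper defines $c_{di}$ elsewhere), and the cube geometry is purely illustrative.

```python
import numpy as np

P_SRP = 4.56e-6   # solar radiation pressure at 1 AU [N/m^2]

def srp_torque(u_hat, normals, centroids, areas, rho, s):
    """Facet-summed SRP torque using the exact illumination max(0, u.n).
    NOTE: c_d = (2/3) rho (1 - s) is an assumed convention here."""
    M = np.zeros(3)
    for n_i, r_i, A_i, rho_i, s_i in zip(normals, centroids, areas, rho, s):
        mu = max(0.0, float(np.dot(u_hat, n_i)))      # illumination function
        if mu == 0.0:
            continue                                   # facet in shadow
        c_s = 2.0 * rho_i * s_i
        c_a = 1.0 - rho_i * s_i
        c_d = (2.0 / 3.0) * rho_i * (1.0 - s_i)
        # Same per-facet terms as Eq. (HM): specular, absorbed, diffuse.
        F_i = -P_SRP * A_i * mu * (c_a * u_hat + (c_s * mu + c_d) * n_i)
        M += np.cross(r_i, F_i)
    return M

# Sanity check: a uniform cube about its center produces zero net SRP torque.
normals = np.vstack([np.eye(3), -np.eye(3)])
centroids = 0.5 * normals                  # face centers of a unit cube
areas = np.ones(6)
rho, s = np.full(6, 0.6), np.full(6, 0.5)
u_hat = np.array([-np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])
M_cube = srp_torque(u_hat, normals, centroids, areas, rho, s)
```

The cube check works because each face's normal passes through the center ($\bm{r}_i\times\bm{\hat{n}}_i=\bm{0}$) and the $\bm{\hat{u}}$-direction force contributions from opposing illuminated faces cancel.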
From Eqs.~\ref{eq:BH} and \ref{eq:Hvec}, we see that all $\mathcal{H}$ frame $x$ and $y$ vector components will contain either $\cos{\phi}$ or $\sin{\phi}$. So products with odd combined powers of $x$ and $y$ will average to zero over $\phi$. Expanding Eq.~\ref{eq:HM}, including only non-zero terms, and dropping the $i$th facet indices from the averaged products for brevity, $\overline{M_x}$, $\overline{M_y}$, and $\overline{M_z}$ are then given by,
\begin{equation}
\begin{split}
\overline{M_x}= -P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
u_x\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_xn_x}
+u_xu_z\Big(c_{si} + \frac{8}{3\pi}c_{di}\Big)\overline{d_xn_xn_z}
+ \frac{4}{3\pi}c_{si}u_x^3\overline{d_xn_x^3} \\ &
+ \frac{4}{\pi}c_{si}u_xu_z^2\overline{d_xn_xn_z^2}
+ \frac{1}{2}c_{ai}u_xu_z\overline{r_yn_x}
+ \frac{8}{3\pi}c_{ai}u_xu_z^2\overline{r_yn_xn_z}\Bigg]A_i
\end{split}
\label{eq:Mx}
\end{equation}
\begin{equation}
\begin{split}
\overline{M_y}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
u_x\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_yn_x}
+u_xu_z\Big(c_{si} +\frac{8}{3\pi}c_{di}\Big)\overline{d_yn_xn_z}
+ \frac{4}{3\pi}c_{si}u_x^3\overline{d_yn_x^3} \\ &
+ \frac{4}{\pi}c_{si}u_xu_z^2\overline{d_yn_xn_z^2}
+\frac{1}{3\pi}c_{ai}u_x\overline{r_z}
-\frac{1}{2}c_{ai}u_xu_z\overline{r_xn_x}
+ \frac{1}{2}c_{ai}u_xu_z\overline{r_zn_z} \\ &
- \frac{8}{3\pi}c_{ai}u_xu_z^2\overline{r_xn_xn_z}
+ \frac{4}{3\pi}c_{ai}u_x^3\overline{r_zn_x^2}
+ \frac{4}{3\pi}c_{ai}u_xu_z^2\overline{r_zn_z^2}\Bigg]A_i
\end{split}
\label{eq:My}
\end{equation}
\begin{equation}
\begin{split}
\overline{M_z}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
\frac{1}{3\pi}c_{di}\overline{d_z}
+ u_z\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_zn_z}
+ \Big(\frac{1}{2}c_{si} + \frac{4}{3\pi}c_{di}\Big)\Big(u_x^2\overline{d_zn_x^2}
+ u_z^2\overline{d_zn_z^2}\Big) \\ &
+ \frac{4}{\pi}c_{si}u_x^2u_z\overline{d_zn_x^2n_z}
+ \frac{4}{3\pi}c_{si}u_z^3\overline{d_zn_z^3}
-\frac{1}{2}c_{ai}u_x^2\overline{r_yn_x}
- \frac{8}{3\pi}c_{ai}u_x^2u_z\overline{r_yn_xn_z}\Bigg]A_i
\end{split}
\label{eq:Mz}
\end{equation}
Solutions for the various averaged quantities in Eqs.~\ref{eq:Mx}, \ref{eq:My}, and \ref{eq:Mz} are provided in Appendix B. Note that these quantities are implicitly dependent on $\overline{I}_d$.
The terms $\overline{a_{z1}M_1}$, $\overline{a_{z2}M_2}$, and $\overline{a_{z3}M_3}$
are given by,
\begin{equation}
\begin{split}
\overline{a_{z*}M_*}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
\frac{1}{3\pi}c_{di}\overline{a_{z*}d_*}
+ u_z\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{a_{z*}d_*n_z} \\ &
+ \Big(\frac{1}{2}c_{si} + \frac{4}{3\pi}c_{di}\Big)\Big(u_x^2\overline{a_{z*}d_*n_x^2}
+ u_z^2\overline{a_{z*}d_*n_z^2}\Big) \\ &
+ \frac{4}{\pi}c_{si}u_x^2u_z\overline{a_{z*}d_*n_x^2n_z}
+ \frac{4}{3\pi}c_{si}u_z^3\overline{a_{z*}d_*n_z^3}
+c_{ai}\overline{ga_{z*}\delta_*}\Bigg]A_i
\end{split}
\label{eq:az*M*}
\end{equation}
where $*=1,2,3$. Also, $\delta_1=(r_2u_3-r_3u_2)$, $\delta_2=(r_3u_1-r_1u_3)$, and $\delta_3=(r_1u_2-r_2u_1)$. To calculate $\overline{a_{z*}d_*}$, $\overline{a_{z*}{d_*}n_z}$, etc., we note that $d_z = a_{z1}d_1+a_{z2}d_2+a_{z3}d_3$. From the averaged Appendix B equations that include $d_z$ (Eqs.~\ref{eq:l_fz}, \ref{eq:l_fznz}, \ref{eq:l_fznx2}, \ref{eq:l_fznz2}, \ref{eq:l_fznx2nz}, \ref{eq:l_fznz3} for LAMs and Eqs.~\ref{eq:s_fz}, \ref{eq:s_fznz}, \ref{eq:s_fznx2}, \ref{eq:s_fznz2}, \ref{eq:s_fznx2nz}, \ref{eq:s_fznz3} for SAMs), we retain just the terms containing $d_*$. Solutions for $\overline{g{a_{z*}}\delta_*}$ are provided separately in Appendix B. Overall, Eqs.~\ref{eq:Mx} - \ref{eq:az*M*} depend on $\overline{I_d}$ and $\overline{\beta}$ but are independent of $\overline{\alpha}$ and $\overline{H}$.
\subsection{Averaged Model Validation}
To gain insight into the YORP-driven behavior of the full dynamics model, we now investigate the tumbling-averaged model. First, we will validate the analytically averaged torques using the full torque-free dynamics model (Eqs.~\ref{eq:wdot} - \ref{eq:M}). For the full model, we numerically average Eq.~\ref{eq:M} over time using trapezoidal integration and use $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ rather than its second order Fourier series approximation. The full model is averaged for ${\Delta}t=200P_e$ where again $P_e=2\pi/\omega_e$. This span is more than sufficient for the time-averaged torques to converge.
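The convergence of this time averaging can be illustrated with a toy signal: for a non-resonant (incommensurate) pair of frequencies, a trapezoidal average over 200 effective periods recovers the secular component. All values below are assumed for illustration only.

```python
import numpy as np
from scipy.integrate import trapezoid

# Toy stand-in for one torque component under non-resonant tumbling: two
# incommensurate frequencies plus the secular part the average should recover.
M_sec = 0.7                                   # "true" averaged value
w1 = 2.0 * np.pi / 1200.0                     # precession-like frequency [rad/s]
w2 = w1 * (1.0 + np.sqrt(5.0)) / 2.0          # incommensurate second frequency

def torque(t):
    return M_sec + 0.5 * np.sin(w1 * t) + 0.3 * np.cos(w2 * t)

P_e = 2.0 * np.pi / w1
t = np.linspace(0.0, 200.0 * P_e, 200001)     # 200 effective periods, as in the text
M_avg = trapezoid(torque(t), t) / t[-1]       # trapezoidal time average
```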
Figure~\ref{fig:avg_torques} shows the average torques in the $\mathcal{H}$ frame for the full and analytically averaged models. Both SAM and LAM states are tested. We see that in all cases, the models differ only quantitatively, sharing the same general structure. For the SAM cases, we see that $\overline{M_z}$ is negative for $\beta<$ 90$^{\circ}$ and positive for $\beta>$ 90$^{\circ}$. So the satellite will spin down when $\beta<$ 90$^{\circ}$. Also, $\overline{M_x}\;{\leq}\;0$ across all $\beta$, so $\bm{H}$ will tend to be pushed towards the sun line. For the LAM cases in Figure~\ref{fig:avg_torques}, $\overline{M_y}$ has the largest magnitude of the three torque components. $\overline{M_y}$ drives $\dot{\overline{\alpha}}$ and therefore precession of $\bm{H}$ around the sun line. The precession rate $\dot{\overline{\alpha}}$ varies significantly with $\beta$. Also, $\overline{M_x}\;{\geq}\;0$ for all $\beta$, pushing $\bm{H}$ away from the sun line. $\overline{M_z}$ changes sign at $\beta=$ 90$^{\circ}$, so the satellite will first spin up and then down as $\beta$ increases.

Continuing the comparison, Figure~\ref{fig:avg_Iddot} shows $\dot{\overline{I}}_d$ for the full and analytically averaged models assuming an arbitrary $\omega_e=$ 2$\pi$ rad/s. Again, they differ only quantitatively. We see that for both the SAM and LAM states the satellite will be pushed towards more excited tumbling (smaller $I_d$) for $\beta<$ 90$^{\circ}$ and towards uniform rotation (larger $I_d$) for $\beta>$ 90$^{\circ}$. $\dot{\overline{I}}_d$ solutions for LAM/SAM $-$ were virtually indistinguishable from the $+$ solutions and have been excluded from Figure~\ref{fig:avg_Iddot} for brevity. Overall, the $+/-$ solutions for both LAMs and SAMs differ insignificantly for all components except $\overline{M_y}$, where the solution is mirrored around $\beta=90^{\circ}$ and has an opposite sign.
So for the $+$ and $-$ solutions, $\dot{\overline{\alpha}}$ will have opposite signs and $\bm{H}$ will precess about the sun line in opposite directions. This symmetric structure is due to the particular satellite geometry. For a fixed $\bm{H}$, the $+/-$ LAM/SAM spin states essentially flip the satellite model 180$^{\circ}$ while maintaining the same inertial precession direction. As a result, some averaged torque contributions from the GOES solar array will change for $+/-$ LAM/SAM due to different reflective properties for the front and back array faces. On the other hand, contributions from the axisymmetric solar sail will not change.
\begin{figure}[H]
\centering
\subcaptionbox{SAM+ $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3500.pdf}}
\subcaptionbox{LAM+ $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3000.pdf}}
\subcaptionbox{SAM- $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3500_minus.pdf}}
\subcaptionbox{LAM- $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3000_minus.pdf}}
\caption{Comparison of full and analytically averaged torques for GOES 8 in the $\mathcal{H}$ frame. The full model is solid and the analytically averaged model is dashed.}
\label{fig:avg_torques}
\end{figure}
\begin{figure}[h]
\centering
\subcaptionbox{SAM+ $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_Iddot_id_3500.pdf}}
\subcaptionbox{LAM+ $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_Iddot_id_3000.pdf}}
\caption{GOES 8 $\dot{\overline{I}}_d$ vs. $\beta$.}
\label{fig:avg_Iddot}
\end{figure}
We will now compare the dynamical evolution for the full and analytically averaged models by numerically integrating Eqs.~\ref{eq:alphadotavg} - \ref{eq:Iddotavg}. For both models, the same initial spin state is prescribed with $\overline{\alpha}=0^{\circ}$, $\overline{\beta}=15^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ (SAM+), and $P_e=$ 120 min. Using MATLAB's ode113 numerical integrator with $10^{-12}$ absolute and relative tolerances for both models, the full model was propagated for three years and the averaged model for six years to show at least one tumbling cycle. The resulting evolution is provided in Figure~\ref{fig:evol_comp}. We see that the trends in the two models agree, but tumbling cycle times differ considerably with the full model progressing through the first tumbling cycle in roughly 700 days while the averaged model takes 1500 days. As the full model first passes through the 2:1 and 1:1
tumbling resonances, it is perturbed similarly to Run 2 in Figure~\ref{fig:run2_evol}. These
perturbing resonances may explain the initial jump in $\beta$ and advancement
in the tumbling cycle compared to the averaged model which
does not account for resonances. Another contributing factor to this difference is that $\overline{M_x}$ is slightly smaller for the averaged model than for the full model when $\beta<$ 90$^{\circ}$ (see Figure~\ref{fig:avg_torques}b). This causes $\beta$ for the averaged solution to evolve more slowly, allowing $\omega_e$ (and $H$) more time to increase. In Figure~\ref{fig:evol_comp}a, the peak $\omega_e$ is 50$\%$ larger for the averaged model than the full model. The added pole ``stiffness'' provided by this larger spin rate further slows $\beta$ evolution for the averaged model compared to the full model. Artificially increasing the averaged model $\overline{M_x}$ by 20$\%$, approximately the difference between $\overline{M_x}$ for the two models, brought the averaged model's tumbling cycle time into agreement with the full model.
While the full and averaged models provide quantitatively different results due to our averaging assumptions (most notably the neglect of resonances and the illumination function approximation), the averaged model replicates the tumbling cycles and sun-tracking behavior of the full model. Furthermore, for the Figure~\ref{fig:evol_comp} example, the total averaged model computation time was 7 seconds, compared to 70 minutes for the full model's three-year propagation. This reduction in computation time of roughly three orders of magnitude was consistently observed across the averaged model runs.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_beta.pdf}}
\caption{GOES 8 full and averaged dynamical evolution (initial conditions: $\overline{\alpha}=0^{\circ}$, $\overline{\beta}=15^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ SAM+, and $P_e=2\pi/\overline{\omega}_e=$ 120 min).}
\label{fig:evol_comp}
\end{figure}
\subsection{Averaged YORP-Driven Evolution}
\subsubsection{Uniform to Tumbling Transition}
The tumbling-averaged model essentially extends the uniform spin-averaged model explored by Refs. \cite{scheeres2007,albuja2015,albuja2018,benson2020b} to general tumbling motion. Being much faster than the full dynamics model, the tumbling-averaged model readily allows for exploration of long-term uniform rotation and the transition to tumbling. Figure~\ref{fig:uniform_tumbling_transition} shows the six-year evolution for GOES 8 starting in nearly uniform major axis rotation. Here we assume an initial $P_e=$ 30 s and long axis rotation angle amplitude $\psi_{\mathrm{max}}=$ 0.01$^{\circ}$. Per Ref.~\cite{sa1991}, this $\psi_{\mathrm{max}}$ corresponds to $I_d/I_s\;{\approx}\;1-10^{-9}$. This slight negative offset from uniform rotation prevents $I_d$ from exceeding $I_s$ during propagation due to truncation error. For the first 3.5 years, the satellite remains in uniform rotation and exhibits a roughly one-year periodicity in $\omega_e$. This is due to $\overline{M_z}$ and $\dot{\overline{\omega}}_e$ changing sign at $\beta=$ 90$^{\circ}$ (see Figure~\ref{fig:avg_torques}a and Figure~\ref{fig:eom_contours_17}c) as $\bm{H}$ remains nearly inertially fixed due to the fast spin rate. The same behavior can be observed with the uniform spin-averaged model results in Ref. \cite{benson2020b} (see Figure 12 in that paper). Defunct satellites including Telstar 401 and retired Glonass satellites observed by Refs. \cite{earl,rachman} exhibit similar yearly spin rate oscillations. During this initial 3.5 year period, there is also a secular decrease in $\overline{\omega}_e$. After roughly 3.5 years, the satellite reaches a maximum $P_e$ of approximately 40 min with $\overline{\beta}$ approaching 0$^{\circ}$. At this point, the satellite loses spin stability and transitions to tumbling. It then spins up about the long axis and progresses into a tumbling cycle with $\bm{H}$ precessing around the sun line.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_beta.pdf}}
\caption{Averaged model transition from uniform rotation to tumbling for GOES 8 (initial conditions: $\overline{\alpha}=$ 90$^{\circ}$, $\overline{\beta}=$ 90$^{\circ}$, $P_e=$ 30 s, $\overline{I}_d/I_s\;{\approx}\;1-10^{-9}$).}
\label{fig:uniform_tumbling_transition}
\end{figure}
\subsubsection{Tumbling Cycles}
We will now leverage the averaged model to better understand the observed tumbling cycles. Figure~\ref{fig:eom_contours_17} shows the signs of $\dot{\overline{I}}_d$, $\dot{\overline{\beta}}$, and $\dot{\overline{\omega}}_e$ computed over $I_d$ and $\beta$ (the sign contours for $\dot{\overline{H}}$ are nearly identical to those for $\dot{\overline{\omega}}_e$). The black regions denote negative values and the white regions denote positive values. To simplify analysis, $\dot{\overline{\beta}}$ (Eq.~\ref{eq:betadotavg}) has been averaged over $\overline{\alpha}$ to remove the $\overline{\alpha}$ dependence. This is valid because $\overline{\alpha}$ is a fast variable compared to $\overline{I}_d$, $\overline{\beta}$, and $\overline{\omega}_e$ during the tumbling cycles. The averaged model evolution from Figure~\ref{fig:evol_comp} has been overlaid on the contours in Figure~\ref{fig:eom_contours_17}. Starting at the green dot, Figure~\ref{fig:eom_contours_17}a shows that $\overline{I}_d$ will initially decrease as the satellite is pushed into more excited tumbling. As we near the separatrix (the dashed grey line), Figure~\ref{fig:eom_contours_17}b shows that $\beta$ will start increasing. At the same time, the satellite effective spin rate ($\overline{\omega}_e$) will begin increasing as well. These combined effects cause the satellite to proceed into more excited tumbling with a faster spin rate and the pole moving away from the sun. Once $\beta$ increases past 90$^{\circ}$ (i.e. pole perpendicular to the sun) the satellite begins spinning down and moving back towards uniform rotation. Upon crossing the separatrix, the signs of $\dot{\overline{\beta}}$ and $\dot{\overline{\omega}}_e$ flip. So, the satellite then spins up, entering a nearly uniform rotation phase with the pole moving back towards the sun direction. Finally, passing through $\beta=$ 90$^{\circ}$, $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ flip signs resulting in spin down and progression back towards tumbling.
At this point, the next tumbling cycle can begin. From Eqs.~\ref{eq:alphadotavg}, \ref{eq:betadotavg}, and \ref{eq:Iddotavg}, we note that the tumbling cycle duration will be driven directly by $\overline{H}$. The larger the initial $\overline{H}$, the slower the satellite will progress through the tumbling cycle. For GOES 8, any escape to long-term uniform rotation from these tumbling cycles will likely occur in the upper right (after passing upward across the separatrix). To escape, the satellite must spin up sufficiently before $\beta$ decreases below 90$^{\circ}$. Alternatively, capture into these tumbling cycles from uniform rotation ($I_d=I_s$) requires $\beta<$ 90$^{\circ}$ so that $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ are negative. If the spin rate is small enough, $\bm{H}$ will be pulled towards the sun line and the satellite will spin down and transition into a tumbling cycle.
\begin{figure}[H]
\centering
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_iddot_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\beta}}$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_mx_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\omega}}_e$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_wedot_with_evol.pdf}}
\caption{Signs of averaged parameter derivatives vs. $I_d$ and $\beta$ (SAM+/LAM+) for GOES 8 with the Figure~\ref{fig:evol_comp} averaged evolution overlaid in red, starting at the green dot. Black regions denote negative values and white regions denote positive values. The dashed gray line is the separatrix.}
\label{fig:eom_contours_17}
\end{figure}
\subsubsection{Sun-Tracking Behavior}
We will now discuss the sun-tracking precession behavior observed during tumbling cycles. The foundation of the following analysis is that $\bm{M}$ is nearly aligned with $\bm{\hat{Z}}\times\bm{\hat{H}}$ for the majority of the $I_d$ - $\beta$ phase space. To show this, we first calculate the component of $\bm{M}$ along $\bm{\hat{B}}=\bm{\hat{Z}}\times\bm{\hat{H}}/|\bm{\hat{Z}}\times\bm{\hat{H}}|$,
\begin{equation}
\bm{\hat{B}}\cdot\bm{M}=M_y
\label{eq:Bhat}
\end{equation}
and the angle between $\bm{M}$ and $\bm{\hat{B}}$ is then given by,
\begin{equation}
\cos\theta_{BM}=\bm{\hat{B}}\cdot\bm{\hat{M}}=\frac{M_y}{\sqrt{M^2_x+M^2_y+M^2_z}}
\label{eq:thetazxh}
\end{equation}
Eq.~\ref{eq:thetazxh} is plotted over $I_d$ and $\beta$ in Figure~\ref{fig:zhatxhhat}a for GOES 8. From the small $\theta_{BM}$, we see that $\bm{M}$ is closely aligned with $\bm{\hat{B}}$ for most LAM $I_d$, $\beta$ values and therefore nearly perpendicular to both $\bm{\hat{Z}}$ and $\bm{\hat{H}}$. This makes sense given the large relative magnitude of $M_y$ to $M_x$ and $M_z$ in Figures~\ref{fig:avg_torques}b,d. Figure~\ref{fig:zhatxhhat}b shows $\overline{M_y}$ calculated for a number of LAM $I_d$ values for GOES 8; we see that $\overline{M_y}\;{\approx}\;M\sin{\beta}$ for $I_d/I_s$ values near 0.4 - 0.5 (where $M$ is the arbitrary torque amplitude). From Figure~\ref{fig:evol_comp}b, we see that the satellite spends most of the tumbling cycle near $I_d/I_s=$ 0.45, where this $M\sin{\beta}$ approximation agrees best.
\begin{figure}[H]
\centering
\subcaptionbox{$\theta_{BM}$}{\includegraphics[width=3.2in]{ZhatxHhat_torque_angle_thetasp_17.png}}
\subcaptionbox{$\overline{M_y}$ and $M\sin{\beta}$ Approximation}{\includegraphics[width=3.2in]{My_sinbeta_comparison_thetasp_17.pdf}}
\caption{Structure of $\overline{M_y}$ for GOES 8 (SAM+/LAM+).}
\label{fig:zhatxhhat}
\end{figure}
Given this near orthogonality, we can develop an approximate system to better understand the sun-tracking precession. Approximating the torque as $\bm{M}=M_y\bm{\hat{B}}$, we can calculate $\frac{{^\mathcal{O}}d}{dt}(\bm{H})$ using the transport theorem,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=M_y\bm{\hat{B}}-\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}\times\bm{H}
\label{eq:Hvecdotorb}
\end{equation}
Then assuming $M_y=M\sin{\beta}$ and noting that $\sin{\beta}=|\bm{\hat{Z}}\times\bm{\hat{H}}|$, we can simplify Eq.~\ref{eq:Hvecdotorb} to find,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=\Bigg(\frac{M}{H}\bm{\hat{Z}}-n\bm{\hat{X}}\Bigg)\times\bm{H}
\label{eq:Hvecdotorb2}
\end{equation}
Since we assume $\bm{M}\cdot\bm{H}=0$, $H$ is constant. Therefore, Eq.~\ref{eq:Hvecdotorb2} is a linear system with constant coefficients. Solving the initial value problem with $\bm{H}(t=0)=H[\sin{\beta_o},0,\cos{\beta_o}]^T$, we find,
\begin{singlespace}
\begin{equation}
\bm{H}(t)=\frac{H}{\omega^2}\begin{bmatrix}
{\delta}(n\cos{\beta_o}+\delta\sin{\beta_o})\cos{\omega}t-n(\delta\cos{\beta_o}-n\sin{\beta_o}) \\
{\omega}(n\cos{\beta_o}+\delta\sin{\beta_o})\sin{\omega}t\\
n(n\cos{\beta_o}+\delta\sin{\beta_o})\cos{\omega}t+{\delta}(\delta\cos{\beta_o}-n\sin{\beta_o}) \\
\end{bmatrix}
\label{eq:H(t)}
\end{equation}
\end{singlespace}
\noindent where $\delta=M/H$ and $\omega = \sqrt{\delta^2+n^2}$. Note that $\bm{H}(t)$ is periodic with period $2\pi/\omega$. Taking the time derivative of Eq.~\ref{eq:H(t)}, we find,
\begin{singlespace}
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=H(n\cos{\beta_o}+\delta\sin{\beta_o})\begin{bmatrix}
-\frac{\delta}{\omega}\sin{\omega}t \\
\cos{\omega}t\\
-\frac{n}{\omega}\sin{\omega}t\\
\end{bmatrix}
\label{eq:Hdot(t)}
\end{equation}
\end{singlespace}
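As a consistency check (not from the paper), the sketch below integrates Eq.~\ref{eq:Hvecdotorb2} numerically and compares the result against the closed-form solution Eq.~\ref{eq:H(t)}. The values of $\delta/n$, $\beta_o$, and $H$ are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative values (not from the paper).
n = 2.0 * np.pi / (365.25 * 86400.0)        # sun-frame rate [rad/s]
delta = 5.0 * n                              # delta = M / H
H0, beta0 = 1.0, np.radians(45.0)
omega = np.hypot(delta, n)                   # omega = sqrt(delta^2 + n^2)

def H_closed(t):
    """Closed-form solution, Eq. (H(t))."""
    c = n * np.cos(beta0) + delta * np.sin(beta0)
    d = delta * np.cos(beta0) - n * np.sin(beta0)
    return (H0 / omega**2) * np.array([delta * c * np.cos(omega * t) - n * d,
                                       omega * c * np.sin(omega * t),
                                       n * c * np.cos(omega * t) + delta * d])

def rhs(t, H):
    """Eq. (Hvecdotorb2): dH/dt = (delta Z - n X) x H."""
    return np.cross(np.array([-n, 0.0, delta]), H)

t_end = 180.0 * 86400.0                      # 180 days
sol = solve_ivp(rhs, (0.0, t_end), H_closed(0.0), method='DOP853',
                rtol=1e-12, atol=1e-14, dense_output=True)
t_check = np.linspace(0.0, t_end, 200)
H_exact = np.array([H_closed(tc) for tc in t_check]).T
err = np.max(np.abs(sol.sol(t_check) - H_exact))
```

The closed form also gives $|\bm{H}(t)|=H$ directly, consistent with $\bm{M}\cdot\bm{H}=0$.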
For $\delta\gg n$, $\omega\approx\delta$, so $\dot{H}_Z$ is relatively small and evolution occurs mostly parallel to the $\bm{\hat{X}}$ - $\bm{\hat{Y}}$ plane (i.e. sun-tracking precession). Here, precession occurs much faster than the mean motion $n$ because $\omega\gg n$. As $\delta/n$ decreases, the precession rate slows and motion transitions more towards the $\bm{\hat{Y}}$ - $\bm{\hat{Z}}$ plane. As $\delta/n\rightarrow0$, $\dot{H}_X\rightarrow0$ and motion becomes confined parallel to the $\bm{\hat{Y}}$ - $\bm{\hat{Z}}$ plane with $\omega{\rightarrow}n$. Here, the torque is not sufficient to turn $\bm{H}$, which remains inertially fixed. Figure~\ref{fig:H(t)} illustrates this transition from sun-tracking precession to inertially fixed $\bm{H}$ for a number of $\delta/n$ values. Proceeding clockwise from lower right to upper left, $\delta/n$ decreases and circulation gradually transitions from $\bm{\hat{Z}}$ to $\bm{\hat{X}}$.
\begin{figure}[H]
\centering
\includegraphics[width=3.25in]{ZhatxHhat_approx_Horb_evol_t_180_day_dn_vary.pdf}
\caption{$\bm{\hat{H}}(t)$ from Eq.~\ref{eq:H(t)} over 180 days with varying $\delta/n$ (0, 0.5, 1, 2, 100). The green dot denotes the initial state ($\alpha$= 0$^{\circ}$, $\beta=$ 45$^{\circ}$) and the red dots denote the final states for each $\delta/n$.}
\label{fig:H(t)}
\end{figure}
\subsubsection{Influence of End-of-Life Configurations}
It is important to note that the counter-clockwise ($I_d$, $\beta$) motion in Figure~\ref{fig:eom_contours_17} is just one of the possible evolutionary scenarios. In Benson et al. \cite{benson2020b}, we found that long-term uniform GOES evolution strongly depends on the end-of-life solar array angle $\theta_{sa}$ (see Figures 8-12 in that paper and the associated discussion). Computing $\overline{M_x}$, $\overline{M_y}$, $\overline{M_z}$, and $\dot{\overline{I}}_d$ over all possible end-of-life GOES 8 solar array angles with the averaged model, we obtain the contours shown in Figure~\ref{fig:torque_beta_thetasp_sam}. For $\dot{\overline{I}}_d$, $\omega_e=$ 2$\pi$ rad/s was again assumed. Sweeping over $\theta_{sa}$, the averaged components change significantly in sign and magnitude, indicating that $\theta_{sa}$ greatly affects general long-term satellite evolution. The results in Figure~\ref{fig:torque_beta_thetasp_sam} are analogous to the uniform spin-averaged coefficients in Ref. \cite{benson2020b}. The most easily comparable are $\overline{M_z}$ and $\mathcal{C}_{0,z}$, which share very similar structure (see Figures 8 and 11 in that paper). In addition, for $\theta_{sa}$ near odd multiples of 42$^{\circ}$, we find that $\overline{M_x}$, $\overline{M_z}$, and $\dot{\overline{I}}_d$ are approximately zero. These critical $\theta_{sa}$ values also hold for the uniform spin-averaged results in Ref. \cite{benson2020b}. Obviously, these negligible torque configurations are specific to GOES 8's geometry and mass distribution. For other satellites, the averaged framework will allow for fast and efficient studies of the parameter space to identify any similar configurations. These GOES findings illustrate the potential to reduce long-term spin state variation by properly setting end-of-life configurations.
\begin{figure}[H]
\centering
\subcaptionbox{$\overline{M_x}$}{\includegraphics[width=3in]{tumbling_avg_mx_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\overline{M_y}$}{\includegraphics[width=3in]{tumbling_avg_my_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\overline{M_z}$}{\includegraphics[width=3in]{tumbling_avg_mz_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{tumbling_avg_iddot_beta_thetasp_id_3500_360.png}}
\caption{GOES 8 Averaged Terms vs. $\beta$ and Solar Array Angle $\theta_{sa}$ (SAM+ $I_d=$ 3500 kg${\cdot}$m$^2$)}
\label{fig:torque_beta_thetasp_sam}
\end{figure}
We will now briefly consider the long-term evolution for GOES 8 with a different solar array angle. Changing GOES 8's $\theta_{sa}$ from 17$^{\circ}$ to 70$^{\circ}$ yields the contours in Figure~\ref{fig:eom_contours_70}. Here, the signs of $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ are essentially mirrored about $\beta=$ 90$^{\circ}$ as compared to Figure~\ref{fig:eom_contours_17}. For $\dot{\overline{\beta}}$, the sign is mirrored about the separatrix. Complementing the contours is the six-year averaged evolution given by the following initial conditions: $\overline{\alpha}=$ 0$^{\circ}$, $\overline{\beta}=$ 165$^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ (SAM+), and $P_e=$ 240 min. The satellite goes through several tumbling cycles as in Figure~\ref{fig:eom_contours_17} except that ($I_d$, $\beta$) evolution instead proceeds clockwise with $\beta$ now decreasing over the course of each tumbling cycle.
\begin{figure}[H]
\centering
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_iddot_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\beta}}$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_mx_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\omega}}_e$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_wedot_with_evol.pdf}}
\caption{Same as Figure~\ref{fig:eom_contours_17} except with $\theta_{sa}=$ 70$^{\circ}$ and corresponding averaged evolution.}
\label{fig:eom_contours_70}
\end{figure}
\section{Discussion}
Comparing the full and averaged dynamical models in Section IV, we found that the averaged model captures the tumbling cycles and sun-tracking behavior of the full model. Nevertheless, there were quantitative differences between the two models due to our averaging assumptions. Most notable are the neglect of resonances and the second-order Fourier series approximation of the illumination function. Higher-order Fourier series approximations of $\mathrm{max}(0,\bm{\hat{u}}\cdot{\bm{\hat{n}}})$ would yield better agreement with the full model at the expense of increased averaged model complexity. Another shortfall of the current semi-analytical averaged model is that $\dot{\alpha}$ is singular for $\beta=$ 0$^{\circ}$ and 180$^{\circ}$. Again, this could be remedied by replacing $\alpha$ and $\beta$ with an alternate coordinate set when very close to these singularities (e.g. $v=\sin{\alpha}\sin{\beta}$ and $w=\cos{\alpha}\sin{\beta}$, which have a $\beta$ ambiguity). In practice, though, these singularities were never encountered during averaged model propagation, so this approach was not implemented in our model. Finally, while this paper only considered solar torques, the averaged model could be readily expanded to include energy dissipation as well as averaged gravity gradient and magnetic torques.
Given the transition from uniform rotation to non-principal axis tumbling observed for the GOES model, it is possible that other satellites could undergo similar transitions. There is clear indication that defunct satellites are exhibiting large-amplitude, secular period variations consistent with Figure~\ref{fig:uniform_tumbling_transition} \citep{earl,rachman}. From the active debris removal (ADR)/servicing perspective, this implies that a satellite may not remain in uniform rotation indefinitely. In general, a uniform to tumbling transition would require a secular decrease in uniform spin rate with $\dot{\overline{I}}_d<0$. Furthermore, the results in Figure~\ref{fig:uniform_tumbling_transition} demonstrate that the transition to tumbling could occur quickly, in a couple of weeks or less. From Figures~\ref{fig:eom_contours_17} and \ref{fig:eom_contours_70}, it seems possible for a satellite to escape these tumbling cycles and enter fast uniform rotation, a process that could occur just as rapidly. As a result, target satellite spin state monitoring and prediction will be crucial for ADR and servicing.

The possible existence of tumbling cycles would have additional implications for ADR and servicing missions. Slow, uniform rotation would be ideal for rendezvous, capture, and de-spin procedures. Even for proposed ``touchless'' electromagnetic detumbling approaches \cite{gomez2015}, leveraging YORP to partially de-spin a target satellite would reduce the time, energy, and risk required by the ADR/servicing spacecraft. So predicting future windows of slow, uniform rotation between the tumbling phases would be valuable.

The above analysis shows that the primary driver of sun-tracking for GOES is the near orthogonality of the solar torque and the sun line. It would be valuable to determine how often this orthogonality holds for different satellites and rocket bodies. In terms of satellite size $r$, $I_d\;{\propto}\;r^5$ and the solar torque $\bm{M}\;{\propto}\;r^3$. 
So Eqs.~\ref{eq:alphadotavg}, \ref{eq:betadotavg}, \ref{eq:Iddotavg} ($\overline{I}_d$ normalized), and \ref{eq:wedotavg} are proportional to $1/r^2$. In other words, reducing satellite size by a factor of ten (maintaining density and optical properties) will cause it to evolve 100 times faster. Similarly, $\delta/n\;{\propto}\;1/r^2$, so sun-tracking precession is equally more effective for smaller satellites.
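This scaling argument can be sketched in a few lines (the factor-of-ten size reduction is purely illustrative):

```python
import numpy as np

# Averaged YORP rates scale as (solar torque)/(moment of inertia).
# With fixed density and optical properties, I_d ~ r^5 and M ~ r^3,
# so the averaged rates scale as r^3 / r^5 = 1/r^2.
def rate_scale(r_ratio):
    I_scale = r_ratio**5   # moment of inertia scaling
    M_scale = r_ratio**3   # solar torque scaling
    return M_scale / I_scale

# A satellite one tenth the size evolves ~100 times faster.
assert np.isclose(rate_scale(0.1), 100.0)
```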
The importance of solar array angle on long-term GOES evolution demonstrates the potential for dictating the post-disposal spin state evolution of defunct satellites by carefully setting their end of life configurations. For example, configurations that minimize $|\dot{\overline{\beta}}|$ could be used to shut off or greatly slow the observed tumbling cycles, facilitating debris removal and servicing missions. Also, minimizing $|\overline{M_z}|$ would reduce spin rates and their variation amplitude, making satellites easier to capture and reducing potential for material shedding.
Here, it is also worthwhile to briefly discuss the implications of our findings for natural small bodies such as asteroids. In YORP simulations, Vokrouhlick\'y et al. found that small asteroids can exhibit transitions from uniform rotation to tumbling and subsequent tumbling spin-up \cite{vok2007}. Given these similarities, it is possible that the tumbling cycles, angular momentum sun-tracking, and tumbling resonances observed here for artificial satellites hold for some asteroids as well. Since solar radiation pressure scales as $1/a^2$, where $a$ is the heliocentric semi-major axis, Eqs.~\ref{eq:alphadotavg} - \ref{eq:wedotavg} scale the same way. Furthermore, the mean motion $n$ goes as $1/\sqrt{a^3}$, so $\delta/n\;{\propto}\;1/\sqrt{a}$. This implies that uniform to tumbling transitions, tumbling cycles, and sun-tracking precession would be more likely for smaller asteroids in the inner solar system (all else equal). Again, dedicated study of these natural small bodies is needed to determine whether the tumbling-averaged torques provide the necessary structure for this behavior (e.g. near orthogonality of $\bm{\hat{B}}$ and $\bm{M}$).
\section{Conclusions}
This paper illustrates the complex, yet structured Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect behavior for defunct satellites including transitions from uniform rotation to non-principal axis tumbling, angular momentum sun-tracking, tumbling cycles, and resonant YORP tumbling states. To help understand these behaviors, we developed a semi-analytical tumbling-averaged YORP model. This model captures the uniform to tumbling transition, sun-tracking, and tumbling cycle behavior observed with the full dynamics while being roughly three orders of magnitude faster to propagate. Furthermore, the averaged model uncovers the mechanics behind the observed tumbling transition, sun-tracking, and tumbling cycles. Overall, the greater computational efficiency and reduced state space of the averaged framework allows us to more easily classify and analyze the general YORP evolution of different satellites and rocket bodies with various end of life configurations.
\section*{Appendix A: Torque Free Solutions}
Here we summarize the analytical solutions for torque-free rotation. We assume the long axis convention where the $\bm{\hat{b}}_1$, $\bm{\hat{b}}_2$, and $\bm{\hat{b}}_3$ body axes are aligned with the intermediate ($I_i$), maximum ($I_s$), and minimum ($I_l$) principal moments of inertia, respectively. 3-1-3 ($\phi$-$\theta$-$\psi$) Euler angles are used to rotate between the $\mathcal{H}$ and $\mathcal{B}$ frames. This is the same convention used in Refs. \cite{sa1991,bensonda14}.
Equating $\bm{H}$ in the $\mathcal{H}$ and $\mathcal{B}$ frames with Eq.~\ref{eq:BH}, we find,
\begin{equation}
a_{z1}=\sin{\theta}\sin{\psi}=\frac{I_i\omega_1}{I_d\omega_e}\;\;\;\;\;\;\;a_{z2}=\sin{\theta}\cos{\psi}=\frac{I_s\omega_2}{I_d\omega_e}\;\;\;\;\;\;a_{z3}=\cos{\theta}=\frac{I_l\omega_3}{I_d\omega_e}
\label{eq:sthetaspsi}
\end{equation}
The angles $\theta$ and $\psi$ can be unambiguously calculated using Eq.~\ref{eq:sthetaspsi} with Eq.~\ref{eq:w_lam} for LAMs or Eq.~\ref{eq:w_sam} for SAMs. The equations for $\phi$ are much more complicated and are provided below.
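As a sketch of this calculation (the inertia and spin values below are hypothetical, chosen only to satisfy the long axis convention), $\theta$ follows from an arccosine and $\psi$ from a four-quadrant arctangent of the $a_{zi}$ components:

```python
import numpy as np

def theta_psi(w, Ii, Is, Il, Id, we):
    """3-1-3 Euler angles theta, psi from body rates w = [w1, w2, w3],
    inverting a_z1 = sin(theta)sin(psi), a_z2 = sin(theta)cos(psi),
    a_z3 = cos(theta) (long axis convention)."""
    az1 = Ii * w[0] / (Id * we)
    az2 = Is * w[1] / (Id * we)
    az3 = Il * w[2] / (Id * we)
    theta = np.arccos(az3)       # 0 <= theta <= pi
    psi = np.arctan2(az1, az2)   # quadrant-unambiguous
    return theta, psi

# round-trip check with hypothetical values (kg m^2, rad/s)
Ii, Is, Il, Id, we = 3000.0, 3600.0, 980.0, 3500.0, 0.01
th, ps = 1.2, 2.5
w = [Id * we * np.sin(th) * np.sin(ps) / Ii,
     Id * we * np.sin(th) * np.cos(ps) / Is,
     Id * we * np.cos(th) / Il]
assert np.allclose(theta_psi(w, Ii, Is, Il, Id, we), (th, ps))
```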
\subsection*{Long Axis Modes}
For long axis modes (LAMs), the body frame angular velocity $\boldsymbol{\omega}=[\omega_1,\omega_2,\omega_3]^T$ is given by,
\begin{equation}
\omega_1=\pm\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_i(I_i-I_l)}}\sn{\tau}\;\;\;\;\;\;\omega_2=\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_s(I_s-I_l)}}\cn{\tau}\;\;\;\;\;\;\omega_3=\pm\omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_l(I_s-I_l)}}\dn{\tau}
\label{eq:w_lam}
\end{equation}
where $\sn\tau$, $\cn\tau$, and $\dn\tau$ are Jacobi elliptic functions \citep{sa1991,numericalrecipes,friedman}. The $\pm$ distinguishes between the two possible LAM regions: $+$ for ${\omega}_3>0$ (LAM+) and $-$ for $\omega_3<0$ (LAM-). For LAMs, $\tau$ is given by,
\begin{equation}
\label{eq:tauLAM}
\tau = \tau_o + \omega_e\sqrt{\frac{I_d(I_i-I_l)(I_s-I_d)}{I_lI_iI_s}}(t-t_o)
\end{equation}
where $t$ is the time and $t_o$, $\tau_o$ are the initial values. The period of $\sn\tau$ and $\cn\tau$ is $4K(k)$, while $\dn\tau$ is periodic on $2K(k)$, where $K(k)$ is the complete elliptic integral of the first kind \citep{landau,numericalrecipes},
\begin{equation}
\label{K}
K(k)={\int_0^{\pi/2}}\frac{du}{\sqrt{1-k^{2}\sin^{2}\!u}}
\end{equation}
and $k$ is the modulus. The parameter $n$ features in the torque-free solutions for $\phi$ and $P_{\bar{\phi}}$. For LAMs, $k$ and $n$ are given by,
\begin{equation}
\label{eq:knLAM}
k^2=\frac{(I_s-I_i)(I_d-I_l)}{(I_i-I_l)(I_s-I_d)}\;\;\;\;\;\;n=\frac{I_l}{I_s}\frac{(I_s-I_i)}{(I_i-I_l)}
\end{equation}
For LAMs, the Euler angle $\phi$ is given by,
\begin{equation}
\label{phiLAMPibar}
\phi = \phi_o + \frac{H}{I_l}(t-t_o)-(I_s-I_l)\sqrt{\frac{I_iI_d}{I_lI_s(I_i-I_l)(I_s-I_d)}}\Big[\bar{\Pi}(\tau,n)-\bar{\Pi}(\tau_o,n)\Big]
\end{equation}
where $\bar{\Pi}(\tau,n)$ is the modified incomplete elliptic integral of the third kind. Most routines for calculating the incomplete elliptic integral of the third kind $\Pi(\tau,n)$ (e.g. Ref. \cite{numericalrecipes}) only accept $0\leq\tau\leq{K(k)}$ even though $\tau$ increases without bound with $t$. To calculate $\bar{\Pi}(\tau,n)$ correctly, we use the following algorithm \citep{bensonda14}. Dropping the implied dependence of $K$ on $k$ for brevity,
\begin{enumerate}
\item If $\tau$ has most recently passed through an even multiple of $K$, i.e. if $\mathrm{mod}(m,2)=0$,
\begin{equation}
\label{Pibareven}
\bar{\Pi}(\tau,n) = m\Pi(K,n)+\Pi(\tau-mK,n)
\end{equation}
\item Instead, if $\tau$ has most recently passed through an odd multiple of $K$, i.e. if $\mathrm{mod}(m,2)=1$,
\begin{equation}
\label{Pibarodd}
\bar{\Pi}(\tau,n) = (m+1)\Pi(K,n)-\Pi\Big((m+1)K-\tau,n\Big)
\end{equation}
\end{enumerate}
Here, the integer multiple $m=\mathrm{int}(\tau/K)$ and $\mathrm{mod}$ is the modulo (remainder after division) operator.
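A minimal implementation of this algorithm might look as follows, using numerical quadrature of $1/(1-n\,\sn^2\tau)$ in place of a library routine for $\Pi(\tau,n)$ on $[0,K]$ (the quadrature stand-in is our choice, not a routine from the references):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipj, ellipk

def inc_pi(tau, n, k):
    """Incomplete elliptic integral of the third kind in the argument tau,
    Pi(tau, n) = int_0^tau dt / (1 - n sn^2(t, k)), for 0 <= tau <= K(k)."""
    integrand = lambda t: 1.0 / (1.0 - n * ellipj(t, k**2)[0]**2)
    return quad(integrand, 0.0, tau)[0]

def pi_bar(tau, n, k):
    """Modified integral Pi_bar(tau, n) for unbounded tau, following the
    even/odd multiple-of-K unwrapping algorithm."""
    K = ellipk(k**2)                 # scipy takes the parameter m = k^2
    m = int(tau // K)                # most recent multiple of K passed
    if m % 2 == 0:                   # even multiple of K
        return m * inc_pi(K, n, k) + inc_pi(tau - m * K, n, k)
    return (m + 1) * inc_pi(K, n, k) - inc_pi((m + 1) * K - tau, n, k)
```

Since the integrand is $2K$-periodic and symmetric about multiples of $K$, $\bar{\Pi}$ computed this way can be checked against direct quadrature of the same integrand over $[0,\tau]$ for arbitrary $\tau$.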
For LAMs, the average period of $\phi$ ($P_{\bar{\phi}}$) and the constant period of $\psi$ ($P_\psi$) are given by,
\begin{equation}
\label{PphiLAM}
P_{\bar{\phi}}=\frac{2\pi}{\omega_e}\frac{I_l}{I_d}\Bigg[1-\frac{(I_s-I_l)}{I_s}\frac{\Pi(K,n)}{K}\Bigg]^{-1}
\end{equation}
\begin{equation}
\label{Ppsi_lam}
P_{\psi}=\frac{4}{\omega_e}\sqrt{\frac{I_{l}I_{i}I_{s}}{I_{d}(I_i-I_l)(I_s-I_d)}}K
\end{equation}
\subsection*{Short Axis Modes}
For short axis modes (SAMs), the body frame angular velocity $\boldsymbol{\omega}=[\omega_1,\omega_2,\omega_3]^T$ is given by,
\begin{equation}
\label{eq:w_sam}
\omega_1 = \omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_i(I_s-I_i)}}\sn{\tau}\;\;\;\;\;\;\omega_2 = \pm\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_s(I_s-I_l)}}\dn{\tau}\;\;\;\;\;\;\omega_3 = \pm\omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_l(I_s-I_l)}}\cn{\tau}
\end{equation}
Again, $+$ holds for ${\omega}_2>0$ and $-$ holds for $\omega_2<0$ (SAM$+$ and SAM$-$). For SAMs, $\tau$, $k$, and $n$ are,
\begin{equation}
\label{eq:tauSAM}
\tau = \tau_o + \omega_e\sqrt{\frac{I_d(I_s-I_i)(I_d-I_l)}{I_lI_iI_s}}(t-t_o)
\end{equation}
\begin{equation}
\label{eq:knSAM}
k^2 = \frac{(I_i-I_l)(I_s-I_d)}{(I_s-I_i)(I_d-I_l)}\;\;\;\;\;\;n=\frac{I_l}{I_s}\frac{(I_s-I_d)}{(I_d-I_l)}
\end{equation}
For SAMs, $\phi$ is instead given by,
\begin{equation}
\label{phiSAMPibar}
\phi = \phi_o + \frac{H}{I_l}(t-t_o)-(I_s-I_l)\sqrt{\frac{I_iI_d}{I_lI_s(I_s-I_i)(I_d-I_l)}}\Big[\bar{\Pi}(\tau,n)-\bar{\Pi}(\tau_o,n)\Big]
\end{equation}
For SAMs, $P_{\bar{\phi}}$ is also given by Eq.~\ref{PphiLAM} with $n$ from Eq.~\ref{eq:knSAM}. Finally, $P_\psi$ for SAMs is given by,
\begin{equation}
\label{Ppsi_s}
P_{\psi}=\frac{4}{\omega_e}\sqrt{\frac{I_{l}I_{i}I_{s}}{I_{d}(I_s-I_i)(I_d-I_l)}}K
\end{equation}
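As a numerical sanity check of these solutions (with hypothetical, loosely GOES-like inertias), the SAM body rates should conserve the angular momentum magnitude $H = I_d\omega_e$ and satisfy $I_d = H^2/(2T)$ at every instant:

```python
import numpy as np
from scipy.special import ellipj

# hypothetical long-axis-convention inertias (kg m^2): Ii intermediate,
# Is maximum, Il minimum; a SAM requires Ii < Id < Is
Ii, Is, Il = 3000.0, 3600.0, 980.0
Id, we = 3500.0, 2.0 * np.pi / (240.0 * 60.0)   # Pe = 240 min

k2 = (Ii - Il) * (Is - Id) / ((Is - Ii) * (Id - Il))   # SAM modulus squared
tau = np.linspace(0.0, 10.0, 201)
sn, cn, dn, _ = ellipj(tau, k2)

# SAM+ body rates (Eq. w_sam)
w1 = we * np.sqrt(Id * (Is - Id) / (Ii * (Is - Ii))) * sn
w2 = we * np.sqrt(Id * (Id - Il) / (Is * (Is - Il))) * dn
w3 = we * np.sqrt(Id * (Is - Id) / (Il * (Is - Il))) * cn

H = np.sqrt((Ii * w1)**2 + (Is * w2)**2 + (Il * w3)**2)  # |H| at each tau
twoT = Ii * w1**2 + Is * w2**2 + Il * w3**2              # twice kinetic energy
assert np.allclose(H, Id * we)          # |H| conserved, equal to Id*we
assert np.allclose(twoT, Id * we**2)    # so Id = H^2/(2T) as defined
```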
\section*{Appendix B: Averaged Quantities}
From Ref. \cite{friedman}, we can obtain the following elliptic function averages,
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn\tau{d}\tau = 0
\label{eq:az1avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn\tau{d}\tau=0
\label{eq:az2avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn\tau{d}\tau = \frac{\pi}{2K}
\label{eq:az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau{d}\tau = \frac{K - E}{k^2K}
\label{eq:az12avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau{d}\tau = \frac{E - k'^2K}{k^2K}
\label{eq:az22avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^2\!\tau{d}\tau = \frac{E}{K}
\label{eq:az32avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\dn\tau{d}\tau = \frac{\pi}{4K}
\label{eq:az12az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau\dn\tau{d}\tau = \frac{\pi}{4K}
\label{eq:az22az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^3\!\tau{d}\tau = \frac{(k'^2+1)\pi}{4K}
\label{eq:az33avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^4\!\tau{d}\tau = \frac{(k^2+2)K-2(k^2+1)E}{3k^4K}
\label{eq:az14avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^4\!\tau{d}\tau = \frac{(4k^2-2)E-k'^2(3k^2-2)K}{3k^4K}
\label{eq:az24avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^4\!\tau{d}\tau = \frac{2(k'^2+1)E-k'^2K}{3K}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\cn^2\!\tau{d}\tau = \frac{(1+k'^2)E-2k'^2K}{3k^4K}
\label{eq:az12az22avgavgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\dn^2\!\tau{d}\tau = \frac{(2k^2-1)E+k'^2K}{3k^2K}
\label{eq:az12az32avgavgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau\dn^2\!\tau{d}\tau = \frac{(1+k^2)E-k'^2K}{3k^2K}
\label{eq:az22az32avgavgLAM}
\end{equation}
\normalsize
where $E$ is the complete elliptic integral of the second kind \citep{numericalrecipes} and $k'^2=1-k^2$.
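These closed forms are straightforward to spot-check numerically; for example, the $\sn^2\!\tau$ and $\sn^2\!\tau\dn\tau$ averages (the modulus value below is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipj, ellipk, ellipe

k = 0.7
m = k**2                      # scipy uses the parameter m = k^2
K, E = ellipk(m), ellipe(m)

sn = lambda t: ellipj(t, m)[0]
dn = lambda t: ellipj(t, m)[2]

# <sn^2> over one 4K period should equal (K - E)/(k^2 K)
avg_sn2 = quad(lambda t: sn(t)**2, 0.0, 4.0 * K)[0] / (4.0 * K)
assert np.isclose(avg_sn2, (K - E) / (m * K))

# <sn^2 dn> should equal pi/(4K)
avg_sn2dn = quad(lambda t: sn(t)**2 * dn(t), 0.0, 4.0 * K)[0] / (4.0 * K)
assert np.isclose(avg_sn2dn, np.pi / (4.0 * K))
```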
\subsection*{Long Axis Modes}
After averaging over $\phi$, we follow Ref. \cite{cicalo} and write all averaged expressions in terms of $\overline{a_{z1}}$, $\overline{a_{z2}}$, and $\overline{a_{z3}}$ because they are independent of $\phi$. Following the notation of Eq.~\ref{eq:psiavgK}, for LAMs we have the following expressions with ($1$, $2$, $3$) subscripts denoting the $\mathcal{B}$ frame vector components and using $f$ as a placeholder for $d$ and $r$.
\small
\begin{equation}
\overline{f_z} = f_3\overline{a_{z3}}
\label{eq:l_fz}
\end{equation}
\begin{equation}
\overline{f_xn_x} = \frac{1}{2}\Big(f_3n_3 - f_1n_1\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_3n_3 - f_2n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_1n_1+f_2n_2\Big)
\label{eq:l_fxnx}
\end{equation}
\begin{equation}
\overline{f_yn_x} = \frac{1}{2}\Big(f_2n_1 - f_1n_2\Big)\overline{a_{z3}}
\label{eq:l_fynx}
\end{equation}
\begin{equation}
\overline{f_zn_z} = f_1n_1\overline{a^2_{z1}} + f_2n_2\overline{a^2_{z2}} + f_3n_3\overline{a^2_{z3}}
\label{eq:l_fznz}
\end{equation}
\begin{equation}
\overline{f_xn_xn_z} = \frac{1}{2}\Big(f_1n_1n_3 + f_2n_2n_3\Big)\overline{a_{z3}} - \frac{1}{2}\Big(f_3n^2_1 + 2f_1n_1n_3 - f_3n^2_3\Big)\overline{a^2_{z1}a_{z3}} - \frac{1}{2}\Big(f_3n^2_2 + 2f_2n_2n_3 - f_3n^2_3\Big)\overline{a^2_{z2}a_{z3}}
\label{eq:l_fxnxnz}
\end{equation}
\begin{equation}
\overline{f_yn_xn_z} = \frac{1}{2}\Big(f_3n_1n_2 - f_2n_1n_3\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_1n_2n_3 - f_3n_1n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_2n_1n_3 - f_1n_2n_3\Big)\overline{a^2_{z3}}
\label{eq:l_fynxnz}
\end{equation}
\begin{equation}
\overline{f_zn^2_x} = \frac{1}{2}\Big(f_3n^2_1 + f_3n^2_2\Big)\overline{a_{z3}} - \frac{1}{2}\Big(f_3n^2_1 + 2f_1n_1n_3 - f_3n^2_3\Big)\overline{a^2_{z1}a_{z3}} - \frac{1}{2}\Big(f_3n^2_2 + 2f_2n_2n_3 - f_3n^2_3\Big)\overline{a^2_{z2}a_{z3}}
\label{eq:l_fznx2}
\end{equation}
\begin{equation}
\overline{f_zn^2_z} = \Big(f_3n^2_1 + 2f_1n_1n_3\Big)\overline{a^2_{z1}a_{z3}} + \Big(f_3n^2_2 + 2f_2n_2n_3\Big)\overline{a^2_{z2}a_{z3}} + f_3n^2_3\overline{a^3_{z3}}
\label{eq:l_fznz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn^3_x} = & +\frac{3}{8}\Big(3f_3n^2_1n_3 -2f_1n^3_1 - f_2n^2_1n_2 - f_1n_1n^2_2 + 3f_1n_1n^2_3 +f_3n^2_2n_3 + f_2n_2n^2_3\Big)\overline{a^2_{z1}} \\ &
+ \frac{3}{8}\Big(- f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 - 2f_2n^3_2 + 3f_3n^2_2n_3 + 3f_2n_2n^2_3\Big)\overline{a^2_{z2}} \\ &
+ \frac{3}{8}\Big(f_1n^3_1 - 3f_3n^2_1n_3 - 3f_1n_1n^2_3 + f_3n^3_3\Big)\overline{a^4_{z1}}
+ \frac{3}{8}\Big(f_2n^3_2- 3f_3n^2_2n_3 - 3f_2n_2n^2_3 + f_3n^3_3\Big)\overline{a^4_{z2}} \\ &
+ \frac{3}{8}\Big(3f_2n^2_1n_2 - 3f_3n^2_1n_3 + 3f_1n_1n^2_2 - 3f_1n_1n^2_3 - 3f_3n^2_2n_3 - 3f_2n_2n^2_3 + 2f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}} \\ &
+ \frac{3}{8}\Big(f_1n^3_1 + f_2n^2_1n_2 + f_1n_1n^2_2 + f_2n^3_2\Big)
\end{split}
\label{eq:l_fxnx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn^2_z} = & + \frac{1}{2}\Big(f_1n^3_1 - 2f_3n^2_1n_3 + f_2n_2n^2_1 - 4f_1n_1n^2_3 + f_3n^3_3 - f_2n_2n^2_3\Big)\overline{a^2_{z1}} \\ &
+ \frac{1}{2}\Big(f_2n^3_2 - 2f_3n^2_2n_3 + f_1n_1n^2_2 - 4f_2n_2n^2_3 + f_3n^3_3 - f_1n_1n^2_3\Big)\overline{a^2_{z2}} \\ &
+ \frac{1}{2}\Big(-f_1n^3_1 + 3f_3n^2_1n_3 + 3f_1n_1n^2_3 - f_3n^3_3\Big)\overline{a^4_{z1}} \\ &
+ \frac{3}{2}\Big(f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 +f_3n^2_2n_3 + f_2n_2n^2_3 - \frac{2}{3}f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}} \\ &
+ \frac{1}{2}\Big(-f_2n^3_2 + 3f_3n^2_2n_3 + 3f_2n_2n^2_3 - f_3n^3_3\Big)\overline{a^4_{z2}}+ \frac{1}{2}\Big(f_1n_1n^2_3 + f_2n_2n^2_3\Big)
\end{split}
\label{eq:l_fxnxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn^3_x} = & - \frac{3}{8}\Big(f_2n^3_1 - f_1n_2n^2_1 - 3f_2n_1n^2_3 + 2f_3n_2n_1n_3 + f_1n_2n^2_3\Big)\overline{a^2_{z1}a_{z3}} \\ &
+ \frac{3}{8}\Big(f_1n^3_2 - f_2n_1n^2_2 - 3f_1n_2n^2_3 + 2f_3n_1n_2n_3 + f_2n_1n^2_3\Big)\overline{a^2_{z2}a_{z3}} \\ &
- \frac{3}{8}\Big(- f_2n^3_1 + f_1n^2_1n_2 - f_2n_1n^2_2 + f_1n^3_2\Big)\overline{a_{z3}}
\end{split}
\label{eq:l_fynx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn^2_z} = & + \frac{1}{2}\Big(f_2n^3_1 - f_1n_2n^2_1 - 3f_2n_1n^2_3 + 2f_3n_2n_1n_3 + f_1n_2n^2_3\Big)\overline{a^2_{z1}a_{z3}} \\ &
- \frac{1}{2}\Big(f_1n^3_2 - f_2n_1n^2_2 - 3f_1n_2n^2_3 + 2f_3n_1n_2n_3 + f_2n_1n^2_3\Big)\overline{a^2_{z2}a_{z3}}\\ &
- \frac{1}{2}\Big(f_1n_2 - f_2n_1\Big)n^2_3\overline{a_{z3}}
\end{split}
\label{eq:l_fynxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_xn_z} = & + \frac{1}{2}\Big(f_1n^3_1 - 4f_3n^2_1n_3 + f_1n_1n^2_2 - 2f_1n_1n^2_3 - f_3n^2_2n_3 + f_3n^3_3\Big)\overline{a^2_{z1}}\\ &
+ \frac{1}{2}\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_2n_2^3 - 4f_3n_2^2n_3 - 2f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_3n_3n_1^2+f_3n_3n_2^2\Big) \\ &
+ \frac{1}{2}\Big(- f_1n^3_1 + 3f_3n^2_1n_3 + 3f_1n_1n^2_3 - f_3n^3_3\Big)\overline{a^4_{z1}}
+ \frac{1}{2}\Big(-f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{3}{2}\Big(-f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 + f_3n^2_2n_3 + f_2n_2n^2_3 - \frac{2}{3}f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}}
\end{split}
\label{eq:l_fznx2nz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^3_z} = & + \Big(3f_3n_1^2n_3 + 3f_1n_1n_3^2 - 2f_3n_3^3\Big)\overline{a_{z1}^2}
+ \Big(3f_3n_2^2n_3 + 3f_2n_2n_3^2 - 2f_3n_3^3\Big)\overline{a_{z2}^2} \\ &
+ \Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ \Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ 3\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_1n_1n_2^2 - f_1n_1n_3^2 - f_3n_2^2n_3 - f_2n_2n_3^2 + \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2}
+ f_3n_3^3
\end{split}
\label{eq:l_fznz3}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z1}\delta_1}= & +\frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^4}
+ \frac{1}{4}\Big(2n_1r_2u_z^2 - n_1r_2u_x^2\Big)\overline{a_{z1}^2a_{z3}} \\ &
+ \frac{2}{3\pi}\Big(6n_1n_2r_3u_x^2u_z - 4n_1n_3r_2u_z^3 - 4n_1n_2r_3u_z^3 + 6n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{4}{3\pi}\Big(2n_1n_3r_2u_z^3 - n_1n_2r_3u_x^2u_z - 2n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z2}\delta_2}= & +\frac{2}{3\pi}\Big(4n_1n_2r_3u_z^3 + 4n_2n_3r_1u_z^3 - 6n_1n_2r_3u_x^2u_z - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^4}
+ \frac{1}{4}\Big(n_2r_1u_x^2 - 2n_2r_1u_z^2\Big)\overline{a_{z2}^2a_{z3}} \\ &
+ \frac{4}{3\pi}\Big(n_1n_2r_3u_x^2u_z - 2n_2n_3r_1u_z^3 + 2n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z3}\delta_3}= & +\frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^2a_{z3}^2}
+ \frac{1}{4}\Big(n_1r_2u_x^2 - 2n_1r_2u_z^2\Big)\overline{a_{z1}^2a_{z3}} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2a_{z3}^2}
- \frac{1}{4}\Big(n_2r_1u_x^2 - 2n_2r_1u_z^2\Big)\overline{a_{z2}^2a_{z3}} \\ &
+ \frac{4}{3\pi}\Big(n_2n_3r_1u_x^2u_z - n_1n_3r_2u_x^2u_z\Big)\overline{a_{z3}^2}
- \frac{1}{4\pi}\Big(n_1r_2u_x^2 - n_2r_1u_x^2\Big)\overline{a_{z3}}
\end{split}
\end{equation}
\subsection*{Short Axis Modes}
\normalsize
The following averaged expressions hold for SAMs,
\small
\begin{equation}
\overline{f_z} = f_2\overline{a_{z2}}
\label{eq:s_fz}
\end{equation}
\begin{equation}
\overline{f_xn_x} = \frac{1}{2}\Big(f_3n_3 - f_1n_1\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_3n_3 - f_2n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_1n_1+f_2n_2\Big)
\label{eq:s_fxnx}
\end{equation}
\begin{equation}
\overline{f_yn_x} = \frac{1}{2}(f_1n_3 - f_3n_1)\overline{a_{z2}}
\label{eq:s_fynx}
\end{equation}
\begin{equation}
\overline{f_zn_z} = f_1n_1\overline{a^2_{z1}} + f_2n_2\overline{a^2_{z2}} + f_3n_3\overline{a^2_{z3}}
\label{eq:s_fznz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn_z} = & + \frac{1}{2}\Big(f_2n_3^2-f_2n_1^2 - 2f_1n_2n_1 + 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}} \\&
+ \frac{1}{2}\Big(f_2n_3^2 - f_2n_2^2 + 2f_3n_2n_3 \Big)\overline{a_{z2}^3}
+ \frac{1}{2}\Big(f_2n_2^2 - f_3n_2n_3 + f_1n_1n_2 - f_2n_3^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fxnxnz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn_z} = & + \frac{1}{2}\Big(f_1n_2n_3 - 2f_2n_1n_3 + f_3n_1n_2\Big)\overline{a_{z1}^2} + \frac{1}{2}\Big(2f_1n_2n_3 - f_2n_1n_3 - f_3n_1n_2\Big)\overline{a_{z2}^2} \\ & + \frac{1}{2}\Big(f_2n_1n_3-f_1n_2n_3\Big)
\end{split}
\label{eq:s_fynxnz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_x} = & + \frac{1}{2}\Big(f_2n_3^2-f_2n_1^2 - 2f_1n_2n_1 + 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}}
+ \frac{1}{2}\Big(f_2n_3^2 - f_2n_2^2 + 2f_3n_2n_3 \Big)\overline{a_{z2}^3} \\ &
+ \frac{1}{2}\Big(f_2n_1^2 + f_2n_2^2 - 2f_3n_3n_2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fznx2}
\end{equation}
\begin{equation}
\overline{f_zn^2_z} = \Big(f_2n_1^2 + 2f_1n_2n_1 - f_2n_3^2 - 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}} + \Big(f_2n_2^2 - 2f_3n_2n_3 - f_2n_3^2\Big)\overline{a_{z2}^3} + \Big(f_2n_3^2 + 2f_3n_2n_3\Big)\overline{a_{z2}}
\label{eq:s_fznz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn^3_x} = & + \frac{3}{8}\Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{3}{8}\Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{3}{8}\Big(3f_2n_1^2n_2 - 3f_3n_1^2n_3 + 3f_1n_1n_2^2 - 3f_1n_1n_3^2 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + 2f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{3}{8}\Big(- 2f_1n_1^3 - f_2n_1^2n_2 + 3f_3n_1^2n_3 - f_1n_1n_2^2 + 3f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2\Big)\overline{a_{z1}^2} \\ &
+ \frac{3}{8}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 - 2f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2\Big)\overline{a_{z2}^2} \\ &
+ \frac{3}{8}\Big(f_1n_1^3 + f_2n_1^2n_2 + f_1n_1n_2^2 + f_2n_2^3\Big)
\end{split}
\label{eq:s_fxnx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn^2_z} = & + \frac{1}{2}\Big(-f_1n_1^3 + 3f_3n_1^2n_3 + 3f_1n_1n_3^2 - f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{1}{2}\Big(- f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{3}{2}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2 -\frac{2}{3} f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{1}{2}\Big(f_1n_1^3 - 2f_3n_1^2n_3 + f_2n_2n_1^2 - 4f_1n_1n_3^2 + f_3n_3^3 - f_2n_2n_3^2\Big)\overline{a_{z1}^2} \\ &
+ \frac{1}{2}\Big(f_2n_2^3 - 2f_3n_2^2n_3 + f_1n_1n_2^2 - 4f_2n_2n_3^2 + f_3n_3^3 - f_1n_1n_3^2\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_1n_1n_3^2+f_2n_2n_3^2\Big)
\end{split}
\label{eq:s_fxnxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn^3_x} = & + \frac{3}{8}\Big(f_3n_1^3 - f_1n_1^2n_3 - 2f_3n_1n_2^2 + 4f_2n_1n_2n_3 - f_3n_1n_3^2 - 2f_1n_2^2n_3 + f_1n_3^3\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{3}{8}\Big(- 3f_1n_2^2n_3 + f_3n_1n_2^2 + 2f_2n_1n_2n_3 + f_1n_3^3 - f_3n_1n_3^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{3}{8}\Big(- f_3n_1^3 + f_1n_3n_1^2 - f_3n_1n_2^2 - 2f_2n_3n_1n_2 + 3f_1n_3n_2^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fynx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn^2_z} = & + \frac{1}{2}\Big(- f_3n_1^3 + f_1n_1^2n_3 + 2f_3n_1n_2^2 - 4f_2n_1n_2n_3 + f_3n_1n_3^2 + 2f_1n_2^2n_3 - f_1n_3^3\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{1}{2}\Big(3f_1n_2^2n_3 - f_3n_1n_2^2 - 2f_2n_1n_2n_3 - f_1n_3^3 + f_3n_1n_3^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{1}{2}\Big(- 2f_1n_2^2n_3 + 2f_2n_1n_2n_3 + f_1n_3^3 - f_3n_1n_3^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fynxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_xn_z} = & + \frac{1}{2}\Big(- f_1n_1^3 + 3f_3n_1^2n_3 + 3f_1n_1n_3^2 - f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{1}{2}\Big(- f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\&
+ \frac{3}{2}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2 -\frac{2}{3} f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{1}{2}\Big(f_1n_1^3 - 4f_3n_1^2n_3 + f_1n_1n_2^2 - 2f_1n_1n_3^2 - f_3n_2^2n_3 + f_3n_3^3\Big)\overline{a_{z1}^2} \\ &
+ \frac{1}{2}\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_2n_2^3 - 4f_3n_2^2n_3 - 2f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_3n_3n_1^2 + f_3n_3n_2^2\Big)
\end{split}
\label{eq:s_fznx2nz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^3_z} = & + \Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ 3\Big(f_3n_1^2n_3 + f_1n_1n_3^2 - \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2} \\ &
+ 3\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_1n_1n_2^2 - f_1n_1n_3^2 - f_3n_2^2n_3 - f_2n_2n_3^2 + \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4}
+ 3\Big(f_3n_2^2n_3 + f_2n_2n_3^2 - \frac{2}{3}f_3n_3^3\Big)\overline{a_{z2}^2}
+ f_3n_3^3
\end{split}
\label{eq:s_fznz3}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z1}\delta_1}= & \frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^4}
+ \frac{1}{4}\Big(n_1r_3u_x^2 - 2n_1r_3u_z^2\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{2}{3\pi}\Big(6n_1n_2r_3u_x^2u_z - 4n_1n_3r_2u_z^3 - 4n_1n_2r_3u_z^3 + 6n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{4}{3\pi}\Big(2n_1n_3r_2u_z^3 - n_1n_2r_3u_x^2u_z - 2n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z2}\delta_2}= & \frac{2}{3\pi}\Big(4n_1n_2r_3u_z^3 + 4n_2n_3r_1u_z^3 - 6n_1n_2r_3u_x^2u_z - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
- \frac{1}{4}\Big(n_1r_3u_x^2 + n_3r_1u_x^2 - 2n_1r_3u_z^2 - 2n_3r_1u_z^2\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^4}
- \frac{1}{4}\Big(n_3r_1u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{4}{3\pi}\Big(n_1n_2r_3u_x^2u_z - 2n_2n_3r_1u_z^3 + 2n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2}
+ \frac{1}{4}\Big(n_1r_3u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z3}\delta_3}= & \frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^2a_{z3}^2}
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2a_{z3}^2} \\ &
- \frac{1}{4}\Big(n_3r_1u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}a_{z3}^2}
+ \frac{4}{3\pi}\Big(n_2n_3r_1u_x^2u_z - n_1n_3r_2u_x^2u_z\Big)\overline{a_{z3}^2}
\end{split}
\end{equation}
\section*{\MakeUppercase{Acknowledgements}}
\normalsize
This work was supported by a NASA Space Technology Research Fellowship through grant NNX16AM53H. DJS acknowledges support from AFOSR through grant FA9550-18-1-0313.
\section{Introduction}
The long-term orbital evolution of debris in geosynchronous earth orbit (GEO) has been studied extensively over the past 50 years \cite{allan1964, schildknecht2007, rosengren2019}. Lacking atmospheric drag and other natural de-orbit mechanisms, GEO debris will remain on orbit indefinitely \citep{rosengren2019}. On the other hand, less work has been done to understand the attitude dynamics of this debris. Many GEO debris objects are retired and otherwise defunct satellites and rocket bodies. The spin rates of these large debris objects are diverse and evolve over time \citep{papushev,cognion,earl,benson2018a}. Understanding their attitude evolution will benefit orbit prediction since attitude-dependent solar radiation pressure (SRP) is the largest non-gravitational perturbation at GEO. Also, spin rate knowledge for these large objects will help predict debris shedding. Most high area-to-mass ratio GEO debris is thought to be multi-layer insulation (MLI) from defunct satellites and rocket bodies \citep{liou}. Finally, given the growing debris population and the large cost to construct, launch, and operate GEO satellites, many organizations are developing active debris removal (ADR) and satellite servicing missions. To grapple and de-spin large, potentially non-cooperative satellites, spin state predictions are vital. With variable spin rates, forecasting windows of slow rotation will reduce collision risk as well as time and energy needed to de-spin. Also, understanding how end of life satellite configurations (e.g. solar array orientation) affect long-term spin state evolution is important. Improved knowledge could help inform decommission procedures to minimize post-disposal spin rates and variability, further facilitating ADR and servicing missions.
Leveraging studies of asteroid dynamics, Albuja et al.~\cite{albuja2015,albuja2018} investigated the influence of solar radiation and thermal re-emission torques on defunct satellite spin states. The combined influence of these torques on a body's spin state is called the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect \citep{rubincam}. Albuja et al.~\cite{albuja2018} found that the YORP effect could explain the observed spin rate evolution of the defunct GOES 8 and 10 weather satellites. The authors closely predicted the rapid observed spin down of GOES 8 in 2014 and its subsequent transition from uniform rotation to non-principal axis tumbling \cite{albuja2018}. Benson et al. \cite{benson2020b} found that solar array orientation greatly impacts YORP-driven uniform spin state evolution, consistent with the dramatically different observed evolution of GOES 8 and 10. This demonstrated the potential to dictate post-disposal spin state evolution with proper end of life configurations. Propagating the GOES dynamics into the tumbling regime, Benson et al.~\cite{benson2020b} found that the satellite's rotational angular momentum vector tends to track the time-varying sun direction. Further exploration has uncovered cyclic behavior where the satellite transitions repeatedly between uniform rotation and tumbling as well as tumbling period resonances. Additional work is needed to understand these behaviors. All study of tumbling YORP for defunct satellites has considered the full dynamics (i.e. Euler's equations of motion) \cite{albuja2018,benson2020b}. These equations are not amenable to long-term numerical propagation as they require short integration time steps to maintain solution accuracy. Furthermore, Euler's equations are expressed in terms of fast variables (i.e. attitude and angular velocity). Since we are interested in studying changes over long periods of time, slowly varying osculating elements (e.g. 
the rotational angular momentum vector and kinetic energy) are more appropriate. This is directly comparable to orbital dynamics, where the averaged Lagrange and Gauss planetary equations, written in terms of osculating orbital elements, have been used extensively to study long-term orbital evolution \cite{vallado}. This success motivates development of analogous tumbling-averaged dynamical equations for osculating rotational elements, namely the rotational angular momentum vector and kinetic energy.
A number of authors have investigated spin-averaged attitude dynamics models. Albuja et al. \cite{albuja2015} extended the uniform spin-averaged asteroidal YORP work of Scheeres \cite{scheeres2007} to defunct satellites. These models are not applicable to tumbling satellites as the motion is driven by two generally incommensurate periods rather than one for the uniform case \citep{sa1991}. Several tumbling-averaged YORP models have been developed for asteroids \citep{cicalo,breiter2011}. These asteroidal models average over the spin state and heliocentric orbit given the slow spin state evolution. Orbit averaging is not appropriate for defunct satellites due to the possibility of angular momentum sun-tracking. Also, these models only account for diffuse reflections, which is insufficient for defunct satellites since many of their surfaces are dominated by specular reflections.
In this paper we develop a fast, semi-analytical tumbling-averaged attitude dynamics model that accounts for specular and diffuse reflections as well as absorption and instantaneous thermal re-emission of solar radiation. To allow for analytical averaging, we approximate the facet illumination function with its second order Fourier series expansion. For the time-being, we neglect all other perturbations including gravitational/magnetic torques and internal energy dissipation. First we describe relevant frames, dynamics, and the radiation torque equations in Section II. In Section III, we illustrate the YORP-driven tumbling behavior of the full model. Motivated by these results, we then derive the semi-analytical averaged dynamics in Section IV, leaving details for the appendices. Here, we also validate and explore the averaged model. We finish by discussing implications of the findings and providing conclusions.
\section{Preliminaries}
\subsection{Frames}
For this paper we will assume the satellite is in a circular heliocentric orbit at 1 astronomical unit (AU), neglecting its much smaller earth orbit. This approximation was validated by Albuja et al. \cite{albuja2018} for the GOES 8 and 10 satellites. The rotating orbit frame is denoted by $\mathcal{O}$:$\{\bm{\hat{X}}$,$\bm{\hat{Y}}$,$\bm{\hat{Z}}\}$. This frame is centered at the satellite with $\bm{\hat{X}}$ along the orbit angular momentum direction, $\bm{\hat{Z}}$ pointed towards the sun, and $\bm{\hat{Y}}$ in the orbital velocity direction (see Figure~\ref{fig:frames}a). The angular velocity of $\mathcal{O}$ with respect to the inertial frame $\mathcal{N}$ is $\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}=n\bm{\hat{X}}$ where $n$ is the heliocentric mean motion. The next frame is the angular momentum frame $\mathcal{H}$:$\{\bm{\hat{x}}$,$\bm{\hat{y}}$,$\bm{\hat{z}}\}$. Here $\bm{\hat{z}}$ is along the satellite's rotational angular momentum vector $\bm{H}$. Rotation from $\mathcal{O}$ to $\mathcal{H}$ is given by the rotation matrix $HO=R_2(\beta)R_3(\alpha)$, where $R_i$ denotes a principal rotation about the $i$th axis \citep{schaub}. Consulting Figure~\ref{fig:frames}a, the ``clocking'' angle $\alpha$ and ``coning'' angle $\beta$ are the spherical coordinates of $\bm{\hat{H}}$ in the $\mathcal{O}$ frame.
The final relevant frame is the satellite body frame $\mathcal{B}$:$\{\bm{\hat{b}}_1$,$\bm{\hat{b}}_2$,$\bm{\hat{b}}_3\}$. Rotation from $\mathcal{H}$ to $\mathcal{B}$, shown in Figure~\ref{fig:frames}b, is given by (3-1-3) ($\phi$-$\theta$-$\psi$) Euler angles with the rotation matrix $BH$ \cite{schaub},
\small
\begin{singlespace}
\begin{equation}
BH =
\begin{bmatrix}
\cos\phi\cos\psi - \cos\theta\sin\phi\sin\psi & \cos\psi\sin\phi + \cos\phi\cos\theta\sin\psi & \sin\psi\sin\theta \\
- \cos\phi\sin\psi - \cos\psi\cos\theta\sin\phi & \cos\phi\cos\psi\cos\theta - \sin\phi\sin\psi & \cos\psi\sin\theta\\
\sin\phi\sin\theta & -\cos\phi\sin\theta & \cos\theta
\end{bmatrix}
=
\begin{bmatrix}
a_{x1} & a_{y1} & a_{z1} \\
a_{x2} & a_{y2} & a_{z2} \\
a_{x3} & a_{y3} & a_{z3} \\
\end{bmatrix}
\label{eq:BH}
\end{equation}
\end{singlespace}
\normalsize
So the $\mathcal{H}$ frame components of an arbitrary vector $\bm{f}$ are obtained from its $\mathcal{B}$ frame components with the transpose of $BH$, written in matrix form as,
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
f_x \\
f_y \\
f_z \\
\end{bmatrix}
=
\begin{bmatrix}
a_{x1} & a_{x2} & a_{x3} \\
a_{y1} & a_{y2} & a_{y3} \\
a_{z1} & a_{z2} & a_{z3} \\
\end{bmatrix}
\begin{bmatrix}
f_1 \\
f_2 \\
f_3 \\
\end{bmatrix}
\label{eq:Hvec}
\end{equation}
\end{singlespace}
where $f_1$, $f_2$, and $f_3$ are the $\mathcal{B}$ frame components.
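The (3-1-3) rotation in Eq.~\ref{eq:BH} and the transformation in Eq.~\ref{eq:Hvec} can be sketched numerically. The following minimal Python snippet (the function names are ours, introduced only for illustration) builds $BH$ from the Euler angles and maps $\mathcal{B}$ frame components into the $\mathcal{H}$ frame:

```python
import numpy as np

def dcm_313(phi, theta, psi):
    """Rotation matrix BH from the angular momentum frame H to the body
    frame B, built from (3-1-3) Euler angles as in Eq. (BH)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cph*cps - cth*sph*sps,  cps*sph + cph*cth*sps, sps*sth],
        [-cph*sps - cps*cth*sph, cph*cps*cth - sph*sps, cps*sth],
        [sph*sth,                -cph*sth,              cth]])

def body_to_h(f_body, phi, theta, psi):
    """H frame components of a vector from its B frame components: the
    transpose of BH maps B back to H (Eq. Hvec)."""
    return dcm_313(phi, theta, psi).T @ np.asarray(f_body)
```

Since $BH$ is a proper rotation, its transpose is its inverse, which is what `body_to_h` exploits.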
\begin{figure}[h]
\centering
\subcaptionbox{$\mathcal{O}$ and $\mathcal{H}$ frames}{\includegraphics[width=3in]{orbit_frame_xy.pdf}}
\subcaptionbox{$\mathcal{H}$ and $\mathcal{B}$ frames}{\includegraphics[width=2in]{goes_313_euler_angles.pdf}}
\caption{Relevant frames and rotations.}
\label{fig:frames}
\end{figure}
\subsection{Osculating Elements}
Given the sun-tracking behavior observed in the full dynamical simulations, we are interested in developing our equations in the rotating $\mathcal{O}$ frame. Using the transport theorem, a method to calculate time derivatives in a rotating frame \citep{schaub}, we find the time derivative of $\bm{H}$ with respect to the $\mathcal{O}$ frame,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=-\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}\times\bm{H}+\bm{M}
\label{eq:Horbdot}
\end{equation}
where $\bm{M}=\dot{\bm{H}}$ is the net external torque. Then, expressing $\bm{H}$ in the $\mathcal{O}$ frame, we have
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
H_X\\
H_Y\\
H_Z\\
\end{bmatrix}=
\begin{bmatrix}
H\cos{\alpha}\sin{\beta}\\
H\sin{\alpha}\sin{\beta}\\
H\cos{\beta}\\
\end{bmatrix}
\label{eq:Horb}
\end{equation}
\end{singlespace}
\noindent where ($H_X$, $H_Y$, $H_Z$) are the $\mathcal{O}$ frame components and $H=|\bm{H}|$. Taking the time derivative of Eq.~\ref{eq:Horb}, solving for $\dot{\alpha}$, $\dot{\beta}$, and $\dot{H}$, and substituting the results from Eq.~\ref{eq:Horbdot}, we ultimately obtain,
\begin{equation}
\dot{\alpha}=\frac{M_y+Hn\cos{\alpha}\cos{\beta}}{H\sin{\beta}}
\label{eq:alphadot}
\end{equation}
\begin{equation}
\dot{\beta}=\frac{M_x+Hn\sin{\alpha}}{H}
\label{eq:betadot}
\end{equation}
\begin{equation}
\dot{H}=M_z
\label{eq:Hdot}
\end{equation}
where ($M_x$, $M_y$, $M_z$) denote the torque components in the angular momentum frame. Note that $\dot{\alpha}$ is singular for $\beta=$ 0$^{\circ}$ and 180$^{\circ}$ due to $\sin{\beta}$ in the denominator of Eq.~\ref{eq:alphadot}. While not implemented in our model, one could replace $\alpha$ and $\beta$ with the alternate coordinates $v=\sin{\alpha}\sin{\beta}$ and $w=\cos{\alpha}\sin{\beta}$ when $\bm{H}$ is very near the sun/anti-sun line. These coordinates were simply obtained by finding expressions that cancel $\sin{\beta}$ in the denominator of Eq.~\ref{eq:alphadot}. This alternate set will instead have a $\beta$ ambiguity since $\sin{\beta}$ is symmetric about $\beta=$ 90$^{\circ}$.
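Eqs.~\ref{eq:alphadot}-\ref{eq:Hdot} translate directly into code; a minimal Python sketch (our own naming, not tied to any particular implementation):

```python
import numpy as np

def osculating_rates(M_h, H, alpha, beta, n):
    """Rates of the clocking angle, coning angle, and angular momentum
    magnitude (Eqs. alphadot, betadot, Hdot). M_h holds the torque
    components (Mx, My, Mz) in the angular momentum frame; n is the
    heliocentric mean motion. Singular at beta = 0 or 180 deg."""
    Mx, My, Mz = M_h
    alpha_dot = (My + H*n*np.cos(alpha)*np.cos(beta)) / (H*np.sin(beta))
    beta_dot = (Mx + H*n*np.sin(alpha)) / H
    H_dot = Mz
    return alpha_dot, beta_dot, H_dot
```

With zero torque, only the frame-rotation (sun-tracking) terms proportional to $n$ remain, so the pole drifts in the rotating $\mathcal{O}$ frame even though it is inertially fixed.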
Another quantity of interest, the dynamic moment of inertia $I_d$, is given by $I_d=H^2/2T$ where $\bm{H}=[I]\boldsymbol{\omega}$ and the rotational kinetic energy $T=\frac{1}{2}\boldsymbol{\omega}{\cdot}[I]\boldsymbol{\omega}$. $[I]$ and $\boldsymbol{\omega}$ are the body's inertia tensor and inertial angular velocity of the $\mathcal{B}$ frame respectively. With principal inertias $I_s\;{\geq}\;I_i\;{\geq}\;I_l$, we will assume the long axis convention with $[I]=\mathrm{diag}([I_i,I_s,I_l])$ \cite{sa1991}. For torque-free rigid body rotation, $I_d$ defines the closed path that $\boldsymbol{\omega}$ takes through the body frame, known as a polhode \cite{landau}. $I_d$ is constrained to $[I_l,I_s]$ since $T$ is bounded for a given $H$. When $I_l<I_d<I_i$, the satellite is said to be in a long axis mode (LAM) because $\boldsymbol{\omega}$ circulates about the satellite's long axis ($\bm{\hat{b}}_3$) \cite{landau}. When $I_i<I_d<I_s$, the satellite is in a short axis mode (SAM) where $\boldsymbol{\omega}$ instead circulates about the short axis ($\bm{\hat{b}}_2$). $I_d=I_l$ and $I_d=I_s$ correspond to principal axis rotation about $\bm{\hat{b}}_3$ and $\bm{\hat{b}}_2$ respectively. Finally $I_d=I_i$ denotes motion along the separatrix between LAMs and SAMs or uniform rotation about the intermediate axis, both of which are unstable. Various polhodes are illustrated in Figure~\ref{fig:polhode} for the GOES 8 satellite assuming constant $H$. Here, the separatrices are shown in black.
Taking the time derivative of $I_d$, we ultimately obtain,
\begin{equation}
\dot{I}_d=-\frac{2I_d}{H}\Bigg[\frac{I_d-I_i}{I_i}a_{z1}M_1+\frac{I_d-I_s}{I_s}a_{z2}M_2+\frac{I_d-I_l}{I_l}a_{z3}M_3\Bigg]
\label{eq:Iddot2}
\end{equation}
where ($M_1$, $M_2$, $M_3$) denote the net torque components in the body frame. Complementing $I_d$ is another fundamental quantity, the effective spin rate $\omega_e=H/I_d$, which is proportional to $\boldsymbol{\omega}$ (see Appendix A). Analogous to osculating orbital elements that define an instantaneous unperturbed two-body (Keplerian) orbit \cite{vallado}, $\alpha$, $\beta$, $I_d$, and $H$ (or $\omega_e$) define the instantaneous unperturbed rotation state, which changes slowly over time due to solar radiation torques and/or other perturbations.
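For a given spin state, $I_d$, $\omega_e$, and the LAM/SAM classification follow directly from the definitions above. A short illustrative Python sketch (function name ours; the separatrix case $I_d=I_i$ is lumped with SAM for simplicity):

```python
import numpy as np

def spin_state(I_i, I_s, I_l, w_body):
    """Dynamic moment of inertia I_d = H^2/(2T), effective spin rate
    w_e = H/I_d, and LAM/SAM classification. The body frame ordering
    follows the text's long axis convention: [I] = diag([I_i, I_s, I_l])."""
    I = np.diag([I_i, I_s, I_l])
    w = np.asarray(w_body)
    H = np.linalg.norm(I @ w)          # angular momentum magnitude
    T = 0.5 * w @ I @ w                # rotational kinetic energy
    I_d = H**2 / (2.0*T)
    mode = "LAM" if I_d < I_i else "SAM"
    return I_d, H/I_d, mode
```

For example, with the GOES 8 end of life inertias quoted later in the text, pure spin about the long axis $\bm{\hat{b}}_3$ returns $I_d=I_l$ (a LAM boundary case), while pure spin about $\bm{\hat{b}}_2$ returns $I_d=I_s$ (a SAM boundary case).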
\begin{figure}[h]
\centering
\includegraphics[height=2in]{goes8_polhode_L_convention_flat.png}
\caption{Angular velocity curves for long (LAM) and short (SAM) axis modes.}
\label{fig:polhode}
\end{figure}
\subsection{Full Dynamics}
For the full dynamics, the body frame angular velocity $\boldsymbol{\omega}$ evolution is given by Euler's equations,
\begin{equation}
[I]\dot{\boldsymbol{\omega}}=-[\tilde{\boldsymbol{\omega}}][I]\boldsymbol{\omega}+\bm{M}
\label{eq:wdot}
\end{equation}
where $[\tilde{\;\;\;}]$ is the skew-symmetric cross product operator.
The body's inertial attitude is tracked using quaternions \citep{schaub},
\begin{singlespace}
\begin{equation}
\begin{bmatrix}
\dot{\beta}_0 \\
\dot{\beta}_1 \\
\dot{\beta}_2 \\
\dot{\beta}_3 \\
\end{bmatrix}
=
\frac{1}{2}\begin{bmatrix}
-\beta_1 & -\beta_2 & -\beta_3\\
\beta_0 & -\beta_3 & \beta_2\\
\beta_3 & \beta_0 & -\beta_1\\
-\beta_2 & \beta_1 & \beta_0\\
\end{bmatrix}
\begin{bmatrix}
\omega_1\\
\omega_2\\
\omega_3\\
\end{bmatrix}
\label{eq:quatkde}
\end{equation}
\end{singlespace}
\noindent where $\beta_0$ is the scalar component. In this paper, the full dynamics are propagated with MATLAB's ode113 numerical integrator with absolute and relative tolerances of $10^{-12}$.
\subsection{Solar Torque Model}
For this work, the faceted solar radiation force model provided by Scheeres \cite{scheeres2007} is used. This model accounts for absorption, specular reflection, and Lambertian diffuse reflection and re-emission. The satellite is assumed to be in thermal equilibrium, so all absorbed radiation is immediately re-emitted. The solar radiation force acting on the $i$th satellite facet is given by,
\begin{equation}
\bm{f}_i=-P_{SRP}\Big[\{{\rho_i}s_i(2\bm{\hat{n}}_{i}\bm{\hat{n}}_{i}-U)+U\}\cdot\bm{\hat{u}}+c_{di}\bm{\hat{n}}_{i}\Big]A_i\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)
\label{eq:srpforce}
\end{equation}
Here, $P_{SRP}$ is the solar radiation pressure (nominally 4.56${\times}10^{-6}\;\mathrm{N/m^2}$ at 1 AU), $\rho_i$ is the total facet reflectivity, $s_i$ is the fraction of total reflectivity that is specular, $\bm{\hat{n}}_i$ is the facet unit normal vector, $U$ is the 3$\times$3 identity matrix, $\bm{\hat{u}}$ is the satellite to sun unit vector (equivalent to $\bm{\hat{Z}}$), $A_i$ is the facet area, and $c_{di}=B(1-s_i)\rho_i+B(1-\rho_i)$ where $B$ is the scattering coefficient (2/3 for Lambertian reflection). The operation $\bm{\hat{n}}_i\bm{\hat{n}}_i$ represents a matrix outer product. The illumination function $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ ensures that only illuminated facets contribute. Self-shadowing by other facets and multiple reflections are not currently considered.
The solar radiation torque acting on the faceted satellite model can then be calculated as,
\begin{equation}
\bm{M}={\sum_{i=1}^{n_f}}{\bm{r}_i}\times\bm{f}_i
\label{eq:M}
\end{equation}
where $\bm{r}_i$ is the center of mass to the facet centroid position vector and $n_f$ is the number of facets.
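Eqs.~\ref{eq:srpforce} and \ref{eq:M} are straightforward to implement for a faceted model. A minimal Python sketch (self-shadowing and multiple reflections neglected, as in the text; function names are ours):

```python
import numpy as np

P_SRP = 4.56e-6   # solar radiation pressure at 1 AU, N/m^2
B_SCAT = 2.0/3.0  # Lambertian scattering coefficient

def facet_force(u_hat, n_hat, area, rho, s):
    """Solar radiation force on a single facet (Eq. srpforce)."""
    c_d = B_SCAT*(1.0 - s)*rho + B_SCAT*(1.0 - rho)
    illum = max(0.0, u_hat @ n_hat)  # illumination function
    spec = rho*s*(2.0*np.outer(n_hat, n_hat) - np.eye(3)) + np.eye(3)
    return -P_SRP*(spec @ u_hat + c_d*n_hat)*area*illum

def net_torque(u_hat, r_list, n_list, areas, rhos, ss):
    """Net solar radiation torque (Eq. M): sum of r_i x f_i over facets."""
    M = np.zeros(3)
    for r, n, A, rho, s in zip(r_list, n_list, areas, rhos, ss):
        M += np.cross(r, facet_force(u_hat, n, A, rho, s))
    return M
```

As a sanity check, a perfect specular mirror ($\rho_i=s_i=1$) facing the sun experiences a purely radial force of magnitude $2P_{SRP}A_i$, and facets facing away from the sun contribute nothing.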
\subsection{GOES Model}
We will now briefly discuss the GOES model used to explore the YORP dynamics in this paper. GOES 8-12 are a family of five retired GEO weather satellites. They are notable for their asymmetry and well-documented dimensions \citep{databook}. When uncontrolled, this asymmetry provides the opportunity for large net solar torques. The 26 facet GOES shape model used for this work is provided in Figure~\ref{fig:goes_baxes} with GOES 8's approximate end of life principal axes and solar array angle $\theta_{sa}$ of 17$^{\circ}$ \citep{benson2020b}. For GOES 8 with a dry mass of 972 kg, the end of life principal inertias are $I_l =$ 980.5, $I_i =$ 3432.1, and $I_s =$ 3570.0 kg${\cdot}$m$^2$ \citep{benson2020b}. Also, $\theta_{sa}$ is measured positive around $-\bm{\hat{b}}_3$, and $\theta_{sa}=$ 0$^{\circ}$ when the sun-side of the solar array is parallel to the $+\bm{\hat{b}}_2$ face. See Ref. \cite{benson2020b} for how $\theta_{sa}$ impacts the model inertias. Table~\ref{tab:goesoptical} provides the optical properties assumed for the various GOES model components \citep{benson2020b}. Note that most of the materials are MLI or aluminized tape, which provide almost exclusively specular reflections.
\begin{figure}[H]
\centering
\includegraphics[width=4in]{goes8_shape_model_paxes_long_axis_convention_labeled.png}
\caption{GOES 8 shape model with principal axes and major components labeled.}
\label{fig:goes_baxes}
\end{figure}
\begin{singlespace}
\begin{table}[H]
\centering
\caption{GOES Model Optical Properties}
\begin{tabular}{llllll}
\hline
Component & Material & $\rho_i$ & $s_i$ \\
\hline
Bus & MLI & 0.60 & 1 \\
Solar Array front & Solar cell & 0.27 & 1 \\
Solar Array back & Graphite & 0.07 & 0 \\
Trim Tab front & Al tape & 0.83 & 1 \\
Trim Tab back & Graphite & 0.07 & 0 \\
Solar Sail sides/top & Al Kapton & 0.66 & 1 \\
Solar Sail base & Al tape & 0.83 & 1 \\
\hline
\end{tabular}
\label{tab:goesoptical}
\end{table}
\end{singlespace}
\section{Full YORP Dynamics}
We will now provide simulation results from the full dynamics model (Eqs.~\ref{eq:wdot} - \ref{eq:M}) to illustrate the complex, yet structured YORP-driven dynamical evolution. This will motivate our development of the tumbling-averaged model in Section IV. Again, we neglect the satellite's geosynchronous orbit and assume that the sun rotates in the inertial frame at earth's mean motion $n$ (${\sim}$0.986$^{\circ}$/day). The GOES 8 shape model and mass parameters given above are utilized. We will discuss two simulation runs, Run 1 and Run 2. Run 1 demonstrates the uniform to tumbling transition, spin-orbit coupling, and tumbling cycles. Run 2 demonstrates these behaviors in addition to tumbling period resonances. Starting with Run 1, the satellite is placed in uniform rotation about $+\bm{\hat{b}}_2$ with $P_e=2\pi/{\omega_e}=$ 20 min and a pole direction with $\alpha_o=$ 202$^{\circ}$ and $\beta_o=$ 77$^{\circ}$. The initial evolution is provided in Figure~\ref{fig:run1_evol_zoom}. Figure~\ref{fig:run1_evol_zoom}a shows that $\omega_e$ decreases rapidly over the first four days as the satellite spins down from its initial uniform rotation. During this initial spin down, Figure~\ref{fig:run1_evol_zoom}d shows that $\beta$ decreases as the pole moves towards the sun-line. Once $\omega_e$ reaches a sufficiently small value, the satellite transitions to non-principal axis rotation, apparent in Figure~\ref{fig:run1_evol_zoom}b. Here, $I_d$ decreases as the rotation moves from uniform rotation to SAM to LAM, crossing the separatrix denoted by the dashed line. From approximately five days onward, $\omega_e$ increases and $I_d$ decreases as the satellite spins up further about $+\bm{\hat{b}}_3$, the minimum inertia axis. During this time, $\alpha$ and $\beta$ increase as the pole begins precessing about the sun-line with $\alpha$ taking roughly five days to complete a cycle.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_we_zoom.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_id_zoom.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_alpha_zoom.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_beta_zoom.pdf}}
\caption{Run 1 - transition from uniform rotation to tumbling.}
\label{fig:run1_evol_zoom}
\end{figure}
Proceeding further in time, Figure~\ref{fig:run1_evol} shows evolution of the Run 1 solution over three years. On this timescale, we see that the satellite continues in this long axis spin up state until around 160 days when $\beta$ reaches 90$^{\circ}$. At this point, $\omega_e$ decreases and $I_d$ increases as the satellite moves back towards uniform rotation. This trend continues until 285 days when the satellite is finally rotating slowly in near-uniform rotation with $\beta$ approaching 180$^{\circ}$. Given the small $\omega_e$, $\beta$ decreases rapidly towards 0$^{\circ}$. During this time, $\omega_e$ briefly increases, then decreases with $I_d\;{\approx}\;I_s$. Once $\beta$ nears 0$^{\circ}$, the satellite again spins up about $+\bm{\hat{b}}_3$ and enters a second, much longer, tumbling cycle.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_beta.pdf}}
\caption{Run 1 - long-term dynamical evolution.}
\label{fig:run1_evol}
\end{figure}
To better visualize the pole evolution during these tumbling cycles, Figure~\ref{fig:run1_hvecorb} shows the evolution of $\bm{H}$ in the $\mathcal{O}$ frame over the first tumbling cycle in Run 1 (from 0 to 309 days in Figure~\ref{fig:run1_evol}). The green section is the initial uniform spin down from 0 to 4 days as $\omega_e$ decreases and $\bm{H}$ moves towards the sun-line ($\bm{\hat{Z}}$). The blue tumbling segment, from 4 to 305 days, shows $\bm{H}$ precessing about $\bm{\hat{Z}}$ while slowly moving in the $-\bm{\hat{Z}}$ direction. The near-uniform return from $\beta$ near 180$^{\circ}$ to 0$^{\circ}$ is shown in red. The second tumbling cycle is not shown for clarity but follows this same general behavior.
\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run5_hvecorb.pdf}
\caption{Run 1 - $\bm{H}$ evolution in $\mathcal{O}$ frame over the first tumbling cycle (0 to 308 days). The gray lines are projections of this evolution on the three orthogonal planes.}
\label{fig:run1_hvecorb}
\end{figure}
For Run 2, we illustrate tumbling period resonances. The satellite is again placed in uniform rotation with $P_e=$ 20 min but now with a pole given by $\alpha_o=$ 202$^{\circ}$ and $\beta_o=$ 17$^{\circ}$. The resulting long-term evolution is provided in Figure~\ref{fig:run2_evol}. As with Run 1, $\omega_e$ decreases rapidly, the satellite transitions to tumbling, and it proceeds through a tumbling cycle. This first cycle is followed by a second, shorter cycle. After this tumbling cycle, the satellite again spins up about the minimum inertia axis but this time is captured in a $P_\psi/P_{\bar{\phi}}=$ 1 tumbling period resonance at roughly 290 days rather than entering another cycle. $P_{\bar{\phi}}$ is the average precession period of the satellite's long axis ($\bm{\hat{b}}_3$) about $\bm{H}$ and $P_\psi$ is the rotation period about $\bm{\hat{b}}_3$ itself. See Appendix A for the fundamental period expressions. Given the nearly axisymmetric mass distribution of GOES 8 ($I_s\;\approx\;I_i>I_l$), $\dot{\phi}$ is nearly constant and the average precession period $P_{\bar{\phi}}$ is essentially equal to the true precession period. So at this 1:1 resonance, the satellite returns to the same inertial attitude at multiples of $P_{\bar{\phi}}$ and $P_\psi$. Figure~\ref{fig:run2_evol} shows that $\omega_e$ increases steadily while $P_{\bar{\phi}}$ and $P_\psi$ remain in lock step with one another. Since the period ratio $P_\psi/P_{\bar{\phi}}$ is only a function of $I_l$, $I_i$, $I_s$, and $I_d$, constant $P_\psi/P_{\bar{\phi}}$ requires that $I_d$ be constant as well. While in this resonance, $\beta$ oscillates between 40$^{\circ}$ and 70$^{\circ}$ with a slight secular increase over time. Carefully examining Figure~\ref{fig:run2_evol}c, the satellite's long axis spin up is briefly perturbed when passing through the 1:1 period resonance near 11 days. Also, the period ratio over the second tumbling cycle (from 260 to 285 days) oscillates around a 2:1 ratio. 
Tumbling resonances were often observed in other simulation runs with 1:1 and 2:1 resonances being most common. Higher order resonances were occasionally observed.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_id.pdf}}
\subcaptionbox{Ratio of Fundamental Tumbling Periods}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_pratio.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=3in]{tumble_transition_goes8_cdr_no_emissivity_avgpaper_run4_beta.pdf}}
\caption{Run 2 - long-term dynamical evolution.}
\label{fig:run2_evol}
\end{figure}
\section{Averaged YORP Dynamics}
To better understand the behavior illustrated by the full dynamics model, we will now develop, validate, and explore the semi-analytical tumbling-averaged model. For this paper, we will assume the tumbling periods are non-resonant to simplify the averaging. Analysis of specific tumbling resonances and their stability will feature in a follow-up paper.
\subsection{Averaging Approach}
Following Cicalo and Scheeres \cite{cicalo}, we aim to average Eqs.~\ref{eq:alphadot} - \ref{eq:Iddot2} over the satellite's tumbling motion. For this approach, we assume that the variables $\alpha$, $\beta$, $H$ and $I_d$ vary slowly compared to the satellite's intrinsic rotation. We also assume that the solar radiation torque is a relatively small perturbation on the satellite's torque-free motion. So we average over the torque-free motion (i.e. with respect to $\phi$, $\theta$, and $\psi$) assuming constant values for the average parameters $\overline{\alpha}$, $\overline{\beta}$, $\overline{H}$ and $\overline{I}_d$.
Torque-free rigid body rotation is defined by two fundamental tumbling periods $P_{\bar{\phi}}$ and $P_\psi$ \citep{sa1991,bensonda14}. Again, $P_{\bar{\phi}}$ is the average precession period of the satellite's minimum inertia axis ($\bm{\hat{b}}_3$) about $\bm{H}$ and $P_\psi$ is the rotation period about $\bm{\hat{b}}_3$ itself. $P_\theta$ is proportional to $P_\psi$ and is therefore not independent. The precession rate $\dot{\phi}$, and hence the time needed for $\phi$ to increase by $2\pi$, is generally not constant. Nevertheless, we will assume constant $\dot{\phi}$ to greatly simplify the averaging process. Fortunately, $\dot{\phi}$ is essentially constant for bodies with roughly axisymmetric inertia tensors, making this a good approximation for many GEO satellites and rocket bodies. Furthermore, assuming $P_{\bar{\phi}}$ and $P_\psi$ are non-resonant, we can separately average over the independent precession ($\phi$) and coupled nutation ($\theta$) and rotation ($\psi$) motions. Expressing this mathematically for the general variable $F$, we have,
\begin{equation}
{\langle\dot{F}\rangle}_\phi=\frac{1}{2\pi}\int_0^{2\pi}{\dot{F}(\phi,\theta,\psi)}d{\phi}
\label{eq:phiavg}
\end{equation}
and
\begin{equation}
\dot{\overline{F}}=\frac{1}{P_\psi}\int_0^{P_\psi}{{\langle\dot{F}\rangle}_\phi}\Big(\theta(t),\psi(t)\Big)dt
\label{eq:psiavg}
\end{equation}
To evaluate Eq.~\ref{eq:psiavg}, we leverage the complete elliptic integral of the first kind $K$ (see Appendix A) \citep{landau,numericalrecipes}. Rewriting Eq.~\ref{eq:psiavg} with the linearly scaled time variable $\tau$, noting that ${\Delta}t=P_{\psi}$ corresponds to ${\Delta}\tau=4K$,
\begin{equation}
\dot{\overline{F}}=\frac{1}{4K}\int_0^{4K}{{\langle\dot{F}\rangle}_\phi}\Big(\theta(\tau),\psi(\tau)\Big)d\tau
\label{eq:psiavgK}
\end{equation}
Averaging over $\tau$ involves the Jacobi elliptic functions $\cn\tau$, $\sn\tau$, and $\dn\tau$ (see the Appendices).
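The averaging in Eq.~\ref{eq:psiavgK} can be prototyped numerically with SciPy's elliptic functions before committing to the closed-form averages of Appendix B. A sketch (numerical quadrature stands in for the analytical results; function name ours):

```python
import numpy as np
from scipy.special import ellipj, ellipk

def tumble_average(f, m, n_samp=2000):
    """Numerically average f(sn, cn, dn) over one period of the scaled
    time tau (Eq. psiavgK); sn and cn have period 4K(m), where m is the
    elliptic parameter."""
    K = ellipk(m)
    tau = np.linspace(0.0, 4.0*K, n_samp, endpoint=False)
    sn, cn, dn, _ = ellipj(tau, m)
    return np.mean(f(sn, cn, dn))
```

Uniform sampling of a smooth periodic integrand converges rapidly, so a few thousand samples reproduce known results such as $\langle\mathrm{sn}^2\rangle=(K-E)/(mK)$ to high accuracy.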
The tumbling-averaged equations of motion are then given by,
\begin{equation}
\dot{\overline{\alpha}}=\frac{\overline{M_y}+\overline{H}n\cos{\overline{\alpha}}\cos{\overline{\beta}}}{\overline{H}\sin{\overline{\beta}}}
\label{eq:alphadotavg}
\end{equation}
\begin{equation}
\dot{\overline{\beta}}=\frac{\overline{M_x}+\overline{H}n\sin{\overline{\alpha}}}{\overline{H}}
\label{eq:betadotavg}
\end{equation}
\begin{equation}
\dot{\overline{H}}=\overline{M_z}
\label{eq:Hdotavg}
\end{equation}
\begin{equation}
\dot{\overline{I}}_d=-\frac{2\overline{I}_d}{\overline{H}}\Bigg[\frac{\overline{I}_d-I_i}{I_i}\overline{a_{z1}M_1}+\frac{\overline{I}_d-I_s}{I_s}\overline{a_{z2}M_2}+\frac{\overline{I}_d-I_l}{I_l}\overline{a_{z3}M_3}\Bigg]
\label{eq:Iddotavg}
\end{equation}
\begin{equation}
\dot{\overline{\omega}}_e=\frac{1}{\overline{I_d}}\Bigg[\overline{M_z}-\frac{\overline{H}}{\overline{I_d}}\dot{\overline{I}}_d\Bigg]
\label{eq:wedotavg}
\end{equation}
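Once the averaged torques are in hand, Eqs.~\ref{eq:alphadotavg}-\ref{eq:Iddotavg} form a small, slowly varying ODE system that admits much larger integration steps than the full dynamics. A schematic Python driver is sketched below; the `torque_model` callable is a placeholder we introduce for the averaged facet sums, returning $\dot{\overline{I}}_d$ directly rather than the individual $\overline{a_{z*}M_*}$ products:

```python
import numpy as np
from scipy.integrate import solve_ivp

def averaged_eom(t, y, n, torque_model):
    """Tumbling-averaged equations of motion (Eqs. alphadotavg-Iddotavg).
    torque_model(Id, beta) stands in for the averaged facet sums and
    returns (Mx, My, Mz, Id_dot). State y = [alpha, beta, H, Id]."""
    alpha, beta, H, Id = y
    Mx, My, Mz, Id_dot = torque_model(Id, beta)
    alpha_dot = (My + H*n*np.cos(alpha)*np.cos(beta)) / (H*np.sin(beta))
    beta_dot = (Mx + H*n*np.sin(alpha)) / H
    return [alpha_dot, beta_dot, Mz, Id_dot]
```

With the torques zeroed out, $H$ and $I_d$ are constant while $\alpha$ and $\beta$ still evolve through the frame-rotation terms, mirroring the inertially fixed pole of torque-free motion viewed from the rotating $\mathcal{O}$ frame.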
\subsection{Non-Resonant Averaged YORP}
We must evaluate the six averaged torque components $\overline{M_x}$, $\overline{M_y}$, $\overline{M_z}$, $\overline{a_{z1}M_1}$, $\overline{a_{z2}M_2}$, and $\overline{a_{z3}M_3}$. To facilitate the analytical averaging, we follow Ref. \cite{cicalo} and approximate $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ using its second order Fourier series expansion,
\begin{equation}
\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)\;{\approx}\; g_i = \frac{1}{3\pi}+\frac{1}{2}(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)+\frac{4}{3\pi}(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)^2
\label{eq:illuminationfunction}
\end{equation}
where, given our frame definitions, $u_x = -\sin{\beta}$, $u_y = 0$, and $u_z=\cos{\beta}$. So $\bm{\hat{u}}\cdot\bm{\hat{n}} = {u_x}n_x+{u_z}n_z$.
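The accuracy of the expansion in Eq.~\ref{eq:illuminationfunction} is easy to check numerically; a short sketch comparing $g_i$ with the exact illumination function (variable names ours):

```python
import numpy as np

def g_fourier(c):
    """Second order Fourier approximation of max(0, u.n)
    (Eq. illuminationfunction), with c = u.n in [-1, 1]."""
    return 1.0/(3.0*np.pi) + 0.5*c + 4.0/(3.0*np.pi)*c**2

# Worst-case error of the approximation over the full incidence range
c = np.linspace(-1.0, 1.0, 201)
max_err = np.max(np.abs(g_fourier(c) - np.maximum(0.0, c)))
```

The largest deviation, roughly 0.106, occurs at grazing incidence ($\bm{\hat{u}}\cdot\bm{\hat{n}}_i=0$), where the exact function has a corner that a low-order expansion cannot capture.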
With this approximation,
\begin{singlespace}
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
\overline{M_x} \\ \overline{M_y} \\ \overline{M_z}
\end{bmatrix} = -P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[c_{si}\overline{(\bm{\hat{u}}\cdot\bm{\hat{n}}_i)g_i\bm{d}_i}+c_{ai}\overline{g_i\bm{r}_i\times\bm{\hat{u}}}+c_{di}\overline{g_i\bm{d}_i}\Bigg]A_i
\label{eq:HM}
\end{equation}
\end{singlespace}
\noindent where $\bm{d}_i = \bm{r}_i\times\bm{\hat{n}}_i$ and the constants $c_{si}=2{\rho_i}s_i$ and $c_{ai} = (1-{\rho_i}s_i)$.
From Eqs.~\ref{eq:BH} and \ref{eq:Hvec}, we see that all $\mathcal{H}$ frame $x$ and $y$ vector components will contain either $\cos{\phi}$ or $\sin{\phi}$. So products with odd combined powers of $x$ and $y$ will average to zero over $\phi$. Expanding Eq.~\ref{eq:HM}, including only non-zero terms, and dropping the $i$th facet indices from the averaged products for brevity, $\overline{M_x}$, $\overline{M_y}$, and $\overline{M_z}$ are then given by,
\begin{equation}
\begin{split}
\overline{M_x}= -P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
u_x\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_xn_x}
+u_xu_z\Big(c_{si} + \frac{8}{3\pi}c_{di}\Big)\overline{d_xn_xn_z}
+ \frac{4}{3\pi}c_{si}u_x^3\overline{d_xn_x^3} \\ &
+ \frac{4}{\pi}c_{si}u_xu_z^2\overline{d_xn_xn_z^2}
+ \frac{1}{2}c_{ai}u_xu_z\overline{r_yn_x}
+ \frac{8}{3\pi}c_{ai}u_xu_z^2\overline{r_yn_xn_z}\Bigg]A_i
\end{split}
\label{eq:Mx}
\end{equation}
\begin{equation}
\begin{split}
\overline{M_y}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
u_x\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_yn_x}
+u_xu_z\Big(c_{si} +\frac{8}{3\pi}c_{di}\Big)\overline{d_yn_xn_z}
+ \frac{4}{3\pi}c_{si}u_x^3\overline{d_yn_x^3} \\ &
+ \frac{4}{\pi}c_{si}u_xu_z^2\overline{d_yn_xn_z^2}
+\frac{1}{3\pi}c_{ai}u_x\overline{r_z}
-\frac{1}{2}c_{ai}u_xu_z\overline{r_xn_x}
+ \frac{1}{2}c_{ai}u_xu_z\overline{r_zn_z} \\ &
- \frac{8}{3\pi}c_{ai}u_xu_z^2\overline{r_xn_xn_z}
+ \frac{4}{3\pi}c_{ai}u_x^3\overline{r_zn_x^2}
+ \frac{4}{3\pi}c_{ai}u_xu_z^2\overline{r_zn_z^2}\Bigg]A_i
\end{split}
\label{eq:My}
\end{equation}
\begin{equation}
\begin{split}
\overline{M_z}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
\frac{1}{3\pi}c_{di}\overline{d_z}
+ u_z\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{d_zn_z}
+ \Big(\frac{1}{2}c_{si} + \frac{4}{3\pi}c_{di}\Big)\Big(u_x^2\overline{d_zn_x^2}
+ u_z^2\overline{d_zn_z^2}\Big) \\ &
+ \frac{4}{\pi}c_{si}u_x^2u_z\overline{d_zn_x^2n_z}
+ \frac{4}{3\pi}c_{si}u_z^3\overline{d_zn_z^3}
-\frac{1}{2}c_{ai}u_x^2\overline{r_yn_x}
- \frac{8}{3\pi}c_{ai}u_x^2u_z\overline{r_yn_xn_z}\Bigg]A_i
\end{split}
\label{eq:Mz}
\end{equation}
Solutions for the various averaged quantities in Eqs.~\ref{eq:Mx}, \ref{eq:My}, and \ref{eq:Mz} are provided in Appendix B. Note that these quantities are implicitly dependent on $\overline{I}_d$.
The terms $\overline{a_{z1}M_1}$, $\overline{a_{z2}M_2}$, and $\overline{a_{z3}M_3}$
are given by,
\begin{equation}
\begin{split}
\overline{a_{z*}M_*}=-P_{SRP}{\sum_{i=1}^{n_f}}\Bigg[&
\frac{1}{3\pi}c_{di}\overline{a_{z*}d_*}
+ u_z\Big(\frac{1}{2}c_{di} + \frac{1}{3\pi}c_{si}\Big)\overline{a_{z*}d_*n_z} \\ &
+ \Big(\frac{1}{2}c_{si} + \frac{4}{3\pi}c_{di}\Big)\Big(u_x^2\overline{a_{z*}d_*n_x^2}
+ u_z^2\overline{a_{z*}d_*n_z^2}\Big) \\ &
+ \frac{4}{\pi}c_{si}u_x^2u_z\overline{a_{z*}d_*n_x^2n_z}
+ \frac{4}{3\pi}c_{si}u_z^3\overline{a_{z*}d_*n_z^3}
+c_{ai}\overline{ga_{z*}\delta_*}\Bigg]A_i
\end{split}
\label{eq:az*M*}
\end{equation}
where $*=1,2,3$. Also, $\delta_1=(r_2u_3-r_3u_2)$, $\delta_2=(r_3u_1-r_1u_3)$, and $\delta_3=(r_1u_2-r_2u_1)$. To calculate $\overline{a_{z*}d_*}$, $\overline{a_{z*}{d_*}n_z}$, etc., we note that $d_z = a_{z1}d_1+a_{z2}d_2+a_{z3}d_3$. From the averaged Appendix B equations that include $d_z$ (Eqs.~\ref{eq:l_fz}, \ref{eq:l_fznz}, \ref{eq:l_fznx2}, \ref{eq:l_fznz2}, \ref{eq:l_fznx2nz}, \ref{eq:l_fznz3} for LAMs and Eqs.~\ref{eq:s_fz}, \ref{eq:s_fznz}, \ref{eq:s_fznx2}, \ref{eq:s_fznz2}, \ref{eq:s_fznx2nz}, \ref{eq:s_fznz3} for SAMs), we retain just the terms containing $d_*$. Solutions for $\overline{g{a_{z*}}\delta_*}$ are provided separately in Appendix B. Overall, Eqs.~\ref{eq:Mx} - \ref{eq:az*M*} depend on $\overline{I_d}$ and $\overline{\beta}$ but are independent of $\overline{\alpha}$ and $\overline{H}$.
\subsection{Averaged Model Validation}
To gain insight into the YORP-driven behavior of the full dynamics model, we now investigate the tumbling-averaged model. First, we will validate the analytically averaged torques using the full torque-free dynamics model (Eqs.~\ref{eq:wdot} - \ref{eq:M}). For the full model, we numerically average Eq.~\ref{eq:M} over time using trapezoidal integration and use $\max(0,\bm{\hat{u}}\cdot\bm{\hat{n}}_i)$ rather than its second order Fourier series approximation. The full model is averaged for ${\Delta}t=200P_e$ where again $P_e=2\pi/\omega_e$. This span is more than sufficient for the time-averaged torques to converge.
Figure~\ref{fig:avg_torques} shows the average torques in the $\mathcal{H}$ frame for the full and analytically averaged models. Both SAM and LAM states are tested. We see that in all cases, the models only differ quantitatively, sharing the same general structure. For the SAM cases, we see that $\overline{M_z}$ is negative for $\beta<$ 90$^{\circ}$ and positive for $\beta>$ 90$^{\circ}$. So the satellite will spin down when $\beta<$ 90$^{\circ}$. Also, $\overline{M_x}\;{\leq}\;0$ across all $\beta$, so $\bm{H}$ will tend to be pushed towards the sun line. For the LAM cases in Figure~\ref{fig:avg_torques}, $\overline{M_y}$ has the largest magnitude of the three torque components. $\overline{M_y}$ drives $\dot{\overline{\alpha}}$ and therefore precession of $\bm{H}$ around the sun line. The precession rate $\dot{\overline{\alpha}}$ varies significantly with $\beta$. Also, $\overline{M_x}\;{\geq}\;0$ for all $\beta$, pushing $\bm{H}$ away from the sun line. $\overline{M_z}$ changes sign at $\beta=$ 90$^{\circ}$, so the satellite will first spin up and then down as $\beta$ increases. Continuing the comparison, Figure~\ref{fig:avg_Iddot} shows $\dot{\overline{I}}_d$ for the full and analytically averaged models assuming an arbitrary $\omega_e=$ 2$\pi$ rad/s. Again, they differ only quantitatively. We see that for both the SAM and LAM states the satellite will be pushed towards more excited tumbling (smaller $I_d$) for $\beta<$ 90$^{\circ}$ and towards uniform rotation (larger $I_d$) for $\beta>$ 90$^{\circ}$. $\dot{\overline{I}}_d$ solutions for LAM/SAM $-$ were virtually indistinguishable from the $+$ solutions and have been excluded from Figure~\ref{fig:avg_Iddot} for brevity. Overall, the $+/-$ solutions for both LAMs and SAMs differ insignificantly for all components except $\overline{M_y}$, where the solution is mirrored around $\beta=90^{\circ}$ and has an opposite sign. 
So for the $+$ and $-$ solutions, $\dot{\overline{\alpha}}$ will have opposite signs and $\bm{H}$ will precess about the sun line in opposite directions. This symmetric structure is due to the particular satellite geometry. For a fixed $\bm{H}$, the $+/-$ LAM/SAM spin states essentially flip the satellite model 180$^{\circ}$ while maintaining the same inertial precession direction. As a result, some averaged torque contributions from the GOES solar array will change for $+/-$ LAM/SAM due to different reflective properties for the front and back array faces. On the other hand, contributions from the axisymmetric solar sail will not change.
\begin{figure}[H]
\centering
\subcaptionbox{SAM+ $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3500.pdf}}
\subcaptionbox{LAM+ $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3000.pdf}}
\subcaptionbox{SAM- $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3500_minus.pdf}}
\subcaptionbox{LAM- $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_torques_H_frame_id_3000_minus.pdf}}
\caption{Comparison of full and analytically averaged torques for GOES 8 in the $\mathcal{H}$ frame. The full model is solid and the analytically averaged model is dashed.}
\label{fig:avg_torques}
\end{figure}
\begin{figure}[h]
\centering
\subcaptionbox{SAM+ $I_d=3500$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_Iddot_id_3500.pdf}}
\subcaptionbox{LAM+ $I_d=3000$ $\mathrm{kg{\cdot}m^2}$}{\includegraphics[width=3in]{tumbling_avg_Iddot_id_3000.pdf}}
\caption{GOES 8 $\dot{\overline{I}}_d$ vs. $\beta$.}
\label{fig:avg_Iddot}
\end{figure}
We will now compare the dynamical evolution for the full and analytically averaged models by numerically integrating Eqs.~\ref{eq:alphadotavg} - \ref{eq:Iddotavg}. For both models, the same initial spin state is prescribed with $\overline{\alpha}=0^{\circ}$, $\overline{\beta}=15^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ (SAM+), and $P_e=$ 120 min. Using MATLAB's ode113 numerical integrator with $10^{-12}$ absolute and relative tolerances for both models, the full model was propagated for three years and the averaged model for six years to show at least one tumbling cycle. The resulting evolution is provided in Figure~\ref{fig:evol_comp}. We see that the trends in the two models agree, but tumbling cycle times differ considerably, with the full model progressing through the first tumbling cycle in roughly 700 days while the averaged model takes 1500 days. As the full model first passes through the 2:1 and 1:1
tumbling resonances, it is perturbed similarly to run 2 in Fig.~\ref{fig:run2_evol}. These
perturbing resonances may explain the initial jump in $\beta$ and advancement
in the tumbling cycle compared to the averaged model which
does not account for resonances. Another contributing factor to this difference is that $\overline{M_x}$ is slightly smaller for the averaged model than for the full model when $\beta<$ 90$^{\circ}$ (see Figure~\ref{fig:avg_torques}b). This causes $\beta$ for the averaged solution to evolve more slowly, allowing $\omega_e$ (and $H$) more time to increase. In Figure~\ref{fig:evol_comp}a, the peak $\omega_e$ is 50$\%$ larger for the averaged model than the full model. The added pole ``stiffness'' provided by this larger spin rate further slows $\beta$ evolution for the averaged model compared to the full model. Artificially increasing the average model $\overline{M_x}$ by 20$\%$, approximately the difference between $\overline{M_x}$ for the two models, brought the averaged model's tumbling cycle time into agreement with the full model.
While the full and averaged models provide quantitatively different results due to our averaging assumptions (most notably the neglect of resonances and the illumination function approximation), the averaged model replicates the tumbling cycles and sun-tracking behavior of the full model. Furthermore, for the Figure~\ref{fig:evol_comp} example, the total averaged model computation time was 7 seconds, compared to 70 minutes for the full model's three year propagation. This decrease in computation time of roughly three orders of magnitude was consistently observed for the averaged model runs.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=2.8in]{tumble_avg_evol_validation_beta.pdf}}
\caption{GOES 8 full and averaged dynamical evolution (initial conditions: $\overline{\alpha}=0^{\circ}$, $\overline{\beta}=15^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ SAM+, and $P_e=2\pi/\overline{\omega}_e=$ 120 min).}
\label{fig:evol_comp}
\end{figure}
\subsection{Averaged YORP-Driven Evolution}
\subsubsection{Uniform to Tumbling Transition}
The tumbling-averaged model essentially extends the uniform spin-averaged model explored by Refs. \cite{scheeres2007,albuja2015,albuja2018,benson2020b} to general tumbling motion. Being much faster than the full dynamics model, the tumbling-averaged model readily allows for exploration of long-term uniform rotation and the transition to tumbling. Figure~\ref{fig:uniform_tumbling_transition} shows the six year evolution for GOES 8 starting in nearly uniform major axis rotation. Here we assume an initial $P_e=$ 30 s and long axis rotation angle amplitude $\psi_{\mathrm{max}}=$ 0.01$^{\circ}$. Referencing \cite{sa1991}, this $\psi_{\mathrm{max}}$ corresponds to $I_d/I_s\;{\approx}\;1-10^{-9}$. This slight negative offset from uniform rotation prevents $I_d$ from exceeding $I_s$ during propagation due to truncation error. For the first 3.5 years, the satellite remains in uniform rotation and exhibits a roughly one year periodicity in $\omega_e$. This is due to $\overline{M_z}$ and $\dot{\overline{\omega}}_e$ changing sign at $\beta=$ 90$^{\circ}$ (see Figure~\ref{fig:avg_torques}a and Figure~\ref{fig:eom_contours_17}c) as $\bm{H}$ remains nearly inertially fixed due to the fast spin rate. The same behavior can be observed with the uniform spin-averaged model results in Ref. \cite{benson2020b} (see Figure 12 in that paper). Defunct satellites including Telstar 401 and retired Glonass satellites observed by Refs. \cite{earl,rachman} exhibit similar yearly spin rate oscillations. During this initial 3.5 year period, there is also a secular decrease in $\overline{\omega}_e$. After roughly 3.5 years, the satellite reaches a maximum $P_e$ of approximately 40 min with $\overline{\beta}$ approaching 0$^{\circ}$. At this point, the satellite's spin is no longer stable and it transitions to tumbling. It then spins up about the long axis and progresses into a tumbling cycle with $\bm{H}$ precessing around the sun line.
\begin{figure}[H]
\centering
\subcaptionbox{Effective Spin Rate}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_we.pdf}}
\subcaptionbox{Scaled Dynamic Moment of Inertia}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_id.pdf}}
\subcaptionbox{Clocking Angle}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_alpha.pdf}}
\subcaptionbox{Angle between $\bm{H}$ and $\bm{\hat{u}}$}{\includegraphics[width=2.8in]{tumbling_avg_uni_tumbling_transition_ex_beta.pdf}}
\caption{Averaged model transition from uniform rotation to tumbling for GOES 8 (initial conditions: $\overline{\alpha}=$ 90$^{\circ}$, $\overline{\beta}=$ 90$^{\circ}$, $P_e=$ 30 s, $\overline{I}_d/I_s\;{\approx}\;1-10^{-9}$).}
\label{fig:uniform_tumbling_transition}
\end{figure}
\subsubsection{Tumbling Cycles}
We will now leverage the averaged model to better understand the observed tumbling cycles. Figure~\ref{fig:eom_contours_17} shows the signs of $\dot{\overline{I}}_d$, $\dot{\overline{\beta}}$, and $\dot{\overline{\omega}}_e$ computed over $I_d$ and $\beta$ (the sign contours for $\dot{\overline{H}}$ are nearly identical to those for $\dot{\overline{\omega}}_e$). The black regions denote negative values and the white regions denote positive values. To simplify analysis, $\dot{\overline{\beta}}$ (Eq.~\ref{eq:betadotavg}) has been averaged over $\overline{\alpha}$ to remove dependency. This is valid because $\overline{\alpha}$ is a fast variable compared to $\overline{I}_d$, $\overline{\beta}$, and $\overline{\omega}_e$ during the tumbling cycles. The averaged model evolution from Figure~\ref{fig:evol_comp} has been overlaid on the contours in Figure~\ref{fig:eom_contours_17}. Starting at the green dot, Figure~\ref{fig:eom_contours_17}a shows that $\overline{I}_d$ will initially decrease as the satellite is pushed into more excited tumbling. As we near the separatrix (the dashed grey line), Figure~\ref{fig:eom_contours_17}b shows that $\beta$ will start increasing. At the same time, the satellite effective spin rate ($\overline{\omega}_e$) will begin increasing as well. These combined effects cause the satellite to proceed into more excited tumbling with a faster spin rate and the pole moving away from the sun. Once $\beta$ increases past 90$^{\circ}$ (i.e. pole perpendicular to the sun) the satellite begins spinning down and moving back towards uniform rotation. Upon crossing the separatrix, the signs of $\dot{\overline{\beta}}$ and $\dot{\overline{\omega}}_e$ flip. So, the satellite then spins up, entering a nearly uniform rotation phase with the pole moving back towards the sun direction. Finally, passing through $\beta=$ 90$^{\circ}$, $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ flip signs resulting in spin down and progression back towards tumbling. 
At this point, the next tumbling cycle can begin. From Eqs.~\ref{eq:alphadotavg}, \ref{eq:betadotavg}, and \ref{eq:Iddotavg}, we note that the tumbling cycle duration will be driven directly by $\overline{H}$. The larger the initial $\overline{H}$, the slower the satellite will progress through the tumbling cycle. For GOES 8, any escape to long-term uniform rotation from these tumbling cycles will likely occur in the upper right (after passing upward across the separatrix). To escape, the satellite must spin up sufficiently before $\beta$ decreases below 90$^{\circ}$. Alternatively, capture into these tumbling cycles from uniform rotation ($I_d=I_s$) requires $\beta<$ 90$^{\circ}$ so that $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ are negative. If the spin rate is small enough, $\bm{H}$ will be pulled towards the sun line and the satellite will spin down and transition into a tumbling cycle.
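Sign maps like those in Figure~\ref{fig:eom_contours_17} are generated by evaluating an averaged rate over an ($I_d$, $\beta$) grid and keeping only its sign. A minimal sketch, with a hypothetical toy rate standing in for Eq.~\ref{eq:Iddotavg} (like GOES 8, it changes sign at $\beta=$ 90$^{\circ}$):

```python
import numpy as np

def sign_map(rate_fn, id_vals, beta_vals):
    """Evaluate sign(rate) over an (I_d, beta) grid.

    rate_fn : vectorized callable(Id, beta_deg) returning an averaged rate;
              a stand-in for the averaged equations of motion.
    Returns -1 where the rate is negative (black) and +1 where positive (white).
    """
    Id, B = np.meshgrid(id_vals, beta_vals, indexing="ij")
    return np.sign(rate_fn(Id, B))

# toy rate: negative for beta < 90 deg, positive beyond, as for GOES 8
toy_rate = lambda Id, beta_deg: -np.cos(np.radians(beta_deg))*Id

grid = sign_map(toy_rate, np.array([3000.0, 3500.0]), np.array([45.0, 135.0]))
```

With `indexing="ij"` the first array axis follows $I_d$ and the second follows $\beta$, matching the contour layout.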
\begin{figure}[H]
\centering
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_iddot_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\beta}}$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_mx_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\omega}}_e$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_17_wsgn_1_wedot_with_evol.pdf}}
\caption{Signs of averaged parameter derivatives vs. $I_d$ and $\beta$ (SAM+/LAM+) for GOES 8 with Figure~\ref{fig:evol_comp} averaged evolution overlaid in red, starting at the green dot. Black regions denote negative values and white regions denote positive values. The dashed gray line is the separatrix.}
\label{fig:eom_contours_17}
\end{figure}
\subsubsection{Sun-Tracking Behavior}
We will now discuss the sun-tracking precession behavior observed during tumbling cycles. The foundation of the following analysis is that $\bm{M}$ is nearly aligned with $\bm{\hat{Z}}\times\bm{\hat{H}}$ for the majority of the $I_d$ - $\beta$ phase space. To show this, we first calculate the component of $\bm{M}$ along $\bm{\hat{B}}=\bm{\hat{Z}}\times\bm{\hat{H}}/|\bm{\hat{Z}}\times\bm{\hat{H}}|$,
\begin{equation}
\bm{\hat{B}}\cdot\bm{M}=M_y
\label{eq:Bhat}
\end{equation}
and the angle between $\bm{M}$ and $\bm{\hat{B}}$ is then given by,
\begin{equation}
\cos\theta_{BM}=\bm{\hat{B}}\cdot\bm{\hat{M}}=\frac{M_y}{\sqrt{M^2_x+M^2_y+M^2_z}}
\label{eq:thetazxh}
\end{equation}
Plotting Eq.~\ref{eq:thetazxh} over $I_d$ and $\beta$, the resulting values are provided in Figure~\ref{fig:zhatxhhat}a for GOES 8. From the small $\theta_{BM}$, we see that $\bm{M}$ is closely aligned with $\bm{\hat{B}}$ for most LAM $I_d$, $\beta$ values and therefore nearly perpendicular to both $\bm{\hat{Z}}$ and $\bm{\hat{H}}$. This makes sense given the large relative magnitude of $M_y$ to $M_x$ and $M_z$ in Figures~\ref{fig:avg_torques}b,d. Calculating $\overline{M_y}$ for a number of LAM $I_d$ values for GOES 8, Figure~\ref{fig:zhatxhhat}b shows that $\overline{M_y}\;{\approx}\;M\sin{\beta}$ for $I_d/I_s$ values near 0.4 - 0.5 (where $M$ is the arbitrary torque amplitude). From Figure~\ref{fig:evol_comp}b, we see that the satellite spends most of the tumbling cycle near $I_d/I_s=$ 0.45, where this $M\sin{\beta}$ approximation agrees best.
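Eq.~\ref{eq:thetazxh} reduces to a one-line computation given the averaged torque components. A short sketch (the torque vectors in the usage check are illustrative):

```python
import numpy as np

def theta_BM(M):
    """Angle (deg) between the torque M = [Mx, My, Mz] and
    B_hat = Z_hat x H_hat / |Z_hat x H_hat|, using
    cos(theta_BM) = My / |M| from Eq. (thetazxh)."""
    M = np.asarray(M, dtype=float)
    return np.degrees(np.arccos(M[1] / np.linalg.norm(M)))
```

A torque purely along $\bm{\hat{B}}$ gives $\theta_{BM}=0^{\circ}$, while a torque with no $M_y$ component gives $\theta_{BM}=90^{\circ}$.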
\begin{figure}[H]
\centering
\subcaptionbox{$\theta_{BM}$}{\includegraphics[width=3.2in]{ZhatxHhat_torque_angle_thetasp_17.png}}
\subcaptionbox{$\overline{M_y}$ and $M\sin{\beta}$ Approximation}{\includegraphics[width=3.2in]{My_sinbeta_comparison_thetasp_17.pdf}}
\caption{Structure of $\overline{M_y}$ for GOES 8 (SAM+/LAM+).}
\label{fig:zhatxhhat}
\end{figure}
Given this near orthogonality, we can develop an approximate system to better understand the sun-tracking precession. Approximating the torque as $\bm{M}=M_y\bm{\hat{B}}$, we can calculate $\frac{{^\mathcal{O}}d}{dt}(\bm{H})$ using the transport theorem,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=M_y\bm{\hat{B}}-\boldsymbol{\omega}_{\mathcal{O}/\mathcal{N}}\times\bm{H}
\label{eq:Hvecdotorb}
\end{equation}
Then assuming $M_y=M\sin{\beta}$ and noting that $\sin{\beta}=|\bm{\hat{Z}}\times\bm{\hat{H}}|$, we can simplify Eq.~\ref{eq:Hvecdotorb} to find,
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=\Bigg(\frac{M}{H}\bm{\hat{Z}}-n\bm{\hat{X}}\Bigg)\times\bm{H}
\label{eq:Hvecdotorb2}
\end{equation}
Since we assume $\bm{M}\cdot\bm{H}=0$, $H$ is constant. Therefore, Eq.~\ref{eq:Hvecdotorb2} is a linear system with constant coefficients. Solving the initial value problem with $\bm{H}(t=0)=H[\sin{\beta_o},0,\cos{\beta_o}]^T$, we find,
\begin{singlespace}
\begin{equation}
\bm{H}(t)=\frac{H}{\omega^2}\begin{bmatrix}
{\delta}(n\cos{\beta_o}+\delta\sin{\beta_o})\cos{\omega}t-n(\delta\cos{\beta_o}-n\sin{\beta_o}) \\
{\omega}(n\cos{\beta_o}+\delta\sin{\beta_o})\sin{\omega}t\\
n(n\cos{\beta_o}+\delta\sin{\beta_o})\cos{\omega}t+{\delta}(\delta\cos{\beta_o}-n\sin{\beta_o}) \\
\end{bmatrix}
\label{eq:H(t)}
\end{equation}
\end{singlespace}
\noindent where $\delta=M/H$ and $\omega = \sqrt{\delta^2+n^2}$. Note that $\bm{H}(t)$ is periodic with period $2\pi/\omega$. Taking the time derivative of Eq.~\ref{eq:H(t)}, we find,
\begin{singlespace}
\begin{equation}
\frac{^\mathcal{O}d}{dt}(\bm{H})=H(n\cos{\beta_o}+\delta\sin{\beta_o})\begin{bmatrix}
-\frac{\delta}{\omega}\sin{\omega}t \\
\cos{\omega}t\\
-\frac{n}{\omega}\sin{\omega}t\\
\end{bmatrix}
\label{eq:Hdot(t)}
\end{equation}
\end{singlespace}
For $\delta{\gg}n$, $\omega\approx\delta$, so $\dot{H}_Z$ is relatively small and evolution occurs mostly parallel to the $\bm{\hat{X}}$ - $\bm{\hat{Y}}$ plane (i.e. sun-tracking precession). Here, precession occurs much faster than the mean motion $n$ because $\omega{\gg}n$. As $\delta/n$ decreases, the precession rate slows and motion transitions more towards the $\bm{\hat{Y}}$ - $\bm{\hat{Z}}$ plane. As $\delta/n\rightarrow0$, $\dot{H}_X\rightarrow0$ and motion becomes confined parallel to the $\bm{\hat{Y}}$ - $\bm{\hat{Z}}$ plane with $\omega{\rightarrow}n$. Here, the torque is not sufficient to turn $\bm{H}$, which remains inertially fixed. Figure~\ref{fig:H(t)} illustrates this transition from sun-tracking precession to inertially fixed $\bm{H}$ for a number of $\delta/n$ values. Proceeding clockwise from lower right to upper left, $\delta/n$ decreases and circulation gradually transitions from $\bm{\hat{Z}}$ to $\bm{\hat{X}}$.
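Since Eq.~\ref{eq:Hvecdotorb2} is linear with constant coefficients, the closed form in Eq.~\ref{eq:H(t)} is easy to verify by direct numerical integration. The magnitudes below are arbitrary (chosen so that $\delta/n=2$); the check also confirms that $H$ is conserved and that $\bm{H}(t)$ is periodic with period $2\pi/\omega$:

```python
import numpy as np
from scipy.integrate import solve_ivp

H, M, n = 1.0, 2e-3, 1e-3                 # arbitrary magnitudes; delta/n = 2
delta = M/H
omega = np.hypot(delta, n)
beta0 = np.radians(45.0)

def H_analytic(t):
    """Closed-form solution of Eq. (Hvecdotorb2), i.e. Eq. (H(t))."""
    c = n*np.cos(beta0) + delta*np.sin(beta0)
    d = delta*np.cos(beta0) - n*np.sin(beta0)
    return (H/omega**2)*np.array([delta*c*np.cos(omega*t) - n*d,
                                  omega*c*np.sin(omega*t),
                                  n*c*np.cos(omega*t) + delta*d])

def rhs(t, Hvec):
    """Eq. (Hvecdotorb2): dH/dt = ((M/H) Z_hat - n X_hat) x H."""
    return np.cross(np.array([-n, 0.0, delta]), Hvec)

t_end = 2*np.pi/omega                     # one precession period
sol = solve_ivp(rhs, (0.0, t_end), H_analytic(0.0), rtol=1e-10, atol=1e-12)
```

The initial condition reproduces $\bm{H}(0)=H[\sin\beta_o,0,\cos\beta_o]^T$, and the integrated state returns to it after one period.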
\begin{figure}[H]
\centering
\includegraphics[width=3.25in]{ZhatxHhat_approx_Horb_evol_t_180_day_dn_vary.pdf}
\caption{$\bm{\hat{H}}(t)$ from Eq.~\ref{eq:H(t)} over 180 days with varying $\delta/n$ (0, 0.5, 1, 2, 100). The green dot denotes the initial state ($\alpha$= 0$^{\circ}$, $\beta=$ 45$^{\circ}$) and the red dots denote the final states for each $\delta/n$.}
\label{fig:H(t)}
\end{figure}
\subsubsection{Influence of End of Life Configurations}
It is important to note that the counter-clockwise ($I_d$, $\beta$) motion in Figure~\ref{fig:eom_contours_17} is just one of the possible evolutionary scenarios. In Benson et al. \cite{benson2020b}, we found that long-term uniform GOES evolution strongly depends on the end of life solar array angle $\theta_{sa}$ (see Figures 8-12 in that paper and the associated discussion). Computing $\overline{M_x}$, $\overline{M_y}$, $\overline{M_z}$, and $\dot{\overline{I}}_d$ over all possible end of life GOES 8 solar array angles with the averaged model, we find the following contours in Figure~\ref{fig:torque_beta_thetasp_sam}. For $\dot{\overline{I}}_d$, $\omega_e=$ 2$\pi$ rad/s was again assumed. Sweeping over $\theta_{sa}$, the averaged components change significantly in sign and magnitude, indicating that $\theta_{sa}$ greatly affects general long-term satellite evolution. The results in Figure~\ref{fig:torque_beta_thetasp_sam} are analogous to the uniform spin-averaged coefficients in Ref. \cite{benson2020b}. The most easily comparable are $\overline{M_z}$ and $\mathcal{C}_{0,z}$ which share very similar structure (see Figures 8 and 11 in that paper). In addition, for $\theta_{sa}$ near odd multiples of 42$^{\circ}$, we find that $\overline{M_x}$, $\overline{M_z}$, and $\dot{\overline{I}}_d$ are approximately zero. These critical $\theta_{sa}$ values also hold for the uniform spin-averaged results in Ref. \cite{benson2020b}. Obviously, these negligible torque configurations are specific to GOES 8's geometry and mass distribution. For other satellites, the averaged framework will allow for fast and efficient studies of the parameter space to identify any similar configurations. These GOES findings illustrate the potential to reduce long-term spin state variation by properly setting end of life configurations.
\begin{figure}[H]
\centering
\subcaptionbox{$\overline{M_x}$}{\includegraphics[width=3in]{tumbling_avg_mx_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\overline{M_y}$}{\includegraphics[width=3in]{tumbling_avg_my_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\overline{M_z}$}{\includegraphics[width=3in]{tumbling_avg_mz_beta_thetasp_id_3500_360.png}}
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{tumbling_avg_iddot_beta_thetasp_id_3500_360.png}}
\caption{GOES 8 Averaged Terms vs. $\beta$ and Solar Array Angle $\theta_{sa}$ (SAM+ $I_d=$ 3500 kg${\cdot}$m$^2$).}
\label{fig:torque_beta_thetasp_sam}
\end{figure}
We will now briefly consider the long-term evolution for GOES 8 with a different solar array angle. Changing GOES 8's $\theta_{sa}$ from 17$^{\circ}$ to 70$^{\circ}$ yields the contours in Figure~\ref{fig:eom_contours_70}. Here, the signs of $\dot{\overline{I}}_d$ and $\dot{\overline{\omega}}_e$ are essentially mirrored about $\beta=$ 90$^{\circ}$ as compared to Figure~\ref{fig:eom_contours_17}. For $\dot{\overline{\beta}}$, the sign is mirrored about the separatrix. Complementing the contours is the six year averaged evolution given by the following initial conditions: $\overline{\alpha}=$ 0$^{\circ}$, $\overline{\beta}=$ 165$^{\circ}$, $\overline{I}\!_{d}=$ 3500 kg${\cdot}$m$^2$ (SAM+), and $P_e=$ 240 min. The satellite goes through several tumbling cycles as in Figure~\ref{fig:eom_contours_17} except that ($I_d$, $\beta$) evolution instead proceeds clockwise with $\beta$ now decreasing over the course of each tumbling cycle.
\begin{figure}[H]
\centering
\subcaptionbox{$\dot{\overline{I}}_d$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_iddot_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\beta}}$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_mx_with_evol.pdf}}
\subcaptionbox{$\dot{\overline{\omega}}_e$}{\includegraphics[width=3in]{HM_Iddot_contours_thetasp_70_wsgn_1_wedot_with_evol.pdf}}
\caption{Same as Figure~\ref{fig:eom_contours_17} except with $\theta_{sa}=$ 70$^{\circ}$ and corresponding averaged evolution.}
\label{fig:eom_contours_70}
\end{figure}
\section{Discussion}
Comparing the full and averaged dynamical models in Section IV, we found that the averaged model captures the tumbling cycles and sun-tracking behavior of the full model. Nevertheless, there were quantitative differences between the two models due to our averaging assumptions. Most notable are the neglect of resonances and the second order Fourier series illumination function approximation. Higher order Fourier series approximations of $\mathrm{max}(0,\bm{\hat{u}}\cdot{\bm{\hat{n}}})$ would yield better agreement with the full model at the expense of increased average model complexity. Another shortfall of the current semi-analytical averaged model is that $\dot{\alpha}$ is singular for $\beta=$ 0$^{\circ}$ and 180$^{\circ}$. Again this could be remedied by replacing $\alpha$ and $\beta$ with an alternate coordinate set when very close to these singularities (e.g. $v=\sin{\alpha}\sin{\beta}$ and $w=\cos{\alpha}\sin{\beta}$ which has a $\beta$ ambiguity). In practice though, these singularities were never encountered during averaged model propagation, so this approach was not implemented in our model. Finally, while this paper only considered solar torques, the averaged model could be readily expanded to include energy dissipation as well as averaged gravity gradient and magnetic torques.
Given the transition from uniform rotation to non-principal axis tumbling observed for the GOES model, it is possible that other satellites could undergo similar transitions. There is clear indication that defunct satellites are exhibiting large amplitude, secular period variations consistent with Figure~\ref{fig:uniform_tumbling_transition} \citep{earl,rachman}. From the active debris removal (ADR)/servicing perspective, this implies that a satellite may not remain in uniform rotation indefinitely. In general, a uniform to tumbling transition would require a secular decrease in uniform spin rate with $\dot{\overline{I_d}}<0$. Furthermore, the results in Figure~\ref{fig:uniform_tumbling_transition} demonstrate that the transition to tumbling could occur quickly, in a couple of weeks or less. From Figures~\ref{fig:eom_contours_17} and \ref{fig:eom_contours_70}, it seems possible for a satellite to escape these tumbling cycles and enter fast uniform rotation, a process that could occur just as rapidly. As a result, target satellite spin state monitoring and prediction will be crucial for ADR and servicing. The possible existence of tumbling cycles would have additional implications for ADR and servicing missions. Slow, uniform rotation would be ideal for rendezvous, capture, and de-spin procedures. Even for proposed ``touchless'' electromagnetic detumbling approaches \cite{gomez2015}, leveraging YORP to partially de-spin a target satellite would reduce the time, energy, and risk required by the ADR/servicing spacecraft. So predicting future windows of slow, uniform rotation between the tumbling phases would be valuable. The above analysis shows that the primary driver of sun-tracking for GOES is the near orthogonality of the solar torque and the sun line. It would be valuable to determine how often this orthogonality holds for different satellites and rocket bodies. In terms of satellite size $r$, $I_d\;{\propto}\;r^5$ and the solar torque $\bm{M}\;{\propto}\;r^3$. 
So Eqs.~\ref{eq:alphadotavg}, \ref{eq:betadotavg}, \ref{eq:Iddotavg} ($\overline{I}_d$ normalized), and \ref{eq:wedotavg} are proportional to $1/r^2$. In other words, reducing satellite size by a factor of ten (maintaining density and optical properties) will cause it to evolve 100 times faster. Similarly, $\delta/n\;{\propto}\;1/r^2$, so sun-tracking precession is equally more effective for smaller satellites.
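This scaling argument amounts to a one-line ratio; the density and pressure constants below are arbitrary placeholders:

```python
def yorp_rate(r, rho=1.0, P=1.0):
    """Characteristic YORP rate scale ~ torque / angular momentum scale.

    With fixed density and optical properties, I_d ~ rho*r^5 and the solar
    torque M ~ P*r^3 for a satellite of size r, so rate ~ M/I_d ~ 1/r^2.
    """
    return (P*r**3) / (rho*r**5)

# a factor-of-ten smaller satellite evolves 100 times faster
ratio = yorp_rate(0.1) / yorp_rate(1.0)
```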
The importance of solar array angle on long-term GOES evolution demonstrates the potential for dictating the post-disposal spin state evolution of defunct satellites by carefully setting their end of life configurations. For example, configurations that minimize $|\dot{\overline{\beta}}|$ could be used to shut off or greatly slow the observed tumbling cycles, facilitating debris removal and servicing missions. Also, minimizing $|\overline{M_z}|$ would reduce spin rates and their variation amplitude, making satellites easier to capture and reducing potential for material shedding.
Here, it is also worthwhile to briefly discuss the implications of our findings for natural small bodies such as asteroids. In YORP simulations, Vokrouhlicky et al. found that small asteroids can exhibit transitions from uniform rotation to tumbling and subsequent tumbling spin up \cite{vok2007}. Given these similarities, it is possible that the tumbling cycles, angular momentum sun-tracking, and tumbling resonances observed here for artificial satellites hold for some asteroids as well. Since solar radiation pressure goes as $1/a^2$, where $a$ is the heliocentric semi-major axis, Eqs.~\ref{eq:alphadotavg} - \ref{eq:wedotavg} will scale the same way. Furthermore, the mean motion $n$ goes as $1/\sqrt{a^3}$, so $\delta/n\;{\propto}\;1/\sqrt{a}$. This implies that uniform to tumbling transitions, tumbling cycles, and sun-tracking precession would be more likely for smaller asteroids in the inner solar system (all else equal). Again, dedicated study of these natural small bodies is needed to determine whether the tumbling-averaged torques provide the necessary structure for this behavior (e.g. near orthogonality of $\bm{\hat{B}}$ and $\bm{M}$).
\section{Conclusions}
This paper illustrates the complex, yet structured Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect behavior for defunct satellites including transitions from uniform rotation to non-principal axis tumbling, angular momentum sun-tracking, tumbling cycles, and resonant YORP tumbling states. To help understand these behaviors, we developed a semi-analytical tumbling-averaged YORP model. This model captures the uniform to tumbling transition, sun-tracking, and tumbling cycle behavior observed with the full dynamics while being roughly three orders of magnitude faster to propagate. Furthermore, the averaged model uncovers the mechanics behind the observed tumbling transition, sun-tracking, and tumbling cycles. Overall, the greater computational efficiency and reduced state space of the averaged framework allows us to more easily classify and analyze the general YORP evolution of different satellites and rocket bodies with various end of life configurations.
\section*{Appendix A: Torque Free Solutions}
Here we summarize the analytical solutions for torque-free rotation. We assume the long axis convention where the $\bm{\hat{b}}_1$, $\bm{\hat{b}}_2$ and $\bm{\hat{b}}_3$ body axes are aligned with the intermediate ($I_i$), maximum ($I_s$), and minimum ($I_l$) principal moments of inertia respectively. 3-1-3 ($\phi$-$\theta$-$\psi$) Euler angles are used to rotate between the $\mathcal{H}$ and $\mathcal{B}$ frames. This is the same convention used in Refs. \cite{sa1991,bensonda14}.
Equating $\bm{H}$ in the $\mathcal{H}$ and $\mathcal{B}$ frames with Eq.~\ref{eq:BH}, we find,
\begin{equation}
a_{z1}=\sin{\theta}\sin{\psi}=\frac{I_i\omega_1}{I_d\omega_e}\;\;\;\;\;\;\;a_{z2}=\sin{\theta}\cos{\psi}=\frac{I_s\omega_2}{I_d\omega_e}\;\;\;\;\;\;a_{z3}=\cos{\theta}=\frac{I_l\omega_3}{I_d\omega_e}
\label{eq:sthetaspsi}
\end{equation}
The angles $\theta$ and $\psi$ can be unambiguously calculated using Eq.~\ref{eq:sthetaspsi} with Eq.~\ref{eq:w_lam} for LAMs or Eq.~\ref{eq:w_sam} for SAMs. The equations for $\phi$ are much more complicated and are provided below.
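In code, this unambiguous recovery amounts to an arccosine for $\theta$ and a two-argument arctangent for $\psi$. A minimal sketch (the inertia and spin values in the round-trip check are arbitrary):

```python
import numpy as np

def euler_theta_psi(w, Ii, Is, Il, Id, we):
    """Recover 3-1-3 angles theta, psi (rad) from Eq. (sthetaspsi).

    w = [w1, w2, w3] are the body rates in the long axis convention
    (b1 -> Ii intermediate, b2 -> Is maximum, b3 -> Il minimum); H = Id*we.
    """
    H = Id*we
    theta = np.arccos(Il*w[2]/H)
    psi = np.arctan2(Ii*w[0]/H, Is*w[1]/H)   # atan2 resolves the quadrant
    return theta, psi

# round-trip check: build w from chosen (theta, psi), then recover them
Ii, Is, Il, Id, we = 4.0, 5.0, 2.0, 3.5, 0.1
H = Id*we
th, ps = 1.0, 2.0
w = [H*np.sin(th)*np.sin(ps)/Ii, H*np.sin(th)*np.cos(ps)/Is, H*np.cos(th)/Il]
th2, ps2 = euler_theta_psi(w, Ii, Is, Il, Id, we)
```

Since $\theta\in[0,\pi]$, the arccosine is unambiguous, and `arctan2` removes the sign ambiguity a plain arctangent would leave in $\psi$.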
\subsection*{Long Axis Modes}
For long axis modes (LAMs), the body frame angular velocity $\boldsymbol{\omega}=[\omega_1,\omega_2,\omega_3]^T$ is given by,
\begin{equation}
\omega_1=\pm\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_i(I_i-I_l)}}\sn{\tau}\;\;\;\;\;\;\omega_2=\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_s(I_s-I_l)}}\cn{\tau}\;\;\;\;\;\;\omega_3=\pm\omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_l(I_s-I_l)}}\dn{\tau}
\label{eq:w_lam}
\end{equation}
where $\sn\tau$, $\cn\tau$, and $\dn\tau$ are Jacobi elliptic functions \citep{sa1991,numericalrecipes,friedman}. The $\pm$ distinguishes between the two possible LAM regions: $+$ for ${\omega}_3>0$ (LAM+) and $-$ for $\omega_3<0$ (LAM-). For LAMs, $\tau$ is given by,
\begin{equation}
\label{eq:tauLAM}
\tau = \tau_o + \omega_e\sqrt{\frac{I_d(I_i-I_l)(I_s-I_d)}{I_lI_iI_s}}(t-t_o)
\end{equation}
where $t$ is the time and $t_o$, $\tau_o$ are the initial values. The period of $\sn\tau$ and $\cn\tau$ is $4K(k)$ while $\dn\tau$ is periodic on $2K(k)$ where $K(k)$ is the complete elliptic integral of the first kind \citep{landau,numericalrecipes},
\begin{equation}
\label{K}
K(k)={\int_0^{\pi/2}}\frac{du}{\sqrt{1-k^{2}\sin^{2}\!u}}
\end{equation}
and $k$ is the modulus. The parameter $n$ features in the torque-free solutions for $\phi$ and $P_{\bar{\phi}}$. For LAMs, $k$ and $n$ are given by,
\begin{equation}
\label{eq:knLAM}
k^2=\frac{(I_s-I_i)(I_d-I_l)}{(I_i-I_l)(I_s-I_d)}\;\;\;\;\;\;n=\frac{I_l}{I_s}\frac{(I_s-I_i)}{(I_i-I_l)}
\end{equation}
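Eqs.~\ref{eq:w_lam} and \ref{eq:knLAM} are straightforward to evaluate with standard Jacobi elliptic function routines. The sketch below uses arbitrary inertias satisfying $I_l<I_d<I_i<I_s$ (a LAM); a useful sanity check is that $|\bm{H}|=I_d\omega_e$ and $2T=I_d\omega_e^2$ hold at every $\tau$:

```python
import numpy as np
from scipy.special import ellipj

# arbitrary principal inertias and spin state with I_l < I_d < I_i < I_s (LAM)
Il, Ii, Is, Id, we = 2.0, 4.0, 5.0, 3.0, 0.1
k2 = (Is - Ii)*(Id - Il)/((Ii - Il)*(Is - Id))    # modulus^2, Eq. (knLAM)

def omega_lam(tau, sign=+1.0):
    """Body rates [w1, w2, w3] for a LAM, Eq. (w_lam); sign=+1 -> LAM+."""
    sn, cn, dn, _ = ellipj(tau, k2)               # scipy takes m = k^2
    w1 = sign*we*np.sqrt(Id*(Id - Il)/(Ii*(Ii - Il)))*sn
    w2 = we*np.sqrt(Id*(Id - Il)/(Is*(Is - Il)))*cn
    w3 = sign*we*np.sqrt(Id*(Is - Id)/(Il*(Is - Il)))*dn
    return np.array([w1, w2, w3])
```

Both invariants follow from $\mathrm{sn}^2+\mathrm{cn}^2=1$ and $\mathrm{dn}^2=1-k^2\mathrm{sn}^2$, and together they reproduce $I_d=H^2/(2T)$.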
For LAMs, the Euler angle $\phi$ is given by,
\begin{equation}
\label{phiLAMPibar}
\phi = \phi_o + \frac{H}{I_l}(t-t_o)-(I_s-I_l)\sqrt{\frac{I_iI_d}{I_lI_s(I_i-I_l)(I_s-I_d)}}\Big[\bar{\Pi}(\tau,n)-\bar{\Pi}(\tau_o,n)\Big]
\end{equation}
where $\bar{\Pi}(\tau,n)$ is the modified incomplete elliptic integral of the third kind. Most routines for calculating the incomplete elliptic integral of the third kind $\Pi(\tau,n)$ (e.g. Ref. \cite{numericalrecipes}) only accept $0\leq\tau\leq{K(k)}$ even though $\tau$ increases unbounded with $t$. To calculate $\bar{\Pi}(\tau,n)$ correctly, we use the following algorithm \citep{bensonda14}. Dropping the implied dependence of $k$ on $K$ for brevity,
\begin{enumerate}
\item If $\tau$ has most recently passed through an even multiple of $K$, i.e. if $\mathrm{mod}(m,2)=0$,
\begin{equation}
\label{Pibareven}
\bar{\Pi}(\tau,n) = m\Pi(K,n)+\Pi(\tau-mK,n)
\end{equation}
\item Instead, if $\tau$ has most recently passed through an odd multiple of $K$, i.e. if $\mathrm{mod}(m,2)=1$,
\begin{equation}
\label{Pibarodd}
\bar{\Pi}(\tau,n) = (m+1)\Pi(K,n)-\Pi\Big((m+1)K-\tau,n\Big)
\end{equation}
\end{enumerate}
Here, $m=\mathrm{int}(\tau/K)$ is the integer multiple and $\mathrm{mod}$ denotes the modulo (remainder after division) operator.
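A minimal Python sketch of this unwrapping algorithm follows; the values of $m=k^2$ and the characteristic $n$ are arbitrary, and $\Pi$ is evaluated by direct quadrature so that the example is self-contained:

```python
from scipy.integrate import quad
from scipy.special import ellipj, ellipk

m = 0.4        # parameter m = k^2 (arbitrary illustrative value)
nchar = 0.25   # characteristic n (arbitrary illustrative value)
K = ellipk(m)

def Pi_inc(tau):
    """Incomplete elliptic integral of the third kind for 0 <= tau <= K,
    Pi(tau, n) = int_0^tau du / (1 - n sn^2(u, k))."""
    return quad(lambda u: 1.0 / (1.0 - nchar * ellipj(u, m)[0]**2), 0.0, tau)[0]

def Pi_bar(tau):
    """Extend Pi to arbitrary tau >= 0 with the even/odd rules above."""
    mult = int(tau // K)
    if mult % 2 == 0:
        return mult * Pi_inc(K) + Pi_inc(tau - mult * K)
    return (mult + 1) * Pi_inc(K) - Pi_inc((mult + 1) * K - tau)

# Pi_bar should agree with brute-force quadrature well beyond tau = K.
tau = 2.7 * K
direct = quad(lambda u: 1.0 / (1.0 - nchar * ellipj(u, m)[0]**2), 0.0, tau)[0]
print(Pi_bar(tau), direct)
```

The algorithm relies on $\sn^2$ having period $2K$ and being symmetric about $\tau=K$, which is why only values of $\Pi$ on $[0,K]$ are ever required.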
For LAMs, the average period of $\phi$ ($P_{\bar{\phi}}$) and the constant period of $\psi$ ($P_\psi$) are given by,
\begin{equation}
\label{PphiLAM}
P_{\bar{\phi}}=\frac{2\pi}{\omega_e}\frac{I_l}{I_d}\Bigg[1-\frac{(I_s-I_l)}{I_s}\frac{\Pi(K,n)}{K}\Bigg]^{-1}
\end{equation}
\begin{equation}
\label{Ppsi_lam}
P_{\psi}=\frac{4}{\omega_e}\sqrt{\frac{I_{l}I_{i}I_{s}}{I_{d}(I_i-I_l)(I_s-I_d)}}K
\end{equation}
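These periods can be checked numerically. The sketch below adopts hypothetical moments of inertia and the LAM$+$ branch of Eq.~\ref{eq:w_lam}, and uses SciPy's \texttt{ellipj}, which takes the parameter $m=k^2$:

```python
import numpy as np
from scipy.special import ellipj, ellipk

# Hypothetical LAM state: principal moments, dynamic moment, and spin rate.
I_l, I_i, I_s, I_d, w_e = 1.0, 2.0, 3.0, 1.5, 0.1
m = (I_s - I_i) * (I_d - I_l) / ((I_i - I_l) * (I_s - I_d))  # m = k^2
K = ellipk(m)

# Rate at which tau advances with time (Eq. tauLAM).
rate = w_e * np.sqrt(I_d * (I_i - I_l) * (I_s - I_d) / (I_l * I_i * I_s))

def omega(t, tau0=0.2):
    """Body frame angular velocity for the LAM+ branch."""
    sn, cn, dn, _ = ellipj(tau0 + rate * t, m)
    w1 = w_e * np.sqrt(I_d * (I_d - I_l) / (I_i * (I_i - I_l))) * sn
    w2 = w_e * np.sqrt(I_d * (I_d - I_l) / (I_s * (I_s - I_l))) * cn
    w3 = w_e * np.sqrt(I_d * (I_s - I_d) / (I_l * (I_s - I_l))) * dn
    return np.array([w1, w2, w3])

# Over one period P_psi, tau advances by exactly 4K, so omega repeats.
P_psi = (4.0 / w_e) * np.sqrt(I_l * I_i * I_s /
                              (I_d * (I_i - I_l) * (I_s - I_d))) * K
print(omega(0.0), omega(P_psi))
```

The product of the $\tau$ rate and $P_\psi$ is exactly $4K$, the common period of $\sn$ and $\cn$, so all three components return to their initial values.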
\subsection*{Short Axis Modes}
For short axis modes (SAMs), the body frame angular velocity $\boldsymbol{\omega}=[\omega_1,\omega_2,\omega_3]^T$ is given by,
\begin{equation}
\label{eq:w_sam}
\omega_1 = \omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_i(I_s-I_i)}}\sn{\tau}\;\;\;\;\;\;\omega_2 = \pm\omega_e\sqrt{\frac{I_d(I_d-I_l)}{I_s(I_s-I_l)}}\dn{\tau}\;\;\;\;\;\;\omega_3 = \pm\omega_e\sqrt{\frac{I_d(I_s-I_d)}{I_l(I_s-I_l)}}\cn{\tau}
\end{equation}
Again $+$ holds for ${\omega}_2>0$ and $-$ holds for $\omega_2<0$ (SAM$+$ and SAM$-$). For SAMs, $\tau$, $k$, and $n$ are,
\begin{equation}
\label{eq:tauSAM}
\tau = \tau_o + \omega_e\sqrt{\frac{I_d(I_s-I_i)(I_d-I_l)}{I_lI_iI_s}}(t-t_o)
\end{equation}
\begin{equation}
\label{eq:knSAM}
k^2 = \frac{(I_i-I_l)(I_s-I_d)}{(I_s-I_i)(I_d-I_l)}\;\;\;\;\;\;n=\frac{I_l}{I_s}\frac{(I_s-I_d)}{(I_d-I_l)}
\end{equation}
For SAMs, $\phi$ is instead given by,
\begin{equation}
\label{phiSAMPibar}
\phi = \phi_o + \frac{H}{I_l}(t-t_o)-(I_s-I_l)\sqrt{\frac{I_iI_d}{I_lI_s(I_s-I_i)(I_d-I_l)}}\Big[\bar{\Pi}(\tau,n)-\bar{\Pi}(\tau_o,n)\Big]
\end{equation}
For SAMs, $P_{\bar{\phi}}$ is also given by Eq.~\ref{PphiLAM} with $n$ from Eq.~\ref{eq:knSAM}. Finally, $P_\psi$ for SAMs is given by,
\begin{equation}
\label{Ppsi_s}
P_{\psi}=\frac{4}{\omega_e}\sqrt{\frac{I_{l}I_{i}I_{s}}{I_{d}(I_s-I_i)(I_d-I_l)}}K
\end{equation}
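The two regimes can be summarised in a short classification sketch. Here we assume, as the $k^2$ expressions require for $0\leq k^2\leq1$, that $I_l\leq I_d\leq I_i$ for LAMs and $I_i\leq I_d\leq I_s$ for SAMs; the numerical values are hypothetical:

```python
def mode_and_modulus(I_l, I_i, I_s, I_d):
    """Classify the spin state by the dynamic moment of inertia I_d and
    return the squared modulus k^2 of the corresponding elliptic functions.
    (Sketch only; the separatrix case I_d = I_i is not treated.)"""
    if I_d < I_i:  # long axis mode
        k2 = (I_s - I_i) * (I_d - I_l) / ((I_i - I_l) * (I_s - I_d))
        return "LAM", k2
    # short axis mode
    k2 = (I_i - I_l) * (I_s - I_d) / ((I_s - I_i) * (I_d - I_l))
    return "SAM", k2

I_l, I_i, I_s = 1.0, 2.0, 3.0
print(mode_and_modulus(I_l, I_i, I_s, 1.4))  # long axis mode
print(mode_and_modulus(I_l, I_i, I_s, 2.6))  # short axis mode
```

Note that the LAM and SAM expressions for $k^2$ are reciprocals of one another, so each regime yields $0\leq k^2\leq1$ on its own side of the separatrix.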
\section*{Appendix B: Averaged Quantities}
From Ref. \cite{friedman}, we can obtain the following elliptic function averages,
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn\tau{d}\tau = 0
\label{eq:az1avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn\tau{d}\tau=0
\label{eq:az2avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn\tau{d}\tau = \frac{\pi}{2K}
\label{eq:az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau{d}\tau = \frac{K - E}{k^2K}
\label{eq:az12avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau{d}\tau = \frac{E - k'^2K}{k^2K}
\label{eq:az22avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^2\!\tau{d}\tau = \frac{E}{K}
\label{eq:az32avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\dn\tau{d}\tau = \frac{\pi}{4K}
\label{eq:az12az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau\dn\tau{d}\tau = \frac{\pi}{4K}
\label{eq:az22az3avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^3\!\tau{d}\tau = \frac{(k'^2+1)\pi}{4K}
\label{eq:az33avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^4\!\tau{d}\tau = \frac{(k^2+2)K-2(k^2+1)E}{3k^4K}
\label{eq:az14avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^4\!\tau{d}\tau = \frac{(4k^2-2)E-k'^2(3k^2-2)K}{3k^4K}
\label{eq:az24avgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\dn^4\!\tau{d}\tau = \frac{2(k'^2+1)E-k'^2K}{3K}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\cn^2\!\tau{d}\tau = \frac{(1+k'^2)E-2k'^2K}{3k^4K}
\label{eq:az12az22avgavgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\sn^2\!\tau\dn^2\!\tau{d}\tau = \frac{(2k^2-1)E+k'^2K}{3k^2K}
\label{eq:az12az32avgavgLAM}
\end{equation}
\begin{equation}
\frac{1}{4K}\int_0^{4K}\cn^2\!\tau\dn^2\!\tau{d}\tau = \frac{(1+k^2)E-k'^2K}{3k^2K}
\label{eq:az22az32avgavgLAM}
\end{equation}
\normalsize
where $E$ is the complete elliptic integral of the second kind \citep{numericalrecipes} and $k'^2=1-k^2$.
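As an independent check (not in the original text), several of these averages can be verified by direct quadrature with SciPy; \texttt{ellipj}, \texttt{ellipk}, and \texttt{ellipe} all take the parameter $m=k^2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipj, ellipk, ellipe

m = 0.6                      # parameter m = k^2 (arbitrary test value)
K, E = ellipk(m), ellipe(m)
kp2 = 1.0 - m                # k'^2

def avg(f):
    """Average of f(sn, cn, dn) over one full period 4K."""
    return quad(lambda u: f(*ellipj(u, m)[:3]), 0.0, 4.0 * K)[0] / (4.0 * K)

# Compare numerical averages with the closed forms quoted above.
print(avg(lambda s, c, d: s**2), (K - E) / (m * K))
print(avg(lambda s, c, d: d**2), E / K)
print(avg(lambda s, c, d: s**2 * d), np.pi / (4.0 * K))
print(avg(lambda s, c, d: c**2 * d**2), ((1 + m) * E - kp2 * K) / (3 * m * K))
```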
\subsection*{Long Axis Modes}
After averaging over $\phi$, we follow Ref. \cite{cicalo} and write all averaged expressions in terms of $\overline{a_{z1}}$, $\overline{a_{z2}}$, and $\overline{a_{z3}}$ because they are independent of $\phi$. Following the notation of Eq.~\ref{eq:psiavgK}, for LAMs we have the following expressions with ($1$, $2$, $3$) subscripts denoting the $\mathcal{B}$ frame vector components and using $f$ as a placeholder for $d$ and $r$.
\small
\begin{equation}
\overline{f_z} = f_3\overline{a_{z3}}
\label{eq:l_fz}
\end{equation}
\begin{equation}
\overline{f_xn_x} = \frac{1}{2}\Big(f_3n_3 - f_1n_1\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_3n_3 - f_2n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_1n_1+f_2n_2\Big)
\label{eq:l_fxnx}
\end{equation}
\begin{equation}
\overline{f_yn_x} = \frac{1}{2}\Big(f_2n_1 - f_1n_2\Big)\overline{a_{z3}}
\label{eq:l_fynx}
\end{equation}
\begin{equation}
\overline{f_zn_z} = f_1n_1\overline{a^2_{z1}} + f_2n_2\overline{a^2_{z2}} + f_3n_3\overline{a^2_{z3}}
\label{eq:l_fznz}
\end{equation}
\begin{equation}
\overline{f_xn_xn_z} = \frac{1}{2}\Big(f_1n_1n_3 + f_2n_2n_3\Big)\overline{a_{z3}} - \frac{1}{2}\Big(f_3n^2_1 + 2f_1n_1n_3 - f_3n^2_3\Big)\overline{a^2_{z1}a_{z3}} - \frac{1}{2}\Big(f_3n^2_2 + 2f_2n_2n_3 - f_3n^2_3\Big)\overline{a^2_{z2}a_{z3}}
\label{eq:l_fxnxnz}
\end{equation}
\begin{equation}
\overline{f_yn_xn_z} = \frac{1}{2}\Big(f_3n_1n_2 - f_2n_1n_3\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_1n_2n_3 - f_3n_1n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_2n_1n_3 - f_1n_2n_3\Big)\overline{a^2_{z3}}
\label{eq:l_fynxnz}
\end{equation}
\begin{equation}
\overline{f_zn^2_x} = \frac{1}{2}\Big(f_3n^2_1 + f_3n^2_2\Big)\overline{a_{z3}} - \frac{1}{2}\Big(f_3n^2_1 + 2f_1n_1n_3 - f_3n^2_3\Big)\overline{a^2_{z1}a_{z3}} - \frac{1}{2}\Big(f_3n^2_2 + 2f_2n_2n_3 - f_3n^2_3\Big)\overline{a^2_{z2}a_{z3}}
\label{eq:l_fznx2}
\end{equation}
\begin{equation}
\overline{f_zn^2_z} = \Big(f_3n^2_1 + 2f_1n_1n_3\Big)\overline{a^2_{z1}a_{z3}} + \Big(f_3n^2_2 + 2f_2n_2n_3\Big)\overline{a^2_{z2}a_{z3}} + f_3n^2_3\overline{a^3_{z3}}
\label{eq:l_fznz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn^3_x} = & +\frac{3}{8}\Big(3f_3n^2_1n_3 -2f_1n^3_1 - 3f_2n^2_1n_2 - f_1n_1n^2_2 + 3f_1n_1n^2_3 +f_3n^2_2n_3 + f_2n_2n^2_3\Big)\overline{a^2_{z1}} \\ &
+ \frac{3}{8}\Big(f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 - 2f_2n^3_2 + 3f_3n^2_2n_3 + 3f_2n_2n^2_3\Big)\overline{a^2_{z2}} \\ &
+ \frac{3}{8}\Big(f_1n^3_1 - 3f_3n^2_1n_3 - 3f_1n_1n^2_3 + f_3n^3_3\Big)\overline{a^4_{z1}}
+ \frac{3}{8}\Big(f_2n^3_2- 3f_3n^2_2n_3 - 3f_2n_2n^2_3 + f_3n^3_3\Big)\overline{a^4_{z2}} \\ &
+ \frac{3}{8}\Big(3f_2n^2_1n_2 - 3f_3n^2_1n_3 + 3f_1n_1n^2_2 - 3f_1n_1n^2_3 - 3f_3n^2_2n_3 - 3f_2n_2n^2_3 + 2f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}} \\ &
+ \frac{3}{8}\Big(f_1n^3_1 + f_2n^2_1n_2 + f_1n_1n^2_2 + f_2n^3_2\Big)
\end{split}
\label{eq:l_fxnx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn^2_z} = & + \frac{1}{2}\Big(f_1n^3_1 - 2f_3n^2_1n_3 + f_2n_2n^2_1 - 4f_1n_1n^2_3 + f_3n^3_3 - f_2n_2n^2_3\Big)\overline{a^2_{z1}} \\ &
+ \frac{1}{2}\Big(f_2n^3_2 - 2f_3n^2_2n_3 + f_1n_1n^2_2 - 4f_2n_2n^2_3 + f_3n^3_3 - f_1n_1n^2_3\Big)\overline{a^2_{z2}} \\ &
+ \frac{1}{2}\Big(-f_1n^3_1 + 3f_3n^2_1n_3 + 3f_1n_1n^2_3 - f_3n^3_3\Big)\overline{a^4_{z1}} \\ &
+ \frac{3}{2}\Big(f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 +f_3n^2_2n_3 + f_2n_2n^2_3 - \frac{2}{3}f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}} \\ &
+ \frac{1}{2}\Big(-f_2n^3_2 + 3f_3n^2_2n_3 + 3f_2n_2n^2_3 - f_3n^3_3\Big)\overline{a^4_{z2}}+ \frac{1}{2}\Big(f_1n_1n^2_3 + f_2n_2n^2_3\Big)
\end{split}
\label{eq:l_fxnxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn^3_x} = & - \frac{3}{8}\Big(f_2n^3_1 - f_1n_2n^2_1 - 3f_2n_1n^2_3 + 2f_3n_2n_1n_3 + f_1n_2n^2_3\Big)\overline{a^2_{z1}a_{z3}} \\ &
+ \frac{3}{8}\Big(f_1n^3_2 - f_2n_1n^2_2 - 3f_1n_2n^2_3 + 2f_3n_1n_2n_3 + f_2n_1n^2_3\Big)\overline{a^2_{z2}a_{z3}} \\ &
- \frac{3}{8}\Big(- f_2n^3_1 + f_1n^2_1n_2 - f_2n_1n^2_2 + f_1n^3_2\Big)\overline{a_{z3}}
\end{split}
\label{eq:l_fynx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn^2_z} = & + \frac{1}{2}\Big(f_2n^3_1 - f_1n_2n^2_1 - 3f_2n_1n^2_3 + 2f_3n_2n_1n_3 + f_1n_2n^2_3\Big)\overline{a^2_{z1}a_{z3}} \\ &
- \frac{1}{2}\Big(f_1n^3_2 - f_2n_1n^2_2 - 3f_1n_2n^2_3 + 2f_3n_1n_2n_3 + f_2n_1n^2_3\Big)\overline{a^2_{z2}a_{z3}}\\ &
- \frac{1}{2}\Big(f_1n_2 - f_2n_1\Big)n^2_3\overline{a_{z3}}
\end{split}
\label{eq:l_fynxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_xn_z} = & + \frac{1}{2}\Big(f_1n^3_1 - 4f_3n^2_1n_3 + f_1n_1n^2_2 - 2f_1n_1n^2_3 - f_3n^2_2n_3 + f_3n^3_3\Big)\overline{a^2_{z1}}\\ &
+ \frac{1}{2}\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_2n_2^3 - 4f_3n_2^2n_3 - 2f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_3n_3n_1^2+f_3n_3n_2^2\Big) \\ &
+ \frac{1}{2}\Big(- f_1n^3_1 + 3f_3n^2_1n_3 + 3f_1n_1n^2_3 - f_3n^3_3\Big)\overline{a^4_{z1}}
+ \frac{1}{2}\Big(-f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{3}{2}\Big(-f_2n^2_1n_2 + f_3n^2_1n_3 - f_1n_1n^2_2 + f_1n_1n^2_3 + f_3n^2_2n_3 + f_2n_2n^2_3 - \frac{2}{3}f_3n^3_3\Big)\overline{a^2_{z1}a^2_{z2}}
\end{split}
\label{eq:l_fznx2nz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^3_z} = & + \Big(3f_3n_1^2n_3 + 3f_1n_1n_3^2 - 2f_3n_3^3\Big)\overline{a_{z1}^2}
+ \Big(3f_3n_2^2n_3 + 3f_2n_2n_3^2 - 2f_3n_3^3\Big)\overline{a_{z2}^2} \\ &
+ \Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ \Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ 3\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_1n_1n_2^2 - f_1n_1n_3^2 - f_3n_2^2n_3 - f_2n_2n_3^2 + \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2}
+ f_3n_3^3
\end{split}
\label{eq:l_fznz3}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z1}\delta_1}= & +\frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^4}
+ \frac{1}{4}\Big(2n_1r_2u_z^2 - n_1r_2u_x^2\Big)\overline{a_{z1}^2a_{z3}} \\ &
+ \frac{2}{3\pi}\Big(6n_1n_2r_3u_x^2u_z - 4n_1n_3r_2u_z^3 - 4n_1n_2r_3u_z^3 + 6n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{4}{3\pi}\Big(2n_1n_3r_2u_z^3 - n_1n_2r_3u_x^2u_z - 2n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z2}\delta_2}= & +\frac{2}{3\pi}\Big(4n_1n_2r_3u_z^3 + 4n_2n_3r_1u_z^3 - 6n_1n_2r_3u_x^2u_z - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^4}
+ \frac{1}{4}\Big(n_2r_1u_x^2 - 2n_2r_1u_z^2\Big)\overline{a_{z2}^2a_{z3}} \\ &
+ \frac{4}{3\pi}\Big(n_1n_2r_3u_x^2u_z - 2n_2n_3r_1u_z^3 + 2n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z3}\delta_3}= & +\frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^2a_{z3}^2}
+ \frac{1}{4}\Big(n_1r_2u_x^2 - 2n_1r_2u_z^2\Big)\overline{a_{z1}^2a_{z3}} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2a_{z3}^2}
- \frac{1}{4}\Big(n_2r_1u_x^2 - 2n_2r_1u_z^2\Big)\overline{a_{z2}^2a_{z3}} \\ &
+ \frac{4}{3\pi}\Big(n_2n_3r_1u_x^2u_z - n_1n_3r_2u_x^2u_z\Big)\overline{a_{z3}^2}
- \frac{1}{4\pi}\Big(n_1r_2u_x^2 - n_2r_1u_x^2\Big)\overline{a_{z3}}
\end{split}
\end{equation}
\subsection*{Short Axis Modes}
\normalsize
The following averaged expressions hold for SAMs,
\small
\begin{equation}
\overline{f_z} = f_2\overline{a_{z2}}
\label{eq:s_fz}
\end{equation}
\begin{equation}
\overline{f_xn_x} = \frac{1}{2}\Big(f_3n_3 - f_1n_1\Big)\overline{a^2_{z1}} + \frac{1}{2}\Big(f_3n_3 - f_2n_2\Big)\overline{a^2_{z2}} + \frac{1}{2}\Big(f_1n_1+f_2n_2\Big)
\label{eq:s_fxnx}
\end{equation}
\begin{equation}
\overline{f_yn_x} = \frac{1}{2}(f_1n_3 - f_3n_1)\overline{a_{z2}}
\label{eq:s_fynx}
\end{equation}
\begin{equation}
\overline{f_zn_z} = f_1n_1\overline{a^2_{z1}} + f_2n_2\overline{a^2_{z2}} + f_3n_3\overline{a^2_{z3}}
\label{eq:s_fznz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn_z} = & + \frac{1}{2}\Big(f_2n_3^2-f_2n_1^2 - 2f_1n_2n_1 + 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}} \\&
+ \frac{1}{2}\Big(f_2n_3^2 - f_2n_2^2 + 2f_3n_2n_3 \Big)\overline{a_{z2}^3}
+ \frac{1}{2}\Big(f_2n_2^2 - f_3n_2n_3 + f_1n_1n_2 - f_2n_3^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fxnxnz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn_z} = & + \frac{1}{2}\Big(f_1n_2n_3 - 2f_2n_1n_3 + f_3n_1n_2\Big)\overline{a_{z1}^2} + \frac{1}{2}\Big(2f_1n_2n_3 - f_2n_1n_3 - f_3n_1n_2\Big)\overline{a_{z2}^2} \\ & + \frac{1}{2}\Big(f_2n_1n_3-f_1n_2n_3\Big)
\end{split}
\label{eq:s_fynxnz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_x} = & + \frac{1}{2}\Big(f_2n_3^2-f_2n_1^2 - 2f_1n_2n_1 + 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}}
+ \frac{1}{2}\Big(f_2n_3^2 - f_2n_2^2 + 2f_3n_2n_3 \Big)\overline{a_{z2}^3} \\ &
+ \frac{1}{2}\Big(f_2n_1^2 + f_2n_2^2 - 2f_3n_3n_2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fznx2}
\end{equation}
\begin{equation}
\overline{f_zn^2_z} = \Big(f_2n_1^2 + 2f_1n_2n_1 - f_2n_3^2 - 2f_3n_2n_3\Big)\overline{a_{z1}^2a_{z2}} + \Big(f_2n_2^2 - 2f_3n_2n_3 - f_2n_3^2\Big)\overline{a_{z2}^3} + \Big(f_2n_3^2 + 2f_3n_2n_3\Big)\overline{a_{z2}}
\label{eq:s_fznz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn^3_x} = & + \frac{3}{8}\Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{3}{8}\Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{9}{8}\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_1n_1n_2^2 - f_1n_1n_3^2 - f_3n_2^2n_3 - f_2n_2n_3^2 + \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{3}{8}\Big(- 2f_1n_1^3 - f_2n_1^2n_2 + 3f_3n_1^2n_3 - f_1n_1n_2^2 + 3f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2\Big)\overline{a_{z1}^2} \\ &
+ \frac{3}{8}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 - 2f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2\Big)\overline{a_{z2}^2} \\ &
+ \frac{3}{8}\Big(f_1n_1^3 + f_2n_1^2n_2 + f_1n_1n_2^2 + f_2n_2^3\Big)
\end{split}
\label{eq:s_fxnx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_xn_xn^2_z} = & + \frac{1}{2}\Big(-f_1n_1^3 + 3f_3n_1^2n_3 + 3f_1n_1n_3^2 - f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{1}{2}\Big(- f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\ &
+ \frac{3}{2}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2 -\frac{2}{3} f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{1}{2}\Big(f_1n_1^3 - 2f_3n_1^2n_3 + f_2n_2n_1^2 - 4f_1n_1n_3^2 + f_3n_3^3 - f_2n_2n_3^2\Big)\overline{a_{z1}^2} \\ &
+ \frac{1}{2}\Big(f_2n_2^3 - 2f_3n_2^2n_3 + f_1n_1n_2^2 - 4f_2n_2n_3^2 + f_3n_3^3 - f_1n_1n_3^2\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_1n_1n_3^2+f_2n_2n_3^2\Big)
\end{split}
\label{eq:s_fxnxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn^3_x} = & + \frac{3}{8}\Big(f_3n_1^3 - f_1n_1^2n_3 - 2f_3n_1n_2^2 + 4f_2n_1n_2n_3 - f_3n_1n_3^2 - 2f_1n_2^2n_3 + f_1n_3^3\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{3}{8}\Big(- 3f_1n_2^2n_3 + f_3n_1n_2^2 + 2f_2n_1n_2n_3 + f_1n_3^3 - f_3n_1n_3^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{3}{8}\Big(- f_3n_1^3 + f_1n_3n_1^2 - f_3n_1n_2^2 - 2f_2n_3n_1n_2 + 3f_1n_3n_2^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fynx3}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_yn_xn^2_z} = & + \frac{1}{2}\Big(- f_3n_1^3 + f_1n_1^2n_3 + 2f_3n_1n_2^2 - 4f_2n_1n_2n_3 + f_3n_1n_3^2 + 2f_1n_2^2n_3 - f_1n_3^3\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{1}{2}\Big(3f_1n_2^2n_3 - f_3n_1n_2^2 - 2f_2n_1n_2n_3 - f_1n_3^3 + f_3n_1n_3^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{1}{2}\Big(- 2f_1n_2^2n_3 + 2f_2n_1n_2n_3 + f_1n_3^3 - f_3n_1n_3^2\Big)\overline{a_{z2}}
\end{split}
\label{eq:s_fynxnz2}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^2_xn_z} = & + \frac{1}{2}\Big(- f_1n_1^3 + 3f_3n_1^2n_3 + 3f_1n_1n_3^2 - f_3n_3^3\Big)\overline{a_{z1}^4}
+ \frac{1}{2}\Big(- f_2n_2^3 + 3f_3n_2^2n_3 + 3f_2n_2n_3^2 - f_3n_3^3\Big)\overline{a_{z2}^4} \\&
+ \frac{3}{2}\Big(- f_2n_1^2n_2 + f_3n_1^2n_3 - f_1n_1n_2^2 + f_1n_1n_3^2 + f_3n_2^2n_3 + f_2n_2n_3^2 -\frac{2}{3} f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{1}{2}\Big(f_1n_1^3 - 4f_3n_1^2n_3 + f_1n_1n_2^2 - 2f_1n_1n_3^2 - f_3n_2^2n_3 + f_3n_3^3\Big)\overline{a_{z1}^2} \\ &
+ \frac{1}{2}\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_2n_2^3 - 4f_3n_2^2n_3 - 2f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^2}
+ \frac{1}{2}\Big(f_3n_3n_1^2 + f_3n_3n_2^2\Big)
\end{split}
\label{eq:s_fznx2nz}
\end{equation}
\begin{equation}
\begin{split}
\overline{f_zn^3_z} = & + \Big(f_1n_1^3 - 3f_3n_1^2n_3 - 3f_1n_1n_3^2 + f_3n_3^3\Big)\overline{a_{z1}^4}
+ 3\Big(f_3n_1^2n_3 + f_1n_1n_3^2 - \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2} \\ &
+ 3\Big(f_2n_1^2n_2 - f_3n_1^2n_3 + f_1n_1n_2^2 - f_1n_1n_3^2 - f_3n_2^2n_3 - f_2n_2n_3^2 + \frac{2}{3}f_3n_3^3\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \Big(f_2n_2^3 - 3f_3n_2^2n_3 - 3f_2n_2n_3^2 + f_3n_3^3\Big)\overline{a_{z2}^4}
+ 3\Big(f_3n_2^2n_3 + f_2n_2n_3^2 - \frac{2}{3}f_3n_3^3\Big)\overline{a_{z2}^2}
+ f_3n_3^3
\end{split}
\label{eq:s_fznz3}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z1}\delta_1}= & \frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^4}
+ \frac{1}{4}\Big(n_1r_3u_x^2 - 2n_1r_3u_z^2\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{2}{3\pi}\Big(6n_1n_2r_3u_x^2u_z - 4n_1n_3r_2u_z^3 - 4n_1n_2r_3u_z^3 + 6n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
+ \frac{4}{3\pi}\Big(2n_1n_3r_2u_z^3 - n_1n_2r_3u_x^2u_z - 2n_1n_3r_2u_x^2u_z\Big)\overline{a_{z1}^2}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z2}\delta_2}= & \frac{2}{3\pi}\Big(4n_1n_2r_3u_z^3 + 4n_2n_3r_1u_z^3 - 6n_1n_2r_3u_x^2u_z - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z1}^2a_{z2}^2} \\ &
- \frac{1}{4}\Big(n_1r_3u_x^2 + n_3r_1u_x^2 - 2n_1r_3u_z^2 - 2n_3r_1u_z^2\Big)\overline{a_{z1}^2a_{z2}} \\ &
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^4}
- \frac{1}{4}\Big(n_3r_1u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}^3} \\ &
+ \frac{4}{3\pi}\Big(n_1n_2r_3u_x^2u_z - 2n_2n_3r_1u_z^3 + 2n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2}
+ \frac{1}{4}\Big(n_1r_3u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\overline{ga_{z3}\delta_3}= & \frac{2}{3\pi}\Big(6n_1n_3r_2u_x^2u_z - 4n_1n_3r_2u_z^3\Big)\overline{a_{z1}^2a_{z3}^2}
+ \frac{2}{3\pi}\Big(4n_2n_3r_1u_z^3 - 6n_2n_3r_1u_x^2u_z\Big)\overline{a_{z2}^2a_{z3}^2} \\ &
- \frac{1}{4}\Big(n_3r_1u_x^2 - 2n_3r_1u_z^2\Big)\overline{a_{z2}a_{z3}^2}
+ \frac{4}{3\pi}\Big(n_2n_3r_1u_x^2u_z - n_1n_3r_2u_x^2u_z\Big)\overline{a_{z3}^2}
\end{split}
\end{equation}
\section*{\MakeUppercase{Acknowledgements}}
\normalsize
This work was supported by a NASA Space Technology Research Fellowship through grant NNX16AM53H. DJS acknowledges support from AFOSR through grant FA9550-18-1-0313.
\section{Introduction}
The number density of galaxy clusters as a function of mass and
redshift can be used to constrain cosmological parameters and probe
the growth of structure
\citep[e.g.,][]{Henry00,Borgani01,Gladders07,Henry09,Mantz10,Vikhlinin09}.
This approach is conceptually straightforward, but the actual
implementation of this method is more difficult. This is because the
clusters are identified based on their baryonic properties (e.g.,
galaxy counts, SZ decrement, X-ray luminosity or temperature), which
need to be related to the underlying dark matter distribution. The
relation between observed cluster properties and the mass depends
itself on the relative importance of the various physical processes
that play a role in galaxy and cluster formation. Galaxy clusters
provide an excellent laboratory to study these, because
multi-wavelength observations provide us with a complete census of the
various components: we can actually observe the stars, gas and dark
matter content.
The use of clusters to constrain cosmology and to improve our
understanding of cluster physics are closely inter-related: feedback
processes and differences in formation history lead to variations in
the observable properties at a given mass. The resulting intrinsic
scatter changes the selection function and thus leads to biased
constraints on cosmological parameters if left unaccounted for.
Correctly interpreting the scatter in the scaling relations requires a
good understanding of the various sources of uncertainty, which can be
either physical or statistical.
A variety of methods can be used to determine the mass of a cluster.
Most of these are based on dynamics and assume the cluster is relaxed.
In this case, the mass can obtained from the velocity dispersion of
the cluster galaxies. Measurements of the gas pressure, obtained from
observations of the hot X-ray emitting intracluster medium (ICM),
provide another powerful tracer of the dark matter content. This
approach, which assumes hydrostatic equilibrium, has been used
extensively, thanks to the high quality observations obtained using
powerful X-ray telescopes such as the {\it Chandra} X-ray Observatory
and {\it XMM-Newton} \citep[e.g.,][]{Vikhlinin09,Mantz10}.
The interpretation of such measurements is often complicated by the
presence of substructures and the fact that most clusters are not relaxed.
A major concern is the assumption of hydrostatic equilibrium,
because numerical simulations have shown that active galactic nuclei,
turbulence, and bulk motions of the gas, as well as variations in the
merging history can lead to systematic underestimates of masses based
on X-ray observations \citep[e.g.,][]{Evrard90, Dolag05, Rasia06,
Nagai07}. Although recent simulations incorporate a wide range of
physical processes it is not clear to what extent the simulations
provide a realistic estimate of the systematic error in the mass. It
is therefore critical to compare dynamical techniques to methods that
do not suffer from these problems.
This is possible thanks to a phenomenon called weak gravitational
lensing: the cluster mass distribution perturbs the paths of photons
emitted by distant galaxies. As a result the images of these
background galaxies appear slightly distorted (or sheared). Unlike
dynamical methods, gravitational lensing does not require one to make
assumptions regarding the dynamical state of the cluster. The
amplitude of this distortion provides us with a direct measurement of
the gravitational tidal field, which in turn provides us with an
estimate for the projected cluster mass. The recent improvements in
sample size and precision allowed \cite{Mahdavi08} to compare weak
lensing and hydrostatic masses for a sample of 18
clusters. \cite{Mahdavi08} found that at large radii the X-ray results
underestimate the mass, in agreement with the findings from numerical
simulations. These findings demonstrate the usefulness of weak lensing
for multi-wavelength studies of galaxy clusters.
Weak lensing masses are now routinely measured for large samples of
clusters \citep[e.g.][]{Hoekstra07,Bardeau07,Okabe10}. Tests on
simulated data have shown that the best analysis methods can reduce
systematic errors in the shear measurements to $\sim 1-2\%$
\citep[][]{STEP1,STEP2}. Much of the recent progress has been driven
by the desire to measure the weak lensing signal caused by intervening
large-scale structure, a.k.a. cosmic shear \citep[for a recent review
see][]{HJ08}. The cosmic shear signal has now been detected at high
significance in a number of surveys \citep[e.g.,][]{Hoekstra02,
Waerbeke05, Hoekstra06, Benjamin07, Fu08, Schrabback10} and is one
of the most promising tools to study dark energy.
This cosmological signal, however, limits the accuracy with which
cluster masses can be determined: the observed lensing signal is a
combination of the cluster signal {\it and} cosmic shear. As first
discussed in \cite{Hoekstra01} the large-scale structure along the
line-of-sight is an additional source of noise, but does not bias the
measurement of the mass. As shown by \cite{Hoekstra03} this ``cosmic
noise'' is particularly relevant when studying cluster mass density
profiles \citep[also see][]{Dodelson04}. Although the effects of
uncorrelated structures along the line-of-sight are often
acknowledged, their contribution to the formal error budget has
typically been ignored. This is somewhat surprising, given that there
is little doubt that cosmic shear has been measured.
In this paper we revisit the predictions presented in
\cite{Hoekstra01,Hoekstra03} using ray-tracing results from the
Millennium Simulation \citep{Springel05,Hilbert09}, demonstrating in
\S3 once more that cosmic noise should not be ignored in weak lensing
studies. We also quantify for the first time the noise introduced by
the finite sampling of the source redshift distribution. In \S4 we
examine whether cosmic noise can be suppressed using readily available
data.
\section{Cosmic noise}
The observed lensing signal is the combination of the shear induced by
the cluster mass distribution {\it and} that of other structures along
the line-of-sight. The expectation value of the latter vanishes, but
it does introduce additional variance in the cluster mass estimate.
The effect of this cosmic noise on weak lensing cluster studies can be
quantified analytically \citep{Hoekstra01,Hoekstra03} or using
numerical simulations \citep{White02}. Not surprisingly, these studies
have shown that the cosmic noise is most important when the cluster
signal itself becomes small: i.e., when data at large cluster-centric
radii are used, or when clusters at low redshifts are studied. Cosmic
noise, however, is also a considerable source of uncertainty for
clusters at intermediate redshifts.
Even for a massive cluster, the induced change in the shape of a
source galaxy's image is typically small compared to its intrinsic
ellipticity. It is therefore convenient to azimuthally average the
tangential shear $\gamma_T$ and study its variation as a function of
radius. It can be related to the surface density through
\begin{equation}
\langle\gamma_T\rangle(r)=\frac{\bar\Sigma(<r) - \bar\Sigma(r)}
{\Sigma_{\rm crit}}=\bar\kappa(<r)-\bar\kappa(r),
\end{equation}
\noindent where $\bar\Sigma(<r)$ is the mean surface density within an
aperture of radius $r$, and $\bar\Sigma(r)$ is the mean surface
density on a circle of radius $r$. The convergence $\kappa$, or
dimensionless surface density, is the ratio of the surface density and
the critical surface density $\Sigma_{\rm crit}$, which is given by
\begin{equation}
\Sigma_{\rm crit}=\frac{c^2}{4\pi G}\frac{D_s}{D_l D_{ls}},
\end{equation}
\noindent where $D_l$ is the angular diameter distance to the
lens. $D_{s}$ and $D_{ls}$ are the angular diameter distances from the
observer to the source and from the lens to the source,
respectively.
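For readers who wish to evaluate $\Sigma_{\rm crit}$, the sketch below computes the angular diameter distances from scratch for a flat $\Lambda$CDM model; the cosmological parameters are illustrative (chosen to resemble the simulation cosmology used later) and the lens and source redshifts are hypothetical:

```python
import numpy as np
from scipy.integrate import quad

Om, H0 = 0.25, 73.0                  # illustrative flat LambdaCDM parameters
c_kms = 2.99792458e5                 # speed of light in km/s
DH = c_kms / H0                      # Hubble distance in Mpc

def D_C(z1, z2):
    """Comoving distance between z1 and z2 in Mpc (flat universe)."""
    Ez = lambda z: np.sqrt(Om * (1 + z)**3 + (1 - Om))
    return DH * quad(lambda z: 1.0 / Ez(z), z1, z2)[0]

def D_A(z1, z2):
    """Angular diameter distance; in a flat universe D_ls = D_C(z1, z2)/(1 + z2)."""
    return D_C(z1, z2) / (1.0 + z2)

def sigma_crit(zl, zs):
    """Critical surface density (SI units, kg/m^2)."""
    G, c, Mpc = 6.674e-11, 2.99792458e8, 3.0857e22
    Dl, Ds, Dls = D_A(0, zl) * Mpc, D_A(0, zs) * Mpc, D_A(zl, zs) * Mpc
    return c**2 / (4.0 * np.pi * G) * Ds / (Dl * Dls)

# Sigma_crit decreases for more distant sources: they are lensed more strongly.
print(sigma_crit(0.2, 1.0), sigma_crit(0.2, 2.0))
```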
The variance in the azimuthally averaged tangential shear in an
annulus ranging from $r_1$ to $r_2$ caused by large-scale structure
along the line-of-sight is given by \citep{Hoekstra03}:
\begin{equation}
\sigma^2_{\rm LSS}(r_1,r_2)= 2\pi\int_0^\infty dl~l
P_\kappa(l) g^2(l,r_1,r_2), \label{eqmap}
\end{equation}
\noindent where the convergence power spectrum $P_\kappa(l)$ is given
by:
\begin{equation}
P_\kappa(l)=\frac{9 H_0^4 \Omega_m^2}{4 c^4}
\int_0^{w_H} dw \left(\frac{\bar W(w)}{a(w)}\right)^2
P_\delta\left(\frac{l}{f_K(w)};w\right).
\end{equation}
\noindent Here $w$ is the radial coordinate, $a(w)$ the cosmic scale
factor, and $f_K(w)$ the comoving angular diameter
distance. $P_\delta(l;w)$ is the matter power spectrum. We consider
relatively small scales, and therefore need to account for the
non-linear evolution \citep[e.g.][]{Jain97, Schneider98,
Hilbert09}. $\bar W(w)$ is the average ratio of angular
diameter distances $D_{ls}/D_{s}$ for a redshift distribution of
sources $p_w(w)$:
\begin{equation}
\bar W(w)=\int_w^{w_H} dw' p_w(w')\frac{f_K(w'-w)}{f_K(w')}.
\end{equation}
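The behaviour of $\bar W(w)$ is easy to visualise with a discrete sketch. For a flat universe $f_K(w)=w$, and the source distance distribution below is a hypothetical Gaussian rather than the distribution used in this paper:

```python
import numpy as np

# Comoving source distances (Mpc) and a hypothetical distance distribution.
w_s = np.linspace(100.0, 3000.0, 400)
dw = w_s[1] - w_s[0]
p_w = np.exp(-0.5 * ((w_s - 1500.0) / 500.0)**2)
p_w /= p_w.sum() * dw                # normalise so that int p_w dw = 1

def W_bar(w):
    """Mean lensing efficiency <f_K(w'-w)/f_K(w')> for a lens plane at w,
    with f_K(w) = w (flat universe); sources in front contribute zero."""
    eff = np.where(w_s > w, (w_s - w) / w_s, 0.0)
    return (p_w * eff).sum() * dw

# W_bar is 1 at the observer and falls off as the lens approaches the sources.
print(W_bar(0.0), W_bar(500.0), W_bar(2900.0))
```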
The function $g(l,r_1,r_2)$ in Eqn.~(\ref{eqmap}) is a filter of the
convergence power spectrum and is specified by our choice to consider
the azimuthally averaged tangential shear. We refer to
\cite{Hoekstra03} for a more detailed discussion, including the
expression for $g(l,r_1,r_2)$. In this paper we measure the
azimuthally averaged tangential shear as a function of radius
$r=(r_1+r_2)/2$, in bins that are $r_2-r_1=15$ arcseconds wide.
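The binning procedure can be illustrated with a noiseless mock catalogue. We assume a singular isothermal sphere, for which $\gamma_T=\theta_E/(2\theta)$, and recover the tangential shear $\gamma_T=-(e_1\cos2\phi + e_2\sin2\phi)$ in 15 arcsecond annuli; all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock source positions (arcsec) around a singular isothermal sphere lens
# with Einstein radius theta_E, for which gamma_T = theta_E / (2 theta).
theta_E = 10.0
x = rng.uniform(-300.0, 300.0, 20000)
y = rng.uniform(-300.0, 300.0, 20000)
theta, phi = np.hypot(x, y), np.arctan2(y, x)
gT_true = theta_E / (2.0 * theta)

# Noiseless shear components of a purely tangential pattern.
e1 = -gT_true * np.cos(2.0 * phi)
e2 = -gT_true * np.sin(2.0 * phi)

# Azimuthally averaged tangential shear in 15 arcsec wide annuli.
edges = np.arange(30.0, 300.0, 15.0)
mids, means = [], []
for r1, r2 in zip(edges[:-1], edges[1:]):
    sel = (theta >= r1) & (theta < r2)
    gT = -(e1[sel] * np.cos(2.0 * phi[sel]) + e2[sel] * np.sin(2.0 * phi[sel]))
    mids.append(0.5 * (r1 + r2))
    means.append(gT.mean())

# The binned profile should track theta_E / (2 r).
print(mids[0], means[0], theta_E / (2.0 * mids[0]))
```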
\subsection{Source redshift sampling}
In this paper we identify another source of error, which is important
for high redshift clusters and at small radii. It arises because the
amplitude of the lensing signal depends on the redshift of the
sources. At large distances from the cluster, the signal is obtained
by averaging over a relatively large number of galaxies, thus sampling
the average redshift distribution fairly well. However, at small radii
the number of sources is much smaller, leading to a large variance in
the actual redshift distribution $n(z)$. This problem can be dealt
with using photometric redshifts for the sources, but the required
increase in observing time may make this difficult to achieve in
practice. This sampling variance depends on the width of the redshift
distribution and can be estimated from observations of blank fields
\citep[e.g.,][]{Ilbert06,Ilbert09}. This estimate, however, does not
account for the clustering of the source galaxies, which increases the
scatter further. To include the effects of source clustering, one can
place corresponding apertures in observed fields with redshift
information and measure the scatter. This approach, however, requires a rather large
survey area. To quantify how the {\it combination} of distant (i.e.,
uncorrelated) large-scale structure and variations in the source
redshift distribution affect weak lensing mass determinations we need
a realistic distribution of source galaxies, which themselves are part
of the large-scale structure. This requires mock data sets based on
cosmological numerical simulations of a large area.
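A toy Monte Carlo experiment (entirely illustrative, with made-up numbers) makes the effect concrete: the mean lensing efficiency estimated from only a handful of sources, as in the innermost radial bins, scatters far more between realisations than one estimated from many sources:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lens comoving distance and random source distances in a flat universe,
# where the per-source efficiency is (w_s - w_l)/w_s.
w_l = 600.0
draws = rng.uniform(800.0, 3000.0, (5000, 100))  # 5000 realisations x 100 sources

def scatter(N):
    """Realisation-to-realisation scatter of the mean efficiency of N sources."""
    eff = (draws[:, :N] - w_l) / draws[:, :N]
    return eff.mean(axis=1).std()

# Few sources (inner bins) sample the distance distribution poorly.
print(scatter(5), scatter(100))
```

The scatter shrinks roughly as $N^{-1/2}$, so a factor of 20 fewer sources increases the sampling noise by about $\sqrt{20}\simeq4.5$.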
\subsection{Numerical simulations}
We use results from the Millennium Simulation \citep{Springel05}, which is
a large $N$-body simulation following the evolution of $2160^3$ dark matter
particles in a periodic box with a side length of $500\,h^{-1}\,{\rm Mpc}$,
using a flat $\Lambda$CDM cosmology\footnote{The values
for the cosmological parameters that were adopted are: a matter density of
$\Omega_m=0.25$, a cosmological constant with $\Omega_\Lambda=0.75$, a Hubble
constant of $H_0=73$km/s/Mpc, a spectral index $n=1$ and a normalisation
$\sigma_8=0.9$ for the primordial power spectrum of density fluctuations.}.
The lensing signal is obtained from a careful ray-tracing analysis
presented in detail in \cite{Hilbert09}. The simulation is carried out
by dividing the periodically continued matter distribution of the
Millennium Simulation into $36$ slices out to $z=3$, each with a
thickness of $\approx 100\,h^{-1}\,{\rm Mpc}$. These are subsequently
projected onto lens planes, and a set of light rays is propagated
through the array of planes. Using a set of recursion relations, the
ray positions on each plane and the Jacobian matrices for the light
paths from the observer to each plane are obtained. Different
realizations are obtained by choosing different observer positions, in
our case yielding 512 patches of one square degree each.
The periodic repetition of structures along the line-of-sight, which
is caused by the finite volume of the Millennium Simulation, is
minimized by choosing a line-of-sight direction that encloses
carefully chosen angles with the faces of the simulation box. The
advantage of this approach is that the matter distribution remains
continuous across slice boundaries, so that correlations on scales
larger than the thickness of the redshift slices are
maintained.
Information on the properties of galaxies is obtained from
the semi-analytic models of galaxy formation by \cite{DeLucia07}.
Combined with the ray-tracing results, this allows us to obtain
realistic lensed positions and magnitudes for each galaxy, together
with shear and convergence at the galaxies' locations. For our galaxy
catalogues, we impose a magnitude cut of $r_{\rm SDSS}<25$.
The average redshift distribution of these source galaxies is presented
in Figure~\ref{zdist}. The results agree fairly well with photometric
redshift distributions from the COSMOS survey (\cite{Ilbert09}; red dotted
histogram) and the CFHT Legacy Survey (\cite{Ilbert06}; blue
dashed histogram). The actual redshift distributions appear to peak at
somewhat lower redshift, but this difference is not important for the
study presented here.
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{zdist_sim.eps}}
\begin{small}
\caption{Redshift distribution $n(z)$ of the simulated source
galaxies, with apparent magnitudes $r<25$ (solid line). For
comparison redshift histograms for the COSMOS survey (red dotted
histogram; Ilbert et al., 2009) and the CFHTLS (blue dashed
histogram; Ilbert et al., 2006) are shown (same selection in
apparent magnitude). The difference may be due to limitations of the
simulation, but may also reflect incompleteness at high redshifts in
the case of the photometric redshift catalogs. Nonetheless, the
redshift distribution derived from the simulations is adequate for
our study.
\label{zdist}}
\end{small}
\end{center}
\end{figure}
\subsection{Cluster Signal}
The numerical simulations provide a realistic lensing signal that
would be observed in a random patch of sky. This signal is also
present when a cluster of galaxies is studied: the observed lensing
signal is the combination of that of the cluster and the distant
large-scale structure (distant in the sense that it does not know
about the cluster). To simulate this, we can simply add the
cluster signal to that from the simulations. We assume that the
density profile of a cluster is described by the NFW \citep{NFW}
profile
\begin{equation}
\rho(r)=\frac{M_{\rm vir}}{4\pi f(c)}\frac{1}{r(r+r_s)^2},
\end{equation}
\noindent where $M_{\rm vir}$ is the virial mass, the mass enclosed
within the radius $r_{\rm vir}$. The virial radius is related to the
scale radius $r_s$ through the concentration $c=r_{\rm vir}/r_s$ and
the function $f(c)=\ln(1+c)-c/(1+c)$. By definition, the virial mass
and radius are related through
\begin{equation}
M_{\rm vir}=\frac{4\pi}{3} \Delta_{\rm vir}(z)\rho_{\rm bg}(z)r_{\rm vir}^3,
\end{equation}
\noindent where $\rho_{\rm bg}=3H_0^2\Omega_m(1+z)^3/(8\pi G)$ is the
mean density at the cluster redshift and the virial overdensity
$\Delta_{\rm vir}\approx (18\pi^2+82\xi-39\xi^2)/\Omega_m(z)$, with
$\xi=\Omega_m(z)-1$ \citep{Bryan98}. For the $\Lambda$CDM cosmology
considered here, $\Delta_{\rm vir}(0)=337$. Expressions for the
surface density and tangential shear of the NFW profile have been
derived by \cite{Bartelmann96} and \cite{Wright00} and we refer the
interested reader to these papers for the relevant equations.
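The normalisation of this profile can be checked directly: integrating $4\pi r^2\rho(r)$ out to $r_{\rm vir}$ must return $M_{\rm vir}$, since $f(c)$ is defined for exactly this purpose. A short numerical sketch in Python (the halo parameters below are illustrative and not taken from the text):

```python
import numpy as np

def f(c):
    """NFW mass function f(c) = ln(1+c) - c/(1+c)."""
    return np.log(1.0 + c) - c / (1.0 + c)

def rho_nfw(r, m_vir, r_s, c):
    """NFW density profile rho(r) = M_vir / (4 pi f(c)) / (r (r + r_s)^2)."""
    return m_vir / (4.0 * np.pi * f(c)) / (r * (r + r_s) ** 2)

# Illustrative halo parameters (arbitrary choices for this check):
m_vir = 1.0e15          # virial mass [M_sun]
c = 5.0                 # concentration
r_vir = 2.0             # virial radius [Mpc]
r_s = r_vir / c         # scale radius

# Integrate 4 pi r^2 rho(r) out to r_vir; the enclosed mass should
# equal M_vir by construction of the f(c) normalisation.
r = np.logspace(-4, np.log10(r_vir), 20000)
m_enclosed = np.trapz(4.0 * np.pi * r ** 2 * rho_nfw(r, m_vir, r_s, c), r)
print(m_enclosed / m_vir)  # ~1
```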
In simulations of collisionless cold dark matter the NFW density
profile provides a good description of the radial mass distribution
for halos with a wide range in mass \citep[e.g.,][]{NFW95,NFW}. The
density profile is described by specifying $M_{\rm vir}$ and
concentration $c$ (or equivalently $r_s$). Numerical simulations,
however, indicate that the average concentration depends on the halo
mass and the redshift \citep{NFW95,Bullock01,Duffy08}. To account for
this correlation we use the relation between the virial mass $M_{\rm
vir}$ and concentration $c$ from \cite{Duffy08} who studied
numerical simulations using the best fit parameters of the WMAP5
cosmology\footnote{This is a different cosmology from the one used to
run the Millennium Simulation, but we note that the actual choice of
mass-concentration relation is not important for the main results
presented in this paper.} \citep{Komatsu09}. The best fit $c(M_{\rm
vir})$ is given by:
\begin{equation}
c=7.85\left({\frac{M_{\rm vir}}{2\times 10^{12}\,h^{-1}\hbox{M$_\odot$}}}\right)^{-0.081}{(1+z)^{-0.71}}.\label{cmrel}
\end{equation}
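Evaluated directly, this relation gives the concentration of a halo of given mass and redshift. A minimal Python sketch (assuming, as in Duffy et al. 2008, that $M_{\rm vir}$ is expressed in the same $h^{-1}\,$M$_\odot$ units as the pivot mass):

```python
def duffy_concentration(m_vir, z):
    """c(M, z) = 7.85 (M_vir / 2e12)^(-0.081) (1 + z)^(-0.71).
    m_vir is assumed to be in the same (h^-1 Msun) units as the pivot."""
    return 7.85 * (m_vir / 2.0e12) ** -0.081 * (1.0 + z) ** -0.71

# A massive cluster at moderate redshift; the concentration decreases
# with both halo mass and redshift:
print(duffy_concentration(1.0e15, 0.2))  # ~4.2
```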
Simulations show considerable variation in the density profiles,
resulting in a lognormal distribution of $c$ with a scatter
$\sigma_{\log c}\sim 0.1$ for halos of a given mass
\citep[e.g.,][]{Jing00,Neto07}. Furthermore, \cite{Neto07} showed that
the concentration distributions are different for `relaxed' and
`unrelaxed' halos. Although these physical variations are an
additional source of uncertainty when attempting to constrain the
mass-concentration relation observationally, they are not relevant for
our study of cosmic noise.
\section{Results}
For each of the 512 realisations from the Millennium Simulation we
measure the azimuthally averaged tangential shear as a function of
radius from the centre of the image. The solid black line in
Figure~\ref{gt_lss} shows the resulting dispersion $\sigma_{\rm
LSS}=\langle\gamma_T^2\rangle^{1/2}$ as determined from the 512
realisations. The prediction based on the \cite{PD96}
prescription for the non-linear evolution of the power spectrum is
indicated by the red line. On large scales ($>5'$) the prediction is
about 15\% lower, which we attribute to the adopted non-linear power
spectrum. This difference is consistent with the conclusions of
\cite{Hilbert09} who compared various prescriptions for the non-linear
power spectrum.
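The measurement itself is straightforward. A sketch of the azimuthal averaging, using the standard convention $\gamma_T=-(\gamma_1\cos 2\phi+\gamma_2\sin 2\phi)$ about the field centre (the input shear pattern below is synthetic, purely for verification):

```python
import numpy as np

def tangential_shear_profile(x, y, g1, g2, bins):
    """Azimuthally averaged tangential shear about the origin,
    with g_T = -(g1 cos 2phi + g2 sin 2phi)."""
    phi = np.arctan2(y, x)
    r = np.hypot(x, y)
    g_t = -(g1 * np.cos(2 * phi) + g2 * np.sin(2 * phi))
    idx = np.digitize(r, bins)
    return np.array([g_t[idx == i].mean() for i in range(1, len(bins))])

# Sanity check: a purely tangential pattern of amplitude 0.01 should
# return 0.01 in every radial bin.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 5000))
phi = np.arctan2(y, x)
g1, g2 = -0.01 * np.cos(2 * phi), -0.01 * np.sin(2 * phi)
profile = tangential_shear_profile(x, y, g1, g2, np.linspace(0.1, 1.0, 5))
print(profile)  # each bin ~0.01
```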
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{scat_lss.eps}}
\begin{small}
\caption{The dispersion $\sigma_{\rm
LSS}=\langle\gamma_T^2\rangle^{1/2}$ introduced by distant large
scale structure. The solid black line shows the dispersion measured
from the 512 realisations from the Millennium Simulation. At small
radii the small number of sources introduces additional scatter
(indicated by the dashed blue curve). The smooth red line
corresponds to the analytical prediction from Hoekstra (2003). The
prediction does not account for noise arising from the finite number
of sources and should be compared to the dotted black line (which is
corrected for this effect). The prediction is about 15\% lower,
which is due to the adopted non-linear power spectrum (see
text and Hilbert et al. 2009).
\label{gt_lss}}
\end{small}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{scat_all.eps}}
\begin{small}
\caption{The dispersion $\langle \gamma_T^2\rangle^{\frac{1}{2}}$ for a cluster with $M_{\rm
vir}=10^{15}M_\odot$ with redshift $z=0.2, 0.4$ and 0.6
(solid black, dashed red and dotted blue lines, respectively). At large
radii the cosmic shear contribution, which is independent of cluster
redshift, dominates the dispersion. On small scales, the dispersion
in source redshifts increases with lens redshift.
\label{scat_all}}
\end{small}
\end{center}
\end{figure}
Close pairs of galaxies are sheared by similar amounts if all sources
are at the same redshift. In this case, the dispersion in the
tangential shear in a given radial bin would be small (for a given
realisation). This is not true for actual observations, because the
source redshift distribution is broad (see Figure~\ref{zdist}). At
large radii, where the signal is averaged over many galaxies, the
source redshift distribution is expected to be close to the average.
At small radii this is not a good representation, because the small
number of sources samples the average distribution only sparsely.
This leads to additional noise if photometric redshifts for the
sources are not available. Unlike the distant large-scale
structure, this effect is only relevant at small radii.
On small scales we can assume that the dispersion in the tangential
shear for a single realisation is predominantly caused by the spread
in source redshifts. The resulting mean dispersion as a function of
radius is indicated by the dashed blue curve in Figure~\ref{gt_lss}.
This is the contribution to $\sigma_{\rm LSS}$ caused by the finite
sampling of the source redshift distribution. If we remove this source
of scatter, the agreement between the \cite{Hoekstra03} prediction and
the Millennium Simulation is excellent (as indicated by the dotted
line), keeping in mind the difference in amplitude of the non-linear
power spectrum based on \cite{PD96}. Hence, variations in the actual
source redshift distribution lead to an increase in the observed
variance at radii $\lesssim 4'$. Most of the noise caused by the lack
of photometric redshifts arises from the fact that the redshift
distribution is broad, but we expect the scatter to be boosted further
because the sources themselves are clustered. Comparison with the
simulations confirms this, but the increase in scatter is modest: the
increase is only $\sim 20\%$ compared to the estimate based on the
$n(z)$ alone.
The lack of knowledge of the actual source redshift distribution
contributes to the uncertainty in the cluster mass because it leads to
scatter in the ratio $\beta=D_{ls}/D_s$ in the expression for the
critical surface density. For a low redshift cluster most sources are
at much higher redshifts and $\beta\sim 1$. Consequently the variation
in $\beta$ is small. As the lens redshift approaches the mean
source redshift, however, the variation in $D_{ls}/D_s$ increases. This is
demonstrated in Figure~\ref{scat_all}, which shows the dispersion for
a cluster with mass $M_{\rm vir}=10^{15}M_\odot$ at various redshifts.
At large radii the variation in the redshift distribution is
negligible and the dispersion converges to the cosmic shear signal. At
small radii, the scatter caused by the variation in the source
redshift distribution increases rapidly with cluster redshift. Note
that deeper observations will improve the sampling of the redshift
distribution because of the larger number of sources. However, at the
same time the average source redshift will increase, resulting in a
larger cosmic noise contribution \citep[see][]{Hoekstra01}.
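The behaviour of $\beta=D_{ls}/D_s$ is easy to illustrate numerically. For a flat cosmology $\beta$ reduces to $(\chi_s-\chi_l)/\chi_s$ in terms of comoving distances, which can be integrated directly (a sketch with distances in units of $c/H_0$, using the $\Omega_m=0.25$ of the simulation):

```python
import numpy as np

def comoving_distance(z, omega_m=0.25, n=2048):
    """Dimensionless comoving distance (units of c/H0) in flat LCDM."""
    zz = np.linspace(0.0, z, n)
    e_of_z = np.sqrt(omega_m * (1 + zz) ** 3 + (1 - omega_m))
    return np.trapz(1.0 / e_of_z, zz)

def beta(z_lens, z_source, omega_m=0.25):
    """beta = D_ls/D_s; in a flat universe this is (chi_s - chi_l)/chi_s.
    Sources in front of the lens carry no signal, i.e. beta = 0."""
    if z_source <= z_lens:
        return 0.0
    chi_l = comoving_distance(z_lens, omega_m)
    chi_s = comoving_distance(z_source, omega_m)
    return (chi_s - chi_l) / chi_s

# For a low-redshift lens beta is close to 1 and varies little between
# sources; for a higher-redshift lens the spread is much larger:
for z_l in (0.05, 0.2, 0.4, 0.6):
    print(z_l, beta(z_l, 0.7), beta(z_l, 2.0))
```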
\subsection{Mass estimates}
In this section we study the combined effect of cosmic noise and the
finite sampling of the source redshift distribution on the weak
lensing mass estimate of a cluster with a virial mass $M_{\rm
vir}=10^{15}M_\odot$. We assume it follows an NFW profile with the
concentration given by Eqn.~(\ref{cmrel}) and add the corresponding
lensing signal to the shear inferred from the ray-tracing analysis,
yielding 512 realisations of the cluster lensing signal. We fit an
NFW model to the resulting lensing signal out to an outer radius
$R_{\rm out}$. The innermost point is $7\farcs5$, but the results do
not depend much on this choice: the small number of sources at these
radii means they have a low statistical weight in the fit. We consider
only shape noise as a source of uncertainty and determine the best fit
masses from a standard least-squares fit.
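A stripped-down version of such a fit can be sketched as follows. We evaluate the NFW excess surface density $\Delta\Sigma(R)=\bar\Sigma(<R)-\Sigma(R)$ numerically (proportional to $\gamma_T$ for a single source plane), generate noise-free data for a $10^{15}\hbox{M$_\odot$}$ halo, and recover $M_{\rm vir}$ by least squares with the concentration tied to the mass. The virial-radius scaling and the omission of $\Sigma_{\rm crit}$ and shape noise are simplifications for illustration only:

```python
import numpy as np

def nfw_delta_sigma(r_proj, m_vir, z=0.2):
    """Excess surface density DeltaSigma(R) = Sigma_bar(<R) - Sigma(R)
    of an NFW halo, evaluated numerically.  The concentration follows
    the c(M) relation in the text; the normalisation
    r_vir = 2 Mpc (M/1e15)^(1/3) is a rough, illustrative scaling."""
    c = 7.85 * (m_vir / 2e12) ** -0.081 * (1 + z) ** -0.71
    r_vir = 2.0 * (m_vir / 1e15) ** (1.0 / 3.0)
    r_s = r_vir / c
    norm = m_vir / (4 * np.pi * (np.log(1 + c) - c / (1 + c)))

    def rho(r):
        return norm / (r * (r + r_s) ** 2)

    # Sigma(R) = 2 * int rho(sqrt(R^2 + l^2)) dl along the line of sight
    los = np.logspace(-4, 2, 400)
    r_grid = np.logspace(-3, np.log10(r_proj.max()), 300)
    sigma = np.array([2 * np.trapz(rho(np.sqrt(R ** 2 + los ** 2)), los)
                      for R in r_grid])
    # Sigma_bar(<R) = (2/R^2) * int_0^R Sigma(R') R' dR'
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (sigma[1:] * r_grid[1:] + sigma[:-1] * r_grid[:-1])
        * np.diff(r_grid))))
    sigma_bar = 2 * cum / r_grid ** 2
    return np.interp(r_proj, r_grid, sigma_bar - sigma)

# Noise-free "observations" of a 1e15 Msun cluster, fit out to 3 Mpc:
r_obs = np.linspace(0.2, 3.0, 15)
data = nfw_delta_sigma(r_obs, 1.0e15)

# One-parameter least-squares fit: grid in log10(M), with the
# concentration tied to the mass through the same c(M) relation.
masses = 10 ** np.linspace(14.7, 15.3, 101)
chi2 = [np.sum((nfw_delta_sigma(r_obs, m) - data) ** 2) for m in masses]
m_best = masses[int(np.argmin(chi2))]
print(m_best)  # recovers the input mass
```

With cosmic noise added to `data` (as in the text), the same fit would scatter around the input mass rather than recover it exactly.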
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{mvir_mc.eps}}
\begin{small}
\caption{Histogram of the best fit virial masses $M_{\rm vir}$ for the
512 realisations for lenses at $z=0.05,0.1,0.2$ and 0.4, when we
adopt the mass-concentration relation from Duffy et al. (2008). The
red histograms show the distribution of results when only distant
large scale structure is considered. For low redshifts the
distribution of masses is skewed, but the average remains unbiased
(indicated by the vertical dashed line). The black histograms show
the results when the variation in the source redshift distribution
is included. Note that these results do not include any shape noise.
\label{mvir_mc}}
\end{small}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\caption{Dispersion in $M_{\rm vir}$ and $c$ \label{tabfree}}
\begin{tabular}{lcccccc}
\hline
\multicolumn{1}{c}{} &\multicolumn{2}{c}{adopting $c(M)$} &\multicolumn{4}{c}{$M_{\rm vir}$ \& $c$ free parameters} \\
\multicolumn{1}{c}{} &\multicolumn{1}{c}{$R_{\rm out}=10'$}&\multicolumn{1}{c}{$R_{\rm out}=25'$}&\multicolumn{2}{c}{$R_{\rm out}=10'$}&\multicolumn{2}{c}{$R_{\rm out}=25'$}\\
$z_{\rm lens}$ & $\sigma_M$ & $\sigma_M$ & $\sigma_M$ & $\sigma_c$ & $\sigma_M$ & $\sigma_c$ \\
\hline
\multicolumn{7}{c}{LSS only}\\
\hline
0.05 & 0.25 & 0.22 & 0.69 & 0.82 & 0.43 & 0.81 \\
0.1 & 0.15 & 0.17 & 0.35 & 0.53 & 0.25 & 0.63 \\
0.2 & 0.12 & 0.15 & 0.21 & 0.44 & 0.21 & 0.59 \\
0.4 & 0.13 & 0.18 & 0.21 & 0.48 & 0.23 & 0.69 \\
0.6 & 0.18 & 0.24 & 0.28 & 0.61 & 0.29 & 0.88 \\
\hline
\multicolumn{7}{c}{variation in $n(z)$ only}\\
\hline
0.05 & 0.01 & 0.01 & 0.01 & 0.04 & 0.01 & 0.03 \\
0.1 & 0.02 & 0.01 & 0.02 & 0.08 & 0.01 & 0.06 \\
0.2 & 0.03 & 0.02 & 0.03 & 0.25 & 0.02 & 0.23 \\
0.4 & 0.07 & 0.06 & 0.05 & 0.33 & 0.05 & 0.31 \\
0.6 & 0.10 & 0.08 & 0.08 & 0.45 & 0.07 & 0.41 \\
\hline
\multicolumn{7}{c}{combination of LSS and variation in $n(z)$}\\
\hline
0.05 & 0.25 & 0.22 & 0.73 & 0.83 & 0.44 & 0.81 \\
0.1 & 0.14 & 0.16 & 0.40 & 0.56 & 0.24 & 0.65 \\
0.2 & 0.11 & 0.14 & 0.23 & 0.54 & 0.21 & 0.67 \\
0.4 & 0.12 & 0.15 & 0.22 & 0.65 & 0.21 & 0.82 \\
0.6 & 0.16 & 0.22 & 0.28 & 0.85 & 0.26 & 1.08 \\
\hline
\multicolumn{7}{c}{statistical}\\
\hline
0.05 & 0.17 & 0.10 & 0.93 & 1.27 & 0.16 & 0.66 \\
0.1 & 0.12 & 0.08 & 0.25 & 0.81 & 0.10 & 0.54 \\
0.2 & 0.10 & 0.07 & 0.15 & 0.65 & 0.09 & 0.52 \\
0.4 & 0.12 & 0.10 & 0.15 & 0.73 & 0.11 & 0.63 \\
0.6 & 0.17 & 0.14 & 0.19 & 1.00 & 0.15 & 0.89 \\
\hline
\hline
\end{tabular}
\end{center}
{\footnotesize Dispersions in the values for the best-fit virial mass
$M_{\rm vir}$ (in units of $10^{15}\hbox{M$_\odot$}$) and concentration $c$ as a
function of lens redshift and the maximum radius that is used in the
fit. We list results for the effects of distant large-scale
structure and variation in the source redshift distribution
separately and combined. These results do not include the
statistical error due to the intrinsic ellipticities of the source
galaxies. The statistical errors (but now without LSS contributions)
are given in the fourth set of results.}
\end{table}
We first consider the case where $M_{\rm vir}$ is the only parameter
that is fit, because the concentration is specified through
Eqn.~\ref{cmrel}. Figure~\ref{mvir_mc} shows the resulting
distribution of masses for clusters at various redshifts if we fit the
NFW model out to $R_{\rm out}=10'$. For low redshifts the distribution
is somewhat skewed, but the mean value is unbiased (as indicated by
the vertical dashed lines). Table~\ref{tabfree} lists the values for
the scatter $\sigma_M$ caused by the combined effects of cosmic noise
and source redshift variation. Comparison with the statistical errors
(computed assuming a total intrinsic source ellipticity of 0.3) shows
that the cosmic noise contribution is quite comparable at all
redshifts. Cosmic noise is minimal at intermediate redshifts
$(0.2<z<0.4)$.
The increase for $z>0.4$ is caused by the fact that the angular extent
of the cluster decreases, whereas the aperture $R_{\rm out}$ is kept
fixed. The dashed lines in Figure~\ref{scat_rout} show how the cosmic
noise depends on $R_{\rm out}$. For reference, Table~\ref{tabfree}
also lists the values for $\sigma_M$ for $R_{\rm out}=25'$. For $z=0.05$
(black line) the cosmic noise decreases with aperture size, but at
higher redshifts it increases. However, the statistical uncertainty
decreases with increasing $R_{\rm out}$, as indicated by the dotted
lines. The solid lines indicate the net result: for $z=0.05$ there is
a net gain, but for a cluster with $z>0.2$ (blue line) there is no
benefit in extending the fit beyond $10'$.
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{scat_rout.eps}}
\begin{small}
\caption{The solid lines show the total uncertainty in the best-fit
virial mass (for $M_{\rm vir}=10^{15}\hbox{M$_\odot$}$) as a function of the
maximum radius used to fit the NFW for a lens redshift of $z=0.05$
(black), $z=0.1$ (red) and $z=0.2$ (blue). The contribution from the
shape noise (i.e., statistical error) is indicated by the dotted
curves, whereas the dashed lines show the LSS contribution. For
$z=0.05$ there is a clear benefit from wide-field imaging data, but
at $z=0.2$ the uncertainty is flat for $R_{\rm out}>10'$.
\label{scat_rout}}
\end{small}
\end{center}
\end{figure}
We also compare the relative contributions of the distant large-scale
structure and the variation in $n(z)$. The latter is computed by using
the simulated redshift distribution, but without adding the cosmic
noise to the cluster signal. To compute the former, we add the cluster
shear computed for the average source redshift to the ray-tracing
results. Interestingly, the combined effect of LSS and variation in
$n(z)$ is to slightly reduce the scatter in the recovered masses,
compared to the LSS-only case. This can be easily understood: a
structure at lower redshift will increase the lensing signal, but will
also increase the number of sources at these redshifts. As the latter
are lensed less than the average source, they partly offset the
increase in lensing signal. Note that the combined effect does not
bias the cluster mass estimates.
The lensing signal of high redshift clusters can be boosted by
removing foreground galaxies using photometric redshift information
and optimally weighing the remaining sources based on their
$D_{ls}/D_s$. However, the cosmic noise also increases rapidly with
source redshift: $\sigma_{\rm LSS}\propto z^{1.4}$ for $z<1$, which
might limit the expected improvement in precision. The redshift
distribution used here drops quickly beyond $z\sim 1$ and we find that
for clusters with $z>0.4$ the photometric redshift information does
improve the mass measurements.
\subsection{Joint constraints on mass and concentration}
So far we examined the effect of cosmic noise when one assumes a
mass-concentration relation which is based on numerical simulations.
Instead, many studies fit the lensing signal with both $M_{\rm vir}$
and $c$ as free parameters. This allows one to directly constrain the
concentration and therefore test the numerical simulations
\citep[e.g.,][]{Clowe02,Hoekstra02,Mandelbaum08,Okabe10}. Cosmic
noise, however, significantly increases the formal uncertainties in
such measurements \citep{Hoekstra03}. Figure~\ref{mvirc} shows the
distribution of best fit values for $M_{\rm vir}$ and concentration
$c$ for the 512 realisations when fitting both parameters
simultaneously. The contours indicate the statistical uncertainties in
the parameters, whereas the points show the spread due to cosmic noise
and variation in the source redshift distribution. Table~\ref{tabfree}
lists the scatter in the parameters. It is clear that cosmic noise has
a large impact on the ability to constrain the concentrations. In
particular, note the outliers with high inferred masses and low
concentrations.
We also examined whether cosmic noise biases the slope of the
mass-concentration relation that is inferred from studies of samples
of clusters. For instance, \cite{Okabe10} obtain a power-law slope of
$0.40\pm0.19$, which is steeper than is seen in numerical
simulations. They examined the correlation in parameters due to the
shape noise for simulated profiles with masses $M_{\rm
vir}=0.2-1.5\times 10^{15}\hbox{M$_\odot$}$. \cite{Okabe10} find a bias of
0.06, much smaller than the observed value. We performed a similar
test and find that cosmic noise also biases the inferred slope, but by
a similar amount. The combined effect of shape and cosmic noise is a
bias of only 0.08 in the slope, because the correlation between
$M_{\rm vir}$ and $c$ is similar for both sources of error. We note
that the inferred slope is steeper if a smaller range in mass is
considered: we find a bias of 0.17 for a range $M_{\rm
vir}=1-1.5\times 10^{15}\hbox{M$_\odot$}$. Hence, the most important
consequence of including cosmic noise is to reduce the significance of the
measurement of the slope of the mass-concentration relation.
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{mvir_c.eps}}
\begin{small}
\caption{Distribution of best-fit $M_{\rm vir}$ and $c$ when both parameters
are free to vary. The points indicate the spread in results when distant
large-scale structure and source redshift variation are included. The contours
indicate the regions that enclose $68\%$ and $90\%$ of the fits when only
statistical (shape) noise is considered. The red line shows the mass-concentration
relation from Duffy et al. (2008).
\label{mvirc}}
\end{small}
\end{center}
\end{figure}
\section{Reducing cosmic noise}
The results presented in the previous section indicate that weak
lensing studies should include cosmic noise in their error budget. An
interesting question is whether one can reduce, or even
remove, the effects of cosmic noise. A statistical approach was
discussed by \cite{Dodelson04} who proposed a minimum variance
estimator to account for cosmic noise in mass reconstructions. A
concern, however, is that substructures associated with the cluster
might be suppressed as well.
\subsection{Accounting for additional clusters}
In this section we will explore whether the observations themselves
can be used to reduce the cosmic noise. Although one can imagine many
different ways to predict the cosmic noise signal, we will consider a
relatively simple method. It requires only a minimum of colour
information and is therefore readily available: most studies include
(some) colour information to identify cluster members.
Massive collapsed structures, such as galaxy clusters and groups of
galaxies, contribute a large fraction of the power on the physical
scales relevant for cosmic noise. Fortunately, they can be identified in
multi-colour data, similar to what is done in optical cluster
surveys. The most massive systems can readily be located using a
red-sequence method \citep[e.g.,][]{Gladders00}. Photometric
redshifts, involving more colours, can be used to find lower mass
halos. For instance, \cite{Milkeraitis10} used the Millennium
Simulation to examine how well one can identify clusters using five
optical filters. They find that clusters with masses larger than $\sim
5\times 10^{13}\hbox{M$_\odot$}$ can be detected with fairly high completeness ($\sim
80\%$) and low false detection rate ($\sim 20\%$).
After having identified the clusters one needs to estimate their
contribution to the lensing signal. Here we take an optimistic
approach and assume we can find all halos down to a virial mass limit
$M_{\rm lim}$. In practice such a clear mass limit may be more
difficult to achieve. We fit these halos simultaneously with the
cluster of interest (where we ignore shape noise and assume the halos
follow an NFW profile with our adopted mass-concentration
relation). For a limiting mass $M_{\rm lim}=5\times 10^{13}\hbox{M$_\odot$}$ on
average 5.4 halos are fit in addition to the input cluster (with
actual numbers ranging from 0 to 12). We find that this procedure does
not bias the recovered cluster mass.
Figure~\ref{scat_mlim} shows the resulting scatter in the best fit
virial mass as a function of the mass limit of the halos included in
the fit. The solid lines show the results when the NFW model is fit
out to 10', whereas the dashed lines show the results for $R_{\rm
out}=25'$. For reference, the left panel indicates the corresponding
statistical uncertainties in the virial mass due to the shape noise.
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{scat_mlim.eps}}
\begin{small}
\caption{The scatter in the best-fit virial mass (for $M_{\rm
vir}=10^{15}\hbox{M$_\odot$}$) as a function of $M_{\rm lim}$, the minimum
mass of halos that are fit simultaneously with the cluster of
interest. The improvement is largest for a cluster at $z=0.05$
(black line). The benefits are smaller for $z=0.1$ (red line) and
$z=0.2$ (blue line). The solid lines show the results when the NFW
model is fit out to 10', whereas the dashed lines show the results
for $R_{\rm out}=25'$. For reference, the left panel shows the
statistical uncertainty in the virial mass due to the intrinsic
shapes of the source galaxies.
\label{scat_mlim}}
\end{small}
\end{center}
\end{figure}
The results suggest our simple approach is indeed able to reduce the
effect of cosmic noise. Figure~\ref{scat_mlim} shows that this is most
relevant for clusters at very low redshifts. However, even with a
(low) mass limit of $M_{\rm lim}=5\times 10^{13}\hbox{M$_\odot$}$, the cosmic
noise remains a dominant source of uncertainty. We have examined several
other approaches, such as using the luminosities of galaxies to
predict the lensing signal, and found that none is able to
significantly improve upon the relatively simple approach outlined
above. We will now attempt to understand why this is the case.
\subsection{Limitations}
To compute the cosmic noise signal in \S3 we need the non-linear power
spectrum of density fluctuations to account for the fact that
collapsed halos increase the power on small scales. In general it is
computed using (fitting functions to) numerical simulations, such as
the prescription of \cite{PD96} that we used here. The observation
that dark matter halos are well described by NFW profiles allows for
an analytic approach as suggested by \cite{Seljak00}. In this model
the abundance of halos is given by the halo mass function and their
clustering is described by a mass dependent bias relation. The dark
matter profiles themselves are described by spherical NFW profiles
that are functions of the mass only (i.e., they follow a
mass-concentration relation). The resulting power spectrum is the sum
of the contribution from a Poisson term that corresponds to individual
halos $P^{\rm P}(k)$ and a term arising from the clustering of halos
$P^{\rm hh}(k)$ themselves. This halo model has proven useful for
studying the clustering of galaxies and interpreting the galaxy-mass
cross-correlation function. On small scales the Poisson term
dominates. It is given by
\begin{equation}
P^{\rm P}(k)=\frac{1}{(2\pi)^3}\int {\rm d}\nu f(\nu)\frac{M(\nu)}{\bar\rho}
\left|y(k,M(\nu))\right|^2,\label{ppoisson}
\end{equation}
\noindent where $\bar\rho$ is the mean matter density and $y(k,M)$ is
the ratio of the Fourier transform of the halo profile $\hat\rho(k)$
and the halo mass $M(\nu)$. The peak height $\nu$ of such an overdensity is
given by
\begin{equation}
\nu=\left[\frac{\delta_{\rm c}(z)}{\sigma(M)}\right]^2,
\end{equation}
\noindent where $\delta_{\rm c}$ is the value of a spherical
overdensity at which it collapses at a redshift $z$. $\sigma(M)$ is
the rms fluctuation in spheres that contain mass $M$ at an initial
time, extrapolated to $z$ using linear theory. The function $f(\nu)$
is related to the halo mass function ${\rm d}n/{\rm d}M$ through
\begin{equation}
\frac{{\rm d}n}{{\rm d}M}{\rm d}M=\frac{\bar\rho}{M}f(\nu){\rm d}\nu.
\end{equation}
\noindent We use the expressions from \cite{Sheth01} for $f(\nu)$ and the $M(c)$
relation from \cite{Duffy08} to compute the Poisson term. The halo-halo
term is important on large scales and is computed by integrating over the
mass function with the halo bias $b(\nu)$ and Fourier transform of the density
profile
\begin{equation}
P^{\rm hh}(k)=P_{\rm lin}(k)\left(\int {\rm d}\nu f(\nu) b(\nu) y(k,M(\nu))\right)^2,
\label{phh}
\end{equation}
\noindent where $P_{\rm lin}(k)$ is the linear power spectrum.
\begin{figure}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=\hsize
\epsffile{scat_cor.eps}}
\begin{small}
\caption{The cosmic noise signal (LSS-only) as a function of
radius. The black line is the total signal and the dashed black line
is the signal from the halo model. The Poisson term in the halo
model has been multiplied by 1.15 to match the simulations. The
solid red line indicates the cosmic noise when halos with $M_{\rm
vir}>5\times 10^{13}\hbox{M$_\odot$}$ are included in the fit (see text for
details). The red dashed line shows the halo model signal if such
halos are removed perfectly. The shaded region indicates our
estimate for the range in scatter when the uncertainties are taken
into account.
\label{scat_cor}}
\end{small}
\end{center}
\end{figure}
The black line in Figure~\ref{scat_cor} shows the cosmic noise
(LSS-only) measured from the simulations. The dashed black line is the
halo model prediction, where the Poisson term has been multiplied by
1.15 to match the simulations. The solid red line in
Figure~\ref{scat_cor} shows $\sigma_{\rm LSS}$ if we fit all halos
with $M_{\rm vir}>5\times 10^{13}\hbox{M$_\odot$}$ as described in the previous
section. The theoretical limit of the reduction in power by accounting
for massive halos along the line-of-sight is obtained by integrating
Eqns.~(\ref{ppoisson}) and~(\ref{phh}) up to $M_{\rm lim}$, rather
than extending the integral over all masses. The dashed line in
Figure~\ref{scat_cor} shows the corresponding result for $M_{\rm
lim}=5\times10^{13}\hbox{M$_\odot$}$. It is clear that this estimate
overestimates the reduction in cosmic noise, compared to the actual
simulated results.
The reason for this is simple: the theoretical limit implicitly
assumes that the halo masses were determined perfectly, which clearly
is too optimistic. Differences in the true mass $M_{\rm t}$ and the
fitted mass $M_{\rm f}$ add additional power to the theoretical limit.
For the Poisson term the residual power $P^{\rm P}_{\rm res}$ is given
by:
\begin{eqnarray}
P^{\rm P}_{\rm res}(k)&=& \frac{1}{(2\pi)^3}\int
\limits^{\infty}_{M_{\rm lim}} {\rm d}\nu f(\nu)\frac{M_{\rm t}(\nu)}{\bar\rho}\times \nonumber\\
& &\hspace{-1.0cm}\int\limits_0^\infty {\rm d}M_{\rm f}
\left|y(k,M_{\rm t})-y(k,M_{\rm f})\right|^2W(M_{\rm t}-M_{\rm f}),
\end{eqnarray}
\noindent where $W(M_{\rm t}-M_{\rm f})$ describes the distribution of
the difference between the true and recovered masses. Comparison with
the simulations shows that $W$ can be approximated by a Gaussian with
a dispersion $\sigma$ that depends on the halo mass, with
$\sigma=2.3\times 10^{13}\hbox{M$_\odot$}+0.28\times M_{\rm t}$. We need to add
the contribution $P^{\rm hh}$ to $P^{\rm P}_{\rm res}$, which in the
ideal case is integrated up to $M_{\rm lim}$. However, we cannot fit
the contributions from halos outside the field of view and the actual
halo-halo contribution will lie between the ideal case and the full
$P^{\rm hh}$.
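The impact of this mass uncertainty is easily quantified: near the limiting mass, the scatter between true and fitted masses is a large fraction of the mass itself, so much of the Poisson power from these halos survives. A quick Monte Carlo sketch of the Gaussian model for $W$:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigma_fit(m_true):
    """Scatter between true and fitted halo mass, as calibrated on the
    simulations: sigma = 2.3e13 Msun + 0.28 * M_true."""
    return 2.3e13 + 0.28 * m_true

# Halos at the limiting mass M_lim = 5e13 Msun: the fitted masses
# scatter by roughly 74 per cent of the true mass.
m_true = 5.0e13
m_fit = m_true + rng.normal(0.0, sigma_fit(m_true), 200_000)
print(m_fit.std() / m_true)  # ~0.74
```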
The shaded region in Figure~\ref{scat_cor} indicates the expected
range in $\sigma_{\rm LSS}$ when we account for the uncertainties in
the modelling. The actual results agree very well with our estimates
based on the halo model. In reality the situation is even more dire,
because we ignored shape noise in the calculations presented here.
\section{Conclusions}
We used the Millennium Simulation to study how large-scale structure
along the line-of-sight (cosmic noise) affects the uncertainty in the
weak lensing masses of clusters of galaxies. After accounting for
differences in the calculation of the non-linear power spectrum of
density fluctuations, analytical estimates agree well with the
simulations. The simulations therefore support the findings by
\cite{Hoekstra01,Hoekstra03} that cosmic noise is a relevant source of
uncertainty in weak lensing mass determinations and therefore should
be included in the reported error budget. We do note that the adopted
$\sigma_8$ in the simulation is higher than the currently favoured
value, which reduces the amplitude of the cosmic noise somewhat.
We also examined whether variations in the source galaxy redshift
distribution are an important source of uncertainty. Although the
importance increases with the redshift of the cluster, we find it is
never significant when compared to statistical errors or cosmic noise.
For the simulated redshift distribution of sources used here we find that source
redshift information improves the precision of the mass measurement,
because the boost in lensing signal by the removal of foreground
galaxies is larger than the increase in cosmic noise due to the
increase in the mean source redshift.
Finally we examined whether it is possible to reduce the effect of
cosmic noise by identifying galaxy clusters and groups along the
line-of-sight. Such structures can be located fairly easily in
multi-colour data. We study a simple approach where we fit the masses
of these additional structures down to a mass $M_{\rm lim}$ and find
that cosmic noise can indeed be reduced, in particular for clusters at
very low redshifts $(z\sim 0.05)$. Nonetheless, the cosmic noise
remains a dominant source of uncertainty. To better understand the
limitations of modelling the contribution from distant large-scale
structure, we computed the expected signals using the halo model. We
find that the uncertainties (or variations) in the profiles
fundamentally limit the suppression of cosmic noise. As a consequence,
cosmic noise will remain a dominant source of uncertainty in weak
lensing cluster mass measurements, and should not be ignored.
\section*{Acknowledgments} The authors would like to thank Tim Schrabback
for useful comments. JH and SH acknowledge support by the Deutsche
Forschungsgemeinschaft within the Priority Programme 1177 under the
project SCHN 342/6 and by the German Federal Ministry of Education and
Research (BMBF) through the TR33 ``The Dark Universe''. HH
acknowledges support from the Netherlands Organization for Scientific
Research (NWO) and a Marie Curie International Reintegration Grant.
\bibliographystyle{mn2e}
\section{Introduction}
Galaxies were long seen as stable islands scattered through the
universe. We now know that galaxies are not randomly distributed in
space: they reside in groups, subject both to the expansion of the universe
and to mutual gravitational interaction. Interactions between galaxies have
substantially modified cosmic structures throughout the evolution of the
Universe. These events are driven by the attractive character of
gravity, which in larger systems induces collisions, tidal forces
and dynamical friction \citep{1999AJ....117.2695R}. The strongest perturbations on
interacting systems are due to tidal
forces, which can strip large quantities of material to form bridges
and tails, injecting chemically processed interstellar material into
intergalactic space and contaminating regions up to 10 times larger
than the diameters of the interacting galaxies \citep{1997ASPC..114...71D}.
One of the many types of interaction occurs when a ring of gas,
dust and stars lies perpendicular to a galaxy's main plane. These
systems are known as polar ring galaxies (PRGs), peculiar systems with
early-type or elliptical host galaxies. The term ``polar ring galaxy'' was first
introduced by \cite{1983AJ.....88..909S} and later used by \cite{1987ApJ...314..439W}.
\cite{1990AJ....100.1489W} published the ``Polar Ring Catalog'' (PRC), with a
total of 157 objects: 6 kinematically confirmed (rotation detected in two
orthogonal planes), 27 galaxies listed as ``good candidates'', 73 as ``possible
candidates'', and 51 as ``related objects''. \cite{1998ApJ...499..635B} discuss
the origin of the fundamental observational properties of polar ring
galaxies. \cite{1998A&AS..129..357F} provide a more
comprehensive classification of all collisional ring galaxies, which
includes the PRGs. In the nearby Universe, 20 PRGs have recently
been confirmed in the catalog of \cite{2009Natur.461...43G}, and \cite{2011MNRAS.418..244M} present a new catalogue of polar-ring galaxy candidates selected from the SDSS.
\cite{2003A&A...401..817B} reviewed two scenarios for the formation of PRGs: (1) the merging scenario, in which a head-on collision occurs between two spiral galaxies whose discs are orthogonal; and (2) the accretion
scenario, in which the host captures material from another galaxy during an
interaction to form the ring. Both scenarios require a
specific geometric configuration for the formation of a polar ring.
In addition, \cite{2006ApJ...636L..25M} proposed (3) the \textit{cold accretion
scenario} for the formation of isolated PRGs. Based on a large cosmological
hydrodynamical simulation, they showed that polar rings can form naturally
in a hierarchical universe, where most low-mass galaxies are assembled through
the accretion of cold gas infalling along megaparsec-scale filamentary
structures.
Here we report the results of a study of the PRG \mbox{AM\,2020-504}, based
on broad-band images and long-slit spectroscopy obtained at
the Observat\'orio Pico dos Dias, Brazil. The main goal of this paper is to
investigate the formation scenario of this system by determining the oxygen
abundances of the star-forming regions located in the ring and by inferring
the dust and gas content of the system. This was done through broad-band
imaging and spectroscopic data, from which we study the kinematics as well as
the surface and aperture photometry. In
Section \ref{sec2} we present a review of \mbox{AM\,2020-504}.
Observations and data reduction are presented in Section \ref{sec3}. The results
and discussion are presented in Section \ref{sec4}, while the conclusions are
given in Section \ref{con}.
\section{\mbox{AM\,2020-504} review} \label{sec2}
\mbox{AM\,2020-504} consists of a narrow ring
surrounding a very bright host galaxy. This object appears in many PRG catalogs
\citep[e.g.][]{1990AJ....100.1489W, 2003A&A...401..817B, 2002A&A...383..390R,
1998A&AS..129..357F, 2004A&A...422..941C}. Based on photometric and
spectroscopic observations, \cite{1993A&A...267...21A} concluded that the
material of the ring has been likely accreted, as indicated by the kinematical
decoupling of the inner core of the host galaxy, the different color of the
material in the ring and in the galaxy, and the large amount of HI, quite
unusual for an E galaxy. They modeled the surface brightness of the galaxy,
assuming that the central component is seen edge-on, to determine the geometry
of the system. They found that the luminosity profile of the host galaxy is well
described by an oblate Jaffe model with axis ratio $c/a = 0.6$ for R$>9\hbox{$^{\prime\prime}$} $,
where $c$ and $a$ are the minor and major axes of the galaxy, and $R$ is
the galactic radius.
The intrinsic inclination of the ring plane derived using the \mbox{(B-R)} and
H$\alpha$ images is consistent with the ring being very nearly polar. The ring
is warped and tilted 18$^\circ$ from edge on, passing with the NE side in front
of the elliptical galaxy. \cite{1993A&A...268..103A} reproduced the UV
SED with a model consisting of an elliptical galaxy plus a starburst, which
gives a lower limit of $1.5\times10^8$ yr for the age of the polar ring,
consistent with the structure being quite young.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure01.eps}
\caption{Objects near the polar ring galaxy \mbox{AM\,2020-504}
(Table~\ref{field_vel}). Image obtained from
the DSS.}\label{field}
\end{figure}
The field near \mbox{AM\,2020-504} is shown in Figure \ref{field}, where
it is labeled A. \mbox{AM\,2020-504} has no detected leftover
material or bridges connecting it to another structure or surrounding galaxy.
We found six other objects belonging
to the group in a search field of 30\hbox{$^\prime$} around \mbox{AM\,2020-504}.
These galaxies cover a velocity range of 730\,km/s. Closest
to \mbox{AM\,2020-504}, at a projected distance of $5\hbox{$^\prime$} $, is
2MASXJ2023488-5043492
(label B), with a velocity difference of 330\,km/s. Objects C (ESO\,234-G016)
and D (ESO\,234-G017) have radial velocities very similar to that of
\mbox{AM\,2020-504} (relative velocities of \mbox{190 km/s} and
\mbox{5 km/s}, respectively),
forming a sort of plane in radial velocity space (see Table~\ref{field_vel}).
The coordinates of the objects, as well as their radial velocities, were obtained
from NED\footnote[1]{NASA/IPAC Extragalactic Database (NED) is operated by the
Jet Propulsion Laboratory, California Institute of Technology, under contract
with the National Aeronautics and Space Administration}.
\begin{table}
\scalefont{0.8}
\begin{center}
\caption{Galaxies within 30\hbox{$^\prime$} of \mbox{AM\,2020-504}. Labels are
according to Figure \ref{field}.}
\label{field_vel}
\begin{tabular}{|c|l|c|c|c|}
\hline
\textbf{Label} & \textbf{Name} & \textbf{Velocity} & \textbf{Relative} &
\textbf{Distance} \\
\textbf{} & \textbf{} & \textbf{(km/s)} & \textbf{Velocity (km/s)} &
\textbf{(arcmin)} \\
\hline
\hline
A & \mbox{AM\,2020-504} & 5006$\pm$43 & 0 & 0 \\
B & 2MASXJ2023488-5043492 & 4676$\pm$45 & -330 & 4.7 \\
C & ESO\,234-G016 & 5196$\pm$27 & 190 & 7.9 \\
D & ESO\,234-G017 & 5011 & 5 & 8.3 \\
E & NGC\,6899 & 5731$\pm$10 & 724 & 13.8 \\
F & ESO\,234-G013 & 4786 & -220 & 14.3 \\
G & 2MASXJ20241155-5022394 & 5648 & 642 & 16.7 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Observations and data reduction} \label{sec3}
\subsection{Broadband optical imagery}
Photometric observations were performed in July 2008 with the 1.6-m telescope at
Observat\'orio Pico dos Dias (OPD), Brazil.
The telescope was equipped with direct imaging Camera 1 and
CCD\,106, a back-illuminated 1024$\times$1024 detector.
The data were acquired with standard Johnson B, V, R and I filters. Calibration
was accomplished using repeated observations of standard stars in the
\cite{1992AJ....104..340L} fields Mark A and PG1323-086. The log of
observations is given in Table \ref{obs}.
Data reductions were performed in the standard manner using the
IRAF\footnote[1]{Image Reduction and Analysis Facility is developed and
maintained by the National Optical Astronomy Observatories (NOAO)} package.
This included dark and bias subtraction and flat-field correction (we used a
mean of several dome flats taken in the appropriate filter). Cosmic rays were
removed manually by masking them with the median of adjacent pixels.
\begin{table}
\begin{center}
\caption {Log of CCD image observations.}
\label{obs}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Date} & \textbf{Band-pass} & \textbf{Exp. (s)}\\
\hline
\hline
2008 Jul 07-08 & B & 6 x 300 \\
2008 Jul 07-08 & V & 6 x 300 \\
2008 Jul 07-08 & R & 6 x 300 \\
2008 Jul 07-08 & I & 6 x 240 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Spectroscopic observations}
The spectroscopic observations were performed in June and September 2008 with
the 1.6-m telescope at OPD, equipped with a Cassegrain spectrograph and CCD105,
a back-illuminated 2048$\times$2048 detector. Diffraction gratings of 300
lines/mm and 600 lines/mm were used. The 300 lines/mm grating provided a
larger spectral coverage (4100--8600\,\AA), while the 600 lines/mm grating
was used to obtain higher resolution around the main lines H$\beta$,
[O\,III]$\lambda$5007,
H$\alpha$, [N\,II]$\lambda$6584 and [S\,II]$\lambda$6716,$\lambda$6731.
For the observations we used two slit positions, one along
the major axis of the host galaxy (slit 1) and another along the major axis of
the ring (slit 2). Slit 1 has a position angle of $72.5^\circ$ N-E and slit 2
of $17^\circ$ N-W, as shown in Figure
\ref{gala}. Spectrophotometric standard stars were observed each night to perform the flux
calibration. These are tertiary standards from \cite{1981PASP...93....5B}, as revised by
\cite{1992PASP..104..533H}; see also
\cite{1994PASP..106..566H}. Arc lamps were taken before and after each exposure in
order to provide accurate wavelength calibration. A log of the spectral
observations is given in Table \ref{espec}. The spectra processing and data
analysis were done using standard procedures with the IRAF and RVSAO
packages. This includes bias subtraction, flat-field correction, cosmic-ray
removal, sky subtraction, and wavelength and flux calibration ({\tt
IMAGES}/{\tt IMFIT}, {\tt IMUTIL}, {\tt STSDAS}/{\tt IMGTOOLS}, {\tt TWODSPEC}
and {\tt ONEDSPEC} tasks, respectively). The wavelength calibration
errors were $\simeq 8$\,\AA\ and $\simeq 10$\,\AA\ for slits 1 and 2, respectively.
The standard extraction aperture was set to the
emission region. The spectra were flux-calibrated using measurements of the
standard stars observed at similar airmasses. The line fluxes were obtained
using the IRAF/{\tt
SPLOT} task. This task was also used to obtain the center of the emission
lines in order to later calculate the radial velocities of the measured
lines. As a double check of these results, the RVSAO/IRAF external package
was used to calculate the apparent radial velocities from the observed
spectral shifts. The EMSAO task finds emission lines in a spectrum and
computes the observed centers, yielding individual shifts and errors for
each line as well as a single velocity by combining all of the lines
\citep{1995ASPC...77..496M}.
\begin{table}
\centering
\caption {Log of spectral observations, the slit positions and exposure
times.}
\label{espec}
\begin{tabular}{|l|c|c|}
\hline
\textbf{} & \textbf{Slit 1} & \textbf{Slit 2} \\
\hline
\hline
\textbf{Date} & 2008-Sep-29 & 2008-Jul-04\\
\textbf{Grating} (lines/ mm) & 600 & 300\\
\textbf{Spectral Range} (\AA) & 4600-6730 & 4100-8600 \\
\textbf{Angle} & $72.5^\circ$ N-E & $17^\circ$ N-W \\
\textbf{Exposure Time} (s) & 1800 & 1200\\
\textbf{Slit Width} (\hbox{$^{\prime\prime}$} ) & 3 & 3 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure02.eps}
\caption{B-band image of the galaxy \mbox{AM\,2020-504} showing the slit
positions.}
\label{gala}
\end{figure}
\section{Results and Discussion} \label{sec4}
\subsection{Rotation Curves}
The radial velocities along the slits were calculated based on the Doppler shift
of spectral lines. To construct the rotation curve of this system, the strongest
emission lines were used, namely: H$\beta$, [O\,III]$\lambda$5007 and
H$\alpha$. In both slit positions, extractions of one-dimensional spectra were
performed in order to obtain the rotation curves, as well as information on
specific regions.
The radial velocity of the galaxy, calculated by \mbox{averaging} the central
positions along Slit 1, is \mbox{$5045\pm23$ km/s}. This value is similar to
those found by \cite{1987IAUS..127..413W} and \cite{1993A&A...267...21A}.
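The velocity determination from the measured line centers reduces to the Doppler formula; a minimal sketch (non-relativistic approximation, adequate at these velocities, and with an illustrative observed wavelength rather than a measured one from this work) is:

```python
C_KMS = 299792.458      # speed of light in km/s
HALPHA_REST = 6562.8    # rest wavelength of H-alpha in Angstroms

def radial_velocity(lambda_obs, lambda_rest=HALPHA_REST):
    """Non-relativistic radial velocity from a line's Doppler shift."""
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

# An H-alpha line observed near 6673.2 A corresponds to about 5045 km/s,
# close to the systemic velocity measured for AM 2020-504.
```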
The rotation profile along the ring major axis is shown in Figure
\ref{rota}. The northern
portion of the ring is approaching and the southern portion is receding from us.
This rotation curve is symmetric and well behaved. The last three points on each
side of the rotation curve suggest that the northern and southern portions of
the ring differ in rotation velocity by about 60 km/s, but this
difference is within the error bars.
To a certain degree, such asymmetries could be explained if the ring were warped. In
fact, \cite{1993A&A...267...21A} suggested that the ring is warped, and
constructed models that reproduce fairly well the morphology and the rotation
velocity curves in the directions for which they had long-slit spectra,
including those asymmetries.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure03.eps}
\caption{Rotation profile of \mbox{AM\,2020-504} along the ring major axis
(PA=17$^\circ$).}
\label{rota}
\end{figure}
\subsection{Spectral analysis} \label{Section2.2}
To analyse the emission from the gaseous component, we constructed four
diagnostic diagrams proposed by \cite{1981PASP...93....5B} and
\cite{1999A&A...345..733C}, using the Slit 2 spectra. These diagrams are used to
distinguish objects ionized only by massive stars from those containing active
nuclei (AGN) and/or shock-excited gas.
The diagrams used were [O\,III]${\lambda5007}$/H$\beta$
versus [O\,I]${\lambda6300}$/H$\alpha$, [N\,II]${\lambda6584}$/H$\alpha$, and
[S\,II]$({\lambda6717+\lambda6731})$/H$\alpha$; and
[N\,II]${\lambda6584}$/H$\alpha$ versus
[S\,II]$({\lambda6717+\lambda6731})$/H$\alpha$.
The emission-line intensities were measured using the IRAF SPLOT routine,
adopting Gaussian line profile fits. Table~\ref{tablen} presents the
distance to the galactic center (assuming 344 pc/arcsec),
the emission-line intensities normalized to H$\beta$=100,
the nebular reddening coefficient
C(H$\beta$), calculated by comparing the
Balmer decrement H$\alpha$/H$\beta$ to the theoretical value of 2.86 given
by \cite{1989agna.book.....O} for an electron temperature of 10\,000 K
and adopting the interstellar extinction law of \cite{1958AJ.....63..201W}, and the
observed H$\beta$ flux for each aperture.
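The reddening coefficient can be sketched as follows. This is a minimal illustration assuming a single extinction-curve value $f(\mathrm{H}\alpha)\approx-0.32$, a typical optical-law number adopted here for illustration rather than a value taken from this paper:

```python
import math

def c_hbeta(balmer_obs, f_halpha=-0.32, balmer_intrinsic=2.86):
    """Nebular reddening coefficient C(Hbeta) from the observed Balmer decrement.

    balmer_obs      : observed F(Halpha)/F(Hbeta)
    f_halpha        : extinction-curve value at Halpha relative to Hbeta
                      (-0.32 is an assumed, typical optical-law value)
    balmer_intrinsic: theoretical Halpha/Hbeta for T_e = 10,000 K
    """
    return math.log10(balmer_obs / balmer_intrinsic) / (-f_halpha)

# An observed decrement of about 5.2 yields C(Hbeta) ~ 0.8, while an
# unreddened decrement of 2.86 gives C(Hbeta) = 0.
```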
The errors associated with the line fluxes were estimated following the same
procedure as \cite{2008A&A...492..463O} and are given by
$\sigma^{2}=\sigma^{2}_{\rm cont}+\sigma^{2}_{\rm line}$,
where $\sigma_{\rm cont}$ is the error due to the continuum noise, calculated
as $\sigma_{\rm cont}=\sqrt{N}\Delta\sigma_{\rm rms}$,
with $N$ the number of pixels covered
by the emission line, $\Delta$ the dispersion of the spectrum (in units
of wavelength per pixel), and $\sigma_{\rm rms}$ the root mean square of
the continuum flux density (flux per unit wavelength); $\sigma_{\rm line}$
is the Poisson error of the emission line.
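A schematic implementation of this error recipe (the Poisson term is written for a line signal in detector counts with an assumed gain; both are illustrative choices, not values from the paper):

```python
import math

def line_flux_error(sigma_rms, n_pix, dispersion, line_counts, gain=1.0):
    """Emission-line flux error: sigma^2 = sigma_cont^2 + sigma_line^2.

    sigma_rms   : rms of the continuum flux density near the line
    n_pix       : number of pixels covered by the emission line
    dispersion  : spectral dispersion (wavelength units per pixel)
    line_counts : integrated line signal in counts (assumed, for the Poisson term)
    gain        : detector gain in e-/ADU (assumed)
    """
    sigma_cont = math.sqrt(n_pix) * dispersion * sigma_rms       # continuum-noise term
    sigma_line = math.sqrt(max(line_counts, 0.0) * gain) / gain  # Poisson term
    return math.sqrt(sigma_cont**2 + sigma_line**2)
```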
We compared the values of the reddening coefficient C(H$\beta$) in
the ring of \mbox{AM\,2020-504}
with those in other PRGs and in isolated galaxies. We found an average
C(H$\beta$) of 0.8, similar to that ($\approx0.9$) in the ring galaxy
SDSSJ075234.33+292049.8 \citep{2010MNRAS.401.2067B} and larger than the values
in the disks of spiral galaxies, for example $\approx0.4$ in
M\,33 \citep{2011ApJ...729...56B} and $\approx0.4$ in M\,101
\citep{1996ApJ...456..504K}.
The diagrams are shown in Figures \ref{diag_cozi} and \ref{diag_oli}, where
different symbols are used to represent the ring, the host galaxy and the
nuclear region.
Figure \ref{diag_cozi} shows the diagnostic diagram proposed by
\cite{1999A&A...345..733C}, where we plot the [N\,II]/H$\alpha$
against the [S\,II]/H$\alpha$ ratio. These two line ratios are significantly higher
in LINERs and Seyfert 2s than in H\,II regions and starbursts. The distinction
between the two AGN types (Seyferts and LINERs) is not possible in this
diagram, but regions photoionized by O and B stars are clearly
separated from AGN-like ionizing sources. The \cite{1999A&A...345..733C} criteria
define two regions in this diagram, separated by the continuous lines in
Figure \ref{diag_cozi}, in which the gas is excited by the two different
mechanisms, i.e. AGN and photoionization by stars, consistent with the
lower limits for the presence of diffuse ionized gas in the halos of edge-on
starbursts proposed by \cite{1996ApJ...462..651L}. The points corresponding
to the ring (circles) are divided between the northern part of the ring
(open) and the southern part (filled); the host galaxy is represented by open
triangles and the nucleus by a filled triangle.
The lines in the diagrams of Figure \ref{diag_oli} were used to separate objects
with distinct ionization sources, following the criteria of
\cite{2006MNRAS.372..961K}. These authors used the combination of
photoionization and stellar population synthesis models built by
\cite{2001ApJ...556..121K} in order to analyse the host properties of
about 80\,000 emission-line galaxies selected from the Sloan Digital Sky Survey.
They showed that Seyferts and low-ionization narrow emission-line regions
(LINERs) form clearly separated branches in standard optical diagnostic
diagrams, such as the ones used in this paper. We can see that the nuclear
points occupy the AGN region in all four diagrams (see Figures \ref{diag_cozi} and
\ref{diag_oli}); in two of them these points fall in the LINER region, while the
ring points lie in the region occupied by H\,II-like objects.
A fundamental subject in galaxy formation studies is the understanding
of metallicity. In particular, the chemical abundances of H\,II regions in
polar ring galaxies
have deep implications for the evolutionary scenario of these objects and
yield hints about the mechanisms
at work during their formation.
Three main formation processes have been proposed (for a more detailed
discussion see \citealt{2011A&A...531A..21S} and references therein):
\begin{itemize}
\item Cold accretion of pristine gas --- In this scenario, a polar structure
may be formed by cold gas accretion, and a gas phase with very low metallicity
($Z\sim Z_{\odot}/10$) is expected. The metallicity of the polar structure
would then be lower than that of spiral disks of the same luminosity, and no
metallicity gradient along the polar ring would be present
(\citealt{2006ApJ...636L..25M}, \citealt{2009MNRAS.397L..64A}).
\item Major dissipative merger --- The PRG is formed by the merger of two disk
galaxies of unequal mass (e.g. \citealt{1997ApJ...483..608B}).
\item Tidal accretion of material --- The polar ring may be formed by the
disruption of a dwarf companion galaxy or by tidal accretion of gas stripped
from a disk galaxy.
\end{itemize}
In both the major merger and the tidal accretion scenarios, a relatively high
metallicity would be expected.
To test which of these scenarios best represents the formation of
\mbox{AM\,2020-504}, we estimated the oxygen abundances in the regions of the
polar disk.
Unfortunately, accurate chemical
abundances can only be derived by measuring temperature-sensitive
line ratios, such as [O\,III]$(\lambda4959+\lambda5007)/\lambda4363$,
which are unobservable in the spectra of the H\,II regions in the ring
of \mbox{AM\,2020-504}. In such cases, empirical calibrations between
abundances
and more easily measured emission-line ratios must be used
to estimate metal abundances (see \citealt{2011MNRAS.415.3616D} and references
therein). Therefore, we
estimate the oxygen abundance O/H (used as a tracer of the metallicity) using the
calibrations of O/H with the parameters
proposed by \cite{2009MNRAS.398..949P}
\begin{equation}
O3N2= \rm \log\left(\frac{I([O\:III]\lambda5007)}{I(H\beta)} \times
\frac{I(H\alpha)}{I([N\:II]\lambda6584)}\right),
\end{equation}
\begin{equation}
N2=\rm \log\left(\frac{I([N\:II]\lambda6584)}{I(H\alpha)}\right)
\end{equation}
and given by
\begin{equation}
{\rm 12 + \log(O/H)} = 8.73 - 0.32 \times O3N2,
\end{equation}
\begin{equation}
{\rm 12 + \log(O/H)} = 0.79\times N2+9.07.
\end{equation}
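Applied to the reddening-corrected intensities of the first ring aperture in Table~\ref{tablen}, the calibrations can be sketched as below. An N2 slope of 0.79, the value given by \cite{2009MNRAS.398..949P}, is adopted here, since it reproduces the tabulated N2-based abundances:

```python
import math

def o3n2_n2_abundances(oiii5007, halpha, nii6584, hbeta=100.0):
    """Oxygen abundances from the O3N2 and N2 indices.

    Line intensities are relative to Hbeta = 100, as in Table 3;
    the N2 slope of 0.79 follows Perez-Montero & Contini (2009).
    """
    o3n2 = math.log10((oiii5007 / hbeta) * (halpha / nii6584))
    n2 = math.log10(nii6584 / halpha)
    oh_o3n2 = 8.73 - 0.32 * o3n2
    oh_n2 = 9.07 + 0.79 * n2
    return o3n2, n2, oh_o3n2, oh_n2

# First ring aperture of Table 3 (r = 5.83 kpc), intensities (162, 263, 45):
# O3N2 ~ 0.98, N2 ~ -0.77, 12+log(O/H) ~ 8.42 (O3N2) and ~ 8.46 (N2).
```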
Table~\ref{tablen} lists the values of these parameters and the derived oxygen
abundances for each region classified as an H\,II region in the
diagnostic diagrams. We find that the $O3N2$
and $N2$ parameters indicate that the ring H\,II regions have oxygen abundances
12+log(O/H)
from 8.3 to 8.8 dex, with an average of $8.53\pm0.11$ dex. This
value is about the solar oxygen abundance, i.e. 8.66 dex
(\citealt{2004A&A...417..751A}), and close to the maximum oxygen abundance
derived for the central parts of spiral galaxies, i.e. 8.87 dex
(\citealt{MNR:MNR11444}).
Figure~\ref{gradi} shows the oxygen abundance, obtained via the two parameters
presented above, as a function of galactocentric distance in
\mbox{AM\,2020-504}. Both parameters indicate an oxygen gradient across the
ring.
A linear regression on the oxygen estimates via $O3N2$ and $N2$ yields gradients
of \mbox{$-0.017(\pm0.006$) dex/kpc} and
\mbox{$-0.051(\pm0.013$) dex/kpc}, respectively. These values are similar to
those found in spiral galaxies (see \citealt{2005A&A...437..837D},
\citealt{2004A&A...425..849P}).
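As a cross-check, the gradient can be recovered with a simple least-squares fit to the O3N2-based abundances of Table~\ref{tablen} (a pure-Python sketch; it yields a slope of roughly $-0.02$ dex/kpc, consistent with the quoted value within its uncertainty):

```python
# Galactocentric distances (kpc) and 12+log(O/H) from the O3N2 index (Table 3).
r = [5.83, 5.10, 4.37, 3.65, 2.91, 2.20, 2.20, 2.91, 3.65, 4.37, 5.10]
oh = [8.41, 8.37, 8.45, 8.43, 8.45, 8.48, 8.50, 8.46, 8.43, 8.45, 8.47]

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

slope, intercept = linear_fit(r, oh)
# slope ~ -0.02 dex/kpc: a shallow negative abundance gradient.
```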
We also tested whether \mbox{AM\,2020-504} follows the
metallicity-luminosity relation of spiral galaxies. \cite{2004A&A...425..849P}
found that the characteristic oxygen abundance in spiral galaxies
as a function of absolute blue magnitude $M_{\rm B}$ follows the relation
\begin{equation}
12 + \log({\rm O/H}) = 6.93 (\pm 0.37)- 0.079 (\pm 0.018)M_{\rm B}.
\end{equation}
We computed the absolute magnitude of \mbox{AM\,2020-504}, taking into account
the central spheroid and the polar ring, for a distance of 71 Mpc
(H$_0$=71 km\,s$^{-1}$\,Mpc$^{-1}$, \citealt{2000ApJ...529..786M}), obtaining
$M_{\rm B}=-18.24$. Using this value, the relation above gives
12+log(O/H)=8.37$\pm0.2$ dex, about the same as the
average oxygen abundance found in the polar ring.
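The arithmetic behind this prediction is straightforward and can be checked directly:

```python
# Predicted oxygen abundance from the metallicity-luminosity relation
# of Pilyugin et al. (2004), for M_B = -18.24.
m_b = -18.24
oh_pred = 6.93 - 0.079 * m_b
# 6.93 + 1.44 = 8.37 dex, matching the value quoted in the text.
```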
Since the average metallicity in \mbox{AM\,2020-504} is high ($Z\approx
Z_{\odot}$), a clear oxygen gradient is present across the polar ring, and the
galaxy follows the metallicity-luminosity relation of normal spiral galaxies,
our results support the tidal accretion or major merger formation scenarios for
this object and rule out the cold accretion of pristine gas.
Some other works have determined the oxygen abundance in PRGs in order to
test possible formation scenarios for these
objects. For example, \cite{2002AstL...28..443S}, using an O/H-$N2$ calibration,
found an oxygen abundance of 12+log(O/H)$\sim8.8$ dex
for the PRG UGC\,5600. \cite{2010MNRAS.401.2067B}, using a calibration between
the electron temperature and strong oxygen emission lines,
found that the oxygen abundance in different regions of the apparent ring galaxy
SDSSJ075234.33+292049.8
is 12+log(O/H)=8.49$\pm$0.08 dex. \cite{2010ApJ...714.1081S} derived the
oxygen abundance in the polar
disk of NGC\,4650A using both empirical methods and direct electron
temperature detections and found 12+log(O/H)=$8.2\pm0.1$.
Recently, \cite{2011A&A...531A..21S}, using the $P$-method
\citep{2001A&A...369..594P},
reported average oxygen abundances of $8.5\pm0.5$ and $7.7\pm1.0$
for H\,II regions located in the rings of the PRGs UGC\,7576 and
UGC\,9796.
Although these results are consistent with our estimates, for most of the
galaxies above no metallicity gradient along the polar disk was found.
Since different methods, or different calibrations of the same oxygen
indicator, provide
different oxygen values, with discrepancies of up to 1.0 dex (e.g.
\citealt{2008ApJ...681.1183K}),
and few PRGs have been observed, additional analysis is needed to
confirm the (dis)agreement noted above.
\begin{figure}
\centering
\includegraphics[width=7cm,angle=270]{Figure04.eps}
\caption{\mbox{AM\,2020-504} diagnostic diagram
log([N\,II]${\lambda6584}$/H$\alpha$) versus
log([S\,II]$({\lambda6717+\lambda6731})$/H$\alpha$)
\citep{1999A&A...345..733C}. The filled triangle corresponds to the nuclear
region and the open triangles to the host galaxy. The filled and open
circles correspond to the southern and northern regions of the ring,
respectively.}
\label{diag_cozi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=270]{Figure05.eps}
\caption{Diagnostic diagrams [N\,II]${\lambda6584}$/H$\alpha$ (left),
[O\,I]${\lambda6300}$/H$\alpha$ (middle), and
[S\,II]$({\lambda6717+\lambda6731})$/H$\alpha$ (right) vs.
[O\,III]${\lambda5007}$/H$\beta$ \citep{1981PASP...93....5B}. The curves, taken
from \cite{2006MNRAS.372..961K}, separate objects ionized by massive stars from
those containing active nuclei and/or shock-excited gas. The straight line,
also from \cite{2006MNRAS.372..961K}, separates Seyfert and LINER objects. The
symbols are as in Figure \ref{diag_cozi}.}
\label{diag_oli}
\end{figure}
\begin{table*}
\scalefont{0.8}
\centering
\caption{Reddening corrected emission line intensities (relative to
H$\beta$=100) and global properties}
\begin{tabular}{llllllllllll}
\hline
\noalign{\smallskip}
\hline
\noalign{\smallskip}
r & log($F$(H$\beta$)) & C(H$\beta$) &
[O\,III]$\lambda$5007 & H$\alpha$ & [N\,II]$\lambda$6584 &
[S\,II]$\lambda$6716 & [S\,II]$\lambda$6731 & O3N2 &
N2 & 12+log(O/H) & 12+log(O/H) \\
(kpc) & $\rm erg\:s^{-1}\:cm^{-2}$ &
& & &
& & & && O3N2 & N2
\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
5.83 & -15.01 & 1.11 & 162$\pm11$ & 263$\pm16$ &
45$\pm3$ & 46$\pm4$ & 51$\pm5$ & 0.97 & $-$0.76
& 8.41 & 8.46 \\
5.10 & -14.58 & 0.88 & 192$\pm11$ & 268$\pm15$ &
39$\pm2$ & 29$\pm2$ & 68$\pm5$ & 1.12 & $-$0.83
& 8.37 & 8.40 \\
4.37 & -14.81 & 0.35 & 153$\pm9$ & 279$\pm14$ &
61$\pm4$ & --- & --- & 0.84 & $-$0.66
& 8.45 & 8.54 \\
3.65 & -15.07 & 1.31 & 183$\pm9$ & 260$\pm13$ &
59$\pm3$ & 39$\pm3$ & 78$\pm6$ & 0.90 & $-$0.64 &
8.43 & 8.56 \\
2.91 & -14.95 & 0.80 & 150$\pm8$ & 270$\pm13$ &
57$\pm3$ & 57$\pm4$ & 90$\pm6$ & 0.85 & $-$0.67 &
8.45 & 8.53 \\
2.20 & -15.22 & 0.68 & 141$\pm8$ & 272$\pm21$ &
65$\pm3$ & --- & --- & 0.77 & $-$0.62 &
8.48 & 8.57 \\
1.45 & -15.17 & 0.78 & 135$\pm6$ & 270$\pm11$ &
87$\pm4$ & 135$\pm7$ & 65$\pm4$ & --- & --- & ---
& --- \\
0.73 & -15.25 & 1.05 & 169$\pm8$ & 265$\pm10$ &
101$\pm4$ & 142$\pm8$ & 90$\pm3$ & --- & --- & ---
& --- \\
0.38 & -15.38 & 1.47 & 204$\pm10$ & 256$\pm10$ & 114$\pm5$
& 136$\pm5$ & 114$\pm8$ & --- & --- & --- & --- \\
0.0 & -15.39 & 1.97 & 313$\pm12$ & 247$\pm8$ &
153$\pm6$ & 143$\pm9$ & 145$\pm11$ & --- & ---
& ---- & ---\\
0.38 & -15.39 & 1.71 & 262$\pm10$ & 252$\pm10$ &
154$\pm6$ & 124$\pm8$ & 132$\pm10$ & --- & ---
& --- & --- \\
0.73 & -15.21 & 1.03 & 207$\pm8$ & 265$\pm9$ &
174$\pm6$ & 113$\pm9$ & 148$\pm7$ & --- & --- &
--- & --- \\
1.45 & -15.35 & 1.68 & 352$\pm14$ & 253$\pm10$ &
152$\pm6$ & 109$\pm5$ & 152$\pm7$ & --- & --- &
--- & --- \\
2.20 & -15.02 & 0.47 & 192$\pm7$ & 276$\pm11$ &
106$\pm5$ & 117$\pm6$ & 90$\pm7$ & 0.69 & $-$0.41 &
8.50 & 8.74 \\
2.91 & -15.11 & 0.63 & 169$\pm6$ & 273$\pm11$ &
67$\pm9$ & --- & --- & 0.83 & $-$0.61 &
8.46 & 8.58 \\
3.65 & -15.27 & 0.86 & 209$\pm10$ & 268$\pm12$ &
66$\pm3$ & 105$\pm6$ & 45$\pm3$ & 0.92 & $-$0.60 &
8.43 & 8.58 \\
4.37 & -15.27 & 1.44 & 137$\pm7$ & 257$\pm15$ &
47$\pm4$ & 69$\pm4$ & 55$\pm3$ & 0.87 & $-$0.73 &
8.45 & 8.48 \\
5.10 & -15.03 & 0.53 & 124$\pm6$ & 275$\pm16$ &
55$\pm5$ & 79$\pm5$ & 16$\pm1$ & 0.79 & $-$0.69 &
8.47 & 8.51 \\
\noalign{\smallskip}
\hline
\label{tablen}
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=270]{Figure06.eps}
\caption{Gradients of 12+log(O/H) in \mbox{AM\,2020-504}. The solid
lines are linear regressions on the oxygen abundance determinations obtained
with the parameter indicated in each plot.}
\label{gradi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure07.eps}
\caption{(B-R) color map: the blue color of the ring (dark) compared to the red
color of the galaxy (lighter). Circles mark the positions used for aperture
photometry. Measured magnitudes are presented in Table~\ref{mag}.}
\label{foto}
\end{figure}
\subsection{Aperture Photometry} \label{color-2}
We carried out circular aperture photometry at selected positions in the
system, covering the ring, the host galaxy and its nucleus. For the positions
marked in \mbox{Figure \ref{foto}}, Table \ref{mag} presents
the measured B magnitudes and the (B-V), (B-R), (V-R) and (V-I) colors of the
labeled regions. The ring apertures have a mean (B-V) of 0.43 mag, while the
host galaxy has 1.25 mag. The low (B-V) values in the ring indicate that it is a
structure very different from the central galaxy: the ring is younger and of
distinct origin. In the (B-V) $\times$ (B-R) diagram, shown in Figure
\ref{color_color}, the ring, host galaxy and nucleus are well separated.
Again, the host galaxy tends to be redder and the ring bluer. These values are
consistent with those found in other ring galaxies, as described in the case of
HRG\,2302 \citep{1999A&A...351..860M}. A ring bluer than the host galaxy is
expected in PRGs, because their rings result from recent interactions and
are made of material from donor galaxies, which are probably
spirals; in this case, the material comes from their outer, less bound, parts.
These colors suggest the contribution of an old stellar population in the host
galaxy, while the contribution of young stars in the ring may be due to localized
star formation (see also \citealt{1999A&A...351..860M}).
\textbf{Colormaps:} The (B-R) color map is shown in Figure \ref{foto}. In
this greyscale map, darker regions represent bluer colors, while lighter
regions represent redder colors. Clearly, the ring is bluer than the host
galaxy. This is also seen in other color maps, such as (B-I) and (B-V).
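As a consistency check on the mean colors quoted above, they can be recomputed directly from the aperture values in Table~\ref{mag} (the lists below simply transcribe the (B-V) column for the ring and host-galaxy apertures):

```python
# (B-V) values for ring apertures 1-10 and host-galaxy apertures 11-17
# of AM 2020-504, transcribed from Table 1.
ring_BV = [0.86, 0.61, 0.45, 0.31, 0.52, 0.27, 0.03, 0.04, 0.27, 0.91]
host_BV = [1.03, 1.12, 1.52, 1.13, 1.17, 1.11, 1.68]

mean_ring = sum(ring_BV) / len(ring_BV)   # mean ring color, ~0.43 mag
mean_host = sum(host_BV) / len(host_BV)   # mean host color, ~1.25 mag
print(f"ring <B-V> = {mean_ring:.2f} mag")
print(f"host <B-V> = {mean_host:.2f} mag")
```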
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure08.eps}
\caption{(B-V)$\times$(B-R) diagram. Filled circles mark the ring apertures,
empty triangles correspond to the host galaxy and the filled triangle marks the
nuclear region.}
\label{color_color}
\end{figure}
\begin{table}
\begin{center}
\caption {Aperture photometry data. Aperture numbers correspond to the
positions marked in Figure~\ref{foto}.}
\label{mag}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Region} & \textbf{Label} & \textbf{B} & \textbf{(B-V)} &
\textbf{(B-R)} & \textbf{(V-R)} & \textbf{(V-I)} \\
\hline
\hline
 & 1  & 18.65 & 0.86 &  0.33 & -0.53 & 0.14 \\
 & 2  & 17.26 & 0.61 &  0.69 &  0.08 & 0.03 \\
 & 3  & 17.10 & 0.45 &  0.01 & -0.44 & 0.42 \\
 & 4  & 18.58 & 0.31 & -0.07 & -0.38 & 0.44 \\
Ring & 5  & 18.42 & 0.52 & -0.01 & -0.40 & 0.40 \\
 & 6  & 18.70 & 0.27 &  0.56 &  0.29 & 0.00 \\
 & 7  & 18.69 & 0.03 & -0.81 & -0.84 & 0.06 \\
 & 8  & 18.55 & 0.04 & -0.39 & -0.43 & 0.19 \\
 & 9  & 18.70 & 0.27 &  0.56 &  0.29 & 0.07 \\
 & 10 & 18.12 & 0.91 &  0.62 & -0.29 & 0.72 \\
\hline
 & 11 & 17.14 & 1.03 &  1.00 &  0.11 & 0.88 \\
 & 12 & 17.15 & 1.12 &  0.78 & -0.34 & 0.93 \\
 & 13 & 17.23 & 1.52 &  0.92 & -0.60 & 0.74 \\
Host & 14 & 17.17 & 1.13 &  0.33 & -0.80 & 0.80 \\
Galaxy & 15 & 19.62 & 1.17 &  0.49 & -0.68 & 0.00 \\
 & 16 & 17.65 & 1.11 & -0.35 & -1.46 & 0.34 \\
 & 17 & 17.69 & 1.68 &  1.29 & -0.39 & 0.72 \\
\hline
Nucleus & 18 & 19.70 & 1.73 &  1.69 & -0.04 & 0.05 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion} \label{con}
This work presents a study of \mbox{AM\,2020-504}, a galaxy with a well defined
polar ring surrounding an elliptical host galaxy (\citealt{1987IAUS..127..413W},
\citealt{1993A&A...267...21A} and \citealt{2002A&A...391..103I}). The ring was
probably formed by accretion of material from a donor galaxy during an
interaction event. In the field around the galaxy, we did not find any nearby
object that could have supplied material for the formation of the ring, but
there is a group of nearby galaxies with similar radial velocities.
We estimated a redshift of \textit{z}= 0.01683, corresponding to a heliocentric
radial velocity of 5045$\pm$23 km/s, confirming the values found by
\cite{1987IAUS..127..413W} and \cite{1993A&A...267...21A}. The rotation curve of
the ring is symmetric and well behaved. The last two points on each side of the
rotation curve suggest that the northern and southern portions of the ring
differ in rotation velocity by about 60 km/s, but this difference is within the
error bars. To a certain degree, such asymmetries could be explained if the
ring were warped.
We found that the (B-R) color index averages 0.35 for the ring and 1.73 for the
core of the host galaxy. Thus the ring is bluer than the host galaxy (bulge +
nucleus), which is what we expect if the ring is the result of a recent
interaction.
The B-band brightness profile along the minor axis of the
galaxy is asymmetric due to the ring. The NW peak is higher and corresponds to
the bright spots seen in the images. This morphological feature, as well as the
general S-shaped appearance of the ring, is in good agreement with the warped
model of the polar ring by \cite{1993A&A...267...21A}. The light profile along the
host galaxy major axis also looks asymmetric on both sides close to the center.
This seems to be due to the presence of dust where the ring passes in front of
the galaxy, an indication that the near side of the ring is to the NE of the
galaxy.
This system harbours an AGN, as indicated by diagnostic diagrams.
Using two empirical methods based on easily observable emission lines,
we found: (i) oxygen abundances for the H\,II regions located in the ring in
the range 12+log(O/H)=8.3--8.8 dex, with an average value of $8.53\pm0.11$ dex, and
(ii) the presence of an oxygen gradient across the ring of about $-0.035$
dex/kpc. We also found that \mbox{AM\,2020-504} follows the
metallicity-luminosity relation of typical spiral galaxies. These results
support the accretion scenario for this object and rule out cold accretion.
\section{Acknowledgements}
This work was partially supported by Universidade do Vale do Para\'{i}ba -
UNIVAP and the Minist\'erio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o
(MCTI), Laborat\'{o}rio Nacional de Astrof\'{i}sica. P. Freitas-Lemes thanks
FAPESP for the scholarship granted under process 2010/17136-4. O. L. Dors is
grateful to FAPESP for support under grant 2009/14787-7.
We thank the anonymous referee for helping us make this manuscript a better paper.
\section{Introduction} \label{sec:INTRO}
One of the bedrock principles of physics is that there is just one
time. Indeed, entertaining multiple timelike directions is
tantamount to jeopardizing the whole edifice of causality.\footnote{With
two timelike directions one automatically has
closed timelike curves, leading to numerous pathologies.
Examples include killing your grandfather. Or, being your own
grandfather (but not in the sense of reference \cite{STUPID}). On the other hand, having two timelike directions would make it possible
to bypass timelike cosmological singularities in FLRW cosmology.}
That being said, many formal investigations greatly simplify in Kleinian (i.e.,~$2+2$ signature) spacetimes.
For example, much of the power of modern approaches to scattering amplitudes stems from working with complexified
momenta, and particular simplifications arise in Kleinian signature (see e.g.,~\cite{Penrose:1967wn,Penrose:1968me,Penrose:1985bww, Penrose:1986ca,Parke:1986gb, Witten:2003nn,
Arkani-Hamed:2009hub,Monteiro:2020plf,Atanasov:2021oyu,Crawley:2021auj}). Certain string
theories with extended $\mathcal{N}=2$ worldsheet supersymmetry naturally
describe $2+2$ target spacetimes
\cite{Ooguri:1990ww,Ooguri:1991fp,Ooguri:1991ie}, and the
original formulation of F-theory \cite{Vafa:1996xn} (see also
\cite{Castellani:1982ke,Bergshoeff:1982az,Blencowe:1988sk,Bars:1996dz,Bars:1996cm,Hewson:1996yh,
Kutasov:1996zm,Kutasov:1996vh,Tseytlin:1996ne,Bars:1997bz,Bars:1997xb,Nishino:1997gq,Nishino:1997sw,
Hewson:1997wv,Linch:2015lwa,Linch:2015fya,Heckman:2017uxe,Heckman:2018mxl,Heckman:2019dsj})
takes place in an auxiliary $10+2$ signature spacetime. There is also a
precise sense in which $\mathcal{N}=1/2$ supersymmetry can be realized in
Kleinian signature spacetimes~\cite{Heckman:2018mxl,Heckman:2019dsj}, and this
has potential applications to the cosmological constant problem.
Having two times would also provide novel routes to model building, especially in the context of the physics
of extra dimensions \cite{Dvali:1999hn}.
Given this state of affairs, it is fair to ask whether these $2+2$ signature
lessons are simply ``formal tricks,'' or if they have some direct significance in $3+1$
signature. One aim of the present work will be to provide a potential avenue
for connecting $2+2$ physics to our $3+1$ world. Along these lines, we
will explore the sense in which analytic continuation in a spatial direction
is similar to the procedure of continuing to Euclidean signature. In the
latter case, the Euclidean signature path integral can be interpreted as
constructing a wave functional for the ground state of a quantum field theory.
In Kleinian signature, the resulting path integral is not bounded below, but
perturbation theory can still be used to extract the profile of a wave
functional with low particle flux in the direction of analytic continuation.
The other aim of this paper will be to explore some of the structures present
in Kleinian systems. Symmetries of a $2+2$ action serve to constrain
correlation functions via the corresponding Ward identities.~In particular, we study the ways in which
supersymmetry is similar---and also distinct---in this spacetime signature. Since
the Lorentz group in this space splits as $\mathrm{Spin}(2,2)\simeq \mathrm{SL}(2,\mathbb{R})_{L}\times
\mathrm{SL}(2,\mathbb{R})_{R}$, we can label supersymmetric theories according to the
number of left- and right-handed real spinor generators. The analog of $\mathcal{N}=1$
supersymmetry in Lorentzian signature is therefore instead $\mathcal{N}=(1,1)$
supersymmetry. Spontaneous breaking of the Lorentz group to either chiral
subgroup, $\mathrm{SL}(2,\mathbb{R})_{L}$ or $\mathrm{SL}(2,\mathbb{R})_{R}$, or to the diagonal
subgroup $\mathrm{SL}(2,\mathbb{R})_{D}$ leads to distinct $\mathcal{N}=1/2$
subalgebras.~We comment that the case of chiral $\mathcal{N}=1/2$
supersymmetry is quite similar to the Euclidean signature case
investigated in \cite{Casalbuoni:1975hx, Casalbuoni:1975bj, Casalbuoni:1976tz, Schwarz:1982pf, Ferrara:2000mm,
Klemm:2001yu, Abbaspur:2002xj, deBoer:2003dpn, Ooguri:2003qp,Ooguri:2003tt,Seiberg:2003yz,Britto:2003aj,
Berkovits:2003kj,Cortes:2019mfa,Cortes:2020shr,Gall:2021tiu}.
One of the intriguing features of such $\mathcal{N} = 1/2$ systems
is that in a supersymmetric state, the bubble diagrams corresponding to the quantum corrections to the vacuum energy automatically cancel,
so there is no large cosmological constant problem (at least in Kleinian
signature). The potential use of this, and closely related structures in 3D
$\mathcal{N} = 1$ supersymmetric theories, has been suggested as a way to
protect the ground state from large quantum corrections~\cite{Witten:1994cga, Vafa:1996xn, Heckman:2018mxl, Heckman:2019dsj}.
We present some brief speculative comments on the application of our $2+2$ system to such $3+1$ questions.
\section{$2+2$ and Low Flux States}
In this section we show that the $2+2$ signature path integral constructs a state of low particle
flux of a Lorentzian signature theory. Recall that for a quantum
theory with a bounded Hamiltonian $H$, we can construct the ground state $\left\vert 0\right\rangle$ by acting
on a generic state $\vert \psi \rangle$ in the Hilbert space with the exponentiated Hamiltonian
operator for a long period of time as
\begin{equation}
\left\vert 0\right\rangle \propto \lim_{t\to\infty} e^{-Ht}\left\vert \psi
\right\rangle .
\end{equation}
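As a toy numerical illustration of this projection (our own example, with an invented $4\times 4$ ``Hamiltonian''; it is not tied to any particular field theory), acting with $e^{-Ht}$ on a generic vector collapses it onto the lowest eigenvector:

```python
import numpy as np
from scipy.linalg import expm

# Toy model of imaginary-time projection: H is a 4-level "Hamiltonian" with
# spectrum {0, 1, 2, 3}, written in a randomly rotated (non-diagonal) basis.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal matrix
H = Q @ np.diag([0.0, 1.0, 2.0, 3.0]) @ Q.T
ground = Q[:, 0]                                   # exact ground state

psi = rng.standard_normal(4)                       # generic initial state
psi_t = expm(-20.0 * H) @ psi                      # act with exp(-H t), t = 20
psi_t /= np.linalg.norm(psi_t)

overlap = abs(ground @ psi_t)                      # approaches 1 as t grows
print(overlap)
```

Excited-state contamination is suppressed by $e^{-\Delta E\, t}$, which is why a generic initial state suffices.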
It is often useful to view the
corresponding vacuum wavefunctional as being constructed by the
path integral:
\begin{equation}
\left\langle \Phi_{f}\left( \vec{x}\right) \lvert 0
\right\rangle \sim\int^{\Phi_{f}\left( \vec{x}\right)
}{\cal D}\phi \, e^{-S_{4,0}[\phi]}, \label{EucPath}%
\end{equation}
where $S_{4,0}$ denotes the action of the Lorentzian theory analytically continued to Euclidean ($4+0$) signature, and where the integration is done over all field configurations that interpolate between the field vanishing in the far past and the field profile $\Phi_f(\vec x)$ at time $t_f$.\footnote{Relatedly, $\lvert \Phi_{f}\left( \vec{x}\right)\rangle$ denote (Heisenberg picture) field eigenstates, so that $\phi(t_f,\vec x)\lvert \Phi_{f}\left( \vec{x}\right)\rangle = \Phi_f(\vec x)\lvert \Phi_{f}\left( \vec{x}\right)\rangle$.}
A similar set of manipulations can be used to construct a class of non-normalizable
wavefunctionals associated with low flux states. We work in a
$(3+1)$-dimensional spacetime with signature $(-,+,+,+)$ and a $(2+2)$-dimensional spacetime with signature $(-,+,+,-)$, the latter obtained
by analytically continuing in the $z$-direction. To illustrate the main idea, consider a single real scalar field in flat
spacetime of $3+1$ signature with the Lagrangian density:
\begin{equation}
\begin{aligned}
\mathcal{L}_{3,1} & =-\frac{1}{2}\eta^{\mu\nu}\partial_{\mu}\phi
\partial_{\nu}\phi-V(\phi)\,,\\
& =\frac{1}{2}\left( \partial_{t}\phi\right) ^{2}-\frac{1}{2}%
\vec{\nabla}\phi\cdot\vec{\nabla}\phi-V(\phi),
\end{aligned}
\end{equation}
where $\vec\nabla$ denotes the spatial gradient. The stress-energy tensor is given by:
\begin{equation}
T_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi+\eta_{\mu\nu}\mathcal{L}%
_{3,1}.
\end{equation}
In particular, notice that the $tt$ and $zz$ components are the Lagrangian
densities of a scalar field in $4+0$ and $2+2$ signature:\footnote{We comment that some authors prefer a different
sign convention for the Euclidean signature Lagrangian: $\mathcal{L}^{\mathrm{us}}_{4,0} = - \mathcal{L}^{\mathrm{them}}_{4,0}$.
The important physical point is that for either convention, we have a sensible statistical field theory interpretation.}
\begin{align}
T_{tt} & =\frac{1}{2}\left( \left( \partial_{t}\phi\right) ^{2}+\left(
\partial_{z}\phi\right) ^{2}+\left( \partial_{x}\phi\right) ^{2}+\left(
\partial_{y}\phi\right) ^{2}\right) +V(\phi) \equiv \mathcal{L}_{4,0} \,,
\\
T_{zz} & =\frac{1}{2}\left( \left( \partial_{t}\phi\right) ^{2}+\left(
\partial_{z}\phi\right) ^{2}-\left( \partial_{x}\phi\right) ^{2}-\left(
\partial_{y}\phi\right) ^{2}\right) -V(\phi) \equiv \mathcal{L}_{2,2} .
\end{align}
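These identifications can be verified symbolically by treating the field derivatives as independent symbols (a minimal sympy check of the algebra above):

```python
import sympy as sp

# Check that T_tt and T_zz reproduce the 4+0 and 2+2 Lagrangian densities,
# with p_mu standing in for d_mu(phi) and eta = diag(-1,+1,+1,+1).
p_t, p_x, p_y, p_z, V = sp.symbols('p_t p_x p_y p_z V')

L31 = sp.Rational(1, 2)*p_t**2 - sp.Rational(1, 2)*(p_x**2 + p_y**2 + p_z**2) - V
T_tt = p_t*p_t + (-1)*L31          # eta_tt = -1
T_zz = p_z*p_z + (+1)*L31          # eta_zz = +1

L40 = sp.Rational(1, 2)*(p_t**2 + p_z**2 + p_x**2 + p_y**2) + V
L22 = sp.Rational(1, 2)*(p_t**2 + p_z**2 - p_x**2 - p_y**2) - V

print(sp.simplify(T_tt - L40))     # 0
print(sp.simplify(T_zz - L22))     # 0
```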
Quantizing the $3+1$ signature theory, it is convenient to introduce spatial field configurations
$\Phi\left( \vec{x}\right)\equiv \phi(\vec x, t_{\ast}) $ at a fixed time $t_{\ast}$.
Similarly, we can represent the conjugate momentum $\pi\left(x\right)
\equiv \partial_{t}\phi(x) |_{t = t_{\ast}}$ as the functional derivative $\Pi(\vec y)= -i\delta/\delta\Phi(\vec y)$.
The field and its momentum then satisfy the canonical commutation relation:
\begin{equation}
[\Phi\left( \vec{x}\right) ,\Pi\left( \vec{y}%
\right) ]=i\delta^{3}(\vec{x}-\vec{y}).
\end{equation}
Related to $T_{tt}$ and $T_{zz}$, we can write corresponding
operator-valued densities in the field basis:
\begin{align}
\mathcal{H} & =\frac{1}{2}\left( \Pi^{2}+\left( \partial_{z}\Phi\right)
^{2}+\left( \partial_{x}\Phi\right) ^{2}+\left( \partial_{y}\Phi\right)
^{2}\right) +V(\Phi)\,,\\
\widetilde{\mathcal{H}} & =\frac{1}{2}\left( \Pi^{2}+\left( \partial
_{z}\Phi\right) ^{2}-\left( \partial_{x}\Phi\right) ^{2}-\left(
\partial_{y}\Phi\right) ^{2}\right) -V(\Phi) \,.
\end{align}
From this, we can also define two integrated operators:
\begin{equation}
H=\int {\rm d}^{3}x\,\mathcal{H}\text{ \ \ and \ \ }\widetilde{H}=\int
{\rm d}^{3}x\,\widetilde{\mathcal{H}}.
\end{equation}
The operator $H$ measures the energy of a state and the operator
$\widetilde{H}$ measures the flux through the $z$ direction in a given state. (Note that $\tl H$ is distinct from
a generator of translations, since these would come
from integrating $T_{0\mu}$ over a spatial slice.)
Given a state
$\left\vert \psi_i \right\rangle $ at an initial time $t_{i}$, we can evolve
it forward using the exponentiated operators:
\begin{align}
U & =\exp(-iH\Delta t)\,, & W&=\exp(-H\Delta t)\,,\\
\widetilde{U} & =\exp(+i\widetilde{H}\Delta t\,)\,, & \widetilde{W}&=\exp(-\widetilde{H}\Delta t\,)\,.%
\end{align}
While $U$ and $\widetilde{U}$ are manifestly unitary, the operators $W$ and
$\widetilde{W}$ instead act as projectors (they
also do not preserve the norm). Given an initial state $\left\vert \psi
_{i}\right\rangle $, we can act on it with the time evolution
operator $U$ and, as is well known, the resulting evolved state is captured by
the standard path integral in Lorentzian signature.
Consider next the evolution generated by acting with $\widetilde{U}$. In this case, the expectation value
$\left\langle \Phi_{f}(\vec{x})\right\vert \widetilde{U}
\cdots \widetilde{U}\left\vert \Phi_{i}(\vec{x})\right\rangle $ can
also be obtained by inserting a basis of eigenstates $\left\vert
\Phi(\vec{x})\right\rangle \left\langle \Phi(\vec{x}
)\right\vert $ and $\left\vert \Pi(\vec{x})\right\rangle
\left\langle \Pi(\vec{x})\right\vert $, and one now obtains:\footnote{Although it is customary
to indicate the path integral with ``limits of integration'' as indicated, the functional integral does not obey a fundamental theorem of calculus if we functionally differentiate with respect to these boundary conditions. Rather, the notation serves as a reminder to sum over all field configurations with prescribed boundary conditions at the beginning and end of a given path.}
\begin{equation}
\begin{aligned}
\left\langle \Phi_{f}(\vec{x})\right\vert \widetilde{U}%
\cdots \widetilde{U}\left\vert \Phi_{i}(\vec{x})\right\rangle &
=\int_{\Phi_{i}}^{\Phi_{f}}{\cal D}\phi\,{\cal D}\pi\,\exp\left( i\int {\rm d}t{\rm d}^{3}x\left[\pi\dot{\phi
}+\widetilde{\mathcal{H}}\right]\right)\,, \label{LegendreTransform}\\
& =\int_{\Phi_{i}}^{\Phi_{f}}{\cal D}\phi \,e^{i\widetilde{S}}\,,
\end{aligned}
\end{equation}
where, after integrating out the canonical momentum using $\pi = -\dot\phi$ we find
\begin{equation}
\widetilde{S}=\int {\rm d}t\,{\rm d}^{3}x\,\left( \frac{1}{2}\left[
-\left( \partial_{t}\phi\right) ^{2}+\left( \partial_{z}\phi\right)
^{2}-\left( \partial_{x}\phi\right) ^{2}-\left( \partial_{y}\phi\right)
^{2}\right] -V(\phi)\right) .
\end{equation}
Namely, we get back the ``standard'' Lagrangian, but where the roles of $z$ and $t$ have traded places: $z$ is now
functioning effectively as a time coordinate.
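The elimination of the canonical momentum can likewise be checked symbolically (a minimal sympy sketch; derivatives are treated as independent symbols):

```python
import sympy as sp

# Integrating out the momentum in eq. (LegendreTransform): stationarity of
# pi*phidot + Htilde in pi gives pi = -phidot, and substituting back yields
# the integrand of S-tilde.
pi_, pd, p_z, p_x, p_y, V = sp.symbols('pi phidot p_z p_x p_y V')

Htilde = sp.Rational(1, 2)*(pi_**2 + p_z**2 - p_x**2 - p_y**2) - V
integrand = pi_*pd + Htilde

sol = sp.solve(sp.diff(integrand, pi_), pi_)[0]    # stationary point: -phidot
S_integrand = sp.Rational(1, 2)*(-pd**2 + p_z**2 - p_x**2 - p_y**2) - V

print(sol)                                          # -phidot
print(sp.simplify(integrand.subs(pi_, sol) - S_integrand))  # 0
```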
\begin{figure}[t!]
\begin{center}
\includegraphics[scale = 1.25, trim = {0cm 0.0cm 0cm 0.0cm}]{states.pdf}
\caption{Depiction of the spatial slicing of states, indicated at $z = z_L$ and $z = z_R$ respectively as
Schr\"odinger-picture states $\vert \Phi_{L}(t,x,y)\rangle$ and $\vert \Phi_{R}(t,x,y)\rangle$ of a $(2+1)$-dimensional theory.}
\label{fig:states}
\end{center}
\end{figure}
In light of the previous discussion,
there is clearly a sense in which acting with $\widetilde{U}$ corresponds to
evolution of a state in the $z$ direction. With this in mind, we now
contemplate a different question: suppose we slice up our spacetime \textit{spatially}
into $2+1$-dimensional systems, indexed by the $z$ direction. We are then free to speak of states (in the $2+1$ Schr\"{o}dinger picture) $\vert \Phi(t,x,y,z_{\ast}) \rangle$, where we fix a reference value of $z_{\ast}$. Consider two such $2+1$ slices separated in the $z$ direction, and labeled as $\vert \Phi_L(t,x,y) \rangle$ and $\vert \Phi_{R}(t,x,y)\rangle$, respectively (see figure \ref{fig:states}).
Now, suppose we are interested in computing expectation values
of field operators which are ordered in the $z$ direction rather than in the
standard time-ordering. That is, given states $\left\vert \Phi_{L}(t,x,y)\right\rangle $
and $\left\vert \Phi_{R}(t,x,y)\right\rangle $ specified at fixed $z$ values,
how would we go about computing:
\begin{equation}
\left\langle \Phi_{L}\left\vert Z\left\{ \phi\left( x_{1}\right)
\cdots\phi\left( x_{n}\right) \right\} \right\vert \Phi_{R}\right\rangle ,
\end{equation}
where $Z\left\{ \cdots \right\}$ represents $z$-ordering, the $z$-direction analog of the usual time-ordering prescription of quantum field theory?
Our proposal is that such quantities can be obtained by evaluating a path
integral expectation value in $2+2$ signature:
\begin{equation}
\left\langle \phi\left( x_{1}\right) \cdots \phi\left( x_{n}\right)
\right\rangle _{L,R}=\frac{\displaystyle\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi\,\exp\left( -\int
{\rm d}^{4}x\text{ }\mathcal{L}_{2,2}\right) \phi\left( x_{1}\right)
\cdots\phi\left( x_{n}\right) }{\displaystyle\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi\,\exp\left(
-\int {\rm d}^{4}x\text{ }\mathcal{L}_{2,2}\right) }.
\end{equation}
We then analytically continue this answer back to $3+1$ signature to obtain
the $z$-ordered correlation function.
We now explain this procedure in more detail. First, we introduce a quantity
closely related to $T_{zz}$ which we shall refer to as a \textquotedblleft
flux\textquotedblright\ functional:
\begin{equation}
\mathcal{F}=\frac{1}{2}\left( \left( \partial_{t}\phi\right) ^{2}%
+\widetilde{\pi}^{2}-\left( \partial_{x}\phi\right) ^{2}-\left(
\partial_{y}\phi\right) ^{2}\right) -V(\phi).
\end{equation}
Observe that we can now specify a path integral in which the sum over paths
involves boundary conditions $\Phi_{L}(t,x,y)$ and $\Phi_{R}(t,x,y)$
specified at the slices $z = z_L$ and $z_R$. Evolution proceeds
according to this flux $\mathcal{F}$. Indeed, we can introduce the path integral:
\begin{equation}
\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi{\cal D}\tl\pi\,\exp\left( i\int
{\rm d}^{4}x\Big[\tl\pi\partial_z\phi+\mathcal{F}\Big]\right) ,
\end{equation}
and integrating out $\widetilde{\pi}$ now results in the standard $3+1$
integrand of the path integral, but where the boundary conditions on the path integral
are now specified at fixed values of the spatial coordinate $z$ rather than at fixed times:
\begin{equation}
\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi\,\exp\left( i\int {\rm d}^{4}x\text{ }%
\mathcal{L}_{3,1}\right) .
\end{equation}
Note that whereas in the standard path integral we evolve forward in time,
here we evolve from ``right to left''.\footnote{We would have evolution from left to right
if we had instead evolved with $-\mathcal{F}$.} We can
use this expression to evaluate $z$-ordered (as opposed to time-ordered)
correlation functions of local operators. In this case, the evolution operator
is associated with the \textquotedblleft Hamiltonian density\textquotedblright%
\ $-T_{zz}$, but we emphasize that the corresponding Lagrangian density
appearing in the path integral is identical to the usual case.
We can also compute expectation values with respect to preferred eigenstates
of the flux operator $\mathcal{F}$. Along these lines, we can perform a
path integral:
\begin{equation}
\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi{\cal D}\widetilde{\pi}\,\exp\left( \int {\rm d}^{4}x\,\Big[i\widetilde{\pi}\partial_{z}
\phi-\mathcal{F}\Big]\right) =\int_{\Phi_{R}}^{\Phi_{L}}{\cal D}\phi\,\exp\left( -\int {\rm d}^{4}x\,\mathcal{L}_{2,2}\right) ,
\end{equation}
where now the Kleinian signature Lagrangian makes an appearance:
\begin{equation}
\mathcal{L}_{2,2}=\frac{1}{2}\left( \left( \partial_{t}\phi\right)
^{2}+\left( \partial_{z}\phi\right) ^{2}-\left( \partial_{x}\phi\right)
^{2}-\left( \partial_{y}\phi\right) ^{2}\right) -V(\phi) .
\end{equation}
Since we are slicing up the configuration of paths in a nonstandard way, there is
clearly a sense in which we are making reference to the integrated flux
operator:
\begin{equation}\label{FluxOp}
F(z)=\int {\rm d}t{\rm d}x{\rm d}y \, T_{zz}(t,x,y,z),
\end{equation}
which is something one typically does not do, for the obvious reason that it
grossly violates causality. In particular, integrating over the $t$ direction in equation (\ref{FluxOp})
deviates from the usual formulation of quantum states being
specified on a fixed time slice (i.e., a Cauchy surface). Although it is
clearly a bit formal, a priori there is no issue with treating $F(z)$ as an
operator which acts on our Hilbert space of states. Indeed, since
$T_{zz}(t,x,y,z)$ is constructed from the field $\phi(t,x,y,z)$, and since
$\phi(t,x,y,z)$ is just a linear combination of creation and annihilation
operators, we also get an expression for the operator $F(z)$ in terms of the
same creation and annihilation operators.
Our discussion generalizes to other degrees of freedom. As an illustrative
example, consider a free Dirac field $\psi$. In this case, the relevant
components of the stress-energy tensor are:
\begin{align}
T_{tt} & =i\overline{\psi}\gamma_{t}\partial_{t}\psi=-i\overline{\psi
}\left( \gamma_{z}\partial_{z}+\gamma_{x}\partial_{x}+\gamma_{y}\partial
_{y}\right) \psi\,,\\
T_{zz} & =i\overline{\psi}\gamma_{z}\partial_{z}\psi=-i\overline{\psi
}\left( \gamma_{t}\partial_{t}+\gamma_{x}\partial_{x}+\gamma_{y}\partial
_{y}\right) \psi,
\end{align}
where we have used the equations of motion in the second equality. Note that in performing the Legendre transformation
for the flux operator, we introduce a term proportional to $T_{tt}$ in the case of $t$-evolution, and $T_{zz}$ in the case of
$z$-evolution. Combining with the rightmost terms in the above makes manifest that when performing the corresponding evolutions,
the net effect is to analytically continue in the $t$ and $z$ directions, respectively.
The whole discussion can be phrased more abstractly for any field theory
with a stress-energy tensor, $T_{\mu\nu}$. Given a fixed vector $\xi^{\mu}$, we
can evolve our states using the operator $\exp(iF_{\xi
})$ defined via:
\begin{equation}
F_{\xi}=\int {\rm d}^{3}\Sigma^{\mu}\,\xi^{\nu}T_{\mu\nu},
\end{equation}
where ${\rm d}^{3}\Sigma^{\mu}$ is the directed element of the surface perpendicular to $\xi^{\nu}$.
If we instead use the projection operator defined by $\exp(-F_{\xi})$, we see that
successive applications of this operator leads us to states
which have a wave-functional captured by a statistical field theory obtained by
Wick rotation in the $\xi$ direction.\footnote{For example, returning to the scalar field example above, we can work in spherical coordinates and use $T_{rr}$ to project to a state with low flux through constant radius surfaces.}
Clearly, there are three qualitative choices, corresponding to $\xi$ timelike,
spacelike, or null, and the case of present interest to us is the
seemingly most pathological (in the sense of causality), where $\xi$ is spacelike.
When $\xi$ is timelike, $F_{\xi}$ is just the Hamiltonian and this projection operation maps
a generic state onto a linear combination of states with dominant amplitude in the ground state.
What sort of state is being created by instead acting repeatedly with
$\exp(-F_{\xi})$ when $\xi$ is spacelike? To answer this question we decompose the
Hilbert space into eigenstates of $F_{\xi}$. By definition, these
are associated with the pressure or, more precisely, the flux in the $\xi
$ direction. The projection obtained via $\exp(-LF_{\xi})$ for large $L$
amounts to restricting to states with \textquotedblleft
minimal\textquotedblright\ flux in this direction. For all these reasons, we
shall refer to the state obtained by acting with $\exp(-F_{\xi})$ as
\textquotedblleft low flux states in the $\xi$-direction\textquotedblright,
and shall denote them as $\left\vert \text{LOW}\right\rangle $.
Acting with $\exp(-F_{\xi})$ is potentially dangerous when $\xi$ is spacelike because the action
$S_{2,2}$ is unbounded from below. Indeed, the best we can really do is to perform a perturbative analysis around
a given saddle point configuration. The situation is somewhat akin to what occurs in
the Euclidean path integral of gravity, where the action is also unbounded from below.
For some recent additional discussion on some of the subtleties with
analytic continuation of metrics and summing over saddle point configurations,
see e.g., \cite{Kontsevich:2021dmb, Witten:2021nzp}.
Less formally, we expect that actual on-shell physical configurations will always
have a minimal value of $z$-flux. For example, in the case of a perfect fluid
where pressure $p$ is proportional to energy density via $p=w\rho$, the
cosmological constant (which has $w=-1$) would constitute a low pressure configuration.
In order to sidestep these difficulties, we adopt the sentiment that this projection operation is a somewhat formal device that selects low flux states in the vicinity of some chosen saddle point.
Given a $2+2$ Lagrangian, we can also identify candidate symmetries which leave the Lagrangian, and more generally the
partition function, invariant. Such symmetries serve to constrain the correlation functions, and lead to corresponding Ward identities.
The interpretation of these symmetries in $3+1$ signature is more subtle. For example, in a theory with $\mathrm{ISO}(2,2)$ spacetime symmetry, the continuation to Lorentzian signature need not respect this symmetry. That said, the structure of correlation functions will still be constrained.
A related set of issues concerns the systematics of perturbation theory in such systems. Along these lines, suppose we work in $2+2$ signature momentum space. Then, we can present the data of a real four-vector in terms of a complex two-vector with components $(\Omega, K)$. The difference between $2+2$ (Kleinian) and $4+0$ (Euclidean) norms is:
\begin{align}
\mathrm{2+2}: &~~~ k^2 = -\Omega\, \overline{\Omega} + K \overline{K}\\
\mathrm{4+0}: &~~~ k^2 = +\Omega\, \overline{\Omega} + K \overline{K}.
\end{align}
So, much as we can perform all calculations in Lorentzian signature by first Wick rotating to Euclidean signature, we can similarly
perform a Wick rotation from Kleinian to Euclidean signature by formally continuing in the norm $\vert \Omega \vert \rightarrow i \vert \Omega \vert$.
An important subtlety here is the corresponding $i \varepsilon$ prescription when continuing back to Lorentzian signature.~The appropriate notion of $z$-ordering and projection onto the $\vert \mathrm{LOW} \rangle$ state means that in our evaluation of
loop integrals we should work in a deformed contour of integration with respect to the $k_z$ direction. Said differently, we simply apply the standard $i \varepsilon$ prescription, but in the $k_z$ rather than $k_0$ momentum coordinate~\cite{Srednyak:2013ylj}.
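One concrete way to package the real four-momentum, consistent with the two norms above, is $\Omega = k_t + i k_z$ and $K = k_x + i k_y$; this grouping is our assumption, since the text leaves it implicit. A quick symbolic check:

```python
import sympy as sp

# Package a real Kleinian four-momentum (kt, kx, ky, kz) into a complex
# two-vector: Omega = kt + i*kz, K = kx + i*ky (our choice of grouping).
kt, kx, ky, kz = sp.symbols('kt kx ky kz', real=True)
Omega, K = kt + sp.I*kz, kx + sp.I*ky

kleinian  = sp.expand(-Omega*sp.conjugate(Omega) + K*sp.conjugate(K))
euclidean = sp.expand(+Omega*sp.conjugate(Omega) + K*sp.conjugate(K))

# Compare with the norms in signatures (-,+,+,-) and (+,+,+,+):
print(sp.simplify(kleinian  - (-kt**2 + kx**2 + ky**2 - kz**2)))   # 0
print(sp.simplify(euclidean - ( kt**2 + kx**2 + ky**2 + kz**2)))   # 0
```

With this packaging the two timelike components are grouped into $\Omega$, so the continuation $\vert \Omega \vert \rightarrow i \vert \Omega \vert$ flips the sign of $\Omega\overline{\Omega}$ and maps the Kleinian norm to the Euclidean one.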
\section{$2+2$ Signature Lagrangians}
Having motivated the appearance of $2+2$ signature Lagrangians in some physical problems, we now construct some examples.~We view the action principle as specifying a statistical field theory evaluated around a saddle point
configuration.
Let us begin with the Lagrangian density of a massless free real scalar
field:
\begin{equation}
\mathcal{L}[\phi]=-\frac{1}{2}\eta_{(K)}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}%
\phi=\frac{1}{2}\left( \left( \partial_{t}\phi\right) ^{2}+\left(
\partial_{z}\phi\right) ^{2}-\left( \partial_{x}\phi\right) ^{2}-\left(
\partial_{y}\phi\right) ^{2}\right) ,
\end{equation}
where $\eta_{(K)}^{\mu\nu} = {\rm diag}(-1,+1,+1,-1)$.
By inspection, this is invariant under the Kleinian analog of the Poincar\'e
symmetries, i.e., translations and $\mathrm{Spin}(2,2)$ rotations. Next consider
including additional real scalar fields. We index the fields as
$\phi^{I}$ and introduce a general symmetric constant matrix $M_{IJ}$ for the kinetic terms of
these fields. We have, in general:
\begin{equation}
\mathcal{L}[\phi^{I}]=-\frac{1}{2}M_{IJ}\eta_{(K)}^{\mu\nu}\partial_{\mu}\phi^{I}%
\partial_{\nu}\phi^{J}.
\end{equation}
Since we are working in Kleinian signature, we can a priori allow a
wider variety of possible $M$'s than we would permit in Lorentzian
signature. For example, we could take $M$ to be a $2\times2$ matrix such as:
\begin{equation}
M=\left[
\begin{array}
[c]{cc}%
0 & 1\\
1 & 0
\end{array}
\right] ,
\end{equation}
which would lead to a Lagrangian with a \textquotedblleft wrong sign kinetic
term\textquotedblright\ for one of our fields. Indeed, starting from such an
action, we can write:
\begin{equation}
\mathcal{L}[\phi^{1},\phi^{2}]=-\eta^{\mu\nu}_{(K)}\partial_\mu\phi^{1}\partial_\nu\phi^{2}=-\frac{1}{2}\eta_{(K)}^{\mu\nu}\partial_\mu
\phi_{+}\partial_\nu\phi_{+}+\frac{1}{2}\eta_{(K)}^{\mu\nu}\partial_\mu\phi_{-}\partial_\nu\phi_{-},
\end{equation}
where we have diagonalized the Lagrangian by defining the combinations
\begin{equation}
\phi_{\pm}=\frac{1}{\sqrt{2}}\left( \phi^{1}\pm\phi^{2}\right) .
\end{equation}
This is of course a pathological Lagrangian in Lorentzian signature (but see also \cite{Carroll:2003st, Cline:2003gs}),
though it will appear quite naturally as the kinetic term of scalars in certain supersymmetric systems in
Kleinian signature.
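The diagonalization can be checked in one line of sympy (derivatives along a fixed direction are modeled by plain symbols):

```python
import sympy as sp

# With phi_pm = (phi1 +/- phi2)/sqrt(2), the cross kinetic term
# d(phi1) d(phi2) equals (d phi_+)^2 / 2 - (d phi_-)^2 / 2.
d1, d2 = sp.symbols('d1 d2')            # d_mu phi^1, d_mu phi^2
dp = (d1 + d2) / sp.sqrt(2)             # d_mu phi_+
dm = (d1 - d2) / sp.sqrt(2)             # d_mu phi_-

print(sp.simplify(d1*d2 - (dp**2 - dm**2)/2))   # 0
```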
A priori, in Kleinian signature one could entertain both real and unitary forms of various gauge
groups (as well as suitable complexifications). To see why both possibilities
are available, recall our Lagrangian for a pair of real scalars:
\begin{equation}
\mathcal{L}[\phi^{1},\phi^{2}]=-\eta^{\mu\nu}_{(K)}\partial_\mu\phi^{1}\partial_\nu\phi^{2}\,.
\end{equation}
There is a symmetry of the theory given by the real rescaling:
\begin{equation}
\phi^{1}\mapsto e^{\xi}\phi^{1}\text{ \ \ and \ \ }\phi^{2}\mapsto e^{-\xi
}\phi^{2}.
\end{equation}
The corresponding group of symmetries $\mathbb{R}^{\ast}$ is noncompact. More
generally, we can entertain real forms of various symmetry groups such as
$\mathrm{SL}(N,\mathbb{R})$. There is a sense in which this choice is more natural,
especially in the context of gluon scattering amplitudes in Kleinian signature~\cite{Witten:2003nn,Arkani-Hamed:2009hub}.
Consider next fermionic degrees of freedom. To accomplish
this, we need to discuss spinor representations, associated with the
Clifford algebra:
\begin{equation}
\big\{ \gamma_{(K)}^{\mu},\gamma_{(K)}^{\nu} \big\} =-2\eta_{(K)}^{\mu\nu
}. \label{gammaKlein}%
\end{equation}
Spinors transform in representations of
$\mathrm{Spin}(2,2)\simeq \mathrm{SL}(2,\mathbb{R})_{L}\times \mathrm{SL}(2,\mathbb{R})_{R}$. Irreducible
representations are given by real doublets under one of these factors, i.e.,
Majorana--Weyl spinors. Raising and lowering of a spinor index is accomplished with the anti-symmetric
tensors $\varepsilon_{ab}$ and $\varepsilon_{\dot{a} \dot {b}}$ with
$\varepsilon_{12} = -1$.\footnote{For spinors of the $2+1$ signature Lorentz group $\mathrm{SL}(2,\mathbb{R})$,
it is customary to include an additional factor of $i$ in the $\varepsilon_{ab}$ (i.e., charge conjugation)
tensors, with suitable reality conditions enforced via the 3D Dirac matrices. For our purposes, however, where
we still have a notion of chirality (which is not an issue in 3D),
this would be a bit awkward since a doublet $\chi_a$ with
real entries would then be related to the doublet $\chi^a$
with purely imaginary entries.}
Complexifying a Majorana--Weyl spinor results in a Weyl spinor.
Taking a pair of left-handed and right-handed Weyl
spinors gives a Dirac spinor. This is a four-component vector with
complex entries:
\begin{equation}
\Psi=\left[
\begin{array}
[c]{c}%
\psi_{a}\\
\widetilde{\chi}^{\dot{a}}%
\end{array}
\right] .
\end{equation}
We introduce a Dirac spinor $\Lambda$ in a conjugate representation with
entries:\footnote{Note that because of the choice of signature, we refrain
from introducing the conjugate spinor via $\Psi^{\dag}\cdot\gamma_{(K)}^{0}$.}
\begin{equation}
\Lambda=\left[
\begin{array}
[c]{cc}%
\zeta^{a} & \widetilde{\eta}_{\dot{a}}%
\end{array}
\right] .
\end{equation}
In terms of this, the Dirac Lagrangian (in any signature, see e.g., reference \cite{Wetterich:2010ni}) is given by:\footnote{A comment on the factor of $-i$. A common practice in the case of the Euclidean signature Dirac action is to absorb the factor of $i$ into $\Lambda$, which is often denoted as ``$\overline{\Psi}$'' even though it is not related to the degrees of freedom in $\Psi$. Our choice to retain the factor of $i$ has to do with subsequent comparison with the literature (for example \cite{Seiberg:2003yz}). Moreover, with the factor of $i$, we can impose a suitable reality condition on the Majorana--Weyl spinor action, something we can achieve in $2+2$ signature, but not $4+0$ signature. For some additional discussion and review of various approaches to Euclidean spinors and Wick rotations, see reference \cite{vanNieuwenhuizen:1996tv}.}
\begin{equation}
\mathcal{L}[\Psi,\Lambda]=-i\Lambda\gamma^{\mu}\partial_{\mu}\Psi.
\end{equation}
By construction, this is Poincar\'{e} invariant. Now we specialize to write down the action of smaller representations. To proceed, it is
helpful to introduce a chiral basis of gamma matrices with all real entries:
\begin{equation}
\gamma_{(K)}^{\mu}=\left[
\begin{array}
[c]{cc}
0& \sigma_{(K)}^{\mu}\\
\overline{\sigma}_{(K)}^{\mu} &0
\end{array}
\right] ,
\end{equation}
where:
\begin{equation}
\sigma_{(K)}^{0} =\left[
\begin{array}
[c]{cc}%
-1 &0 \\
0& -1
\end{array}
\right] \text{, }~\sigma_{(K)}^{1}=\left[
\begin{array}
[c]{cc}
0& 1\\
1 &0
\end{array}
\right] \text{, }~\sigma_{(K)}^{2}=\left[
\begin{array}
[c]{cc}%
1 &0 \\
0& -1
\end{array}
\right] \text{, }~\sigma_{(K)}^{3}=\left[
\begin{array}
[c]{cc}
0& -1\\
1 &0
\end{array}
\right] \,,
\label{eq:paulimatrices}
\end{equation}
and where $\bar\sigma_{(K)}^\mu = (\sigma_{(K)}^0,-\sigma^i_{(K)})$.
Observe that, in contrast to Lorentzian signature, all the $\sigma_{(K)}^{\mu}$'s have
real entries. Moreover, we have analytically continued in the $z$-direction, with
$\sigma_{(K)}^{3}$ anti-Hermitian.\footnote{In comparing with Lorentzian signature conventions, we observe that
$\sigma^2$ and $\sigma^3$ appear to have switched roles. This is just a choice of labelling scheme, and permuting the coordinates would
amount to working with a metric of signature $(-,+,-,+)$. It makes no material difference to the physical content.}
This is required for the gamma matrices to satisfy the Clifford algebra of equation \eqref{gammaKlein}. The Lagrangian for our Dirac fermion now decomposes as:
\begin{align}
\mathcal{L}[\Psi,\Lambda] & =-i\left[
\begin{array}
[c]{cc}%
\zeta^{a} & \widetilde{\eta}_{\dot{a}}%
\end{array}
\right] \cdot\left[
\begin{array}
[c]{cc}
0& \sigma_{(K)}^{\mu}\partial_{\mu}\\
\overline{\sigma}_{(K)}^{\mu}\partial_{\mu} &0
\end{array}
\right] \cdot\left[
\begin{array}
[c]{c}%
\psi_{a}\\
\widetilde{\chi}^{\dot{a}}%
\end{array}
\right] \\
& =-i \widetilde{\eta}_{\dot{a}}\left( \overline{\sigma}_{(K)}^{\mu
}\partial_{\mu}\right) ^{\dot{a}a}\psi_{a}-i\zeta^{a}\left( \sigma_{(K)}%
^{\mu}\partial_{\mu}\right) _{a\dot{a}}\widetilde{\chi}^{\dot{a}} .
\end{align}
As expected, the action pairs a right-mover with a left-mover, which are in this case complex Weyl spinors.
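As a consistency check on these conventions, the Clifford algebra \eqref{gammaKlein} can be verified numerically. A minimal Python sketch (our own check; in this coordinate ordering the conventions above give $\eta_{(K)} = \mathrm{diag}(-1,+1,+1,-1)$):

```python
# Kleinian-signature sigma matrices of eq. (paulimatrices); all entries are real.
sigma = [
    [[-1, 0], [0, -1]],  # sigma^0
    [[0, 1], [1, 0]],    # sigma^1
    [[1, 0], [0, -1]],   # sigma^2
    [[0, -1], [1, 0]],   # sigma^3 (the anti-Hermitian one)
]
# sigma-bar^mu = (sigma^0, -sigma^i)
sbar = [sigma[0]] + [[[-x for x in row] for row in s] for s in sigma[1:]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def gamma(mu):
    # gamma^mu = [[0, sigma^mu], [sigma-bar^mu, 0]] assembled as a 4x4 matrix.
    g = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            g[i][j + 2] = sigma[mu][i][j]
            g[i + 2][j] = sbar[mu][i][j]
    return g

eta = [-1, 1, 1, -1]  # diagonal of eta_(K) in this coordinate ordering

def anticomm(mu, nu):
    A, B = gamma(mu), gamma(nu)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(4)] for i in range(4)]

# {gamma^mu, gamma^nu} should equal -2 eta^{mu nu} times the 4x4 identity.
clifford_ok = all(
    anticomm(mu, nu) == [[(-2 * eta[mu] if (mu == nu and i == j) else 0)
                          for j in range(4)] for i in range(4)]
    for mu in range(4) for nu in range(4)
)
```

The cross terms vanish identically, and the diagonal entries reproduce the split signature $(-,+,+,-)$.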
Now, a special property of split signature is that we can simultaneously impose the Majorana and Weyl conditions.
We are therefore also free to restrict to the special case of purely
real spinors. In what follows, we shall often work with a single pair of Majorana--Weyl spinors
$\lambda_{a}$ and $\widetilde{\lambda}_{\dot{a}}$ and write the action as:
\begin{equation}
\mathcal{L}[\lambda,\widetilde{\lambda}]=-i\widetilde{\lambda}_{\dot{a}}\left(
\overline{\sigma}_{(K)}^{\mu}\partial_{\mu}\right) ^{\dot{a}a}\lambda_{a}.
\end{equation}
We stress that in Kleinian signature, there is no relation between
$\lambda$ and $\widetilde{\lambda}$ of the sort found in Lorentzian signature, where they are related by Hermitian conjugation. Rather,
in Kleinian signature they are simply two different
Majorana--Weyl fermions, one left-handed and one right-handed. Note also that, as opposed to the situation in
Euclidean signature, here we can enforce a reality condition for the action.\footnote{Indeed, observe that under complex conjugation we have $(\widetilde{\lambda}_{\dot a} (\overline{\sigma}^{\mu}_{(K)} \partial_{\mu})^{\dot{a} a} \lambda_a)^{\ast} = (\overline{\sigma}^{\mu}_{(K)} \partial_{\mu})^{\dot{a} a} \lambda_{a} \widetilde{\lambda}_{\dot a}
= - \widetilde{\lambda}_{\dot a}(\overline{\sigma}^{\mu}_{(K)} \partial_{\mu})^{\dot{a} a} \lambda_{a}$,
where we have used the fact that in Kleinian signature, the $\overline{\sigma}^{\mu}$'s are real matrices,
and the doublets are also real. Including an overall factor of $-i$ then ensures that the action is real.}
Much as in the case of our theory of scalars, we can generalize to multiple fermions.
Assuming that the kinetic
term is non-degenerate, we can, without loss of generality, use the fermions $\lambda_{a}^{A}$ and
$\widetilde{\lambda}_{\dot{a}}^{B}$ along with a non-degenerate symmetric matrix
$K_{AB}$ to produce:
\begin{equation}
\mathcal{L}[\lambda,\widetilde{\lambda}]=-iK_{AB}\widetilde{\lambda}_{\dot{a}}%
^{A}\left( \overline{\sigma}_{(K)}^{\mu}\partial_{\mu}\right) ^{\dot{a}%
a}\lambda_{a}^{B}.
\end{equation}
As in the case of our scalar action, there is no a priori reason to restrict this
quadratic form to be positive definite. In what follows, we will suppress the $(K)$ subscript when the
context is clear.
\subsection{Supersymmetry}
Let us now turn to the structure of supersymmetry in Kleinian signature. In
this subsection we focus on the case of $\mathcal{N}=1$ supersymmetry; namely
we have a left-handed Majorana--Weyl spinor $Q_{a}$ and a right-handed
Majorana--Weyl spinor $\widetilde{Q}_{\dot{a}}$. Our conventions in Lorentzian
signature follow those in \cite{Wess:1992cp}, and those of
\cite{Seiberg:2003yz} in Euclidean signature. Our task will be to develop the
related structures in Kleinian signature (see also
\cite{Castellani:1982ke,Bergshoeff:1982az,Blencowe:1988sk,Bars:1996dz,Cortes:2019mfa,Cortes:2020shr,Gall:2021tiu}%
). In Kleinian signature, the supersymmetry algebra is:
\begin{align}
\{ Q_{a},\widetilde{Q}_{\dot{b}}\} & =2\sigma_{a\dot{b}}^{\mu
}P_{\mu}& \lbrack P_{\mu},Q_{a}] & =[P_\mu,\widetilde{Q}_{\dot{a}}]=0\\
\{ Q_{a},Q_{b}\} & =\{ \widetilde{Q}_{\dot{a}%
},\widetilde{Q}_{\dot{b}}\} =0 & \lbrack P_{\mu},P_{\nu}] & =0,
\end{align}
where $\sigma_{a\dot{b}}^{\mu}$ are the Pauli matrices in Kleinian
signature~\eqref{eq:paulimatrices} (we suppress the $(K)$ subscript) and $P_\mu = i \partial_\mu$.
We note that in contrast to Lorentzian signature, here the spinors are
independent real doublets. Even though they are not related by conjugation, they are still
linked together. For example, observe that there is a redundancy in our
characterization as captured by the rescaling:
\begin{equation}
Q\mapsto e^{\xi}Q\text{ \ \ and \ \ }\widetilde{Q}\mapsto e^{-\xi
}\widetilde{Q}.
\end{equation}
In a theory which respects this rescaling transformation, we have a
corresponding non-compact R-symmetry group $\mathbb{R}^{\ast}$.
Constructing a supersymmetric Lagrangian is also straightforward, and can be
adapted from the Lorentzian signature treatment. Formally speaking we
construct examples of such Lagrangians by replacing all complex conjugate
fields by their tilded versions. This also includes analytic continuation of symmetry groups from compact to real forms.
We begin by introducing the infinitesimal
parameters $\varepsilon_{a}$ and $\widetilde{\varepsilon}_{\dot{a}}$, and use
these to define a symmetry generator:%
\begin{equation}
\delta=\varepsilon^{a}Q_{a}+\widetilde{\varepsilon}_{\dot{a}}\widetilde{Q}%
^{\dot{a}}.
\end{equation}
As an example of a supersymmetric Lagrangian, consider a real scalar $\phi$, a
Majorana--Weyl spinor $\lambda_{a}$ and a real auxiliary field $F$, as well as
partner fields $\widetilde{\phi}$, $\widetilde{\lambda}_{\dot{a}}$ and
$\widetilde{F}$. Our explicit Lagrangian is:
\begin{equation}\label{eq:LAGfree}
\mathcal{L}=-i\widetilde{\lambda}_{\dot{a}}\left( \overline{\sigma}^{\mu}\partial_{\mu
}\right) ^{\dot{a}a}\lambda_{a} - \partial^{\mu}\widetilde{\phi}\partial_{\mu
}\phi+\widetilde{F}F.
\end{equation}
We can explicitly verify that this action is invariant under the following
transformation rules:
\begin{align}
\delta\phi & =\sqrt{2}\varepsilon^{a}\lambda_{a}\text{, \ \ }
&
\delta
\lambda_{a}&= i\sqrt{2}\left( \sigma^{\mu}\partial_{\mu}\phi\right) _{a\dot
{a}}\widetilde{\varepsilon}^{\dot{a}}+\sqrt{2}\varepsilon_{a}F\text{,
\ \ }&
\delta F&= i\sqrt{2}\widetilde{\varepsilon}_{\dot{a}}\left( \overline
{\sigma}^{\mu}\partial_{\mu}\right) ^{\dot{a}a}\lambda_{a}\,,\\
\delta\widetilde{\phi} & =\sqrt{2}\widetilde{\varepsilon}_{\dot{a}%
}\widetilde{\lambda}^{\dot{a}}\,,&
\delta\widetilde{\lambda}_{\dot{a}%
}&=-i\sqrt{2}\epsilon_{\dot a\dot b}\left( \overline{\sigma}^{\mu}\partial_{\mu}\widetilde{\phi
}\right) ^{\dot{b}b}\varepsilon_{b}+\sqrt{2}\widetilde{\varepsilon}_{\dot{a}%
}\widetilde{F}\,,~&
\delta\widetilde{F}&=-i\sqrt{2}\varepsilon
^{a}\left( \sigma^{\mu}\partial_{\mu}\right)_{a\dot{a}}\widetilde{\lambda
}^{\dot{a}}.
\end{align}
To construct supersymmetric actions, it is convenient to work in terms of superspace. To this end,
we supplement our spacetime by a pair of Majorana--Weyl
spinors $\theta^{a}$ and $\widetilde{\theta}^{\dot{a}}$. Using these superspace coordinates, we
have explicit representatives of left and right anti-derivations:
\begin{align}
\label{eq:suspdef1}
Q_{a} & =+i\left( \frac{\partial}{\partial\theta^{a}} + i\sigma_{a\dot{a}%
}^{\mu}\widetilde{\theta}^{\dot{a}}\partial_{\mu}\right)\,,
&
D_{a} & =-i\left( \frac{\partial}{\partial\theta^{a}} -i\sigma_{a\dot{a}%
}^{\mu}\widetilde{\theta}^{\dot{a}}\partial_{\mu}\right)\,, \\
\widetilde{Q}_{\dot{a}} & =-i\left( \frac{\partial}{\partial
\widetilde{\theta}^{\dot{a}}}+i\theta^{a}\sigma_{a\dot{a}}^{\mu}\partial_{\mu
}\right) \,,
&
\widetilde{D}_{\dot{a}} & =+i\left( \frac{\partial}{\partial
\widetilde{\theta}^{\dot{a}}}-i\theta^{a}\sigma_{a\dot{a}}^{\mu}\partial_{\mu
}\right) .
\label{eq:suspdef2}
\end{align}
Each of these generators is invariant under complex conjugation because:\footnote{Note that with our
reality conventions, $\left( {\rm d}\theta\right) ^{\ast}=-{\rm d}(\theta^{\ast
})=-{\rm d}\theta$. See e.g.,~DeWitt's book on Supermanifolds~\cite{DeWitt:2012mdz}.~This is a consequence of our convention for complex conjugation, namely $(\alpha\beta
)^{\ast}=\beta^{\ast}\alpha^{\ast}$, so for real Grassmann variables, we have
$(\alpha\beta)^{\ast}=-(\alpha\beta)$.}
\begin{equation}
\left(\frac{\partial}{\partial \theta^a} \right)^{\ast} = - \frac{\partial}{\partial \theta^a},\,\,\,\,\, \left(\frac{\partial}{\partial \widetilde{\theta}^{\dot{a}}} \right)^{\ast} = - \frac{\partial}{\partial \widetilde{\theta}^{\dot{a}}},\,\,\,\,\, \left(\frac{\partial}{\partial x^{\mu}} \right)^{\ast} = \frac{\partial}{\partial x^{\mu}}.
\end{equation}
We caution that this is slightly different from Hermitian conjugation, which is implicitly defined by using integration over superspace to define an inner product. Indeed, the Hermitian conjugates have an additional minus sign. The generators $Q_a$, $\widetilde{Q}_{\dot a}$, $D_a$, and $\widetilde{D}_{\dot a}$ are thus anti-Hermitian in our conventions. This tracks with what happens in the case of 3D $\mathcal{N} = 1$ superspace (see e.g., equation (2.2.2a) of reference \cite{Gates:1983nr}). It is also instructive
to compare this with what we would have in Lorentzian
signature. There, we would have no overall factor of $i$ or $-i$ on each
term. Additionally, complex conjugation would send $\partial/\partial\theta$
to $-\partial/\partial\overline{\theta}$.
With the definitions~\eqref{eq:suspdef1} and~\eqref{eq:suspdef2}, we obtain the expected
supersymmetry algebra:
\begin{align}
\big\{Q_{a},\widetilde{Q}_{\dot{a}}\big\} & =+2i\sigma_{a\dot{a}}^{\mu}\partial_{\mu}\\
\big\{ D_{a},\widetilde{D}_{\dot{a}}\big\} & =-2i\sigma_{a\dot{a}}^{\mu}\partial_{\mu}.
\end{align}
It is also convenient to work in terms of a \textquotedblleft
shifted\textquotedblright\ superspace coordinate which favors one handedness
over the other. Introducing coordinates:
\begin{equation}
y^{\mu}=x^{\mu}+i\theta^{a} \sigma^{\mu}_{a\dot{a}}\widetilde{\theta}^{\dot{a}},
\end{equation}
we observe that $y$ is purely real. One can also introduce an ``anti-chiral'' coordinate:
\begin{equation}
\widetilde{y}^{\mu}=x^{\mu} - i\theta^{a} \sigma^{\mu}_{a\dot{a}}\widetilde{\theta}^{\dot{a}},
\end{equation}
which is also real. In terms of this, the super-derivations
take the form:
\begin{align}
Q_{a} & =i\,\frac{\partial}{\partial\theta^{a}}
&
D_{a} & =-i\left( \frac{\partial}{\partial\theta^{a}}-2i\sigma_{a\dot{a}%
}^{\mu}\widetilde{\theta}^{\dot{a}}\frac{\partial}{\partial y^{\mu}}\right) \\
\widetilde{Q}_{\dot{a}} & =-i\left( \frac{\partial}{\partial
\widetilde{\theta}^{\dot{a}}}+2i\theta^{a}\sigma_{a\dot{a}}^{\mu}%
\frac{\partial}{\partial y^{\mu}}\right)
&
\widetilde{D}_{\dot{a}} & =+i\,\frac{\partial}{\partial
\widetilde{\theta}^{\dot{a}}}\,.
\end{align}
Superfields can be introduced in the standard way. Scalar functions of the superspace coordinates $f(y,\widetilde{y}, \theta, \widetilde{\theta})$ can be expanded into components. For example, chiral and anti-chiral superfields satisfy the conditions:
\begin{align}
\text{Chiral}\text{: } \quad & \widetilde{D}_{\dot{a}}\Phi=0\,,\\
\text{Anti-Chiral}\text{: } \quad & D_{a}\widetilde{\Phi}=0\,.
\end{align}
Thus $\Phi$ depends only on $y$ and $\theta$, and $\widetilde{\Phi}$ depends only on $\widetilde{y}$ and $\widetilde{\theta}$.
In terms of component field expansions, we have:
\begin{equation}
\Phi(y,\theta) = \phi(y) - i \sqrt{2} \theta \lambda(y) - i \theta \theta F(y),
\end{equation}
with a similar expansion for $\widetilde{\Phi}$.
Indeed, in passing from Lorentzian to Kleinian signature, the main difference is that $\Phi$ and $\widetilde{\Phi}$ are not related by complex conjugation. Instead, each is purely real: $\Phi = \Phi^{\dag}$ and $\widetilde{\Phi} = \widetilde{\Phi}^{\dag}$. In this sense, they are more analogous to the superfields of a Lorentzian signature 3D $\mathcal{N} = 1$ system.\footnote{Compared with standard 3D $\mathcal{N} = 1$ conventions (see e.g., \cite{Gates:1983nr}),
we have some additional factors of $i$ in our component field expansion. This is because our $\varepsilon_{ab}$ tensor is purely real.}
To build a supersymmetric action, we can simply make the substitution $\widetilde{\Phi}$ for each complex conjugate quantity $\Phi^{\dag}$ in Lorentzian signature. For example, the Lagrangian for a free superfield $\Phi \oplus \widetilde{\Phi}$ is:
\begin{equation}
\mathcal{L}_{\Phi} = \widetilde{\Phi} \Phi |_{\theta \theta \widetilde{\theta} \widetilde{\theta}},
\end{equation}
where the subscript is an instruction to only keep the term in the component field expansion proportional to $\theta \theta \widetilde{\theta} \widetilde{\theta}$. This yields the Lagrangian of equation (\ref{eq:LAGfree}).
Similar considerations hold for vector superfields, which we specify by the condition $V = V^{\dag}$.
For ease of exposition we focus on the case of abelian gauge fields; the extension to non-abelian gauge symmetry presents no additional
complications. A comment here is that the gaugino is a four-component Majorana fermion,
and in $2+2$ signature, it is more natural to take the real form of the gauge group.\footnote{For example $\mathbb{R}^{\ast}$ rather than $\mathrm{U}(1)$, and $\mathrm{SL}(N,\mathbb{R})$ rather than $\mathrm{SU}(N)$.}
Introducing an abelian vector superfield $V$, gauge transformations act as:
\begin{equation}
V \mapsto V + \Xi - \widetilde{\Xi}
\end{equation}
with $\Xi$ a chiral superfield and $\widetilde{\Xi}$ its counterpart. We can build the spinorial field strengths:
\begin{align}
W_a &= -\frac{1}{4} \widetilde{D} \widetilde{D} D_{a} V \\
\widetilde{W}_{\dot{a}} &= -\frac{1}{4} D D \widetilde{D}_{\dot{a}} V,
\end{align}
and then construct a corresponding kinetic term via:
\begin{equation}
\mathcal{L}_{\mathcal{W}} = \frac{1}{4 g^2} \left( W^2|_{\theta \theta} + \widetilde{W}^2 |_{\widetilde{\theta} \widetilde{\theta}} \right),
\end{equation}
with $g$ the gauge coupling. We can also couple the vector multiplet to charge $q$ chiral superfields via the term $\widetilde{\Phi} e^{q V} \Phi$. Observe that whereas in Lorentzian and Euclidean signature the gauge transformation on a charged field would send $\Phi \mapsto \exp(- i q \Xi) \Phi$ and $\Phi^{\dag} \mapsto \exp(i q \Xi^{\dag}) \Phi^{\dag}$ for $\Xi$ a chiral superfield, in Kleinian signature, reality of the various superfields requires us to instead take the transformations $\Phi \mapsto \exp(- q \Xi) \Phi$ and $\widetilde{\Phi} \mapsto \exp(q \widetilde{\Xi}) \widetilde{\Phi}$. Clearly, one can build more elaborate supersymmetric actions for quantum field theories and supergravity theories in the standard way, following, for example, reference \cite{Wess:1992cp}. A final comment is that a number of $\mathcal{N} = 2$ actions in various signatures were recently constructed in \cite{Cortes:2019mfa,Cortes:2020shr,Gall:2021tiu}.
\section{Lorentz Breaking and $\mathcal{N}=1/2$ Supersymmetry}
Precisely because the Lorentz group in $2+2$ signature admits spinor
representations with two real degrees of freedom, it is natural to ask whether we can
find Lagrangians which preserve the minimal amount of supersymmetry.
Upon analytic continuation to Lorentzian
signature, we expect such theories to formally have $\mathcal{N}=0$
supersymmetry, but in which some states such as $\left\vert \text{LOW}%
\right\rangle $ are protected against radiative corrections.
This is one of our main observations.
Now, even though the minimal irreducible spinor representation consists of two
real degrees of freedom, building a $\mathrm{Spin}(2,2)$ invariant action in which
degrees of freedom propagate in all four spacetime directions is not actually
possible~\cite{Cortes:2019mfa,Cortes:2020shr, Gall:2021tiu}. This can also be explicitly checked simply by attempting to
construct a kinetic term for a single Majorana--Weyl spinor in Kleinian signature.
To build examples of $\mathcal{N}=1/2$ supersymmetric theories, we must
therefore entertain the possibility of breaking Lorentz symmetry. In fact, the
analogous question in Euclidean signature was studied in \cite{Ooguri:2003qp, Ooguri:2003tt, Seiberg:2003yz, Berkovits:2003kj}
for a specific breaking pattern based on a self-dual field strength of the
sort which naturally appears in Euclidean signature $\mathcal{N}=2$
supergravity backgrounds with a non-zero graviphoton field strength. In
general terms, there can be breaking patterns triggered by a vector, $v_{a\dot b}$, a
self-dual field strength, $c_{ab}$, or an anti-self-dual field strength, $\tl c_{\dot a\dot b}$, which take the following forms:
\begin{align}
& \mathrm{Spin}(2,2)\xrightarrow{v_{a\dot{b}}}\mathrm{SL}(2,\mathbb{R})_{D}\,,\\
& \mathrm{Spin}(2,2)\xrightarrow{c_{ab}}\mathrm{SL}(2,\mathbb{R})_{R}\,,\\
& \mathrm{Spin}(2,2)\xrightarrow{\widetilde{c}_{\dot{a}\dot{b}}}\mathrm{SL}(2,\mathbb{R})_{L}\,,
\end{align}
these respectively break the Lorentz symmetry to the diagonal, anti-chiral, or
chiral subgroup of the full Lorentz group. It is worth noting that there is a sense in which the breaking via a timelike vector is more natural in the context of cosmology. Let us now discuss each possibility in turn.
\subsection{Vector-Like Breaking}
Let us begin with vector-like breaking of the Lorentz group.
Observe that the decomposition of the vector $v_{a\dot{b}}$ under the
diagonal subgroup $\mathrm{SL}(2,\mathbb{R})_{D}$ yields
$\mathbf{3}$~\raisebox{0.145ex}{$\oplus$}~$\mathbf{1}$, so switching on this expectation
value means we need to project to representations of the diagonal group:
\begin{align}
\nonumber
\mathrm{SL}(2,\mathbb{R})_{L}\times \mathrm{SL}(2,\mathbb{R})_{R} &\supset \mathrm{SL}(2,\mathbb{R})_{D} \\
v_{a\dot{b}}:(\mathbf{2},\mathbf{2}) &\rightarrow\mathbf{3}~\displaystyle{\raisebox{0.14ex}{$\oplus$}}~\mathbf{1}\\
Q_{a}:(\mathbf{2},\mathbf{1}) &\rightarrow\mathbf{2}\\
\widetilde{Q}_{\dot{b}}:(\mathbf{1},\mathbf{2}) &\rightarrow\mathbf{2}\,,%
\end{align}
in accord with our general discussion presented above. The presence of $v_{a\dot b}$ as an object in the theory allows us to convert dotted and undotted indices into each other. We should therefore also expect a modified supersymmetry algebra.
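To make the $\mathbf{3}~\raisebox{0.14ex}{$\oplus$}~\mathbf{1}$ decomposition concrete: under the diagonal subgroup the bispinor transforms as $v \mapsto g\, v\, g^{T}$ with $g \in \mathrm{SL}(2,\mathbb{R})_{D}$, and the symmetric and antisymmetric parts are separately preserved. A small numerical illustration (our own sketch; the particular matrices are arbitrary choices):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

g = [[2, 1], [3, 2]]     # an SL(2,R) element: det = 2*2 - 1*3 = 1

S = [[1, 2], [2, -3]]    # symmetric bispinor: the 3 of SL(2,R)_D
eps = [[0, 1], [-1, 0]]  # antisymmetric bispinor: the 1 (singlet)

# Diagonal action v -> g v g^T on each irreducible piece:
gS = matmul(matmul(g, S), transpose(g))
gEps = matmul(matmul(g, eps), transpose(g))
```

The symmetric part stays symmetric (an invariant subspace), while the antisymmetric tensor is exactly invariant, since $g\,\varepsilon\,g^{T} = \det(g)\,\varepsilon = \varepsilon$.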
Without loss of generality, we may adopt a frame in which
$v_{a\dot{b}} = - \sigma_{a\dot{b}}^{0}$ is
just the identity matrix. Using this tensor, we
can then convert right-handed spinors to left-handed ones via:
\begin{equation}
\overline{Q}_{a} \equiv v_{a\dot{b}}\widetilde{Q}^{\dot{b}}. \label{Qidentification}%
\end{equation}
In other words, we are making the identifications:
\begin{equation}
\overline{Q}_{1}=\widetilde{Q}^{\dot{1}}=\widetilde{Q}_{\dot{2}}\text{ \ \ and
\ \ }\overline{Q}_{2}=\widetilde{Q}^{\dot{2}}=-\widetilde{Q}_{\dot{1}}.
\end{equation}
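As a quick sanity check on these index gymnastics, here is a minimal Python sketch (our own check; we assume indices are raised as $\widetilde{Q}^{\dot a} = \varepsilon^{\dot a \dot b}\widetilde{Q}_{\dot b}$ with $\varepsilon^{\dot 1 \dot 2} = +1$, the inverse of $\varepsilon_{\dot 1 \dot 2} = -1$, and treat the components as plain numbers since only the linear contractions are being tested):

```python
# Arbitrary numerical stand-ins for the components (Qt_{dot1}, Qt_{dot2}).
Qt_low = [7.0, 3.0]

eps_up = [[0, 1], [-1, 0]]  # eps^{dot-a dot-b}, with eps^{dot-1 dot-2} = +1
v = [[1, 0], [0, 1]]        # v_{a dot-b} = -sigma^0 = identity

# Raise the dotted index, then contract with v_{a dot-b}:
Qt_up = [sum(eps_up[a][b] * Qt_low[b] for b in range(2)) for a in range(2)]
Qbar = [sum(v[a][b] * Qt_up[b] for b in range(2)) for a in range(2)]

# Reproduces: Qbar_1 = Qt_{dot2} and Qbar_2 = -Qt_{dot1}.
```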
We can now take a general linear combination of the spinors $\overline
{Q}_{a}$ and $Q_{a}$ as given by:
\begin{equation}
\mathbb{Q}_{a}\equiv \frac{1}{\sqrt{2}}\left( \zeta Q_{a}+\zeta^{-1}\overline
{Q}_{a}\right) \text{,}%
\end{equation}
where $\zeta$ is a non-zero real number. This real parameter determines which $\mathcal{N}=1/2$ subalgebra is preserved.
We now ask which supercharges leave our vector $v_{a\dot{b}}$ invariant. In order to facilitate this, we first introduce a corresponding real vector superfield which contains the component:\footnote{Here we are defining $\bar\theta_a \equiv v_{a\dot b}\tl\theta^{\dot b}$. Also, compared with the Lorentzian signature expression we have an additional factor of $-i$ to ensure $V = V^{\dag}$.}
\begin{equation}
V= \cdots + i\theta^{a}v_{a\dot{b}}\widetilde{\theta}^{\dot{b}}+ \cdots = \cdots + i\theta
^{a}\overline{\theta}_{a}+ \cdots\,. \label{Vexpand}%
\end{equation}
Acting with $\mathbb{Q}_{a}$ results in:
\begin{equation}
\mathbb{Q}_{a}\, V=\frac{1}{\sqrt{2}}\left( \zeta\overline{\theta}%
_{a}-\zeta^{-1}\theta_{a}\right) .
\end{equation}
So, we see that there is a 3D $\mathcal{N}=1$ superspace invariant under the
action of the $\mathbb{Q}_{a}$'s given by identifying:
\begin{equation}
\zeta\overline{\theta}_{a}=\zeta^{-1}\theta_{a}.
\end{equation}
The end result of this is that the $\mathcal{N}=1/2$ algebra is just that of a
3D $\mathcal{N}=1$ system:
\begin{equation}
\begin{aligned}
\left\{ \mathbb{Q}_{a},\mathbb{Q}_{b}\right\} & =\frac{1}{2}\left\{ \zeta
Q_{a}+\zeta^{-1}\overline{Q}_{a},\zeta Q_{b}+\zeta^{-1}\overline{Q}%
_{b}\right\} \\
& =\frac{1}{2}\left\{ Q_{a},\overline{Q}_{b}\right\} +\frac{1}{2}\left\{
\overline{Q}_{a},Q_{b}\right\} \\
& =v_{b\dot{b}}P_{a}^{~\,\dot{b}}+v_{a\dot{a}}P_{b}^{~\,\dot{a}},
\end{aligned}
\end{equation}
which we summarize by simply writing the 3D $\mathcal{N}=1$ algebra:
\begin{equation}
\left\{ \mathbb{Q}_{a},\mathbb{Q}_{b}\right\} =2P_{ab},
\end{equation}
where $P_{ab}$ generates translations in the direction transverse to the timelike vector $v_{a \dot{b}}$.
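The cancellation of the $\zeta$-dependence can also be illustrated in a finite-dimensional toy model of the algebra (our own sketch, with the momenta normalized away so that $\{Q_a, \overline{Q}_b\} = \delta_{ab}$, and the supercharges represented by fermionic oscillators via a Jordan--Wigner construction):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] + BA[i][j] for j in range(n)] for i in range(n)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

ann = [[0, 1], [0, 0]]   # single-mode fermionic annihilation operator
cre = [[0, 0], [1, 0]]   # creation
Z   = [[1, 0], [0, -1]]  # fermion parity (Jordan--Wigner string)
I2  = [[1, 0], [0, 1]]

# Two fermionic modes: {c_a, c_b} = 0 and {c_a, c_b^dag} = delta_ab.
c    = [kron(ann, I2), kron(Z, ann)]
cdag = [kron(cre, I2), kron(Z, cre)]

def QQ(idx, zeta):
    # QQ_a = (zeta Q_a + zeta^{-1} Qbar_a)/sqrt(2), with Q_a -> c_a, Qbar_a -> c_a^dag.
    return [[(zeta * c[idx][i][j] + cdag[idx][i][j] / zeta) / math.sqrt(2.0)
             for j in range(4)] for i in range(4)]

# {QQ_a, QQ_b} should equal delta_ab times the identity, for any nonzero zeta.
zeta_independent = all(
    abs(anticomm(QQ(p, z), QQ(q, z))[i][j]
        - (1.0 if (p == q and i == j) else 0.0)) < 1e-12
    for z in (0.5, 1.0, 3.0)
    for p in range(2) for q in range(2)
    for i in range(4) for j in range(4)
)
```

The $\zeta^{2}$ and $\zeta^{-2}$ terms drop out precisely because $\{Q,Q\} = \{\overline{Q},\overline{Q}\} = 0$.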
We now discuss a few generalizations. The analysis just presented assumed a timelike vector, but in Kleinian signature we could equally well
have considered a spacelike vector. The reason is that under spatial breaking, we would retain a
3D Lorentz symmetry group with metric of signature $(-,-,+)$. The situation is different in Lorentzian signature.
Activating a timelike vector would leave us with a metric of signature $(+,+,+)$, and spinors in 3D Euclidean space are pseudo-real rather than real. Imposing a reality condition on a pseudo-real doublet would force all entries to be zero. On the other hand, a spacelike vector
can preserve 3D $\mathcal{N} = 1$ supersymmetry, as happens for supersymmetric domain walls.
Vector-like breaking can also be accomplished for any background which produces the same breaking pattern of the Lorentz group.
Indeed, the general form of the coset construction for spontaneously broken spacetime symmetries ensures that other choices, e.g., a time-dependent rolling scalar background, as well as a three-form flux will also result in the same symmetry breaking pattern. A related comment
is that in the F-theory motivated scenario of \cite{Heckman:2018mxl, Heckman:2019dsj}, a three-form flux threads the spatial slice of an
FLRW cosmology, which, upon analytically continuing to Kleinian signature, retains the same $\mathcal{N} = 1/2$ supersymmetry considered here.
\subsubsection{Example}
Let us now give an example of vector-like breaking. To keep the supersymmetry of the system manifest,
we embed our vector field in a non-dynamical vector superfield $V$, and keep it at a fixed background value.\footnote{In other words, we view the real vector $v_{\mu}$ as the field strength for a zero-form potential. In terms of superfields we can write $V = - i \theta D S - i \widetilde{\theta} \widetilde{D} \widetilde{S} $ for a background chiral multiplet $S$ and its partner $\widetilde{S}$. We have included both $S$ and $\widetilde{S}$ in anticipation of continuing back to Lorentzian signature.} We can couple matter fields to $V$ to obtain a corresponding supersymmetric action. To illustrate, consider a chiral multiplet (and its partner) $\Phi \oplus \widetilde{\Phi}$ with respective charges $+q$ and $-q$ under the vector.
The supersymmetric kinetic term for the chiral multiplet is:\footnote{So far we have been following the Lorentzian signature conventions of \cite{Wess:1992cp}, but to avoid various factors of $1/2$ for covariant derivatives in later expressions
we now rescale $V \mapsto -2V$.}
\begin{equation}
S=\int {\rm d}^{4}x{\rm d}^{4}\theta\text{ }\widetilde{\Phi}e^{-2qV}\Phi + \cdots\,.
\end{equation}
Giving $V$ a background value:
\begin{equation}
V= i \theta^{a}v_{a\dot{b}}\widetilde{\theta}^{\dot{b}}= i \theta^{a}\overline{\theta}_{a},
\end{equation}
we note that only two real supercharges will leave this function of superspace
invariant. Indeed, that is the content of our discussion around equation~\eqref{Vexpand}.
Plugging this expression back into our action, we obtain the following component field action:
\begin{equation}
S =\int {\rm d}^{4}x\text{ } -i \widetilde{\lambda}_{\dot{a}}\left(
\partial^{\dot{a}a}+ q v^{\dot{a}a}\right) \lambda_{a} -(\partial_{\mu}- q v_{\mu})\widetilde{\phi}(\partial^{\mu
}+ q v^{\mu})\phi +\cdots\,,
\end{equation}
where we have expanded in the component fields of the various superfields. For example, $\phi$ is the scalar component of
$\Phi$ and $\lambda_a$ is its fermionic superpartner.
Additionally, we have used the bispinor notation $\partial^{\dot{a} a} = \overline{\sigma}_{\mu}^{\dot{a} a} \partial^{\mu}$.
This leads to Lorentz-violating couplings, but this is potentially permissible if the violation is solely in the timelike direction (as occurs anyway in cosmology).
\subsection{Chiral Breaking}
Chiral breaking of the Lorentz group retains an $\mathcal{N}=1/2$
subalgebra simply because one half of the superalgebra generated by $Q$ and $\widetilde{Q}$
is not deformed at all. Euclidean signature effective actions were
constructed using the language of non-anti-commutative Grassmann variables in
superspace in references \cite{Ooguri:2003qp,Ooguri:2003tt,Seiberg:2003yz}. In more
pedestrian terms, we can simply construct the corresponding supersymmetric
action compatible with spontaneous breaking of a chiral half of the Lorentz algebra.
Deformations of the underlying superspace geometry have been considered in a number of references,
including \cite{Casalbuoni:1975hx, Casalbuoni:1975bj, Casalbuoni:1976tz, Schwarz:1982pf, Ferrara:2000mm,
Klemm:2001yu, Abbaspur:2002xj, deBoer:2003dpn, Ooguri:2003qp, Ooguri:2003tt, Kawai:2003yf, Chepelev:2003ga,
David:2003ke, Seiberg:2003yz, Britto:2003aj,Berkovits:2003kj}, although we add that these references
consider Euclidean, rather than Kleinian signature spacetime.
The general approach in this setup is to consider a deformation of the Grassmann superspace coordinates:
\begin{equation}
\{\theta^a , \theta^b \} = c^{ab},
\end{equation}
with $c^{ab}$ a constant background field. This can also be accompanied by a non-commutative deformation of the spacetime coordinates, namely
$[x^{\mu} , x^{\nu}] = i b^{\mu \nu}$. In reference \cite{Seiberg:2003yz} the construction of effective actions in terms of a corresponding generalized Moyal product was developed. From the present perspective, this is but one choice of symmetry breaking for the Lorentz group.
Physical backgrounds which produce this sort of breaking pattern are obtained from switching on a background self-dual field strength. This can be arranged in both Euclidean and Kleinian signature. For further details on the analysis in Euclidean signature, see e.g., \cite{Ooguri:2003qp, Ooguri:2003tt, Kawai:2003yf, Chepelev:2003ga, David:2003ke, Seiberg:2003yz}.
\section{Spatial Instabilities} \label{ssec:Instabilities}
A different application of $2+2$ signature Lagrangians is in the study of
Lorentzian signature systems with a spatial instability.\footnote{We thank
C.L. Kane for discussions on this point.} To give an example, consider a system
of $(2+1)$-dimensional theories arranged as \textquotedblleft coupled
layers\textquotedblright\ in a $(3+1)$-dimensional system with one lattice
direction. For concreteness, we index the layers by $j\in\mathbb{Z}$ so that
for each layer we have fields $\phi_{j}$, and the action is:
\begin{equation}
S=\underset{j}{\sum}S_{j}+\underset{j}{\sum}S_{j,j+1},
\end{equation}
where $S_{j}$ is the action for a free field on a single layer:
\begin{equation}
S_{j}=\int {\rm d}^{3}x\,\frac{1}{2}\left( \left( \partial_{t}\phi
_{j}\right) ^{2}-\left( \partial_{x}\phi_{j}\right) ^{2}-\left(
\partial_{y}\phi_{j}\right) ^{2}\right) ,
\end{equation}
and $S_{j,j+1}$ is the contribution from the nearest-neighbor interaction term:
\begin{equation}
S_{j,j+1}=\int {\rm d}^{3}x\left(-\frac{\alpha}{4}\sin^{2}\left( \phi_{j}%
-\phi_{j+1}\right) \right) ,
\end{equation}
which we can treat as a bounded effective potential. When $\alpha>0$, this
just gives a lattice approximation to a $(3+1)$-signature system, and the
ground state has $\left\langle \phi_{j}-\phi_{j+1}\right\rangle =0$. For
$\alpha<0$, this same configuration is actually a local maximum and the minimum
is instead reached by taking $\phi_{j}-\phi_{j+1}=\pi/2$. Expanding around
this local maximum, we obtain a $2+2$ signature Lagrangian density.
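To make this explicit, introduce a bookkeeping lattice spacing $a$ for the layered direction $z$ (used only in this estimate) and expand the interaction term to quadratic order around the local maximum $\phi_{j}-\phi_{j+1}=0$:
\begin{equation}
-\frac{\alpha}{4}\sin^{2}\left( \phi_{j}-\phi_{j+1}\right) \simeq
-\frac{\alpha}{4}\left( \phi_{j}-\phi_{j+1}\right)^{2}\simeq
-\frac{\alpha a^{2}}{4}\left( \partial_{z}\phi\right)^{2},
\end{equation}
so that for $\alpha<0$ the quadratic fluctuation Lagrangian density is
\begin{equation}
\mathcal{L}\simeq\frac{1}{2}\left( \partial_{t}\phi\right)^{2}
-\frac{1}{2}\left( \partial_{x}\phi\right)^{2}
-\frac{1}{2}\left( \partial_{y}\phi\right)^{2}
+\frac{\left\vert \alpha\right\vert a^{2}}{4}\left( \partial_{z}\phi\right)^{2},
\end{equation}
which, after rescaling $z$, has two directions of each sign, i.e., $2+2$ signature.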
Instanton configurations in $2+2$ signature correspond to transitions from one local maximum to another. This has a clear meaning in the context of the 3D layers construction, but also generalizes to other field theories, including Yang-Mills theory. Indeed,
in both $2+2$ and $4+0$ signature, there are self-dual field configurations.
We remark that this construction is compatible with 3D $\mathcal{N} = 1$ supersymmetry.
Introducing 3D real superfields $\Phi_{j}$, we can couple neighboring layers via the superpotential:
\begin{equation}
W_{\mathrm{eff}} = \underset{j}{\sum} \cos(\Phi_{j} - \Phi_{j+1}),
\end{equation}
which implements a superspace version of dimensional deconstruction, much as in~\cite{Arkani-Hamed:2001vvu,Dijkgraaf:2003xk}.
The resulting effective potential $V \sim \vert \partial W / \partial \Phi \vert^2$ is positive definite, but when expanded around a local maximum will result in a deconstructed Kleinian signature theory.
\section{Discussion}
One of the original motivations for this work was to better understand the
potential role of $\mathcal{N}=1/2$ supersymmetry in $2+2$ signature
spacetimes as a way to address the \textquotedblleft cosmological constant
problem\textquotedblright. In $2+2$ signature,
bubble diagrams that correct the energy of an $\mathcal{N}=1/2$ supersymmetric state automatically vanish.
Indeed, this fact was already observed in the case of
chiral symmetry breaking of supersymmetry in Euclidean signature in
\cite{Seiberg:2003yz,Britto:2003aj,Terashima:2003ri}, but it clearly extends
to other choices of Lorentz breaking such as those which are of relevance in
cosmological backgrounds.
What might this mean for a Lorentzian signature spacetime? From the
present perspective, a perturbative calculation in $2+2$ signature is related
to correlation functions computed in a specific class of low-flux states. The
statement that $\mathcal{N}=1/2$ supersymmetry is retained in $2+2$ signature
means that these states do not mix at the quantum level with other states.
From this perspective, it is tempting to speculate that rather than working
with the ground state of a quantum field theory, the low flux states are more
appropriate in the context of cosmology.
Of course, one of the important phenomenological implications of supersymmetry
is the prediction of new superpartners for all of the states of the Standard
Model of particle physics. From the way we have constructed our $\mathcal{N}%
=1/2$ Lagrangians in $2+2$ signature, we see that the field content for the
$3+1$ theory obtained from analytic continuation will have precisely the same
degrees of freedom as in the MSSM, with the caveat that we do not expect to
have as much control over the resulting superpartner masses.
There are two conflicting intuitions, which make it challenging to reliably extract
the mass spectrum.~References \cite{Heckman:2018mxl,
Heckman:2019dsj} suggested that the geometric mean of an IR\ and UV\ cutoff
could arise via F-theory on a $\mathrm{Spin}(7)$ background, which in turn could
produce superpartner masses on the order of $10-100$ TeV. Are the
considerations presented here compatible with these coarse expectations?
On the one hand, in Minkowski space there is no such thing as
$\mathcal{N}=1/2$ supersymmetry. From this perspective, it is natural to
suspect that the $\mathcal{N}=1/2$ MSSM just involves a high scale for
supersymmetry breaking, once radiative corrections to the superpartner masses
are taken into account. On the other hand, the explicit models of
$\mathcal{N}=1/2$ supersymmetry we investigated involved spontaneous breaking
of Lorentz symmetry, which, to be compatible with cosmology, must lead to a
rather low scale for supersymmetry breaking. Said differently, in all of our
explicit realizations, we constructed $\mathcal{N}=1$ Lagrangians which only
broke to $\mathcal{N}=1/2$ supersymmetry at very low cosmological scales. Even so,
it is well-known in other contexts that even seemingly mild Lorentz breaking
terms can, after including radiative corrections, produce relatively large effects which are often difficult to reconcile with observational constraints.
One clear indication of an $\mathcal{N}=1/2$ structure would be apparent Lorentz violation effects, which would in turn
suggest an apparent violation of CPT symmetry (see e.g., \cite{Colladay:1996iz}).
It would be interesting to study the phenomenological
consequences of this, and related aspects of an $\mathcal{N}=1/2$ MSSM.
In this paper we have laid out a framework for thinking about quantum field theories in Kleinian signature spacetimes, describing the drawbacks, advantages, and challenges of exploiting these structures to address problems in $3+1$-dimensional physics. In future work we will take on some of these major challenges, constructing and studying an $\mathcal{N}=1/2$ MSSM, understanding the mass spectrum, and further interrogating the formal structure of this approach.
\vspace{-.25cm}
\paragraph{Acknowledgments:}
We thank G. Gabadadze, C.L. Kane, J. Stout, A. Tomasiello, E. Torres and S. Wong for helpful
discussions. This work was initiated at the 2019 Penn Center for Particle Cosmology retreat
hosted by PDT partners in New York, and we thank the local hosts, especially
D. Wesley for kind hospitality. Some of this work was also performed at the
2022 Aspen winter conference on Geometrization of (S)QFTs in $D\leq6$ held at
the Aspen Center for Physics, which is supported by National Science
Foundation grant PHY-1607611. The work of JJH and MT is supported by DOE (HEP)
Award DE-SC0013528. The work of AJ is supported by DOE (HEP) Award DE-SC0009924.
The work of MT is also supported by the Simons Foundation
Origins of the Universe Initiative, grant number 658904.
\section{Introduction}
A kervolutional neural network (KNN) is like a traditional convolutional neural network (CNN) except it uses a 'kervolution' operation instead of a convolution operation\cite{b1}. Kervolution is like convolution, the only difference being that kernel functions are applied to filters and receptive fields before performing the dot product operation. That is where the name 'kervolutional' neural network comes from: \textbf{ker}nel con\textbf{volutional} neural network.
The convolution operation in CNNs was partly inspired by how the simple cells in the visual cortex of the human brain recognize features in images. The kervolution operation is said to be analogous not just to simple cells in the visual cortex, but also to the complex and hypercomplex cells that can recognize more complex patterns than the simple cells. The original paper claims that applying a kernel function on the filters and receptive fields of the convolution operation allows filters to learn nonlinear, more complex features, much like the complex and hypercomplex cells of the visual cortex do. The original paper attributes its higher reported accuracies and quicker convergence times to this.
Based on the definition of kervolution, the convolution operation can be seen as a form of linear kervolution.
\section{Related Methods}
There are many other related methods applied to CNNs that have enabled nonlinear feature map learning or have made use of kernel functions. Of all the related methods described in the original paper\cite{b1}, KNNs are the only one able to introduce patch-wise non-linearity without increasing computational complexity. These methods include:
\subsection{Quadratic Convolution}
A modified convolution operation that is quadratic in the size of the receptive field increases the capacity of filters. This method, however, greatly increases training complexity and training time.
\subsection{Nonlinearity through Activation Functions}
Introducing nonlinearity through activation functions (such as ReLU) does not add parameters or training time, but introduces non-linearity only point-wise.
\subsection{CKN}
CKNs use kernel approximation to learn transform invariance, but can't extract non-linear features.
\subsection{SimNets}
The SimNets method adds kernel similarity layers under
convolutional layers to introduce more capacity to the model but also significantly increases the
complexity.
\subsection{Kernel Pooling}
Kernel pooling introduces more learnable non-linearity to the model but it is not able to extract patch-wise non-linear features and also increases training complexity substantially.
\section{The Kervolution Operation}
The kervolution operation makes use of the kernel trick to keep training complexity low while introducing non-linearity on the feature maps\cite{b1}. This section outlines a mathematical description of the kervolution operation which transforms CNNs into KNNs by replacing convolution with kervolution. First, a description of normal convolution is given:
$$\mathbf{conv}_{i}(x)=\left\langle x_{(i)},w\right\rangle,$$
where $x$ is the flattened input image, $i$ is the position of the receptive field (constructed using a circular shift, since $x$ is flattened), and $w$ is the filter (also a vector). Kervolution is then defined as:
$$\mathbf{kerv}_{i}(x)=\left\langle\varphi\left(x_{(i)}\right),\varphi(w)\right\rangle,$$
where $\varphi$ is a certain kernel function. The kernel trick is defined as:
$$\left\langle\varphi\left(x_{(i)}\right), \varphi(w)\right\rangle=\sum_{k} c_{k}\left(x_{(i)}^{T} w\right)^{k}=\kappa\left(x_{(i)}, w\right)$$
The kernel trick in the case of the polynomial kernel is:
$$\kappa_{\mathrm{p}}(x, w)=\sum_{j=0}^{d_{p}} c_{p}^{d_{p}-j}\left(x^{T} w\right)^{j}=\left(x^{T} w+c_{p}\right)^{d_{p}}$$
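As an illustration (a NumPy sketch, not code from either paper), the polynomial kervolution response can be evaluated directly through the kernel trick above. The parameter names $c_p$ and $d_p$ follow the equation; the one-dimensional `valid'-mode sliding window is a simplification of the circular-shift construction:

```python
import numpy as np

def kerv_polynomial(x_patch, w, c_p=1.0, d_p=2):
    """Polynomial kernel trick: kappa_p(x, w) = (x^T w + c_p)**d_p,
    which equals <phi(x), phi(w)> without forming phi explicitly."""
    return (np.dot(x_patch, w) + c_p) ** d_p

def kervolve_1d(x, w, c_p=1.0, d_p=2):
    """Slide the filter w over the flattened input x ('valid' mode),
    applying the polynomial kernel trick to each receptive field."""
    k = len(w)
    return np.array([kerv_polynomial(x[i:i + k], w, c_p, d_p)
                     for i in range(len(x) - k + 1)])
```

With $d_p = 1$ and $c_p = 0$ this reduces to ordinary convolution, matching the observation above that convolution is a form of linear kervolution.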
\section{Methodology}
\subsection{Methodology Used in the Original Paper}
The methodology used in the original paper relies on ablation\cite{b1}. That means that it examines certain network model configurations and compares their performance with and without kervolution while not changing any other parameters. The original paper also only uses one test set for each data set to provide accuracy scores. The two metrics used to show KNNs' improved performance are highest test accuracy during training, and time to convergence, where convergence in this case means achieving a target accuracy on the test data.
The original paper shows results on a few different data sets, including MNIST and CIFAR-10 with a few different model configurations including ResNet and LeNet-5. For every experiment, the KNN gives better results on highest test accuracy during training and time to convergence.
\subsection{New Methodology Used in This Paper}
The methodology used in this paper similarly uses an ablation study; however, for both the CNN and KNN models a learning rate search is performed to see which learning rate is optimal for each model. The learning rate search is done by starting at a learning rate of $0.2$ and halving it until it is less than or equal to $0.0002$. For each learning rate the model is trained on each of $5$ k-folds. This means the new methodology uses more than one test set, giving a better analysis of the metrics via a mean and confidence interval.
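The search loop can be sketched as follows (the routine \texttt{train\_and\_score} is a placeholder standing in for training and evaluating one model on one fold; it is not from the actual experiment code):

```python
def learning_rate_search(train_and_score, lr_start=0.2, lr_min=0.0002, n_folds=5):
    """Try learning rates lr_start, lr_start/2, ..., stopping once the
    rate has fallen to lr_min or below; train one model per fold at each rate."""
    results = {}
    lr = lr_start
    while lr > lr_min:  # halve until the rate is <= lr_min
        results[lr] = [train_and_score(lr, fold) for fold in range(n_folds)]
        lr /= 2.0
    return results
```

Starting at $0.2$ and halving until the rate falls to $0.0002$ or below yields $10$ learning rates per model, i.e., $5$ folds $\times$ $10$ rates $= 50$ experiments per data set.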
Additionally, the recorded highest test accuracy of a model was only updated every time a new highest training accuracy was seen. This better simulates how a model would be developed in a real-world setting, because the best model can only be selected by how well it performs on the data available for training and validation before being sent off to do the 'real work' and final evaluation on the test data. The original paper selects the best performing model on the test data for every experiment, which is not as reflective of how well the neural network really performs. It isn't a significant change in experimentation, but something worth noting. In figures~\ref{fig7} and~\ref{fig8} the correlation between training and testing accuracy can be seen.
This paper shows results on the MNIST and Fashion MNIST data sets. Fashion MNIST is considered to be a harder version of MNIST, but contains images of clothing instead of digits. The same LeNet-5 network configuration used in the original paper is used, with all of the same parameters, including milestones for learning rate decays and epochs to train. The kernels used in this paper are the polynomial kernel, which performed the best out of all the kernels tested in the original paper, and a newly proposed 'mixture of kernels' which is described in section~\ref{mixture:section}.
\section{Mixture of Kernels}
\label{mixture:section}
A related paper showed that linearly combining the RBF and polynomial kernel functions used for a SVM can show improved performance compared to using just one of the RBF or polynomial kernels\cite{b2}. The following equation describes it:
$$\kappa_{\mathrm{m}}(x, w)=\lambda\kappa_{\mathrm{g}}(x, w) + (1-\lambda)\kappa_{\mathrm{p}}(x, w),$$
where the tunable parameter $\lambda$ is bounded between $0$ and $1$. For use in a KNN, this paper proposes that the parameter $\lambda$ be a learnable parameter wrapped in a sigmoid function to keep it bounded between $0$ and $1$:
$$\kappa_{\mathrm{m}}(x, w)=\sigma(\lambda)\kappa_{\mathrm{g}}(x, w) + (1-\sigma(\lambda))\kappa_{\mathrm{p}}(x, w)$$
This 'mixture kernel' is used the same way as the polynomial kernel in KNNs, by applying it via (two) kernel trick(s) on the receptive field and filter. The only difference is the summation component after computing the two kernel tricks.
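A minimal NumPy sketch of the mixture kernel trick follows; the RBF width \texttt{gamma} and the plain-float mixing parameter \texttt{lam} are illustrative stand-ins for the learnable quantities:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def kappa_rbf(x, w, gamma=1.0):
    """RBF (Gaussian) kernel trick on a receptive field and filter."""
    d = x - w
    return np.exp(-gamma * np.dot(d, d))

def kappa_poly(x, w, c_p=1.0, d_p=2):
    """Polynomial kernel trick, as in the previous section."""
    return (np.dot(x, w) + c_p) ** d_p

def kappa_mixture(x, w, lam=0.0, gamma=1.0, c_p=1.0, d_p=2):
    """sigma(lam) * RBF + (1 - sigma(lam)) * polynomial; the sigmoid
    keeps the mixing weight bounded between 0 and 1."""
    s = sigmoid(lam)
    return s * kappa_rbf(x, w, gamma) + (1.0 - s) * kappa_poly(x, w, c_p, d_p)
```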
\section{Results}
\label{results:section}
This section outlines the results given from this paper's new methodology. For the metrics in each diagram the faded colours show the $95\%$ confidence interval and the solid darker lines show the mean value. The legend of each graph shows either the maximum mean highest test accuracy and confidence interval for the highest accuracy graphs, or minimum mean convergence time and confidence interval for the convergence time graphs.
\subsection{Highest Achieved Accuracy}
\label{accuracy:subsection}
See figures~\ref{fig1} and~\ref{fig2} for results showing the highest achieved accuracy vs learning rate on the MNIST and Fashion MNIST data sets respectively. It is clearly shown that the normal CNN achieves the highest mean accuracy on both data sets. In the original KNN paper the opposite result is shown\cite{b1}, where the normal CNN achieves the lowest accuracy out of all methods on the MNIST data set. The very wide confidence intervals are attributable to outlier experiments where the network sometimes converged to $10\%$ accuracy immediately and became stuck. For pragmatic reasons, the confidence intervals given by these outlier experiments won't be taken into consideration. Taking the bounds of the remaining confidence intervals into consideration, the normal CNN achieves at least $95\%$ reproducible highest accuracy. The polynomial kernel and mixture of kernels show comparable highest achieved accuracy for both data sets.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{mnist_highest_accuracies.jpg}}
\caption{Highest achieved test accuracy vs learning rate on the MNIST data set.}
\label{fig1}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{fashion_mnist_highest_accuracies.jpg}}
\caption{Highest achieved test accuracy vs learning rate on the Fashion MNIST data set.}
\label{fig2}
\end{figure}
\subsection{Time to Target Accuracy}
See figures~\ref{fig3},~\ref{fig4},~\ref{fig5} and~\ref{fig6} describing the results for time to train to target accuracy vs learning rate on the MNIST and Fashion MNIST data sets. For each model, results for a learning rate that contain at least one fold where target accuracy was not achieved are omitted from the graph, thus alleviating the 'outlier experiment' issue described in subsection~\ref{accuracy:subsection}. The normal convolutional neural network achieved faster times to target accuracy for both data sets. The polynomial kernel and mixture kernel showed very similar results, with the polynomial kernel being slightly faster. This is likely almost entirely because of the added computations from evaluating two kernels instead of one.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{mnist_time_to_98_accuracy.jpg}}
\caption{Time to reach $98\%$ test accuracy vs learning rate on the MNIST data set.}
\label{fig3}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{fashion_mnist_time_to_90_accuracy.jpg}}
\caption{Time to reach $90\%$ test accuracy vs learning rate on the Fashion MNIST data set.}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{mnist_time_to_97_accuracy.jpg}}
\caption{Time to reach $97\%$ test accuracy vs learning rate on the MNIST data set.}
\label{fig5}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{fashion_mnist_time_to_89_accuracy.jpg}}
\caption{Time to reach $89\%$ test accuracy vs learning rate on the Fashion MNIST data set.}
\label{fig6}
\end{figure}
\section{Analysis and Discussion}
This section contains further analysis on the results shown from section~\ref{results:section} along with additional theoretical analysis of KNNs absent from the original KNN paper.
\subsection{Learning Rates}
In the original paper the LeNet-5 model uses an initial learning rate of $0.003$ for both the KNN and CNN experiments on MNIST data. The results in section~\ref{results:section} clearly show that the learning rate used in the original paper benefitted the KNN more than the CNN. The polynomial KNN's best learning rate was $0.00625$, whereas the normal CNN's best learning rate was $0.025$. The polynomial kernel appears to increase sensitivity to the learning rate because of the higher order terms present in the filters.
The experimentation in this paper shows that it is important to be aware of the influence of hyperparameters when performing an ablation study. Given computational resource and time constraints, only one hyperparameter was searched. The learning rate was chosen as the parameter to tweak since it is perhaps the most important hyperparameter\cite{b3}. For future experiments, a much more efficient (but less exhaustive) learning rate search could be used~\cite{b4}, so that other important hyperparameter(s) can be searched through to potentially provide a more thorough analysis.
\subsection{Selecting a Kernel Operation}
One downside of the new KNN method is that there are more ways to tune a network. There are many different types of kernel functions to choose from, all that may perform better on different data sets and with different hyperparameters.
The mixture of kernels method proposed in this paper could make choosing a kernel much simpler. This could be done either through showing superior performance, or being used to evaluate a priori which kernel is most effective to use by itself.
\subsection{Kernel Operation vs Preprocessing Data}
It is interesting and potentially important to note that applying the kervolution operation is quite similar to preprocessing data. A KNN with a single kervolution operation at the input level, followed by all other kernels being strictly linear (convolution) could be equivalently constructed using preprocessing on the input data. The data would be preprocessed using the kernel function and then feeding that higher dimensional data into a CNN with all the same parameters except with a linear kernel at the input layer with an increased stride and receptive field size. The following equation illustrates this idea, where $x'$ is the transformed input data, $i'$ the new coordinate corresponding to the original receptive field's coordinate, and $w'$ the enlarged filter:
$$\mathbf{kerv}_{i}(x)=\left\langle\varphi\left(x_{(i)}\right),\varphi(w)\right\rangle=\left\langle x'_{(i')},w'\right\rangle$$
The original paper ran an ablation study where one network had a kervolutional layer at the input and then convolution for the second layer containing filters, and the other network vice versa\cite{b1}. The network that had kervolution first performed better than the network that had convolution first. This further supports the similarity between kervolution and preprocessing data. Intuitively: An effective transformation on the data is more beneficial towards the input of the network, so more layers of the network can make use of it.
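This equivalence can be checked numerically for the degree-$2$ polynomial kernel, whose explicit feature map is known in closed form (a toy verification, not from the original paper):

```python
import numpy as np
from itertools import product

def phi_poly2(v, c=1.0):
    """Explicit feature map of the degree-2 polynomial kernel:
    all pairwise products v_i * v_j, then sqrt(2c) * v, then the constant c,
    so that <phi(x), phi(w)> = (x . w + c)**2 exactly."""
    n = len(v)
    quad = np.array([v[i] * v[j] for i, j in product(range(n), repeat=2)])
    return np.concatenate([quad, np.sqrt(2.0 * c) * v, [c]])
```

Since $\left\langle\varphi(x),\varphi(w)\right\rangle=(x^{T}w+c)^{2}$ holds exactly, an input-level kervolution equals a linear convolution acting on the lifted (preprocessed) data.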
\subsection{Kernel Operation Causing Overfitting}
In the original paper it was noted that overfitting was more present when using KNN compared to CNN\cite{b1}. The same results were found in the experimentation for this paper; so much so that the KNN was able to achieve higher mean training accuracy than the CNN, while the CNN achieved higher test accuracy than the KNN. Figures~\ref{fig7} and~\ref{fig8} compare the two models on MNIST data, where each model uses the learning rate that gave the highest mean accuracy during the learning rate search. The original paper claims, based on an analysis of the recorded loss, that the overfitting showed the LeNet-5 KNN had too high a capacity for the MNIST data set. The experimentation done for this paper showed that even more clearly.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{overfitting_mnist_linear.jpg}}
\caption{Training and test accuracy for the normal CNN (linear kernel) on the MNIST data set.}
\label{fig7}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{overfitting_mnist_polynomial_KNN.jpg}}
\caption{Training and test accuracy for the polynomial kernel KNN on the MNIST data set.}
\label{fig8}
\end{figure}
When kernel functions are used in SVM, a similar phenomenon occurs: increased generalization error\cite{b5}. Given enough samples, the generalization error can be minimized. With a larger data set, the overfitting of the KNN training on MNIST could be alleviated, potentially leading to higher test accuracies compared to CNN.
\subsection{Mixture of Kernels}
Across all experiments for each data set ($5$ folds $\times$ $10$ learning rates $= 50$ experiments), the mixture of kernels KNN achieved a higher test accuracy than the polynomial kernel KNN. For Fashion MNIST the polynomial kernel KNN achieved $91.0857\%$ accuracy while the mixture of kernels KNN achieved $91.1643\%$ accuracy. For MNIST the polynomial kernel KNN achieved $99.0643\%$ accuracy while the mixture of kernels KNN achieved $99.1429\%$ accuracy.
Although it is not conclusive that the mixture of kernels method proposed in this paper is strictly better at achieving higher accuracies than the polynomial kernel method, the results seen are encouraging for future experimentation. It is likely that putting the $\lambda$ parameter inside the sigmoid function caused unwanted behaviour. The mixture of kernels KNN showed slightly more variant/unreliable behaviour than the polynomial KNN, such as converging to an accuracy of $10\%$. If the $\lambda$ parameter became close to $0$ or $1$ for a given filter, for example, the famous 'vanishing gradient' problem may become an issue and cause this unwanted behaviour. An alternative approach could allow the $\lambda$ parameter to vary freely and linearly while applying some form of regularization to it. Due to time constraints and computational resources, this experimentation did not take place.
\subsection{Analogous Methods}
In the original paper, the use of the word 'non-linear' was mentioned many times to describe the benefit of KNN over CNN\cite{b1}. The analysis of KNN's effectiveness didn't extend much beyond 'non-linear thus improved model capacity'. It is important to note that non-linearity does not strictly mean better performance. Some analogous examples include:
\subsubsection{L1 vs L2 norm}
When using an L1 norm or an L2 norm for something such as regularizing a network, one is not strictly better than the other in all circumstances. The L2 norm's non-linearity doesn't make it superior.
\subsubsection{Applying an Activation Function to a Regression Output}
In the case of neural networks with a single regression output neuron, it isn't always advantageous to wrap the output neuron in an activation function. Introducing unnecessary non-linearity can be detrimental, giving the network more non-linearity to learn than the task requires.
\subsection{Why Does Kervolution Work?}
Other than providing more model capacity by introducing non-linearity on the filters, there is another intuitive way of understanding why a KNN could be more effective than a normal CNN. Kernel functions used in SVM project data to a higher dimensional space, making it easier to separate with a hyperplane; the hyperplane in the projected space is a non-linear surface in the original space. Applying a kernel function to the receptive field of a CNN might do something similar, by making features more distinct and separable. For example, in the higher dimensional space, features such as noses, eyes and ears for the purpose of face detection may all be easier to distinguish from each other, allowing filters to identify these features more accurately.
\section{Conclusion}
To conclude, this ``mini paper'' showed that KNNs do not strictly perform better than CNNs. More experimentation is needed to clearly justify their effectiveness, most importantly experimenting on larger data sets to alleviate overfitting. It was shown that introducing the kernel on the filters of a CNN is similar both to preprocessing techniques and to using kernel functions in SVM. A new mixture of kernels method was proposed that showed comparable performance to the methods in the original KNN paper, and is potentially a starting point for further advancements.
\section{Acknowledgements}
I'd like to thank Dr. Jochen Lang from the University of Ottawa for his valuable teachings.
\section[Introduction]{Introduction}
\label{sec:intro}
In 1950 van~Roosbroeck \cite{roosbroeck50} established a system of
partial differential equations describing the motion of electrons and
holes in a semiconductor device due to drift and diffusion within a
self-consistent electrical field. In 1964 Gummel \cite{gummel64}
published the first report on the numerical solution of these
drift--diffusion equations for an operating semiconductor device. From
that time on van~Roosbroeck's system has been the backbone of many a
model in semiconductor device simulation. The first papers devoted to
the mathematical analysis of van~Roosbroeck's system appeared in the
early seventies of the previous century \cite{mock72,mock74}; for a
historical synopsis and further references see \cite{gajewski93}. In
1986 Gajewski and Gr{\"o}ger proved the global existence and
uniqueness of weak solutions under realistic physical and geometrical
conditions \cite{gajewski:groeger86}. The key for proving these
results and also for establishing stable numerical solving procedures
is the existence of a Lyapunov function for the van Roosbroeck system.
This solution theory entails restricting conditions on the models for
the recombination of electron--hole pairs, see
\cite[2.2.3]{gajewski93}, \cite[Ch.~5]{gajewski:groeger89},
\cite[Ch.~6]{gajewski:groeger90}, \cite{skrypnik:02}, and
\cite{pub:938}. In this paper we relax the condition on the reaction
terms in the equations considerably, up to the point that some
external control to the generation or annihilation of electrons or
holes can be applied individually. In particular, this aims at
radiative recombination of electron-hole pairs in semiconductor
lasers, and at the generation of electron-hole pairs in optoelectronic
detectors. Notwithstanding this generalization, we continue to use the
name van~Roosbroeck system for the model equations.
Van~Roosbroeck's system consists of current--continuity equations ---
one for electrons, another one for holes --- which are coupled to a
Poisson equation for the electrostatic potential, and comprise
generative terms, first of all recombination of electron--hole pairs.
The current--continuity equations can be viewed as quasi-linear
parabolic equations. However, the natural formulation of balance laws
is in integral form
%
\begin{equation}
\label{eq:balancelaw}
\frac{\partial}{\partial t}
\int_\omega u_k \,\mathrm{d}{x}
=
\int_{\partial\omega}
{\nu}\cdot{j_k} \,\mathrm{d}{\sigma_\omega}
+
\int_{\omega} r_k \,\mathrm{d}{x}
.
\end{equation}
%
Here $u_2$ and $u_1$ are the densities of electrons and holes,
respectively, $j_k$ is the corresponding flux, and $r_k$ is a
reaction term. $\omega$ is any (suitable) sub-domain of the whole
domain under consideration, $\nu$ the outer unit normal to the
boundary $\partial\omega$ of $\omega$ and $\sigma_\omega$ the
arc measure on $\partial \omega$. In the weak formulation of the
balance law the boundary integral of the normal component of the
current is expressed as the volume integral of the divergence of the
corresponding current.
%
Very little is known about the question whether the weak solutions also
satisfy the original balance law equations \eqref{eq:balancelaw}.
Obviously, this depends on the applicability of Gauss' theorem. So,
the problem is about the divergence of the currents in weak solutions
being functions --- not only distributions. In particular, this comes
to bear in the numerical treatment of van Roosbroeck's system. The
choice for space discretization of drift--diffusion equations is the
finite volume method, see \cite{gklnr:detectors}, which rests on
the original balance law formulation \eqref{eq:balancelaw} of the
equations.
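The critical step is Gauss' theorem on each sub-domain $\omega$,
\begin{equation}
\int_{\partial\omega}
{\nu}\cdot{j_k} \,\mathrm{d}{\sigma_\omega}
=
\int_{\omega} \nabla\cdot j_k \,\mathrm{d}{x},
\end{equation}
which is only justified if the divergence $\nabla\cdot j_k$ exists as an integrable function, say in some $L^p$ with $p>1$, and not merely as a distribution.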
In this paper we solve this problem for the spatially two-dimensional
van Roosbroeck system by showing that it admits a classical solution
in a suitably chosen Lebesgue space---at least locally in time. Aiming
at the inclusion of rather general recombination and generation
processes for electron-hole pairs we cannot expect global existence
anymore, and we cannot rely on a Lyapunov function. Instead we apply
local methods for quasi-linear evolution equations. To that end, we
rewrite van Roosbroeck's system as an evolution equation for the
electrochemical potentials of electrons and holes, and apply a
recently obtained result on quasi-linear parabolic equations in
Lebesgue spaces, see \cite{pub:765}. This yields a classical solution
of van Roosbroeck system locally in time with currents the divergence
of which is Lebesgue integrable to some exponent greater than one.
The strong differentiability of the electron and hole density in time
is constitutive for the implicit time discretization scheme which is
accepted custom in engineering and scientific computing, see\ for instance\
\cite{gajewski93}.
Please note that in device simulation one is always confronted with
contacted devices of heterogeneous material composition. That leads to
mixed boundary conditions and jumping material coefficients in the
model equations. Hence, standard theorems on existence, uniqueness
and regularity do not apply.
\section[Van Roosbroeck's system]{Van Roosbroeck's system}
\label{sec:setting}
\subsubsection*{Basic variables}
In the following we investigate van Roosbroeck's model for a
semiconductor device which describes the flow of electrons and holes
in a self-consistent electrical field due to drift and diffusion. The
physical quantities one is interested in are:
\begin{itemize}
\item the densities $u_1$ and $u_2$ of holes and electrons,
\item the densities $j_1$ and $j_2$ of the hole and electron current,
\item the electrostatic potential $\widetilde{\varphi}$ of the self-consistent electrical field, and
\item the electrochemical potentials $\widetilde{\phi}_1$ and $\widetilde{\phi}_2$ of holes and electrons.
\end{itemize}
These unknowns have to satisfy Poisson's equation and the
current--continuity equations for electrons and holes with some side
conditions. The latter are given by the relations between the
potentials and the densities.
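For orientation, the local form of the current--continuity equations consistent with the balance law \eqref{eq:balancelaw} reads
\begin{equation}
\frac{\partial u_k}{\partial t} - \nabla\cdot j_k = r_k,
\qquad k=1,2,
\end{equation}
coupled to a Poisson equation of the type $-\nabla\cdot(\varepsilon\nabla\widetilde{\varphi}) = d + u_1 - u_2$, with $\varepsilon$ the dielectric permittivity and $d$ the doping profile. (Signs and scalings vary in the literature; this schematic form is only meant to orient the reader, and the precise coefficients, state equations and boundary conditions are fixed below.)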
\subsubsection*{Spatial domain}
We study only semiconductor devices which are quasi translationally
invariant in one space direction or angularly symmetric. In that case
van Roosbroeck's system in real space can be reduced to a similar set
of equations in the plane. That means, we regard a cut through the
device perpendicular to the direction of invariance. Let
$\widehat{\Omega}$ be the resulting two-dimensional (bounded)
representative domain. Parts of the device may be insulating, for instance\
formed by an oxide. Then, electrons and holes can move only in a
sub-domain $\Omega$ of $\widehat{\Omega}$. This also covers the case
of charges which are artificially immobilized on a sub-domain
$\widehat{\Omega}\setminus\Omega$.
%
Furthermore, we mark out a part $\widehat{\Neumann}$ of the boundary of
$\widehat{\Omega}$ where the device borders on an insulator. The
remaining part
of the boundary represents (possibly several) contacts of the device.
%
We also mark out a part $\Gamma$ of $\Omega$'s boundary. In the case
of a stand-alone drift--diffusion model of the semiconductor device
$\Gamma$ again represents areas of the device bordering on an
insulator, whereas the remaining part
is the contact area.
\subsubsection*{External control}
In real--world modeling of semiconductor devices van Roosbroeck's
system often serves as a component in a compound model of the device.
Then the superordinated system --- for instance a circuit model ---
may exercise a control on van Roosbroeck's system.
Apart from a superordinated circuit model, compound models comprising in
addition to van Roosbroeck's system equations for the lattice
temperature or the power of lasing modes play an important role in
device simulation, see\ for instance\
\cite{gajewski93,spie-mqw:00,kare:smqw,bhk:02}.
But the concept of external control also comes to bear in segmentation
of the simulation domain, in particular in connection with multiscale
modeling, see\ for instance\ \cite{kare97b,kare97c,kare:current}.
If van Roosbroeck's equations serve as a component of a compound
model, then system parameters, state equations, boundary conditions,
and so on, possibly bear a different physical meaning than in the
stand-alone model.
We make assumptions about an external control from
the initial time $T_0$ up to a time $T_1$.
\subsection{Poisson equation}
The solution of the Poisson equation with mixed boundary conditions,
%
\begin{equation}
\label{eq:poi}
\begin{aligned}[2]
-\nabla\cdot
\left(
\varepsilon \nabla \widetilde{\varphi}
\right)
& = \tilde{d}(t) + u_1 - u_2
&\qquad
&\text{on $\widehat{\Omega}$,}
\\
\widetilde{\varphi}
&= \varphi_{\widehatDirichlet}(t)
&\qquad
&\text{on
\begin{math}
\widehat{\Dirichlet}
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\interior{\partial\widehat{\Omega}\setminus\widehat{\Neumann}}
\end{math},}
\\
{\nu}\cdot{
\left(
\varepsilon \nabla \widetilde{\varphi}
\right)
}
+
\varepsilon_{\widehat{\Neumann}} \widetilde{\varphi}
& =\Rbvarphi(t)
&\qquad
&\text{on ${\widehat{\Neumann}}$,}
\end{aligned}
\end{equation}
gives the electrostatic potential $\widetilde{\varphi}$ on $\widehat{\Omega}$
subject to the electron and hole densities $u_2$ and $u_1$. Strictly
speaking, the densities $u_k$, $k=1,2$, are only defined on $\Omega$,
but we extend them by zero to $\widehat{\Omega}$.
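Although the analysis in this paper is purely functional analytic, the structure of \eqref{eq:poi} can be illustrated by a toy one-dimensional finite-difference discretization with a Dirichlet contact at one end and a Robin (capacitive) condition at the other. The solver name, the grid, and all parameter values in the following sketch are illustrative assumptions, not part of the model data.

```python
import numpy as np

# Toy 1D analogue of the mixed boundary value problem (eq:poi):
#   -(eps * phi')' = f                on (0, 1)
#   phi(0) = phi_D                    (Dirichlet contact)
#   eps*phi'(1) + eps_G*phi(1) = g    (Robin/capacitive condition)
# All parameter values are illustrative placeholders.

def solve_poisson_1d(n, eps, eps_g, f, phi_d, g):
    """Solve the mixed boundary value problem on a uniform grid of n+1 nodes."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    # Dirichlet node at x = 0
    A[0, 0] = 1.0
    b[0] = phi_d
    # interior nodes: -eps*(phi[i-1] - 2*phi[i] + phi[i+1])/h^2 = f
    for i in range(1, n):
        A[i, i - 1] = -eps / h**2
        A[i, i] = 2.0 * eps / h**2
        A[i, i + 1] = -eps / h**2
        b[i] = f
    # Robin node at x = 1 (one-sided difference for the outward flux)
    A[n, n - 1] = -eps / h
    A[n, n] = eps / h + eps_g
    b[n] = g
    return np.linalg.solve(A, b)

# For eps=1, f=2, phi_d=0, eps_g=1, g=1 the exact solution is
# phi(x) = -x^2 + 2x; the one-sided boundary treatment is first order.
x = np.linspace(0.0, 1.0, 201)
phi = solve_poisson_1d(200, 1.0, 1.0, 2.0, 0.0, 1.0)
err = np.max(np.abs(phi - (-x**2 + 2.0 * x)))
print(err)
```

The sketch only mirrors the scalar structure of the equation; the actual problem is two-dimensional, with matrix-valued $\varepsilon$ and merely measurable coefficients, which is exactly why the regularity theory of Proposition~\ref{prop:isomorphy} is needed.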
The parameters in \eqref{eq:poi} have the following meaning:
%
$\varepsilon$ is a bounded, measurable function on $\widehat{\Omega}$
with values in the set of real, symmetric, $2\times2$, positive
definite matrices and corresponds to the spatially varying dielectric
permittivity on the space region occupied by the device. Moreover, we
assume
\begin{equation*}
\norm{\varepsilon(x)}_{\mathcal{B}(\mathbb{R}^2)}
\le
\upp{\varepsilon}
\; \text{and} \;
{(\varepsilon(x)\xi)}\cdot{\xi}
\ge
\low{\varepsilon}
\norm{\xi}_{\mathbb{R}^2}^2
\quad
\text{for almost all $x\in\widehat{\Omega}$
and all $\xi\in\mathbb{R}^2$}
\end{equation*}
with two strictly positive constants $\low{\varepsilon}$ and
$\upp{\varepsilon}$. Furthermore, $\varepsilon_{\widehat{\Neumann}}$ is
a non-negative function on ${\widehat{\Neumann}}$, representing the
capacity of the part of the device surface bordering on an insulator.
We assume that $\widehat{\Dirichlet}$ is not empty or
$\varepsilon_{\widehat{\Neumann}}$ is positive on a subset of
$\widehat{\Neumann}$ with positive arc measure. In other words, the
device has a Dirichlet contact or part of its surface has a positive
capacity.
%
$\varphi_{\widehatDirichlet}(t)$ and $\Rbvarphi(t)$ are the voltages applied at the
contacts of the device, and $\tilde{d}(t)$ represents a charge.
In the case of a stand-alone drift--diffusion model $\varphi_{\widehatDirichlet}$,
$\Rbvarphi$, and $\tilde{d}$ are constant in time, and
$\tilde{d}$ is simply the charge density of dopants in the
semiconductor materials composing the device.
%
In general, $\varphi_{\widehatDirichlet}$, $\Rbvarphi$, and $\tilde{d}$ are functions
defined on the time interval $[T_0,T_1]$ on which a possible
control acts on the device.
\subsection{Current--continuity equations}
The current--continuity equations for holes and electrons ($k=1,2$,
respectively)
%
%
\begin{equation}
\label{eq:cc}
u_k'
- \nabla\cdot j_k
=
r_k(t,\widetilde{\varphi},\widetilde{\phi}_1,\widetilde{\phi}_2)
\qquad \text{on $\Omega$}
\end{equation}
%
characterize the evolution of the electron and hole density under the
action of the currents $j_k$ and the reactions $r_k$ subject to
the mixed boundary conditions
%
\begin{equation}
\label{eq:cc:bc}
\begin{aligned}[2]
\widetilde{\phi}_k(t)
&=\Bvphi{k}(t)
&\qquad
&\text{on $\mathrm{D}\stackrel{\scriptscriptstyle\mathrm{def}}{=}\interior{\partial\Omega\setminus\Gamma}$,}
\\
{\nu}\cdot{j_k}
&= 0
&\qquad
&\text{on $\Gamma$,}
\end{aligned}
\end{equation}
%
%
from the initial conditions
%
%
\begin{equation}
\label{eq:cc:ini}
\widetilde{\phi}_k(T_0)=\iniphi_k.
\end{equation}
Each $r_k$, $k=1,2$, is a reaction term which models the
generation and annihilation of electrons and holes. In particular,
this term covers the recombination of electrons and holes in the
semiconductor device. $r_1$ and $r_2$ can be rather
general functions of the particle and current densities, see\
\secref{sec:reactions}. We require that the set
$\mathrm{D}=\interior{\partial\Omega\setminus\Gamma}$ is not empty.
The boundary values $\Bvphi{1}$, $\Bvphi{2}$ in general depend on
time. Moreover, the reactions $r_k$ may explicitly depend on
time. This dependence on time, again, allows for a control of the
system by some other part of a superordinated compound model.
\subsection{Carrier and current densities}
Van Roosbroeck's system has to be complemented by a prescription
relating the density of electrons and holes as well as the densities
of the electron and hole current to the chemical potentials of these
charge carriers. We assume
%
\begin{equation}
\label{eq:densities}
u_k(t,x)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\rho_k(t,x)
\mathcal{F}_k
\left(
\chi_k(t,x)
\right)\, ,
\quad x \in \Omega
,
\qquad k=1,2,
\end{equation}
%
where $\chi_1$ and $\chi_2$ are the chemical potentials
%
\begin{equation}
\label{eq:chempot}
\chi_k
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\widetilde{\phi}_k
+ (-1)^k \widetilde{\varphi}
+ b_k
, \qquad k=1,2,
\end{equation}
%
and $\widetilde{\phi}_2$, $\widetilde{\phi}_1$ are the electrochemical potentials of
electrons and holes, respectively.
%
$b_k$, $\rho_k$, $k=1,2$, are positive, bounded functions on $\Omega$.
They describe the electronic properties of the materials composing the
device. $b_2$ and $b_1$ are the band edge offsets for electrons and
holes, and $\rho_2$, $\rho_1$ are the corresponding effective band
edge densities of states.
%
If the equations under consideration form part of a compound model for
the semiconductor device, then $b_k$, $\rho_k$, $k=1,2$, may depend on
time. For instance, the $\rho_k$ could be subject to an external
control of the device temperature. Then they depend on time via the
temperature. Mathematically, we assume the following.
%
\begin{assu}
\label{assu:effband}
For every $t\in[T_0,T_1]$ the functions $\rho_k(t)$ are
essentially bounded on $\Omega$ and admit positive lower bounds
which are uniform in $t\in[T_0,T_1]$. The mappings
\begin{equation}
\label{e-rho1}
[T_0,T_1] \ni t \mapsto \rho_k(t) \in L^2(\Omega), \quad k=1,2
\end{equation}
are differentiable on the interval $]T_0,T_1[$ with H\"older
continuous derivatives $\rho_k'$.
\end{assu}
%
The functions $\mathcal{F}_1$ and $\mathcal{F}_2$ represent the
statistical distribution of the holes and electrons on the energy
band. In general, Fermi--Dirac statistics applies, i.e.\
%
\begin{equation}
\label{eq:fermi-integral}
\mathcal{F}_k (s)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\frac{2}{\sqrt{\pi}}
\int_0^\infty
\frac{\sqrt{t}}{1+\exponential{t-s}}
\,\mathrm{d}{t}
,
\qquad
s\in\mathbb{R}
.
\end{equation}
%
However, often Boltzmann statistics
%
\begin{math}
\label{eq:boltzmann}
\mathcal{F}_k (s)
= \exponential{s}
\end{math}
%
is a good approximation.
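The quality of the Boltzmann approximation can be checked numerically: the Fermi--Dirac integral \eqref{eq:fermi-integral} approaches $\exponential{s}$ in the non-degenerate limit $s\ll 0$. The following sketch (function name, truncation point, and grid size are ad-hoc choices) evaluates the integral by the trapezoidal rule on a truncated interval.

```python
import numpy as np

# Numerical sanity check: the Fermi--Dirac integral
#   F(s) = (2/sqrt(pi)) * int_0^inf sqrt(t) / (1 + exp(t - s)) dt
# reduces to Boltzmann statistics F(s) ~ exp(s) for s << 0.

def fermi_dirac_half(s, t_max=80.0, n=200_000):
    """Trapezoidal-rule evaluation of F(s) on the truncated interval [0, t_max]."""
    t = np.linspace(0.0, t_max, n)
    integrand = np.sqrt(t) / (1.0 + np.exp(t - s))
    h = t[1] - t[0]
    return 2.0 / np.sqrt(np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * h)

for s in (-8.0, -5.0):
    ratio = fermi_dirac_half(s) / np.exp(s)
    # the ratio tends to 1 from below as s -> -infinity
    print(f"s = {s:5.1f}:  F(s)/exp(s) = {ratio:.6f}")
```

For $s=-8$ the ratio is already within a fraction of a percent of one, which is why Boltzmann statistics suffices in the non-degenerate regime.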
As for the kinetic relations specifying the current--continuity
equations we assume that the electron and hole current is driven by
the negative gradient of the electrochemical potential of electrons
and holes, respectively. More precisely, the
current densities are given by
%
\begin{equation}
\label{eq:curr-dens}
j_k(t,x)
=
-
\mathcal{G}_k
\left(
\chi_k (t,x)
\right)
\mu_k(x)
\,
\nabla \widetilde{\phi}_k(t,x)\; ,
\quad x \in \Omega
,
\qquad k = 1,2.
\end{equation}
%
The mobilities $\mu_2$ and $\mu_1$ for the electrons and holes,
respectively, are measurable, bounded functions on ${\Omega}$ with
values in the set of real, $2\times2$, positive definite matrices
satisfying for almost all $x\in\Omega$ and all
$\xi\in\mathbb{R}^2$
\begin{equation*}
\norm{\mu_k(x)}_{\mathcal{B}(\mathbb{R}^2)}
\le \upp{\mu}
\quad \text{and} \quad
{(\mu_k(x)\xi)}\cdot{\xi}
\ge
\low{\mu}
\norm{\xi}_{\mathbb{R}^2}^2
,
\qquad
k=1,2,
\end{equation*}
with two strictly positive constants $\low{\mu}$ and $\upp{\mu}$.
The mobilities are defined on the parts of the device where
electrons and holes can move due to drift and diffusion.
%
\begin{rem}
\label{rem:k}
In semiconductor device modeling, usually, the functions
$\mathcal{G}_k$ and $\mathcal{F}_k$ coincide, see\ for instance\
\cite{selberherr84} and the references there. However, a rigorous
formulation as a minimal problem for the free energy reveals that
\begin{math}
\mathcal{G}_k = \mathcal{F}_k^\prime
\end{math}
is appropriate. This topic has been thoroughly investigated for
analogous phase separation problems, see\
\cite{quastel:92,quastel:99,lebowitz:97,lebowitz:98}, see\ also
\cite{skrypnik:02} and \cite{griepentrog04}. In order to cover both
cases we regard independent functions $\mathcal{G}_k$ and
$\mathcal{F}_k$.
\end{rem}
%
\begin{assu}
\label{assu:distri}
Mathematically, we demand that the distribution functions
$\mathcal{F}_k$, $\mathcal{G}_k$, $k=1,2$, are defined on the real
line, take positive values, and are
either exponentials, or twice continuously differentiable and
polynomially bounded. Moreover, $\mathcal{F}_1'$, $\mathcal{F}_2'$
are strictly positive on $\mathbb{R}$. In the sequel we will call
such distribution functions `admissible'. This includes Boltzmann
statistics, as well as Fermi--Dirac statistics (see\
\eqref{eq:fermi-integral}).
\end{assu}
%
Let us comment on the (effective) band edges $b_k$ and the (effective)
densities of states $\rho_k$, see\ \eqref{eq:densities} and
\eqref{eq:chempot}:
Basically the band edge offsets $b_k$ and the effective band edge
densities of states $\rho_k$ are material parameters. In a
heterogeneous semiconductor device they are generically piecewise
constant on the spatial domain $\Omega$.
%
As Assumption~\ref{assu:bands} reveals, we cannot cope with such a
situation as far as the band edges $b_k$ are concerned. However, in
the case of Boltzmann statistics one can rewrite \eqref{eq:densities}
and \eqref{eq:chempot} as
\begin{equation*}
u_k
=
\rho_k
\exponential{b_k}
\exponential{\left(
\widetilde{\phi}_k
+ (-1)^k \widetilde{\varphi}
\right)}
\; \;
\text{on} \; {\Omega}
,
\qquad k=1,2,
\end{equation*}
with modified effective densities of states and
identically vanishing band edge offsets.
%
In the case of Fermi--Dirac statistics this reformulation is not
possible and one has to resort to some approximation of the $b_k$
by functions conforming to Assumption~\ref{assu:bands}.
%
Discontinuities of the band edge offsets so far seem to be an
obstacle in any approach to solutions of van
Roosbroeck's equations, if the statistical distribution function is
not an exponential, see\ for instance\ \cite{pub:938}.
There are compound multiscale models of semiconductor devices such
that the effective band edges and the effective densities of states
result by upscaling from quantum mechanical models for the electronic
structure in heterogeneous semiconductor materials, see\
\cite{spie-mqw:00,bhk:02,pub:1133}. In view of an offline coupling to
electronic structure calculations we allow for an explicit dependence
of $\rho_k$, and $b_k$ on time.
\subsection{Reaction rates}
\label{sec:reactions}
The reaction terms on the right hand side of the current--continuity
equations can be rather general functions of time, of the
electrostatic potential, and of the vector of the electrochemical
potentials. $r_1$ and $r_2$ describe the production
of holes and electrons, respectively --- generation or annihilation,
depending on the sign of the reaction term. Usually van Roosbroeck's
system comprises only recombination of electrons and holes:
\begin{math}
r=r_1=r_2.
\end{math}
We have formulated the equations in a more general way, in
order to include also coupling terms to other equations of a
superordinated compound model. That is why we also allow for an
explicit time dependency of the reaction rates.
Our formulation of the reaction rates, in particular, includes a
variety of models for the recombination and generation of
electron--hole pairs in semiconductors. This covers non-radiative
recombination of electrons and holes such as Shockley--Read--Hall
recombination due to phonon transitions and Auger recombination.
But, radiative recombination (photon
transition), both spontaneous and stimulated, is also included.
Mathematical models for stimulated optical recombination typically
require the solution of additional equations for the optical
field. Thus, the recombination rate may be a non-local operator.
Moreover, by coupling van Roosbroeck's system to the optical field
some additional control of this optical field may also interact with
the internal electronics. For instance, in modeling and simulation of
edge--emitting multiple--quantum--well lasers van Roosbroeck's system
augmented by some Helmholtz equation often serves as a transversal (to
the light beam) model, and a control of the optical field is exercised
by a master equation or some model for the longitudinal (on the axis
of the light beam) behavior of the laser, see\ for instance\
\cite{wuensche:etal93,spie-mqw:00,bhk:02}.
Modeling recombination of electron--hole pairs in semiconductor
material is an art in itself, see\ for instance\ \cite{landsberg91}.
However, for illustration, let us list some common recombination
models, see\ for instance\ \cite{selberherr84,gajewski93} and the references
cited there.
\emph{Shockley--Read--Hall recombination}
(phonon transitions):
%
\begin{equation*}
r_1
=r_2
=r^{\mathrm{SRH}}
=
\frac{u_1 u_2 - n_i^2}{\tau_2(u_1+n_1)+\tau_1(u_2+n_2)},
\end{equation*}
%
where $n_i$ is the intrinsic carrier density, $n_1$, $n_2$ are
reference densities, and $\tau_1$, $\tau_2$ are the lifetimes of
holes and electrons, respectively. $n_i$, $n_1$, $n_2$, and $\tau_1$,
$\tau_2$ are parameters of the semiconductor material; thus, they
depend on the space variable and, ultimately, also on time.
\emph{Auger recombination} (three particle transitions):
%
\begin{equation*}
r_1
=r_2
=r^{\mathrm{Auger}}
=
(u_1 u_2 - n_i^2)
(c_1^{\mathrm{Auger}} u_1 + c_2^{\mathrm{Auger}}u_2),
\end{equation*}
%
where $c_1^{\mathrm{Auger}}$ and $c_2^{\mathrm{Auger}}$ are the Auger
capture coefficients of holes and electrons, respectively, in the
semiconductor material.
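Both rate expressions vanish in thermodynamic equilibrium, where $u_1 u_2 = n_i^2$, and are positive for excess carriers. The following sketch evaluates the two formulas quoted above; the function names and all numerical parameter values are arbitrary placeholders, chosen only to exercise the formulas.

```python
import numpy as np

# Illustrative evaluation of the Shockley--Read--Hall and Auger
# recombination rates quoted in the text. Parameter values are
# placeholders, not material data.

def r_srh(u1, u2, n_i, n1, n2, tau1, tau2):
    """SRH rate: (u1*u2 - n_i^2) / (tau2*(u1 + n1) + tau1*(u2 + n2))."""
    return (u1 * u2 - n_i**2) / (tau2 * (u1 + n1) + tau1 * (u2 + n2))

def r_auger(u1, u2, n_i, c1, c2):
    """Auger rate: (u1*u2 - n_i^2) * (c1*u1 + c2*u2)."""
    return (u1 * u2 - n_i**2) * (c1 * u1 + c2 * u2)

n_i = 1.0e10                          # intrinsic density (placeholder units)
# In thermodynamic equilibrium u1*u2 = n_i^2 and both rates vanish:
u1_eq, u2_eq = 2.0e10, n_i**2 / 2.0e10
print(r_srh(u1_eq, u2_eq, n_i, 1e10, 1e10, 1e-6, 1e-6))    # 0.0
print(r_auger(u1_eq, u2_eq, n_i, 1e-31, 1e-31))            # 0.0
# Excess carriers (u1*u2 > n_i^2) give positive net recombination:
print(r_srh(1e15, 1e15, n_i, 1e10, 1e10, 1e-6, 1e-6) > 0)  # True
```

The common factor $u_1 u_2 - n_i^2$ is what makes the sign of both rates follow the deviation from equilibrium, as described at the beginning of this section.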
\emph{Stimulated optical recombination:}
%
\begin{equation*}
r_1
=r_2
=r^{\mathrm{stim}}
=
\sum_j
f(\sigma_j)
\frac{\abs{\psi_j}^2}{\int\abs{\psi_j}^2},
\end{equation*}
%
where $f$ additionally depends on the vector of the densities, and on
the vector of the electrochemical potentials. $\sigma_j$, $\psi_j$
are the eigenpairs of a scalar Helmholtz--operator:
%
\begin{equation*}
\Delta \psi_j + \epsilon(u_1,u_2) \psi_j = \sigma_j \psi_j
.
\end{equation*}
%
In laser modeling each eigenpair corresponds to an optical (TE) mode of the
laser and $\abs{\psi_j}^2$ is the intensity of the electrical field of
the $\sigma_j$--mode. $\epsilon$ is the dielectric permittivity (for
the optical field); it depends on the density of electrons and holes.
The scalar Helmholtz--equation originates from the Maxwell equations
for the optical field \cite{wuensche91}.
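In one space dimension with constant permittivity the Helmholtz eigenproblem $\psi'' + \epsilon\psi = \sigma\psi$ can be solved by hand ($\sigma_k = \epsilon - (k\pi)^2$ with Dirichlet conditions on $(0,1)$), which gives a cheap check for a discretization. The following sketch uses a central-difference Laplacian; the constant `eps` stands in for the density-dependent permittivity $\epsilon(u_1,u_2)$ and is an arbitrary placeholder.

```python
import numpy as np

# Toy computation of Helmholtz eigenpairs as used in the stimulated
# recombination model: psi'' + eps*psi = sigma*psi on (0,1), Dirichlet
# boundary conditions, central-difference discretization.

n = 400
h = 1.0 / n
eps = 5.0                       # stand-in for eps(u1, u2)
# symmetric tridiagonal matrix for psi'' + eps*psi at interior nodes
main = -2.0 * np.ones(n - 1) / h**2 + eps
off = np.ones(n - 2) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
sigma, psi = np.linalg.eigh(H)  # eigenvalues in ascending order
# Exact eigenvalues for constant eps: sigma_k = eps - (k*pi)^2, k = 1, 2, ...
print(sigma[-1], eps - np.pi**2)   # largest discrete vs. exact
```

The columns of `psi` are the discrete mode profiles; $\abs{\psi_j}^2$, suitably normalized, is the mode intensity entering the stimulated recombination rate above.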
The functional analytic requirements on the reaction terms will be
established in Assumption~\ref{assu:recomb}.
\section[Mathematical prerequisites]{Mathematical prerequisites}
\label{sec:pre}
In this section we introduce some mathematical terminology and make
precise assumptions about the problem.
\subsection{General Assumptions}
\label{sec:general}
For a Banach space $X$ we denote its norm by $\norm{\cdot}_X$ and the
value of a bounded linear functional $\psi^*$ on $X$ in $\psi\in{X}$
by $\dual{\psi^*}{\psi}_{X}$. If $X$ is a Hilbert space, identified
with its dual, then $\dual{\cdot}{\cdot}_{X}$ is the scalar product in
$X$. In the special case where $X$ is the space $\mathbb{R}^2$, the scalar
product of $a,b\in\mathbb{R}^2$ is written as ${a}\cdot{b}$.
Upright $\xoplus{X}$ denotes the direct sum $X{\oplus}X$ of slanted
$X$ with itself.
$\mathcal{B}(X;Y)$ is the space of linear, bounded operators from $X$
into $Y$, where $X$ and $Y$ are Banach spaces. We abbreviate
$\mathcal{B}(X)=\mathcal{B}(X;X)$ and we denote by
$\mathcal{B}_\infty(X)$ the space of linear, compact operators on the
Banach space $X$.
%
The notation $[X,Y]_\theta$ means the complex interpolation space of
$X$ and $Y$ to the index $\theta \in[0,1]$.
%
The (distributional) $\nabla$--calculus applies. If $\psi$ is a
(differentiable) function on an interval taking its values in a Banach
space, then $\psi'$ always indicates its derivative.
\subsection{Spatial Domains}
\label{sec:domain}
Throughout this paper we assume that $\widehat{\Omega}$ as well as
$\Omega$ are bounded Lipschitz domains in $\mathbb{R}^2$, see\
\cite[Ch.~1]{grisvard85}. By $\mathstrut^\uparrow$ we denote the operator
which extends any function defined on $\Omega$ by zero to a function
defined on $\widehat{\Omega}$. Conversely, $\mathstrut_\downarrow$ denotes the
operator which restricts any function defined on $\widehat{\Omega}$ to
$\Omega$. The operators $\mathstrut^\uparrow$ and $\mathstrut_\downarrow$ are adjoint to
each other with respect to the duality induced by the usual scalar
product in spaces of square integrable functions.
With respect to the marked out Neumann boundary parts
$\widehat{\Neumann}\subset\partial\widehat{\Omega}$ and
$\Gamma\subset\partial\Omega$ of the boundary of $\widehat{\Omega}$
and $\Omega$ we assume that each is the union of a finite
set of open arc pieces such that no connected component
of $\partial\widehat{\Omega}\setminus{\widehat{\Neumann}}$ and
$\partial\Omega\setminus\Gamma$ consists only of a single point. We
denote the parts of the boundary where Dirichlet boundary conditions
are imposed by
\begin{math}
\widehat{\Dirichlet}
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\interior{\partial\widehat{\Omega}\setminus\widehat{\Neumann}}
\end{math}
and
\begin{math}
\mathrm{D}\stackrel{\scriptscriptstyle\mathrm{def}}{=}\interior{\partial\Omega\setminus\Gamma}.
\end{math}
\subsection{Function spaces and linear elliptic operators}
\label{sec:spaces}
We exemplarily define spaces of real-valued functions on spatial
domains with respect to the bounded domain $\Omega\subset\mathbb{R}^2$
and its boundary. Spaces of functions on $\widehat{\Omega}$ and parts
of its boundary may be similarly defined and are denoted by hatted
symbols.
If $r\in[1,\infty[$, then $L^r$ is the space of real, Lebesgue
measurable, $r$-integrable functions on $\Omega$ and $L^\infty$ is the
space of real, Lebesgue measurable, essentially bounded functions on
$\Omega$.
$W^{1,r}$ is the usual Sobolev space
$W^{1,r}(\Omega)$,
see\ for instance\ \cite{triebel}.
%
%
$W^{1,r}_{\Gamma}$ is the closure in $W^{1,r}$ of
%
\begin{equation*}
\left\{
\psi|_{\Omega}
\,:\,
\psi \in C^\infty_0(\mathbb{R}^2)
, \;
\supp \psi
\cap
(\partial\Omega \setminus \Gamma)
=\emptyset
\right \}
,
\end{equation*}
i.e.\ $W^{1,r}_{\Gamma}$ consists of all functions from
$W^{1,r}$ with vanishing trace on $\mathrm{D}$.
%
$W^{-1,r}_{\Gamma}$ denotes the dual of
$W^{1,r'}_{\Gamma}$, where $\textfrac{1}{r}+\textfrac{1}{r'}=1$.
\begin{math}
\dual{\cdot}{\cdot}_{W^{1,2}_{\Gamma}}
\end{math}
is the dual pairing between $W^{1,2}_{\Gamma}$ and $W^{-1,2}_{\Gamma}$.
%
Correspondingly, the divergence for a vector of square integrable
functions is defined in the following way: If $j\in\xoplus{L}^2$, then
\begin{math}
\nabla\cdot j \in W^{-1,2}_{\Gamma}
\end{math}
is given by
\begin{equation}
\label{eq:div}
\lrdual{\nabla\cdot j}{\psi}_{W^{1,2}_{\Gamma}}
=
-\int_\Omega
{j}\cdot{\nabla \psi} \,\mathrm{d}{x},
\qquad
\psi \in W^{1,2}_{\Gamma}.
\end{equation}
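Definition \eqref{eq:div} is just integration by parts with the boundary term suppressed on the Dirichlet part (where $\psi$ vanishes) and on the Neumann part (where the normal flux vanishes). A one-dimensional numerical check, with arbitrary smooth choices of $j$ and $\psi$, makes this concrete.

```python
import numpy as np

# 1D illustration of the weak divergence (eq:div): for a test function
# psi vanishing on the Dirichlet part D = {1} and a flux j with j(0) = 0
# on the Neumann part, integration by parts gives
#   int_0^1 j'(x) psi(x) dx = - int_0^1 j(x) psi'(x) dx.
# The particular j and psi below are arbitrary smooth choices.

x = np.linspace(0.0, 1.0, 100_001)
j = np.sin(np.pi * x / 2.0) ** 2            # j(0) = 0
psi = np.cos(np.pi * x / 2.0)               # psi(1) = 0
jp = np.pi / 2.0 * np.sin(np.pi * x)        # j' computed analytically
psip = -np.pi / 2.0 * np.sin(np.pi * x / 2.0)

def trap(f):
    """Trapezoidal rule on the fixed grid x."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * (x[1] - x[0]))

lhs = trap(jp * psi)
rhs = -trap(j * psip)
print(lhs, rhs)   # the two values agree
```

In the weak setting of the paper, of course, $j$ need not possess a classical divergence; \eqref{eq:div} takes the right-hand side as the definition.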
$\sigma$ is the natural arc measure on the boundary of $\Omega$.
We denote by
%
$L^\infty(\partial\Omega)$ and
%
$L^r(\partial\Omega)$,
%
the spaces of $\sigma$-measurable, essentially bounded, and
$r$-integrable, $r\in[1,\infty[$, functions on $\partial\Omega$,
respectively. Moreover,
%
$W^{s,r}(\partial\Omega)$
%
denotes the Sobolev space of fractional order $s\in]0,1]$ and
integrability exponent $r\in[1,\infty[$ on $\partial\Omega$, see\
\cite[Ch.~1]{grisvard85}.
%
Mutatis mutandis for functions on ${\sigma}$-measurable, relatively
open parts of $\partial{\Omega}$.
Let us now define in a strict sense the (linear) Poisson operator and
the elliptic operators governing the current--continuity equations.
%
\begin{defn}
\label{d-opera}
We define the Poisson operator
\begin{math}
-\nabla\cdot\varepsilon\nabla
\from{
\widehat{W}^{1,2}
\to
\widehat{W}_{\widehat{\Neumann}}^{-1,2}
}
\end{math}
by
\begin{equation}
\label{e-poi}
\dual{
-\nabla\cdot\varepsilon\nabla\psi_1}
{\psi_2}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\int_{\widehat{\Omega}}
{\varepsilon \nabla\psi_1}\cdot{\nabla \psi_2}
\,\mathrm{d}{x}
+
\int_{\widehat{\Neumann}}
\varepsilon_{\widehat{\Neumann}} \psi_1 \psi_2
\,\mathrm{d}{\widehat{\sigma}}
,
\end{equation}
for
$\psi_1\in\widehat{W}^{1,2}$ and
$\psi_2\in\widehat{W}_{\widehat{\Neumann}}^{1,2}$.
$\mathcal{P}_0$ denotes
the restriction of $-\nabla\cdot\varepsilon\nabla$ to
$\widehat{W}_{\widehat{\Neumann}}^{1,2}$; we denote the maximal
restriction of $\mathcal{P}_0$ to any range space which continuously
embeds into $\widehat{W}_{\widehat{\Neumann}}^{-1,2}$ by the same
symbol $\mathcal{P}_0$.
\end{defn}
%
\begin{defn}
\label{def:cont}
With respect to a function $\varsigma\in{}L^\infty$
we define the operators
\begin{multline*}
-\nabla\cdot{\varsigma\mu_k}\nabla
\from{W^{1,2} \to W^{-1,2}_{\Gamma}}
,
\quad
k=1,2,
\quad
\text{by}
\\
\dual{-\nabla\cdot \varsigma\mu_k \nabla\psi_1}{\psi_2}_{W^{1,2}_{\Gamma}}
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\int_\Omega
\varsigma\;
{\mu_k\nabla\psi_1}\cdot{\nabla\psi_2}
\,\mathrm{d}{x}
,
\qquad
\psi_1\in W^{1,2},
\;
\psi_2\in W^{1,2}_{\Gamma}
.
\end{multline*}
%
If, in particular, $\varsigma\equiv{1}$, then we simply write
$\fulla{k}$ for $-\nabla\cdot\mu_k\nabla$. Moreover, we denote the
restriction of $\fulla{k}$ to the space $W^{1,2}_{\Gamma}$ by
$a_k$, i.e.\
%
\begin{math}
a_k \from{W^{1,2}_{\Gamma} \to W^{-1,2}_{\Gamma}}.
\end{math}
%
\end{defn}
%
\begin{prop}
\label{prop:isomorphy}
\emph{(see\ \cite{groeger89} and \cite{groeger:rehberg89})} There is
a number $\hat{q}>2$ (depending on $\widehat{\Omega}$, $\varepsilon$
and $\widehat{\Neumann}$) such that for all $q\in[2,\hat{q}]$ the
operator
\begin{math}
\mathcal{P}_0
\from{
\widehat{W}^{1,q}_{\widehat{\Neumann}}
\to
\widehat{W}^{-1,q}_{\widehat{\Neumann}}
}
\end{math}
is a topological isomorphism. Moreover, there is a $\check{q}>2$
(depending on $\Omega$, $\mu_1$, $\mu_2$ and $\Gamma$) such that
for all $q\in[2,\check{q}]$ the operators
\begin{math}
a_k
\from{
W^{1,q}_{\Gamma}
\to
W^{-1,q}_{\Gamma}
}
\end{math}
provide topological isomorphisms, and additionally, generate analytic
semigroups on $W^{-1,q}_{\Gamma}$.
\end{prop}
%
\begin{defn}
\label{def:pq}
From now on we fix a number $q\in]2,\min(4,\hat{q},\check{q})[$ and
define $p\stackrel{\scriptscriptstyle\mathrm{def}}{=}\frac{q}{2}$.
%
With respect to this $p$ we define the operators
%
\begin{gather*}
A_k \from{\psi \mapsto a_k \psi}
,\quad
\psi \in \mathcal{D}_{k}
\stackrel{\scriptscriptstyle\mathrm{def}}{=} \dom(A_k)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left\{
\psi \in W^{1,2}_{\Gamma}
\,:\,
a_k\psi \in L^p
\right\}
,\quad
k=1,2,
\\
\operatorname{A} \from{ \mathcal{D} \to \xoplus{L}^p }
,\quad
\operatorname{A} \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left(
\begin{smallmatrix}
A_1 & 0
\\
0 & A_2
\end{smallmatrix}
\right)
,\quad
\mathcal{D} \stackrel{\scriptscriptstyle\mathrm{def}}{=} \dom(\operatorname{A}) = \mathcal{D}_{1} \oplus \mathcal{D}_{2}
\hookrightarrow
\xoplus{L}^p
.
\end{gather*}
%
\end{defn}
%
%
\begin{rem}
\label{rem:normnull}
If $\psi\in\mathcal{D}_{k}$, $k=1,2$, then
\begin{math}
{\nu}\cdot{(\mu_k \nabla \psi)}
|_{\Gamma}
=
0
\end{math}
in the sense of distributions, see\ for instance\
\cite[Ch.~1.2]{ciarlet79} or
\cite[Ch.1.2]{gajewski:groeger:etal74}.
\end{rem}
%
After having fixed the number $q$ and, correspondingly, the space
$L^p$, we will now formulate our mathematical requirements on the
reaction terms:
%
\begin{assu}
\label{assu:recomb}
The reaction terms $r_k$, $k=1,2$, are mappings
\begin{equation*}
r_k \from{[T_0,T_1]
\times \widehat{W}^{1,q}
\times \xoplus{W}^{1,q}
\to
L^p
}.
\end{equation*}
Moreover, we assume that there is a real number $\eta\in]0,1]$ and
for any bounded subset
$M\subset\widehat{W}^{1,q}\oplus\xoplus{W}^{1,q}$ a constant
$r_M$ such that
%
\begin{multline*}
\lrnorm{
r_k(t,v,\psi)
-
r_k(\check{t},\check{v},\check{\psi})
}_{L^p}
\\
\le
r_M
\left(
\abs{t-\check{t}}^\eta
+
\norm{
v - \check{v}
}_{\widehat{W}^{1,q}}
+
\norm{
\psi - \check{\psi}
}_{\xoplus{W}^{1,q}}
\right)
,
\\
t,\check{t} \in [T_0,T_1]
,
\quad
(v,\psi),\, (\check{v},\check{\psi}) \in M
.
\end{multline*}
\end{assu}
%
\begin{assu}
\label{assu:bands}
The functions
\begin{math}
b_k
\from{
[T_0,T_1]
\to
W^{1,q}
},
\end{math}
$k=1,2$, are H\"older continuous. Moreover, they are H\"older
continuously differentiable when considered as $L^p$ valued.
\end{assu}
\subsection{Representation of Dirichlet boundary values}
\label{sec:diribv}
For setting up the Poisson and current--continuity equations in
appropriate function spaces we must split the solution into parts,
where one part represents the inhomogeneous Dirichlet boundary values
$\varphi_{\widehatDirichlet}$ and $\Bvphi{k}$, $k=1,2$. In this section we
address precisely this representation.
%
We make the following assumptions about the Dirichlet boundary values
of the electrochemical potentials $\phi_k$, $k=1,2$, and for their
initial values, see\ \eqref{eq:cc:bc}, \eqref{eq:cc:ini}.
%
\begin{assu}
\label{assu:cc:bc}
There is a H\"older continuous function
\begin{equation*}
\bvphi{}=(\bvphi{1},\bvphi{2}) \from{ [T_0,T_1] \to \xoplus{W}^{1,q}},
\quad k=1,2,
\end{equation*}
such that for all $t\in[T_0,T_1]$
%
\begin{align}
\label{eq:cc:bv1}
\fulla{k} \bvphi{k}(t)
& = 0
\\
\label{eq:cc:bv2}
\trace
\big( \bvphi{k}(t) \big)
\big|_{\mathrm{D}} &=\Bvphi{k}(t).
\end{align}
Moreover, we assume that each $\bvphi{k}$, $k=1,2$,
--- as a function with values in $L^p$ ---
is differentiable and its derivative is H\"older continuous.
\end{assu}
%
\begin{rem}
\label{rem:curnormal}
It should be noted that \eqref{eq:cc:bv1} and the definition of the
operators $\fulla{k}$ imply $\nu\cdot\mu_k\nabla\bvphi{k}=0$ on
$\Gamma$ in the distributional sense, see\ for instance\
\cite[Ch.~1.2]{ciarlet79} or
\cite[Ch.~II.2]{gajewski:groeger:etal74}. This implies for the
current densities \eqref{eq:curr-dens} that $\nu\cdot{}j_k=0$ on
$\Gamma$ in the distributional sense, provided that
$\chi_k\in{}W^{1,q}$.
\end{rem}
%
We will now give a sufficient condition on $\Bvphi{k}$ for the
existence of a $\bvphi{k}$ with the assumed properties.
%
\begin{lem}
\label{lem:diri}
1.~If
\begin{math}
\psi \in W^{1-\textfrac{1}{q},q}(\mathrm{D}),
\end{math}
then there is a unique function $\Psi\in{W^{1,q}}$ fulfilling
\begin{equation*}
\fulla{k} \Psi
= 0,
\quad\text{and}\quad
\trace
(\Psi)
\big|_{\mathrm{D}}
= \psi.
\end{equation*}
\par
2.~If
\begin{math}
\psi
\from{
[T_0,T_1] \to
W^{1-\textfrac{1}{q},q}(\mathrm{D})
}
\end{math}
is H\"older continuous with index $\eta$, then the function
\begin{math}
\Psi
\from{ [T_0,T_1] \to W^{1,q} }
\end{math}
which is given for each $t\in[T_0,T_1]$ by item~1 is also H\"older
continuous with index $\eta$.
%
Moreover, if $\psi$ --- as a function with values in
$W^{\textfrac{1}{2},2}(\mathrm{D})$ --- is H\"older continuously
differentiable with H\"older index $\eta$, then $\Psi$ is H\"older
continuously differentiable with H\"older index $\eta$.
\end{lem}
%
\begin{proof}
Let
\begin{math}
\operatorname{ex}
\from{
W^{1-\textfrac{1}{q},q}(\mathrm{D})
\to
W^{1-\textfrac{1}{q},q}(\partial\Omega)
}
\end{math}
be a linear and continuous extension operator, and let $\trace^{-1}$
be a linear and continuous right inverse of the trace operator
\begin{math}
\trace
\from{
W^{1,q} (\Omega)
\to
W^{1-\textfrac{1}{q},q}(\partial\Omega)
}.
\end{math}
Such operators exist according to \cite[Thm~1.4.3.1]{grisvard85} and
\cite[Thm~1.5.1.3]{grisvard85}, respectively.
%
Thus,
\begin{math}
\trace^{-1}
\circ
\operatorname{ex}
\psi \in W^{1,q}.
\end{math}
%
Moreover, let $\wildbvpsi$ be the solution of the differential
equation
%
\begin{equation}
\label{eq:deiff}
a_k \wildbvpsi
=
\fulla{k}
\circ
\trace^{-1}
\circ
\operatorname{ex}
\psi
\end{equation}
%
in $W^{1,q}_{\Gamma}$.
%
This solution exists and is unique because the right hand side of
\eqref{eq:deiff} is from $W^{-1,q}_{\Gamma}$ and the operators
$a_k$ are isomorphisms from $W^{1,q}_{\Gamma}$ onto
$W^{-1,q}_{\Gamma}$.
%
We now define
\begin{equation}
\label{eq:DefinitionExtendedDirichletData}
\Psi \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\trace^{-1}
\circ
\operatorname{ex}
\psi
-
\wildbvpsi
.
\end{equation}
The asserted properties of $\Psi$ follow directly from the
construction.
The second assertion is proved by observing that all steps in the
first part of the proof depend linearly on the datum.
\end{proof}
%
\begin{assu}
\label{assu:cc:ini}
We assume that the initial values $\iniphi_k$ belong to $W^{1,q}$,
$k=1,2$. Moreover, there is a
$\theta\in]\textfrac{1}{2}+\textfrac{1}{q},1[$ such that for each of
the initial values $\iniphi_k$ the difference
$\iniphi_k-\bvphi{k}(T_0)$ belongs to the complex interpolation space
$[L^p,\mathcal{D}_{k}]_{\theta}$.
\end{assu}
%
\begin{rem}
\label{rem:initial}
For all $\theta\in]\textfrac{1}{2}+\textfrac{1}{q},1[$ the space
$[L^p,\mathcal{D}_{k}]_{\theta}$ compactly embeds into
$W^{1,q}_{\Gamma}\hookrightarrow{}L^\infty$, see\
\cite[Thm.~5.2]{pub:765}.
\end{rem}
%
With respect to the inhomogeneous terms $\varphi_{\widehatDirichlet}$ and $\Rbvarphi$ in
the boundary conditions of Poisson's equation \eqref{eq:poi} we make
the following assumptions.
%
\begin{assu}
\label{assu:poi-diri-bv}
There is a H\"older continuous function
%
\begin{math}
\dbvarphi\from{
[T_0,T_1]
\to
\widehat{W}^{1,q}
}
\end{math}
%
such that $\dbvarphi$ --- as a function from $[T_0,T_1]$ into
$\widehat{L}^{p}$ --- is H\"older continuously
differentiable. For all $t\in[T_0,T_1]$ the following holds:
\begin{align}
\label{eq:trace1}
- \nabla \cdot \varepsilon \nabla \dbvarphi(t)
& = 0,
\\
\label{eq:trace2}
\trace
\big( \dbvarphi(t) \big)
\big|_{\widehat{\Dirichlet}}
&= \varphi_{\widehatDirichlet}(t).
\end{align}
%
The function
\begin{equation*}
[T_0,T_1] \ni t
\mapsto
\Rbvarphi(t) \in L^\infty(\widehat{\Neumann})
\end{equation*}
is differentiable and possesses a H\"older continuous derivative.
\end{assu}
%
\begin{rem}
\label{r-represent}
Similar to \lemref{lem:diri} it is possible to give a sufficient
condition for the existence of a representing function
$t\mapsto\dbvarphi(t)$ which rests only on the function
$t\mapsto\varphi_{\widehatDirichlet}(t)$. We do not carry this out here.
\end{rem}
%
\begin{rem}
\label{r-extend}
For all $t\in[T_0,T_1]$ we extend $\Rbvarphi(t)$ by zero to a
$\widehat{\sigma}$--measurable, essentially bounded function on
$\partial\widehat{\Omega}$. Due to the continuous embedding
%
\begin{displaymath}
\widehat{W}_{\widehat{\Neumann}}^{1,q'}
\hookrightarrow
\widehat{W}^{1,q'}
\hookrightarrow
W^{1-\textfrac{1}{q'},q'}(\partial\widehat{\Omega})
\hookrightarrow
L^{q'}(\partial\widehat{\Omega})
,
\end{displaymath}
%
see\ \cite[Thm~1.5.1.3]{grisvard85}, there is a continuous embedding
%
\begin{displaymath}
L^{\infty}(\partial\widehat{\Omega})
\hookrightarrow
L^{q}(\partial\widehat{\Omega})
\hookrightarrow
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
.
\end{displaymath}
%
Thus, $\Rbvarphi(t)$, $t\in[T_0,T_1]$, can be regarded as an element
of $\widehat{W}_{\widehat{\Neumann}}^{-1,q}$. We denote $\Rbvarphi$
as a function from $[T_0,T_1]$ into
$\widehat{W}_{\widehat{\Neumann}}^{-1,q}$ by $\rbvarphi$.
The H\"older continuous differentiability of $\Rbvarphi$ entails
the H\"older continuous differentiability of
\begin{math}
\rbvarphi
\from{[T_0,T_1] \to \widehat{W}_{\widehat{\Neumann}}^{-1,q}}
\end{math}
with the same H\"older exponent.
\end{rem}
\subsection{The linear Poisson equation}
Let us assume the following about $\tilde{d}$ --- the doping
profile (or control parameter) on the right hand side of Poisson's
equation \eqref{eq:poi}.
%
\begin{assu}
\label{assu:dop}
The function
\begin{math}
\tilde{d}
\from{[T_0,T_1] \to \widehat{W}_{\widehat{\Neumann}}^{-1,q}}
\end{math}
is continuously differentiable with H\"older continuous derivative.
We define a ``generalized doping''
\begin{equation}
\label{e-dop}
d\from{[T_0,T_1] \to \widehat{W}_{\widehat{\Neumann}}^{-1,q}}
\qquad
\text{by}
\qquad
d(t)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\tilde{d}(t)
+
\rbvarphi(t)
,
\quad
t\in[T_0,T_1]
.
\end{equation}
\end{assu}
%
We now define what a solution of Poisson's equation \eqref{eq:poi} is.
%
\begin{defn}
\label{def:linpoisson}
Let $u_k\in\widehat{W}_{\widehat{\Neumann}}^{-1,q}$, $k=1,2$, be given.
We say that $\widetilde{\varphi}$ is a solution of Poisson's equation
\eqref{eq:poi} at $t\in[T_0,T_1]$, if
%
\begin{equation}
\label{eq:splitoff}
\widetilde{\varphi} = \varphi + \dbvarphi(t),
\end{equation}
%
and $\varphi\in\widehat{W}_{\widehat{\Neumann}}^{1,q}$ is the unique
solution of
%
\begin{equation}
\label{e-linpoi}
\mathcal{P}_0\varphi = d(t) + u_1 - u_2.
\end{equation}
%
$\varphi$ and $\widetilde{\varphi}$ depend parametrically on $t$, $u_1$, and
$u_2$. If convenient, we indicate the dependence on $t$ by writing
$\varphi(t)$ and $\widetilde{\varphi}(t)$, respectively.
\end{defn}
%
\begin{rem}
\label{rem:boundjustify}
With respect to the boundary conditions in \eqref{eq:poi} it should
be noted that \eqref{eq:trace2} and the property
$\varphi\in\widehat{W}_{\widehat{\Neumann}}^{1,q}$ give
\begin{math}
\widetilde{\varphi}|_{\widehat{\Dirichlet}} = \varphi_{\widehatDirichlet}.
\end{math}
Additionally, if $\tilde{d}$, $u_1$, and $u_2$ belong to the space
$\widehat{L}^1$, then \eqref{e-dop}, \eqref{eq:splitoff} and
\eqref{e-linpoi} together with \eqref{eq:trace1} imply
\begin{math}
{\nu}\cdot{
\left(
\varepsilon \nabla \widetilde{\varphi}
\right)
}
+
\varepsilon_{\widehat{\Neumann}} \widetilde{\varphi}
=\Rbvarphi(t)
,
\end{math}
see\ for instance\
\cite[Ch.~1.2]{ciarlet79} or
\cite[Ch.~II.2]{gajewski:groeger:etal74}.
\end{rem}
%
At several points throughout this section we demand H\"older
continuity of functions and/or their derivatives. Clearly, there is a
common H\"older exponent, which we will denote from now on by $\eta$.
\section[Precise Formulation of the Problem]
{Precise Formulation of the Problem}
\label{sec:solution}
%
We now give a precise formulation of the problem outlined in
\secref{sec:setting}.
%
\begin{defn}
\label{def:vanroos}
We say the van Roosbroeck system admits a local in time solution, if
there is a time $T\in]T_0,T_1]$ and
\begin{math}
(\widetilde{\varphi}, \widetilde{\phi} )=(\widetilde{\varphi}, {\widetilde{\phi}}_1,{\widetilde{\phi}}_2)
\end{math}
such that
\begin{equation}
\label{eq:initial-condition}
\widetilde{\phi}(T_0)
= (\widetilde{\phi}_1(T_0),\widetilde{\phi}_2(T_0))
=(\iniphi_1,\iniphi_2)
\in \xoplus{W}^{1,q},
\end{equation}
%
\begin{equation}
\label{eq:sol-varphi}
\varphi \stackrel{\scriptscriptstyle\mathrm{def}}{=} \widetilde{\varphi}-\dbvarphi
\in
C([T_0,T];\widehat{W}^{1,q}_{\widehat{\Neumann}})
\cap
C^1(]T_0,T[;\widehat{W}^{1,q}_{\widehat{\Neumann}})
\end{equation}
%
and
%
\begin{equation}
\label{eq:sol-phi}
\phi \stackrel{\scriptscriptstyle\mathrm{def}}{=} \widetilde{\phi} - \bvphi{}
\in
C^1(]T_0,T[,\xoplus{L}^p)
\cap
C(]T_0,T],\mathcal{D})
\cap
C([T_0,T],[\xoplus{L}^p,\mathcal{D}]_\theta),
\end{equation}
%
fulfill the Poisson equation and the current continuity equations:
\begin{equation}
\label{eq:poisson}
\mathcal{P}_0(\varphi(t))
= d(t) + \mathstrut^\uparrow u_1(t) - \mathstrut^\uparrow u_2(t)
\quad
t\in[T_0,T]
,
\end{equation}
\begin{equation}
\label{eq:continuity}
u_k'(t) - \nabla\cdot j_k(t)
=
r_k(t,\widetilde{\varphi}(t),\widetilde{\phi}(t)),
\quad k=1,2,
\quad t \in ]T_0,T[
.
\end{equation}
%
The carrier densities and the current densities are given by
%
\begin{align}
\label{eq:car-density}
u_k(t)
&\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\rho_k(t) \mathcal{F}_k
\big(
\chi_k(t)
\big),
\\
\label{eq:cur-density}
j_k(t)
&\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\mathcal{G}_k\big(
\chi_k(t)
\big)
\mu_k\nabla {\widetilde{\phi}}_k(t),
\\
\label{eq:chemical-potential}
\chi_k(t)
&\stackrel{\scriptscriptstyle\mathrm{def}}{=}
{\widetilde{\phi}}_k(t) + (-1)^k\mathstrut_\downarrow \widetilde{\varphi}(t) + b_k(t)
\end{align}
and satisfy
\begin{equation}
\label{eq:ukjk-regularity}
u_k \in C([T_0,T],L^\infty) \cap C^1(]T_0,T[,L^p),
\end{equation}
\begin{equation}
\label{eq:jk-regularity}
j_k \in C([T_0,T],{L}^q),
\end{equation}
\begin{equation}
\label{eq:divjk-regularity}
\nabla\cdot j_k \in C(]T_0,T],{L}^p)
\end{equation}
for $k=1,2$.
\end{defn}
\section[Reformulation as a quasi-linear parabolic system]
{Reformulation as a quasi-linear parabolic system}
\label{sec:nl-reform}
In this section we provide the tools to rewrite
the problem from \defnref{def:vanroos}
as a quasi-linear system for the continuity equations. To that end we
eliminate the electrostatic potential from the continuity equations.
Replacing the carrier densities $u_1$ and $u_2$ on the right hand side
of \eqref{eq:poisson} by \eqref{eq:car-density}, making use of
\eqref{eq:chemical-potential} and \eqref{eq:splitoff}, one obtains a
nonlinear Poisson equation for $\varphi$. We solve this equation with
respect to prescribed parameters $b_k$ and ${\widetilde{\phi}}_k$, $k=1,2$,
which we will assume here to be from $L^\infty$. This way to decouple
van Roosbroeck's equations into a nonlinear Poisson equation and a
system of parabolic equations is also one of the fundamental
approaches to the numerical solution of the van Roosbroeck system. It
is due to Gummel \cite{gummel64} and was the first reliable numerical
technique to solve these equations for carriers in an operating
semiconductor device structure.
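For orientation, one step of such a decoupling iteration can be
sketched as follows; the iteration index $(n)$ is our schematic
notation, and we neglect the boundary shift $\dbvarphi$ and the time
dependence for brevity. With the quasi-Fermi potentials
${\widetilde{\phi}}^{(n)}_k$ from the previous iterate frozen, one
first solves the nonlinear Poisson equation
%
\begin{equation*}
\mathcal{P}_0 \varphi^{(n+1)}
- \mathstrut^\uparrow \rho_1
\mathcal{F}_1\big(
{\widetilde{\phi}}^{(n)}_1 + b_1 - \mathstrut_\downarrow \varphi^{(n+1)}
\big)
+ \mathstrut^\uparrow \rho_2
\mathcal{F}_2\big(
{\widetilde{\phi}}^{(n)}_2 + b_2 + \mathstrut_\downarrow \varphi^{(n+1)}
\big)
= d
,
\end{equation*}
%
and then solves the continuity equations \eqref{eq:continuity} with
$\varphi^{(n+1)}$ fixed, obtaining the updated potentials
${\widetilde{\phi}}^{(n+1)}_k$, $k=1,2$.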
\subsection{The nonlinear Poisson equation}
\label{sec:nl-poisson}
We are now going to prove the unique solvability of the nonlinear
Poisson equation and some properties of its solution. First we show
that the assumed admissibility of the carrier distribution functions
$\mathcal{F}_k$ ensures that the relation between a potential and its
corresponding carrier density is monotone and even continuously
differentiable when considered between adequate spaces.
%
\begin{lem}
\label{lem:cardens}
Let $\rho$ and $g$ be from ${L}^\infty$ and
$\mathcal{F}=\mathcal{F}_k$ be an admissible carrier distribution
function, see \assuref{assu:distri}.
\par
1.~The operator
\begin{equation}
\label{eq:cardens}
\widehat{W}_{\widehat{\Neumann}}^{1,2}
\ni
h
\longmapsto
\mathstrut^\uparrow \rho \mathcal{F}(g +\mathstrut_\downarrow h)
\in \widehat{L}^2
\end{equation}
is well defined, continuous and bounded. Its composition with the
embedding
\begin{math}
\widehat{L}^2 \hookrightarrow \widehat{W}_{\widehat{\Neumann}}^{-1,2}
\end{math}
is monotone.
\par
2.~The Nemyckii operator
\begin{equation*}
L^\infty
\ni
h
\longmapsto
\rho {\mathcal{F}}(g + \mathstrut_\downarrow h)
\end{equation*}
induced by the function
\begin{equation*}
\Omega \times \mathbb{R}
\ni
(x,s)
\longmapsto
\rho(x) \mathcal{F}(g(x)+s),
\end{equation*}
maps $L^\infty$ continuously into itself and is even continuously
differentiable. Its Fr{\'e}chet derivative at $h\in{L}^\infty$ is
the multiplication operator given by the essentially bounded
function
\begin{equation}
\label{eq:cardens:deriv}
\Omega \ni x \longmapsto \rho(x) \mathcal{F}^\prime(g(x)+ h(x)).
\end{equation}
\end{lem}
\begin{proof}
Indeed, the assumption that the carrier distribution functions
are admissible ensures that the operator \eqref{eq:cardens} is
well defined, continuous and bounded, see\ \cite{trudinger:67} for
the case of an exponential, and see\ \cite[Chapter~3]{appell90} for
the case of a polynomially bounded function. The asserted
monotonicity follows from the monotonicity of the function
$\mathcal{F}$ and the fact that the duality between
$\widehat{W}_{\widehat{\Neumann}}^{1,2}$ and
$\widehat{W}_{\widehat{\Neumann}}^{-1,2}$ is the extension of the
$\widehat{L}^2$ duality:
\begin{multline*}
\dual{
\mathstrut^\uparrow \rho \mathcal{F} ( g + \mathstrut_\downarrow h_1)
-
\mathstrut^\uparrow \rho \mathcal{F} ( g + \mathstrut_\downarrow h_2 )
}{h_1 - h_2}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
\\
=
\int_{\widehat{\Omega}}
\left(
\mathstrut^\uparrow \rho \mathcal{F} ( g + \mathstrut_\downarrow h_1 )
-
\mathstrut^\uparrow \rho \mathcal{F} ( g + \mathstrut_\downarrow h_2 )
\right)
\left(
h_1 - h_2
\right)
\,\mathrm{d}{x}\\
=
\int_{\Omega}
\left(
\rho \mathcal{F} ( g + \mathstrut_\downarrow h_1 )
-
\rho \mathcal{F} ( g + \mathstrut_\downarrow h_2 )
\right)
\left(
\mathstrut_\downarrow h_1 - \mathstrut_\downarrow h_2
\right)
\,\mathrm{d}{x}
\ge 0
\;\;
\text{for all $h_1$, $h_2 \in \widehat{W}_{\widehat{\Neumann}}^{1,2}$.}
\end{multline*}
\par
The second assertion follows from a result by Gr\"oger and Recke, see\
\cite[Thm~5.1]{recke-groeger-nodea}.
\end{proof}
%
\begin{cor}
\label{cor:implica}
The mapping
\begin{equation*}
\widehat{W}^{1,q}
\ni
h
\longmapsto
\mathstrut^\uparrow \rho {\mathcal{F}}(g + \mathstrut_\downarrow h)
\end{equation*}
takes its values in $\widehat{L}^\infty$ and is also continuously
differentiable. Its derivative at a point $h\in\widehat{W}^{1,q}$
equals the multiplication operator which is induced by the function
$\mathstrut^\uparrow\rho\mathcal{F}'(g+\mathstrut_\downarrow h)$.
\end{cor}
%
\begin{thm}
\label{thm:monotone}
Under \assuref{assu:distri} on the distribution functions
$\mathcal{F}_1$, $\mathcal{F}_2$ and \assuref{assu:effband} the
following statements are true:
\par
1.~For any pair of functions
$z=(z_1,z_2)\in\xoplus{L}^\infty$
the operator
\begin{equation}
\label{eq:monmap}
\varphi
\longmapsto
\mathcal{P}_0 \varphi
- \mathstrut^\uparrow \rho_1 \mathcal{F}_1 (z_1 - \mathstrut_\downarrow \varphi)
+ \mathstrut^\uparrow \rho_2 \mathcal{F}_2 (z_2 + \mathstrut_\downarrow \varphi)
\end{equation}
is strongly monotone and continuous from
$\widehat{W}_{\widehat{\Neumann}}^{1,2}$ to
$\widehat{W}_{\widehat{\Neumann}}^{-1,2}$, where the operator
$\mathcal{P}_0$ is according to \defnref{d-opera}. The monotonicity
constant of \eqref{eq:monmap} is at least that of $\mathcal{P}_0$.
\par
2.~For all
$f\in\widehat{W}_{\widehat{\Neumann}}^{-1,2}$ and
$z=(z_1,z_2)\in\xoplus{L}^\infty$
the nonlinear Poisson equation
\begin{equation}
\label{eq:nlp}
\mathcal{P}_0 \varphi
- \mathstrut^\uparrow \rho_1 \mathcal{F}_1(z_1-\mathstrut_\downarrow \varphi)
+ \mathstrut^\uparrow \rho_2 \mathcal{F}_2(z_2+\mathstrut_\downarrow \varphi)
=f
\end{equation}
admits exactly one solution $\varphi$, which we denote by
$\mathcal{L}(f,z)$. This solution belongs to
$\widehat{W}_{\widehat{\Neumann}}^{1,2}$ and satisfies the estimate
\begin{equation*}
\norm{\varphi}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
\le
\frac{1}{m}
\lrnorm{
\mathstrut^\uparrow \rho_1 \mathcal{F}_1(z_1)
-
\mathstrut^\uparrow \rho_2 \mathcal{F}_2(z_2)
+
f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,2}},
\end{equation*}
where $m$ is the monotonicity constant of $\mathcal{P}_0$.
\par
3.~The maximal restriction of the operator \eqref{eq:monmap} to the
range space $\widehat{W}_{\widehat{\Neumann}}^{-1,q}$ has the domain
$\widehat{W}_{\widehat{\Neumann}}^{1,q}$. Moreover, if
$M$ is a bounded subset of
\begin{math}
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
\oplus \xoplus{L}^\infty,
\end{math}
then the set
\begin{math}
\left \{
\mathcal{L}(f,z)
\,:\, (f, z) \in M
\right \}
\end{math}
is bounded in $\widehat{W}_{\widehat{\Neumann}}^{1,q}$.
\par
4.~The mapping
\begin{math}
\mathcal{L}
\from{
\widehat{W}_{\widehat{\Neumann}}^{-1,q} \oplus \xoplus{L}^\infty
\to
\widehat{W}_{\widehat{\Neumann}}^{1,q}
}
\end{math}
is continuously differentiable. Let $(F,Z)=(F,Z_1,Z_2)$ be from
\begin{math}
\widehat{W}_{\widehat{\Neumann}}^{-1,q} \oplus \xoplus{L}^\infty
;
\end{math}
we define the function
\begin{equation}
\label{eq:pk}
\mathcal{N}_k
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\mathstrut^\uparrow \rho_k
\mathcal{F}'_k (Z_k + (-1)^k\mathstrut_\downarrow \mathcal{L}(F,Z))
,
\end{equation}
and we also denote the corresponding multiplication operator on
$\widehat{\Omega}$ by $\mathcal{N}_k$.
Then the Fr{\'e}chet derivative
\begin{math}
\partial \mathcal{L}
\end{math}
at a point
\begin{math}
(F,Z) = (F,Z_1,Z_2)
\end{math}
is the bounded linear mapping given by
\begin{equation}
\label{eq:nlp-fderiv}
\left[
\partial \mathcal{L}(F,Z)
\right]
(f,z)
=
\left(
\mathcal{P}_0 + \mathcal{N}_1 + \mathcal{N}_2
\right)^{-1}
\left(
f
+ \mathcal{N}_1 \mathstrut^\uparrow z_1
- \mathcal{N}_2 \mathstrut^\uparrow z_2
\right)
\end{equation}
for all
\begin{math}
(f,z) = (f,(z_1,z_2))
\in
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
\oplus
\xoplus{L}^\infty
.
\end{math}
\par
5.~The norm of
\begin{math}
\partial \mathcal{L}(F,Z)
\in
\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q} \oplus \xoplus{L}^\infty
;
\widehat{W}_{\widehat{\Neumann}}^{1,q}
)
\end{math}
can be estimated as follows:
\begin{multline*}
\norm{\partial \mathcal{L}(F,Z)}_{
\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q} \oplus \xoplus{L}^\infty
;
\widehat{W}_{\widehat{\Neumann}}^{1,q}
)}
\\
\le
2
\norm{\mathcal{P}_0^{-1}}_{
\mathcal{B}(L^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\sqrt{
\norm{\mathcal{N}_1 + \mathcal{N}_2}_{L^\infty}
\norm{\mathcal{N}_1 + \mathcal{N}_2}_{L^1}
}
+
\norm{\mathcal{P}_0^{-1}}_{\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q};
\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\\
+
\norm{\mathcal{P}_0^{-1}}_{\mathcal{B}(\widehat{L}^2;
\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\sqrt{
\norm{\mathcal{N}_1 + \mathcal{N}_2}_{L^\infty}
}
\norm{\mathcal{P}_0^{-1/2}}_{\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\end{multline*}
\end{thm}
%
\begin{proof}
1.~The assumption that $\widehat{\Dirichlet}$ is not empty or
$\varepsilon_{\widehat{\Neumann}}$ is positive on a set of positive arc
measure ensures that the operator $\mathcal{P}_0$ is strongly
monotone. Thus, taking into account \lemref{lem:cardens}, the
mapping \eqref{eq:monmap} is strongly monotone and continuous from
$\widehat{W}_{\widehat{\Neumann}}^{1,2}$ to
$\widehat{W}_{\widehat{\Neumann}}^{-1,2}$.
\par
2.~The second assertion follows from the first one by standard
results on monotone operators, see\ for instance\
\cite{gajewski:groeger:etal74}.
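One way to see the origin of the estimate: subtracting the value of
the operator \eqref{eq:monmap} at $\varphi=0$ from \eqref{eq:nlp} and
testing with the solution $\varphi$, the strong monotonicity (with
constant $m$, inherited from $\mathcal{P}_0$ by the first assertion)
yields
%
\begin{equation*}
m \norm{\varphi}^2_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
\le
\dual{
\mathstrut^\uparrow \rho_1 \mathcal{F}_1(z_1)
-
\mathstrut^\uparrow \rho_2 \mathcal{F}_2(z_2)
+ f
}{\varphi}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
\le
\lrnorm{
\mathstrut^\uparrow \rho_1 \mathcal{F}_1(z_1)
-
\mathstrut^\uparrow \rho_2 \mathcal{F}_2(z_2)
+ f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,2}}
\norm{\varphi}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
,
\end{equation*}
%
from which the asserted bound follows after division by
$m\norm{\varphi}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}$.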
\par
3.~For $f\in\widehat{W}^{-1,2}_{\widehat{\Neumann}}$ the solution
$\mathcal{L}(f,z)$ is from $\widehat{W}^{1,2}_{\widehat{\Neumann}}$
and hence,
\begin{equation*}
- \mathstrut^\uparrow \rho_1
\mathcal{F}_1\big(z_1- \mathstrut_\downarrow \mathcal{L}(f,z)\big)
+ \mathstrut^\uparrow \rho_2
\mathcal{F}_2\big(z_2+ \mathstrut_\downarrow \mathcal{L}(f,z)\big)
\in
\widehat{L}^2
\hookrightarrow
\widehat{W}^{-1,q}_{\widehat{\Neumann}},
\end{equation*}
see\ \lemref{lem:cardens}. By the second assertion of the theorem,
the set
\begin{equation*}
\left\{\mathcal{L}(f,z){\,:\,}(f,z){\in}M\right\}
\quad
\text{is bounded in $\widehat{W}^{1,2}_{\widehat{\Neumann}}$.}
\end{equation*}
From this we conclude again by \lemref{lem:cardens} that the set
\begin{equation*}
\left\{
\mathstrut^\uparrow \rho_1
\mathcal{F}_1\big(z_1-\mathstrut_\downarrow \mathcal{L}(f,z)\big)
-
\mathstrut^\uparrow \rho_2
\mathcal{F}_2\big(z_2+\mathstrut_\downarrow \mathcal{L}(f,z)\big)
\,:\,
(f,z) \in M
\right\}
\end{equation*}
is bounded in $\widehat{L}^2$, and hence, is bounded in
$\widehat{W}^{-1,q}_{\widehat{\Neumann}}$. Thus, the set
\begin{equation*}
\left\{
\mathstrut^\uparrow \rho_1
\mathcal{F}_1\big(z_1-\mathstrut_\downarrow \mathcal{L}(f,z)\big)
-
\mathstrut^\uparrow \rho_2
\mathcal{F}_2\big(z_2+\mathstrut_\downarrow \mathcal{L}(f,z)\big)
+
f
\,:\,
(f,z) \in M
\right\}
\end{equation*}
is also bounded in $\widehat{W}^{-1,q}_{\widehat{\Neumann}}$.
Consequently, the image of this set under $\mathcal{P}_0^{-1}$ is
bounded in $\widehat{W}^{1,q}_{\widehat{\Neumann}}$.
\par
4.~We define an auxiliary mapping
\begin{math}
\mathcal{K}\from{
\widehat{W}_{\widehat{\Neumann}}^{1,q}
\oplus
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
\oplus
\xoplus{L}^\infty
\to
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
}
\end{math}
by
\begin{equation*}
\mathcal{K}(\varphi,f,z)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\mathcal{P}_0 \varphi
- \mathstrut^\uparrow \rho_1
\mathcal{F}_1(z_1 - \mathstrut_\downarrow \varphi)
+ \mathstrut^\uparrow \rho_2
\mathcal{F}_2(z_2 + \mathstrut_\downarrow \varphi)
- f
\end{equation*}
such that
\begin{math}
\mathcal{K}\big(\mathcal{L}(f,z),f,z\big)
=
0
\end{math}
for all $f\in\widehat{W}_{\widehat{\Neumann}}^{-1,q}$ and all
$z\in\xoplus{L}^\infty$. The assertion follows from the
\thmtitle{Implicit Function Theorem} if we can prove that
$\mathcal{K}$ is continuously differentiable and the partial
derivative with respect to $\varphi$ is a topological isomorphism
between $\widehat{W}_{\widehat{\Neumann}}^{1,q}$ and
$\widehat{W}_{\widehat{\Neumann}}^{-1,q}$.
%
For any
\begin{math}
\varphi \in \widehat{W}_{\widehat{\Neumann}}^{1,q},
\end{math}
\begin{math}
f \in \widehat{W}_{\widehat{\Neumann}}^{-1,q},
\end{math}
and
\begin{math}
z \in \xoplus{L}^\infty
\end{math}
the partial derivatives of $\mathcal{K}$ are given by
\begin{eqnarray}
\label{eq:grad-K-varphi}
\partial_\varphi
\mathcal{K}(\varphi,f,z)
& = &
\mathcal{P}_0
+
\sum_{k=1}^2
\mathstrut^\uparrow \rho_k \mathcal{F}'_k
(z_k
+ (-1)^k
\mathstrut_\downarrow \varphi)
\in
\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{1,q};
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
),
\\
\label{eq:grad-K-nlpright}
\partial_f
\mathcal{K}(\varphi,f,z)
& = &
-\mathbb{I}
\in
\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q};
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
),
\\
\label{eq:grad-K-nlpfixed}
\partial_{z_k}
\mathcal{K}(\varphi,f,z)
& = &
(-1)^k
\mathstrut^\uparrow \rho_k \mathcal{F}'_k
(z_k
+ (-1)^k
\mathstrut_\downarrow \varphi)
\in
\widehat{L}^\infty
\hookrightarrow
\mathcal{B}(
\xoplus{L}^\infty;
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
)
\end{eqnarray}
and they are continuous, see\ \lemref{lem:cardens} and
\cite[\S5]{recke-groeger-nodea}.
Now we consider the equation
\begin{equation}
\label{eq:laxmil}
\mathcal{P}_0 \psi
+
\sum_{k=1}^2
\mathstrut^\uparrow \rho_k
\mathcal{F}'_k
(z_k + (-1)^k \mathstrut_\downarrow \varphi)
\psi
=
f \in \widehat{W}_{\widehat{\Neumann}}^{-1,q}
.
\end{equation}
%
Because
\begin{math}
\sum_{k=1}^2
\mathstrut^\uparrow \rho_k
\mathcal{F}'_k
(z_k + (-1)^k \mathstrut_\downarrow \varphi)
\end{math}
is a positive function from $\widehat{L}^\infty$, \eqref{eq:laxmil}
has exactly one solution
$\psi\in\widehat{W}_{\widehat{\Neumann}}^{1,2}$ by the
\thmtitle{Lax-Milgram-Lemma}. Moreover,
\begin{equation*}
\sum_{k=1}^2
\mathstrut^\uparrow \rho_k
\mathcal{F}'_k
(z_k + (-1)^k \mathstrut_\downarrow \varphi)
\psi
\in
\widehat{L}^2
\hookrightarrow
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
,
\end{equation*}
and
\begin{math}
\mathcal{P}_0\from{
\widehat{W}_{\widehat{\Neumann}}^{1,q}
\to
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
}
\end{math}
is a topological isomorphism. Thus, a rearrangement of terms in
\eqref{eq:laxmil} gives
$\psi\in\widehat{W}_{\widehat{\Neumann}}^{1,q}$.
\par
5.~We now estimate the Fr{\'e}chet derivative \eqref{eq:nlp-fderiv}:
\begin{multline}
\label{eq:impl02}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
(f
+\mathcal{N}_1 \mathstrut^\uparrow z_1
-\mathcal{N}_2 \mathstrut^\uparrow z_2
)
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\\
\le
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
f
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\\
+
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
(\mathcal{N}_1 \mathstrut^\uparrow z_1
-\mathcal{N}_2 \mathstrut^\uparrow z_2
)
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
.
\end{multline}
We treat the right hand side terms separately;
for the second addend one obtains
\begin{multline}
\label{eq:impl02aa}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
(\mathcal{N}_1 \mathstrut^\uparrow z_1
-\mathcal{N}_2 \mathstrut^\uparrow z_2
)
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\\
\le
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\lrnorm{g}_{L^2}
,
\end{multline}
where the function $g\in{L^2}$ is defined by
\begin{equation}
\label{eq:g}
g(x) \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\frac{\mathcal{N}_1(x)z_1(x)
-\mathcal{N}_2(x)z_2(x)}
{\sqrt{\mathcal{N}_1(x)+\mathcal{N}_2(x)}}
\quad\text{for $x\in\Omega$.}
\end{equation}
Please note that the functions $\mathcal{N}_k$ are strictly positive
almost everywhere in $\Omega$ due to the positivity of the
distribution functions and \assuref{assu:effband}. For the function
$g$ in \eqref{eq:g} one has the following bound:
\begin{equation*}
\norm{g}_{L^2}
\le
\sqrt{\norm{\mathcal{N}_1+\mathcal{N}_2}_{\widehat{L}^1}}
\left(
\norm{z_1}_{L^\infty}
+
\norm{z_2}_{L^\infty}
\right)
.
\end{equation*}
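This bound follows from the pointwise estimate
%
\begin{equation*}
\lvert g(x) \rvert
\le
\frac{\mathcal{N}_1(x)+\mathcal{N}_2(x)}
{\sqrt{\mathcal{N}_1(x)+\mathcal{N}_2(x)}}
\left(
\norm{z_1}_{L^\infty}
+
\norm{z_2}_{L^\infty}
\right)
=
\sqrt{\mathcal{N}_1(x)+\mathcal{N}_2(x)}
\left(
\norm{z_1}_{L^\infty}
+
\norm{z_2}_{L^\infty}
\right)
,
\end{equation*}
%
which uses $0\le\mathcal{N}_k\le\mathcal{N}_1+\mathcal{N}_2$; squaring
and integrating over $\Omega$ gives the claim.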
Making use of the operator identity
\begin{equation}
\label{eq:opid}
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2)^{-1}
=
\mathcal{P}_0^{-1}
-
\mathcal{P}_0^{-1}
(\mathcal{N}_1+\mathcal{N}_2)
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2)^{-1}
\end{equation}
one obtains
\begin{multline*}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\le
\lrnorm{
\mathcal{P}_0^{-1}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\\
+
\lrnorm{
\mathcal{P}_0^{-1}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\\
\le
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\sqrt{\lrnorm{\mathcal{N}_1 + \mathcal{N}_2}_{\widehat{L}^\infty}}
\;\times
\\
\times
\left(
1
+
\lrnorm{
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1/2}
}^2_{\mathcal{B}(\widehat{L}^2)}
\right)
\end{multline*}
We note that
\begin{equation}
\label{eq:subord}
\lrnorm{
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1/2}
}_{\mathcal{B}(\widehat{L}^2)}
\le 1
\end{equation}
because the bounded multiplication operator $\mathcal{N}_1+\mathcal{N}_2$
is form subordinated to
\begin{math}
\mathcal{P}_0
+
\mathcal{N}_1 + \mathcal{N}_2,
\end{math}
see\ for instance\ \cite[VI.2.6]{kato}.
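Indeed, setting
$v \stackrel{\scriptscriptstyle\mathrm{def}}{=}
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2)^{-1/2}u$
for $u\in\widehat{L}^2$, the nonnegativity of the form induced by
$\mathcal{P}_0$ gives
%
\begin{equation*}
\lrnorm{
\sqrt{\mathcal{N}_1+\mathcal{N}_2}\, v
}^2_{\widehat{L}^2}
=
\int_{\widehat{\Omega}}
(\mathcal{N}_1+\mathcal{N}_2) \lvert v \rvert^2
\,\mathrm{d}{x}
\le
\dual{
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2) v
}{v}_{\widehat{W}_{\widehat{\Neumann}}^{1,2}}
=
\norm{u}^2_{\widehat{L}^2}
.
\end{equation*}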
Thus, we get for the second addend of \eqref{eq:impl02}:
\begin{multline}
\label{eq:impl02a}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
(\mathcal{N}_1\mathstrut^\uparrow z_1
-\mathcal{N}_2\mathstrut^\uparrow z_2
)
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\\
\le
2\,
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\sqrt{\lrnorm{\mathcal{N}_1 + \mathcal{N}_2}_{\widehat{L}^\infty}}
\sqrt{\lrnorm{\mathcal{N}_1 + \mathcal{N}_2}_{\widehat{L}^1}}
\left(
\norm{z_1}_{L^\infty}
+
\norm{z_2}_{L^\infty}
\right)
\end{multline}
Applying \eqref{eq:opid} to the first term on the right hand side of
\eqref{eq:impl02} we find
\begin{multline}
\label{eq:impl02bb}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
f
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\le
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
;
\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\lrnorm{
f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,q}}
\\
+
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\lrnorm{
(\mathcal{N}_1+\mathcal{N}_2)
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2)^{-1}
}_{\mathcal{B}(\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\lrnorm{
f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,q}}.
\end{multline}
The terms
\begin{math}
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
;
\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\end{math}
and
\begin{math}
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\end{math}
are finite.
As for the remaining term
\begin{multline*}
\lrnorm{
(\mathcal{N}_1+\mathcal{N}_2)
(\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2)^{-1}
}_{\mathcal{B}(\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\\
\le
\sqrt{\lrnorm{\mathcal{N}_1 + \mathcal{N}_2}_{\widehat{L}^\infty}}
\lrnorm{
\sqrt{\mathcal{N}_1 + \mathcal{N}_2}
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1/2}
}_{\mathcal{B}(\widehat{L}^2)}
\\
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1/2}
\mathcal{P}_0^{1/2}
}_{\mathcal{B}(\widehat{L}^2)}
\lrnorm{
\mathcal{P}_0^{-1/2}
}_{\mathcal{B}(\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\end{multline*}
we note that
\begin{math}
\lrnorm{
\mathcal{P}_0^{-1/2}
}_{\mathcal{B}(\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\end{math}
is finite,
since
\begin{math}
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
\end{math}
embeds continuously into
\begin{math}
\widehat{W}_{\widehat{\Neumann}}^{-1,2}
\end{math}
and
\begin{math}
\mathcal{P}_0^{1/2}
\from{\widehat{L}^2 \to \widehat{W}_{\widehat{\Neumann}}^{-1,2}}
\end{math}
is a topological isomorphism.
Again, $\mathcal{P}_0$ is form subordinated to
$\mathcal{P}_0+\mathcal{N}_1+\mathcal{N}_2$. Hence, besides
\eqref{eq:subord} one has
\begin{equation*}
\norm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1/2}
\mathcal{P}_0^{1/2}
}_{\mathcal{B}(\widehat{L}^2)}
\le 1
.
\end{equation*}
Thus, we get from \eqref{eq:impl02bb}:
\begin{multline}
\label{eq:impl02b}
\lrnorm{
(\mathcal{P}_0
+\mathcal{N}_1
+\mathcal{N}_2)^{-1}
f
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\le
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
;
\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\lrnorm{
f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,q}}
\\
+
\lrnorm{
\mathcal{P}_0^{-1}
}_{\mathcal{B}(\widehat{L}^2;\widehat{W}_{\widehat{\Neumann}}^{1,q})}
\sqrt{\lrnorm{\mathcal{N}_1 + \mathcal{N}_2}_{\widehat{L}^\infty}}
\lrnorm{
\mathcal{P}_0^{-1/2}
}_{\mathcal{B}(\widehat{W}_{\widehat{\Neumann}}^{-1,q};\widehat{L}^2)}
\lrnorm{
f
}_{\widehat{W}_{\widehat{\Neumann}}^{-1,q}}.
\end{multline}
Inserting \eqref{eq:impl02a} and \eqref{eq:impl02b} into
\eqref{eq:impl02} finishes the proof.
\end{proof}
%
\begin{cor}
\label{cor:boundedlip}
Let the assumptions of \thmref{thm:monotone} be satisfied. Then the
following holds:
\par
1.~The mapping
\begin{math}
\mathcal{L}
\from{
\widehat{W}_{\widehat{\Neumann}}^{-1,q} \oplus \xoplus{L}^\infty
\to
\widehat{W}_{\widehat{\Neumann}}^{1,q}
}
\end{math}
is boundedly Lipschitzian, i.e.\ for any bounded subset
\begin{math}
M \subset
\widehat{W}_{\widehat{\Neumann}}^{-1,q}
\oplus
\xoplus{L}^\infty
\end{math}
there is a constant $\mathcal{L}_M$ such that
\begin{equation*}
\lrnorm{
\mathcal{L}(f,z)
-
\mathcal{L}(\check{f},\check{z})
}_{\widehat{W}_{\widehat{\Neumann}}^{1,q}}
\le
\mathcal{L}_M
\left(
\lrnorm{f-\check{f} }_{\widehat{W}_{\widehat{\Neumann}}^{-1,q}}
+
\lrnorm{z - \check{z} }_{\xoplus{L}^\infty}
\right)
\end{equation*}
for all $(f,z)$, $(\check{f},\check{z})\in{M}$.
\par
2.~Let additionally \assuref{assu:dop} be satisfied. If
\begin{equation*}
z=(z_1,z_2)
\in
C([T_0,T],\xoplus{L}^\infty) \cap C^1(]T_0,T[,\xoplus{L}^p)
,
\end{equation*}
then the function
\begin{math}
[T_0,T] \ni t
\mapsto
\varphi(t) \in \widehat{W}_{\widehat{\Neumann}}^{1,q}
\end{math}
given by
\begin{math}
\varphi(t)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\mathcal{L}(d(t),z(t))
\in
\widehat{W}_{\widehat{\Neumann}}^{1,q}
\end{math}
is continuous, and continuously differentiable on $]T_0,T[$.
Its derivative is
\begin{multline*}
\varphi'(t)
=
\left[
\partial \mathcal{L}\big(d(t),z(t)\big)
\right]
\big(d'(t),z'(t)\big)
\\
=\left(
\mathcal{P}_0 + \mathcal{N}_1 + \mathcal{N}_2
\right)^{-1}
\left(
d'(t)
+ \mathcal{N}_1 \mathstrut^\uparrow z_1 '
- \mathcal{N}_2 \mathstrut^\uparrow z_2 '
\right),
\end{multline*}
where $\mathcal{N}_k$ is again defined by \eqref{eq:pk}, with $(F,Z)$
there specified as $\big(d(t),z(t)\big)$.
\end{cor}
\subsection[Derivation of the quasi-linear system]
{Derivation of the quasi-linear system}
\label{sec:der}
We start now with the reformulation of the van Roosbroeck system as
defined in \defnref{def:vanroos} as a quasi-linear parabolic system for
the continuity equations. With the aim of eliminating the
electrostatic potential in mind, we first look for a substitute for
its time derivative. To achieve this, we formally differentiate
Poisson's equation \eqref{eq:poisson} with respect to time. This gives
%
\begin{equation}
\label{eq:poidiff}
\mathcal{P}_0 \varphi'
=
d'
+
\mathstrut^\uparrow (u_1' - u_2')
.
\end{equation}
%
From \eqref{eq:continuity} one obtains
%
\begin{equation}
\label{eq:conteq}
u_1' - u_2'
=
\nabla\cdot j_1 - \nabla\cdot j_2 +
r_1(t,\widetilde{\varphi},\widetilde{\phi}) -
r_2(t,\widetilde{\varphi},\widetilde{\phi})
.
\end{equation}
%
Inserting \eqref{eq:conteq} into \eqref{eq:poidiff}, one gets
%
\begin{equation}
\label{eq:currcon}
\mathcal{P}_0 \varphi'
= d'
+ \mathstrut^\uparrow
\big(
\nabla\cdot j_1
- \nabla\cdot j_2
+ r_1(t,\widetilde{\varphi},\widetilde{\phi})
- r_2(t,\widetilde{\varphi},\widetilde{\phi})
\big)
.
\end{equation}
%
In the particular case that $r=r_1=r_2$ is pure
recombination, this is precisely the well known conservation law for
the total current, see\ \cite{gajewski93}.
%
Clearly, \eqref{eq:currcon} leads to
%
\begin{equation}
\label{eq:currcon1}
\mathstrut_\downarrow \varphi'
=
\mathstrut_\downarrow
\mathcal{P}_0^{-1}
\left(
d'
+ \mathstrut^\uparrow \big(
\nabla\cdot j_1
- \nabla\cdot j_2
+ r_1(t,\widetilde{\varphi},\widetilde{\phi})
- r_2(t,\widetilde{\varphi},\widetilde{\phi})
\big)
\right)
.
\end{equation}
%
Now we differentiate \eqref{eq:car-density} (with
\eqref{eq:chemical-potential}) with respect to time and obtain
%
\begin{multline}
\label{eq:diffdicht}
u'_k
=
\rho_k
\mathcal{F}_k'
(\widetilde{\phi}_k + (-1)^k\mathstrut_\downarrow \widetilde{\varphi} + b_k)
\big[
\widetilde{\phi}_k' + (-1)^k\mathstrut_\downarrow \widetilde{\varphi}' + b_k'
\big]
\\
+\rho_k'
\mathcal{F}_k
(\widetilde{\phi}_k + (-1)^k\mathstrut_\downarrow \widetilde{\varphi} + b_k)
,
\quad k =1,2.
\end{multline}
%
Until further notice we do not write out the argument
\begin{math}
\widetilde{\phi}_k + (-1)^k\mathstrut_\downarrow \widetilde{\varphi} + b_k
\end{math}
of the distribution function $\mathcal{F}_k$ and its derivative.
%
Likewise, we suppress the argument of the reaction terms
$r_k$.
%
According to \eqref{eq:splitoff} we split
$\widetilde{\varphi}'=\varphi'+\dbvarphi'$ and insert \eqref{eq:diffdicht} into
the current continuity equation \eqref{eq:continuity}. Thus, we find
%
\begin{equation*}
\big[
\widetilde{\phi}_k' + (-1)^k\mathstrut_\downarrow \varphi'
\big]
\rho_k
\mathcal{F}_k'
- \nabla\cdot j_k
=
r_k
-
\big[
(-1)^k \mathstrut_\downarrow \dbvarphi'+b_k'
\big]
\rho_k
\mathcal{F}_k'
-
\rho_k'
\mathcal{F}_k
,
\quad k=1,2
.
\end{equation*}
%
Using \eqref{eq:currcon1} we get further
%
\begin{multline*}
\rho_k
\mathcal{F}_k'
\widetilde{\phi}_k'
- \nabla\cdot j_k
+ (-1)^k
\rho_k
\mathcal{F}_k'
\mathstrut_\downarrow
\mathcal{P}_0^{-1}
\big(
d'
+
\mathstrut^\uparrow \big(
\nabla\cdot j_1 - \nabla\cdot j_2
+ r_1
- r_2
\big)
\big)
\\
=
r_k
-
\big[
(-1)^k\mathstrut_\downarrow \dbvarphi'+ b_k'
\big]
\rho_k
\mathcal{F}_k'
-
\rho_k'
\mathcal{F}_k
,
\quad k=1,2
.
\end{multline*}
%
Dividing this by $\rho_k\mathcal{F}_k'$ we obtain
%
\begin{multline*}
\left(
\begin{array}{c}
\widetilde{\phi}_1'\\
\widetilde{\phi}_2'
\end{array}
\right)
-
\left(
\begin{array}{cc}
1 + \mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow \mathcal{F}_1' \rho_1
&
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow \mathcal{F}_2' \rho_2
\\
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow \mathcal{F}_1' \rho_1
&
1 + \mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow \mathcal{F}_2' \rho_2
\end{array}
\right)
\left(
\begin{array}{cc}
\frac{1}{\rho_1 \mathcal{F}_1'}
&
0
\\
0
&
\frac{1}{\rho_2 \mathcal{F}_2'}
\end{array}
\right)
\left(
\begin{array}{c}
\nabla\cdot j_1\\
\nabla\cdot j_2
\end{array}
\right)
\\[1ex]
=
\left(
\begin{array}{c}
\frac{r_1}{\rho_1 \mathcal{F}_1'}
+
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow r_1
-
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow r_2
\\
-
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow r_1
+
\frac{r_2}{\rho_2 \mathcal{F}_2'}
+
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow r_2
\end{array}
\right)
+
\left(
\begin{array}{c}
\mathstrut_\downarrow \mathcal{P}_0^{-1} d'
+ \mathstrut_\downarrow \dbvarphi'
- b_1'
- \frac{\rho_1'}{\rho_1} \frac{\mathcal{F}_1}{\mathcal{F}'_1}
\\
-
\mathstrut_\downarrow \mathcal{P}_0^{-1} d'
- \mathstrut_\downarrow \dbvarphi'
- b_2'
- \frac{\rho_2'}{\rho_2} \frac{\mathcal{F}_2}{\mathcal{F}'_2}
\end{array}
\right)
.
\end{multline*}
%
This evolution equation can be written in the condensed form
%
\begin{equation}
\label{eq:evol2}
\widetilde{\phi}' - [I + Z(t,\widetilde{\phi})]E(t,\widetilde{\phi})\,\nabla\cdot j = Y(t,\widetilde{\phi})
\end{equation}
where $\widetilde{\phi}=(\widetilde{\phi}_1,\widetilde{\phi}_2)$ and
\begin{math}
\nabla\cdot j \stackrel{\scriptscriptstyle\mathrm{def}}{=} ( \nabla\cdot j_1 , \nabla\cdot j_2 ).
\end{math}
Moreover, $I$ denotes the identity. The coefficients $Z$, $E$, and
$Y$ are given in the following way:
%
First we split off the Dirichlet inhomogeneities of $\widetilde{\varphi}$ in the
sense of \secref{sec:diribv} and we replace $\varphi$ by the solution
of the nonlinear Poisson equation, see\ \thmref{thm:monotone}. With
respect to an arbitrary $\psi=(\psi_1,\psi_2)\in\xoplus{W}^{1,q}$ we
set
%
\begin{equation}
\label{eq:Q}
Q_k(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\psi_k
+ (-1)^k\mathstrut_\downarrow\mathcal{L}\big(d(t),z(t)\big)
+ (-1)^k\mathstrut_\downarrow\dbvarphi(t) +b_k(t)
,
\quad k = 1,2,
\end{equation}
where $z\stackrel{\scriptscriptstyle\mathrm{def}}{=}(z_1,z_2)$ with
\begin{equation}
\label{eq:zrhs}
z_k (t)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\psi_k
+ (-1)^k\mathstrut_\downarrow\dbvarphi(t)
+ b_k(t)
,
\quad
k=1,2.
\end{equation}
%
Now we define
%
\begin{align}
Z(t,\psi)
& \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left(
\begin{array}{cc}
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\mathcal{F}_1'(Q_1(t,\psi))\rho_1(t)
&
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\mathcal{F}_2'(Q_2(t,\psi))\rho_2(t)
\\
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\mathcal{F}_1'(Q_1(t,\psi))\rho_1(t)
&
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\mathcal{F}_2'(Q_2(t,\psi))\rho_2(t)
\end{array}
\right)
\label{eq:Z}
\\[1ex]
E(t,\psi)
& \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left(
\begin{smallmatrix}
E_1(t,\psi)
&
0
\\
0
&
E_2(t,\psi)
\end{smallmatrix}
\right)
,\quad
E_k(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\frac{1}{\rho_k(t) \mathcal{F}_k'(Q_k(t,\psi))}
\label{eq:E}
\\[1ex]
R(t,\psi)
& \stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left(
\begin{array}{c}
r_1(
t,
\mathcal{L}(d(t),z(t)) + \dbvarphi(t),
\psi)
\\
r_2(
t,
\mathcal{L}(d(t),z(t)) + \dbvarphi(t),
\psi)
\end{array}
\right)
,
\label{eq:R}
\end{align}
%
and finally
%
\begin{equation}
\label{eq:Y}
Y(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\big[
I + Z(t,\psi)
\big]
E(t,\psi)R(t,\psi)
- X(t,\psi)
,
\end{equation}
where $X(t,\psi)=\big(X_1(t,\psi),X_2(t,\psi)\big)$ with
\begin{equation}
\label{eq:X}
X_k(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
(-1)^k \mathstrut_\downarrow
\big(
\mathcal{P}_0^{-1} d'(t) + \dbvarphi'(t)
\big)
+ b_k'(t)
+
\frac{\rho_k'(t)}{\rho_k(t)}
\frac{\mathcal{F}_k(Q_k(t,\psi))}{\mathcal{F}'_k(Q_k(t,\psi))}
,\quad
\end{equation}
$k=1,2$.
Please note
\begin{equation}
\label{eq:ZE}
Z(t,\psi)E(t,\psi)
=
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\\
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\end{smallmatrix}
\right)
.
\end{equation}
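The identity \eqref{eq:ZE} reflects the fact that the density factors
$\mathcal{F}_k'(Q_k)\rho_k$ appearing in $Z$ cancel exactly against the
entries of the diagonal operator $E$. This cancellation can be observed in a
finite-dimensional surrogate: replacing
$\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow$ by an arbitrary
matrix and the multiplication operators $\mathcal{F}_k'(Q_k)\rho_k$ by
arbitrary positive diagonal matrices, the product $ZE$ collapses to the
constant block matrix on the right of \eqref{eq:ZE}. A minimal numerical
sketch (all matrices below are invented stand-ins, not part of the
analytical setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.standard_normal((n, n))          # stand-in for  _| P_0^{-1} |^
A1 = np.diag(rng.uniform(0.5, 2.0, n))   # stand-in for  F_1'(Q_1) rho_1
A2 = np.diag(rng.uniform(0.5, 2.0, n))   # stand-in for  F_2'(Q_2) rho_2

# block operator Z as in (eq:Z) and diagonal E as in (eq:E)
Z = np.block([[P @ A1, -P @ A2],
              [-P @ A1, P @ A2]])
E = np.block([[np.linalg.inv(A1), np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(A2)]])

ZE = Z @ E
expected = np.block([[P, -P], [-P, P]])   # right-hand side of (eq:ZE)
print(np.allclose(ZE, expected))          # the density factors cancel
```

Whatever the (positive, invertible) diagonal factors are, the product no
longer depends on them; only the fixed operator blocks survive.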
Next we apply the definition \eqref{eq:curr-dens} of the currents
$j_k$ and get
%
\begin{equation*}
\nabla\cdot j_k
=
\nabla\cdot
\big(
\mathcal{G}_k(
\widetilde{\phi}_k
+ (-1)^k \mathstrut_\downarrow \varphi
+ (-1)^k \mathstrut_\downarrow \dbvarphi
+b_k)
\mu_k
\nabla \widetilde{\phi}_k
\big)
,
\quad k = 1,2
,
\end{equation*}
%
or in shorter notation
%
\begin{equation}
\label{eq:curmoda}
\nabla\cdot j
=
\nabla\cdot
G(t,\widetilde{\phi})
\mu
\nabla
\widetilde{\phi}
,
\end{equation}
%
where --- see\ also \eqref{eq:Q} and \eqref{eq:curr-dens} ---
%
\begin{equation}
\label{eq:G}
G(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\left(
\begin{smallmatrix}
G_1(t,\psi)
&
0
\\
0
&
G_2(t,\psi)
\end{smallmatrix}
\right)
,\quad
G_k(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\mathcal{G}_k \big( Q_k(t,\psi) \big)
.
\end{equation}
Now, putting together \eqref{eq:curmoda} and \eqref{eq:evol2} we
obtain in conclusion the evolution equation
%
\begin{equation}
\label{eq:evol3}
\widetilde{\phi}'
-
\big[ I + Z(t,\widetilde{\phi}) \big]
E(t,\widetilde{\phi})
\nabla\cdot
G(t,\widetilde{\phi}) \mu \nabla \widetilde{\phi}
= Y(t,\widetilde{\phi})
\end{equation}
which has to be complemented by the boundary conditions
\eqref{eq:cc:bc} and the initial condition \eqref{eq:cc:ini},
see\ also \remref{rem:curnormal}.
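The system \eqref{eq:evol3} is quasi-linear: once the coefficients $Z$, $E$,
$G$ are frozen at a given state, the spatial operator is linear in
$\widetilde{\phi}$. Purely as an illustration of this structure --- outside
the scope of the existence theory developed below --- a semi-implicit time
step for a scalar one-dimensional model problem
$\phi' - E(\phi)\,\partial_x\big(G(\phi)\mu\,\partial_x\phi\big) = Y(\phi)$
with zero-flux boundary conditions might look as follows (grid, data, and
coefficient functions are invented stand-ins, chosen only to be positive as
in \eqref{E2} and \eqref{G2}):

```python
import numpy as np

def semi_implicit_step(phi, dt, dx, mu, E, G, Y):
    """One step for  phi' - E(phi) d/dx( G(phi) mu d/dx phi ) = Y(phi),
    with coefficients frozen at the old iterate and zero-flux boundaries."""
    n = phi.size
    g = G(phi) * mu                      # frozen diffusion coefficient
    gh = 0.5 * (g[:-1] + g[1:])          # values at cell interfaces
    A = np.zeros((n, n))                 # discrete  -d/dx( g d/dx . )
    for i in range(n):
        if i > 0:
            A[i, i - 1] -= gh[i - 1]
            A[i, i] += gh[i - 1]
        if i < n - 1:
            A[i, i + 1] -= gh[i]
            A[i, i] += gh[i]
    M = np.eye(n) + (dt / dx**2) * E(phi)[:, None] * A
    return np.linalg.solve(M, phi + dt * Y(phi))

x = np.linspace(0.0, 1.0, 51)
phi = np.exp(-40.0 * (x - 0.5) ** 2)     # initial profile
for _ in range(100):
    phi = semi_implicit_step(
        phi, dt=1e-3, dx=x[1] - x[0], mu=1.0,
        E=lambda p: 1.0 / (1.0 + p),     # positive, cf. (E2)
        G=lambda p: np.exp(p),           # positive, cf. (G2)
        Y=lambda p: np.zeros_like(p))
```

The frozen-coefficient matrix has zero row sums and nonpositive
off-diagonal entries, so the resulting linear system satisfies a discrete
maximum principle: the iterates stay within the range of the initial data.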
\section
[The quasi-linear parabolic equation]
{The quasi-linear parabolic equation}
\label{sec:evol}
Evolution equations of the type \eqref{eq:evol3} were investigated in
\cite{pub:765}: \eqref{eq:evol3} has a unique, local-in-time solution
if the functions $E$, $G$, $Z$ and $Y$ defined by \eqref{eq:E},
\eqref{eq:G}, \eqref{eq:Z} and \eqref{eq:Y}, respectively, satisfy the
following conditions.
%
\begin{assu}
\label{assu:evol}
With respect to $q\in]2,\infty[$ and $p=q/2$, as specified in
\defnref{def:pq}, there is an $\eta\in]0,1]$ and, for any
bounded set $M\subset\xoplus{W}^{1,q}$, there exist positive constants
$E_M$, $G_M$, $Y_M$, and $Z_M$ such that the
mappings
%
\begin{eqnarray}
E
& : &
[T_0,T_1] \times \xoplus{W}^{1,q}
\longrightarrow
\xoplus{L}^\infty,
\label{E1}\\
G
& : &
[T_0,T_1] \times \xoplus{W}^{1,q}
\longrightarrow
\xoplus{W}^{1,q},
\label{G1}\\
Z
& : &
[T_0,T_1] \times \xoplus{W}^{1,q}
\longrightarrow
\mathcal{B}_\infty(\xoplus{L}^p)
\label{Z1},\\
Y
& : & [T_0,T_1] \times \xoplus{W}^{1,q}
\longrightarrow
\xoplus{L}^p\label{Y1}
\end{eqnarray}
satisfy the conditions
%
\begin{eqnarray}
\min_{k = 1,2}
\inf_{
\begin{smallmatrix}
t\in[T_0,T_1]\\
\psi\in M
\end{smallmatrix}
}
\essinf_{x\in\Omega}E_k(t,\psi)(x) & > & 0
\label{E2}\\
\min_{k = 1,2}
\inf_{
\begin{smallmatrix}
t\in[T_0,T_1]\\
\psi\in M
\end{smallmatrix}
}
\essinf_{x\in\Omega}G_k(t,\psi)(x) & > & 0
\label{G2}
\end{eqnarray}
%
and for all $t$, $\check{t}\in[T_0,T_1]$ and all $\psi,
\check{\psi} \in M$:
%
\begin{eqnarray}
\|E(t,\psi) - E(\check{t},\check{\psi})\|_{\xoplus{L}^\infty}
& \le &
E_M
\left(
|t-\check{t}|^\eta
+ \|\psi- \check{\psi}\|_{\xoplus{W}^{1,q}}
\right),
\label{E3}\\
\|G(t,\psi) - G(\check{t},\check{\psi})\|_{\xoplus{W}^{1,q}}
& \le &
G_M
\left(
|t-\check{t}|^\eta
+ \|\psi-\check{\psi}\|_{\xoplus{W}^{1,q}}
\right),
\label{G3}\\
\|Z(t,\psi) - Z(\check{t}, \check{\psi})\|_{\mathcal{B}(\xoplus{L}^p)}
& \le &
Z_M
\left(
|t-\check{t}|^\eta
+ \|\psi-\check{\psi}\|_{\xoplus{W}^{1,q}}
\right),
\label{Z3}\\
\|Y(t,\psi) - Y(\check{t}, \check{\psi})\|_{\xoplus{L}^p}
& \le &
Y_M
\left(|t-\check{t}|^\eta +
\|\psi- \check{\psi}\|_{\xoplus{W}^{1,q}}
\right)
\label{Y3}
.
\end{eqnarray}
\end{assu}
%
\begin{defn}
\label{def:solution}
%
Let the Assumptions~\ref{assu:cc:bc} and \ref{assu:evol}
be satisfied. Further, let
\begin{math}
\operatorname{A}
\from{
\mathcal{D} \to \xoplus{L}^p
}
\end{math}
be the operator from \defnref{def:pq} and let $V$ be a Banach space
such that
$\mathcal{D}\hookrightarrow{V}\hookrightarrow\xoplus{W}^{1,q}$.
We say the evolution equation \eqref{eq:evol3} with initial
condition $\widetilde{\phi}(T_0)=\iniphi\in\xoplus{W}^{1,q}$ has a unique
local solution $\widetilde{\phi}=\phi+\bvphi{}$ with respect to $V$ if
$\iniphi-\bvphi{}(T_0)\in{V}$ implies the existence of a number
$T\in]T_0,T_1]$ such that the initial value problem
%
\begin{multline}
\label{eq:exactsys}
\phi'(t)
+
\big[
I + Z\big(t,\phi(t) + \bvphi{}(t)\big)
\big]
E\big(t,\phi(t) + \bvphi{}(t)\big)
G\big(t,\phi(t) + \bvphi{}(t)\big)
\operatorname{A}\phi(t)
\\
=
Y\big(t,\phi(t) + \bvphi{}(t)\big)
-
\bvphi{}'(t)
+
J\big(t,\phi(t)\big),
\quad
\phi(T_0) = \iniphi - \bvphi{}(T_0)
\end{multline}
%
admits a unique solution
%
\begin{equation}
\label{eq:solution}
\phi
\in
C^1(]T_0,T[,\xoplus{L}^p)
\cap
C(]T_0,T],\mathcal{D})
\cap
C([T_0,T],V)
.
\end{equation}
%
For
\begin{math}
(t,\psi) \in [T_0,T_1] \times \xoplus{W}^{1,q}_{\Gamma}
\end{math}
the term $J$ in \eqref{eq:exactsys} is given by
%
\begin{equation*}
J(t,\psi)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\big[
I + Z\big(t,\psi + \bvphi{}(t)\big)
\big]
E\big(t,\psi + \bvphi{}(t)\big)
\nabla G\big(t,\psi + \bvphi{}(t)\big)
\cdot
\mu \nabla\big(\psi + \bvphi{}(t)\big)
.
\end{equation*}
\end{defn}
\begin{rem}
\label{rem:equiv}
We have to clarify the relation between \eqref{eq:evol3} and
\eqref{eq:exactsys}.
If
$\widetilde{\phi}=\phi+\bvphi{}$ is a solution in the sense of
\defnref{def:solution}, then
\begin{equation}
\label{eq:distrib}
\nabla\cdot
G(t,\widetilde{\phi})\mu\nabla\widetilde{\phi}
=
G(t,\widetilde{\phi})\operatorname{A}\phi
+
\nabla G(t,\widetilde{\phi})
\cdot
\mu \nabla\widetilde{\phi}
\end{equation}
is satisfied, which allows us to rewrite \eqref{eq:exactsys} in the form
\eqref{eq:evol3}.
\end{rem}
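The splitting \eqref{eq:distrib} is nothing but the product rule
$\nabla\cdot(G\mu\nabla\widetilde{\phi})
= G\,\nabla\cdot(\mu\nabla\widetilde{\phi})
+ \nabla G\cdot\mu\nabla\widetilde{\phi}$:
it moves the leading-order part into the fixed operator $\operatorname{A}$
and the lower-order remainder into $J$. A symbolic one-dimensional check of
this identity (the constant mobility and the smooth samples for $G$ and
$\phi$ are arbitrary choices made for illustration):

```python
import sympy as sp

x = sp.symbols('x')
mu = sp.Rational(3, 2)               # a constant mobility
G = sp.exp(sp.sin(x))                # arbitrary smooth coefficient
phi = sp.cos(2 * x)                  # arbitrary smooth "potential"

lhs = sp.diff(G * mu * sp.diff(phi, x), x)            # div( G mu grad phi )
rhs = G * sp.diff(mu * sp.diff(phi, x), x) \
      + sp.diff(G, x) * mu * sp.diff(phi, x)          # G*(A phi) + grad G . mu grad phi
print(sp.simplify(lhs - rhs))        # 0
```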
%
\begin{rem}
\label{rem:boundary}
%
If $\widetilde{\phi}=(\widetilde{\phi}_1,\widetilde{\phi}_2)$ is a solution of
\eqref{eq:evol3} in the sense of \defnref{def:solution}, then
%
\begin{equation*}
\trace
\big(
\widetilde{\phi}_k(t)
\big)
\big|_{\mathrm{D}}
=
\trace
\big(
\bvphi{k}(t)
\big)
\big|_{\mathrm{D}}
=
\Bvphi{k}(t)
,
\quad k = 1,2,
\quad t \in [T_0,T].
\end{equation*}
%
The Neumann boundary condition
%
\begin{equation*}
0
=
{\nu}\cdot{\mu_k\nabla \widetilde{\phi}_k(t)}
\big|_{\Gamma}
=
{\nu}\cdot{\mu_k\nabla\bvphi{k}(t)}
\big|_{\Gamma}
,
\quad k = 1,2,
\quad t \in [T_0,T],
\end{equation*}
%
holds in the distributional sense, see\ \remref{rem:curnormal}.
%
\end{rem}
%
\begin{prop}
\label{prop:oint} \emph{(See \cite{pub:765}.)}
%
Let the Assumptions~\ref{assu:cc:bc} and \ref{assu:evol} be
satisfied. For each
\begin{math}
\gamma
\in
\big]
\frac{1}{2}+\frac{1}{q},1
\big[
\end{math}
the initial value problem \eqref{eq:evol3} with initial value
$\iniphi\in\xoplus{W}^{1,q}$ has a unique local
solution $\phi$ with respect to the complex interpolation spaces
$V\stackrel{\scriptscriptstyle\mathrm{def}}{=}\big[\xoplus{L}^p,\mathcal{D}\big]_\gamma$.
%
\end{prop}
%
We are now going to show that the mappings $E$, $G$, $Y$ and $Z$ satisfy
\assuref{assu:evol}. To that end we need the following preparatory lemma.
%
\begin{lem}
\label{lem:nemyckii}
If $\xi\from{\mathbb{R}\to\mathbb{R}}$ is continuously
differentiable, then $\xi$ induces a Nemyckii operator from
$L^\infty$ into itself which is boundedly Lipschitzian.
%
If $\xi\from{\mathbb{R}\to\mathbb{R}}$ is twice continuously
differentiable, then it induces a Nemyckii operator from $W^{1,q}$
into itself which is boundedly Lipschitzian.
\end{lem}
%
The proof is straightforward. Recall that, according to
\defnref{def:pq}, $q$ is fixed and larger than two.
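The first assertion of \lemref{lem:nemyckii} rests on the mean value
theorem: on a ball $\{\|u\|_{L^\infty}\le R\}$ one may take the Lipschitz
constant $L_R=\sup_{|s|\le R}|\xi'(s)|$. A quick numerical spot check of
this bound (the choice $\xi=\tanh$ and the discretized $L^\infty$ norm are
illustrative only):

```python
import numpy as np

xi = np.tanh
dxi = lambda s: 1.0 / np.cosh(s) ** 2   # xi' for xi = tanh
R = 2.0
L_R = dxi(0.0)           # sup of |xi'| on [-R, R]; attained at 0 for tanh
rng = np.random.default_rng(1)
for _ in range(1000):
    u = rng.uniform(-R, R, 200)   # samples from the ball ||u||_inf <= R
    v = rng.uniform(-R, R, 200)
    lhs = np.max(np.abs(xi(u) - xi(v)))   # ||xi(u) - xi(v)||_inf (discretized)
    rhs = L_R * np.max(np.abs(u - v))     # L_R ||u - v||_inf
    assert lhs <= rhs + 1e-12             # boundedly Lipschitz estimate
```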
%
\begin{lem}
\label{lem:contQ}
Let the Assumptions~\ref{assu:bands}, \ref{assu:poi-diri-bv} and
\ref{assu:dop} be satisfied. Then the equation \eqref{eq:Q} defines
mappings
\begin{math}
Q_k \from{ [T_0,T_1] \times \xoplus{L}^\infty
\to
L^\infty
},
\end{math}
$k =1,2$, and the restriction of each $Q_k$ to
$[T_0,T_1]\times\xoplus{W}^{1,q}$ takes its values in
$W^{1,q}$. Moreover, there is a number $\eta\in]0,1]$ such that for
any bounded subset $M\subset\xoplus{L}^\infty$ there exists a positive
number $Q_M$ such that
%
for all $t,\,\check{t}\in[T_0,T_1]$ and all
$\psi,\,\check{\psi}\in M$:
\begin{equation*}
\|
Q_k(t,\psi) - Q_k(\check{t},\check{\psi})
\|_{L^\infty}
\le
Q_M
\big(
|t - \check{t}|^\eta
+\|\psi - \check{\psi}\|_{\xoplus{L}^\infty}
\big),
\quad k = 1,2.
\end{equation*}
Analogously, for each bounded subset $M\subset\xoplus{W}^{1,q}$
there is a positive number $Q_M$ such that for all
$t,\,\check{t}\in[T_0,T_1]$ and all $\psi,\,\check{\psi}\in M$:
%
\begin{equation*}
\|
Q_k(t,\psi) - Q_k(\check{t},\check{\psi})
\|_{W^{1,q}}
\le
Q_M
\big(
|t-\check{t}|^\eta
+ \|\psi - \check{\psi}\|_{\xoplus{W}^{1,q}}
\big),
\quad k = 1,2.
\end{equation*}
\end{lem}
%
The proof is obtained from \corref{cor:boundedlip}.
\begin{lem}
\label{lem:comp}
Let the Assumptions~\ref{assu:bands}, \ref{assu:poi-diri-bv} and
\ref{assu:dop} be satisfied.
%
If $\xi\from{\mathbb{R}\to\mathbb{R}}$ is continuously
differentiable, then $\xi$ induces operators
\begin{equation*}
[T_0,T_1] \times \xoplus{L}^\infty
\ni
(t,\psi)
\longmapsto
\xi(Q_k(t,\psi))
\in
L^\infty,
\quad
k = 1,2.
\end{equation*}
Moreover, there is a constant $\eta\in]0,1]$ and for any bounded set
$M\subset\xoplus{L}^\infty$ a constant $\xi_M$ such that for all
$t,\,\check{t}\in[T_0,T_1]$ and all $\psi,\,\check{\psi}\in M$: %
\begin{equation*}
\|
\xi\big(Q_k(t,\psi)\big)
-
\xi\big(Q_k(\check{t},\check{\psi})
\big)
\|_{L^\infty}
\le
\xi_M
\big(
|t - \check{t}|^\eta
+
\|\psi - \check{\psi}\|_{\xoplus{L}^\infty}
\big),
\quad
k=1,2.
\end{equation*}
If $\xi$ is twice continuously differentiable, then the restriction
of $\xi{\circ}Q_k$ to $[T_0,T_1]\times\xoplus{W}^{1,q}$ maps
into $W^{1,q}$, $k =1,2$. Moreover, there is a number $\eta\in]0,1]$
and for any bounded subset $M\subset\xoplus{W}^{1,q}$ a constant
$\xi_M$ such that for all $t,\,\check{t}\in[T_0,T_1]$ and all
$\psi,\,\check{\psi}\in M$:
%
\begin{equation*}
\|
\xi\big(Q_k(t,\psi)\big)
-
\xi\big(Q_k(\check{t},\check{\psi})\big)
\|_{W^{1,q}}
\le
\xi_M
\big(
|t - \check{t}|^\eta
+
\|\psi - \check{\psi}\|_{\xoplus{W}^{1,q}}
\big),
\quad
k=1,2.
\end{equation*}
\end{lem}
The proof follows from \lemref{lem:nemyckii} and \lemref{lem:contQ}.
%
\begin{lem}
\label{lem:EG}
Let the Assumptions~\ref{assu:bands}, \ref{assu:poi-diri-bv} and
\ref{assu:dop} be satisfied. Then there is a number $\eta\in]0,1]$
such that the mappings $E$ and $G$ defined by \eqref{eq:E} and
\eqref{eq:G} satisfy the conditions \eqref{E1}, \eqref{E2},
\eqref{E3}, and \eqref{G1}, \eqref{G2}, \eqref{G3}, respectively.
\end{lem}
%
\begin{proof}
The functions $\frac{1}{\mathcal{F}'_k}$ are continuously
differentiable by \assuref{assu:distri}. Consequently, by
\lemref{lem:comp} the mappings $\widetilde{E}_k$, given by
\begin{equation*}
[T_0,T_1] \times \xoplus{L}^\infty
\ni
(t,\psi)
\longmapsto
\frac{1}{\mathcal{F}'_k\big(Q_k(t,\psi)\big)}
\in
L^\infty
,
\quad
k =1,2,
\end{equation*}
are well defined.
%
Moreover, \lemref{lem:comp} provides a constant $\eta\in]0,1]$ such
that for any bounded set $M\subset\xoplus{L}^\infty$ a constant
$C_M$ exists such that for all
$t,\,\check{t}\in[T_0,T_1]$ and all $\psi,\,\check{\psi}\in M$:
%
\begin{equation*}
\|
\widetilde{E}_k(t,\psi)
-
\widetilde{E}_k(\check{t},\check{\psi})
\|_{\xoplus{L}^\infty} \le
C_M
\big(
|t - \check{t}|^\eta
+
\|\psi - \check{\psi}\|_{\xoplus{L}^\infty}
\big),
\quad
k=1,2.
\end{equation*}
Since $\xoplus{W}^{1,q}$ embeds continuously into
$\xoplus{L}^\infty$, for any bounded set $M\subset\xoplus{W}^{1,q}$
there is a constant, again named $C_M$, such that for all
$t,\,\check{t}\in[T_0,T_1]$ and all $\psi,\,\check{\psi}\in M$:
%
\begin{equation*}
\|
\widetilde{E}_k(t,\psi)
-
\widetilde{E}_k(\check{t},\check{\psi})
\|_{\xoplus{L}^\infty}
\le
C_M
\big(
|t - \check{t}|^\eta
+
\|\psi - \check{\psi}\|_{\xoplus{W}^{1,q}}
\big),
\quad
k=1,2.
\end{equation*}
%
The identity $E_k=\frac{1}{\rho_k}\widetilde{E}_k$ and
\assuref{assu:effband} now imply \eqref{E1} and \eqref{E3}. According to
\lemref{lem:contQ} the sets
\begin{equation*}
\left\{
Q_k(t,\phi)
\,:\,
(t,\phi) \in [T_0,T_1] \times M
\right\}
,
\quad
k =1,2,
\end{equation*}
are bounded in $L^\infty$. Since the derivatives of the carrier
distribution functions $\mathcal{F}_k$, $k =1,2$, are continuous and
positive, \eqref{E2} immediately follows.
Using the second assertion of \lemref{lem:comp} we verify
\eqref{G1}, \eqref{G2}, and \eqref{G3} in a similar manner.
\end{proof}
%
\begin{lem}
\label{lem:BZ}
Let the Assumptions~\ref{assu:bands}, \ref{assu:poi-diri-bv}, and
\ref{assu:dop} be satisfied. Then the mapping $Z$ given by
\eqref{eq:Z} defines a family
\begin{math}
\{Z(t,\psi)\}_{(t,\psi) \in [T_0,T_1] \times \xoplus{W}^{1,q}}
\end{math}
of linear, compact operators
\begin{math}
Z(t,\phi)\from{\xoplus{L}^p \to \xoplus{L}^p}
.
\end{math}
Additionally, there is a H\"older exponent $\eta\in]0,1]$ and
constants $Z_M$ such that \eqref{Z1} and \eqref{Z3} are
satisfied.
\end{lem}
%
\begin{proof}
It suffices to show the analogous assertions for the entries of the
operator matrices $Z(t,\psi)$. Firstly, \lemref{lem:comp} gives us
the estimate
\begin{multline*}
\|
\mathcal{F}_k'\big(Q_k(t,\psi)\big)
-
\mathcal{F}_k'\big(Q_k(\check{t},\check{\psi})\big)
\|_{\mathcal{B}(L^p)}
\\
\le
\|
\mathcal{F}_k'\big(Q_k(t,\psi)\big)
-
\mathcal{F}_k'\big(Q_k(\check{t},\check{\psi})\big)
\|_{L^\infty}
\\
\le
C_M
\big(
|t-\check{t}|^\eta
+
\|\psi-\check{\psi}\|_{\xoplus{W}^{1,q}}
\big)
,
\quad
k =1,2,
\end{multline*}
%
where the constant $C_M$ can be taken uniformly with respect to
$t,\,\check{t}\in[T_0,T_1]$ and $\psi,\,\check{\psi}$ from any
bounded set $M\subset\xoplus{W}^{1,q}$. This estimate together with
\assuref{assu:effband} implies \eqref{Z3}. As
$\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow$ is a linear and even compact
operator from $L^p$ into itself, this gives \eqref{Z1}.
\end{proof}
%
\begin{lem}
\label{lem:YR}
Let the Assumptions~\ref{assu:recomb}, \ref{assu:bands},
\ref{assu:poi-diri-bv}, and \ref{assu:dop} be satisfied. Then the
mapping $Y$ defined by \eqref{eq:Y} meets the conditions
\eqref{Y1} and \eqref{Y3}.
\end{lem}
%
\begin{proof}
First, one deduces from the assumptions and
\corref{cor:boundedlip} that \eqref{eq:R} defines a mapping
\begin{math}
R \from{
[T_0,T_1] \times \xoplus{W}^{1,q} \to \xoplus{L}^p
}
\end{math}
for which there is a H\"older exponent $\eta\in]0,1]$. Moreover, for
any bounded set $M\subset\xoplus{W}^{1,q}$ there exists a constant $C_M$
such that for all $t,\,\check{t}\in[T_0,T_1]$ and all
$\psi,\,\check{\psi}\in M$:
\begin{equation*}
\|
R(t,{\psi})
-
R(\check{t},\check{\psi})
\|_{\xoplus{L}^p}
\le
C_M
\big(
|t - \check{t}|^\eta
+
\|{\psi} - \check{\psi}\|_{\xoplus{W}^{1,q}}
\big)
.
\end{equation*}
Applying \lemref{lem:EG} and \lemref{lem:BZ} one obtains \eqref{Y1}
and \eqref{Y3} for the mapping
\begin{equation*}
[T_0,T_1] \times \xoplus{W}^{1,q}
\ni
(t,\psi)
\longmapsto
\big[
I + Z(t,\psi)
\big]
E(t,\psi)
R(t,\psi).
\end{equation*}
The addends $b_k'$ and $\mathstrut_\downarrow\dbvarphi'$ of \eqref{eq:X} have the
required properties due to \assuref{assu:bands} and
\assuref{assu:poi-diri-bv}, respectively.
For $\mathcal{P}_0^{-1}d'$ they follow from
\assuref{assu:poi-diri-bv} (see\ also \remref{r-extend}),
\assuref{assu:dop} and the fact that $\mathcal{P}_0$ is an isomorphism
from $\widehat{W}^{1,q}_{\widehat{\Neumann}}$ onto
$\widehat{W}^{-1,q}_{\widehat{\Neumann}}$.
The addend
\begin{math}
\frac{\rho_k'(t)}{\rho_k(t)}
\frac{\mathcal{F}_k(Q_k(t,\psi))}{\mathcal{F}'_k(Q_k(t,\psi))}
\end{math}
of \eqref{eq:X} can be treated by means of \lemref{lem:comp} and
\assuref{assu:effband}.
\end{proof}
%
We are now going to establish existence and uniqueness of a local
solution to the evolution equation \eqref{eq:evol3}.
%
\begin{thm}
\label{thm:existenceA}
Under the Assumptions~\ref{assu:recomb}, \ref{assu:bands},
\ref{assu:cc:bc}, \ref{assu:cc:ini}, \ref{assu:poi-diri-bv} and
\ref{assu:dop} the quasi-linear parabolic equation \eqref{eq:evol3}
with the initial condition $\widetilde{\phi}(T_0)=\iniphi$ admits a
unique local solution in the sense of \defnref{def:solution} with
respect to the interpolation space
$V=[\xoplus{L}^p,\mathcal{D}]_\gamma$ with $\gamma\in]\frac{1}{2}+\frac{1}{q},1[$.
\end{thm}
%
\begin{proof}
According to the Lemmas~\ref{lem:EG}, \ref{lem:BZ}, \ref{lem:YR} the
mappings $E$, $G$, $Z$, and $Y$, defined by \eqref{eq:E},
\eqref{eq:G}, \eqref{eq:Z}, and \eqref{eq:Y}, respectively, fulfill
\assuref{assu:evol}.
Hence, the result follows from \proref{prop:oint}, see also
Remarks~\ref{rem:equiv} and \ref{rem:boundary}.
\end{proof}
\section[Main result]
{Main result}
\label{s-main}
We are going to show that a solution of the evolution equation
\eqref{eq:evol3} in the sense of \defnref{def:solution}
provides a solution of the van Roosbroeck system in the sense of
\defnref{def:vanroos}.
We start with a technical lemma.
\begin{lem}
\label{lem:diff}
Let $\xi\from{\mathbb{R}\to\mathbb{R}}$ be twice
continuously differentiable.
The composition $\xi\circ\psi$ is from
$C([T_0,T],L^\infty)$, if $\psi{\in}C([T_0,T],L^\infty)$.
If $\psi$ composed with the embedding
$L^\infty{\hookrightarrow}L^p$, $p\ge1$, is continuously
differentiable in $L^p$ on $]T_0,T[$, then $\xi\circ\psi$
composed with the same embedding is continuously differentiable in
$L^p$ on $]T_0,T[$ and its derivative is given by
\begin{equation*}
\frac{d\xi\circ\psi}{dt}(t)
= \xi'\big(\psi(t)\big)\psi'(t) \in L^p
,
\quad
t \in ]T_0,T[.
\end{equation*}
\end{lem}
\begin{proof}
If $h_1$, $h_2\in L^\infty$, then, by \lemref{lem:cardens} ---
see also \assuref{assu:distri} --- we may write
\begin{equation*}
\xi(h_1) - \xi(h_2)
=
\xi'(h_1)(h_1 - h_2)
+
T(h_1,h_2)(h_1 - h_2)
\end{equation*}
where $T(h_1,h_2)$ converges to zero in $L^\infty$ if
$h_1{\in}L^\infty$ is fixed and $h_2$ approaches $h_1$ in the
$L^\infty$-norm. Now we set $h_1=\psi(t)$ and $h_2=\psi(\check{t})$ and
divide both sides by $t-\check{t}$. In the limit $\check{t}\to t$ one has
$\lim_{\check{t}\to t}T(\psi(t),\psi(\check{t}))=0$ in $L^\infty$, while
$\lim_{\check{t}\to t}\frac{\psi(t)-\psi(\check{t})}{t-\check{t}}=\psi'(t)$ in
$L^p$ by supposition.
\end{proof}
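The chain rule established in \lemref{lem:diff} can also be observed
numerically: for a smooth path $t\mapsto\psi(t)$ in $L^\infty$, the
difference quotient of $\xi\circ\psi$ converges to $\xi'(\psi)\psi'$ in the
(discretized) $L^p$ norm as the increment shrinks. A sketch with invented
sample data ($\xi=\sin$, $\psi(t)(x)=t\,x^2$, $p=2$):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
dx = x[1] - x[0]
xi, dxi = np.sin, np.cos
psi = lambda t: t * x ** 2            # a C^1 path t -> psi(t) in L^inf
dpsi = lambda t: x ** 2               # its time derivative

def lp_norm(f, p=2):
    """Discretized L^p norm on [0, 1]."""
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

t = 0.7
exact = dxi(psi(t)) * dpsi(t)         # xi'(psi(t)) psi'(t)
errs = []
for h in (1e-2, 1e-3, 1e-4):
    quotient = (xi(psi(t + h)) - xi(psi(t))) / h
    errs.append(lp_norm(quotient - exact))
print(errs)                           # errors shrink with h
```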
Our next aim is to justify formula \eqref{eq:diffdicht}.
\begin{lem}
\label{lem:implic}
Let the Assumptions~\ref{assu:bands}, \ref{assu:cc:bc},
\ref{assu:poi-diri-bv}, and \ref{assu:dop} be satisfied and assume that
$\widetilde{\phi}$ is a solution of \eqref{eq:evol3}. We define
\begin{equation}
\label{eq:zimpk}
z\stackrel{\scriptscriptstyle\mathrm{def}}{=}(z_1,z_2)
\quad\text{with}\quad
z_k(t) \stackrel{\scriptscriptstyle\mathrm{def}}{=} \widetilde{\phi}_k(t) + b_k(t) + (-1)^k \mathstrut_\downarrow \dbvarphi(t)
,
\quad
k=1,2
,
\;
t \in [T_0,T]
,
\end{equation}
and $\varphi(t)\stackrel{\scriptscriptstyle\mathrm{def}}{=}\mathcal{L}\big(d(t),z(t)\big)$. Then
\begin{math}
Q_k(t,\widetilde{\phi}(t))
=
z_k(t)
+
(-1)^k \mathstrut_\downarrow\varphi(t)
,
\end{math}
and
the functions
\begin{equation*}
[T_0,T] \ni t
\longmapsto
G_k(t,\widetilde{\phi}(t))
=
\mathcal{G}_k
\big(
Q_k(t,\widetilde{\phi}(t))
\big) \in L^\infty
,
\end{equation*}
and
\begin{equation*}
[T_0,T] \ni t
\longmapsto
u_k(t) \stackrel{\scriptscriptstyle\mathrm{def}}{=} \rho_k(t) \mathcal{F}_k
\big( Q_k(t,\widetilde{\phi}(t))
\big) \in L^\infty
\end{equation*}
are continuous and, composed with the embedding
$L^\infty{\hookrightarrow}L^p$, they are continuously
differentiable on $]T_0,T[$. The time derivative of
$u_k$
is given by
\begin{multline}
\label{eq:ukprime}
u_k'(t)
=
\rho_k'(t) \mathcal{F}_k
\big(
Q_k(t,\widetilde{\phi}(t))
\big)
\\
+
\rho_k(t) \mathcal{F}_k'
\big(
Q_k(t,\widetilde{\phi}(t))
\big)
\big[
\widetilde{\phi}_k'(t)
+ b_k'(t)
+ (-1)^k \mathstrut_\downarrow \dbvarphi'(t)
+ (-1)^k \mathstrut_\downarrow \varphi'(t)
\big]
\end{multline}
$k =1,2$, $t\in]T_0,T[$.
\end{lem}
\begin{proof}
Due to \assuref{assu:cc:bc} and \defnref{def:solution} the function
$\widetilde{\phi}$ belongs to the space
\begin{equation}
\label{eq:space}
C([T_0,T],\xoplus{L}^\infty)
\cap
C^1(]T_0,T[,\xoplus{L}^p),
\end{equation}
see also \remref{rem:initial}.
Hence, the Assumptions~\ref{assu:bands} and \ref{assu:poi-diri-bv}
ensure that the function $z$ also belongs to this space, and by
\corref{cor:boundedlip}, so does the function
$\varphi=\mathcal{L}\big(d(t),z(t)\big)$. Thus, we may
apply \lemref{lem:diff}.
\end{proof}
\begin{rem}
\label{rem:justi}
\lemref{lem:implic} justifies the formal manipulations in
\secref{sec:der}. First, \eqref{eq:diffdicht} is given a strict
sense. Furthermore, the differentiation of Poisson's equation
\eqref{eq:poidiff} has the following precise meaning: since
$\widetilde{\phi}$ is from the space \eqref{eq:space}, the function
$t\mapsto\varphi(t)$ is differentiable --- even in a much ``better''
space than $\widetilde{\phi}$ --- see\ \corref{cor:boundedlip}. Hence, the
right hand side of \eqref{eq:poisson} is differentiable with respect
to time in the space $\widehat{W}^{-1,q}_{\widehat{\Neumann}}$ and
\eqref{eq:poidiff} is an equation in the space
$\widehat{W}^{-1,q}_{\widehat{\Neumann}}$.
\end{rem}
We come now to the main results of this paper.
\begin{thm}
\label{thm:central}
Under the Assumptions~\ref{assu:recomb}, \ref{assu:bands},
\ref{assu:cc:bc}, \ref{assu:cc:ini}, \ref{assu:poi-diri-bv}, and
\ref{assu:dop} van Roosbroeck's system with initial condition
$\widetilde{\phi}(T_0)=\iniphi\in\xoplus{W}^{1,q}$ admits a unique
local-in-time solution in the sense of \defnref{def:vanroos}.
\end{thm}
\begin{proof}
By \thmref{thm:existenceA} the auxiliary evolution equation
\eqref{eq:evol3} admits --- in the sense of \defnref{def:solution}
--- a unique local solution $\widetilde{\phi}$ satisfying the initial
condition $\widetilde{\phi}(T_0)=\iniphi$. Let us show that --- in the sense
of \defnref{def:vanroos} --- the pair $\{\widetilde{\varphi},\widetilde{\phi}\}$, with
$\widetilde{\varphi}$ given by
\begin{equation}
\label{eq:proof1}
\widetilde{\varphi}(t)
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\dbvarphi(t)
+
\mathcal{L}
\big(
d(t),
z(t)
\big)
,
\quad t \in [T_0,T],
\end{equation}
and $z$ according to \eqref{eq:zimpk},
is a local solution of van Roosbroeck's system.
First, \eqref{eq:sol-phi} is identical to \eqref{eq:solution}. By
the embedding
\begin{math}
V
\hookrightarrow
\xoplus W^{1,q}_{\Gamma}
\hookrightarrow
\xoplus{L}^\infty
\end{math}
(see\ \remref{rem:initial}) the
function
\begin{math}
[T_0,T]\ni{t}\mapsto\phi(t)\in \xoplus{L}^\infty
\end{math}
is continuous, and so is the function
\begin{math}
[T_0,T]\ni{t}\mapsto\bvphi{}(t)\in \xoplus{L}^\infty
\end{math}
in view of \assuref{assu:cc:bc}.
Thus,
\begin{math}
\widetilde{\phi}
\in
C([T_0,T],\xoplus{L}^\infty)
\cap
C^1(]T_0,T[,\xoplus{L}^p)
.
\end{math}
Moreover, for $z$, see\ \eqref{eq:zimpk}, one obtains from the
Assumptions~\ref{assu:bands} and \ref{assu:poi-diri-bv} that
\begin{math}
z
\in C([T_0,T],\xoplus{L}^\infty)
\cap
C^1(]T_0,T[,\xoplus{L}^p)
.
\end{math}
Consequently, property \eqref{eq:sol-varphi} follows by
\corref{cor:boundedlip}, while \eqref{eq:ukjk-regularity} results from
\lemref{lem:implic}. The Poisson equation \eqref{eq:poisson} with
densities \eqref{eq:car-density} is obviously satisfied by
\eqref{eq:proof1} due to the definition of $\mathcal{L}$.
\eqref{eq:jk-regularity} follows from
\begin{math}
\nabla \widetilde{\phi}_k
\in
C(]T_0,T],\xoplus{L}^q)
,
\end{math}
$k =1,2$, and \lemref{lem:implic}.
\eqref{eq:divjk-regularity} is implied by \eqref{eq:solution} and
\eqref{eq:distrib}.
It remains to show that the continuity equations
\eqref{eq:continuity} are satisfied.
For this, one first notes the relations
\begin{equation}
\label{eq:qk}
Q_k(t,\widetilde{\phi}(t))
= \widetilde{\phi}_k(t)
+ (-1)^k\mathstrut_\downarrow\widetilde{\varphi}(t)
+ b_k(t)
= z_k(t)
+ (-1)^k\mathstrut_\downarrow\varphi(t)
,
\quad k = 1,2,
\end{equation}
and
\begin{equation}
\label{eq:recover}
R(t,\widetilde{\phi}(t))
=
\left(
\begin{smallmatrix}
r_1(t, \widetilde{\varphi}(t), \widetilde{\phi}(t))
\\
r_2(t, \widetilde{\varphi}(t), \widetilde{\phi}(t))
\end{smallmatrix}
\right)
,
\end{equation}
which follows from the definitions \eqref{eq:Q} and \eqref{eq:R} of
$Q$ and $R$, and from \eqref{eq:zimpk}, \eqref{eq:proof1}.
Further, in \assuref{assu:recomb} we demand that the mappings
$r_k$, $k=1,2$, take their values in $L^p$ --- consequently,
$R$ takes its values in $\xoplus{L}^p$.
From \eqref{eq:ukprime} and \eqref{eq:E} one gets
\begin{equation*}
E_k(t,\widetilde{\phi}(t)) u_k'(t)
= \widetilde{\phi}_k'(t)
+ b_k'(t)
+ (-1)^k \mathstrut_\downarrow \widetilde{\varphi}'(t)
+ \tfrac{\rho_k'(t)}{\rho_k(t)}
\tfrac{\mathcal{F}_k ( Q_k(t,\widetilde{\phi}(t) ) )}
{\mathcal{F}_k' ( Q_k(t,\widetilde{\phi}(t) ) )}
,
\end{equation*}
and by means of the evolution equation \eqref{eq:evol3} we obtain
\begin{multline*}
E(t,\widetilde{\phi}(t)) u'(t)
=
\big[ I + Z(t,\widetilde{\phi}(t)) \big]
E(t,\widetilde{\phi}(t))
\nabla\cdot
G(t,\widetilde{\phi}(t)) \mu \nabla \widetilde{\phi}(t)
\\
+
\big[
I + Z(t,\widetilde{\phi}(t))
\big]
E(t,\widetilde{\phi}(t))
R(t,\widetilde{\phi}(t))
+
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1} d'(t) - \mathstrut_\downarrow\varphi'(t)
\\
\mathstrut_\downarrow\varphi'(t) - \mathstrut_\downarrow\mathcal{P}_0^{-1} d'(t)
\end{smallmatrix}
\right)
.
\end{multline*}
We now make use of the representation \eqref{eq:curr-dens} of the
currents $j=(j_1,j_2)$, and get
\begin{multline*}
E(t,\widetilde{\phi}(t))
\left[
u'(t) - \nabla\cdot j(t) - R(t,\widetilde{\phi}(t))
\right]
\\
=
Z(t,\widetilde{\phi}(t))
E(t,\widetilde{\phi}(t))
\left[
\nabla\cdot j(t) + R(t,\widetilde{\phi}(t))
\right]
+
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1} d'(t) - \mathstrut_\downarrow\varphi'(t)
\\
\mathstrut_\downarrow\varphi'(t) - \mathstrut_\downarrow\mathcal{P}_0^{-1} d'(t)
\end{smallmatrix}
\right)
.
\end{multline*}
We already know that the formal differentiation of Poisson's
equation is justified, see\ \remref{rem:justi}. Thus,
\eqref{eq:poidiff} yields
\begin{multline*}
E(t,\widetilde{\phi}(t))
\left[
u'(t) - \nabla\cdot j(t) - R(t,\widetilde{\phi}(t))
\right]
\\
=
Z(t,\widetilde{\phi}(t))
E(t,\widetilde{\phi}(t))
\left[
\nabla\cdot j(t) + R(t,\widetilde{\phi}(t))
\right]
+
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
(u_2'(t) - u_1'(t))
\\
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
(u_1'(t) - u_2'(t))
\end{smallmatrix}
\right)
,
\end{multline*}
and, observing \eqref{eq:ZE} and \eqref{eq:recover}, we get
\begin{equation}
\label{eq:proof2}
\left[
E(t,\widetilde{\phi}(t))
+
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\\
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\end{smallmatrix}
\right)
\right]
\left(
\begin{smallmatrix}
u_1'(t) - \nabla\cdot j_1(t) - r_1(t, \widetilde{\varphi}(t), \widetilde{\phi}(t))
\\
u_2'(t) - \nabla\cdot j_2(t) - r_2(t, \widetilde{\varphi}(t), \widetilde{\phi}(t))
\end{smallmatrix}
\right)
=
0
.
\end{equation}
%
The operator on the left-hand side is continuous on $\xoplus{L}^p$; we
now show that its kernel is trivial. Let $f_1$, $f_2\in{L}^p$ be such that
\begin{equation*}
\left[
E(t,\widetilde{\phi}(t))
+
\left(
\begin{smallmatrix}
\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\\
-\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
&\mathstrut_\downarrow\mathcal{P}_0^{-1}\mathstrut^\uparrow
\end{smallmatrix}
\right)
\right]
\left(
\begin{smallmatrix}
f_1 \\ f_2
\end{smallmatrix}
\right)
= 0
.
\end{equation*}
This is equivalent to the relations
\begin{equation*}
f_2=-\tfrac{E_1(t,\widetilde{\phi}(t))} {E_2(t,\widetilde{\phi}(t))}f_1
\quad\text{and}\quad
\mathstrut_\downarrow \mathcal{P}_0^{-1} \mathstrut^\uparrow
\left(
\big(
1+\tfrac {E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))}
\big)
f_1
\right)
=
-E_1(t,\widetilde{\phi}(t))f_1
.
\end{equation*}
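Writing the kernel equation componentwise (we abbreviate
$E_k = E_k(t,\widetilde{\phi}(t))$ and
$\mathcal{P} = \mathstrut_\downarrow \mathcal{P}_0^{-1} \mathstrut^\uparrow$)
one gets
\begin{equation*}
E_1 f_1 + \mathcal{P}(f_1 - f_2) = 0
\quad\text{and}\quad
E_2 f_2 + \mathcal{P}(f_2 - f_1) = 0;
\end{equation*}
adding both equations gives $E_1 f_1 + E_2 f_2 = 0$, which is the first
of these relations, and inserting it into the first equation yields the
second.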
Note that
\begin{math}
\mathcal{P}_0^{-1}\mathstrut^\uparrow
\big(
(1+\frac {E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))})
f_1
\big)
\end{math}
belongs to $\widehat{W}^{1,q}_{\widehat{\Neumann}}$, and hence to
$\widehat{L}^\infty$. Indeed, the embedding
$\widehat{L}^p\hookrightarrow\widehat{W}^{-1,q}_{\widehat{\Neumann}}$
is continuous, and $\mathcal{P}_0$ is an isomorphism between
$\widehat{W}^{1,q}_{\widehat{\Neumann}}$ and
$\widehat{W}^{-1,q}_{\widehat{\Neumann}}$, see\
\proref{prop:isomorphy}.
Hence, we may multiply both sides with
\begin{math}
f_1+\frac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))}f_1
\end{math}
and integrate over
$\Omega$; this yields
\begin{multline}
\label{eq:glei}
\int_{\Omega}
\mathstrut_\downarrow \mathcal{P}_0^{-1} \mathstrut^\uparrow
\left(
f_1
+
\tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))} f_1
\right)
\left(
f_1+\tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))} f_1
\right)
\,\mathrm{d}{x}
\\
=
\int_{\widehat{\Omega}}
\mathcal{P}_0^{-1}
\mathstrut^\uparrow
\left(
f_1 + \tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))} f_1
\right)
\mathstrut^\uparrow
\left(
f_1 + \tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))} f_1
\right)
\,\mathrm{d}{x}
\\
=
-
\int_{\Omega}
E_1(t,\widetilde{\phi}(t))
\left(
1+\tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))}
\right)
f_1^2
\,\mathrm{d}{x}
.
\end{multline}
The quadratic form
\begin{math}
\psi
\mapsto
\int_{\widehat{\Omega}}
(\mathcal{P}_0^{-1} \psi) \psi \,\mathrm{d}{x}
\end{math}
is non-negative on $\widehat{L}^2$ and extends by continuity to
$\widehat{L}^p$, where it is also non-negative. On the other hand,
the function
\begin{math}
E_1(t,\widetilde{\phi}(t))
\left(
1+\tfrac{E_1(t,\widetilde{\phi}(t))}{E_2(t,\widetilde{\phi}(t))}
\right)
\end{math}
is strictly positive almost everywhere on $\Omega$. Therefore, the
right-hand side of \eqref{eq:glei} can only be non-negative if $f_1$
vanishes almost everywhere on $\Omega$.
Hence, \eqref{eq:proof2} establishes the continuity equations
\eqref{eq:continuity}.
To prove uniqueness of a solution of van Roosbroeck's system in the
sense of \defnref{def:vanroos} one shows that any solution in the
sense of \defnref{def:vanroos} yields a solution in the sense of
\defnref{def:solution}. This has been done formally by the
reformulation of van Roosbroeck's system as a quasi-linear
parabolic system in \secref{sec:nl-reform}. In fact, all formal
steps can be carried out in the underlying function spaces; we
verify this in the sequel for the crucial points.
\eqref{eq:poisson} and \eqref{eq:car-density} ensure that $\varphi$
is a solution of \eqref{eq:nlp}. Hence, \corref{cor:boundedlip}
implies that $\varphi$ is indeed continuously differentiable in
$\widehat{W}_{\widehat{\Neumann}}^{1,q}$, and, consequently,
\eqref{eq:currcon} makes sense in
$\widehat{W}_{\widehat{\Neumann}}^{-1,q}$. The derivation of
\eqref{eq:car-density}, see\ also \eqref{eq:chemical-potential}, is
justified by \lemref{lem:diff}. Thus, \eqref{eq:diffdicht} holds in
a strict sense. The division by $\rho_k\mathcal{F}_k'$ is allowed
because both factors have (uniform) upper and lower bounds. The
remaining manipulations up to \eqref{eq:evol3} are straightforward to
justify.
\end{proof}
Next we want to establish the natural formulation of the balance laws
in van Roosbroeck's system in integral form, see\
\eqref{eq:balancelaw}, which is one of the central goals of this
paper.
First, one observes that the boundary integral has to be understood
in the distributional sense --- as is well known from Navier--Stokes
theory, see \cite{temam} --- if one only knows that the current is a
$q$--summable function and that its divergence is $p$--summable. More precisely, the following proposition holds.
\begin{prop}
\label{prop:currnor}
Let $\omega\subset\mathbb{R}^2$ be any bounded Lipschitz domain.
Assume that $j\from{\omega\to\mathbb{R}^2}$ belongs to
$L^q(\omega;\mathbb{R}^2)$ and let the divergence (in the sense
of distributions) $\nabla\cdot{j}$ of $j$ be $p$--integrable on $\omega$.
If $q>2$ and $p=\frac{q}{2}$, then there
is a uniquely determined linear continuous functional
\begin{math}
j_\nu
\in
W^{-1+\frac{1}{q'},q}(\partial\omega)
\end{math}
such that
\begin{equation}
\label{eq:currnor}
\int_\omega j \cdot \nabla\psi \,\mathrm{d}{x}
+
\int_\omega \psi \nabla\cdot j \,\mathrm{d}{x}
=
\dual{j_\nu}{\psi|_{\partial\omega}}
\quad
\text{for all $\psi \in W^{1,q'}(\omega)$,}
\end{equation}
where $\dual{\cdot}{\cdot}$ on the right hand side denotes
the duality between
$W^{1-\frac{1}{q'},q'}(\partial\omega)$ and
$W^{-1+\frac{1}{q'},q}(\partial\omega)$.
If, in addition, the function $j$ is continuously differentiable on
$\omega$ and the partial derivatives have continuous extensions
to $\overline{\omega}$, then
\begin{equation*}
\int_\omega j \cdot \nabla\psi \,\mathrm{d}{x}
+
\int_\omega \psi \nabla\cdot j \,\mathrm{d}{x}
=
\int_{\partial\omega}
\psi|_{\partial\omega}
\nu \cdot j
\,\mathrm{d}{\sigma_{\omega}}
\quad
\text{for all $\psi \in W^{1,q'}(\omega)$,}
\end{equation*}
where $\nu$ is the outer unit normal of $\partial\omega$, and
$\sigma_\omega$ is the arc--measure on $\partial\omega$.
\end{prop}
\begin{proof}
The first statement is a slight generalization, see\
\cite[Lemma~2.4]{kare:current}, of well known results from
\cite[Ch.~1]{temam}. The second assertion has been proved in
\cite[Ch.~5.8]{evans:gariepy92}.
\end{proof}
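For smooth currents the duality pairing in \eqref{eq:currnor} is the classical boundary integral, and the identity can be checked numerically. The following standalone Python sketch (our own illustration; the current field, test function, and grid resolution are arbitrary choices, not taken from the paper) verifies the integration-by-parts identity on the unit square with a midpoint rule.

```python
import math

N = 200                     # midpoint-rule resolution
h = 1.0 / N

def j(x, y):                # a smooth current field (arbitrary choice)
    return (x * x * y, math.sin(math.pi * y))

def div_j(x, y):            # its divergence, computed by hand
    return 2.0 * x * y + math.pi * math.cos(math.pi * y)

def psi(x, y):              # a smooth test function; grad(psi) = (1, 2)
    return x + 2.0 * y

# interior side: int j . grad(psi) dx  +  int psi div(j) dx
interior = 0.0
for i in range(N):
    for k in range(N):
        x, y = (i + 0.5) * h, (k + 0.5) * h
        jx, jy = j(x, y)
        interior += (jx * 1.0 + jy * 2.0 + psi(x, y) * div_j(x, y)) * h * h

# boundary side: oint psi (j . nu) dsigma over the four edges
boundary = 0.0
for i in range(N):
    s = (i + 0.5) * h
    boundary += psi(s, 0.0) * (-j(s, 0.0)[1]) * h    # bottom, nu = (0,-1)
    boundary += psi(s, 1.0) * ( j(s, 1.0)[1]) * h    # top,    nu = (0, 1)
    boundary += psi(0.0, s) * (-j(0.0, s)[0]) * h    # left,   nu = (-1,0)
    boundary += psi(1.0, s) * ( j(1.0, s)[0]) * h    # right,  nu = ( 1,0)

print(interior, boundary)   # the two sides agree up to O(h^2)
```

Refining the grid shrinks the discrepancy quadratically, as expected for the midpoint rule.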
\begin{thm}
\label{thm:curnorm}
If $(\widetilde{\varphi},\widetilde{\phi})$ is a solution of van Roosbroeck's system in
the sense of \defnref{def:vanroos}, and $\omega\subset\Omega$ is
an open Lipschitz domain, then there are unique continuous
functions
\begin{math}
j_{k\nu}\from{
]T_0,T]
\to
W^{-1 + \frac {1}{q'},q}(\partial\omega)
},
\end{math}
$k=1,2$,
such that
\begin{equation}
\label{eq:gauss}
\frac{\partial}{\partial t}
\int_\omega u_k(t) \,\mathrm{d}{x}
=
\dual{j_{k\nu}(t)}{1}
+
\int_\omega
r_k(t,\widetilde{\varphi}(t),\widetilde{\phi}(t)) \,\mathrm{d}{x}
,
\quad k =1,2,
\end{equation}
where $\dual{\cdot}{\cdot}$ again denotes the duality between
$W^{1-\frac{1}{q'},q'}(\partial\omega)$ and
$W^{-1+\frac{1}{q'},q}(\partial\omega)$.
\end{thm}
\begin{proof}
From \eqref{eq:continuity} we obtain for any open Lipschitz domain
$\omega\subset\Omega$
\begin{equation*}
\int_\omega
u_k'(t) - \nabla\cdot j_k(t)
\,\mathrm{d}{x}
=
\frac{\partial}{\partial t}
\int_\omega u_k(t)
\,\mathrm{d}{x}
-
\int_\omega \nabla\cdot j_k(t)
\,\mathrm{d}{x}
=
\int_\omega
r_k(t,\widetilde{\varphi}(t),\widetilde{\phi}(t))
\,\mathrm{d}{x}
,
\end{equation*}
where $j_k$ is defined by \eqref{eq:cur-density}. Using
\proref{prop:currnor} we find for every $t\in]T_0,T]$ a unique
element $j_{k\nu}(t){\in}W^{-1+\frac{1}{q'},q}(\partial\omega)$
such that \eqref{eq:gauss} holds. Moreover, continuity passes over
from the functions \eqref{eq:jk-regularity} to the mappings
\begin{math}
]T_0,T] \ni t
\mapsto
j_{k\nu}(t) \in W^{-1+\frac{1}{q'},q}(\partial\omega)
.
\end{math}
\end{proof}
If the currents $j_k(t)$ are continuously differentiable on
$\omega$ and the partial derivatives have continuous extensions to
$\overline{\omega}$, then by the second part of
\proref{prop:currnor} the formula \eqref{eq:gauss} takes the form
\eqref{eq:balancelaw}.
\section[Numerics]{Numerics}
\thmref{thm:curnorm} is the basis for space discretization of
drift--diffusion equations by means of the finite volume method (FVM).
The FVM was adopted for the numerical solution of van Roosbroeck's
equations by Gajewski, and this approach has been further investigated
in \cite{gajewski:gaertner94,0978.65081,gklnr:detectors,05013697}.
To discretize the spatial domain one uses a partition into simplex
elements.
Let $\mathcal{E}$ be the set of all edges $e_{il}=x_i-x_l$ of this
triangulation, where $x_1$, $x_2$,\ldots are the vertices.
Moreover, we define the Voronoi cell assigned to a vertex $x_i$ by
\begin{multline*}
V_i
\stackrel{\scriptscriptstyle\mathrm{def}}{=}
\{
\text{$x$ in the spatial simulation domain, such that}
\\
\norm{x-x_i} \le \norm{x-x_l}
\quad
\text{for all vertices $x_l$ of the triangulation}
\}
,
\end{multline*}
where $\norm{\cdot}$ refers to the norm in the spatial simulation
space $\mathbb{R}^2$.
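Membership in a Voronoi cell can be decided directly from the defining distance inequalities. The following Python sketch (a self-contained illustration; the four vertices are a hypothetical example, not a triangulation from the text) assigns a point to the cell of its nearest vertex.

```python
import math

# arbitrary vertices x_1, ..., x_4 of a hypothetical triangulation
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def voronoi_index(x):
    """Index i of the Voronoi cell V_i containing x, i.e. the i with
    ||x - x_i|| <= ||x - x_l|| for all l (ties resolved to smallest i)."""
    distances = [math.dist(x, v) for v in vertices]
    return distances.index(min(distances))

print(voronoi_index((0.1, 0.2)), voronoi_index((0.9, 0.95)))
```

On interfaces between two cells the tie-breaking rule is immaterial for the quadrature, since these sets have measure zero.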
Now, to get a space discrete version of the current--continuity
equation, we specify \eqref{eq:gauss} with $\omega=V_i$, and
approximate $\dual{j_{k\nu}(t)}{1}$ piecewise by
\begin{math}
{j_k}_{il}
\sigma(\partial V_i \cap \partial V_l),
\end{math}
$\sigma$ being the arc measure on the boundary of $\omega=V_i$.
The intermediate value ${j_k}_{il}$ can be obtained as follows: The
main hypothesis with respect to the discretization of the currents ---
due to Scharfetter and Gummel \cite{scharfetter:gummel69} --- is that
the electron and hole current densities $j_2$ and $j_1$ are constant
along simplex edges. This assumption allows one to calculate ${j_1}_{il}$
and ${j_2}_{il}$ --- the constant values on the edge $e_{il}$ --- in
terms of the node values of the electrostatic potential and the
particle densities, see\ for instance\ \cite{gklnr:detectors}.
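Although the explicit formula is deferred to the references, a standard form of the Scharfetter--Gummel edge current uses the Bernoulli function $B(x)=x/(e^x-1)$: up to normalization, ${j_k}_{il} \propto B(\delta_{il})\,u_k(x_l) - B(-\delta_{il})\,u_k(x_i)$, where $\delta_{il}$ is the scaled potential drop along the edge. The Python sketch below is a textbook illustration of this convention (dimensionless units; the signs and scalings are our assumptions, not taken from the cited works); note that the edge current vanishes in thermal equilibrium, $u_l = u_i e^{\delta_{il}}$.

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with a Taylor fallback near x = 0."""
    if abs(x) < 1e-8:
        return 1.0 - x / 2.0 + x * x / 12.0
    return x / math.expm1(x)

def sg_edge_current(u_i, u_l, delta, mobility=1.0, length=1.0):
    """Scharfetter-Gummel current on the edge from x_i to x_l;
    delta is the dimensionless potential drop along the edge
    (signs and normalization are illustrative assumptions)."""
    return mobility / length * (bernoulli(delta) * u_l
                                - bernoulli(-delta) * u_i)

# delta -> 0 recovers the plain finite difference (u_l - u_i) / length,
# and u_l = u_i * exp(delta) (thermal equilibrium) gives zero current
```

A useful sanity check is the identity $B(-x)-B(x)=x$, which follows from $B(-x)=e^{x}B(x)$.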
Thus, one ends up with the following FVM discretization of van
Roosbroeck's system for all interior Voronoi cells $V_i$:
\begin{equation*}
\begin{split}
\varepsilon(x_i)
\sum_{l \,:\, e_{il}\in\mathcal{E}}
(\nabla\varphi)_{il}
\sigma(\partial V_i \cap \partial V_l)
& =
\left(
\tilde{d}(x_i) + {u_1}(x_i) - {u_2}(x_i)
\right)
\abs{V_i},
\\
\frac{\partial{u_k}}{\partial t}(x_i)\abs{V_i}
-\sum_{l \,:\, e_{il}\in\mathcal{E}}
{j_k}_{il}\sigma(\partial V_i \cap \partial V_l)
& =
r_k(t,\widetilde{\varphi},\widetilde{\phi}_1,\widetilde{\phi}_2)(x_i)
\abs{V_i}
,
\end{split}
\end{equation*}
where $\abs{V_i}$ is the volume of the Voronoi cells $V_i$.
Here we have tested the Poisson equation also with the characteristic
function $1_{V_i}$ of the Voronoi cell $V_i$, and we have applied
Gauss' theorem. In view of \proref{prop:currnor} we assume, in
addition to \assuref{assu:dop},
\begin{math}
\tilde{d}
\from{[T_0,T_1] \to \widehat{L}^p},
\end{math}
and observe that $\rbvarphi$ can be chosen such that
\begin{math}
\dual{\rbvarphi}{1_{V_i}}=0
\end{math}
for interior Voronoi cells $V_i$,
see\ \remref{r-extend}.
Again, we approximate the right hand side of \eqref{eq:currnor}
piecewise by
\begin{math}
(\nabla\varphi)_{il} \sigma(\partial V_i \cap \partial V_l),
\end{math}
and we assume --- in consonance with the hypothesis about currents ---
that the gradient of the electrostatic potential is constant
on the edges of the triangulation, that is,
\begin{math}
(\nabla\varphi)_{il}
=
(\varphi(x_i)-\varphi(x_l))/\norm{x_i-x_l}
.
\end{math}
Usually, this finite volume discretization of space has been combined
with an implicit time discretization, see\ for instance\ \cite{gajewski93}. Note
that the strong differentiability in time of the electron and hole
densities is constitutive for this approach.
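For a single linear mode $u' = -\lambda u$ of the space-discrete system, the implicit (backward Euler) update reads $u_{n+1} = u_n/(1+\lambda\,\Delta t)$ and is stable for every step size. The following Python sketch is a generic illustration of this update (our own minimal example, not code from the cited implementations).

```python
import math

def backward_euler(u0, lam, dt, steps):
    """Implicit Euler for u' = -lam * u: each step solves
    u_new + dt * lam * u_new = u_old for u_new."""
    u = u0
    for _ in range(steps):
        u = u / (1.0 + lam * dt)
    return u

approx = backward_euler(1.0, lam=2.0, dt=1e-4, steps=10000)   # t = 1
exact = math.exp(-2.0)
print(approx, exact)
```

In the full nonlinear setting each step instead requires the solution of a coupled nonlinear system, typically by Newton's method, but the stability mechanism is the same.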
\section[Outlook to three spatial dimensions]%
{Outlook to three spatial dimensions}
\label{sec:outlook}
Much of semiconductor device simulation relies on spatially
two-dimensional models. However, with increasing complexity of
electronic device design spatially three-dimensional simulations
become ever more important, see\ for instance\
\cite{gklnr:detectors,gartner:richter:06,gartner:depfet}. This raises
the question of which results for the two-dimensional case carry
over to the three-dimensional case. In particular, can one expect that
in three spatial dimensions the divergence of the currents belongs to
a Lebesgue space, and is it possible to establish strong
differentiability of the carrier densities under the rather weak
assumptions on the reaction terms made in this paper?
A conditio sine qua non for a modus operandi as in this paper is that in
the three-dimensional case the operators
\begin{equation*}
-\nabla \cdot \varepsilon \nabla
\from{
\widehat{W}^{1,q}_{\widehat{\Neumann}}
\to
\widehat{W}^{-1,q}_{\widehat{\Neumann}}
}
\quad
\text{and}
\quad
-\nabla \cdot \mu_k \nabla
\from{
{W}^{1,q}_{\Gamma}
\to
{W}^{-1,q}_{\Gamma}
}
\end{equation*}
provide isomorphisms for a summability index $q>3$. Unfortunately,
this is not the case for arbitrary three-dimensional spatial domains, see\
\cite{meyers63}. However, one can prove such a result for certain
classes of three-dimensional material structures and boundary
conditions, see\ \cite{pub:1066}, for instance for layered media and
Dirichlet boundary conditions. Dauge proved the result in
\cite{0767.46026} for the Dirichlet Laplacian on a convex polyhedron,
provided the Dirichlet boundary part is separated from its complement
by a finite union of line segments. It would be desirable to
combine this conclusion with a heterogeneous material composition.
Under the hypothesis that the aforementioned isomorphisms exist, there
are results on quasilinear parabolic systems --- analogous to
\proref{prop:oint} --- see\ \cite{rehberg:2005amann} and
\cite{hieber:rehberg:06}, such that one can obtain classical solutions
of the spatially three-dimensional drift--diffusion equations in much
the same way as in the two-dimensional case treated here.
\begin{acknowledgement}
We would like to thank Klaus G\"artner for discussions about
van Roosbroeck's system.
\end{acknowledgement}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\newcommand{\col}[2]{\left(\begin{array}{c} #1 \\ #2 \end{array}\right)}
\newcommand{\fl}[1]{ \lfloor #1 \rfloor }
\newcommand{\ket}[1]{\vert #1 \rangle}
\renewcommand{\mod}[1]{\ (\mathrm{mod}\ #1)}
\newcommand{\new}[1]{{\em #1}}
\newcommand{\rep}[1]{{\bf #1}}
\newcommand{\set}[1]{\{#1\}}
\def \begin{equation} {\begin{equation}}
\def \end{equation} {\end{equation}}
\def \begin{split} {\begin{split}}
\def \end{split} {\end{split}}
\def \begin{eqnarray} {\begin{eqnarray}}
\def \end{eqnarray} {\end{eqnarray}}
\def \nonumber {\nonumber}
\def\mathcal{\mathcal}
\def\mathfrak{\mathfrak}
\def\partial{\partial}
\newcommand{\fracs}[2]{{\textstyle{#1\over #2}}}
\newcommand{\bs}[1]{\ensuremath{\boldsymbol{#1}}}
\def\mathcal{M}{\mathcal{M}}
\def\mathop{\mathrm Im }{\mathop{\mathrm Im }}
\def\mathop{\mathrm Re }{\mathop{\mathrm Re }}
\def\frac{1}{2}{\frac{1}{2}}
\def\mathbb{Z}{\mathbb{Z}}
\def\mathbb{F}{\mathbb{F}}
\def\mathbb{Q}{\mathbb{Q}}
\def\mathbb{C}{\mathbb{C}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{K}{\mathcal{K}}
\def\mathfrak{g}{\mathfrak{g}}
\def\mathcal{L}{\mathcal{L}}
\def\mathbb{R}{\mathbb{R}}
\def\mathcal{N}{\mathcal{N}}
\def\mathbb{P}{\mathbb{P}}
\def\tilde{C}{\tilde{C}}
\def\tilde{\alpha}{\tilde{\alpha}}
\defH-V{H-V}
\deft{t}
\def{\mathrm{Tr}}{{\mathrm{Tr}}}
\def{\mathrm{tr}}{{\mathrm{tr}}}
\def\mathrm{ord}{\mathrm{ord}}
\def\text{Spec}{\text{Spec}}
\def\mathcal{O}{\mathcal{O}}
\defG{G}
\def\phi_0{\phi_0}
\def\phi_1{\phi_1}
\def\nu{\nu}
\def\psi_2{\psi_2}
\def\phi_2{\phi_2}
\def\psi_1{\psi_1}
\def\psi_3{\psi_3}
\def\xi{\xi}
\def\phit{\phi_0}
\def\phif{\phi_1}
\def\rho{\rho}
\def\tilde{\F}{\tilde{\mathbb{F}}}
\def{\mathfrak{e}}{{\mathfrak{e}}}
\def{\mathfrak{so}}{{\mathfrak{so}}}
\def{\mathfrak{su}}{{\mathfrak{su}}}
\def{\mathfrak{sp}}{{\mathfrak{sp}}}
\def{\mathfrak{f}}{{\mathfrak{f}}}
\def{\mathfrak{g}}{{\mathfrak{g}}}
\def{\mathfrak{u}}{{\mathfrak{u}}}
\def{\mathfrak{h}}{{\mathfrak{h}}}
\def{\mathfrak{a}}{{\mathfrak{a}}}
\def{\mathfrak{b}}{{\mathfrak{b}}}
\def\text{min}{\text{min}}
\def\text{max}{\text{max}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{F}}{\mathbb{F}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\overline}{\overline}
\newcommand{\vev}[1]{\left\langle#1\right\rangle}
\newcommand{\dsz}[2]{\left\langle#1,#2\right\rangle}
\newcommand{\ensuremath{\mathop{\null {\mathbb{P}}}}\nolimits}{\ensuremath{\mathop{\null {\mathbb{P}}}}\nolimits}
\newcommand{\ensuremath{\mathcal{K}}}{\ensuremath{\mathcal{K}}}
\newcommand{\ensuremath{{dP_0}}}{\ensuremath{{dP_0}}}
\newcommand{\ensuremath{{M_{dP_0}}}}{\ensuremath{{M_{dP_0}}}}
\newcommand{\ensuremath{{Y^{(3,0)}}}}{\ensuremath{{Y^{(3,0)}}}}
\newcommand{\ensuremath{{\left(dP_0\right)^3}}}{\ensuremath{{\left(dP_0\right)^3}}}
\newcommand{\ensuremath{{M_{\left(dP_0\right)^3}}}}{\ensuremath{{M_{\left(dP_0\right)^3}}}}
\newcommand{\ensuremath{\hat{x}}}{\ensuremath{\hat{x}}}
\newcommand{\ensuremath{\hat{D}}}{\ensuremath{\hat{D}}}
\newcommand{\ensuremath{\tilde{D}}}{\ensuremath{\tilde{D}}}
\newcommand{\ensuremath{\tilde{t}}}{\ensuremath{\tilde{t}}}
\newcommand{\ensuremath{\mathcal{O}}}{\ensuremath{\mathcal{O}}}
\DeclareMathOperator{\Cl}{Cl}
\DeclareMathOperator{\conv}{conv}
\DeclareMathOperator{\Span}{span}
\DeclareMathOperator{\Vol}{Vol}
\DeclareMathOperator{\dVol}{dVol}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\ch}{ch}
\DeclareMathOperator{\Td}{Td}
\DeclareMathOperator{\Ext}{Ext}
\newcommand{\ensuremath{\Delta}}{\ensuremath{\Delta}}
\newcommand{\ensuremath{\nabla}}{\ensuremath{\nabla}}
\newcommand{\ensuremath{\nabla^\circ}}{\ensuremath{\nabla^\circ}}
\newcommand{\ensuremath{v^{(B)}} }{\ensuremath{v^{(B)}} }
\newcommand{\ensuremath{m^{(2)}} }{\ensuremath{m^{(2)}} }
\newcommand{\ensuremath{v^{(F)}} }{\ensuremath{v^{(F)}} }
\newcommand{\ensuremath{m^{(F)}} }{\ensuremath{m^{(F)}} }
\def0{0}
\defH_{\rm u}{H_{\rm u}}
\newcommand{\grid}{
\thicklines
\multiput(-200,-200)(0,50){9}{\line(1, 0){400}}
\multiput(-200,-200)(50,0){9}{\line(0, 1){400}}
\thinlines
\multiput(-200,-200)(0,10){41}{\line(1, 0){400}}
\multiput(-200,-200)(10,0){41}{\line(0, 1){400}}
\put(0,0){\circle{5}}
}
\definecolor{Red}{rgb}{1.00, 0.00, 0.00}
\defh^{1, 1}{h^{1, 1}}
\defh^{2, 1}{h^{2, 1}}
\newcommand{\eq}[1]{(\ref{#1})}
\newcommand{\subsubsubsection}[1]{\noindent {\bf #1}}
\newcommand{\wati}[1]{\footnote{\textbf{WT:\ #1}}}
\newcommand{\yh}[1]{\footnote{\textbf{YH:\ #1}}}
\newcommand{\yc}[1]{{\color{blue}#1}}
\newcommand{\ycr}[1]{{\color{red}#1}}
\newcommand{\yw}[1]{{\color{white}#1}}
\newcommand{\clean}{
\renewcommand{\wati}[1]{}
\renewcommand{\yh}[1]{}
}
\title{
On the prevalence of elliptic and genus one fibrations among toric
hypersurface Calabi-Yau threefolds
}
\author{Yu-Chien Huang,}
\author{Washington Taylor}
\affiliation{Center for Theoretical Physics\\
Department of Physics\\
Massachusetts Institute of Technology\\
77 Massachusetts Avenue\\
Cambridge, MA 02139, USA}
\emailAdd{{\tt yc\_huang} {\rm at} {\tt mit.edu}}
\emailAdd{{\tt wati} {\rm at} {\tt mit.edu}}
\preprint{MIT-CTP-4993}
\abstract{
We systematically analyze
the fibration structure of toric hypersurface Calabi-Yau threefolds
with large and small Hodge numbers.
We show that there are only four such Calabi-Yau threefolds with
$h^{1, 1} \geq 140$ or $h^{2, 1} \geq 140$ that do not have manifest
elliptic or genus one fibers arising from a fibration of the
associated 4D polytope. There is a
genus one fibration whenever
either Hodge number is 150 or greater, and an elliptic fibration when
either Hodge number is 228 or greater. We find that for small
$h^{1, 1}$ the fraction of polytopes in the KS database that do not
have a genus one or elliptic fibration drops exponentially
as $h^{1,1}$ increases.
We also consider the different toric fiber types that arise in the
polytopes of elliptic Calabi-Yau threefolds.}
\begin{document}
\maketitle
\flushbottom
\section{Introduction}
Calabi-Yau manifolds play a central role in string theory; these
geometric spaces can describe extra dimensions of space-time in
supersymmetric ``compactifications'' of the theory.
The analysis of Calabi-Yau manifolds has been a major focus of the work of
mathematicians and physicists since this connection was first
understood
\cite{chsw}.
Nonetheless, it is still not known whether the number of distinct
topological types of Calabi-Yau threefolds is finite or infinite.
A large class of Calabi-Yau threefolds can be described as
hypersurfaces in toric varieties; these were systematically classified
by Kreuzer and Skarke \cite{Kreuzer:2000xy, database} and represent most of the
explicitly known Calabi-Yau threefolds at large Hodge numbers.
A specific class of Calabi-Yau manifolds that are of particular
mathematical and physical interest are those that admit
a genus one or elliptic
fibration (an elliptic fibration is a genus one fibration
with a global section).
Elliptically fibered Calabi-Yau manifolds have additional structure
that makes them easier to understand mathematically, and they play a
central role in the approach to string theory known as ``F-theory''
\cite{Vafa-F-theory, Morrison-Vafa}.
Genus one fibrations are also relevant in F-theory in the context of
discrete gauge groups, as described in
e.g. \cite{Braun:2014oya, Morrison-WT-sections, aggk,
Mayrhofer:2014opa, Cvetic:2015moa}; see
\cite{Timo-TASI, Cvetic:2018bni} for further background and references on
this and other F-theory-related issues.
Unlike the general class of Calabi-Yau threefolds, it is known that
the number of distinct topological types of elliptic and genus one
Calabi-Yau
threefolds is finite \cite{Gross} (see also \cite{Grassi} for earlier
work that laid the foundation for this proof, and \cite{KMT-2} for a
more constructive and explicit argument for finiteness).
In recent years, an
increasing body of circumstantial evidence has suggested that in fact
a large fraction of the known Calabi-Yau manifolds admit an elliptic
or genus one fibration. A direct analysis of the related structure of
K3 fibrations for many of the toric hypersurface constructions in the
Kreuzer-Skarke database was carried out in \cite{Candelas-cs},
demonstrating directly the prevalence of fibrations by
smaller-dimensional Calabi-Yau fibers among known Calabi-Yau
threefolds. The study of F-theory has led to a systematic methodology
for constructing and classifying elliptic Calabi-Yau threefolds
\cite{clusters, toric, Hodge, Wang-WT, Johnson-WT,
Johnson:2016qar}. Comparing the structure of geometries constructed
in this way to the Kreuzer-Skarke database shows that at large Hodge
numbers, virtually all Calabi-Yau threefolds that are known are in
fact elliptic. In a companion paper to this one
\cite{Huang-Taylor-long}, we use this approach to show that all Hodge
numbers with $h^{1, 1}$ or $h^{2, 1}$ greater than or equal to 240 that
arise in the Kreuzer-Skarke database are realized explicitly by
elliptic fibration constructions over toric or related base surfaces.
Finally, from a somewhat different point of view the analysis of
complete intersection Calabi-Yau manifolds and generalizations thereof
has shown that these classes of Calabi-Yau threefolds and fourfolds
are also overwhelmingly dominated by elliptic and genus one fibrations
\cite{Gray-hl1, Gray-hl, Anderson-aggl, Anderson-ggl, aggl-2, aggl-3}.
In this paper we carry out a direct analysis of the toric hypersurface
Calabi-Yau manifolds in the Kreuzer-Skarke database. There are 16
reflexive 2D polytopes that can act as fibers of a 4D polytope
describing a Calabi-Yau threefold; the presence of any of these fibers
in the 4D polytope indicates that the corresponding Calabi-Yau
threefold hypersurface is genus one or elliptically fibered. We systematically consider all
polytopes in the Kreuzer-Skarke database that are associated with
Calabi-Yau threefolds with one or both Hodge numbers at least 140. We
show that with only four exceptions these Calabi-Yau threefolds all
admit an explicit elliptic or more general genus one fibration that can be seen
from the toric structure of the polytope. We furthermore find that
for toric hypersurface Calabi-Yau threefolds with small $h^{1,1}$, the fraction that lack a
genus one or elliptic fibration decreases roughly exponentially with
$h^{1, 1}$. Together these
results strongly support the notion that genus one and elliptic
fibrations are quite generic among Calabi-Yau threefolds.
The outline of this paper is as follows: In Section \ref{sec:fibers}
we describe the 16 types of toric fibers of the polytope that can lead
to a
genus one or elliptic fibration of the hypersurface Calabi-Yau and our
methodology for analyzing the fibration structure of the polytopes.
In Section \ref{sec:results}, we give our results on those Calabi-Yau
threefolds with the largest Hodge numbers that do not admit an
explicit elliptic or genus one fibration in the polytope description, as well as
some results on the
distribution of fiber types and multiple
fibrations.
In Section \ref{sec:prevalence} we discuss some simple aspects of the
likelihood of the existence of fibrations and compare to the observed
frequency of fibrations in the KS database at small $h^{1,1}$.
Section \ref{sec:conclusions} contains some concluding
remarks.
Along with this paper, we are making
the results of the fiber analysis of polytopes in the Kreuzer-Skarke
database associated with Calabi-Yau threefolds having Hodge numbers
$h^{1, 1}\geq 140$ or $h^{2, 1}\geq 140$ available in Mathematica
form
\cite{data}.
\section{Identifying toric fibers}
\label{sec:fibers}
A fairly comprehensive introductory review of the toric hypersurface
construction and how elliptic fibrations are described in this context
is given in the companion paper \cite{Huang-Taylor-long}, in which we
describe in much more detail the structure of the elliptic fibrations
for Calabi-Yau threefolds $X$ with very large Hodge numbers ($h^{1, 1}
(X)\geq 240$ or $h^{2, 1} (X)\geq 240$). Here we give only a very brief summary of the
essential points.
\subsection{Toric hypersurfaces and the 16 reflexive 2D fibers}
The basic framework for understanding Calabi-Yau manifolds through
hypersurfaces in toric varieties was developed by Batyrev
\cite{Batyrev}.
A {\it lattice polytope}
$\ensuremath{\nabla}$
is defined to be the set of lattice points in
$N =\mathbb{Z}^n$ that are contained within the convex hull of a finite set of
vertices $v_i \in N$.
The dual of a polytope $\nabla$ is defined to be
\begin{equation}
\nabla^*=\{u\in M_\mathbb{R}= M\otimes \mathbb{R}: \langle u,v\rangle\geq-1, \forall v\in \nabla\},
\label{dual}
\end{equation}
where $M = N^*=\Hom(N,\mathbb{Z})$ is the dual lattice. A lattice polytope
$\nabla\subset N$ containing the origin is {\it reflexive} if its dual
polytope is also a lattice polytope.
When $\ensuremath{\nabla}$ is reflexive, we denote the dual polytope by $\ensuremath{\Delta} =\ensuremath{\nabla}^*$.
The elements of the dual polytope $\ensuremath{\Delta}$ can be associated with
monomials in a section of the anti-canonical bundle of a toric variety
associated to $\ensuremath{\nabla}$. A section of this bundle defines a hypersurface
in the toric variety that is a Calabi-Yau manifold of dimension $n-1$.
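In two dimensions, reflexivity in the sense of \eqref{dual} can be tested directly: each edge of $\ensuremath{\nabla}$ with endpoints $v_i, v_{i+1}$ determines one vertex of the dual by solving $\langle u, v_i\rangle = \langle u, v_{i+1}\rangle = -1$, and $\ensuremath{\nabla}$ is reflexive exactly when all such dual vertices are lattice points. The following Python sketch (our own illustration, assuming the vertices are given in cyclic order with the origin in the interior) checks this for the $F_1 = \mathbb{P}^2$ fiber polytope using exact rational arithmetic.

```python
from fractions import Fraction

def dual_vertices(verts):
    """One dual vertex per edge (v_i, v_{i+1}) of a 2D lattice polytope
    with the origin in its interior: solve <u, v_i> = <u, v_{i+1}> = -1."""
    n = len(verts)
    dual = []
    for i in range(n):
        (a, b), (c, d) = verts[i], verts[(i + 1) % n]
        det = a * d - b * c              # nonzero for an edge not through 0
        u1 = Fraction(b - d, det)        # Cramer's rule for the system
        u2 = Fraction(c - a, det)        #   a*u1 + b*u2 = c*u1 + d*u2 = -1
        dual.append((u1, u2))
    return dual

def is_reflexive(verts):
    """Reflexive <=> every dual vertex is a lattice point."""
    return all(u.denominator == 1 for uv in dual_vertices(verts) for u in uv)

p2 = [(1, 0), (0, 1), (-1, -1)]          # vertices of the F_1 = P^2 polytope
print(dual_vertices(p2), is_reflexive(p2))
```

Running the check on $[(1,0),(0,1),(-1,-1)]$ reproduces the familiar dual vertices $(-1,-1)$, $(2,-1)$, $(-1,2)$, so the polytope is reflexive; stretching one vertex to $(2,0)$ produces a half-integral dual vertex and the test fails.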
When the polytope $\ensuremath{\nabla}$ has a 2D subpolytope $\ensuremath{\nabla}_2$ that is also
reflexive, the associated Calabi-Yau manifold has a genus one
fibration. There are 16 distinct reflexive 2D polytopes, listed in
Appendix~\ref{sec:appendix-fibers}.
These fibers are analyzed in the language of polytope ``tops''
\cite{tops} in \cite{Bouchard-Skarke}.
The structure of the genus one and elliptic fibrations
associated with each of these 16 fibers is studied in some detail in
the F-theory context in \cite{Braun:2011ux, BGK-geometric, Klevers-16}.
Of the 16 reflexive 2D polytopes listed in Appendix \ref{sec:appendix-fibers}, 13 are
always associated with elliptic fibrations. This can be seen, following
\cite{BGK-geometric}, by observing that the anticanonical class $- K_2$
of the toric 2D variety associated with a given $\ensuremath{\nabla}_2$ is $\sum C_i$
where $C_i$ are the toric curves associated with
rays in a toric fan for $\ensuremath{\nabla}_2$. The intersection of the curve $C_i$ with
the genus one fiber associated with the vanishing locus of a section
of $- K_2$ is thus $C_i \cdot (- K_2) = 2+ C_i \cdot C_i$, so $C_i$
defines a section associated with a single point on a generic fiber
only for a curve of self-intersection $C_i \cdot C_i = -1$. The three
fibers $F_1, F_2, F_4$ are associated with the
weak Fano surfaces $\mathbb{P}^2,\mathbb{F}_0 =\mathbb{P}^1 \times\mathbb{P}^1$, and $\mathbb{F}_2 =\mathbb{P}^2 [1, 1, 2]$,
which have no $-1$ curves, while the other 13 fibers $F_i$ all have $-1$ curves. Thus, polytopes $\ensuremath{\nabla}$ with any fiber
$\ensuremath{\nabla}_2$ that is $F_n, n \notin\{1, 2, 4\}$ give CY3s with elliptic
fibrations, while those $\ensuremath{\nabla}$ with only fibers of types $F_1, F_2,
F_4$ are genus one fibered but may not be elliptically fibered.
The basic goal of this paper is a systematic scan through the
Kreuzer-Skarke database to determine which reflexive polytopes
associated with Calabi-Yau threefolds that have large Hodge numbers
or small $h^{1,1}$
have toric reflexive 2D fibers that indicate the existence of an
elliptic or genus one fibration for the associated Calabi-Yau
threefold.
Note that this analysis only identifies elliptic and genus one
fibrations that are manifest in the polytope structure. As discussed
further in \S\ref{sec:prevalence}, a more comprehensive analysis of
the fibration structure of a given Calabi-Yau threefold can be carried
out using methods analogous to those used in \cite{aggl-3}.
\subsection{Algorithm for checking a polytope for fibrations}
\label{sec:algorithm}
We use an algorithm similar to the one we used in
\cite{Huang-Taylor-long} to check for reflexive 2D fibers of a 4D reflexive polytope. Aside from a small tweak to optimize
efficiency, this is essentially the approach outlined in
\cite{BGK-geometric}.
The basic idea is to check a given polytope
for each of the possible 16 reflexive subpolytopes. For a given
polytope $\ensuremath{\nabla}$ and potential fiber polytope $\ensuremath{\nabla}_2$, we
proceed in the
following two steps:
\begin{enumerate}
\item To increase the efficiency of the analysis we start by determining
the subset $S$
of the lattice points in $\ensuremath{\nabla}$ that could possibly be contained in a
fiber of the form $\ensuremath{\nabla}_2$, using a simple criterion. For each fixed
fiber type $\ensuremath{\nabla}_2$, there is a maximum possible value $I_{\rm max}$
of the inner
product $\ensuremath{v^{(F)}} \cdot m$ for any $\ensuremath{v^{(F)}} \in\ensuremath{\nabla}_2, m
\in\ensuremath{\Delta}_2$.
For example, for the 2D $\mathbb{P}^{2,3,1}$ polytope $(F_{10})$,
$I_\text{max} = 5$.
The values of $I_\text{max}$ for each of the reflexive 2D polytopes
$\nabla_2$ are listed in Appendix~\ref{sec:appendix-fibers}. When
$\ensuremath{\nabla}_2$ is a fiber of $\ensuremath{\nabla}$, which implies that there is a
projection from $\ensuremath{\Delta}$ to $\ensuremath{\Delta}_2$, $I_{\rm max}$ is also the maximum
possible value of the inner product $\ensuremath{v^{(F)}} \cdot m$ for any $m \in\ensuremath{\Delta}$.
Thus,
we define the set $S$ to be the set of lattice points $v \in\ensuremath{\nabla}$ such
that $v \cdot m \leq I_\text{max}$ for all vertices $m$ of $\ensuremath{\Delta}$. Particularly
for polytopes $\ensuremath{\nabla}$ that contain many lattice points, generally
associated with Calabi-Yau threefolds with large $h^{1, 1}$, this step
significantly decreases the time needed for the algorithm.
\item We then consider each pair of vectors $v, w$ in $S$ and check if the
intersection of $\ensuremath{\nabla}$ with the plane spanned by $v, w$ consists of
precisely a set of lattice points that define the 2D polytope
$\ensuremath{\nabla}_2$. If such a pair of vectors exists then $\ensuremath{\nabla}$ has a fiber
$\ensuremath{\nabla}_2$ and the associated Calabi-Yau threefold has a genus one
(and in many cases elliptic) fibration
structure defined by this fiber type.
\end{enumerate}
In practice,
we implement
these steps directly only to check for the presence of the minimal
2D subpolytopes $F_1, F_2, F_4$ within a 2D plane; all the other 2D
reflexive polytopes contain the points of $F_1$ as a subset (in some
basis).
These three cases use the values $I_{\rm max} = 2, 1, 3$ respectively
as shown in
Appendix~\ref{sec:appendix-fibers}.
The three minimal 2D polytopes do not contain any other 2D reflexive
polytopes, and it requires a minimal number of linear equivalence
relations among the toric divisors to check if these minimal polytopes
are present as a subset of the points in $\ensuremath{\nabla}$
that are in a plane defined by
a non-colinear pair $v, w \in S$:
\begin{itemize}
\item $F_1$: $-(v+w)\in S$
\item $F_2$: $-v, -w\in S$
\item $F_4$: $-(v+w)/2\in S$
\end{itemize}
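To make the procedure concrete, the two steps and the three minimal-fiber conditions just listed can be sketched as follows (an illustrative reimplementation in Python, not the code used for the scan; polytopes are represented simply as sets of integer tuples, and the function names are ours):

```python
# Sketch of the candidate-set filter (step 1) and the minimal-fiber
# pair checks (step 2), using the I_max values quoted in the text.
I_MAX = {"F1": 2, "F2": 1, "F4": 3}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def candidate_set(nabla_pts, delta_verts, i_max):
    """Step 1: keep lattice points v with v.m <= I_max for all vertices m of Delta."""
    return {v for v in nabla_pts if all(dot(v, m) <= i_max for m in delta_verts)}

def colinear(v, w):
    """All 2x2 minors vanish iff v and w span at most a line."""
    d = len(v)
    return all(v[a] * w[b] - v[b] * w[a] == 0
               for a in range(d) for b in range(a + 1, d))

def minimal_fibers(nabla_pts, delta_verts):
    """Step 2: scan non-colinear pairs (v, w) for the three minimal 2D fibers."""
    found = set()
    for name, i_max in I_MAX.items():
        S = candidate_set(nabla_pts, delta_verts, i_max)
        pts = [v for v in S if any(v)]  # drop the origin
        for i, v in enumerate(pts):
            for w in pts[i + 1:]:
                if colinear(v, w):
                    continue
                s = tuple(x + y for x, y in zip(v, w))
                if name == "F1" and tuple(-x for x in s) in S:
                    found.add(name)
                elif name == "F2" and tuple(-x for x in v) in S \
                        and tuple(-x for x in w) in S:
                    found.add(name)
                elif name == "F4" and all(x % 2 == 0 for x in s) \
                        and tuple(-x // 2 for x in s) in S:
                    found.add(name)
    return found
```

This only detects the presence of the minimal subpolytope points; as described in the text, the full fiber type is then determined from the complete intersection of $\ensuremath{\nabla}$ with the 2D plane.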
We could in principle use
this kind of direct check to determine the presence of the larger
subpolytopes as well, though this becomes more complicated for the
other fibers and we proceed slightly more indirectly.
After identifying all the 2D
planes that are spanned by non-colinear
pairs $v, w$ and contain one of the three
minimal 2D subpolytopes, we calculate the intersection of the 4D
polytope with the 2D plane to obtain the full subpolytope that
contains the minimal 2D subpolytope. This intersection can be
determined by identifying all lattice points $x \in \ensuremath{\nabla}$ such that
the $4\times 4$ matrix formed from $x$ together with three other
non-colinear vectors in the 2D plane has rank two.
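Equivalently, $x$ lies in the plane spanned by non-colinear $v, w$ precisely when all $3\times 3$ minors of the matrix with rows $v, w, x$ vanish, which is a convenient form to implement (a sketch of our own, not the paper's code):

```python
from itertools import combinations

def in_plane(x, v, w):
    """True iff the 4D lattice vector x lies in the plane spanned by the
    non-colinear vectors v, w: every 3x3 minor of the 3x4 matrix with rows
    v, w, x must vanish (so any 4x4 completion has rank two)."""
    for cols in combinations(range(4), 3):
        a, b, c = ([r[j] for j in cols] for r in (v, w, x))
        det = (a[0] * (b[1] * c[2] - b[2] * c[1])
               - a[1] * (b[0] * c[2] - b[2] * c[0])
               + a[2] * (b[0] * c[1] - b[1] * c[0]))
        if det != 0:
            return False
    return True
```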
Note that this
intersection must give a 2D reflexive polytope, since there can only
be one interior point in the 2D fiber polytope as any other interior
point besides the origin would also be an interior point of the full
4D polytope, which is not possible if the 4D polytope is reflexive.
Let
us call the sets of subpolytopes containing $F_1, F_2$, and $F_4$
respectively ${\cal S}_1, {\cal S}_2$, and ${\cal S}_4$.
We can then efficiently determine which fiber type arises in each case
by some simple checks.
Observing that all the 2D polytopes other than the three minimal ones contain the $F_1$ polytope, we immediately have
\begin{itemize}
\item $\set{\ensuremath{\nabla}_2^{F_2}}={\cal S}_2 \setminus {\cal S}_1$,
\item $\set{\ensuremath{\nabla}_2^{F_4}}={\cal S}_4 \setminus {\cal S}_1$.
\end{itemize}
Then we group the fibers associated with the
rest of the 2D polytopes,
which are all in ${\cal S}_1$, by the number of lattice points:
\begin{itemize}
\item 5 points: $F_3$
\item 6 points: $F_5, F_6$
\item 7 points: $F_7, F_8, F_9, F_{10}$
\item 8 points: $F_{11}, F_{12}$
\item 9 points: $F_{13}, F_{14}, F_{15}$
\item 10 points: $F_{16}$
\end{itemize}
This immediately fixes the fibers $F_3$ and $F_{16}$. To distinguish
the specific fiber types for the remaining four groups a number of approaches could be used. We
have simply used a projection to compute the self-intersections of
each curve in a given fiber and the sequence of these
self-intersections. (Note that in a toric surface, the
self-intersection of the curve associated with the vector $v_i$ is $m$,
where $v_{i-1} + v_{i +1} = -mv_i$.)
By simply counting the number
of $-2$ curves we can identify $F_5$ through $F_{13}$.
Finally, $F_{14}, F_{15}$ have the same
numbers of curves of each self-intersection, so we use the order of
the self-intersections of the curves in the projection to distinguish
these two subpolytopes.
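As an illustration, the following sketch (our own, not the code used for the scan) computes the self-intersection sequence of a smooth 2D toric surface from counterclockwise-ordered fan rays; the $F_{10}$ ray ordering used in the sanity check below is one consistent choice, assuming the vertex conventions of Appendix \ref{sec:appendix-fibers}:

```python
def self_intersections(rays):
    """Self-intersections of the toric curves of a smooth 2D toric surface,
    from counterclockwise-ordered fan rays via v_{i-1} + v_{i+1} = -m v_i."""
    n, out = len(rays), []
    for i, v in enumerate(rays):
        s = tuple(a + b for a, b in zip(rays[i - 1], rays[(i + 1) % n]))
        k = 0 if v[0] != 0 else 1
        m = -s[k] // v[k]
        # consistency check: s must equal -m v (fails if fan not smooth/ordered)
        assert all(a + m * b == 0 for a, b in zip(s, v))
        out.append(m)
    return out
```

For $\mathbb{P}^2$ this returns $[1, 1, 1]$; for the fan over the boundary points of $F_{10}$ it returns a sequence with three $-2$ curves and one $-1$ curve.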
\subsection{Stacked fibrations and negative self-intersection curves
in the base}
\label{ncb}
\begin{table}[]
\centering
\begin{tabular}{|c|c|cccccccccccccccc|}
\hline
$C^2$ & ord $\sigma_{n=1,2,3,4,(5),6}$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline
$-3$ & $\set{1, 1, 1, 2, (2), 2}$ &3& 4& 4& 3& 5& 4& 6& 4& 5& 3 & 4 & 5 & 3 & 4 & 4 & 3 \\ \hline
$-4$ & $\set{1, 1, 2, 2, (3), 3}$ &3&4 & 4& 3& 5& 4& 6& 4& 5& 3& 4 & 5 & 3 & 4 & 4 & 3 \\ \hline
$-5$ & $\set{1, 2, 2, 3, (3), 4}$ &3& & 2& 2& 1& 3& & 1& 2 & 2 & 2 & 1 & 2 & 2 & & 3 \\ \hline
$-6$ & $\set{1, 2, 2, 3, (4), 4}$ &3& & 2& 2& 1& 3& & 1 &2 & 2 & 2 & 1 & 2 & 2 & & 3 \\ \hline
$-7$ & $\set{1, 2, 3, 3, (4), 5}$ & & & & 2& & 1 & & 1 & & 1 & 1 & & 2 & & & \\ \hline
$-8$ & $\set{1, 2, 3, 3, (4), 5}$ & & & & 2& & 1 & & 1 & & 1 & 1 & & 2 & & & \\ \hline
$-9$ & $\set{1, 2, 3, 4, (4), 5}$ & & & & & & & & & & 1 & & & & & & \\ \hline
$-10$ & $\set{1, 2, 3, 4, (4), 5}$ & & & & & & & & & & 1 & & & & & & \\ \hline
$-11$ & $\set{1, 2, 3, 4, (5), 5}$ & & & & & & & & & & 1 & & & & & & \\ \hline
$-12$ & $\set{1, 2, 3, 4, (5), 5}$ & & & & & & & & & & 1 & & & & & & \\ \hline
$-13$ & $\set{1, 2, 3, 4, (5), 6}$ & & & & & & & & & & & & & & & & \\ \hline
\end{tabular}
\caption{\footnotesize Curves $C$ with self-intersection
$C\cdot C$ that are allowed in the base of a stacked $F$-fibered
polytope for the 16 fiber
types $F$. The numbers below the labels of the 16 fiber types count the
numbers of the vertices of $F$
that give vertex stacked-form fibrations where the corresponding curve
can appear in the base. (Note that $-3$ and $-4$ curves are allowed
in all cases, so the first and second rows give the total number
of the vertices of a given fiber, and the most negative curve that
can occur for a given fiber corresponds to the position of the last non-empty entry in
the column.) The second column gives the orders of vanishing of
$\sigma_n\in\mathcal{O}(-n K)$ along $C$, $n=1,2,3,4,(5),6$ (none of
the fibered polytopes has sections in $\mathcal{O}(-n K)$ for $n\geq 7$
or $n=5$). A $(4,6)$ singularity arises along the whole curve
unless there exists a section $\sigma_n\in\mathcal{O}(-n K)$ such that
ord$_C(\sigma_n)< n$. The existence of such a section depends on the
fiber type and the specified vertex of the base used for the stacking.
Curves with $-13\leq C\cdot C\leq -3$ are considered (while curves
$C^2\geq-2$ are always allowed since $\set{\text{ord}_C(\sigma_n)\rvert
n=1,2,3,4,(5),6}=\set{0,0,0,0,(0),0}$, there is always a
(4,6) singularity along
the whole curve when $C^2\leq-13$ since ord$_C(\sigma_n)=n$ for all
$n=1,2,3,4,5,6$).
}
\label{allowed}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$m$ & $-3$ & $-4$ & $-5$ & $-6$ & $-7$ & $-8$ & $-9$ & $-10$ & $-11$ & $-12$ & $-13$ \\ \hline
min($n$) & 2 & 2 & 3 & 3 & 4 & 4 & 6 & 6 & 6 & 6 & - \\ \hline
\end{tabular}
\caption{\footnotesize For each $m$,
the minimal value of $n$ such that a section
$\sigma_n\in\mathcal{O}(-nK_B)$ exists preventing $(4,6)$ points over a
curve of self-intersection $m$. Note that since there are no $\sigma_5$s
in any cases (see the third column in Table \ref{models}), min($n$)
jumps from $4$ to $6$ between $m=-8$ and $m=-9$.}
\label{translatentom}
\end{table}
In the companion paper \cite{Huang-Taylor-long}, we have found that at
large Hodge numbers many of the polytopes in the KS database belong to
a particular ``standard stacking'' class of $\mathbb{P}^{2,3,1}$ fiber type
($F_{10}$) fibrations over toric base surfaces, which are
$F_{10}$ fibrations where all rays in the base are stacked over a
specific vertex $v_s$ of $F_{10}$. This simple class of fibrations corresponds naturally to Tate-form Weierstrass
models over the given base, which take the form $y^2 + a_1 yx + a_3y =
x^3 + a_2x^2 + a_4x + a_6$. In this paper we systematically consider
the distribution of different fiber types, and also analyze which of
the $\mathbb{P}^{2,3,1}$ fibrations are of the ``standard stacking''
type. As background for these analyses, we describe in this section
the more general ``stacked'' form of polytope fibrations and perform some further
analysis of which stacked fibration types can occur over bases with
curves of given self-intersection; since certain fibers cannot arise in
fibrations over bases with extremely negative self-intersection curves
(at least in simple stacking fibrations),
this helps to explain the dominance
of $\mathbb{P}^{2,3,1}$ fibers at large $h^{1, 1}$.
\subsubsection{Stacked fibrations}
\label{sec:stacked}
As discussed in more detail in \cite{Kreuzer:1997zg, Huang-Taylor-long},
the presence of a reflexive fiber $F =\nabla_2 \subset\nabla$ gives
rise to a projection map $\pi: \nabla \rightarrow\mathbb{Z}^2$, where $\pi (F)
= 0$, associated with a genus one or elliptic fibration of the
Calabi-Yau hypersurface $X$ over a
toric complex surface base $B$.
The ``stacked'' form of a fibration refers to a polytope in which the
rays of the base all have pre-images under $\pi$ that
lie in a plane in $\nabla$ passing through one of the
points in the fiber polytope $\ensuremath{\nabla}_2$. Specifically, a polytope
$\ensuremath{\nabla}$ that is in the stacked form can always be put into coordinates
so that the lattice points in $\ensuremath{\nabla}$ contain a subset
\begin{equation}
\{((\ensuremath{v^{(B)}} _i)_{1,2};(\ensuremath{v^{(F)}} _s)_{1,2})\rvert v_{i}^{(B)}
\in \{\text{vertex rays in } \Sigma_B\} \}\cup
\{(0,0,(\ensuremath{v^{(F)}} _i)_{1,2})\rvert \ensuremath{v^{(F)}} _i
\in \{\text{vertices of } \ensuremath{\nabla}_2\}\},
\label{pile}
\end{equation}
where $\Sigma_B$ is the toric fan of the base $B$ and $\ensuremath{v^{(F)}} _s$ is a specified
point of the fiber subpolytope $\ensuremath{\nabla}_2$.
We refer to such polytopes as $\ensuremath{v^{(F)}} _s$ stacked $F$-fibered polytopes.
In some contexts it may be useful to focus attention on
the stacked fibrations where the point $\ensuremath{v^{(F)}} _s$ is a vertex of $\ensuremath{\nabla}_2$,
as these represent the extreme cases of
stacked fibrations, and have some particularly simple
properties\footnote{In particular, the analysis of \S 6.2 of
\cite{Huang-Taylor-long} can be easily generalized to show that a
fibration has a vertex stacking on $\ensuremath{v^{(F)}} _s \in\ensuremath{\nabla}_2$ iff there is a single
monomial over every point in the dual face of $\ensuremath{\Delta}_2$ and these
monomials all lie in a linear subspace of $\ensuremath{\Delta}$.}.
We
can refer to these as ``vertex stacked'' fibrations.
The {\it standard} $\mathbb{P}^{2,3,1}$ fibrations discussed in
\cite{Huang-Taylor-long}
(sometimes there called ``standard stacking'' fibrations)
refer to the cases of stacked fibrations where the fiber is $F_{10}$ and the
specified stacking point is the vertex $v_s^{(F)} = (-3, -2)$.\footnote{Note
that in \cite{Huang-Taylor-long}, we used a different convention for
$\mathbb{P}^{2, 3, 1}$, with slightly different coordinates
from the ones we use here, so that the
vertex in the notation of that paper is $v_s^{(F)} = (2,
3)$.}
These are based on a standard type of construction
in the toric hypersurface literature (see e.g. \cite{Skarke:1998yk}).
As described in
detail in \cite{Huang-Taylor-long}, in the case of a standard stacking, the monomials in $\ensuremath{\Delta}$ match
naturally
with the set of monomials in the Tate-form Weierstrass
model. Generalizing this analysis gives bounds on
what kinds of curves can be present in the base supporting a stacked
fibration with different fiber types.
\subsubsection{Negative curve bounds}
For any stacked fibration with a given fiber type $F$ and specified point
$v_s^{(F)}$ for the stacking, the monomials in the dual polytope
$\ensuremath{\Delta}$ are sections of various line bundles ${\cal O} (-nK_B)$.
By systematically analyzing the possibilities we see that many fibers
cannot be realized
in stacked fibrations over bases with curves of very negative
self-intersection without giving rise to singularities in the
fibration over these curves that go outside the Kodaira classification
and have no Calabi-Yau resolution.
We analyze this explicitly as follows. To begin with, the lattice
points of the dual polytope $\ensuremath{\Delta}$ of an $F$-fibered polytope $\ensuremath{\nabla}$
are
of the form
\begin{equation}
\set{((\ensuremath{m^{(2)}} )_{1,2};(\ensuremath{m^{(F)}} _j)_{1,2})\rvert \ensuremath{m^{(F)}} _j\in \ensuremath{\Delta}^{(F)}_2; (\ensuremath{m^{(2)}} )_1,
(\ensuremath{m^{(2)}} )_2\in\mathbb{Z}} \,,
\label{dsform}
\end{equation}
where
$\ensuremath{\Delta}_2^{(F)}$ is one of
the 16 dual subpolytopes that are given in detail in
Appendix~\ref{sec:appendix-fibers-dual}.
For a given base $B$, we have the condition
\begin{equation}
\ensuremath{m^{(2)}} \cdot \ensuremath{v^{(B)}} _i\geq -n, \forall i \Leftrightarrow \ensuremath{m^{(2)}} \text{ gives a
section in } \mathcal{O}(-nK_B) \,.
\end{equation}
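This condition makes section counting mechanical: one can enumerate the monomials giving sections of $\mathcal{O}(-nK_B)$ by brute force from the rays of the base fan. A sketch (our own illustration, with a bounding box assumed large enough for the examples at hand):

```python
def sections_of_minus_nK(base_rays, n, box=40):
    """Monomials m2 with m2 . v >= -n for every base ray v, i.e. monomials
    giving sections of O(-n K_B); brute force over a bounding box."""
    return [(x, y) for x in range(-box, box + 1) for y in range(-box, box + 1)
            if all(x * v[0] + y * v[1] >= -n for v in base_rays)]
```

For example, for $\mathbb{P}^2$ this counts the 10 cubic monomials in $\mathcal{O}(-K) = \mathcal{O}(3)$, and for $\mathbb{P}^1\times\mathbb{P}^1$ the 9 sections of $\mathcal{O}(2,2)$.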
Given that $((\ensuremath{v^{(B)}} _i)_{1,2},(\ensuremath{v^{(F)}} _s)_{1,2})\in\ensuremath{\nabla}$ for all $i$ in a
fibration that has the ``stacked'' form (\ref{pile}), the reflexive condition $m \cdot v\geq -1, m
\in\ensuremath{\Delta}, v\in\ensuremath{\nabla}$ implies that a lattice point
$m=((\ensuremath{m^{(2)}} )_{1,2},(\ensuremath{m^{(F)}} _j)_{1,2})\in \ensuremath{\Delta}$ gives a section in
$\mathcal{O}(-(\ensuremath{v^{(F)}} _s\cdot\ensuremath{m^{(F)}} _j+1)K_B)$. (See Figure~\ref{sections} for
examples with the $F_{10}$ fiber type, using the three different
vertices $\ensuremath{v^{(F)}} _s$ of $\ensuremath{\nabla}_2$
as the specified points for three different stackings, including
the ``standard stacking'' in which the monomials over the different
lattice points in $\ensuremath{\Delta}_2$ correspond to sections $a_n$ of different line bundles in the
Tate-form Weierstrass
model.)
Note that the lattice points in $\ensuremath{\Delta}$ that
project to the same lattice point in $\ensuremath{\Delta}_2$ always
give sections that belong to
the same line bundle, since the line bundle depends only on
$m^{(F)}_j$.
This shows that the allowed monomials in any polytope dual to a
stacked
fibration construction over a base $B$ take values as sections of
various line bundles ${\cal O} (-nK_B)$.
For each vertex $\ensuremath{v^{(F)}} _s$ of the 2D polytope $\ensuremath{\nabla}_2$, and for each fiber type $F$,
the number of lattice points in $\ensuremath{\Delta}_2$ corresponding to the resulting line
bundle $\mathcal{O}(-nK)$ is listed in the third column in Table
\ref{models}.
For points $\ensuremath{v^{(F)}} _s$ in $\ensuremath{\nabla}_2$ that are not vertices, the numbers of such
points will interpolate between the vertex values; the largest values
of $n$ are found from vertex stackings.
The line bundles in which the monomials take sections place
constraints on the structure of the base.
The
order of vanishing of a section $\sigma_n\in \mathcal{O}(-nK_B)$ over a
generic point in a rational curve $C$ with self-intersection $m=C\cdot C \leq - 3$
is\footnote{This calculation can be simply done by using the Zariski
decomposition, along the lines of \cite{clusters}.}
\begin{equation}
\text{ord}_C(\sigma_n)=\left\lceil \frac{n (m+2)}{m} \right\rceil.
\end{equation}
The orders of vanishing $\set{\text{ord}_C(\sigma_n)\rvert
n=1,2,3,4,(5),6}$ for each $m$, $-3\geq m\geq -13$, are listed in
the second column in Table \ref{allowed}.
Note that none of
the 16 fiber types gives a section of $\mathcal{O}(-5K_B)$ (see the
third column in Table \ref{models}).
For a Weierstrass model, where the coefficients $f, g$ are sections of
the line bundles ${\cal O} (- 4 K_B)$ and ${\cal O} (- 6 K_B)$, the
Kodaira condition that a singularity have a Calabi-Yau resolution is
that $f, g$ cannot vanish to orders 4 and 6. For the more general
class of fibrations we are considering here, the necessary condition
is that at least one section $\sigma_{n=1,2,3,4, (5), 6}$ exists with
$\text{ord}_C(\sigma_n)<n$. This condition is necessary so
that when the sections are combined to make a Weierstrass form, the
resulting $f$ and $g$, which give sections in $\mathcal{O}(-4K_B)$ and
$\mathcal{O}(-6K_B)$ respectively, do not both vanish to orders $4$ and
$6$ along $C$. Note that as the absolute value
$|m|$ of the self-intersection of the curve $C$ increases, the minimal
$n$ that satisfies $\text{ord}_C(\sigma_n)<n$ is non-decreasing. The
minimum value min($n$) so that this condition is satisfied is listed
for each $m$ in Table \ref{translatentom}. Therefore, given a fiber
type $F$ with a specified point $\ensuremath{v^{(F)}} _s$, the negative curves
in the base that are allowed for a stacking construction using
the point $\ensuremath{v^{(F)}} _s$ that gives a resolvable Calabi-Yau construction are
those for which there exists
a section $\sigma_{n=1,2,3,4,\text{ or } 6}$ such that (1)
$\sigma_n\in \mathcal{O}(-(\ensuremath{v^{(F)}} _s\cdot\ensuremath{m^{(F)}} _j+1)K_B)$ and (2)
ord$_C(\sigma_n)<n$. For each fiber type $\ensuremath{\nabla}_2$, we have considered
the stacking constructions over each vertex. The most negative
self-intersection curve that is allowed in the base for each fiber
type is tabulated in the last non-empty entry in the corresponding column in
Table \ref{allowed}, and a $\ensuremath{v^{(F)}} _s$ that gives rise to stacked
fibrations in which the most
negative curve is allowed, and the corresponding line bundles associated with
lattice points in $\ensuremath{\Delta}_2$ are given in Appendix
\ref{sec:appendix-fibers-dual}.
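The orders of vanishing in Table \ref{allowed} and the thresholds min($n$) in Table \ref{translatentom} follow mechanically from the ceiling formula above; a short sketch in exact integer arithmetic (our own, for verification):

```python
def ord_c(n, m):
    """ord_C(sigma_n) = ceil(n (m+2) / m) for a curve C with C.C = m <= -3,
    computed with an exact integer ceiling division."""
    a = n * (m + 2)
    return -((-a) // m)

def min_n(m, available=(1, 2, 3, 4, 6)):
    """Smallest available n with ord_C(sigma_n) < n; n = 5 is excluded since
    no sigma_5 occurs for any fiber type.  None signals an unavoidable
    (4,6) singularity along C."""
    return next((n for n in available if ord_c(n, m) < n), None)
```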
Note that for any lattice point in $\ensuremath{\Delta}_2$, the largest value of
$n$ such that the corresponding points in $\ensuremath{\Delta}$ are sections of ${\cal
O}(-nK_B)$
arises when the stacking point $\ensuremath{v^{(F)}} _s$ is a vertex, so it is sufficient to consider the maximum $n$
across the possible choices of vertices $\ensuremath{v^{(F)}} _s$.
This analysis shows that any polytope that has the stacked form with a
given fiber type $F$ gives a genus one fibration over a base $B$ in
which the self-intersection of the curves has a lower bound given by
the last nonempty entry in the corresponding column of
Table~\ref{allowed}. For the fiber $F_{10}$, this bound is in fact
more general: it is not possible to find any elliptic fibration with a
smooth Calabi-Yau resolution over a base that contains curves of
self-intersection $C \cdot C <-12$. While we have not proven it for
polytopes that do not have the stacking form described here, it
seems plausible to conjecture that the bounds on curves in the base
for each fiber type given in Table~\ref{allowed} will also hold for
arbitrary fibrations (i.e. for general ``twists'' of the fibration
that do not have the stacking type). We have not encountered any
cases in our analysis that would violate this conjecture. It is also
straightforward to see, using the analysis already done here, that
these curve bounds will still hold when there is a coordinate system
in which each ray of the base has a pre-image lying over some ray $v_F
\in \nabla_2$, even when these rays are not all the same $\ensuremath{v^{(F)}} _s$ as
in the stacking case, since the bound applying to each curve will
match that of some choice of $\ensuremath{v^{(F)}} _s$.
If the more general conjecture is correct, then, for example,
it would follow in general that any
reflexive polytope with a fiber $F_4$ can only have curves in the
base of self-intersection $\geq - 8$, those with a fiber $F_1$ can
only have curves in the base of self-intersection $\geq - 6$,
etc. We leave, however, a general proof of this assertion to further
work.
\subsection{Explicit construction of reflexive polytopes from stackings}
In \cite{Huang-Taylor-long}, we showed that the standard
stacking construction with the fiber $\mathbb{P}^{2,3,1}$, combined with a
large class of Tate-form Weierstrass tunings, can be used to
explicitly construct a large fraction of the reflexive polytopes in
the Kreuzer-Skarke database at large Hodge numbers. The stacking construction
with other fibers can be used similarly to construct other reflexive
polytopes in the KS database.
Explicitly, given the negative curve bounds on the base determined
above, we can construct a stacked $F$-fibered polytope
over $B$ as follows, following a parallel procedure to that
described in \cite{Huang-Taylor-long} for the $\mathbb{P}^{2,3,1}$-fibered
standard stackings: Given a fiber $F$ with a specified ray
$\ensuremath{v^{(F)}} _s$, and a smooth 2D toric base $B$ in which the self-intersections
of all curves are not lower than the negative curve bound
associated with $\ensuremath{v^{(F)}} _s$, we start with the minimal fibered polytope
$\tilde{\ensuremath{\nabla}}\subset N$ (which may not be reflexive) that is the convex hull
of the set in equation (\ref{pile}).
If $\tilde{\ensuremath{\nabla}}$ is reflexive, then we are done; otherwise we adopt
the ``dual of the dual'' procedure used in \cite{Huang-Taylor-long} to
resolve $\tilde{\ensuremath{\nabla}}$: define $\ensuremath{\Delta}^\circ=\text{ convex
hull}((\tilde{\ensuremath{\nabla}})^*\cap M)$. As long as the negative curve bound
is satisfied (no $(4,6)$ curves), $\ensuremath{\Delta}^\circ$ is a reflexive polytope,
and the resolved polytope in $N$ is $\ensuremath{\nabla}\equiv
(\ensuremath{\Delta}^\circ)^*$.
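For lattice-point purposes, the polar dual used in this ``dual of the dual'' step is just $\{m \,:\, m\cdot v\geq -1 \text{ for all } v\}$, so it can be computed by brute force. The following sketch (our own illustration, not code from the paper) runs the operation in 2D on the polytope of $\mathbb{P}^2$ for speed; the same definition applies verbatim in 4D, with a suitably larger search box:

```python
from itertools import product

def dual_lattice_points(points, box=5):
    """Lattice points of the polar dual {m : m . v >= -1 for all v in points},
    found by brute force over a bounding box (assumed large enough here)."""
    dim = len(next(iter(points)))
    return {m for m in product(range(-box, box + 1), repeat=dim)
            if all(sum(a * b for a, b in zip(m, v)) >= -1 for v in points)}

# "Dual of the dual" on the vertices of F_1 (the polytope of P^2):
f1 = {(1, 0), (0, 1), (-1, -1)}
delta = dual_lattice_points(f1)     # lattice points of the dual polytope
nabla = dual_lattice_points(delta)  # recovers the 4 lattice points of F_1
```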
Explicit
examples of $F$-fibered polytopes over Hirzebruch
surfaces $\mathbb{F}_m$ are given in Table \ref{models}, for each fiber type
$F$. The base $\mathbb{F}_m$ is in each case
chosen such that $-m$ saturates the negative curve bound associated
with the specific vertex $\ensuremath{v^{(F)}} _s$ for a given fiber type (see Appendix
\ref{sec:appendix-fibers-dual} for the possible choices of $\ensuremath{v^{(F)}} _s$ for
each fiber type that allow the most negative self-intersection curves
in the base). For
example, the standard stacked $\mathbb{P}^{2,3,1}$-fibered polytopes considered in
\cite{Huang-Taylor-long} have bases stacked over the vertex $(-3,-2)$
of the fiber $F_{10}$ in Appendix \ref{sec:appendix-fibers}, and there
exist sections in $\mathcal{O}(-nK_B)$ for $n=1,2,3,4,6$ (see panel
(c) of Figure \ref{sections}), so models in this class correspond naturally to the Tate-form Weierstrass models where $a_n=\sigma_n$, and the negative curve bound is
$-12$. The model listed in Table \ref{models} is the generic
elliptically fibered CY over $\mathbb{F}_{12}$.
The construction just described above gives the minimal reflexive $F$-fibered polytope
over $B$ that contains the set in equation (\ref{pile}).
While the $F_{10}$ fiber type with $\ensuremath{v^{(F)}} _s=(-3,-2)$ gives the
most generic elliptic Calabi-Yau over any given toric
base $B$ through this construction, using the other fiber types or the
other specified points of $F_{10}$ for stacked
fibration polytopes gives models with enhanced
symmetries (these can include
discrete, abelian, and non-abelian symmetries). Further
tunings of the polytope analogous to Tate-tunings for the standard $\mathbb{P}^{2,3,1}$ polytope
can reduce $\ensuremath{\Delta}$ and enlarge $\ensuremath{\nabla}$, giving a much larger class of
reflexive polytopes for Calabi-Yau threefolds.
The explicit construction of the polytopes corresponding to Tate-tuned models
via polytope tunings of the standard $F_{10}$-fibered polytope with
$\ensuremath{v^{(F)}} _s=(-3,-2)$ was discussed in section 4.3.3 and Appendix A of
\cite{Huang-Taylor-long}. We have not
attempted systematic
polytope tunings for the other fiber types, but in principle one can work out tuning tables analogous to the Tate table for the other fiber types.
\begin{figure}[]
\centering
\begin{tabular}{ccc}
$\ensuremath{v^{(F)}} _s=(1,0)$ & $\ensuremath{v^{(F)}} _s=(0,1)$ & $\ensuremath{v^{(F)}} _s=(-3,-2)$ \\
\includegraphics[width=4.8cm]{F10ver2} &\includegraphics[width=4.8cm]{F10ver1} & \includegraphics[width=4.8cm]{F10ver3}\\
(a)&(b)&(c)
\end{tabular}
\caption{\footnotesize
Different choices of the point $\ensuremath{v^{(F)}} _s$ used to specify a stacking
construction are associated with different
``twists'' of the $F$-fiber bundle over the base $B$. The different
choices of $\ensuremath{v^{(F)}} _s$ for a given fiber type give rise to monomials in
the dual polytope that are sections of different line bundles over the
base, illustrated here for three different choices of $\ensuremath{v^{(F)}} _s$ as
vertices of the fiber $F_{10}= \mathbb{P}^{2,3,1}$.
In the
stacking construction, each lattice point in $\ensuremath{\Delta}_2$ is associated
with a line bundle $\mathcal{O}(-(\ensuremath{v^{(F)}} _s\cdot\ensuremath{m^{(F)}} _j+1)K_B),
\ensuremath{m^{(F)}} _j\in\ensuremath{\Delta}_2$. The dashed lines
are normal to the corresponding $\ensuremath{v^{(F)}} _s$. The lattice points in
$\ensuremath{\Delta}_2$ on the same dashed line are associated with sections of the same line
bundle over the base. (cf. the $F_{10}$ data in Table \ref{allowed}
and Table \ref{models}.)}
\label{sections}
\end{figure}
\section{Results at large Hodge numbers}
\label{sec:results}
We have systematically run the algorithm described in
Section
\ref{sec:algorithm} to check for a manifest elliptic or genus one
fibration realized through a reflexive 2D fiber for each
polytope in the Kreuzer-Skarke database
that gives a Calabi-Yau threefold $X$ with $h^{1, 1} (X)$ or $h^{2, 1}
(X)$
greater than or equal to 140. The number of polytopes
that give rise to Calabi-Yau threefolds with
$h^{1, 1}\geq 140$
is $248305$.
Since the set of reflexive polytopes is mirror symmetric (Hodge
numbers $h^{1, 1}, h^{2, 1}$ are exchanged in going from $\ensuremath{\nabla}
\leftrightarrow\ensuremath{\Delta}$), this is also the number of polytopes with
$h^{2, 1}\geq 140$.
(Note, however, that the mirror of an elliptic Calabi-Yau threefold is
not necessarily elliptic.)
There are $495515$ polytopes
with at least one of the Hodge numbers at least 140, and from these
numbers it follows that the number of polytopes with both Hodge
numbers at least 140 is $1095$.
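Explicitly, this count follows by inclusion-exclusion:
\[
248305 + 248305 - 495515 = 1095 \,.
\]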
While, as described in Section \ref{sec:algorithm}, we have made the
algorithm reasonably
efficient for larger values of $h^{1, 1}$, our
implementation in this initial investigation was in Mathematica, so a
complete analysis of the database using this code was impractical. We
anticipate that in the future a complete analysis of the rest of the database
can be carried out
with a more efficient code, but our focus here is on identifying the
largest values of $h^{1, 1}, h^{2, 1}$ that are associated with
polytopes that give Calabi-Yau threefolds with no manifest elliptic
fiber.
In \S\ref{sec:prevalence} we analyze the distribution of fibrations at
small $h^{1,1}$.
\subsection{Calabi-Yau threefolds without manifest
genus one fibers}
Of the 495515 polytopes analyzed at large Hodge numbers, we found that
only four lacked a 2D reflexive polytope fiber, and thus the other
495511 polytopes all lead to Calabi-Yau threefolds with a manifest
genus one fiber. The Hodge numbers of the four Calabi-Yau threefolds
without a manifest genus one fiber are
\begin{equation}
(h^{1, 1}, h^{2, 1}) =
(1, 149), \hspace*{0.1in}
(1, 145), \hspace*{0.1in}
(7, 143), \hspace*{0.1in} (140, 62) \,.
\label{eq:4-examples}
\end{equation}
(See Figure \ref{fig:four}.) It is of course natural that any Calabi-Yau threefold with $h^{1, 1} =
1$ cannot be elliptically fibered; by the Shioda-Tate-Wazir formula
\cite{stw},
any elliptically fibered Calabi-Yau threefold must have at least
$h^{1, 1} = 2$, with one contribution from the fiber and at least one
more from
$h^{1, 1}$ of the base, which must satisfy
$h^{1, 1} (B) \geq 1$.
We also expect that any genus one fibered CY3 will have at least a
multi-section \cite{Braun:2014oya, Morrison-WT-sections}, so $h^{1,
1}\geq 2$ in these cases for similar reasons.
\begin{figure}
\centering
\includegraphics[width=7cm]{KS}
\caption{\footnotesize The four Hodge pairs in the region
$h^{1,1}\geq140$ or $h^{2,1}\geq140$
associated with
polytopes
without reflexive 2D subpolytopes associated with genus one (including
elliptic) fibers.}
\label{fig:four}
\end{figure}
The examples (1, 145) and (1, 149) are the only Hodge numbers from
polytopes in the Kreuzer-Skarke database with $h^{1, 1} = 1, h^{2, 1}
\geq 140$. Note that the quintic, with Hodge numbers (1, 101), is
another example of a Calabi-Yau threefold with $h^{1, 1}= 1$ that
has no elliptic or genus one fibration.
We list here the
polytope structure of the two examples from
(\ref{eq:4-examples}) that have $h^{1, 1} > 1$, in the form given in
the Kreuzer-Skarke database. M refers to the numbers of lattice
points and vertices of the dual polytope $\ensuremath{\Delta}$, while N refers
to the numbers of lattice points and vertices of the polytope $\ensuremath{\nabla}$, and H refers to the Hodge numbers $h^{1,1}$ and $h^{2,1}$.
The vectors listed are the vertices of the polytope in the $N$
lattice.
The numbers in parentheses for each polytope refer to the position in
the list of polytopes
in the Kreuzer-Skarke database that give CY3s
with those specific Hodge numbers.
\begin{itemize}
\item M:196 5 N:10 5 H:7,143 (1$^\text{st}$/54)
Vertices of $\ensuremath{\nabla}$:
$\set{(-1,4,-1,-2), (-1,-1,1,1), (1,-1,0,0), (-1,-1,0,1), (-1,-1,0,3)}$
\item M:88 8 N:193 9 H:140,62 (6$^\text{th}$/255)
Vertices of $\ensuremath{\nabla}$:
$\set{(-1,2,-1,4), (-1,0,4,-1), (1,-1,-1,-1), (-1,-1,-1,19), (-1,-1,5,1),
(-1,1,0,-1), (-1,1,-1,-1), (-1,-1,-1,-1), (-1,-1,5,-1)}$
\end{itemize}
Note that we have not proven that these Calabi-Yau threefolds do not
have elliptic or genus one fibers, we have just found that such fibers
do not appear in a manifest form from the structure of the polytope.
We leave for further work the question of analyzing non-toric elliptic
or genus one fibration structure of these examples, or others with
smaller Hodge numbers that also lack a manifest genus one fiber; such
an analysis might be carried out using methods similar to those of
\cite{aggl-3}.
\subsection{Calabi-Yau threefolds without manifest elliptic fibers}
Of the 495515 polytopes analyzed, only 384 contain fibers exclusively of
the types $F_1, F_2, F_4$. These cases are associated with genus one fibered
Calabi-Yau threefolds that have no manifest toric section, and
therefore are not necessarily elliptically fibered.
Note that we have not proven that these Calabi-Yau threefolds do not
have elliptic fibers; in fact, many toric hypersurface
Calabi-Yau threefolds have been found to have non-toric fibrations
\cite{BGK-geometric}. It would be interesting to study these examples
further for the presence of non-toric sections.
The largest values of $h^{2, 1}$ and $h^{1, 1}$ for these genus one
fibered Calabi-Yau threefolds without a manifest toric section are
realized by the examples:
\begin{itemize}
\item M:311 5 N:15 5 H:11, 227 (1$^\text{st}$/19)
Vertices of $\ensuremath{\nabla}$:
$\set{(-1,0,4,-3),(-1,2,-1,0),(1,-1,-1,1),(-1,0,-1,1),(-1,0,-1,3)}$
\item M:(80; 81; 81; 82) 8 N:(263; 262; 261; 260) 9 H:194, 56 ((7$^\text{th}$; 8$^\text{th}$;
9$^\text{th}$; 10$^\text{th}$)/52)
Vertices of $\ensuremath{\nabla}$:
\begin{itemize}
\item 7$^\text{th}$ $\set{(-1, 0, 4, -1), (-1,
2, -1, -1), (1, -1, -1, -1), (-1, -1, -1, -1), (-1, -1, 6, -1), \\(-1,
1, 0, 6), (-1, -1, -1, 28), (-1, 1, -1, 10), (-1, -1, 6, 0)}$,\\
\item 8$^\text{th}$ $\set{(-1, 0, 4, -1), (-1,
2, -1, -1), (1, -1, -1, -1), (-1, -1, -1, -1), (-1, -1, 6, -1), \\(-1,
1, 0, 6), (-1, -1, -1, 28), (-1, 0, -1, 19), (-1, -1, 6, 0)}$,\\
\item 9$^\text{th}$ $\set{(-1, 0, 4, -1), (-1,
2, -1, -1), (1, -1, -1, -1), (-1, -1, -1, -1), (-1, -1, 5, -1),\\ (-1,
1, 0, 6), (-1, -1, -1, 28), (-1, 1, -1, 10), (-1, -1, 5, 4)}$,\\
\item 10$^\text{th}$ $\set{(-1, 0, 4, -1), (-1,
2, -1, -1), (1, -1, -1, -1), (-1, -1, -1, -1), (-1, -1, 5, -1), \\(-1,
1, 0, 6), (-1, -1, -1, 28), (-1, 0, -1, 19), (-1, -1, 5, 4)}$
\end{itemize}
\end{itemize}
The fiber type $F_4$ is the only fiber that arises in these five polytopes. In the first case, with Hodge numbers (11, 227), the base of the
elliptic fibration is the Hirzebruch surface $\mathbb{F}_8$.
Analysis of the F-theory physics of the genus one fibration associated
with this polytope suggests that there should in fact be an elliptic
fiber with a non-toric global section.\footnote{In the F-theory analysis, we consider the Jacobian fibration associated with the $F_4$ fibration.
This is an elliptic fibration with a section, for which a detailed
analysis shows that there are no further enhanced non-abelian gauge
symmetries. There are, however, 150 nodes in the $I_1$ component of the
discriminant locus in the base. Since the generic elliptic fibration
model over $\mathbb{F}_8$ has Hodge numbers (10, 376),
this analysis suggests that there should be an additional section in
this case, which should correspond to a non-toric section in the original polytope and in
the Jacobian model would give rise to a U(1) abelian factor where the 150
nodes correspond to
matter fields charged under the U(1); the anomaly cancellation
condition is satisfied for the resulting Jacobian model, matching with the
shift in Hodge numbers $(10, 376)+(1,1-150)=(11, 227)$.}
In further
work, it would be interesting to prove this and to construct the non-toric section explicitly.
Further analysis of the F-theory physics of the other cases may also be
interesting, as well as the question of whether these threefolds admit
elliptic fibrations that are not manifest in the toric description.
\subsection{Fiber types}
The numbers of distinct polytopes in the regions $h^{1,1}, h^{2,1} \geq 140$ that have each fiber type
(not counting multiplicities)
are
\begin{center}
\begin{tabular}{cccccccc}
$F_1$ &$F_2$ &$F_3$ & $F_4$ &$F_5$ &$F_6$ &$F_7$ & $F_8$ \\\hline
612&1&1279&40218&32&19907&20&8579\\\\
$F_9$ &$F_{10}$ &$F_{11}$ &$F_{12}$ & $F_{13}$ & $F_{14}$ &$F_{15}$ &$F_{16}$\\\hline
2067&487387&24811&850&27631&2438&273&58
\end{tabular}
\end{center}
In Appendix \ref{sec:appendix-results},
we have included a set of figures that
show the distribution of polytopes containing each fiber type,
according to the Hodge numbers of the associated Calabi-Yau threefolds.
We have shaded the data points of Hodge
pairs from light to dark with increasing multiplicity; two factors
contribute to the multiplicity in these figures: the multiplicity of
the polytopes associated with
the same Hodge pair and the multiplicity of
fibers of the same type for a given polytope (note that the latter multiplicity
is not included in the numbers in the table above). We discuss multiple fibrations in the next
subsection.
We can see some interesting patterns in the distribution of polytopes
with different fiber types. As discussed in \S\ref{ncb}, at least for
polytopes with the stacked fibration form, the only fiber type that
can arise over a base with a curve of self-intersection less than $-8$
is the $\mathbb{P}^{2,3,1}$ ($F_{10}$) fiber (see Table \ref{allowed}). From
the graphs in Appendix \ref{sec:appendix-results}, it is clear that
this fiber dominates at large Hodge numbers. The other fiber types
that can arise over a base with a curve of self-intersection less than
$-6$ are $F_4, F_{13}$ (with two possible specified vertices) and $F_6,
F_8, F_{11}$ (with only one specified vertex). The Hodge numbers of
Calabi-Yau threefolds coming from polytopes with fiber types $F_4,
F_6, F_8$ extend to $h^{1,1}=263$, and $F_{11}$ extends to
$h^{1,1}=377$; in fact, the rightmost data point of the fiber types
$F_4, F_6, F_8, F_9, F_{12}, F_{15}$ is the same: $\set{263,23}$, and
the rightmost data point of the fiber types $F_{11}$ and $F_{14}$ is
the same: $\set{377,11}$. The fiber $F_{13}$ also continues out to the
largest values of $h^{1, 1}$ as $F_{10}$ does.
Since the largest value of $h^{1,1}$ for a generic elliptic fibration
over a toric base $B$ containing no curves of self-intersection $< -8$ is 224 \cite{toric, Hodge, Huang-Taylor-long},
these large values of $h^{1,1}$ for fibers other than $F_{10}$ must
involve tuning of relatively large gauge groups.
For $h^{1, 1}> 377$
the only fibers that arise are $F_{10}$ and $F_{13}$. In fact, the
Calabi-Yau threefold with the largest $h^{1, 1}$, which has Hodge
numbers (491, 11), has two distinct fibrations: one has the
standard $\mathbb{P}^{2,3,1}$ fiber over the 2D toric base
\{$-12//-11//-12//-12//-12//-12//-12//-12//-12//-12//-12//-12//-12//-12//-12//-11//-12,0$\},
represented by the self-intersection numbers of the toric curves, where
// stands for the sequence $-1,-2,-2,-3,-1,-5,-1,-3,-2,-2,-1$; the
other fibration has the fiber $F_{13}$ over the base
\{$-4,-1,-3,-1,-4,-1,-4,-1,-4,0,2$\}. We leave a more detailed
analysis of the alternative fibration of this Calabi-Yau threefold for
future work.
On the other hand, the fiber $F_{2}$, which is the most restricted, arises from only one
$\ensuremath{\nabla}$ polytope, with multiplicity one: M:40 6 N:186 6 H:149,29, which
also has two different $F_{10}$ subpolytopes.
These observations tell us that, as we might expect, $h^{1,1}$
extends further for the fiber
subpolytopes
that are compatible with curves of more
negative self-intersection in the base. Almost half of the fiber types do not
arise for any polytopes at all in the region $h^{2,1}\geq 140$:
$F_2,F_5,F_7,F_{12},F_{14}, F_{15},$ and $F_{16}$.
None of these is allowed over any base with a curve of
self-intersection less than $-6$ (at least in the stacking
construction of \S\ref{ncb}).
\subsection{Multiple fibrations}
Another interesting question is the prevalence of multiple fibrations.
This question was investigated for complete intersection Calabi-Yau threefolds
in \cite{aggl-2, aggl-3}, where it was shown that many CICY
threefolds have a large number of fibrations. In the toric
hypersurface context we consider here,
a polytope can have both multiple fibrations by different fiber types
and by the same fiber type.
In this analysis, as in the rest of this paper, we consider only fibrations that are manifest in the
toric description.
We have found that
the total number of (manifest) fibrations in a polytope in
the
two large Hodge number regions ranges from zero to $58$. The total
numbers of
fibrations and the number of polytopes that have each number of
total fibrations are listed in Table~\ref{t:multiple-fibers}.
\begin{table}
\begin{center}
\begin{tabular}{lccccccc}
\# fibrations & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline
\# polytopes & 4 & 327058 & 113829 & 34657 & 11414 & 4466 & 1955\\
& (4) & (327058) & (113829) & (34659) & (11418) & (4465) & (1952) \\\\
\# fibrations& 7 & 8 & 9 & 10 & 11 & 12 & 13 \\\hline
\# polytopes& 1003 & 501 & 251 & 150 & 70 & 42 & 32 \\
& (1003) & (503) & (251) & (149) & (71) & (42) & (32) \\\\
\# fibrations& 14 & 15 & 16 & 17 & 18 & 20 & 22 \\\hline
\# polytopes& 31 & 4 & 14 & 6 & 9 & 2 & 6 \\
& (31) & (4) & (14) & (6) & (8) & (2) & (6) \\\\
\# fibrations& 23 & 25 & 26 & 31 & 34 & 37 & 58 \\\hline
\# polytopes& 2 & 1 & 2 & 1 & 1 & 3 & 1 \\
&(1) & (1) & (2) & (1) & (1) & (2) & (0)
\end{tabular}
\end{center}
\caption[x]{\footnotesize Table of the number of polytopes in the
large Hodge number regions $h^{1,1}, h^{2,1} \geq 140$ that have
a given number of distinct (manifest) fibrations. Numbers in
parentheses are after modding out by automorphism symmetries (see
text, Appendix \ref{automorphisms}).}
\label{t:multiple-fibers}
\end{table}
In some cases the number of fibrations is enhanced by the existence of
automorphism symmetries of the polytope. While a generic polytope has
no symmetries, some polytopes with large numbers of fibrations
also have many symmetries. In such cases the number of inequivalent
fibrations can be smaller than the total number of
fibrations.
This issue is also addressed in \cite{Braun:2011ux, aggl-3}.
There are 16 polytopes in the region $h^{1,1}\geq 140$ or
$h^{2,1}\geq 140$ with a non-trivial action of the automorphism symmetry
on the fibers. We list these 16 polytopes in Appendix
\ref{automorphisms1}. For example, the polytope giving a Calabi-Yau
with Hodge numbers (149, 1) has an automorphism symmetry of order 24,
associated with an arbitrary permutation on 4 of the 5 vertices of the
polytope. This automorphism symmetry group is described in detail in
Appendix \ref{automorphisms2}; the number of distinct classes of
fibrations modulo automorphisms in this case is reduced to only $8$
instead of 58.
The
polytopes that we have found with a large total number
of (manifest) fibrations
are generally in the large $h^{1,1}$ region; in fact, polytopes in the large $h^{2,1}$ region have at most three fibrations:
\begin{center}
\begin{tabular}{l|cccc}
\# total fibrations& 0 & 1 & 2 & 3 \\\hline
\# polytopes with large $h^{2,1}$ & 3 & 240501 & 7775 & 26
\end{tabular}
\end{center}
The four polytopes with the two largest numbers of total fibrations
(58, 37 without modding out by automorphisms) are respectively
\begin{equation}
\nonumber
\set{\set{7, 5, 201, 5, 149, 1, 296}, \set{0, 0, 12, 12, 0, 0, 0, 0, 0, 12, 0, 0, 15, 0, 3, 4}}
\end{equation}
and
\begin{eqnarray}
\nonumber
&&\set{\set{7, 5, 196, 5, 145, 1, 288}, \set{0, 0, 0, 6, 0, 6, 0, 0, 0, 12, 0, 0, 9, 3, 0, 1}}\\\nonumber
&&\set{\set{8, 6, 195, 7, 144, 2, 284}, \set{0, 0, 0, 6, 0, 6, 0, 0, 0, 12, 0, 0, 9, 3, 0, 1}},\\\nonumber
&& \set{\set{9, 7, 192, 10, 144, 4, 280}, \set{0, 0, 0, 0, 0, 9, 0, 0, 0, 15, 3, 0, 6, 3, 0, 1}},
\end{eqnarray}
where the numbers are in the format
\begin{center}
\{\{\# lattice points of $\ensuremath{\Delta}$, \# vertices of $\ensuremath{\Delta}$, \# lattice points of $\ensuremath{\nabla}$, \# vertices of $\ensuremath{\nabla}$,\\ $h^{1,1}$, $h^{2,1}$, Euler Number\},\{\#$F_1$,\#$F_2$,$\ldots$,\#$F_{16}$\}\}.
\end{center}
Note that the first two polytopes are, respectively, the mirrors
of the first two polytopes (those with $h^{1,1} = 1$) in equation
(\ref{eq:4-examples}) that have no fibrations.
We also note that in general, the polytopes with larger numbers of
total manifest fibrations fall within a specific range of values of $h^{1,1}$ and
$h^{2,1}$ (at least in the ranges we have studied here). The ranges of
$h^{1,1}$ and $h^{2,1}$ of the polytopes that have 8 or more
fibrations (without considering automorphisms) are
listed in Table~\ref{t:ranges}.
It may be interesting to note that in a somewhat different context, it
was found in \cite{Wang-WT-MC-2} that a large multiplicity of
elliptically fibered fourfolds arises at a similar locus in the space
of Hodge numbers, at intermediate values of $h^{1,1}$ and small values
of $h^{3,1}$ (which counts the number of complex structure moduli, as
does $h^{2,1}$ for Calabi-Yau threefolds). It would be interesting to
understand whether these observations stem from a common origin.
\begin{table}
\begin{center}
\begin{tabular}{c|ccccc|}
\hline
\multicolumn{1}{|c|}{\# total fibrations $\geq$} & 8 & 9 & 10 & 11 & 12 \\ \hline
\multicolumn{1}{|c|}{$h^{1,1}$ range} & [140,272]&[140,243]&[140,243]&[140,214]&[140,208]\\ \hline
\multicolumn{1}{|c|}{$h^{2,1}$ range} & [1, 19]& [1, 19]& [1, 16]& [1, 16]& [1, 12] \\ \hline\\\cline{2-6}
& 13 & 14 & 15 & 16 & 17 \\ \cline{2-6}
& [140, 208]& [140, 208]& [140, 208]& [140, 208]& [141, 173] \\ \cline{2-6}
& [1, 11]& [1, 9]& [1, 8]& [1, 8]& [1, 7] \\ \cline{2-6} \\\cline{2-6}
& 18 & 20 & 22 & 23 & 25 \\ \cline{2-6}
& [141, 173]& [141, 173]& [141, 173]& [141, 165]& [141, 154] \\ \cline{2-6}
& [1, 7]& [1, 6]& [1, 6]& [1, 5]& [1, 5] \\ \cline{2-6} \\\cline{2-6}
& 26 & 31 & 34 & 37 & 58 \\ \cline{2-6}
& [141, 149]& [141, 149]& [141, 149]& [144, 149]& [149, 149] \\ \cline{2-6}
& [1, 5]& [1, 5]& [1, 4]& [1, 4]& [1, 1] \\ \cline{2-6}
\end{tabular}\end{center}
\caption[x]{\footnotesize Ranges of Hodge numbers in which the
polytopes with the largest numbers of fibrations (not including
automorphisms) are localized.}
\label{t:ranges}
\end{table}
It is also interesting to note that while every Calabi-Yau threefold with $h^{1, 1}>335$ or $h^{2, 1}>256$ has more than
one fibration, the polytopes associated with the largest values of
$h^{1,1}$ have precisely two manifest fibrations, and the average
number of fibrations at large $h^{1,1}$ is close to 2. In
Figure~\ref{f:average-fibrations}, we show the average number of
fibrations for the polytopes associated with Calabi-Yau threefolds of
Hodge numbers $h^{1,1} \geq 140$.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{ploth11above140}
\end{center}
\caption[x]{\footnotesize Average number of fibrations for polytopes
associated with Calabi-Yau threefolds with $h^{1,1} \geq 140$.}
\label{f:average-fibrations}
\end{figure}
The maximal number of fibrations for each specific
fiber type in a polytope is
\begin{center}
\begin{tabular}{cccccccccccccccc}
$F_1$ &$F_2$ &$F_ 3$ & $F_4$ &$F_5$ &$F_6$ &$F_7$ & $F_8$ & $F_9$ &$F_{10}$ &$F_{11}$ &$F_{12}$ & $F_{13}$ & $F_{14}$ &$F_{15}$ &$F_{16}$\\\hline
4& 1& 12& 12& 2& 9& 1& 4& 4& 15& 4& 2& 15& 6& 3& 4\\
(4)& (1)& (6)& (8)& (2)& (9)& (1)& (4)& (4)& (15)& (4)& (2)& (9)& (6)&(3)& (1)
\end{tabular}
\end{center}
Numbers in parentheses are after modding out by automorphism
symmetries; for example, the maximal number of $F_{16}$ fibers,
which comes from the polytope associated with the Hodge pair (149,1), reduces from four
to one (see the last row of the table in Appendix
\ref{automorphisms1}).
If we count the distinct fiber types in a polytope, we find that the
maximum number of fiber types that a polytope in the large Hodge
number regions can have is eight. The eight polytopes that have the
maximum number of eight distinct fiber types are
\begin{eqnarray}
\nonumber
&&\{\{11,6,199,6,151,7,288\},\{0,0,2,1,0,0,0,2,0,3,2,0,2,1,1,0\}\},\\\nonumber&&\{\{12,7,193,8,146,8,276\},\{0,0,2,1,0,0,0,2,0,3,2,0,2,1,1,0\}\},\\\nonumber&&\{\{12,8,201,11,153,6,294\},\{0,0,2,0,0,2,0,0,0,3,2,2,1,1,0,1\}\},\\\nonumber&&\{\{13,8,198,10,151,7,288\},\{0,0,1,1,0,0,0,1,0,2,2,1,1,1,0,0\}\},\\\nonumber&&\{\{15,8,192,12,143,11,264\},\{0,0,2,2,0,1,0,2,0,3,1,0,1,0,1,0\}\},\\\nonumber&&\{\{14,9,184,12,140,8,264\},\{0,0,0,1,0,1,0,1,1,2,1,2,1,0,0,0\}\},\\\nonumber&&\{\{14,9,192,12,146,8,276\},\{0,0,1,1,0,0,0,1,0,2,2,1,1,1,0,0\}\},\\\nonumber&&\{\{16,9,191,13,143,11,264\},\{0,0,1,1,0,1,0,1,0,2,1,1,1,0,0,0\}\}.
\end{eqnarray}
In Table~\ref{t:distinct}, we show the distribution of all polytopes,
polytopes with large $h^{1,1}$, and polytopes with large $h^{2,1}$
according to the number of distinct fiber types. There are at most
three distinct fiber types in the polytopes in $h^{2,1}\geq
140$. While all fiber types occur in the large $h^{1,1}$ region, the
only fiber types that occur in the large $h^{2,1}$ region are $F_{1},
F_{3}, F_{4}, F_{6}, F_{8}, F_{10}, F_{11}, $ and $F_{13}.$
\begin{table}
\begin{center}
\begin{tabular}{|c|r|r|r|}
\hline
\begin{tabular}[c]{@{}c@{}}\# distinct \\ fiber types\end{tabular} & \# polytopes & \begin{tabular}[c]{@{}c@{}}\# polytopes with \\ $h^{1,1}\geq 140$\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# polytopes with \\ $h^{2,1}\geq 140$\end{tabular} \\ \hline
0 & 4 & 1 & 3 \\ \hline
1 & 393788 & 153601 & 229443 \\ \hline
2 & 86008 & 78995 & 6460 \\ \hline
3 & 13354 & 13347 & 7 \\ \hline
4 & 1755 & 1755 & - \\ \hline
5 & 469 & 469 & - \\ \hline
6 & 112 & 112 & - \\ \hline
7 & 17 & 17 & - \\ \hline
8 & 8 & 8 & - \\ \hline
\end{tabular}
\end{center}
\caption[x]{\footnotesize Distribution of polytopes by number of
distinct fiber types}
\label{t:distinct}
\end{table}
Finally, it is interesting to note that only the plot of $F_{10}$ in
Appendix \ref{sec:appendix-results} seems to exhibit mirror
symmetry to any noticeable extent. We do not expect elliptic
fibrations to respect mirror symmetry, so this may simply arise from a
combination of the observation that the total set of hypersurface
Calabi-Yau Hodge numbers in the Kreuzer-Skarke database is mirror
symmetric and the observation that in the large Hodge number regions
that we have considered most of the Calabi-Yau threefolds admit
elliptic fibrations described by an $F_{10}$ fibration of the
associated polytope.
\subsection{Standard vs. non-standard $\mathbb{P}^{2,3,1}$-fibered polytopes}
\begin{figure}
\centering
\includegraphics[width=7cm]{nonstand}
\caption{\footnotesize Hodge pairs with only non-standard $F_{10}$-fibered polytopes. The grey dots correspond to all Hodge pairs with $F_{10}$ fibers. The black dots correspond to Hodge pairs with only non-standard $F_{10}$-fibered polytopes. The vertical and horizontal dashed lines correspond to $h^{1,1}=240$ and $h^{2,1}=240$, respectively.}
\label{fig:nonstand}
\end{figure}
In \cite{Huang-Taylor-long}, we compared elliptic and toric
hypersurface Calabi-Yau threefolds with Hodge numbers $h^{1,1}\geq
240$ or $h^{2,1}\geq 240$. We found that in the large $h^{1,1}$
region, there were eight Hodge pairs in the KS database that were not
realized by a simple
Tate-tuned model, and do not correspond to a ``standard
stacking''
$\mathbb{P}^{2,3,1}$-fibered polytope.
We found, however, that these eight outlying polytopes have a
description in terms of a $\mathbb{P}^{2,3,1}$ fiber structure that
is not of the standard ($v^{(F)}_s=(-3,-2)$) stacking form, and
furthermore can be seen not to
respect the stacking framework of \S\ref{ncb}.
The
Weierstrass models of these Calabi-Yau threefolds
all have the novel feature that they can have gauge groups tuned over
non-toric curves, which can be of higher genus, in the base.
As discussed in \cite{Huang-Taylor-long},
the definition of a standard $\mathbb{P}^{2,3,1}$-fibered polytope
$\ensuremath{\nabla}$ (where the base is stacked over the vertex $(-3,-2)$ of the $F_{10}$ fiber) turns out to be equivalent to the condition that the corresponding $\ensuremath{\Delta}$ has a single lattice point for
each of the choices $\ensuremath{m^{(F)}} _2=(1,-1)$ and $\ensuremath{m^{(F)}} _3=(-1,2)$ in equation
(\ref{dsform}) (where we have numbered the vertex with the largest
multiple of $- K_B $ as $\ensuremath{m^{(F)}} _1 = (- 1, -1)$), and there is furthermore a
coordinate system in which
this lattice point has coordinates
$\ensuremath{m^{(2)}} =(0,0)$ in both cases. We have scanned through
the $F_{10}$-fibered polytopes and used this feature to compute the
fraction of $F_{10}$-fibered polytopes that have the standard versus
non-standard
form; the results of this analysis are shown in
Table~\ref{t:standard}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& total \# fibrations & \begin{tabular}[c]{@{}c@{}}\# fibrations in polytopes \\ with $h^{1,1}\geq 140$\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# fibrations in polytopes \\ with $h^{2,1}\geq 140$\end{tabular} \\ \hline
standard & 433827 & 242562 & 192218 \\ \hline
non-standard & 183818 & 130255 &53705 \\ \hline
\begin{tabular}[c]{@{}c@{}}non-standard\\ fraction\end{tabular} & 0.297611 & 0.349381 & 0.218381 \\ \hline
\end{tabular}
\end{center}
\caption[x]{\footnotesize Fractions of fibrations by the fiber
$F_{10}$ that take the ``standard stacking'' form versus
other fibrations.}
\label{t:standard}
\end{table}
Of the 488119 $F_{10}$-fibered polytopes, 98758 have more than one
$F_{10}$ fiber. Most of these polytopes have both standard and
non-standard types of fibrations. There are 103 Hodge pairs that are realized only by
non-standard fibered polytopes. These may give rise to more interesting
Weierstrass models, like those we have studied with $h^{1,1}\geq
240$ in section 6.2 of \cite{Huang-Taylor-long}. As a cross-check of
the ``sieving'' results there, we have confirmed that
none of these 103 Hodge pairs are in the region
$h^{2,1}\geq 240$, and the 12 Hodge pairs of these 103 pairs that have $h^{1,1}\geq
240$ are exactly the Hodge pairs associated with non-standard $\mathbb{P}^{2,3,1}$-fibered
polytopes in Table 17 of \cite{Huang-Taylor-long}, together with the
four Hodge pairs of Bl$_{[0,0,1]}\mathbb{P}^{2,3,1}$-fibered polytopes;
the latter four polytopes, in other words, happen to also be $F_{11}$-fibered, and can be
analyzed as blowups of standard $\mathbb{P}^{2,3,1}$ models (U(1) models). We
list the remaining 91 Hodge pairs that only have non-standard
$\mathbb{P}^{2,3,1}$ fiber types below
(see also Figure \ref{fig:nonstand}):
\begin{itemize}
\item $140\leq h^{1,1}<240$\\
$\{\{149,1\},\{154,7\},\{179,8\},\{177,16\},\{179,22\},\{207,22\},\{235,22\},\{184,23\},\{228,24\}$,\\
$\{178,27\},\{206,27\},\{177,28\},\{205,28\},\{211,38\},\{232,38\},\{233,38\},\{182,39\},\{217,39\}$,\\
$\{223,40\},\{194,41\},\{221,41\},\{210,43\},\{203,44\},\{174,45\},\{207,45\},\{145,46\},\{193,46\}$,\\
$\{205,46\},\{159,48\},\{180,49\},\{187,53\},\{239,53\},\{150,55\},\{231,55\},\{225,57\},\{231,57\},$\\
$\{204,63\},\{231,63\},\{175,64\},\{237,65\},\{141,66\},\{208,66\},\{228,66\},\{199,67\},\{211,67\},$\\
$\{193,69\},\{201,69\},\{190,70\},\{200,70\},\{161,71\},\{160,73\},\{190,76\},\{214,82\},\{185,83\},$\\
$\{198,84\},\{181,85\},\{193,85\},\{229,85\},\{164,86\},\{200,86\},\{160,88\},\{185,93\},\{177,101\}$,\\
$\{197,101\},\{148,102\},\{171,105\},\{147,119\},\{141,123\},\{140,126\}\}$
\item $140\leq h^{2,1}<240$\\
$\{\{3,141\},\{3,165\},\{3,195\},\{4,142\},\{4,148\},\{4,154\},\{4,162\},\{4,166\},\{4,178\},\{5,141\}$,\\
$\{5,143\},\{5,149\},\{5,153\},\{11,176\},\{22,217\},\{23,182\},\{23,200\},\{24,183\},\{31,170\}$,\\
$\{95,155\},\{110,144\},\{111,141\}\}$.
\end{itemize}
\section{Fibration prevalence as a function of $h^{1, 1}(X)$}
\label{sec:prevalence}
In this section we consider the fraction of Calabi-Yau threefolds at a
given value of the Picard number $h^{1, 1}(X)$ that admit a genus one
or elliptic fibration. We begin in \S\ref{sec:analytic-cubic} with a
summary of some analytic arguments for why we expect that an
increasingly small fraction of Calabi-Yau threefolds will fail to
have
such a fibration as $h^{1, 1}$ increases; we then present some
preliminary
numerical results in \S\ref{sec:small-numbers}.
\subsection{Cubic intersection forms and genus one fibrations}
\label{sec:analytic-cubic}
For some years, mathematicians have speculated that the structure of
the triple intersection form on a Calabi-Yau threefold may make the
existence of a genus one or elliptic fibration increasingly likely as
the Picard number $\rho (X) =h^{1, 1} (X)$ increases. The rationale
for this argument basically boils down to the fact that a cubic in $k$
variables is increasingly likely to have a rational solution as $k$
increases. In this section we give some simple arguments that explain
why, in the absence of unexpected conspiracies, this conclusion is
true. If this result could be made rigorous, it would be a significant
step towards proving the finiteness of the number of distinct
topological types of Calabi-Yau threefolds.
As summarized in \cite{aggl-2}, the following conjecture is due to Koll\'ar
\cite{Kollar}:
\begin{conjecture}
Given a Calabi-Yau $n$-fold $X$, $X$ is genus one (or elliptically)
fibered iff there exists a divisor $D \in H^{2} (X,\mathbb{Q})$ that satisfies
$D^n = 0, D^{n -1} \neq 0$, and $D \cdot C \geq 0$ for all algebraic
curves $C \subset X$.
\end{conjecture}
Basically the idea is that $D$ corresponds to the lift $D =\pi^{-1} (D^{(B)})$
of a divisor $D^{(B)}$ on the base of the fibration, where the $(n - 1)$-fold
self-intersection of $D$ gives a positive multiple of the fiber
$F = \pi^{-1}(p)$, with $p$ a point on the base.
This conjecture was proven already for $n = 3$
by Oguiso and Wilson \cite{Oguiso, Wilson} under the additional
assumption that either $D$ is effective or $D \cdot c_2 (X)\neq 0$.
In the remainder of this section, as elsewhere in the paper, we often simply refer to a Calabi-Yau as genus one
fibered, using this as a condition that includes both elliptically fibered
Calabi-Yau threefolds and more general genus one fibered threefolds.
In the case $n = 3$, to show that a Calabi-Yau threefold is
genus one fibered, we thus wish to identify an effective divisor $D$ whose triple
intersection with itself vanishes. The triple intersection form can
be written in a particular basis $D_i$ for $H^2 (X,\mathbb{Z})$ as
\begin{equation}
\langle A, B, C \rangle =
\sum_{i, j, k} \kappa_{i jk} a_ib_jc_k \,,
\label{eq:}
\end{equation}
where $A =\sum_i a_i D_i$, etc., and $D_i \cap D_j \cap D_k = \kappa_{i j k}$.
The condition that there is a divisor $D = \sum_i d_i D_i$ satisfying
$D^3 = 0$ is then the condition that the cubic
intersection form on $D$ vanishes:
\begin{equation}
D^3 = \langle D, D, D \rangle =
\sum_{i, j, k}^{} \kappa_{ijk} d_id_jd_k = 0\,.
\label{eq:cubic-condition}
\end{equation}
We are thus interested in finding a solution over the rational numbers
of a cubic equation in $k = \rho (X)$ variables. The curve condition
provides a further constraint that $D$ lies in the positive cone
defined by $D \cdot C \geq 0$ for all algebraic curves $C \subset X$.
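As a toy numerical illustration of the triple intersection form (the tensor $\kappa$ below is made up for this example and does not come from an actual Calabi-Yau), one can evaluate $\langle A, B, C\rangle$ directly; note that a divisor with $D^3 = 0$ need not lie in the positive cone:

```python
import numpy as np

# Toy cubic intersection form: with kappa_{ijk} = 1 for all i, j, k the
# form reduces to <D, D, D> = (d_1 + d_2)^3.  (Made-up numbers, purely
# illustrative; real intersection tensors are structured integers.)
kappa = np.ones((2, 2, 2))

def triple(a, b, c):
    # <A, B, C> = sum_{i,j,k} kappa_{ijk} a_i b_j c_k
    return np.einsum('ijk,i,j,k->', kappa, a, b, c)

d = np.array([1.0, -1.0])
print(triple(d, d, d))  # -> 0.0, though d does not lie in the positive cone
```

This makes concrete the point that the vanishing condition $D^3 = 0$ and the cone condition are independent constraints.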
Note that identifying a rational solution $D$ to
(\ref{eq:cubic-condition}) immediately leads to a solution over the
integers $\hat{d}_i \in\mathbb{Z}\ \forall i$, simply by multiplying by the LCM of
all the denominators of the rational solution $d_i$.
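A minimal sketch of this denominator-clearing step (with illustrative values only); since $D^3$ is homogeneous of degree three, the rescaling preserves the condition $D^3 = 0$:

```python
from fractions import Fraction
from math import lcm

def integerize(d):
    # Multiply by the LCM of the denominators; since D^3 is homogeneous of
    # degree three, a rational zero of D^3 rescales to an integer zero.
    m = lcm(*(x.denominator for x in d))
    return [int(x * m) for x in d]

print(integerize([Fraction(2, 3), Fraction(-1, 6), Fraction(5, 4)]))  # -> [8, -2, 15]
```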
There are basically two distinct ways in which the conditions
for the existence of a divisor in the positive cone satisfying $D^3=0$
can
fail.
We consider each in turn.
Note that even when the condition $D^3 = 0$ is satisfied, the
condition for an elliptic fibration can fail if $D^2 = 0$, in which
case $D$ itself corresponds to a K3 fiber; this class of fibrations is
also interesting to consider but seems statistically likely to become
rarer as $\rho$ increases.
\subsubsection{Number theoretic obstructions}
There can be a number theoretic obstruction to the existence of a solution
to a degree $n$ homogeneous equation over the rationals such as
(\ref{eq:cubic-condition}).\footnote{Thanks to Noam
Elkies for explaining to us various aspects of the mathematics in this
section.} For example, there cannot be an integer
solution in the variables $x, y, z, w$ of the equation
\begin{equation}
x^3 + x^2 y + y^3 + 2z^3 + 4w^3 = 0\,.
\label{eq:}
\end{equation}
This can be seen as follows: if all the variables $x, y, z, w$ are
even, we divide by the largest possible power of 2 that leaves them
all as integers. Then there must be a solution with
at least one variable
odd. The variable $x$ cannot be odd, since then, whether $y$ is odd or even, the
LHS is odd. Similarly, $y$ cannot be odd. So $x, y$ must be even in
the minimal solution. But $z$ cannot be odd or the LHS would be
congruent to 2 mod 4. And $w$ cannot be odd if the others are even,
since then the LHS would be congruent to 4 mod 8. But then all four
variables would be even, contradicting the minimality assumption, so there is
no nontrivial solution.
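As a sanity check of this 2-adic descent argument, a brute-force search over a small box of integers (the box size here is an arbitrary illustrative choice) finds only the trivial solution:

```python
from itertools import product

def lhs(x, y, z, w):
    return x**3 + x**2 * y + y**3 + 2 * z**3 + 4 * w**3

# The descent argument rules out all nontrivial integer solutions, so a
# search over any box should find only (0, 0, 0, 0).
R = range(-8, 9)
solutions = [s for s in product(R, repeat=4) if lhs(*s) == 0]
print(solutions)  # -> [(0, 0, 0, 0)]
```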
Such number-theoretic obstructions can only arise for small numbers of
variables $k$. It was conjectured long ago that for a homogeneous
degree $n$ polynomial the maximum number of variables for which such a
number-theoretic obstruction can arise is $n^2$ \cite{Mordell}. While
there is a counterexample known for $n = 4$, where there is an
obstruction for a quartic with 17 variables, it was proven in
\cite{Heath-Brown} that every {\it non-singular} cubic form in 10
variables with rational coefficients has a non-trivial rational zero.
And the existence of a rational solution has been proven for general
(singular or non-singular) cubics in 16 or more variables
\cite{Davenport}. Thus, no number-theoretic obstruction to the
existence of a solution to $D^3 = 0$ can arise when $\rho (X) =h^{1,
1} (X) > 15$, and there are also quite likely no obstructions for $\rho
(X) > 9$ though this stronger bound is not proven as far as the
authors are aware.
\subsubsection{Cone obstructions}
If the coefficients in the cubic conspire in an appropriate way, the
cubic can fail to have any solutions in the K\"ahler cone. We now
consider this type of obstruction to the existence of a solution.
For
example, the cubic
\begin{equation}
\sum_{i} d_i^3 + \sum_{i, j}d_i^2 d_j + \sum_{i, j, k}d_id_jd_k = 0
\label{eq:}
\end{equation}
has no nontrivial solutions in the cone $d_i \geq 0$ since all
coefficients are positive.
The absence of solutions in a given cone becomes increasingly
unlikely, however, as the number of variables increases
(again, in the absence of highly structured cubic coefficients).
A somewhat rough and naive approach to
understanding this is to consider adding the variables one at a time,
assuming that the coefficients are random and independently
distributed numbers.
In
this analysis we do not worry about the existence of rational
solutions; in any given region, the existence of a rational solution
should depend upon the kind of argument described in the previous
subsection.
We assume for simplicity that the cone condition states simply that
$d_i \geq 0\ \forall i$; a more careful analysis would consider cones
of different sizes and angles.
For two variables $x = d_1, y = d_2$ we have a cubic equation
\begin{equation}
\kappa_{111} x^3 + 3\kappa_{112} x^2 y + 3 \kappa_{122} xy^2 +
\kappa_{222} y^3 \,.
\label{eq:}
\end{equation}
Now fix some value $x \geq 0$. Viewed as a cubic equation in $y$,
this always has at least one real solution $(x, y)$.
If the coefficients in the cubic are randomly distributed, we expect
roughly a 1/2 chance that $y \geq 0$ for this real solution. Now add
a third variable. If the above procedure gives a solution $(x, y, z =
d_3 = 0)$ in the positive cone, we are done. If not, we plug in some
fixed positive values $x, y \geq 0$ and the condition becomes a cubic
in $z$. Again, there is statistically roughly a 1/2 chance that a
given real solution for $z$ is positive. So for 3 variables we expect
at most a probability of roughly $1/4$ that there is no solution in
the desired cone. Similarly, for $k$ variables, this simple argument
suggests that at most a fraction of $1/2^{k-1}$ of random cubics will
lack a solution in the desired cone.
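A minimal Monte-Carlo sketch of this counting argument can make the geometric decrease concrete. This is our illustration, not part of the original analysis: the coefficients are drawn i.i.d.\ from a standard normal, and at each step a fresh random cubic in the newly added variable is tested for a nonnegative real root, mirroring the sequential argument above. The per-step success probability need not be exactly $1/2$, but the roughly geometric falloff with $k$ is visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def cubic_has_nonneg_root(coeffs):
    """True if the cubic with these coefficients (highest degree first)
    has a real root y >= 0."""
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-8].real
    return bool(np.any(real_roots >= 0))

def frac_no_cone_solution(k, trials=3000):
    """Toy version of the sequential argument: at each of the k-1 steps,
    test a fresh random cubic in the newly added variable for a nonnegative
    real root; a trial fails only if every step fails."""
    failures = 0
    for _ in range(trials):
        if not any(cubic_has_nonneg_root(rng.standard_normal(4))
                   for _ in range(k - 1)):
            failures += 1
    return failures / trials
```

With these conventions the fraction of "random cubics" lacking a solution in the positive cone drops by a roughly constant factor for each added variable, consistent with the $1/2^{k-1}$ estimate.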
This is an extremely rough argument, and should not be taken
particularly seriously, but hopefully it illustrates the general sense
of how it becomes increasingly difficult to construct a cubic that has
no solutions in $k$ variables within a desired cone. Interestingly,
the rate of decrease found by this simple analysis matches quite
closely with what we find in a numerical analysis of the
Kreuzer-Skarke data at small $k =\rho (X) =h^{1, 1} (X)$.
\subsection{Numerical results for Calabi-Yau threefolds at
small $h^{1,1} (X)$}
\label{sec:small-numbers}
We have done some preliminary analysis of the distribution of
polytopes without a manifest reflexive 2D fiber for cases giving
Calabi-Yau threefolds with small $h^{1, 1}$. The results of this are
shown in Table~\ref{t:small-h11}.
It is interesting to note that the fraction of polytopes without a
genus one (or elliptic) fiber that is manifest in the toric geometry
decreases roughly exponentially, approximately as $p ({\rm no \, fiber})\sim 0.1 \times 2^{5-h^{1, 1}}$ in the range $h^{1, 1}\sim 4$--$7$.
Comparing to the total number of polytopes in the KS database that lack a manifest genus one fiber, if
this fraction continues to follow this exponential pattern, the total number of
such polytopes out of the 400 million in the full KS database would be
something like 14,000.
(Note, however, that the polytope identified in the database that has
no manifest fibration and corresponds to a Calabi-Yau with $h^{1,1} =
140$ would be extremely unlikely if this exponential rate of decrease
in manifest fibrations continues; this suggests that the tail of the
distribution of polytopes lacking a manifest fibration does not
decrease quite so quickly at large values of $h^{1,1}$.
Because the analytic argument of the previous section involves all
fibrations, not just manifest ones, it may be that this asymptotic is
still a good estimate of actual fibrations if most of the polytopes at
large $h^{1,1}$ that lack manifest fibrations actually have other
fibrations that cannot be seen from toric fibers.)
The naive distribution of the estimated number of polytopes from the
simple exponentially decreasing estimate is shown in the black dots in
Figure \ref{fig:est}.
Even with some uncertainty about the exact structure of the tail of
this distribution,
this seems to give good circumstantial
evidence that at least among this family of Calabi-Yau threefolds, the
vast majority are genus one or elliptically fibered, and that the
Calabi-Yau threefolds like the quintic that lack genus one fibration
structure are exceptionally rare cases, rather than the general rule.
\begin{table}[]
\centering
\begin{tabular}{|c|cccccc|}
\hline
$h^{1,1}$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\ \hline
Total \# polytopes
& $36$ & $244$ & $1197$ & $4990$ & $17101$ & $50376$ \\ \hline
\# without reflexive fiber $\ensuremath{\nabla}_2$
& $23$ & $91$ & $256$ & $562$ & $872$ & $1202$ \\ \hline
\% without reflexive fiber
& $0.639$ & $0.373$ & $0.214$ & $0.113$ & $0.051$ & $0.024$ \\ \hline
\end{tabular}
\caption{\footnotesize The numbers of polytopes without a 2D
reflexive fiber, corresponding to Calabi-Yau threefolds without a
manifest genus one fibration, for small values of $h^{1, 1}$.}
\label{t:small-h11}
\end{table}
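As a cross-check (ours, not part of the original analysis), the exponential fit quoted above can be compared directly against the fractions listed in Table~\ref{t:small-h11}:

```python
# Fractions of polytopes with no manifest reflexive 2D fiber, by h^{1,1}
# (values from Table 1)
no_fiber_fraction = {2: 0.639, 3: 0.373, 4: 0.214,
                     5: 0.113, 6: 0.051, 7: 0.024}

def fitted_fraction(h11):
    """Empirical fit p(no fiber) ~ 0.1 * 2**(5 - h11)."""
    return 0.1 * 2 ** (5 - h11)

for h11 in range(4, 8):  # fit is quoted for h^{1,1} ~ 4-7
    print(h11, no_fiber_fraction[h11], fitted_fraction(h11))
```

In the quoted range $h^{1,1}=4$--$7$ the fit tracks the tabulated fractions to within roughly 10\%, while at $h^{1,1}=2,3$ the observed fractions fall below the naive extrapolation.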
\begin{figure}
\centering
\includegraphics[width=15cm]{est}
\caption{\footnotesize
The fraction of polytopes without a manifest reflexive fiber goes roughly as $0.1
\times 2^{5 - h^{1,1}}$ for small values of $h^{1,1}$. Continuing
this estimate to higher values of $h^{1,1}$,
the estimated number of polytopes with no fiber has a peak value
around $1800$ at $h^{1,1}\sim 9$
and drops below five around $h^{1,1} \sim 24$. The estimated number of total polytopes with no manifest fiber is around $14,000$.}
\label{fig:est}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
The results reported in this paper indicate that most Calabi-Yau
threefolds that are realized as hypersurfaces in toric varieties have
the form of a genus one fibration. At large Hodge numbers almost all
Calabi-Yau threefolds in the Kreuzer-Skarke database satisfy the
stronger condition that they are elliptically fibered. This
contributes to the growing body of evidence that most Calabi-Yau
threefolds lie in the finite class of elliptic fibrations.
We have shown that all known Calabi-Yau threefolds where
at least one of the Hodge numbers is greater than 150 must have a
genus one fibration, and all CY3's with
$h^{1, 1}\geq 195$ or $h^{2, 1}\geq 228$ have an elliptic fibration.
We have also shown that the fraction of toric hypersurface Calabi-Yau
threefolds that are not manifestly genus one fibered decreases exponentially
roughly as $0.1 \times 2^{5 - h^{1,1}}$ for small values of
$h^{1,1}$. These results correspond well with the recent
investigations in \cite{aggl-2, aggl-3, agh-non-simply}, which showed
that over 99\% of all complete intersection Calabi-Yau (CICY)
threefolds have a genus one fibration (and generally many distinct
fibrations), including all CICY threefolds with $h^{1,1} > 4$, and
that similar results hold for the only substantial known class of
non-simply connected Calabi-Yau threefolds.
Taken together, these empirical results, along with the analytic
arguments described in \S\ref{sec:analytic-cubic}, suggest that it becomes increasingly
difficult to form a Calabi-Yau geometry that is not genus one or
elliptically fibered as the Hodge number $h^{1,1}$ increases.
Proving that any
Calabi-Yau with Hodge numbers beyond a certain value must admit an
elliptic fibration is a significant challenge for mathematicians;
progress in this direction might help begin to place some explicit
bounds that would help in proving the finiteness of the complete set
of Calabi-Yau threefolds.
There are a number of ways in which the analysis of this paper could
be extended. Clearly, it would be desirable to analyze the fibration
structure of the full set of polytopes in the Kreuzer-Skarke database,
which could be done by implementing the
algorithm used in this paper using faster and more powerful
computational tools.
It is also important to note that while the simple criteria we used
here showed already that most known Calabi-Yau threefolds at large
Hodge numbers are elliptic or more generally genus one fibered, the
cases that are not recognized as fibered by these simple criteria may still
have genus one or elliptic fibers. In particular, while we have identified a couple of Calabi-Yau threefolds with
$h^{1, 1} > 1$ and either $h^{1, 1}$ or $h^{2, 1}$ greater than 140
that do not admit an explicit toric genus one fibration that can be
identified by a 2D reflexive fiber in the 4D polytope,
it seems quite likely that the Calabi-Yau threefolds associated with these
polytopes may have a non-toric genus one or elliptic fibration
structure. Such fibrations could be identified by a more extensive
analysis along the lines of \cite{aggl-3}.
For Calabi-Yau threefolds that do not admit any genus one or elliptic fibration, it
would be interesting to understand whether there is some underlying
structure to the triple intersection numbers that is related to those
of elliptically fibered Calabi-Yau manifolds, and whether there are
simple general classes of transitions that connect the
non-elliptically fibered threefolds to the elliptically fibered CY3's,
which themselves all form a connected set through transitions
associated with blow-ups of the base and Higgsing/unHiggsing processes
in the corresponding F-theory models. We leave further investigation
of these questions for future work.
Finally, it of course would be interesting to extend this kind of
analysis to Calabi-Yau fourfolds.
An early analysis of the fibration structure of some known toric
hypersurface Calabi-Yau fourfolds was carried out in
\cite{Rohsiepe:2005qg}.
The analysis of fibration
structures of complete intersection Calabi-Yau fourfolds in
\cite{Gray-hl} suggests that again most known constructions should
lead predominantly to Calabi-Yau fourfolds that are genus one or
elliptically fibered. The classification of hypersurfaces in
reflexive 5D polytopes has not been completed, although the complete
set of $3.2 \times 10^{11}$ associated weight systems has recently
been constructed \cite{ss}. In fact,
recent work on classifying toric threefold bases that can support
elliptic Calabi-Yau fourfolds suggests that the number of such
distinct bases already reaches enormous cardinality on the order of
$10^{3000}$ \cite{Halverson-ls, Wang-WT-MC-2}. Thus, at this point
the known set of elliptic Calabi-Yau fourfolds is much larger than any
known class of Calabi-Yau fourfolds from any other construction.
\acknowledgments{
We would like to thank Lara Anderson,
Andreas Braun,
Noam Elkies,
James Gray,
Sam Johnson,
Nikhil Raghuram,
David
Morrison,
Andrew Turner,
Yinan Wang, and Timo Weigand for helpful discussions.
This material is based upon work supported by the U.S.\ Department of
Energy, Office of Science, Office of High Energy Physics under
grant Contract Number
DE-SC00012567.
WT would like to thank the Aspen Center for Physics
for hospitality during the completion of this work; the Aspen Center
for Physics is supported by National Science Foundation grant
PHY-1607611.
}
\newpage
\section{Introduction}
\vspace{-0.5 em}
Relativistic electron-positron (\emph{$e^-e^+$}) pair plasma represents a unique state of matter, which is believed to exist in many extreme astrophysical environments, such as Active Galactic Nuclei (AGN), Pulsar Wind Nebulae (PWN), Gamma Ray Bursts (GRBs), and Black Holes (BHs) \cite{E. Weibel1959,B. Paczynski1986,M. J. Rees1992,E. Waxman1995,T. Piran1999,T. Piran2005,P. Mészáros2006,Hui Chen pop2015}. In particular, dense \emph{$e^-e^+$} pair plasmas are generally invoked to explain energetic phenomena associated with quasars, GRBs and BHs \cite{J. Wardle1998,I. F. Mirabel1999,P. Meszaros2002,G. Weidenspointner2008,G. Sarri2015}. On earth, relativistic positron sources are of paramount importance in experimental physics due to their potential applications to a wide range of physical subjects, including nuclear physics, particle physics, and laboratory astrophysics \cite{wluo2013,nd2016}. The generation of such pair plasmas is attracting increasing worldwide attention \cite{H. Chen2011,Hui Chen prl2015,Jian-Xun Liu2015,Jin-Jin Liu2016,Tongjun Xu2016,yuantao2017}, with the aim of replicating the physics of small-scale astrophysical environments in the laboratory and of exploring potentially broad applications.
With the continuing development of laser technology, laser beams with intensities of $10^{22}~\rm{W/cm^2}$ have already been realized \cite{V. Yanovsky2008}. The underway project Extreme Light Infrastructure (ELI) \cite{eli} and proposed initiatives like XCELS \cite{xcels} and iCAN \cite{G. Mourou2013} are expected to achieve intensities above $10^{23}~\rm W/cm^2$ within the next few years. At such intensities, the normalized amplitude of the laser vector potential $a_0=eE/(m_ec\omega_l)\gg1$, where $\omega_l=2\pi c/\lambda_l$ is the angular frequency of the laser, and laser-solid interaction enters the ultrarelativistic regime \cite{L.Willingale2009}, in which quantum electrodynamics (QED) processes may arise \cite{G. A. Mourou2006,E. N. Nerush2007,A. R. Bell2008,A. M. Fedotov2010}. One of the most representative QED processes is nonlinear Compton scattering, $e^-+n\gamma_l\rightarrow e^-+\gamma_h$ \cite{A. DiPiazza2010,F. Mackenroth2011,C. P. Ridgers2012}. In this nonlinear process, a foil electron accelerated in the laser focus absorbs multiple laser photons $\gamma_l$ and then radiates a high-energy photon $\gamma_h$. The radiated photon in strong electromagnetic (EM) fields can further decay into an \emph{$e^-e^+$} pair via the multi-photon Breit-Wheeler (BW) process, $\gamma_h+m\gamma_l\rightarrow e^-+e^+$ \cite{G. Breit1934,T. Erber1966,J. G. Kirk2009}. As QED effects become significant, the prolific charged particles produced in the system inject a large current into the classical electron-ion plasma, which strongly modifies the plasma dynamics. The EM fields, which are determined by the plasma dynamics, in turn affect the rates of the QED processes. As a result, the physics associated with the formation and dynamics of dense \emph{$e^-e^+$} pair plasmas produced by a QED-strong laser interacting with a solid foil is of paramount importance and may prove essential for developing an astrophysically relevant research platform.
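For reference, the conversion between laser intensity and the normalized amplitude $a_0$ used throughout this regime can be sketched with the standard engineering estimate for linearly polarized light; the $0.85$ prefactor is the usual textbook value, not a number quoted in this paper, and the quoted pairs ($a_0=540$ at $4\times10^{23}~\rm{W/cm^2}$, $a_0=660$ at $6\times10^{23}~\rm{W/cm^2}$) serve as a consistency check:

```python
import math

def a0_linear(intensity_wcm2, wavelength_um=1.0):
    """Normalized vector potential a0 = eE/(m_e c w_l) for linear
    polarization, via a0 ~ 0.85 * sqrt(I / 10^18 W cm^-2) * lambda[um]."""
    return 0.85 * math.sqrt(intensity_wcm2 / 1e18) * wavelength_um
```

For $\lambda_l=1~\mu m$ this reproduces $a_0\approx538$ and $a_0\approx658$ at the two intensities above, in agreement with the rounded values used in the text.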
\begin{figure*}
\centering
\includegraphics[scale=0.35]{fig1.png}
\caption{(Color online) The $y$-component of the normalized electric fields $E_y$ [(a) and (e)], the spatial density maps in units of $n_c$ of the foil electrons [(b) and (f)], and emitted $\gamma$-ray photons [(c) and (g)], and the positron intensity averaged one laser period [(d) and (h)] for a laser intensity of $I=4\times10^{23}~\rm{W/cm^2}$ ($a_0=540$). The data was recorded at $t=10~T_0$ and the initial target densities are 280$~n_c$ (upper pads) and 710$~n_c$ (bottom pads), respectively.}\label{fig1}
\end{figure*}
In this paper, we present enhanced \emph{$e^-e^+$} pair plasma production and its dynamics in thin foils undergoing relativistic transparency. The case of a thin foil irradiated by two counter-propagating lasers (two-side irradiation scheme) is particularly interesting because, in contrast to single-laser solid interaction, the two-side irradiation scheme can significantly enhance the QED effects \cite{Wen Luo2015,H. X. Chang2015}. Here we revisit this irradiation scheme in the relativistic transparency regime and emphasize how transparency affects the pair production and the subsequent particle dynamics. When the target foil undergoes transparency, a stable standing-wave (SW) field can be produced directly by the overlap of the two counter-propagating laser pulses. Such SW fields significantly enhance pair production via the BW process and thereby produce dense \emph{$e^-e^+$} pair plasmas. Modulation of the positron energy, phase-space, and angular distributions is further observed when transparency occurs.
The remainder of this paper is organized as follows. Section II describes the simulation setup. Section III demonstrates dense \emph{$e^-e^+$} pair plasma and $\gamma$-ray burst production in both the relativistic transparency and opaque regimes. In Section IV, the modulation of the produced pair plasmas by stable SW fields in the relativistic transparency regime is discussed. Section V presents a brief discussion and summary of this work.
\vspace{-4.5 em}
\section{simulation setup}
\vspace{-1 em}
An open-source particle-in-cell (PIC) code, EPOCH \cite{C. P. Ridgers2014}, was used to perform multi-dimensional simulations. This code incorporates a radiation-reaction module and a QED module, allowing self-consistent modeling of laser-plasma interactions in the near-QED regime. Two linearly polarized (LP) laser pulses with an equal wavelength of $\lambda_l=1~\mu m$ and duration of $\tau=9~T_0$ focus on the left and right boundaries of the simulation box, respectively, at time $t=0$. Here $T_0=\lambda_l/c$ is the laser period. The lasers have transversely super-Gaussian spatial profiles with electric field $\varpropto \exp(-y^5/1~\mu m^2)$ and are focused to spots with radius $r = 1~\mu m$. The size of the simulation box is $9\lambda_l\times8\lambda_l$. A thin foil of fully ionized carbon and hydrogen, with a carbon-to-hydrogen density ratio of 1:1, is located between $4~\mu m$ and $5~\mu m$ from the left boundary. The foil is discretized on a spatial grid with a cell size of 10 nm and is represented by 500 macro-electrons and 16 macro-ions per cell. A set of initial foil densities ($n_e=80-710~n_c$) and laser intensities ($I=10^{23}-8\times10^{23}~\rm{W/cm^2}$) was used in these simulations.
\section{enhanced $\textnormal\emph{$e^-e^+$}$ pair production}
\vspace{-1 em}
When two counter-propagating laser pulses irradiate a thin foil from opposite sides, the laser hole-boring (HB) stage is initiated. At this stage, the foil electrons and ions are first accelerated in the laser field. Owing to the opposite and equal laser radiation pressures, these accelerated charged particles can hardly escape from the foil. The foil then becomes much denser, with much higher reflectivity for the lasers, enabling the formation of a SW on each side of the foil. When the HB stage ends, the foil enters the thermal expansion stage, in which the sum of the electrostatic and thermal pressures may exceed the laser radiation pressure on each side, leading to decompression and expansion of the foil \cite{H. X. Chang2015}. As the electron density of the fully ionized foil becomes lower/higher than the relativistically corrected critical density, the foil plasma becomes transparent/opaque to the incident laser \cite{L.Willingale2009}. The results are shown in Figs.~\ref{fig1}(a) and (e). It is seen that the foil plasma is underdense at $n_e=280~n_c$, such that the counter-propagating lasers can penetrate the thin foil [see Fig.~\ref{fig1}(b)] and then form a stable SW. This SW has a period of $0.5~T_0$. As the foil plasma becomes denser, for example at $n_e=710~n_c$, it can be opaque to the incident laser [see Fig.~\ref{fig1}(f)]. The incident and reflected lasers then produce unstable and relatively weak SWs on both sides of the irradiated foil, as shown in Fig.~\ref{fig1}(e).
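The $\lambda_l/2$ spacing of the SW pattern and its $0.5~T_0$ temporal period follow directly from the textbook superposition of the two counter-propagating waves (a standard identity, not a simulation result):

```latex
% superposition of two counter-propagating waves of equal amplitude:
E_y(x,t) = E_0\left[\cos(kx-\omega t)+\cos(kx+\omega t)\right]
         = 2E_0\cos(kx)\cos(\omega t),
\qquad
E_y^2 \;\propto\; \cos^2(\omega t) = \tfrac{1}{2}\left[1+\cos(2\omega t)\right].
```

Hence the nodes are spaced $\lambda_l/2$ apart, and the field energy at any fixed point oscillates at $2\omega$, i.e., with period $T_0/2$.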
Figs.~\ref{fig1}(c) and (g) show the spatial distributions of hard photons with energies higher than 1.022 MeV. The number of radiated photons increases with the foil plasma density due to the proportional increase of the foil electron number. These photons have spatial patterns similar to those of the foil electrons in the laser focus. At lower foil plasma density, we find that the majority of photons are produced with relatively higher energies because the incident lasers can penetrate the thin foil, which enhances the amplitude of the EM field. These photons in turn interact sufficiently with the SW field, producing more energetic pairs. The SW field can further bunch the produced pairs in space, leading to the formation of a dense \emph{$e^-e^+$} plasma with a maximum intensity above $10^{20}~\rm{W/cm^2}$, as shown in Fig.~\ref{fig1}(d). This bunching effect, which disappears in the opaque regime, is mainly caused by the SW formed directly by the two incoming laser pulses. When the initial target density is 710$~n_c$, the interaction enters the opaque regime. The majority of photons are produced in the high-density plasma region, where the incident lasers are reflected [see Fig.~\ref{fig1}(f)]. In the opaque regime, energetic photons experience a relatively weak EM field when they escape from the interaction area, and \emph{$e^-e^+$} pair production via the multi-photon BW process becomes deficient. Fig.~\ref{fig1}(h) shows that the intensity of created positrons at $n_e=710~n_c$ is visibly lower than that at $n_e=280~n_c$. The intensity profile further displays a chaotic pattern due to the lack of a stable SW.
\begin{figure}
\centering
\includegraphics[width=9cm]{fig2.png}
\caption{(Color online) Space-time distributions of the right- [(a) and (b)] and left-moving [(c) and (d)] components of the total field for two counter-propagating LP lasers with dimensionless laser amplitude $a_0=540$. The foil densities used in the simulations are $n_e=280~n_c$ [left pads] and $n_e=710~n_c$ [right pads], respectively.}\label{fig2}
\end{figure}
In order to identify the relativistic transparency and opaque regimes, the characteristic behaviors of the right- and left-moving field components are shown in Fig.~\ref{fig2}, which is obtained in simulations with laser amplitude $a_0=540$. These components are defined by \cite{C. Baumann2016}
\begin{equation}\label{1}
F_{r,~l}=\frac{E_y\pm B_z}{2}.
\end{equation}
One can clearly see that in the case of $n_e=280~n_c$, the onset of transparency begins at $t\thickapprox6~T_0$, which leads to a transient SW [see Figs.~\ref{fig2}(a) and (c)]. The velocity of the HB front is estimated as $v_{hb}\thicksim0.3c$, and the HB stage should terminate at $t\approx5.7~T_0$, in accordance with the simulation result. However, in the case of $n_e = 710~n_c$, laser propagation remains visibly interrupted until $t\thickapprox12~T_0$ [see Figs.~\ref{fig2}(b) and (d)], which indicates that the plasma remains opaque during this interaction stage. In the following stage, the injection of laser energy is almost finished, and the remainder of the SWs formed directly by the two incident lasers can hardly affect the production of \emph{$e^-e^+$} pairs. This is also the reason that the number of pairs obtained at higher foil density is visibly smaller than that at lower foil density.
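The quoted $v_{hb}\sim0.3c$ and $t\approx5.7~T_0$ can be reproduced with the standard relativistic hole-boring estimate $v_{hb}/c=\sqrt{\Xi}/(1+\sqrt{\Xi})$ with $\Xi=I/(\rho c^3)$. Two assumptions here are ours, not stated in the text: the CH mass density is taken as 7 electrons per 13 amu, and the $5.7~T_0$ is read as the $4~T_0$ light transit from the box boundary to the foil front plus the time for the two fronts to each bore through half of the $1~\mu m$ foil:

```python
import math

# constants (SI)
c = 2.998e8        # m/s
amu = 1.661e-27    # kg
n_crit = 1.115e27  # m^-3, critical density for 1 um light

I = 4e27                 # W/m^2  (= 4e23 W/cm^2, a0 = 540)
n_e = 280 * n_crit       # electron density
rho = n_e * (13.0 / 7.0) * amu   # fully ionized CH(1:1): 7 e- per 13 amu

Xi = I / (rho * c**3)                          # dimensionless piston parameter
v_hb = math.sqrt(Xi) / (1.0 + math.sqrt(Xi))   # hole-boring speed, units of c

# 4 T0 transit (foil front at 4 um) + 0.5 um bored at v_hb from each side;
# at c, 1 um of travel corresponds to 1 T0 for a 1 um laser
t_end = 4.0 + 0.5 / v_hb   # HB termination time, in laser periods T0
print(round(v_hb, 3), round(t_end, 2))
```

This gives $v_{hb}\approx0.28c$ and $t_{end}\approx5.8~T_0$, consistent with the estimates quoted above.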
\begin{figure}
\centering
\includegraphics[width=9cm]{fig3.png}
\caption{(Color Online) Energy conversion from laser photons to \emph{$e^-e^+$} pairs (circles) and $\gamma$-photons (squares) at $t=13.25~T_0$. Solid lines and dashed lines indicate the simulation results at laser intensities of $4\times10^{23}~\rm{W/cm^2}$ ($a_0=540$) and $6\times10^{23}~\rm{W/cm^2}$ ($a_0=660$), respectively. The vertical line denotes the approximate separatrix between the relativistic transparency and opaque regimes.}\label{fig3}
\end{figure}
We investigate the laser energy conversion to \emph{$e^-e^+$} pairs ($\eta_{pair}$) and $\gamma$-ray photons ($\eta_{\gamma}$) in the relativistic transparency and opaque regimes. Fig.~\ref{fig3} shows this energy transfer as a function of the foil plasma density at $t=13.25~T_0$, which corresponds to the moment when the laser fields have vanished. It is seen that $\eta_{pair}$ first increases and then decreases rapidly with the foil density, whereas $\eta_{\gamma}$ first increases and then approaches a saturation value. For both laser amplitudes, $a_0=540$ and 660, $\eta_{pair}$ has a maximum value at a foil plasma density of $n_e=280~n_c$. This density is slightly lower than the separatrix density, approximately given by $n_e=400~n_c$, for which the relativistic transparency occurs at $t\gtrsim9.5~T_0$, that is, after more than half of the laser pulse has reached the target. The separatrix density observed in the simulations is lower than the theoretical value $a_0 n_c$, due to target compression and QED plasma effects. The QED plasma effect can deplete the laser pulse energy, and thus reduces the laser amplitude in laser-foil interactions. For laser amplitude $a_0=660$, the target compression and the laser depletion due to the QED effect \cite{W. M.Wang} increase significantly, such that the separatrix density does not shift to a higher value compared to that for laser amplitude $a_0=540$.
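For orientation, the two common textbook estimates of the relativistic transparency threshold can be evaluated at $a_0=540$. Both formulas ($\gamma\simeq a_0$ at peak field, and the cycle-averaged $\gamma\simeq\sqrt{1+a_0^2/2}$ for LP light) are standard estimates, not results of these simulations:

```python
import math

a0 = 540          # normalized amplitude for I = 4e23 W/cm^2
n_sep = 400       # separatrix density seen in the simulations, in units of n_c

n_peak = a0                          # gamma ~ a0: peak-field threshold estimate
n_avg = math.sqrt(1 + a0**2 / 2.0)   # cycle-averaged gamma for LP light

print(f"peak-field: {n_peak} n_c, cycle-averaged: {n_avg:.0f} n_c, "
      f"observed separatrix: {n_sep} n_c")
```

The observed $400~n_c$ lies below the peak-field value $a_0 n_c = 540~n_c$, consistent with the compression and QED-depletion effects invoked above.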
\begin{figure*}
\centering
\includegraphics[scale=0.09]{fig4.png}
\caption{(Color online) Ensemble-averaged energy oscillation for positrons (a) and BW-electrons (b). Two counter-propagating laser pulses at three different intensities $I=1.0\times10^{23}~\rm{W/cm^2}$ (black square), $I=4\times10^{23}~\rm{W/cm^2}$ (red cycle) and $I=8\times10^{23}~\rm{W/cm^2}$ (blue triangle) were used to irradiate a 1$\mu m$ CH foil ($n_e=280~n_c$) from opposite sides.}\label{fig4}
\end{figure*}
As displayed in Fig.~\ref{fig3}, the $\eta_{pair}$ obtained in the relativistic transparency regime is obviously higher than that in the opaque regime. For laser amplitude $a_0=540$, reducing the foil plasma density from 710$~n_c$ to 280$~n_c$ increases the laser energy conversion $\eta_{pair}$ from $0.15\%$ to $0.53\%$, an enhancement of about a factor of four; the number of created pairs $N_{pair}$ increases similarly, from $3.4\times10^{10}$ at $n_e=710~n_c$ to $1.2\times10^{11}$ at $n_e=280~n_c$. Meanwhile, the laser energy conversion $\eta_{\gamma}$ remains almost the same, $\thicksim30\%$. The enhancement of \emph{$e^-e^+$} pair production is mainly due to the target transparency. As discussed above, the transparency results in the formation of a stable SW from the direct overlap of the two counter-propagating laser pulses. This pulse overlap enhances the laser field strength, which can accelerate charged particles to higher Lorentz factors, so that each charged particle radiates more high-energy photons. In the following stage, the propagation of these energetic photons through the stable SW field increases the dynamical quantum parameter $\chi_{\gamma}$ \cite{Wen Luo2015,H. X. Chang2015}, which controls \emph{$e^-e^+$} pair production via the multi-photon BW process. As a consequence, more QED pairs can be produced in foils undergoing relativistic transparency. We see in Fig.~\ref{fig3} that an optimum foil density region, namely $200-280~n_c$, is found for realizing the maximum conversion efficiency from laser photons to \emph{$e^-e^+$} pairs.
\vspace{-1 em}
\section{modulation dynamics of $\textnormal\emph{$e^-e^+$}$ pair plasma in the relativistic transparency regime}
\vspace{-0.5 em}
\subsection{Energy modulation}
\vspace{-0.5 em}
After transparency has occurred, the stable SW can modulate the average pair energy. Fig.~\ref{fig4} shows the temporal evolution of the ensemble-averaged energy of the created pairs at laser intensities of $4~\times~10^{23}$ and $8~\times~10^{23}~\rm{W/cm^2}$, respectively. At the initial stage, the HB and thermal expansion effects dominate over the formation of a stable SW, and the energy modulation is ambiguous. At $7.5~T_0<t\lesssim12~T_0$, the plasma foil becomes transparent and stable SW fields are formed consequently [see Figs.~\ref{fig2}(a) and (c)]. During this stage, the average positron energy is modulated periodically [see Fig.~\ref{fig4}]. The modulation period is $0.5~T_0$, as a result of the temporal evolution of the SW. On the contrary, the oscillation of the average positron energy becomes insignificant when the laser-foil interaction occurs in the opaque regime, for example in the case of $n_e=710~n_c$ for $I=4~\times~10^{23}~\rm{W/cm^2}$. This is because only unstable SW fields, formed by the incident and reflected laser pulses, exist in this case. Furthermore, the foil does not undergo transparency at the lower intensity of $10^{23}~\rm{W/cm^2}$; only a few \emph{$e^-e^+$} pairs are produced, and the modulation dynamics are invisible as well.
We present the detailed modulation process as follows. When the SW field becomes strong at $t=nT_0/2$, with $n$ an integer, charged particles experience this field and radiate more energetic photons. As a result, the average energy per particle decreases, and these particles can be readily trapped at the electric nodes due to stronger radiative trapping (RT) \cite{L. L. Jiprl2014} and tight longitudinal confinement, as discussed later. On the contrary, when the overlap of the SWs diminishes at $t=(2n+1)T_0/4$, charged particles radiate less; instead, they can absorb some energy from the varying laser fields, which finally enhances their average energy. This energy oscillation can persist for a few laser periods. After $t\thicksim12~T_0$, it becomes weak and even disappears along with the fading of the SW field. It is seen that the final average energy of the BW-electrons approaches a median value of approximately 200 MeV. Meanwhile, the positron average energy is considerably enhanced by acceleration in the sheath potential generated by fast electrons as they leave the target. The existence of the sheath potential also gives rise to the average energy difference ($\thicksim$20 MeV) between the positrons and BW-electrons in the oscillation stage, as displayed in Figs.~\ref{fig4}(a) and (b).
The energy oscillation of \emph{$e^-e^+$} pairs can be explained by the conservation of the canonical momentum of the positrons (or BW-electrons) after they are spontaneously accelerated to relativistic velocities. The conservation of canonical momentum reads \cite{J. Meyer-Ter-Vehn192}
\begin{equation}\label{2}
p_{\perp}+qA_{\perp}/c={\rm constant}
\end{equation}
\noindent Here $p_{\perp}$ is the transverse momentum of the positron, $q$ is the elementary charge, $A_{\perp}$ is the component of the EM vector potential perpendicular to the particle momentum, and the $\it{constant}$ can be approximated as the median value mentioned above. During the interaction, the peak amplitude of the SW field varies periodically, so that the trapped pairs exhibit a visible oscillation in the average transverse momentum, while the longitudinal momentum remains almost unchanged. The simulations have verified this behavior; a detailed discussion is given in the following subsection.
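For completeness, this conservation law follows in one line from the transverse equation of motion, assuming a plane wave whose vector potential depends only on $x$ and $t$ (Gaussian units; our sketch, following the standard textbook derivation):

```latex
% transverse equation of motion for a charge q in a plane wave A_\perp(x,t):
\frac{dp_{\perp}}{dt}
  = q\left(E_{\perp}+\frac{1}{c}\left[\mathbf{v}\times\mathbf{B}\right]_{\perp}\right)
  = -\frac{q}{c}\left(\partial_t + v_x\,\partial_x\right)A_{\perp}
  = -\frac{q}{c}\,\frac{dA_{\perp}}{dt}
\quad\Longrightarrow\quad
\frac{d}{dt}\left(p_{\perp}+\frac{q}{c}A_{\perp}\right)=0 ,
```

where the middle equality uses $E_{\perp}=-\partial_t A_{\perp}/c$ and $B=\nabla\times A$ with no transverse dependence of the fields.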
At the higher laser intensity of $8~\times~10^{23}~\rm{W/cm^2}$, the regular modulation appears earlier than at lower laser intensity, as displayed in Fig.~\ref{fig4}. The reason is that the relatively strong laser radiation pressure can shorten the stages of both laser HB and target thermal expansion, hence forming the stable SW sooner. Furthermore, the overall charged-particle energy decreases due to the accumulation of considerable radiation loss, which cannot be compensated by the energy gain from the laser fields.
\vspace{-1.1 em}
\subsection{Phase-space modulation}
\vspace{-0.5 em}
Besides the particle energy oscillation, a visible phase-space modulation can be observed when thin foils undergo relativistic transparency. It is seen in Fig.~\ref{fig5} that the strength of the SW field is always zero at the nodes $x=n\lambda_l/2$ and maximum at the antinodes $x=(2n+1)\lambda_l/4$. The field strength at the antinodes alternates between strong and weak during each half laser period. The created \emph{$e^-e^+$} pairs are trapped at the nodes of the SW field when the field strength becomes strong at $t=nT_0/2$, which further helps to form stripe distributions along the direction of laser polarization [see the example shown in Fig.~\ref{fig5}(a)]. The \emph{$e^-e^+$} pair bunches have a regular spatial interval of $\lambda_l/2$. After one-fourth of a laser period, the bunched pattern evolves into a chaotic lobe as the field strength diminishes [see the example in Fig.~\ref{fig5}(b)].
\begin{figure}
\centering
\includegraphics[width=9cm]{fig5.png}
\caption{Positron spatial density distributions in the $\it{x-y}$ plane (upper parts) and longitudinal profiles of the electric field $E_y$ (lower parts) at $t=10~T_0$ (a) and $t=10.25~T_0$ (b), respectively. Due to the mirror symmetry about the $\it{x}$-axis, only the positron spatial density profile in the range $0<y<4~\mu m$ and the longitudinal profile of the laser field $E_y$ in $-4~\mu m<y<0$ are plotted. Longitudinal distributions of the positron number (black) and longitudinal profiles of $E_y$ (red) at $t=10~T_0$ (c) and $t=10.25~T_0$ (d). A laser intensity of $4\times10^{23}~\rm{W/cm^2}$ and a foil density of $n_e=280~n_c$ are used in the simulations.}\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{fig6.png}
\caption{(Color online) Positron phase-space distributions for $x-p_x$ at time $t=10~T_0$ (a) and $t=10.25~T_0$ (c), and for $y-p_y$ at $t=10~T_0$ (b) and $t=10.25~T_0$ (d).}\label{fig6}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{fig7.png}
\caption{(Color online) Polar distributions of foil electron, $\gamma$-photon and positron intensities at $t=10.0~T_0$ and $10.25~T_0$ (a), and polar distributions of foil electron, $\gamma$-photon, BW-electron and positron intensities at $t=14.5~T_0$ (b), for a $\it{p}$-polarized laser. Azimuthal distributions of foil electron, $\gamma$-photon and positron intensities at $t=10.0~T_0$ and $10.25~T_0$, for $\it{p}$- (c) and $\it{s}$- (d) polarized lasers. In pads (a), (c) and (d), the black, cyan and blue solid lines represent foil electrons, $\gamma$-photons and positrons, respectively, at $t=10.0~T_0$; the magenta, green and red dashed lines indicate those at $t=10.25~T_0$, respectively. The polar distribution of the BW-electron intensity displays a pattern similar to that of the positrons and is therefore not shown here. A laser intensity of $4~\times~10^{23}~\rm{W/cm^2}$ and a foil density of $n_e=280~n_c$ were used in the simulations.}\label{fig7}
\end{figure*}
The positron density peaks at the electric nodes, as can be clearly seen in Fig.~\ref{fig5}(c), which plots the longitudinal distribution of positron number at a fixed moment in time $t=10~T_0$. At this moment, charged particles experience an electric field almost double that which they experience at $t=10.25~T_0$ [see Fig.~\ref{fig5}]. Consequently these particles quiver violently in the intense field and lose a substantial amount of energy by radiating high-energy photons. In turn they experience a strong radiation-reaction (RR) force, leading to visible RT. Direct evidence for RT dynamics is the decrease of the trapped particles' transverse momentum \cite{L. L. Jiprl2014}, which is clearly seen in Fig.~\ref{fig6}. Instead of being focused only at the electric nodes, more positrons tend to appear in the subspace between two electric nodes at $t=10.25~T_0$ [see Fig.~\ref{fig5}(d)].
Figs.~\ref{fig6}(a) and (b) show positron phase-space distributions at time $t=10~T_0$. The $x-p_x$ distribution exhibits a periodic transverse stripe structure, which agrees with the spatial pattern shown in Fig.~\ref{fig5}(a). Instead of being expelled transversely by the ponderomotive force, positrons are mainly trapped in the high-field region of the SW. Consequently the $y-p_y$ distribution in Fig.~\ref{fig6}(b) displays a bright elliptical pattern in the center. However, the particle dynamics is different at time $t=10.25~T_0$, which corresponds to a moment when the SW field strength at the antinodes is weak. At this moment both the confinement of charged particles and the RT effect weaken. Positrons with $p_x>0$ tend to move forward, and those with $p_x<0$ move backward. As a result, the transverse stripe structure becomes oblique with respect to the longitudinal direction [see Fig.~\ref{fig6}(c)]. The weakened SW field further disperses the positrons in the $y-p_y$ phase space, since the RR effect becomes too insignificant to compensate for their transverse dispersion.
\vspace{-1.25 em}
\subsection{Angular distribution}
\vspace{-0.5 em}
We continue to investigate angular distributions of foil electrons, $\gamma$-photons and \emph{$e^-e^+$} pairs. Since it is necessary to consider additional plasma effects, such as thermal expansion along the $\it{z}$-direction, more realistic three-dimensional (3D) simulations were performed. We define $\theta=0^{\circ}$ along the direction of laser propagation, and $\phi=0^{\circ}~(90^{\circ})$ along the direction of $\it{p (s)}$-polarization. The two-side irradiation scheme irradiates and compresses the thin foil more symmetrically; thus foil electrons, $\gamma$-photons and positrons have symmetric $\theta$-intensity patterns with respect to the azimuthal plane [see Fig.~\ref{fig7}(a)]. Since the laser intensity is much higher than the relativistic intensity of the order of $10^{18}~\rm{W/cm^2}$, the effects of the magnetic and electric fields on the motion of charged particles become comparable. Relativistic \emph{$e^-e^+$} pairs produced in such a field are predicted to quiver nonlinearly, moving in figure-of-eight trajectories rather than in straight lines.
The modulation of the polar distribution of positron intensity is seen in Fig.~\ref{fig7}(a). In the presence of the SW field, the polar patterns of positrons at times $t=10.0~T_0$ and 10.25$~T_0$ are ``pinched'' along the direction of laser polarization due to the confinement effect of the SW field. The polar pattern at time $t=10.0~T_0$ is slimmer than that at $t=10.25~T_0$, owing to a tighter longitudinal confinement. To confirm this effect, Fig.~\ref{fig7}(b) shows that the ``pinched'' effect disappears once the SW field has faded. Fig.~\ref{fig7}(b) also shows that, unlike the BW-electrons, the positron intensity has an isotropic distribution, due to the influence of sheath acceleration. For $\it{p}$-polarized lasers, foil electrons experience transverse electric fields. Both they and the hard photons they emit tend to move along the direction of laser polarization, $\it{i.e}$., $\theta=90^{\circ}$.
Figs.~\ref{fig7}(c) and (d) show azimuthal distributions of foil electron, $\gamma$-photon and positron intensities for $\it{p-}$ and $\it{s}$-polarized lasers, respectively. For both laser polarizations, foil electrons have angular patterns with maximal intensity along their respective laser polarization directions. However, such a pattern is slightly ``fat'' as a result of target thermal expansion after the foil is symmetrically compressed by the laser radiation pressures. The azimuthal distribution of $\gamma$-photon intensity peaks on the axis of laser polarization, suggesting high-order harmonic generation based on Compton scattering in the highly nonlinear regime. The positron intensity displays patterns similar to those of the $\gamma$-photons. Figs.~\ref{fig7}(c) and (d) also show that the temporal evolution of the SW field hardly affects the positron azimuthal pattern, in contrast to its modulation of the positron polar structure.
\vspace{-1.25 em}
\section{summary and conclusion}
\vspace{-0.5 em}
A thin foil undergoing relativistic transparency can enhance \emph{$e^-e^+$} pair production. The produced pair plasma has a density as high as $10^{22}~\rm{cm^{-3}}$. This implies a beam proper density of $n_{prop}=n_e/\gamma_{AV}\simeq2.5\times10^{19}~\rm{cm^{-3}}$, given a bulk Lorentz factor of $\gamma_{AV}\simeq400$. The beam of the produced pair plasma has a transverse size of the order of $D_B\simeq2~\mu m$. The relativistically corrected collisionless skin depth of the beam is thus $l_{skin}=c/\omega_{prop}\simeq1.1~\mu m$, with $\omega_{prop}$ being the relativistic plasma frequency. This value is smaller than the beam transverse size, indicating that collective (that is, plasma-like) behaviors are likely to occur in the beam. Since the created plasmas are dense enough to permit collective and kinetic behaviors to play a role, they resemble the conditions of astrophysical events such as the jets of long $\gamma$-ray bursts \cite{J. Wardle1998,I. F. Mirabel1999}.
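These scale estimates can be cross-checked with a short numerical sketch, using only the density and Lorentz factor quoted above:

```python
import math

# SI constants
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

n_e = 1e22 * 1e6         # pair density quoted above, converted to m^-3
gamma_av = 400.0         # bulk Lorentz factor quoted above

# Beam proper density n_prop = n_e / gamma_AV (~2.5e25 m^-3, i.e. 2.5e19 cm^-3)
n_prop = n_e / gamma_av

# Plasma frequency evaluated at the proper density, and collisionless skin depth
omega_prop = math.sqrt(n_prop * e**2 / (eps0 * m_e))
l_skin = c / omega_prop  # ~1.1e-6 m, i.e. ~1.1 microns
```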
In summary, the generation of overdense pair plasmas and the subsequent modulation dynamics have been investigated by simulations of two QED-strong lasers irradiating a thin foil from opposite sides. In the relativistic transparency regime the laser energy conversion to \emph{$e^-e^+$} pairs increases fourfold compared to that in the opaque regime, demonstrating enhanced \emph{$e^-e^+$} pair production. At a laser intensity of $4~\times~10^{23}~\rm{W/cm^2}$, the produced \emph{$e^-e^+$} pair plasmas have a high energy density exceeding $10^{20}~\rm{W/cm^2}$. Although the beam transverse size is only a few microns, the created pair plasmas are dense enough to allow collective and kinetic behaviors to play a role. The modulation dynamics of \emph{$e^-e^+$} pairs after transparency has occurred is further demonstrated: the positron average energy, phase-space and angular distributions can be modulated periodically by the stable SW field formed directly by the counter-propagating laser pulses.
\vspace{-1.25 em}
\section{acknowledgements}
\vspace{-0.5 em}
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11405083, 11347028 and 11675075), and the Graduate Student Innovation Project Foundation of Hunan Province (Grant No. CX2016B450). W.L. appreciates the support from China Scholarship Council and Young Talent Project of the University of South China. M.C. appreciates the support from National 1000 Youth Talent Project of China.
Consider the autonomous dynamical system
\begin{equation}
\frac{dy}{dt}=f(y,p)
\end{equation}
where $f:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}^n$, and the non-autonomous system
\begin{equation}
\frac{dy}{dt}=f(t,y,p)
\end{equation}
where $f:\mathbb{R}\times\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}^n$ is periodic in $t$, namely, there exists some positive $T$ such that $f(t,y,p)=f(t+T,y,p)$ for all $t,y$ and $p$. The goal of this report is to present a unified toolbox \texttt{tor}\footnote{The \texttt{tor} toolbox is available at \url{https://github.com/mingwu-li/torus_collocation}} for the continuation of \emph{two-dimensional tori} in the above two types of dynamical systems.
This toolbox includes the following core functions:
\begin{itemize}
\item \texttt{ode\_TR2tor}: continuation of tori from a \textit{Neimark-Sacker/torus} bifurcation periodic orbit,
\item \texttt{ode\_isol2tor}: continuation of tori from an initial solution guess,
\item \texttt{ode\_tor2tor}: continuation of tori from a saved torus solution,
\item \texttt{ode\_BP2tor}: continuation of tori along a secondary branch passing through a branch point.
\end{itemize}
This toolbox is based on the continuation package \textsc{coco}~\cite{coco}, which already contains examples of continuation of tori. Specifically, the continuation of tori in the autonomous \emph{Langford} dynamical system is studied in Chapter 9 of~\cite{dankowicz2013recipes}, a book that documents the development of the package in great detail. In the \texttt{coll} tutorial of \textsc{coco}~\cite{coco-coll}, the continuation of tori in a non-autonomous system with harmonic excitation is studied. Based on these examples, we develop a \emph{general-purpose} toolbox for the continuation of tori in dynamical systems.
In the continuation of periodic orbits, a Neimark-Sacker/torus (TR) bifurcation may be detected. It follows that a family of tori emanating from the bifurcation point can be obtained. With the \texttt{po} toolbox of \textsc{coco}, one can perform the continuation of periodic orbits. Once a TR bifurcation periodic solution is found using \texttt{po}, one may switch to the continuation of tori starting from such a bifurcation periodic orbit using \texttt{ode\_TR2tor}. Therefore, the \texttt{tor} toolbox connects to the \texttt{po} toolbox seamlessly.
The rest of this report is organized as follows. In section~\ref{sec:prob-form}, the problem formulation for tori is given. Specifically, a partial differential equation and phase conditions are derived so that tori can be computed via a boundary-value problem approach. Discretization schemes for the boundary-value problem are then discussed in section~\ref{sec:dist}. In section~\ref{sec:continuation}, parameter continuation of tori is discussed. Following the analysis of the dimension deficit of the continuation problem, the initialization of a torus solution born at a TR bifurcation periodic orbit is presented. In section~\ref{sec:example}, several examples are presented to demonstrate the effectiveness of the toolbox. Finally, section~\ref{sec:discussion} concludes this report with a discussion of the limitations and future enhancements of the current toolbox.
\section{Problem formulation}
\label{sec:prob-form}
\subsection{PDE formulation}
In this section, we focus on the case of non-autonomous dynamical systems. It is straightforward to see that the same formulation holds for autonomous systems as well. The formulation has been derived in~\cite{dankowicz2013recipes,coco-coll}. We include the derivation here for completeness.
For a 2-dimensional quasi-periodic invariant torus, we introduce a torus function $u:\mathbb{S}\times\mathbb{S}\to \mathbb{R}^n$ and two frequencies $\omega_1,\omega_2\in\mathbb{R}$ such that the angle variables evolve according to the parallel flow
\begin{equation}
\frac{d\theta_i}{dt} = \omega_i, \quad i=1,2.
\end{equation}
Substitution of $y(t) = u(\theta_1(t),\theta_2(t))$ into the vector field yields
\begin{equation}
\omega_1\frac{\partial u}{\partial\theta_1}(\theta_1,\theta_2) + \omega_2\frac{\partial u}{\partial\theta_2}(\theta_1,\theta_2) = f(t,u(\theta_1,\theta_2),p).
\end{equation}
The above equation is a first order \emph{quasi-linear} partial differential equation (PDE). With the method of characteristics, we let $\theta_1 = \varphi + \omega_1 t$ and $\theta_2 = \omega_2t$ and introduce
\begin{equation}
v(\varphi,t): = u(\varphi + \omega_1 t,\omega_2 t)
\end{equation}
for $\varphi\in\mathbb{S}$ and $t\in [0, {2\pi}/{\omega_2}]$.
From the transformation $(\varphi,t)\mapsto(\theta_1,\theta_2)$, we obtain $\varphi=\theta_1-\varrho\theta_2$ and $t = {\theta_2}/{\omega_2}$ with $\varrho=\omega_1/\omega_2$. It follows that
\begin{gather}
\omega_1\frac{\partial u}{\partial\theta_1}(\theta_1,\theta_2) = \omega_1\bigg(\frac{\partial v}{\partial \varphi}\frac{\partial \varphi}{\partial\theta_1} + \frac{\partial v}{\partial t}\frac{\partial t}{\partial\theta_1}\bigg)= \omega_1\frac{\partial v}{\partial \varphi},\\
\omega_2\frac{\partial u}{\partial\theta_2}(\theta_1,\theta_2) = \omega_2\bigg(\frac{\partial v}{\partial \varphi}\frac{\partial \varphi}{\partial\theta_2} + \frac{\partial v}{\partial t}\frac{\partial t}{\partial\theta_2}\bigg)= -\omega_1\frac{\partial v}{\partial \varphi}+\frac{\partial v}{\partial t},
\end{gather}
and the vector field becomes
\begin{equation}
\label{eq:pde}
\frac{\partial v}{\partial t} (\varphi,t) = f(t, v(\varphi,t),p),\quad \varphi\in\mathbb{S},t\in [0,T]
\end{equation}
where $T:={2\pi}/{\omega_2}$.
As for boundary conditions, note that
\begin{gather}
v(\varphi,0)=u(\varphi,0),\\ u(\varphi+2\pi\varrho,0)=u(\varphi+2\pi\varrho,2\pi)=u(\varphi+\omega_1\cdot2\pi/\omega_2,\omega_2\cdot2\pi/\omega_2)=v(\varphi,2\pi/\omega_2)=v(\varphi,T),
\end{gather}
and then an \emph{all-to-all} coupling condition is obtained as follows
\begin{equation}
\label{eq:pde-bc}
v(\varphi+2\pi\varrho,0)=v(\varphi,T),\quad \forall\varphi\in\mathbb{S}.
\end{equation}
Equations \eqref{eq:pde} and \eqref{eq:pde-bc} constitute a boundary-value problem (BVP). However, we need phase conditions to yield a unique solution to the BVP.
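The \emph{all-to-all} coupling condition can be verified numerically for any smooth torus function. The following is a minimal sketch, where the torus function $u$ and the two frequencies are hypothetical choices made purely for illustration:

```python
import math

om1, om2 = 1.3, 2.1          # illustrative frequencies omega_1, omega_2
varrho = om1 / om2           # rotation number
T = 2.0 * math.pi / om2      # T = 2*pi/omega_2

# A hypothetical smooth torus function, 2*pi-periodic in both angles
def u(th1, th2):
    return math.cos(th1) + 0.5 * math.sin(th2)

# v(phi, t) := u(phi + om1*t, om2*t)
def v(phi, t):
    return u(phi + om1 * t, om2 * t)

# All-to-all coupling: v(phi + 2*pi*varrho, 0) == v(phi, T) for every phi
for k in range(7):
    phi = 2.0 * math.pi * k / 7.0
    assert abs(v(phi + 2.0 * math.pi * varrho, 0.0) - v(phi, T)) < 1e-12
```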
\subsection{Phase conditions}
In the case of non-autonomous systems, we need one phase condition because the solution is invariant under shifts in $\varphi\in\mathbb{S}$. We use the Poincar\'e phase condition as follows
\begin{equation}
\label{eq:phase1}
\langle v_\varphi^\ast(0,0), v(0,0)-v^\ast(0,0)\rangle = 0,
\end{equation}
where $v^\ast(\varphi,t)$ is a \emph{known} function of $\varphi$ and $t$, and $v^\ast_\varphi$ is its partial derivative with respect to $\varphi$. During continuation, $v^\ast(\varphi,t)$ is updated to the solution of the previous continuation step, so the Poincar\'e section is \emph{moving} during continuation. The integral phase condition provides an alternative to the Poincar\'e phase condition; the reader may refer to~\cite{schilder2005continuation} for more details.
In the case of autonomous systems, we need one more phase condition since \eqref{eq:pde} becomes autonomous and is hence also invariant under time shifts. For consistency, we again use a Poincar\'e phase condition
\begin{equation}
\label{eq:phase2}
\langle v_t^\ast(0,0), v(0,0)-v^\ast(0,0)\rangle = 0,
\end{equation}
where $v^\ast_t$ is the partial derivative of $v^\ast(\varphi,t)$ with respect to $t$. Likewise, the Poincar\'e section here is updated during continuation.
\section{Discretization}
\label{sec:dist}
One needs to discretize the domain $\mathbb{S}\times[0,T]$ to obtain an approximate BVP. We first apply a truncated Fourier series expansion of $v(\varphi,t)$ in $\varphi$, yielding a \emph{multi-segment} BVP with ordinary differential equations (ODEs). Then, a collocation method is used to approximate the obtained ODEs, yielding a set of nonlinear algebraic equations.
\subsection{Fourier series expansion}
Following~\cite{dankowicz2013recipes}, let $\chi$ denote the truncated Fourier series expansion of a component of $v(\varphi,t)$; we have
\begin{equation}
\label{eq:FourierExp}
v_i(\varphi,t)\approx \chi(\varphi,t):= a_0(t) + \sum_{k=1}^N a_k(t)\cos( k\varphi) + b_k(t)\sin( k\varphi).
\end{equation}
It follows that
\begin{equation}
\begin{pmatrix}\chi(0,t)\\\chi(\frac{2\pi}{2N+1},t)\\\vdots\\\chi(\frac{4\pi N}{2N+1},t)\end{pmatrix} = \mathcal{F}^{-1} \begin{pmatrix}a_0(t)\\a_1(t)\\b_1(t)\\\vdots\\a_N(t)\\b_N(t)\end{pmatrix}
\end{equation}
where $\mathcal{F}$ is the \emph{discrete Fourier transform matrix} given by equation (9.21) in~\cite{dankowicz2013recipes}.
Likewise, we have
\begin{equation}
v_i(\varphi+2\pi\varrho,t)\approx \chi(\varphi+2\pi\varrho,t):= a'_0(t) + \sum_{k=1}^N a'_k(t)\cos( k\varphi) + b'_k(t)\sin( k\varphi).
\end{equation}
It follows that
\begin{equation}
\begin{pmatrix}a'_0(t)\\a'_1(t)\\b'_1(t)\\\vdots\\a'_N(t)\\b'_N(t)\end{pmatrix} = \mathcal{R} \begin{pmatrix}a_0(t)\\a_1(t)\\b_1(t)\\\vdots\\a_N(t)\\b_N(t)\end{pmatrix}
\end{equation}
where $\mathcal{R}$ is given by equation (38) in~\cite{coco-coll} or equation (9.24) in~\cite{dankowicz2013recipes}.
From the derived \emph{all-to-all} coupling boundary condition, we have
\begin{equation}
\chi(\varphi,T) = \chi(\varphi+2\pi\varrho,0),
\end{equation}
or equivalently
\begin{equation}
a_0(T) + \sum_{k=1}^N a_k(T)\cos( k\varphi) + b_k(T)\sin( k\varphi) = a'_0(0) + \sum_{k=1}^N a'_k(0)\cos( k\varphi) + b'_k(0)\sin( k\varphi)
\end{equation}
from which we obtain
\begin{equation}
\begin{pmatrix}a'_0(0)\\a'_1(0)\\b'_1(0)\\\vdots\\a'_N(0)\\b'_N(0)\end{pmatrix} = \begin{pmatrix}a_0(T)\\a_1(T)\\b_1(T)\\\vdots\\a_N(T)\\b_N(T)\end{pmatrix}.
\end{equation}
Substitution yields
\begin{equation}
\mathcal{R}\begin{pmatrix}a_0(0)\\a_1(0)\\b_1(0)\\\vdots\\a_N(0)\\b_N(0)\end{pmatrix} = \begin{pmatrix}a_0(T)\\a_1(T)\\b_1(T)\\\vdots\\a_N(T)\\b_N(T)\end{pmatrix}\implies \mathcal{R}\mathcal{F}\begin{pmatrix}\chi(0,0)\\\chi(\frac{2\pi}{2N+1},0)\\\vdots\\\chi(\frac{4\pi N}{2N+1},0)\end{pmatrix} = \mathcal{F}\begin{pmatrix}\chi(0,T)\\\chi(\frac{2\pi}{2N+1},T)\\\vdots\\\chi(\frac{4\pi N}{2N+1},T)\end{pmatrix}.
\end{equation}
We proceed to discretize the continuous family $v(\varphi,t)$ by restricting attention to the
mesh $\{\varphi_j\}_{j=1}^{2N+1}$, where $\varphi_j:=\frac{2(j-1)\pi}{2N+1}$. Following the above equation, the discrete \emph{all-to-all} boundary condition is given by
\begin{equation}
\label{eq:all-to-all}
(\mathcal{F}\otimes I_n) \begin{pmatrix}v(\varphi_1,T)\\v(\varphi_2,T)\\\vdots\\v(\varphi_{2N+1},T)\end{pmatrix} = ((\mathcal{R}\mathcal{F})\otimes I_n) \begin{pmatrix}v(\varphi_1,0)\\v(\varphi_2,0)\\\vdots\\v(\varphi_{2N+1},0)\end{pmatrix},
\end{equation}
where $I_n$ denotes identity matrix of size $n$.
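The discrete coupling condition can be checked numerically by constructing a real DFT matrix $\mathcal{F}$ and the rotation matrix $\mathcal{R}$ and applying them to samples of a trigonometric polynomial. Note that the normalization of $\mathcal{F}$ below follows one standard real-DFT convention and may differ from equation (9.21) of~\cite{dankowicz2013recipes} by a scaling; the rotation number and test function are illustrative:

```python
import numpy as np

N = 4                       # truncation order; mesh has 2N+1 points
M = 2 * N + 1
phi = 2 * np.pi * np.arange(M) / M

# Real DFT matrix F: samples -> coefficients (a0, a1, b1, ..., aN, bN)
F = np.zeros((M, M))
F[0, :] = 1.0 / M
for k in range(1, N + 1):
    F[2 * k - 1, :] = 2.0 / M * np.cos(k * phi)
    F[2 * k, :] = 2.0 / M * np.sin(k * phi)

def rot(rho_hat):
    """Block-diagonal rotation R: coefficients of chi(phi) -> chi(phi + rho_hat)."""
    R = np.zeros((M, M))
    R[0, 0] = 1.0
    for k in range(1, N + 1):
        c, s = np.cos(k * rho_hat), np.sin(k * rho_hat)
        R[2 * k - 1, 2 * k - 1] = c
        R[2 * k - 1, 2 * k] = s
        R[2 * k, 2 * k - 1] = -s
        R[2 * k, 2 * k] = c
    return R

varrho = 0.37               # illustrative rotation number
chi = lambda p: 1.0 + np.cos(p) - 0.3 * np.sin(2 * p)   # degree <= N, exactly representable

# Discrete all-to-all coupling: (R F) chi(phi_j) == F chi(phi_j + 2*pi*varrho)
lhs = rot(2 * np.pi * varrho) @ F @ chi(phi)
rhs = F @ chi(phi + 2 * np.pi * varrho)
assert np.allclose(lhs, rhs)
```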
In addition, we obtain the following set of ODEs
\begin{equation}
\label{eq:odes}
\frac{dv(\varphi_j,t)}{dt} = f(t,v(\varphi_j,t),p),\quad j=1,\cdots,2N+1, t\in[0,T].
\end{equation}
Along with the boundary condition \eqref{eq:all-to-all} and appropriate discrete phase conditions (to be discussed in section~\ref{sec:dist-phase}), we have a \emph{multi-segment} BVP with a set of ODEs.
\subsection{Collocation}
We apply collocation methods to solve the above multi-segment BVP. Here we summarize the basic idea of collocation methods; one may refer to~\cite{dankowicz2013recipes} for more details.
\begin{enumerate}
\item Division of domain: the domain $[0,T]$ is divided into subintervals.
\item Approximation of unknown functions: the unknown function within each subinterval is approximated using Lagrangian interpolation of values of the function at a set of base points, and the continuity of the function between two adjacent subintervals is imposed.
\item Approximation of differential equations: the ODEs are satisfied at a set of collocation nodes.
\end{enumerate}
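The steps above can be sketched on a scalar test problem. The following is a minimal illustration of collocation (one subinterval, monomial basis, Gauss nodes) and is \emph{not} the adaptive scheme implemented in \texttt{coll}; the test equation $\dot{x}=\lambda x$ and all numerical values are chosen purely for illustration:

```python
import numpy as np

# Collocation sketch for x' = lam*x, x(0) = 1 on [0, 1]:
# one subinterval, polynomial of degree m in the monomial basis
lam, m = -2.0, 6
nodes, _ = np.polynomial.legendre.leggauss(m)
tc = 0.5 * (nodes + 1.0)                 # Gauss nodes mapped to [0, 1]

# Unknowns: coefficients c_0..c_m of p(t) = sum_k c_k t^k.
# Rows: p'(tc) - lam*p(tc) = 0 at each collocation node, plus p(0) = 1.
A = np.zeros((m + 1, m + 1))
b = np.zeros(m + 1)
for i, t in enumerate(tc):
    for k in range(m + 1):
        dp = k * t ** (k - 1) if k > 0 else 0.0
        A[i, k] = dp - lam * t ** k
A[m, 0] = 1.0                            # initial condition p(0) = c_0 = 1
b[m] = 1.0

c = np.linalg.solve(A, b)
p1 = float(np.sum(c))                    # p(1): agrees with exp(lam) to high accuracy
assert abs(p1 - np.exp(lam)) < 1e-6
```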
The collocation method with adaptive mesh has been implemented in the \texttt{coll} toolbox of \textsc{coco}. The \texttt{tor} toolbox here is built upon the \texttt{coll} toolbox.
\subsection{Phase conditions}
\label{sec:dist-phase}
The imposition of phase conditions \eqref{eq:phase1} and \eqref{eq:phase2} is straightforward with the above discretization. We have $v^\ast_t(0,0)=f(0,v^\ast(0,0),p)$. To obtain an approximation of $v_\varphi^\ast(0,0)$, let us recall \eqref{eq:FourierExp}
and then
\begin{equation}
\chi_\varphi(\varphi,t)= \sum_{k=1}^N \left( kb_k(t)\right)\cos(k\varphi)+\left(-ka_k(t)\right)\sin(k\varphi).
\end{equation}
Plugging $(\varphi,t)=(0,0)$ yields
\begin{equation}
\chi_\varphi(0,0)=\sum_{k=1}^N kb_k(0).
\end{equation}
Recall
\begin{equation}
\begin{pmatrix}a_0(0)\\a_1(0)\\b_1(0)\\\vdots\\a_N(0)\\b_N(0)\end{pmatrix} = \mathcal{F}\begin{pmatrix}\chi(\varphi_1,0)\\\chi(\varphi_2,0)\\\vdots\\\chi(\varphi_{2N+1},0)\end{pmatrix},
\end{equation}
we can express $\chi_\varphi(0,0)$ as a linear function of $\{\chi(\varphi_j,0)\}_{j=1}^{2N+1}$. It follows that we can obtain $v_\varphi^\ast(0,0)$ in terms of $\{v^\ast(\varphi_j,0)\}_{j=1}^{2N+1}$.
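The closed-form derivative $\chi_\varphi(0,0)=\sum_k k b_k(0)$ can be cross-checked against a finite difference; the coefficient values below are arbitrary illustrative numbers:

```python
import math

# Coefficients of a truncated Fourier series (illustrative values)
a = [0.7, 0.2, -0.1, 0.05]          # a_0 ... a_3
b = [0.0, 0.4, 0.3, -0.2]           # b_0 (unused placeholder), b_1 ... b_3

def chi(phi):
    s = a[0]
    for k in range(1, 4):
        s += a[k] * math.cos(k * phi) + b[k] * math.sin(k * phi)
    return s

# Closed form of the derivative at phi = 0: sum_k k*b_k
deriv = sum(k * b[k] for k in range(1, 4))

# Central finite difference as a cross-check
h = 1e-6
fd = (chi(h) - chi(-h)) / (2 * h)
assert abs(deriv - fd) < 1e-8
```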
\section{Parameter continuation}
\label{sec:continuation}
\subsection{Dimension deficit}
\label{sec:deficit}
Recall the system parameters $p\in\mathbb{R}^q$; we regard $\omega_1$, $\omega_2$ and $\varrho$ as system parameters as well. Other than the \emph{all-to-all} coupling condition, we have the following boundary conditions
\begin{equation}
T_0=0, T=\frac{2\pi}{\omega_2},\varrho=\frac{\omega_1}{\omega_2},
\end{equation}
where $T_0$ is the initial time.
With initially inactive system parameters, the dimension deficit is $-1$. We also need to account for the phase conditions. It follows that
\begin{itemize}
\item \emph{autonomous systems}: two phase conditions are included and then the dimension deficit is $-3$. So four parameters need to be released to yield a one-dimensional solution manifold. For instance, one may release $\omega_1,\omega_2$ and $p_{1,2}$ to obtain a family of tori with fixed rotation number $\varrho$,
\item \emph{non-autonomous systems}: one phase condition is included and we have a coupling condition for system parameters, namely, $\Omega_2-\omega_2=0$, where $\Omega_2$ is a parameter included in $p$ and characterizes the period of non-autonomous forcing ($T=2\pi/\Omega_2$). So the dimension deficit is again $-3$ and we need to release four parameters to obtain a one-dimensional manifold of tori.
\end{itemize}
\subsection{Initialization}
In the continuation of periodic orbits, a Neimark-Sacker (torus) bifurcation may be observed. The occurrence of a torus bifurcation indicates the birth of tori. Here we provide an initial guess for such a torus. The function \texttt{ode\_TR2tor} is based on the initialization presented in this section, which is adapted from the derivation in~\cite{olikara2010computation}.
Consider the dynamical system
\begin{equation}
\dot{x}=f(t,x,p),\quad x(t_0)=x_0
\end{equation}
with a solution $x(t)$. It follows that
\begin{equation}
x(t)=x_0+\int_{t_0}^t f(s,x(s),p)\mathrm{d}s
\end{equation}
and hence
\begin{equation}
\frac{\partial x(t)}{\partial x_0}=\mathbb{I}+\int_{t_0}^t \frac{\partial f(s,x(s),p)}{\partial x}\frac{\partial x(s)}{\partial x_0}\mathrm{d}s.
\end{equation}
Let $\Phi(t,t_0):=D_{x_0}x(t)=\frac{\partial x(t)}{\partial x_0}$, we have
\begin{equation}
\dot{\Phi}=f_x\Phi,\quad \Phi(t_0)=\mathbb{I}
\end{equation}
and the transition matrix $\mathcal{M}(t,t_0)$ is given by $\Phi(t,t_0)$ and the monodromy matrix is $\mathcal{M}(t_0+T,t_0)$.
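As a sanity check of this construction, one can integrate the variational equation numerically for a simple linear system and compare the resulting monodromy matrix with its closed form. The system matrix, period and step count below are illustrative choices, not part of the toolbox:

```python
import numpy as np

# A linear system with (trivially) periodic coefficients: x' = A x,
# A = [[0, w], [-w, 0]], treated with period T = 1. Illustrative only.
w = 1.7
T = 1.0
A = np.array([[0.0, w], [-w, 0.0]])

def monodromy(A, T, steps=2000):
    """Integrate the variational equation Phi' = A Phi, Phi(0) = I, with classical RK4."""
    Phi = np.eye(2)
    h = T / steps
    f = lambda P: A @ P
    for _ in range(steps):
        k1 = f(Phi)
        k2 = f(Phi + 0.5 * h * k1)
        k3 = f(Phi + 0.5 * h * k2)
        k4 = f(Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Phi

M = monodromy(A, T)

# For this A, Phi(t) = [[cos wt, sin wt], [-sin wt, cos wt]],
# so the monodromy eigenvalues are exp(+/- i*w*T)
exact = np.array([[np.cos(w * T), np.sin(w * T)],
                  [-np.sin(w * T), np.cos(w * T)]])
assert np.allclose(M, exact, atol=1e-9)

lam = np.linalg.eigvals(M)
assert np.allclose(np.sort(lam.imag), [-np.sin(w * T), np.sin(w * T)], atol=1e-9)
```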
In general, we solve the above variational equation with initial condition $\Phi_0$. Let $\delta x_0=\Phi_0 v$, it follows that
\begin{equation}
\delta x(t)=\Phi(t,t_0) v=\Phi(t,t_0)\Phi_0^{-1}\delta x_0\implies \mathcal{M}(t,t_0)=\Phi(t,t_0)\Phi_0^{-1}.
\end{equation}
Suppose $\mathcal{M}(t_0+T,t_0)$ has an eigenvalue $e^{\mathrm{i}\alpha}$ and an eigenvector $v$ such that
\begin{equation}
\mathcal{M}(t_0+T,t_0)v = e^{\mathrm{i}\alpha}v.
\end{equation}
$\mathrm{Re}(v)$ and $\mathrm{Im}(v)$ span an invariant subspace of the monodromy matrix.
Define $u(t)=e^{-\mathrm{i}\alpha (t-t_0)/T}\mathcal{M}(t,t_0)v$; it follows that $u(t_0+T)=u(t_0)=v$ and hence $u(t)$ is a periodic function with period $T$.
For a periodic reference solution $x_p(t)$ with period $T$, we know the coefficient matrix $f_x$ is time-periodic and then the transition matrix $\mathcal{M}(t,t_0)$ satisfies
\begin{equation}
\mathcal{M}(t+T,t_0)=\mathcal{M}(t+T,t)\mathcal{M}(t,t_0)=\mathcal{M}(t+T,t_0+T)\mathcal{M}(t_0+T,t_0)=\mathcal{M}(t,t_0)\mathcal{M}(t_0+T,t_0)
\end{equation}
where the first two equalities use the semi-group property of state transition matrix and the last one uses the periodicity of coefficient matrix.
Then we can show that $u(t)$ is an eigenvector of $\mathcal{M}(t+T,t)$ with the same eigenvalue $e^{\mathrm{i}\alpha}$. Note
\begin{equation}
\mathcal{M}(t+T,t)\mathcal{M}(t,t_0)v=\mathcal{M}(t,t_0)\mathcal{M}(t_0+T,t_0)v=e^{\mathrm{i}\alpha}\mathcal{M}(t,t_0)v,
\end{equation}
we have $\mathcal{M}(t+T,t)u(t)=e^{\mathrm{i}\alpha}u(t)$.
So we can define a torus function perturbation
\begin{equation}
\hat{u}(\theta_1,t) = \mathrm{Re}[e^{\mathrm{i}\theta_1}u(t)]=\cos\theta_1\mathrm{Re}(u(t))-\sin\theta_1\mathrm{Im}(u(t)),\quad\theta_1\in\mathbb{S}.
\end{equation}
It follows that
\begin{equation}
\mathcal{M}(t+T,t)\hat{u}(\theta_1,t)=\mathrm{Re}[\mathcal{M}(t+T,t)e^{\mathrm{i}\theta_1}u(t)]=\mathrm{Re}[e^{\mathrm{i}\theta_1}\mathcal{M}(t+T,t)u(t)]=\mathrm{Re}[e^{\mathrm{i}\theta_1}e^{\mathrm{i}\alpha}u(t)]=\hat{u}(\theta_1+\alpha, t)
\end{equation}
which indicates that $\hat{u}(\theta_1,t)$ is rotated by an angle $\alpha$ when the state transition over one period is applied. The perturbed torus solution is then given by
\begin{equation}
\tilde{u}(\theta_1,t) = \mathcal{M}(t,t_0)\hat{u}(\theta_1,t_0) = \mathrm{Re}[e^{\mathrm{i}\theta_1}\mathcal{M}(t,t_0)v] =\hat{u}(\theta_1+\alpha(t-t_0)/T,t).
\end{equation}
So the torus initial solution is given by
\begin{equation}
\hat{x}(\theta_1,t) = x(t) + \epsilon\tilde{u}(\theta_1,t)=x(t)+\epsilon\hat{u}(\theta_1+\omega_1(t-t_0),t),\quad \forall\theta_1\in\{\varphi_j\}_{j=1}^{2N+1}
\end{equation}
where $\epsilon$ controls the amount of perturbation. We have used the fact that $T=\frac{2\pi}{\omega_2}$ and $\alpha=\omega_1T$.
Note that an eigenvector remains an eigenvector after multiplication by a complex number. So we would like to choose the representative such that $\langle v_\mathrm{R},v_\mathrm{I}\rangle=0$, where $v_\mathrm{R}=\mathrm{Re}(v)$ and $v_\mathrm{I}=\mathrm{Im}(v)$. Let the complex number be $e^{\mathrm{i}\theta}$; it follows that
\begin{equation}
e^{\mathrm{i}\theta}(v_\mathrm{R}+\mathrm{i}v_\mathrm{I})=(\cos\theta+\mathrm{i}\sin\theta)(v_\mathrm{R}+\mathrm{i}v_\mathrm{I})=(\cos\theta v_\mathrm{R}-\sin\theta v_\mathrm{I})+\mathrm{i}(\cos\theta v_\mathrm{I}+\sin\theta v_\mathrm{R}).
\end{equation}
We ask for
\begin{equation}
\langle\cos\theta v_\mathrm{R}-\sin\theta v_\mathrm{I},\cos\theta v_\mathrm{I}+\sin\theta v_\mathrm{R}\rangle=0,
\end{equation}
from which we obtain
\begin{equation}
\langle v_\mathrm{R},v_\mathrm{I}\rangle\cos2\theta+0.5(\langle v_\mathrm{R},v_\mathrm{R}\rangle-\langle v_\mathrm{I},v_\mathrm{I}\rangle)\sin2\theta=0\implies \tan2\theta=\frac{2\langle v_\mathrm{R},v_\mathrm{I}\rangle}{\langle v_\mathrm{I},v_\mathrm{I}\rangle-\langle v_\mathrm{R},v_\mathrm{R}\rangle}.
\end{equation}
We can solve the above equation for $\theta$ with the \texttt{atan2} function.
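This normalization step can be sketched as follows; the eigenvector entries are hypothetical numbers for illustration:

```python
import math

# A hypothetical complex eigenvector v = vR + i*vI (illustrative entries)
vR = [1.0, 0.3, -0.5]
vI = [0.2, -0.7, 0.4]

dot = lambda x, y: sum(p * q for p, q in zip(x, y))

# Solve tan(2*theta) = 2<vR,vI> / (<vI,vI> - <vR,vR>) with atan2
theta = 0.5 * math.atan2(2 * dot(vR, vI), dot(vI, vI) - dot(vR, vR))

c, s = math.cos(theta), math.sin(theta)
# Real and imaginary parts of e^{i*theta} * v
wR = [c * r - s * i for r, i in zip(vR, vI)]
wI = [c * i + s * r for r, i in zip(vR, vI)]

# The rotated parts are orthogonal
assert abs(dot(wR, wI)) < 1e-12
```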
\section{Examples}
\label{sec:example}
\subsection{Langford dynamical system}
Consider the \emph{Langford} dynamical system
\begin{gather}
\dot{x}_1 = (x_3-0.7)x_1-\omega x_2,\\
\dot{x}_2 = \omega x_1+(x_3-0.7)x_2,\\
\dot{x}_3 = 0.6+x_3-\frac{1}{3}x_3^3-(x_1^2+x_2^2)(1+\rho x_3)+\epsilon x_3x_1^3.
\end{gather}
Following the analysis in~\cite{dankowicz2013recipes}, the system has a family of periodic orbits when $\epsilon=0$, and this solution undergoes a torus bifurcation at $\rho=\rho^\ast\approx0.615$.
We proceed to use the \texttt{po} toolbox in \textsc{coco} to find the family of periodic orbits and detect the TR bifurcation point along the solution manifold. Specifically, we first use \texttt{ode45} to perform forward simulation and generate an approximate periodic orbit solution. Then we call \texttt{ode\_isol2po} to encode the boundary-value problem for periodic orbits.
\begin{alltt}
p0 = [3.5; 1.5; 0];
T = 2*pi/p0(1);
[~,x0] = ode45(@(t,x) lang(x,p0), 0:100*T, [0.3; 0.4; 0]);
[t0,x0] = ode45(@(t,x) lang(x,p0), linspace(0,T,100), x0(end,:));
figure;
plot(t0,x0);
prob = coco_prob();
prob = ode_isol2po(prob, '', @lang, @lang_DFDX, @lang_DFDP, ...
t0, x0, {'om','rho','eps'}, p0);
coco(prob, 'po', [], 1, 'rho', [0.2 2]);
\end{alltt}
Here \texttt{lang} denotes the vector field of the dynamical system, and \texttt{lang\_DFDX} and \texttt{lang\_DFDP} represent the derivatives of the vector field with respect to the state $x$ and the system parameters $p$, respectively. The system parameters in this example are $(\omega,\rho,\epsilon)$, named \texttt{om}, \texttt{rho} and \texttt{eps} as above. The vector field and its derivatives are encoded as follows
\begin{alltt}
function y = lang(x, p)
x1 = x(1,:);
x2 = x(2,:);
x3 = x(3,:);
om = p(1,:);
ro = p(2,:);
eps = p(3,:);
y(1,:) = (x3-0.7).*x1-om.*x2;
y(2,:) = om.*x1+(x3-0.7).*x2;
y(3,:) = 0.6+x3-x3.^3/3-(x1.^2+x2.^2).*(1+ro.*x3)+eps.*x3.*x1.^3;
end
function J = lang_DFDX(x, p)
x1 = x(1,:);
x2 = x(2,:);
x3 = x(3,:);
om = p(1,:);
ro = p(2,:);
eps = p(3,:);
J = zeros(3,3,numel(x1));
J(1,1,:) = (x3-0.7);
J(1,2,:) = -om;
J(1,3,:) = x1;
J(2,1,:) = om;
J(2,2,:) = (x3-0.7);
J(2,3,:) = x2;
J(3,1,:) = -2*x1.*(1+ro.*x3)+3*eps.*x3.*x1.^2;
J(3,2,:) = -2*x2.*(1+ro.*x3);
J(3,3,:) = 1-x3.^2-ro.*(x1.^2+x2.^2)+eps.*x1.^3;
end
function J = lang_DFDP(x, p)
x1 = x(1,:);
x2 = x(2,:);
x3 = x(3,:);
J = zeros(3,size(p,1),numel(x1));
J(1,1,:) = -x2;
J(2,1,:) = x1;
J(3,2,:) = -x3.*(x1.^2+x2.^2);
J(3,3,:) = x3.*x1.^3;
end
\end{alltt}
Indeed, a TR bifurcation periodic orbit is found at $\rho\approx0.61545$. We then move to the continuation of tori in the system. We call \texttt{ode\_TR2tor} to switch from the continuation of periodic orbits to the continuation of tori.
\begin{alltt}
T_po = 5.3;
T_ret = 2*pi/3.5;
varrho = T_ret/T_po;
bd = coco_bd_read('po');
TRlab = coco_bd_labs(bd, 'TR');
prob = coco_prob();
prob = coco_set(prob, 'cont', 'NAdapt', 5, 'h_min',...
1e-3, 'PtMX', 50, 'h_max', 10, 'bi_direct', false);
prob = ode_TR2tor(prob, '', 'po', TRlab, 50);
coco(prob, 'tr1', [], 1, {'varrho','rho','om1','om2','eps'},[varrho,0.44]);
\end{alltt}
Here \texttt{varrho}, \texttt{om1} and \texttt{om2} correspond to $\varrho$, $\omega_1$ and $\omega_2$ in the PDE formulation, respectively. \texttt{TRlab} gives the label of the TR bifurcation solution in the \texttt{po} run. The 50 in \texttt{prob = ode\_TR2tor(prob, '', 'po', TRlab, 50)} characterizes the number of segments, namely $N$ in section~\ref{sec:dist}. Recall the number of segments is given by $2N+1$, so we have 101 segments in this case. By default, $N=10$. Please type \texttt{help ode\_TR2tor} in \textsc{matlab} for more details on the syntax of this constructor.
In the above continuation run \texttt{tr1}, $(\varrho,\rho,\omega_1,\omega_2)$ are varied and a one-dimensional solution manifold is obtained. As discussed in section~\ref{sec:deficit}, four parameters need to be released to yield a one-dimensional manifold. Indeed, \texttt{eps} does not change during the continuation run. As an alternative, we may fix $\varrho$ and then another one-dimensional manifold of tori is obtained. To demonstrate it, we call constructor \texttt{ode\_tor2tor} as follows
\begin{alltt}
bd = coco_bd_read('tr1');
lab = coco_bd_labs(bd, 'EP');
lab = max(lab);
prob = coco_prob();
prob = coco_set(prob, 'cont', 'NAdapt', 5, 'h_min',...
1e-3, 'PtMX', 30, 'h_max', 10, 'bi_direct', true);
prob = ode_tor2tor(prob, '', 'tr1', lab);
coco(prob, 'tr2', [], 1, {'eps','rho','om1','om2','varrho'});
\end{alltt}
Here we start the continuation from a solution found in the previous run \texttt{tr1}. In the current run \texttt{tr2}, $\varrho$ is not varied during continuation, as can be seen in the continuation history.
Once a torus solution is found, one may call \texttt{plot\_torus} to visualize the solution. As $\rho$ moves away from $\rho^\ast$, the size of the tori born from the TR periodic orbit grows, as can be seen in the visualization of a family of tori
\begin{alltt}
figure; coco_plot_bd('tr2','rho','eps');
for lab = 1:5
plot_torus('','tr1', lab, [1 2 3], 0.75); pause(1);
end
\end{alltt}
To validate an obtained torus solution, one may perform forward simulation with an initial condition on the torus
\begin{alltt}
lab = 5;
plot_torus('','tr1', lab, [1 2 3]);hold on
sol = tor_read_solution('','tr1',lab);
p = sol.p(1:end-3);
xbp = sol.xbp;
[t,x] = ode45(@(t,x) lang(x,p), 0:0.01:100*T, xbp(1,:,1));
plot3(x(:,1),x(:,2),x(:,3),'r-');
\end{alltt}
Here \texttt{tor\_read\_solution} extracts the result of a continuation run. For more details on the arguments of \texttt{plot\_torus} and \texttt{tor\_read\_solution}, please consult their help information.
\subsection{Van der Pol oscillator}
Consider the Van der Pol oscillator subject to harmonic excitation
\begin{equation}
\ddot{x}-c(1-x^2)\dot{x}+x=a\cos\Omega t.
\end{equation}
We are interested in the quasiperiodic response of such a system.
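Before constructing the toolbox's initial guess, it can be instructive to observe the response directly by stroboscopic sampling: points of the trajectory recorded once per forcing period either cluster on finitely many points (phase locking) or fill out a closed invariant curve (quasiperiodicity). The sketch below is plain Python with a hand-rolled RK4 integrator, independent of the toolbox; the parameter values $\Omega=1.5111$, $c=0.11$, $a=0.1$ are taken from \texttt{p0}, under the assumed ordering $(\Omega,c,a)$.

```python
import math

def vdp(t, x, c=0.11, amp=0.1, om=1.5111):
    """Forced Van der Pol field: x'' - c*(1 - x^2)*x' + x = amp*cos(om*t).
    Parameter values follow p0 in the text (assumed ordering Omega, c, a)."""
    return [x[1], c * (1.0 - x[0]**2) * x[1] - x[0] + amp * math.cos(om * t)]

def rk4(f, x0, t0, dt, nsteps):
    """Integrate x' = f(t, x) from t0 over a time dt with classical RK4."""
    x, t, h = list(x0), t0, dt / nsteps
    for _ in range(nsteps):
        k1 = f(t, x)
        k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
        k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
        k4 = f(t + h, [xi + h*ki for xi, ki in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c_ + d)
             for xi, a, b, c_, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

om = 1.5111
T_ret = 2.0 * math.pi / om                          # forcing (return-map) period
x = rk4(vdp, [2.0, 0.0], 0.0, 60 * T_ret, 60000)    # discard the transient
t0 = 60 * T_ret
samples = []
for k in range(40):                                 # stroboscopic (Poincare) samples
    x = rk4(vdp, x, t0, T_ret, 1000)
    t0 += T_ret
    samples.append(tuple(x))
```

For a phase-locked periodic response the stroboscopic samples would settle on finitely many points; for a quasiperiodic (torus) response they trace a closed curve, which is exactly the object that the continuation toolbox computes.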
We start by constructing an initial solution guess with forward simulation
\begin{alltt}
p0 = [ 1.5111; 0.11; 0.1 ];
T_po = 2*pi;
N = 10;
tout = linspace(0, T_po, 2*N+2);
T_ret = 2*pi/p0(1);
tt = linspace(0,1,10*(2*N+1))';
t1 = T_ret*tt;
x0 = zeros(numel(tt),2,2*N+1);
figure; hold on
for i=1:2*N+1
x1 = [2*cos(tout(i)) 2*sin(tout(i))];
[~, x1] = ode45(@(t,x) vdp(t,x,p0), 10*t1, x1);
[~, x1] = ode45(@(t,x) vdp(t,x,p0), t1, x1(end,:));
x0(:,:,i) = x1;
plot3(x1(:,1),t1,x1(:,2),'r-');
end
varrho = T_ret/T_po;
\end{alltt}
We proceed to use the constructor \texttt{ode\_isol2tor} to perform continuation of tori with the generated initial solution guess. Since the dynamical system is non-autonomous, \texttt{coco\_set} is called to change the default \emph{autonomous} setting. In addition, the excitation frequency parameter $\Omega$ needs to be named \texttt{Om2} because the coupling condition on $\Omega_2-\omega_2$ needs to be imposed, as discussed in section~\ref{sec:deficit}. With these observations, we have
\begin{alltt}
prob = coco_prob();
prob = coco_set(prob, 'tor', 'autonomous', false);
prob = coco_set(prob, 'coll', 'NTST', 40);
prob = coco_set(prob, 'cont', 'NAdapt', 0, 'h_max', 2, 'PtMX', 60);
torargs = \{@vdp @vdp_DFDX @vdp_DFDP @vdp_DFDT t1 x0 {'Om2','c','a','om1','om2','varrho'} ...
[p0' -2*pi/T_po p0(1) -varrho]\};
prob = ode_isol2tor(prob, '', torargs\{:\});
coco(prob, 'vdP_torus', [], 1, {'a','Om2','om2','om1', 'varrho','c'}, \{[0.1 2] [1 2]\});
\end{alltt}
Here we choose negative $\omega_1$ and $\varrho$ because the rotation could be in the opposite direction; in other words, the value of $\varrho$ can be either positive or negative. One may predict that $\varrho$ does not change in the continuation run \texttt{vdP\_torus}, because releasing the first four parameters $(a,\Omega_2,\omega_2,\omega_1)$ is enough to yield a one-dimensional solution manifold of tori. Indeed, $\varrho$ does not change in this continuation run.
Next we use the constructor \texttt{ode\_tor2tor} to perform continuation from a torus solution saved in the previous run \texttt{vdP\_torus}.
\begin{alltt}
bd = coco_bd_read('vdP_torus');
lab = coco_bd_labs(bd, 'EP');
lab = max(lab);
prob = coco_prob();
prob = coco_set(prob, 'cont', 'NAdapt', 5, 'h_max', 2, 'PtMX', 60);
prob = ode_tor2tor(prob, '', 'vdP_torus', lab);
coco(prob, 'vdP_torus_varrho', [], 1, ...
{'a','Om2','om2','varrho','om1','c'}, \{[0.1 2] [1 2]\});
\end{alltt}
In the above continuation run \texttt{vdP\_torus\_varrho}, $\varrho$ is free to change, and several branch points are detected. We then switch to the secondary solution branch passing through one of these branch points, using the constructor \texttt{ode\_BP2tor}.
\begin{alltt}
bd = coco_bd_read('vdP_torus_varrho');
lab = coco_bd_labs(bd, 'BP');
lab = lab(1);
prob = coco_prob();
prob = coco_set(prob, 'cont', 'NAdapt', 0, 'h_max', 2, 'PtMX', 60);
prob = ode_BP2tor(prob, '', 'vdP_torus_varrho', lab);
coco(prob, 'vdP_torus_varrho_BP', [], 1, ...
{'a','Om2','om2','varrho','om1','c'}, {[0.1 2] [1 2]});
\end{alltt}
As exercises for this example, the reader may follow the methodology of the previous example: first perform continuation of periodic orbits, and then switch to the continuation of tori with \texttt{ode\_TR2tor} at the TR points detected in the periodic-orbit run. In addition, the reader may consider the case of positive $\varrho$ and compare the results of the two cases.
\section{Discussion}
\label{sec:discussion}
A unified toolbox \texttt{tor} has been presented in this report. The discretization schemes used in the toolbox have some limitations. First, the number of segments has to be determined before continuation and cannot be changed during continuation. The user needs to check the fidelity of the obtained results and update the number of segments if necessary; an error estimator would be instructive for choosing a reasonable number of segments. Second, the collocation method used here results in a heavy computational load for large dynamical systems. As an alternative, parallel shooting/forward simulation may be utilized to remove this bottleneck. Finally, the stability of torus solutions is not available in the current toolbox. The variational equation of the torus may be derived and used to determine stability.
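The parallel-shooting alternative mentioned above can be sketched briefly (a minimal illustration in plain Python, not part of the toolbox): the period is split into $M$ segments, each segment is integrated independently (and hence in parallel), and matching conditions between consecutive segment endpoints close the system.

```python
import math

def rk4_step(f, t, x, h):
    """One classical 4th-order Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
    k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
    k4 = f(t + h, [xi + h*ki for xi, ki in zip(x, k3)])
    return [xi + h/6*(a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def flow(f, x0, dt, nsteps=200):
    """Integrate x' = f(t, x) over a time interval dt."""
    x, t, h = list(x0), 0.0, dt / nsteps
    for _ in range(nsteps):
        x = rk4_step(f, t, x, h)
        t += h
    return x

def shooting_residuals(f, X, T):
    """Matching residuals phi_{T/M}(x_i) - x_{i+1 mod M} for the M segment
    endpoints in X; each flow() call is independent, hence parallelizable."""
    M = len(X)
    return [[a - b for a, b in zip(flow(f, X[i], T / M), X[(i + 1) % M])]
            for i in range(M)]

# Demo: the harmonic oscillator x'' = -x has the period-2*pi orbit
# (cos t, -sin t); sampling it at 4 equispaced times gives ~zero residuals.
osc = lambda t, x: [x[1], -x[0]]
T = 2.0 * math.pi
X = [[math.cos(2.0*math.pi*i/4), -math.sin(2.0*math.pi*i/4)] for i in range(4)]
residuals = shooting_residuals(osc, X, T)
```

A Newton iteration driving these residuals (together with a phase condition) to zero would yield the orbit; the point of the sketch is only that the expensive integrations decouple across segments.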
\bibliographystyle{plain}
\label{sec:intro}
Magnetic reconnection is a commonly observed fundamental process in
plasmas. It allows topological change of magnetic field lines, and
rapidly converts the free energy stored in the magnetic field into
various forms of energy.
Amongst other poorly understood aspects of magnetic reconnection, such
as explaining the explosive time scale that is often observed, or the
onset mechanism, one key issue is the question of energy partition, \ie
understanding how, and how much of, the released energy is distributed
into the different available channels: bulk heating of each of the
plasma species and non-thermal particle acceleration.
Plasma heating is often observed to accompany magnetic reconnection in
both astrophysical and laboratory plasmas (see, \eg a recent review
by \citet{YamadaKulsrudJi_10}).
Specifically, the measured ion temperature in reversed-field pinches (RFP),
spheromaks, and merging plasma
experiments where reconnection events are thought to be
occurring~\citep{OnoYamadaAkao_96,HsuCarterFiksel_01,FikselAlmagriChapman_09}
often well exceeds the electron temperature. The fact that ions are
selectively heated invalidates Ohmic dissipation of the current sheet as
the dominant heating source since it \note{primarily} deposits the
energy into current-carrying electrons. Furthermore, heating associated
with reconnection events generally occurs much faster than the time
scale estimated from Ohmic heating, as anticipated in weakly
collisional environments.
Various mechanisms causing such `anomalous' ion heating have been
suggested: stochastic ion heating has been
studied~\citep{FikselAlmagriChapman_09} to explain ion heating in an
RFP device, where multiple reconnections occur in a turbulent
environment rather than a single reconnection event,
\note{while anomalous resistivity has been invoked in
MRX~\citep{HsuCarterFiksel_01}.}
In the weakly collisional plasmas often found in reconnection environments,
Ohmic and viscous heating cannot, by definition, be important. This leaves
phase mixing as the only possible mechanism of converting energy from
the fields to the particles in an irreversible way.
Indeed, kinetic effects generally lead to
non-Maxwellian distribution functions; once highly oscillatory structures
are created in velocity space, they suffer strong collisional
dissipation as the collision operator provides diffusion in velocity
space. The thermalization time scale may be comparable to the time scale of
magnetic reconnection; therefore, to address thermodynamic properties of such
plasmas, collisional effects are essential, even though collisions are rare
and not responsible for reconnection itself (\ie the frozen flux condition
is broken by electron kinetic effects, not collisions)
\footnote{In the interest of clarity, notice that by `heating' we mean
the increase in the entropy of the system due to reconnection.
Note that this definition excludes the energization
of supra-thermal particles, a process that is sometimes also
referred to as heating
but which is formally reversible and does not, therefore, change
the entropy of the system by itself.
}.
Landau damping~\citep{Landau_46} is one of the best-known
examples of phase mixing: particles nearly synchronous with the
electrostatic potential variations absorb energy from the field. This
phase mixing occurs along the direction of electron free streaming,
\ie along the magnetic field lines.
The parallel phase mixing and resultant heating of electrons during
magnetic reconnection in low-$\beta$ plasmas ($\beta$ is the ratio of the
plasma to the magnetic pressure) have been studied using a reduced
kinetic model~\citep{ZoccoSchekochihin_11,LoureiroSchekochihinZocco_13}
and the full gyrokinetic model~\citep{NumataLoureiro_14}, and the
significance of this effect has been demonstrated.
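The essence of parallel phase mixing can be stated compactly. Free streaming along the field makes the perturbed distribution behave ballistically,
\begin{equation}
h_{s}(v_{\parallel}, t) \propto \mathrm{e}^{-\mathrm{i} k_{\parallel} v_{\parallel} t},
\end{equation}
generating velocity-space structure at ever finer scales, $\delta v_{\parallel} \sim (k_{\parallel} t)^{-1}$. A standard estimate (see, \eg \citet{ZoccoSchekochihin_11}) then shows that even a small collision frequency $\nu$ produces order-unity dissipation after a time $t \sim (\nu k_{\parallel}^{2} v_{\mathrm{th}}^{2})^{-1/3}$, \ie much faster than the naive collisional time $\nu^{-1}$.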
Phase mixing can also be induced by a finite-Larmor-radius (FLR)
effect~\citep{DorlandHammett_93}. In strongly magnetized plasmas,
particles undergo drifts (dominantly the $\bm{E}\times\bm{B}$ drift) in
the direction perpendicular to the mean
magnetic field. Gyro-averaging of the fields gives rise to a
spread in the drift velocities of different particles, and hence leads
to phase mixing in the perpendicular
direction. Unlike the linear parallel phase mixing of Landau damping,
the FLR-phase-mixing process causes damping proportional to the field
strength, and only appears nonlinearly (nonlinear phase mixing).
Energy dissipation due to FLR phase mixing has
been investigated in electrostatic
turbulence~\citep{TatsunoDorlandSchekochihin_09}; its role in magnetic
reconnection has never been studied.
In this paper, we present gyrokinetic simulations of magnetic
reconnection using the numerical code {\tt
AstroGK}~\citep{NumataHowesTatsuno_10}.
Our main aim is to perform a detailed analysis of ion and electron
(irreversible) heating in weakly collisional, strongly magnetized
plasmas.
We follow \citet{NumataDorlandHowes_11} and \citet{NumataLoureiro_14} to
set up a tearing-mode-unstable configuration.
In addition to the electron heating
due to parallel phase mixing in low-$\beta$ plasmas as has already been
shown in \citet{LoureiroSchekochihinZocco_13}
and \citet{NumataLoureiro_14}, ion heating
is also expected to be significant in high-$\beta$ plasmas, since
compressible fluctuations, which are strongly damped
collisionlessly~\citep{SchekochihinCowleyDorland_09}, will be excited.
The main questions we address are: (i) the fractions of ion
and electron heating; (ii) the dissipation channels, \ie how much energy
is converted via phase mixing, resistivity, and viscosity;
and (iii) how the answers to these questions scale with the plasma beta.
We also study in detail how and where plasma
heating occurs via phase mixing, which may be important to the
interpretation of plasma heating as measured in laboratory experiments
or space and astrophysical plasmas.
This paper is organized as follows:
We start with defining plasma heating and energy dissipation within the
framework of the gyrokinetic model in Sec.~\ref{sec:heating}.
The details of the simulation setup and choices of parameters are
described in Sec.~\ref{sec:setup}. In the same section, simulations of
the linear regime are presented to identify the region of parameter
space where the tearing instability growth rate is independent of the
plasma collisionality (the so-called collisionless regime of reconnection).
In Sec.~\ref{sec:results}, we report the results of nonlinear
simulations, focusing on the dependence on plasma beta of the
reconnection dynamics, as well as on the detailed analysis of ion and
electron heating caused by magnetic reconnection. We summarize the main
results of this paper in Sec.~\ref{sec:conclusion}.
\section{Plasma heating and energy dissipation}
\label{sec:heating}
We consider magnetic reconnection in the framework of $\delta f$
gyrokinetics.
In this work, plasma heating is measured by the (irreversible) entropy
production rather than by the (reversible) increase of temperature.
In the absence of collisions, the gyrokinetic equation
conserves the generalized energy consisting of the particle part
$E^{\mathrm{p}}_{s}$ and the magnetic field part
$E^{\mathrm{m}}_{\perp,\parallel}$
\begin{equation}
W = \sum_{s} E^{\mathrm{p}}_s + E^{\mathrm{m}}_{\perp} +
E^{\mathrm{m}}_{\parallel} =
\int
\left[
\sum_{s}\int \frac{T_{0s} \delta f_{s}^{2}}{2f_{0s}} \mathrm{d} \bm{v}
+ \frac{\left|\nabla_{\perp}A_{\parallel}\right|^{2}}{2\mu_{0}}
+ \frac{\left|\delta B_{\parallel}\right|^{2}}{2\mu_{0}}
\right] \mathrm{d} \bm{r}.
\label{eq:gen_energy}
\end{equation}
The subscript $s=\mathrm{i}$ (for ions) or $\mathrm{e}$ (for
electrons) denotes the species label.
$f_{0s}=n_{0s}/(\sqrt{\pi}v_{\mathrm{th},s})^{3}
\exp(-v^{2}/v_{\mathrm{th},s}^{2})$ is the Maxwellian distribution function
of background plasmas, $n_{0s}$, $T_{0s}$ are the density and
temperature, $v_{\mathrm{th},s}\equiv\sqrt{2T_{0s}/m_{s}}$ is the
thermal velocity, $m_{s}$, $q_{s}$ are the mass and electric
charge.
The perturbed distribution function \nuno{is} $\delta f_{s}= -
\left(q_{s}\phi/T_{0s}\right) f_{0s} + h_{s}$, where $h_{s}$ is the
non-Boltzmann part obeying the gyrokinetic equation.
$\phi$, $A_{\parallel}$, \nuno{$\delta B_{\parallel}$ are, respectively, the electrostatic
potential, vector potential and perturbed magnetic field} along the background
magnetic field $\bm{B_{0}}=B_{z0}\hat{z}$, and $\mu_{0}$ is the
vacuum permeability.
\nuno{Note that the perturbed particle energy is
$E_{s}^{\mathrm{p}}=-T_{0s} \delta S_{s}$,
where $\delta S_{s}$ is the perturbed entropy.}
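This identification can be made explicit: expanding the entropy $S_{s}=-\int\int f_{s}\ln f_{s}\,\mathrm{d}\bm{v}\,\mathrm{d}\bm{r}$ about the Maxwellian to second order in the fluctuations gives the standard $\delta f$ result (see, \eg \citet{SchekochihinCowleyDorland_09})
\begin{equation}
\delta S_{s} = -\int\int \frac{\delta f_{s}^{2}}{2 f_{0s}}\,\mathrm{d}\bm{v}\,\mathrm{d}\bm{r},
\end{equation}
so that $-T_{0s}\delta S_{s}$ reproduces, term by term, the particle contribution to \eqref{eq:gen_energy}.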
\revise{
If we
extract explicitly the first two velocity moments from $\delta f_{s}$
as in \citet{ZoccoSchekochihin_11},
\begin{equation}
\delta f_{s} = \left(\frac{n_{s}}{n_{0s}} + \frac{2 v_{\parallel}
u_{\parallel,s}}{v_{\mathrm{th},s}^{2}}\right) f_{0s}
+ h_{s}',
\label{eq:hermite}
\end{equation}
where
\begin{equation}
n_{s} = \int \delta f_s \mathrm{d} \bm{v}, ~~~
u_{\parallel,s} = \frac{1}{n_{0s}} \int v_{\parallel} \delta f_s \mathrm{d} \bm{v},
\end{equation}
the particle's energy is decomposed into the density variance, the
parallel kinetic energy, and the rest as follows:
\begin{equation}
E^{\mathrm{p}}_{s}
= \int
\left[
\frac{n_{0s}T_{0s}}{2} \frac{n_s^{2}}{n_{0s}^{2}}
+ \frac{m_{s}n_{0s}u_{\parallel,s}^{2}}{2}
+ \int \frac{T_{0s} h_{s}'^2}{2f_{0s}} \mathrm{d} \bm{v}
\right] \mathrm{d} \bm{r}.
\end{equation}
}
The generalized energy is dissipated by collisions as $\mathrm{d} W/\mathrm{d} t =
-\sum_{s} D_{s}$, where the dissipation rate of each species is given by
\begin{equation}
D_{s} =
-\int \int
\left\langle
\frac{T_{0s}h_{s}}{f_{0s}}
\left(\frac{\partial h_{s}}{\partial t}\right)_{\mathrm{coll}}
\right\rangle_{\bm{r}}
\mathrm{d} \bm{r} \mathrm{d} \bm{v} \geq 0.
\label{eq:dissipation_rate}
\end{equation}
The angle brackets denote the gyro-averaging operation.
\note{The collision term $(\partial h_{s}/\partial t)_{\mathrm{coll}}$ is modeled by the
linearized, and gyro-averaged, Landau collision
operator~\citep{AbelBarnesCowley_08,BarnesAbelDorland_09}.
It consists of like-particle collisions of electrons and ions whose
frequencies are given by $\nu_{\mathrm{ee}}$ and $\nu_{\mathrm{ii}}$, and inter-species
collisions of electrons with ions given by $\nu_{\mathrm{ei}}=Z_{\mathrm{eff}}\nu_{\mathrm{ee}}$
($Z_{\mathrm{eff}}$ is the effective ion charge, and is set to unity in
this paper).
The ion-electron collisions are subdominant compared with the ion-ion
collisions.
The electron-ion collisions reproduce Spitzer resistivity~\citep{SpitzerHarm_53}
for which the electron-ion collision frequency and the resistivity
($\eta$) are related by $\eta/\mu_{0}=0.380 \nu_{\mathrm{ei}} d_{\mathrm{e}}^{2}$
where $d_{\mathrm{e}}\equiv\sqrt{m_{\mathrm{e}}/(\mu_{0}n_{0\mathrm{e}}q_{\mathrm{e}}^{2})}$
is the electron skin depth.}
The dissipation rate $D_s$ contains all possible dissipation channels.
Besides the dissipation that can be modeled in fluid models,
\eg the resistivity, $D_{s}$ also contains dissipation
of higher order velocity moments due to phase mixing depending on the
form of $h_s$.
In our problem setup explained in Sec.~\ref{sec:setup},
the initial energy is contained in the perpendicular magnetic field
energy and the electron parallel kinetic energy (\ie the reconnecting
magnetic field and the current associated with it).
During the magnetic reconnection process, the initial energy is
distributed into other forms of energy in a reversible way, and is only
irreversibly dissipated from the system through collisions. \note{There
is no direct dissipation channel of the magnetic field energy. The
resistive dissipation is the collisional decay of the particles' kinetic
energy (current) supporting the magnetic field.}
The increased entropy is turned into heat, and the background
temperature increases on a time scale much longer than the time scale
considered in the simulation~\citep{HowesCowleyDorland_06}.
In this sense, we identify the energy dissipation
\eqref{eq:dissipation_rate} and plasma heating. The background
temperature $T_{0\mathrm{i,e}}$ does not change during simulations.
\section{Simulation setup}
\label{sec:setup}
We consider magnetic reconnection of strongly magnetized
plasmas in a two-dimensional doubly periodic slab domain.
Simulations are carried out using the gyrokinetic code {\tt
AstroGK}~\citep{NumataHowesTatsuno_10}.
We initialize the system with a tearing-unstable magnetic field
configuration (see \citet{NumataHowesTatsuno_10,NumataDorlandHowes_11}
for details). The equilibrium magnetic field profile is
\begin{equation}
\bm B=B_{z0}\hat z+B_{y}^{\mathrm{eq}}(x)\hat y, \quad
\revise{B_{y}^{\mathrm{eq}}/B_{z0} \sim \varepsilon \ll 1},
\end{equation}
where $B_{z0}$ is the background guide magnetic field and
$B_{y}^{\mathrm{eq}}$ is the in-plane, reconnecting component, related
to the parallel vector potential by $B_{y}^{\mathrm{eq}}(x)= -\partial
A_{\parallel}^{\mathrm{eq}}/\partial x$,
\revise{$\varepsilon$ is the gyrokinetic epsilon -- a small expansion
parameter enabling scale separation in gyrokinetics (see, \eg
\citet{HowesCowleyDorland_06})},
and
\begin{equation}
\label{equilib_apar}
A_{\parallel}^{\mathrm{eq}}(x) =
A_{\parallel0}^{\mathrm{eq}}\cosh^{-2}\left(\frac{x-L_{x}/2}{a}\right)
S_{\mathrm{h}}(x).
\end{equation}
($S_{\mathrm{h}}(x)$ is a shape function to enforce periodicity~\citep{NumataHowesTatsuno_10}.)
$A_{\parallel}^{\mathrm{eq}}$ is generated by the electron parallel
current to satisfy the parallel Amp\`ere's law.
The equilibrium scale length is denoted by $a$ and $L_x$ is the length
of the simulation box in the $x$ direction, set to $L_x/a=3.2\pi$. In
the $y$ direction, the box size is $L_y/a=2.5\pi$. We impose a small
sinusoidal perturbation to the equilibrium magnetic field,
$\tilde{A}_{\parallel} \propto \cos(k_y y)$ with wave number
$k_y a=2\pi a/L_y=0.8$, yielding a value of the tearing instability
parameter $\Delta'a\approx 23.2$.
The background temperatures ($T_{0\mathrm{i,e}}$) and densities
($n_{0\mathrm{i,e}}$) are spatially uniform and held constant throughout
the simulations.
We consider a quasi-neutral plasma, so
$n_{0\mathrm{i}}=n_{0\mathrm{e}}=n_{0}$, and singly charged ions
$q_\mathrm{i}=-q_{\mathrm{e}}=e$.
The equilibrium magnetic field defines the time scale of the system.
We normalize time by the Alfv\'en time $\tau_{\mathrm{A}} \equiv a/V_{\mathrm{A}}$, where $V_{\mathrm{A}}\equiv
B_{y}^{\mathrm{max}}/\sqrt{\mu_{0}n_{0}m_{\mathrm{i}}}$ is
the (perpendicular) Alfv\'en velocity corresponding to the peak value of
$B_{y}^{\mathrm{eq}}$.
We solve the fully electromagnetic gyrokinetic equations for
electrons and ions. {\tt AstroGK} employs a pseudo-spectral algorithm
for the spatial coordinates ($x, y$), and Gaussian quadrature for
velocity space integrals. The velocity space is discretized in the
energy $E_s=m_s v^{2}/2$ and
$\lambda=v_{\perp}^{2}/(B_{z0}v^{2})$. The velocity space derivatives in
the collision operator are estimated using a first order finite
difference scheme on an unequally spaced grid according to the
quadrature rules~\citep{BarnesAbelDorland_09}.
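The choice of Gaussian quadrature for such velocity integrals can be motivated with a toy example (a generic sketch in Python, unrelated to {\tt AstroGK}'s actual grid construction): once the Maxwellian weight is absorbed into the rule, low-order velocity moments are recovered exactly with very few nodes. In the energy-like variable $x=E_{s}/T_{0s}$, even the two-point Gauss--Laguerre rule integrates $\int_{0}^{\infty}\mathrm{e}^{-x}g(x)\,\mathrm{d}x$ exactly for polynomial $g$ up to degree three:

```python
import math

# Two-point Gauss-Laguerre rule for int_0^inf e^{-x} g(x) dx:
# nodes 2 -+ sqrt(2), weights (2 +- sqrt(2))/4; exact for g up to degree 3.
NODES = (2.0 - math.sqrt(2.0), 2.0 + math.sqrt(2.0))
WEIGHTS = ((2.0 + math.sqrt(2.0)) / 4.0, (2.0 - math.sqrt(2.0)) / 4.0)

def maxwellian_moment(g):
    """Approximate int_0^inf e^{-x} g(x) dx with the 2-point rule."""
    return sum(w * g(x) for w, x in zip(WEIGHTS, NODES))

# Moments of the Maxwellian weight, recovered to round-off:
# int e^-x dx = 1, int e^-x x dx = 1, int e^-x x^2 dx = 2, int e^-x x^3 dx = 6.
moments = [maxwellian_moment(lambda x, k=k: x**k) for k in range(4)]
```

In practice many more nodes are used (64 per direction in the nonlinear runs below), but the principle is the same: quadrature nodes tailored to the equilibrium weight give high accuracy per grid point.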
\subsection{Parameters}
\label{sec:param}
There are four basic parameters in the system:
The mass ratio, $\sigma\equiv m_{\mathrm{e}}/m_{\mathrm{i}}$, the
temperature ratio of the background plasma, $\tau \equiv T_{0\mathrm{i}}/T_{0\mathrm{e}}$, the
electron plasma beta,
$\beta_{\mathrm{e}}\equiv n_{0}T_{0\mathrm{e}}/(B_{z0}^{2}/2\mu_{0})$,
and the ratio of the ion sound
Larmor radius to the equilibrium scale length $a$, $\rho_{\mathrm{Se}}/a\equiv
c_{\mathrm{Se}}/(\Omega_{\mathrm{ci}}a)$. The ion sound speed for cold
ions is
$c_{\mathrm{Se}}=\sqrt{T_{0\mathrm{e}}/m_{\mathrm{i}}}$, and the ion cyclotron frequency is
$\Omega_{\mathrm{ci}}=e B_{z0}/m_{\mathrm{i}}$. Those parameters define the
physical scales associated with the non-magnetohydrodynamic
effects:
\begin{align}
\rho_{\mathrm{i}} = & \tau^{1/2} \rho_{\mathrm{Se}}\sqrt{2}, &
d_{\mathrm{i}} = & \beta_{\mathrm{e}}^{-1/2} \rho_{\mathrm{Se}}\sqrt{2}, \\
\rho_{\mathrm{e}} = & \sigma^{1/2} \rho_{\mathrm{Se}}\sqrt{2}, &
d_{\mathrm{e}} = & \beta_{\mathrm{e}}^{-1/2} \sigma^{1/2} \rho_{\mathrm{Se}}\sqrt{2},
\label{eq:ion_e_scales}
\end{align}
where $\rho_{\mathrm{i,e}}$ and $d_{\mathrm{i,e}}$ are the ion and
electron Larmor radii and skin depths, respectively.
We fix the following parameters throughout this paper:
\begin{align}
\rho_{\mathrm{Se}}/a = & 0.25/\sqrt{2}, &
\sigma = & 0.01, &
\tau = & 1.
\end{align}
Therefore, $\rho_{\mathrm{i}}/a=0.25$, $\rho_{\mathrm{e}}/a=0.025$, and $\beta_{\mathrm{e}}=\beta_{\mathrm{i}}$. We scan
in $\beta_{\mathrm{e}}=\beta_{\mathrm{i}}$ from 0.01 to 1; the electron inertial length also
changes accordingly as shown in Table~\ref{tbl:param}. The ion inertial
length is always $d_{\mathrm{i}}/d_{\mathrm{e}}=10$.
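As a cross-check of Table~\ref{tbl:param}, the scale relations \eqref{eq:ion_e_scales} can be evaluated directly for the fixed parameter values above (a small Python sketch; the numerical values are taken from the text):

```python
import math

# Fixed dimensionless parameters from the text.
RHO_SE = 0.25 / math.sqrt(2.0)  # rho_Se / a
SIGMA = 0.01                    # m_e / m_i
TAU = 1.0                       # T_0i / T_0e

def scales(beta_e):
    """Return (rho_i, rho_e, d_i, d_e), all in units of a, from the
    relations rho_i = sqrt(2*tau)*rho_Se, d_e = sqrt(2*sigma/beta_e)*rho_Se, etc."""
    rho_i = math.sqrt(TAU) * RHO_SE * math.sqrt(2.0)
    rho_e = math.sqrt(SIGMA) * RHO_SE * math.sqrt(2.0)
    d_i = RHO_SE * math.sqrt(2.0) / math.sqrt(beta_e)
    d_e = math.sqrt(SIGMA / beta_e) * RHO_SE * math.sqrt(2.0)
    return rho_i, rho_e, d_i, d_e

# Reproduce the d_e/a column of Table 1 and the fixed ratio d_i/d_e = 10.
for beta_e in (0.01, 0.03, 0.1, 0.3, 1.0):
    rho_i, rho_e, d_i, d_e = scales(beta_e)
    print(f"beta_e={beta_e:<5} d_e/a={d_e:.3f} d_i/d_e={d_i/d_e:.0f}")
```

The printed $d_{\mathrm{e}}/a$ values (0.250, 0.144, 0.079, 0.046, 0.025) match the table, and $d_{\mathrm{i}}/d_{\mathrm{e}}=\sigma^{-1/2}=10$ independently of $\beta_{\mathrm{e}}$.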
\begin{table}
\begin{center}
\begin{tabular}{cccc}
Case & $\beta_{\mathrm{e}}$ & $d_{\mathrm{e}}/a$ & $\nu_{\mathrm{e}}\tau_{\mathrm{A}}$\\
1 & 0.01 & 0.25 & $0.8 \times 10^{-4}$ \\
2 & 0.03 & 0.14 & $1.4 \times 10^{-4}$ \\
3 & 0.1 & 0.079 & $2.5 \times 10^{-4}$ \\
4 & 0.3 & 0.046 & $4.4 \times 10^{-4}$ \\
5 & 1 & 0.025 & $8.0 \times 10^{-4}$ \\
\end{tabular}
\caption{\label{tbl:param}Simulation parameters.}
\end{center}
\end{table}
To identify the collisionless tearing-mode regime (\ie such that the
frozen-flux condition is not broken by collisions but instead by the
electron inertia or electron FLR effects),
we perform linear simulations.
For the linear runs, we take the number of collocation points in the
pitch angle direction ($\lambda$) and the energy direction ($E$) as
$N_\lambda = N_{E}=16$; the number of grid points in the $x$
direction ranges from 256 to 1024, and we consider a single mode in the
$y$ direction (the fastest growing mode for this configuration).
We have verified that these numbers provide adequate resolution.
Shown in Fig.~\ref{fig:linear} are the growth rates ($\gamma$) and
current layer widths ($\delta$) as functions of collisionality.
For comparison, we also show the growth rate and current layer width for
$\nu_{\mathrm{e}} \tau_{\mathrm{A}} = 8.0\times10^{-5}$ obtained from the reduced kinetic model
valid for $\beta_{\mathrm{e}}\sim m_{\mathrm{e}}/m_{\mathrm{i}}$ using the {\tt Viriato}
code~\citep{LoureiroSchekochihinZocco_13};
the agreement with the fully-gyrokinetic results at $\beta_{\mathrm{e}}=0.01$ is
very good.
As collisionality decreases, the growth rate also decreases, and the
current layer becomes narrower. However, when collisionality becomes
sufficiently small, the growth rate and the current layer width
asymptote to constant values. This is the so-called
collisionless regime, even though the collision frequency is finite.
Note that, in the linear regime, small but finite
collisionality gives the same results as the exactly collisionless case
$\nu_{\mathrm{e}}=0$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{growth_rate_lin_v2.eps}
\includegraphics[scale=0.45]{current_width_lin_v2.eps}
\vspace*{3em}
\caption{\label{fig:linear}Collisionality scan of the linear growth
rate and current layer width for different $\beta_{\mathrm{e}}$
values. In all cases we set $\nu_{\mathrm{e}}=\nu_{\mathrm{ee}}=\nu_{\mathrm{ei}}$.
The growth rate and current layer width obtained from
the {\tt Viriato} code~\citep{LoureiroSchekochihinZocco_13} are also shown
by black diamonds. The {\tt Viriato} values agree with the
$\beta_{\mathrm{e}}=0.01$ case.}
\end{center}
\end{figure}
Figure~\ref{fig:lin_beta} shows the growth rate and current layer width
for the collisionless cases against $\beta_{\mathrm{e}}$. The growth rate decreases
with increasing $\beta_{\mathrm{e}}$, scaling roughly as
$\beta_{\mathrm{e}}^{-1/2}$ ($\propto d_{\mathrm{e}}$), as predicted by
\citet{Fitzpatrick_10} based on Braginskii's two-fluid model
\footnote{If $\beta_{\mathrm{e}}$ is much smaller, or $\Delta'$ is much larger, the other
collisionless regime originally derived by \citet{Porcelli_91},
$\gamma\tau_{\mathrm{A}}\propto\beta_{\mathrm{e}}^{-1/6}$ ($\gamma\tau_{\mathrm{A}}\propto d_{\mathrm{e}}^{1/3}$),
appears -- see \citet{NumataDorlandHowes_11}.}.
The
$\beta_{\mathrm{e}}$ dependence of the current layer width indicates the physical
mechanism responsible for breaking the frozen-flux constraint. For
$\beta_{\mathrm{e}}\ll 1$, the
electron Larmor radius is negligibly small compared with $\delta$, and
reconnection is mediated by electron inertia. As $\beta_{\mathrm{e}}$ increases,
the current layer width decreases, and $\rho_{\mathrm{e}}$ becomes comparable to
$\delta$ and $d_{\mathrm{e}}$. For $\beta_{\mathrm{e}}>1$, $\rho_{\mathrm{e}}$ becomes larger than $d_{\mathrm{e}}$,
and, thus, electron FLR effects overtake electron inertia as
the mechanism for breaking the flux-freezing condition.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{growth_lin_beta.eps}
\includegraphics[scale=0.45]{current_width_beta.eps}
\vspace*{3em}
\caption{\label{fig:lin_beta}$\beta_{\mathrm{e}}$ dependence of the growth rate
and the current layer width for the collisionless cases of
Fig.~\ref{fig:linear}.}
\end{center}
\end{figure}
\section{Nonlinear simulation results}
\label{sec:results}
We now report the results of nonlinear simulations in the collisionless
regime at different values of $\beta_{\mathrm{e}}$.
We set $\nu_{\mathrm{ee}}=\nu_{\mathrm{ei}}=\nu_{\mathrm{ii}}$, ranging from $0.8 \times
10^{-4}$ to $8 \times 10^{-4}$ in units of $\tau_{\mathrm{A}}^{-1}$ (see Table~\ref{tbl:param})
\footnote[1]{
The ion-ion collisions do not significantly affect the reconnection
dynamics, and are just included to regularize velocity space structures
of the ion distribution function. An ion collision frequency equal to
the electron one is assumed to be sufficient since the ion phase mixing
is at most as strong as that of electrons.
Ideally, a similar convergence test for ions as electrons shown in
Appendix~\ref{sec:vconv} should be performed.
}.
In all simulations reported here, the number of grid points in the $x$,
$y$ directions are $N_x=256$, $N_y=128$ (subject to the 2/3's rule for
de-aliasing), except for $\beta_{\mathrm{e}}=1$, where $N_x=512$ such that the linear
stage can be adequately resolved. In velocity space, we use
$N_{\lambda}=N_{E}=64$ for all cases, as required by the convergence
test described in Appendix~\ref{sec:vconv}.
\subsection{Temporal evolution of magnetic reconnection}
\label{sec:tevo}
Figure~\ref{fig:recrate} shows the time evolution of the electric field
at the $X$ point $(x,y)=(L_x/2,L_y/2)$ ($E_{X}$) as a measure of the
reconnection rate for different values of $\beta_{\mathrm{e}}$.
The peak reconnection rate values that we find ($\gtrsim 0.1$) are
comparable with the standard value usually reported in the literature in
the no-guide-field case; in addition, we find that the peak reconnection
rate is a weakly decreasing function of $\beta_{\mathrm{e}}$.
The reconnection rate from {\tt Viriato} is $\sim0.2$ again showing
very good agreement with the $\beta_{\mathrm{e}}=0.01$ case.
For higher $\beta_{\mathrm{e}}$ cases, the electric field drops sharply
shortly after the peak reconnection rate is achieved, eventually
reversing sign; this occurs because the current sheet becomes
unstable to the secondary plasmoid
instability~\citep{LoureiroSchekochihinCowley_07,LoureiroSchekochihinUzdensky_13},
with the $X$ point
becoming an $O$ point instead -- see Fig.~\ref{fig:diss_b1_e} for the
$\beta_{\mathrm{e}}=1$ case.
In the simulations shown in this work, the
secondary island stays at the center of the domain because there is no
asymmetry in the direction of the reconnecting field lines (\ie the
outflow direction).
\revise{For the $\beta_{\mathrm{e}}=1$ case, however, it eventually moves toward the
primary island due to the numerical noise as indicated by the
sharp spike of $E_{X}$ around $t/\tau_{\mathrm{A}}=52$.}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{rec_rate2.eps}
\includegraphics[scale=0.4]{maxrate.eps}
\vspace*{3em}
\caption{\label{fig:recrate}Left panel: The electric field at the
center of the domain as a function of time.
At early times, the center of the domain is an $X$ point and $E_{X}$
represents the reconnection rate. The reversal of the electric field
observed for $\beta_{\mathrm{e}}\gtrsim0.1$ indicates the conversion of the
$X$ point into an $O$ point, \ie the current sheet is unstable to
plasmoid formation. From that moment onwards, $E_{X}$ ceases to
represent the reconnection rate.
\revise{The second peak for $\beta_{\mathrm{e}}=1$ indicates that the plasmoid
moves away from the center of the domain because of the numerical
noise.}
Right panel: Maximum reconnection rate as a function of $\beta_{\mathrm{e}}$.
\revise{(In the $\beta_{\mathrm{e}}=1$ case, we plot the maximum reconnection rate
achieved prior to plasmoid formation.)}
}
\end{center}
\end{figure}
To detail the energy conversion processes taking place in our simulations
we plot in Fig.~\ref{fig:energy} the time evolution of each component of
the generalized energy \eqref{eq:gen_energy} normalized by the initial
magnetic energy ($E_{0}^{\mathrm{m}}$),
\revise{except the parallel magnetic energy
($E_{\parallel}^{\mathrm{m}}$) as it is almost zero for all cases.}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{energy2.eps}
\includegraphics[scale=0.5]{dissipation2.eps}
\caption{\label{fig:energy}Time evolution of the energy components
(left) and the dissipation rate of ions and electrons (right) for
$\beta_{\mathrm{e}}=0.01, 0.1, 1$. Note the time for the $\beta_{\mathrm{e}}=1$ case is shifted
by $t/\tau_{\mathrm{A}}=10$. The energies are normalized by the initial magnetic
field energy.}
\end{center}
\end{figure}
During magnetic reconnection, the magnetic energy is converted to the
particle's energy reversibly. Looking at the $\beta_{\mathrm{e}}=0.01$ case we see
that, first, the ion $\bm{E} \times \bm{B}$ drift flow is excited, thereby increasing
the ion energy.
\revise{Then, electrons exchange energies with the excited fields. The
parallel electric field accelerates or decelerates electrons depending
on their orbit and energy, but the net work done on electrons for this
case is positive. Even}
though electrons lose parallel kinetic energy by resistive
dissipation,
they gain energy via the phase-mixing process \nuno{as will be
explained in detail shortly}, and store the increased
energy in the form of temperature fluctuations and higher order
moments. The collisionally dissipated energy (the decrease of the total
energy) is about 1\% of the initial magnetic energy after the dynamical
process has almost ended ($t/\tau_{\mathrm{A}}=25$).
Also shown in Fig.~\ref{fig:energy}, right panel, are the dissipation rates of
electrons and ions as determined from \eqref{eq:dissipation_rate}.
The dissipation rate is normalized by the initial
magnetic energy divided by the Alfv\'en time,
$E^{\mathrm{m}}_{0}/\tau_{\mathrm{A}}$. For $\beta_{\mathrm{e}}=0.01$, the electron energy
dissipation starts to grow rapidly when the maximum reconnection rate is
achieved. It stays large long after the dynamical stage, and an
appreciable amount of the energy is lost at late times. The ion
dissipation is negligibly small compared with the electron's for this
case.
With regard to energy partition, the most important effect to notice as
$\beta_{\mathrm{e}}$ increases is the decrease in the electron energy gain. For
the $\beta_{\mathrm{e}}=1$ case, the released magnetic energy is contained mostly by
ions \nuno{as shown in Fig.~\ref{fig:energy}, bottom left}.
\revise{The plasmoid formation and ejection complicate the evolution of
energies and dissipations for the $\beta_{\mathrm{e}}=1$ case. \nuno{From the
bottom-left panel of Fig.~\ref{fig:energy},} we notice that there is a
slight increase of the magnetic energy during the plasmoid growth phase
($40\lesssim t/\tau_{\mathrm{A}} \lesssim 50$). The electron dissipation decreases
in this phase, and is re-activated when the plasmoid is ejected
(Fig.~\ref{fig:energy}, bottom right).
\nuno{This secondary heating of electrons is
associated with the newly formed $X$ point as will be shown in
Fig.~\ref{fig:diss_b1_plasmoid}.}}
A significant fraction of the released magnetic energy is collisionally
dissipated. The fraction of the dissipated energy
[$\Delta W=W(t)-W(0)$] to the released magnetic energy [$\Delta
E^{\mathrm{m}}=E^{\mathrm{m}}(t)-E^{\mathrm{m}}(0)$] is shown in
Fig.~\ref{fig:fraction}. The cross ($\times$) and dot ($\bullet$) symbols
on each line indicate the times of the maximum reconnection rate
and the maximum electron dissipation rate, respectively.
\revise{We show $\Delta W/\Delta E^{\mathrm{m}}$ only in the nonlinear
regime since both $\Delta W$ and $\Delta E^{\mathrm{m}}$ are almost zero
in the linear regime (see Fig.~\ref{fig:energy}).}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{diss_fraction2.eps}
\vspace*{2em}
\caption{\label{fig:fraction}Ratio of the dissipated energy to the
released magnetic energy as a function of time.
The cross ($\times$) and dot ($\bullet$) symbols on each line denote the
times of the maximum reconnection rate and the maximum electron
dissipation rate, respectively.
\revise{The plots are shown only in the nonlinear regime since both the
dissipation and magnetic energy release are almost zero in the
linear regime.}
}
\end{center}
\end{figure}
The total energy dissipation is determined by the local strength of
phase mixing (the magnitude of the integrand of \eqref{eq:dissipation_rate}
integrated over only the velocity coordinates at each position), the
total area where phase mixing is active, and the duration of phase
mixing. Therefore,
the energy conversion from the magnetic to thermal energy (\ie
how much of the released magnetic energy is dissipated from the system
($\Delta W/\Delta E^{\mathrm{m}}$)) as a function of time has no clear
dependence on $\beta_{\mathrm{e}}$.
\revise{For the $\beta_{\mathrm{e}}=1$ case, in particular, $\Delta W/\Delta
E^{\mathrm{m}}$ evolves in a complex manner because the evolution of
energies and dissipations are significantly altered by the plasmoid as
described above.}
As $\beta_{\mathrm{e}}$ is increased, the ion dissipation also becomes large. As
shown in Fig.~\ref{fig:heating_ratio}, the ratio of the energy
dissipation of ions to electrons, $D_{\mathrm{i}}/D_{\mathrm{e}}$,
becomes approximately unity for the $\beta_{\mathrm{e}}=1$ case implying that ion
heating is as significant as electron heating when $\beta_{\mathrm{e}} \sim 1$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{ehrate2.eps}
\vspace*{2em}
\caption{\label{fig:heating_ratio}Ratio of the dissipation rate of
ions to that of electrons for five different $\beta_{\mathrm{e}}$ values. Ion heating is
comparable to electron heating for $\beta_{\mathrm{e}}\sim1$.}
\end{center}
\end{figure}
\subsection{Phase mixing and energy dissipation}
\label{sec:phasemixing}
Figure~\ref{fig:diss_b0.01} shows spatial distributions of the
dissipation rate of electrons for $\beta_{\mathrm{e}}=0.01$.
The top two panels show the dissipation rate of the electron energy at
$t/\tau_{\mathrm{A}}=10$, \ie when the reconnection rate is
maximum. At this time, the electron dissipation is localized near the
reconnection point. If we subtract the first two velocity moments from
the distribution function, which correspond to perturbations of the
density and bulk flow in the $z$ direction
\revise{(\ie the dissipation rate based on $h_s'$ defined in
\eqref{eq:hermite})},
we find that the dissipation occurs just downstream of the reconnection
site. This indicates that the dissipation at the $X$ point is the
resistivity component, corresponding to the decay of the initial
electron current. The contribution of higher order
moments to the dissipation spreads along the separatrix.
The energy lost via Ohmic heating at this stage is much smaller than the
dissipation due to phase mixing that occurs later, and is not visible at
later times.
(In all other plots of the spatial distribution of $D_{\mathrm{e}}$,
the resistivity component is negligible.)
After the dynamical reconnection phase has ended, most of the available flux
has been reconnected and a large island is formed.
Electron dissipation is large inside the island
(bottom panel of Fig.~\ref{fig:diss_b0.01}), and continuously
increases in time -- see Fig.~\ref{fig:energy}, top right.
In the regions where the dissipation rate is large, we expect the
distribution function to show oscillatory structures due to phase mixing
(the local strength of dissipation is roughly proportional to $\nu
h^{2} \delta
v^{-2}$ where $\delta v$ denotes the scale length of the distribution
function in velocity space). Figure~\ref{fig:dstfne_b0.01} shows electron
distribution functions, without the first two velocity moments, taken
where the non-resistive part of the dissipation
rate is largest for $t/\tau_{\mathrm{A}}=10, 20$. The distribution function is
normalized by $\varepsilon n_{0}/(\sqrt{\pi} v_{\mathrm{th,e}})^{3}$.
The distribution function
only has gradients in the $v_{\parallel}$ direction, indicating that
parallel phase mixing is occurring, and the scale length $\delta v$
becomes smaller as time progresses. The phase-mixing structures develop
slowly compared with the time scale of the dynamical reconnection
process, therefore the energy dissipation peaks long after the peak
reconnection rate~\citep{LoureiroSchekochihinZocco_13}.
Phase mixing is significant when the Landau resonance condition
is met: Electrons moving along the reconnected field lines feel
variations of the electromagnetic fields evolving with the velocity
$\sim V_{\mathrm{A}}$ in the perpendicular plane.
Since the magnetic field lies dominantly in the $z$ direction, but has
a small component in the $x-y$ plane, when electrons run along the field
lines with $v_{\parallel}$, electrons also move in the $x-y$ plane with
$v_{\parallel}\left(B_{\perp}/B_{z0}\right)$, thus the resonance
condition is given by $v_{\parallel} \left(B_{\perp}/B_{z0}\right) \sim V_{\mathrm{A}}$.
For Fig.~\ref{fig:dstfne_b0.01}, phase mixing is most pronounced
around $v_{\parallel}/v_{\mathrm{th,e}}\sim1$, which agrees with the resonance
condition for $\beta_{\mathrm{e}}=0.01$ and the mass ratio
$\sigma=0.01$. Therefore, electrons with $v_{\parallel}/v_{\mathrm{th,e}}\sim1$ can
effectively exchange energies with the fields ($\phi$ in this case
because $\delta B_{\parallel}$ is small).
\revise{Parallel heating driven by the curvature drift through the Fermi
acceleration mechanism suggested by \citet{DrakeSwisdakChe_06} and
\citet{DahlinDrakeSwisdak_14} is negligible for the strong-guide-field
case discussed here simply because the curvature drift is small compared
with the thermal velocity.}
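The resonance condition can be made concrete with a short numerical sketch. Using $v_{\mathrm{th,e}}\left(B_{\perp}/B_{z0}\right)/V_{\mathrm{A}}=\sqrt{\beta_{\mathrm{e}}/\sigma}$, the resonant parallel velocity is $v_{\parallel}/v_{\mathrm{th,e}}\sim\sqrt{\sigma/\beta_{\mathrm{e}}}$; the snippet below evaluates it for the three $\beta_{\mathrm{e}}$ values shown in Fig.~\ref{fig:energy} (mass ratio $\sigma=0.01$ as in these runs):

```python
import numpy as np

# Resonant parallel velocity from v_par * (B_perp/B_z0) ~ V_A, using
# v_the * (B_perp/B_z0) / V_A = sqrt(beta_e/sigma) as quoted in the text.
sigma = 0.01
v_res = {b: float(np.sqrt(sigma / b)) for b in (0.01, 0.1, 1.0)}
for beta_e, v in v_res.items():
    print(f"beta_e = {beta_e:4.2f}: resonant v_par/v_the ~ {v:.2f}")
# beta_e = 0.01 -> resonance at v_par ~ v_the; beta_e = 1 -> v_par ~ 0.1 v_the,
# i.e. deeper in the Maxwellian bulk, so more electrons participate.
```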
\begin{figure}
\begin{center}
\includegraphics[scale=0.22]{De_t10.3_b0.01.eps}
\includegraphics[scale=0.22]{De_t10.3_b0.01_sub_mark.eps}
\includegraphics[scale=0.22]{De_t20.3_b0.01_mark.eps}
\caption{\label{fig:diss_b0.01}Spatial distributions of the
dissipation rate of electrons
($D_{\mathrm{e}}/(E^{\mathrm{m}}_0/\tau_A)$) for $\beta_{\mathrm{e}}=0.01$.
The top two panels show the dissipation rate at $t/\tau_{\mathrm{A}}=10$ where
the reconnection rate is maximum. The left panel includes the full
dissipation, while the right does not include the resistivity
component. The bottom figure shows the dissipation rate at
$t/\tau_{\mathrm{A}}=20$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{dstfne_b0.01_t10.eps}
\includegraphics[scale=0.3]{dstfne_b0.01_t20.eps}
\caption{\label{fig:dstfne_b0.01}Velocity space structures of electron
distribution function, without the first two velocity moments, for
$\beta_{\mathrm{e}}=0.01$, normalized by
$\varepsilon n_{0}/(\sqrt{\pi}v_{\mathrm{th,e}})^{3}$. Distribution functions
are taken at the strongest dissipation point.}
\end{center}
\end{figure}
For the $\beta_{\mathrm{e}}=1$ case, we perform the same diagnostics for both
electrons and ions in Figs.~\ref{fig:diss_b1_e}--\ref{fig:dstfn_b1_i}.
For this case, the electron dissipation again occurs along the
separatrix, but is confined to narrow strips.
At later times, the electron heating is restricted to
the narrow edge region of the primary island. Looking at the structure
of the electron distribution function at the point where the dissipation
is largest, we see narrow parallel phase-mixing structures. The Landau
resonance is pronounced for $v_{\parallel}<v_{\mathrm{th,e}}$.
Note that $v_{\mathrm{th,e}}
\left(B_{\perp}/B_{z0}\right)/V_{\mathrm{A}}=\sqrt{\beta_{\mathrm{e}}/\sigma}=10$.
More electrons are thus in resonance at higher $\beta_{\mathrm{e}}$, thereby
enhancing the local strength of phase mixing as an energy dissipation
process.
At late times, the ion and electron dissipation rates become comparable.
We show spatial distributions of the ion dissipation rate at
the time of maximum reconnection rate, $t/\tau_{\mathrm{A}}=37.8$, and maximum of the ion
dissipation rate, $t/\tau_{\mathrm{A}}=47$, in Fig.~\ref{fig:diss_b1_i}. It is clearly
shown that the ion dissipation is localized to the center of the domain.
As the reconnection process proceeds, ion flows in the $z$ direction
develop, as well as the $\bm{E} \times \bm{B}$ drift in the $x-y$ plane. Collisional
damping of these flows (ion viscosity) is the main contribution to the
ion dissipation at $t/\tau_{\mathrm{A}}=37.8$, though this is only a small fraction
of the dissipation that occurs later as a result of phase mixing.
As shown in the distribution function plot in Fig.~\ref{fig:dstfn_b1_i},
the ion distribution function has a component proportional to
$v_{\parallel}$ driven by $E_z$,
and peaks at $v_{\perp}\sim0.5$, corresponding to the $\bm{E} \times \bm{B}$ drift
velocity. No phase-mixing structures are visible at this time.
At later times,
the ion dissipation is large in the secondary island at the center of
the domain. At this stage, the ion distribution function displays
significant structure both in the parallel and perpendicular velocity
directions, indicating that the overall dissipation is due to both
parallel (linear) and perpendicular (nonlinear) phase mixing.
\note{
Perpendicular phase mixing is indeed expected to be large for
$\beta_{\mathrm{e}}\sim1$ since $k_{\perp}\rho_{\mathrm{i}}\sim\rho_{\mathrm{i}}/d_{\mathrm{e}}\gg1$. However, since
it is a nonlinear dissipation mechanism, perpendicular phase mixing can
only be significant when the amplitudes of the perturbed fields become
sufficiently large.}
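The scale estimate behind the note can be sketched numerically (assuming $T_{\mathrm{i}}=T_{\mathrm{e}}$ so that $\beta_{\mathrm{i}}=\beta_{\mathrm{e}}$, and using $\rho_{\mathrm{i}}/d_{\mathrm{e}}=\sqrt{\beta_{\mathrm{i}}/\sigma}$):

```python
import numpy as np

# Ion-Larmor parameter at the electron scale k_perp ~ 1/d_e:
# k_perp * rho_i ~ rho_i / d_e = sqrt(beta_i / sigma), with sigma = 0.01.
sigma = 0.01
kperp_rhoi = {b: float(np.sqrt(b / sigma)) for b in (0.01, 0.1, 1.0)}
for beta_e, val in kperp_rhoi.items():
    print(f"beta_e = {beta_e:4.2f}: k_perp * rho_i ~ {val:4.1f}")
# Only for beta_e ~ 1 is k_perp * rho_i >> 1, where perpendicular
# (nonlinear) phase mixing can become competitive.
```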
\begin{figure}
\begin{center}
\includegraphics[scale=0.22]{De_t37.8_b1_sub_mark.eps}
\includegraphics[scale=0.22]{De_t40.1_b1_sub_mark.eps}
\caption{\label{fig:diss_b1_e}Spatial distribution of the dissipation
rate of electrons ($D_{\mathrm{e}}/(E^{\mathrm{m}}_0/\tau_A)$) for $\beta_{\mathrm{e}}=1$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{dstfne_b1_t37.8_sub.eps}
\includegraphics[scale=0.3]{dstfne_b1_t40.1_sub.eps}
\caption{\label{fig:dstfn_b1_e}Velocity space structures of electron
distribution function, without the first two velocity moments, for
$\beta_{\mathrm{e}}=1$, normalized by
$\varepsilon n_{0}/(\sqrt{\pi}v_{\mathrm{th,e}})^{3}$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.22]{Di_t37.8_b1_mark.eps}
\includegraphics[scale=0.22]{Di_t46.9_b1_mark.eps}
\caption{\label{fig:diss_b1_i}Spatial distribution of the dissipation
rate of ions ($D_{\mathrm{i}}/(E^{\mathrm{m}}_0/\tau_A)$) for $\beta_{\mathrm{e}}=1$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{dstfni_b1_t37.8.eps}
\includegraphics[scale=0.3]{dstfni_b1_t46.9.eps}
\caption{\label{fig:dstfn_b1_i}Velocity space structures of ion
distribution function for $\beta_{\mathrm{e}}=1$, normalized by
$\varepsilon n_{0}/(\sqrt{\pi}v_{\mathrm{th,i}})^{3}$.}
\end{center}
\end{figure}
\revise{Around $t/\tau_{\mathrm{A}}=52$, the secondary island moves due to
numerical noise, and the electron dissipation increases significantly.
\nuno{Spatial distributions of the dissipation rate of electrons and
ions at this time are shown in Fig.~\ref{fig:diss_b1_plasmoid}. We see
that the electron dissipation at this time occurs at the newly formed
$X$ point, and is stronger compared with that at earlier times,
indicating re-activation of the heating process.
The ion dissipation does not significantly increase at the plasmoid
ejection, but just spreads outside the secondary plasmoid.}}
\begin{figure}
\begin{center}
\includegraphics[scale=0.22]{De_t52.7_b1.eps}
\includegraphics[scale=0.22]{Di_t52.7_b1.eps}
\caption{\label{fig:diss_b1_plasmoid}Spatial distribution of the
dissipation rate of electrons and ions for $\beta_{\mathrm{e}}=1$, when the
plasmoid moves.}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have reported gyrokinetic simulations of magnetic reconnection
in weakly collisional, \nuno{strongly magnetized} plasmas at varying
values of $\beta_{\mathrm{e}}$.
The peak reconnection rate we find is $\sim 0.1$, as is often reported
in the weak-guide-field case, and weakly decreases with increasing
$\beta_{\mathrm{e}}$. We also observe that, as $\beta_{\mathrm{e}}$ increases, the reconnection
site becomes unstable to secondary island formation.
During reconnection, phase-mixing structures slowly develop.
Electron heating occurs after the dynamical reconnection process
has ceased, and continues long after.
The ion heating is comparable to the electron heating for $\beta_{\mathrm{e}}\sim 1$,
and insignificant at lower values of $\beta_{\mathrm{e}}$.
The fraction of dissipated energy to released magnetic energy is a
complicated function of the local strength, active area, and
duration of the phase-mixing process.
The electron heating that we measure in our simulations is caused by
parallel phase mixing. It initially occurs along the separatrix, and
slowly spreads to the interior of the primary magnetic island.
Phase mixing is most pronounced for electrons streaming along
the magnetic field at velocities $v_{\parallel}
\left(B_{\perp}/B_{z0}\right) \sim
V_{\mathrm{A}}$, where $V_{\mathrm{A}}$ is the perpendicular Alfv\'en velocity.
For our lowest $\beta_{\mathrm{e}}$ case, this implies $v_{\parallel}\sim v_{\mathrm{th,e}}$,
whereas at larger $\beta_{\mathrm{e}}$ the resonance condition is satisfied for
$v_{\parallel}<v_{\mathrm{th,e}}$. Consequently, electron dissipation is more
efficient at higher values of $\beta_{\mathrm{e}}$.
The ion heating, on the other hand, does not spread,
\revise{as long as the plasmoid is stationary},
occurring only at the reconnection site, and later inside the secondary
island that is formed. For ions, both parallel and perpendicular
phase-mixing processes are active.
\revise{It is also found that, once the secondary island moves, the
electron heating becomes active at the new $X$ point.}
In summary, we have shown that electron and ion bulk heating
via phase mixing is a significant energy dissipation channel in
\nuno{strong-guide-field}
reconnection, extending the results first reported
in \citet{LoureiroSchekochihinZocco_13} to the regime of $\beta_{\mathrm{e}}\sim
1$.
These results, therefore, underscore the importance of retaining
finite collisions in reconnection studies,
even if the reconnection process itself is collisionless.
\nuno{
In addition, it is perhaps worth noting that the usual particle
energization mechanisms found in weak-guide-field reconnection (Fermi
acceleration \citep{DrakeSwisdakChe_06}) cannot be efficient for
reconnection in strongly magnetized plasmas because the curvature drift
associated with the Fermi mechanism is small.
Our results may therefore point to an alternative heating mechanism valid
when the guide field is much larger than the reconnecting field. Scaling
studies such as those by \citet{TenbargeDaughtonKarimabadi_14} may help
identify the regions of parameter space where each of these mechanisms
(Fermi or phase mixing) predominates.
}
\section*{Acknowledgements}
This work was supported by JSPS KAKENHI Grant Number 24740373. This work
was carried out using the HELIOS supercomputer system at Computational
Simulation Centre of International Fusion Energy Research Centre
(IFERC-CSC), Aomori, Japan, under the Broader Approach collaboration
between Euratom and Japan, implemented by Fusion for Energy and JAEA.
N. F. L. was supported by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia
through grants Pest-OE/SADG/LA0010/2011, IF/00530/2013 and PTDC/FIS/118187/2010.
Traditionally, power distribution systems have been engineered as radial networks that carry power from the utility feeder to the end consumers of electricity. These consumers are seen as a set of passive loads, and generally, the utility has no control of their electricity demand, and does not collect any real-time information from them. In the future, under the \emph{SmartGrid} initiative, distribution systems are envisioned to undergo a dramatic transformation in structure and functionality. Instead of passive demands, buses in the network could include clusters of households with rooftop solar installations, electric vehicles, and storage; these generation and storage devices are commonly referred to as distributed energy resources (DERs). It has been shown that massive penetration of rooftop solar can cause voltages to rise in the distribution system, while large-scale deployment of electric vehicles can cause voltages to drop \cite{Carvalho08,Guille09}. The objective of this paper is to address this voltage regulation problem.
Voltage regulation in distribution networks is traditionally achieved via tap-changing under-load transformers and switched capacitors; however, while ensuring that voltage requirements are met, the settings of these devices might not be optimal, in the sense that they may not minimize thermal losses or other objectives. On the other hand, by relying on a cyber-infrastructure for sensing, communication, and control, it is also possible to utilize DERs with the capability to provide reactive power for optimal voltage regulation (see, e.g., \cite{Lam12} and the references therein). Unfortunately, the current speed of installation of DERs far outpaces the deployment of the cyber-infrastructure.\footnote{The advanced metering infrastructure (e.g. smart meters) transmits its information on a daily basis and is not used for real-time communication.} For example, most distribution networks in California have not implemented real-time two-way communication between DERs and the utilities, but the state already has 2900 MW of installed capacity \cite{SEIA13}, enough for about 600,000 households. Therefore we are faced with a situation where DERs are already in the distribution network with no communication network to coordinate them. Thus, an urgent question arises: in order to utilize DERs for voltage regulation in distribution networks, how should they be managed with no communication or very limited communication between the buses in the network?
In our setting, we assume each bus can measure its own voltage magnitude, active and reactive power injections, but cannot communicate with other buses in the network. The available control action at a bus is the adjustment of its active and reactive power injections. Altering the active power injection at certain buses corresponds to demand response (DR) actions. Because of economic and regulatory considerations, dynamic DR is difficult to implement without a communication network; therefore, in this paper we only consider reactive power injections at each bus as the main control knob for DER-based voltage regulation. We assume that reactive power can be provided by, e.g., the power electronics grid interfaces of rooftop solar installations \cite{JoOo:00,LoKr:01,solar_bridge}.
In this paper, we propose a method for voltage regulation that relies on the local control actions\footnote{We call the control actions \emph{local control actions} to emphasize that only local information is available at each bus.} of reactive-power-capable DERs. We provide sufficient conditions such that local control is guaranteed to be successful in maintaining any specified voltage profile. We also provide conditions on the network topology and active power injection regions that shows when local control cannot maintain voltage stability in the system. We arrive at our results by casting the voltage regulation problem as an optimization problem, and investigate the conditions under which optimality can be achieved without communication between the buses.
The problem of voltage regulation in distribution networks has received considerable attention in recent years. Centralized algorithms have been proposed in \cite{Baran07,Turitsyn10,Villacci06} and decentralized algorithms have been proposed in \cite{Farivar12,Lam12,Bolognani12,Robbins12}. For the latter, communication is generally assumed to be possible between subsets of the buses (e.g., \cite{Lam12} provides a distributed optimization algorithm that adheres to neighbor-to-neighbor communications, and \cite{Bolognani12} can choose different communication topologies using an approximate model for power flow). In contrast, our proposed algorithm does not rely on a communication infrastructure. A related question in the context of voltage stability has been studied in \cite{Ilic:1986}.
This paper is organized as follows. Section \ref{sec:model} introduces the notations and the power system distribution model adopted throughout the paper. Section \ref{sec:result} gives the main algorithm. Section \ref{sec:sr} provides convergence analysis of the algorithm. Section \ref{sec:sim} provides case studies, and Section \ref{sec:con} concludes the paper. The proofs of the main results are provided in the Appendix.
\section{Model and Problem Formulation} \label{sec:model}
This section introduces the power distribution system models and also provides the formulation of the problem. Consider a distribution network with $n$ buses. Today, most power distribution systems operate as radial networks connected to a single feeder \cite{Kersting06}; therefore we model the network as a connected tree rooted at the feeder. By convention, the feeder is labelled as bus $0$. The distance from bus $i$ to the root (bus 0) is the number of vertices on the unique path from bus $i$ to the root. The depth of the tree is the maximum of the bus distances to the root.
Let $V_i=|V_i| e^{j \theta_i}$ denote the complex voltage at bus $i$, and define the vector $\mathbf v =[ V_0 \; V_1 \dots V_{n-1}]^T \in \mathbb{C}^n$. Let $I_i$, $P_i$ and $Q_i$ denote the complex current, active and reactive power injections at bus $i$, respectively; and define $\mathbf i=[I_0 \; \dots \; I_{n-1}]^T \in \mathbb{C}^n$, $\mathbf p=[P_0 \; \dots \; P_{n-1}]^T \in \mathbb{R}^n$, and $\mathbf q=[Q_0 \; \dots \; Q_{n-1}]^T \in \mathbb{R}^n$. Let $\mathbf s=\mathbf p + j \mathbf q$ denote the complex power. The active powers in a distribution network are typically constrained by upper and lower capacity limits, i.e., $\underline{P}_i \leq P_i \leq \overline{P}_i$, where the upper and lower bounds are determined by the types of devices at bus $i$. Let $\mathcal P =\{\mathbf p \in \mathbb{R}^{n-1}: \underline{P}_i \leq P_i \leq \overline{P}_i \}$ be the feasible region formed by the bus injections (see \cite{Zhang13,LTZ12} for more detailed discussions).
Given two connected buses $i$ and $k$, let $y_{ik}=g_{ik}-jb_{ik}$ denote the admittance of the transmission line between them, and let $\mathbf Y$ be the bus admittance matrix \cite{Overbye04}. Then, the complex power injections are related to the voltage by $\mathbf s = \diag (\mathbf v \mathbf v^H \mathbf Y^H)$,
where $(\cdot)^H$ denotes the Hermitian transpose, and the $\diag(\cdot)$ operator returns the diagonal if the argument is a square matrix or a diagonal matrix if the argument is a vector. We are interested in regulating the voltage in the distribution network, so let $V_0^{ref}, V_1^{ref},\dots,V_{n-1}^{ref}$ be the set of reference voltage magnitudes that we are interested in tracking. We assume that the feeder behaves as an infinite bus, thus $V_0=V_0^{ref}$. We assume that bus $i$ can control its reactive power, whereas the active power is set at a prescribed value $P_i^*$. We formulate the voltage regulation problem as the following feasibility problem:
\begin{subequations} \label{eqn:feasible}
\begin{align}
\mbox{find } & \mathbf q \\
\mbox{s.t. } & |V_i|=V_i^{ref},~ i=0,\dots,n-1 \\
& P_i=P_i^*, \; i=1,\dots,n-1 \label{eqn:p_f} \\
& \mathbf p + j \mathbf q =\diag(\mathbf v \mathbf v^H \mathbf Y^H), \label{eqn:s_f}
\end{align}
\end{subequations}
where \eqref{eqn:p_f} corresponds to the fact that active powers are fixed, and the equality constraints in \eqref{eqn:s_f} correspond to the power flow equations.
That is, we seek to find a reactive power injection vector $\mathbf q$ such that the voltages are at their desired levels. The feasibility of \eqref{eqn:feasible} can be decided by the following optimization problem:
\begin{subequations} \label{eqn:main_v}
\begin{align}
\min_{\mathbf q }\; & \sum_{i=1}^{n-1} (|V_i|-V_i^{ref})^2 \label{eqn:obj_v} \\
\mbox{s.t. } & P_i=P_i^*, \; i=1,\dots,n-1 \label{eqn:p_v} \\
& V_0=V_0^{ref} \label{eqn:v0_v} \\
& \mathbf p + j \mathbf q =\diag(\mathbf v \mathbf v^H \mathbf Y^H), \label{eqn:s_v}
\end{align}
\end{subequations}
where the feeder bus (bus 0) always holds its voltage at $V_0^{ref}$. The objective function penalizes the deviation of the voltage magnitudes from their reference values. The feasibility problem in \eqref{eqn:feasible} has a solution if and only if the optimal value of the optimization problem \eqref{eqn:main_v} is 0.
It turns out the objective function in \eqref{eqn:obj_v} is not easy to analyze. We rewrite \eqref{eqn:main_v} with a new objective function as:
\begin{subequations} \label{eqn:main}
\begin{align}
\min_{\mathbf q }\; & \sum_{i=1}^{n-1} \left(|V_i|^2-(V_i^{ref})^2 \right)^2 \label{eqn:obj} \\
\mbox{s.t. } & P_i=P_i^*, \; i=1,\dots,n-1 \label{eqn:p} \\
& V_0=V_0^{ref} \label{eqn:v0} \\
& \mathbf p + j \mathbf q =\diag(\mathbf v \mathbf v^H \mathbf Y^H), \label{eqn:s}
\end{align}
\end{subequations}
Intuitively, the reason that \eqref{eqn:obj} is much easier to handle than \eqref{eqn:obj_v} is that $|V_i|$ is not differentiable with respect to $\mathbf q$ but $|V_i|^2$ is. The rest of the paper presents a local control algorithm to solve \eqref{eqn:main}.
\textbf{Remark:} In this paper we do not consider the constraints on bus reactive power injections $\mathbf q$. In an earlier work, where limited communication is present, all constraints were considered \cite{Lam12}. The next step of this work is to incorporate the constraints on reactive power injections. \hfill $\Box$
\subsection{Uniqueness of Power Flows} \label{sec:2A}
Given a power system with $n$ buses, the state of the system is described by $2n-1$ real numbers: the $n$ voltage magnitudes and $n-1$ angles (taking bus $0$ to be the reference bus). This suggests that there are $2n-1$ degrees of freedom in the power system. However, for transmission networks, specifying the voltage magnitudes and active powers $P_1,\dots,P_{n-1}$ is not enough to determine the state of the system, since there could be multiple sets of angles that result in those active powers \cite{BeVi:00}. In contrast, due to the tree structure of a distribution network \cite{LTZ12}, specifying the voltage magnitudes and the active powers at $n-1$ buses is enough to fully specify the state of the system. Therefore we often give the power system state in terms of voltage magnitudes and active powers; the corresponding angles and reactive powers are uniquely specified.
\section{Local Control Algorithm} \label{sec:result}
Consider a distribution system that is operating at its reference voltage, i.e., with $|V_i|=V_i^{ref}$ for $i=0 \dots n-1$. Denote the corresponding active and reactive injection vectors by $\overline{ \mathbf p}$ and $\overline{\mathbf q}$, respectively. Due to changes in renewables and loads, the active power injections in the system may change from $\overline{\mathbf p}$ to $\mathbf p'$. If the reactive powers stay constant at $\overline{\mathbf q}$, the voltage magnitudes will no longer be at their reference values. This section proposes an algorithm that brings the voltage magnitudes back to the reference values by adjusting the reactive powers; the algorithm relies purely on local voltage measurements and does not involve communication among nodes in the network.
To correct the voltage magnitudes, there are $n-1$ control variables: the reactive power injections at all the buses except the feeder (bus $0$). We assume that the problem in \eqref{eqn:feasible} has a solution, or equivalently the optimal value of the problem in \eqref{eqn:main} is 0, and we design a local control algorithm that finds the optimal reactive power injection vector $\mathbf q$. Since bus $i$ can only measure local quantities, we propose the following iterative update algorithm:
\textbf{Local Control Algorithm:}
\begin{itemize}
\item Initialize the system at $t=0$.
\item At time $t$, bus $i$ measures $|V_i[t]|$ and makes the following update:
\begin{equation} \label{eqn:update_i}
Q_i[t+1]=Q_i[t]-d_i \left(|V_i[t]|^2-(V_i^{ref})^2 \right)
\end{equation}
for $i=1,\dots,n-1$. Intuitively, a bus should inject more reactive power to raise its voltage, so $d_i$ is assumed to be positive (see \cite{Robbins12} for a deeper discussion).
\item $t \leftarrow t+1$ and repeat
\end{itemize}
By a slight abuse of notation, let $\mathbf q=[Q_1 \; \dots \; Q_{n-1}]^T$ since we do not explicitly control the reactive injection at the feeder. The matrix version of the update in \eqref{eqn:update_i} can be written as
\begin{equation} \label{eqn:update}
\mathbf q[t+1]= \mathbf q[t]- \mathbf D \left(|\mathbf v[t]|^2 - (\mathbf v^{ref})^2 \right),
\end{equation}
where $\mathbf D=\diag(d_1,\dots,d_{n-1})$ is a diagonal matrix and the squares are taken component-wise. Note that $|\mathbf v[t]|$ and $\mathbf v^{ref}$ are both vectors. We envision that the local control algorithm would run in a continuous fashion to supplement the actions of devices such as tap-changing under-load transformers (TCULs) and switched capacitors. This local control algorithm is similar to the first-stage algorithm in \cite{Robbins12}, with $|V_i|$ replaced by $|V_i|^2$.
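As a concrete illustration of the update \eqref{eqn:update}, the following Python sketch closes the loop through a linearized (LinDistFlow-style) voltage model rather than the full power flow equations; the sensitivity matrix, operating point, and gains are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

# Sketch of the local update: each bus corrects its reactive power from its
# own squared-voltage error only.  The power flow is replaced here by a
# *linearized* model |v|^2 = v2_base + X q (illustrative numbers).
X = 0.3 * np.ones((3, 3)) + np.diag([0.3, 0.5, 0.7])   # dv^2/dq, positive definite
v2_base = np.array([0.96, 0.95, 0.94])   # squared voltages at q = 0, after a load change
v2_ref = np.ones(3)                      # track 1 p.u. at every bus

def v2_of(q):
    """Squared voltage magnitudes under the linearized model."""
    return v2_base + X @ q

D = np.diag([0.5, 0.5, 0.5])             # local gains d_i > 0 with rho(I - D X) < 1
q = np.zeros(3)
for _ in range(300):
    q = q - D @ (v2_of(q) - v2_ref)      # no communication between buses

print(np.max(np.abs(v2_of(q) - v2_ref)))   # -> ~0: voltages restored
```

In this linearized setting the iteration contracts because the spectral radius of $I-\mathbf D X$ is below one; the convergence analysis of the following section makes conditions of this kind precise for the true power-flow model.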
\section{Convergence of the Local Control Algorithm} \label{sec:sr}
The convergence properties of the local control algorithm are investigated in this section. Since the active bus injections are the parameters in the voltage regulation problem, the convergence of the algorithm also depends on the prescribed active power injections. Recall from Section \ref{sec:model} that $\mathcal P$ is the feasible injection region determined by the active bus power constraints. In this section, we answer the following two questions:
\begin{enumerate}
\item Under what condition can we ensure that the algorithm converges for every $\mathbf p \in \mathcal P$?
\item How does network topology influence the convergence of the algorithm? That is, for the two networks in Fig. \ref{fig:star_line} and the same active injection vector, does the algorithm converge for one of the networks but not the other?
\begin{figure}[ht]
\centering
\subfigure[Star Topology]{
\includegraphics[scale=0.7]{star.eps}}
\subfigure[Line Topology]{
\includegraphics[scale=0.7]{line_s.eps}}
\caption{We are interested in how the topology of the network affects the convergence properties of the local control algorithm. We show that it is easier for the local algorithm to converge on the star network than on the line network. Therefore the depth of the network limits the convergence of the algorithm.}
\label{fig:star_line}
\vspace{-0cm}
\end{figure}
\end{enumerate}
Since power system planners generally plan for the worst-case scenario, answering the first question ensures that the algorithm works for every possible $\mathbf p$ and that communication is not needed to regulate the voltage. Theorem \ref{thm:sr} below states that it is in fact sufficient to check the convergence of the algorithm at a single $\mathbf p$ to ensure convergence for the entire feasible injection region $\mathcal{P}$.
In Section \ref{sec:long}, we show that the local control algorithm converges for the star topology under a much broader set of conditions than for the line topology. Since typical distribution networks are more like lines than stars, they are in some sense inherently more difficult to control using only local actions. Before stating the convergence theorems for the entire $\mathcal P$, we first state the result for a single power injection vector.
\subsection{Convergence for a Power Injection Vector }
The local control algorithm shows that we can view the $n-1$ voltages on the non-feeder buses as a function of the $n-1$ reactive power injections at these buses. The active powers and the feeder voltage can be interpreted as parameters in this function. From this point of view, we make the following definition:
Let $\mathbf M$ be the $n-1$ by $n-1$ Jacobian matrix where the $(i,k)$-th entry is
\begin{equation} \label{eqn:M}
M_{ik}= \frac{\partial |V_i|^2}{\partial Q_k};
\end{equation}
the partial derivative is evaluated at some active power and voltage magnitudes.
Note that $\mathbf M$ depends on the values of both the active powers and the voltage magnitudes; however, the variation of $\mathbf M$ with respect to the voltage magnitudes is relatively small over a wide range of operating conditions \cite{BeVi:00}. Accordingly, we will always evaluate $\mathbf M$ at some $\mathbf p$ and the reference voltage magnitudes. To emphasize the dependence of $\mathbf M$ on $\mathbf p$, we write it as $\mathbf M (\mathbf p)$.
The next lemma gives a sufficient condition under which the local control algorithm converges.
\begin{lem} \label{lem:1}
Suppose the optimization problem in \eqref{eqn:main} has optimal value $0$ (that is, the problem in \eqref{eqn:feasible} is feasible). Let $\mathbf p$ be the active power injection vector. The update in \eqref{eqn:update_i} converges if $\mathbf D \mathbf M(\mathbf p) + \mathbf M(\mathbf p)^T \mathbf D \succ 0 $, where $ \succ 0 $ denotes positive definiteness.
\end{lem}
The proof is an application of Prop. 2.1 in \cite{Bertsekas97} and is given in Appendix \ref{app:1}.
\subsection{Stability Region} \label{sec:sr}
The matrix inequality $\mathbf D \mathbf M(\mathbf p) + \mathbf M(\mathbf p)^T \mathbf D \succ 0 $ is in the form of a Lyapunov equation, and this motivates the next definition.
\begin{defn}
An active power injection vector is called stable if there exists a diagonal matrix $\mathbf D$ such that $\mathbf D \mathbf M(\mathbf p) + \mathbf M(\mathbf p)^T \mathbf D \succ 0$. The set of all stable active power injections is called the stability region and is denoted by $\mathcal S_p$.
\end{defn}
Deciding the existence of a stabilizing $\mathbf D$ for a given $\mathbf p$ is a convex problem, and can be solved using semidefinite programming techniques \cite{Boyd04}. Theorem \ref{thm:sr} below shows how the stability for the entire region $\mathcal P$ can be checked.
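For a fixed candidate $\mathbf D$, the condition in the definition is a plain eigenvalue test on the symmetric matrix $\mathbf D \mathbf M + \mathbf M^T \mathbf D$. A minimal sketch (the $2\times 2$ matrix below is made up for illustration, not taken from any test feeder):

```python
import numpy as np

def is_stabilizing(D, M):
    """Check whether the diagonal gain matrix D certifies stability,
    i.e. whether D M + M^T D is positive definite."""
    S = D @ M + M.T @ D
    return bool(np.min(np.linalg.eigvalsh(S)) > 0)

# Made-up sensitivity matrix with strong diagonal dominance.
M = np.array([[3.0, 1.0],
              [0.5, 2.0]])

print(is_stabilizing(np.eye(2), M))             # True: D = I works here
print(is_stabilizing(np.diag([1.0, -1.0]), M))  # False for this indefinite D
```

Searching over all diagonal $\mathbf D$ is the semidefinite-programming feasibility problem mentioned above; the sketch only tests one candidate at a time.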
Given the feasible injection region $\mathcal{P}$, let $\mathbf p_{\min}$ be the minimum injection vector in $\mathcal P$. That is, $\mathbf p_{\min}=[\underline{P}_1 \; \dots \; \underline{P}_{n-1}]$ (recall $\underline{P}_i<0$ is the lower bound on the active power injection, which corresponds to the maximum demand). The next theorem gives a sufficient condition for the stability of $\mathcal P$ in terms of the stability of $\mathbf p_{\min}$.
\begin{thm} \label{thm:sr}
Let $\mathbf p_{\min}$ be the minimum injection vector. Let $\bm \theta_{\min}$ be the corresponding angles when the voltage magnitudes are at their reference values (recall Section \ref{sec:2A}). Suppose $|\theta_{\min,ik}| \leq \tan^{-1}(b_{ik}/g_{ik})$. Then if $\mathbf M(\mathbf p_{\min})$ is stable, $\mathbf M(\mathbf p)$ is stable for all $\mathbf p \in \mathcal P$. That is, $\mathcal P \subseteq \mathcal S_p$.
\end{thm}
The proof of Theorem \ref{thm:sr} is given in the Appendix. The significance of the theorem is that it suffices to check the stability of a single point, which ensures the stability of the entire $\mathcal P$. Thus, in practice it is easy to check whether $\mathcal P \subseteq \mathcal S_p$ and how much demand needs to be reduced to achieve stability under local controls.
The condition on $\theta_{\min,ik}$ is discussed in detail in \cite{LTZ12}, and is equivalent to the thermal constraints on the transmission lines. It is expected to hold in almost all practical situations.
\subsection{Effects of Topology on the Stability Region} \label{sec:long}
Theorem \ref{thm:sr} states that the stability of $\mathcal P$ is determined by the minimum injection vector. In this section, we show that the stability region $\mathcal S_p$ shrinks as the depth of the network increases. This shows that long networks are inherently more difficult to control, or equivalently, a line network is inherently more unstable than a star network. Due to space constraints, rather than stating a general theorem, we state a result pertaining to line networks.
\begin{lem} \label{lem:long}
Consider an $n$-bus homogeneous line network where every line has admittance $1-j$. Then the stability region $\mathcal S_p$ approaches the single point $\mathbf 0$ as $n$ goes to infinity.
\end{lem}
This lemma states that as the depth of the network increases, communication becomes critical in maintaining the voltage stability of the network. Practical networks often have a large depth, e.g., the 123-bus system has a depth of 23; therefore, some communication would likely be necessary for large distribution networks.
\section{Case Studies} \label{sec:sim}
We analyze the stability of the proposed local control algorithm for three of the IEEE test feeders: the 8-bus, 34-bus and 123-bus networks (see \cite{testfeeders} for system data). Since only one demand point is given in the data, we compute $\mathbf M$ at that point. By Theorem \ref{thm:sr}, all demand vectors less than the given demand are stable. Table \ref{tab:stable} shows whether each network is stable under its current demand.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|}
Networks & 8-bus & 34-bus & 123-bus \\
\hline
Depth & 6 & 19 & 23 \\
Stable & yes & yes & no
\end{tabular}
\caption{Stability of the three networks at the demand given in \cite{testfeeders}.}
\label{tab:stable}
\vspace{-0.5cm}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35]{34sim_a.eps}
\caption{The behavior of the voltage magnitudes under the local control algorithm. Each curve is a normalized voltage, i.e., $|V_i[t]|/V_i^{ref}$. All of the curves converge to $1$, which means that all voltages reach their reference values.}
\label{fig:34sim_a}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35]{34sim_b.eps}
\caption{The behavior of the voltage magnitudes under the local control algorithm. Each curve is a normalized voltage, i.e., $|V_i[t]|/V_i^{ref}$. The demand in the system is 3.5 times the demand in Fig. \ref{fig:34sim_a}. Due to the heavier loading, the network becomes unstable under the local control algorithm and experiences large voltage swings.}
\label{fig:34sim_b}
\vspace{-0.7cm}
\end{figure}
We focus on the 34-bus network and demonstrate the action of the local control algorithm. Figures \ref{fig:34sim_a} and \ref{fig:34sim_b} respectively show the voltage magnitudes in the network when the local action is able to stabilize the voltage and when it fails to do so. In Fig. \ref{fig:34sim_a} the demand is set to the current demand in \cite{testfeeders}; initially, we perturb the voltages randomly by 10\% and apply the local control algorithm in Section \ref{sec:result}. The voltage magnitudes converge to the reference voltages.
The local control algorithm fails to converge once the demand in the network is increased beyond about 3 times its current value. In Fig. \ref{fig:34sim_b}, we increase the demand to $3.5$ times the current demand, and again perturb the voltages randomly by 10\% from their reference values; the figure shows the unstable behavior of the voltage magnitudes.
\section{Conclusion} \label{sec:con}
In this paper we investigated the utilization of reactive-power-capable DERs for voltage regulation in distribution networks without explicit communication among the buses. We proposed an algorithm in which each bus controls its reactive power injection by sensing only its own voltage magnitude. We provided sufficient conditions under which the algorithm converges for the entire feasible active power injection region of a network, and results showing that longer and more heavily loaded networks are harder to control using localized algorithms. We validated our claims in case studies using the IEEE 34-bus test feeder.
\bibliographystyle{IEEEtran}
\section{Introduction}
In the context of conventional (classical) error correction,
deletion correction, which was introduced by Levenshtein in 1966
\cite{levenshtein66}, has attracted much attention recently
(see, for example, \cite{sima21} and the references therein).
In the correction of erasures,
the receiver is aware of positions of erasures \cite{bennett97,grassl97,macwilliams77}.
In contrast to this, the receiver is unaware of positions of deletions,
which adds extra difficulty to correction of deletions and code constructions
suitable for deletion correction.
Partly due to the combined difficulties of deletion correction and
quantum error correction, the study of quantum deletion correction
has begun very recently \cite{hagiwara20b,hagiwara20a,hagiwara20c}.
Those researches provided concrete examples of quantum deletion-correcting codes.
The first systematic construction of
$1$-deletion-correcting binary quantum codes was proposed in \cite{hagiwara20b},
where $((2^{k+2}-4, k))_2$ codes were constructed for any positive integer $k$.
Very recently, the first systematic constructions of
$t$-deletion-correcting binary quantum codes were proposed \cite{hagiwara21,picode2}
for any positive integer $t$.
The number of codewords was two in \cite{hagiwara21,picode2}.
There are the following problems in the existing studies:
(1) There is no systematic construction for nonbinary quantum codes correcting
more than one deletion.
(2) Existing studies of quantum error correction cannot be reused
in an obvious manner.
In this paper, we tackle these problems by proposing two systematic
constructions of nonbinary quantum codes.
The first one is based on the method of type in the information theory \cite{csiszarbook2}.
The constructed codes belong to the class of permutation-invariant quantum codes
\cite{picode,picode2}. It can construct quantum codes for qudits of arbitrary dimension,
but the codes can correct only one deletion and are asymptotically bad.
The second construction converts quantum erasure-correcting codes
to deletion-correcting ones. The construction is asymptotically good,
and can correct as many deletions as the number of correctable erasures
of the underlying quantum codes.
But the second construction has severe limitations on the dimension of qudits.
For example, the second construction cannot construct binary or ternary quantum codes.
This paper is organized as follows: Section \ref{sec2}
introduces necessary notations and concepts.
Section \ref{sec3} proposes the first construction.
Section \ref{sec4} proposes the second construction.
Section \ref{sec5} concludes the paper.
\section{Preliminaries}\label{sec2}
Let $\mathbf{Z}_\ell = \{0$, $1$, \ldots, $\ell-1\}$.
A type $P$ \cite{csiszarbook2}
of length $n$ on the alphabet $\mathbf{Z}_\ell$
is a probability distribution on $\mathbf{Z}_\ell$
such that each probability in $P$ is of the form $m/n$, where $m$ is an integer.
The alphabet is fixed to $\mathbf{Z}_\ell$
when we consider types.
For $\vec{x} = (x_1$, \ldots, $x_n) \in \mathbf{Z}_\ell^n$,
the type $P_{\vec{x}}$ of $\vec{x}$ is
the probability distribution $P_{\vec{x}}(a) = \sharp \{ i \mid x_i = a \} / n$,
where $\sharp$ denotes the number of elements in a set.
For a type $P$ of length $n$,
$T(P)$ denotes the set of all sequences with type $P$, that is,
\[
T(P) = \{ \vec{x} \in \mathbf{Z}_\ell^n \mid P_{\vec{x}} = P \}.
\]
For types $P_1$ and $P_2$, we define $P_1 \sim P_2$
if there exists a permutation $\sigma$ on $\ell$ numbers in a type such that
$\sigma(P_1) = P_2$.
For example, when $P_1 = (1/3$, $1/6$, $1/2)$, $\sigma(P_1)$
can be $(1/6$, $1/2$, $1/3)$.
This $\sim$ is an equivalence relation, and we can consider
equivalence classes induced by $\sim$.
We denote an equivalence class represented by $P$ by $[P]$.
We define $T([P]) = \bigcup_{Q \in [P]} T(Q)$.
\begin{definition}
For $0 \leq t \leq n-1$, we say that
a type $P_1$ of length $n-t$ is a type of $P_2$ after $t$ deletions,
where $P_2$ is a type of length $n$, if
\begin{itemize}
\item For each $a \in \mathbf{Z}_\ell$,
$(n-t)P_1(a) \leq n P_2(a)$,
\item and $\sum_{a \in \mathbf{Z}_\ell} \{n P_2(a) - (n-t)P_1(a)\} = t$.
\end{itemize}
\end{definition}
We see that $P_{\vec{y}}$ is a type of $P_{\vec{x}}$ after $t$ deletions
if $\vec{y}$ is obtained by deleting $t$ components in $\vec{x}$.
\begin{definition}
Let $S = \{ P_0$, \ldots, $P_{M-1} \} $ be a set of types of length $n$.
We call $S$ suitable for $t$-deletion correction
if for any $Q_1 \in [P_i]$ and any $Q_2 \in [P_j]$ with $Q_1 \neq Q_2$
there does not exist a type $R$ of length $n-t$
such that $R$ is a type of both $Q_1$ and $Q_2$ after $t$ deletions.
\end{definition}
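These two definitions can be checked mechanically on small examples. In the sketch below (written for this illustration, not part of the original construction), a type of length $n$ is stored as its integer count vector $(nP(0),\dots,nP(\ell-1))$; one can check that two types of length $n$ share a common type after $t$ deletions exactly when the $\ell_1$-distance between their count vectors is at most $2t$:

```python
from itertools import permutations

def share_descendant(a, b, t):
    """Do two count vectors of types of the same length n share a common
    type after t deletions?  This reduces to an l1-distance test."""
    return sum(abs(x - y) for x, y in zip(a, b)) <= 2 * t

def is_suitable(types, t):
    """Check suitability for t-deletion correction: no two distinct
    members of the equivalence classes may share a descendant type."""
    members = sorted({q for counts in types for q in permutations(counts)})
    return not any(share_descendant(q1, q2, t)
                   for i, q1 in enumerate(members)
                   for q2 in members[i + 1:])

# Counts 3*P_0 = (3,0,0) and 3*P_1 = (1,1,1) for the l = n = 3 code:
print(is_suitable([(3, 0, 0), (1, 1, 1)], 1))   # True
print(is_suitable([(3, 0, 0), (2, 1, 0)], 1))   # False: counts too close
```

The first check confirms that the $\ell=n=3$ pair of types used later in Section III is suitable for $1$-deletion correction.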
Let $\mathcal{H}_\ell$ be the complex linear space of
dimension $\ell$.
By an $((n,M))_\ell$ quantum code we mean
an $M$-dimensional complex linear subspace $Q$ of
$\mathcal{H}_\ell^{\otimes n}$.
An $((n,M))_\ell$ code is said to be $\ell$-adic.
The information rate of $Q$ is defined to be $(\log_\ell M) / n$.
A code construction is said to be asymptotically good
if it can give a sequence of codes with which $\liminf_{n\rightarrow\infty} (\log_\ell M) / n >0$
\cite{macwilliams77}, and said to be bad otherwise.
\section{First Construction of Quantum Deletion Codes}\label{sec3}
\subsection{Construction}
With a given $S$ suitable for $t$-deletion correction,
we construct an $((n,M))_\ell$ quantum code as follows:
An $M$-level quantum state
$\alpha_0 \ket{0} + \cdots + \alpha_{M-1}\ket{M-1}$
is encoded to a codeword $\ket{\varphi} \in Q$ as
\[
\sum_{k=0}^{M-1} \alpha_k \frac{1}{\sqrt{\sharp T([P_k])}} \sum_{\vec{x} \in T([P_k])}
\ket{\vec{x}}.
\]
In the next subsection, we will prove that this construction can correct $t=1$ deletion.
\subsection{Proof of $1$-Deletion Correction}
We assume $t=1$ in this subsection (see Remark \ref{rem1}).
The proof argument does not work for $t>1$.
Firstly, for any codeword $\ket{\varphi} \in Q$,
any permutation of $n$ qudits in $\ket{\varphi}$ does not
change $\ket{\varphi}$.
Our constructed codes are instances of the permutation-invariant
quantum codes \cite{picode,picode2}.
So any $t$ deletion of $\ket{\varphi}$ is the same as deleting
the first, the second, \ldots, the $t$-th qudits in
$\ket{\varphi}$.
Therefore, $t$ deletion on $\ket{\varphi} \in Q$
can be corrected by assuming $t$ erasures in
the first, the second, \ldots, the $t$-th qudits.
By using Ogawa et al.'s condition \cite[Theorem 1]{ogawa05},
we show that the code can correct one erasure at the first qudit
by computing the partial trace $\mathrm{Tr}_{\overline{\{1\}}}[\ket{\varphi}\bra{\varphi}]$
of $\ket{\varphi}\bra{\varphi}$
over the second, the third, \ldots, and the $n$-th qudits.
Let $\ket{\varphi_k} = \frac{1}{\sqrt{\sharp T([P_k])}} \sum_{\vec{x} \in T([P_k])}\ket{\vec{x}}$.
We first compute
$\mathrm{Tr}_{\overline{\{1\}}}[\ket{\varphi_k}\bra{\varphi_k}]$.
Let $D_1$ be the deletion map from $\mathbf{Z}_\ell^n$
to $\mathbf{Z}_\ell^{n-1}$ deleting the first component.
For $\vec{x} \in \mathbf{Z}_\ell^n$, $x_i$ denotes the $i$-component.
\begin{eqnarray*}
&& \mathrm{Tr}_{\overline{\{1\}}}[\ket{\varphi_k}\bra{\varphi_k}] \\
&=& \frac{1}{\sharp T([P_k])} \sum_{a,b \in \mathbf{Z}_\ell}
\ket{a}\bra{b} \times \sharp \{ (\vec{x}, \vec{y})
\in T([P_k])\times T([P_k]) \\
&& \quad \mid
x_1 = a, y_1 =b, D_1(\vec{x})=D_1(\vec{y})\}.
\end{eqnarray*}
When $a=x_1 \neq b=y_1$ and $D_1(\vec{x})=D_1(\vec{y})$
we have $P_{\vec{x}} \neq P_{\vec{y}}$.
Since there does not exist a type $R$ of length $n-1$ such that
$R$ is a type of both $P_{\vec{x}}$ and $P_{\vec{y}}$
after 1 deletion, for any $k$
there cannot exist $\vec{x}$, $\vec{y} \in T([P_k])$
such that $a=x_1 \neq b=y_1$ and $D_1(\vec{x})=D_1(\vec{y})$.
On the other hand, by the symmetry of the construction,
the set $\{ (\vec{x}, \vec{y})
\in T([P_k])\times T([P_k]) \mid
x_1 = a=y_1, D_1(\vec{x})=D_1(\vec{y})\}$ has the same cardinality
for every $a \in \mathbf{Z}_\ell$.
Therefore, we see that
\[
\rho_k = \mathrm{Tr}_{\overline{\{1\}}}[\ket{\varphi_k}\bra{\varphi_k}]
= \frac{1}{\ell} \sum_{a\in \mathbf{Z}_\ell} \ket{a}\bra{a}.
\]
On the other hand,
by the construction,
for $k_1 \neq k_2$, $\vec{x} \in T([P_{k_1}])$,
$\vec{y} \in T([P_{k_2}])$, $D_1 (\vec{x})$ is always different from
$D_1 (\vec{y})$, which implies
\begin{equation}
\mathrm{Tr}_{\overline{\{1\}}}[\ket{\varphi}\bra{\varphi}]
= \sum_{k=0}^{M-1} |\alpha_k|^2 \rho_k = I_{\ell \times \ell} / \ell. \label{eq1}
\end{equation}
By \cite[Theorem 1]{ogawa05}, this implies that the constructed code
can correct one erasure at the first qudit,
which in turn implies one deletion correction by the symmetry of codewords
with respect to permutations. \rule{1ex}{1ex}
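As a numerical sanity check (illustrative, not part of the proof), the identity in Eq.\ (\ref{eq1}) can be verified for the shortest code ($\ell=n=3$, discussed below) by building the codewords as dense vectors and tracing out all but the first qutrit:

```python
import numpy as np
from itertools import permutations

ell, n = 3, 3   # three qutrits: the shortest 1-deletion-correcting code

def basis_state(x):
    """Computational-basis vector |x> for a string x over Z_ell."""
    v = np.zeros(ell**n)
    v[sum(c * ell**(n - 1 - i) for i, c in enumerate(x))] = 1.0
    return v

# |phi_0>: uniform superposition over T([P_0]) = {000, 111, 222}
phi0 = sum(basis_state((a, a, a)) for a in range(ell)) / np.sqrt(3)
# |phi_1>: uniform superposition over T([P_1]), the 6 permutations of 012
strings = sorted(set(permutations((0, 1, 2))))
phi1 = sum(basis_state(x) for x in strings) / np.sqrt(len(strings))

# An arbitrary encoded state alpha_0|phi_0> + alpha_1|phi_1>
phi = 0.6 * phi0 + 0.8 * phi1

# Partial trace over the second and third qutrits
psi = phi.reshape(ell, ell, ell)
rho = np.einsum('ijk,mjk->im', psi, psi.conj())
print(np.round(rho, 6))   # equals I/3, independent of alpha_0, alpha_1
```

The reduced state is the maximally mixed state $I_3/3$ regardless of the encoded amplitudes, in agreement with Eq.\ (\ref{eq1}).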
\begin{remark}\label{rem1}
When $t>1$, Eq.\ (\ref{eq1}) sometimes depends on the encoded quantum information,
and one cannot apply \cite[Theorem 1]{ogawa05}.
Since the number of types is polynomial in $n$
\cite{csiszarbook2}, the proposed construction is asymptotically bad.
\end{remark}
\subsection{Examples}
\subsubsection{Nakahara's Code}
Let $\ell=n=3$.
Then $P_0 = ( 1,0,0)$ and $P_1 = (1/3,1/3,1/3)$ are suitable for $1$-deletion correction.
This code was first found by Dr.\ Mikio Nakahara at Kindai University.
Since a $1$-deletion-correcting quantum code of length $2$
is prohibited by the quantum no-cloning theorem \cite{wootters82},
this code has the shortest possible length among all $1$-deletion-correcting quantum
codes.
\subsubsection{Example 2}
Let $n=7$, $\ell=3$.
Then $P_0=(7/7,0,0)$, $P_1=(5/7, 1/7,1/7)$, $P_2=(3/7, 2/7,2/7)$
are suitable for $1$-deletion correction.
\subsubsection{Example 3}
Let $n=8$, $\ell=4$.
Then $P_0 = (8/8,0,0,0)$, $P_1=(6/8, 1/8, 1/8,0)$,
$P_2 = (4/8, 4/8, 0, 0)$, $P_3=(4/8, 2/8, 1/8, 1/8)$
are suitable for $1$-deletion correction.
\section{Second Construction of Quantum Deletion Codes}\label{sec4}
The previous construction allows arbitrary $\ell$, but the information rate $(\log_\ell M) / n$
goes to zero as $n\rightarrow \infty$.
In this section, we construct a $t$-deletion-correcting code over $\mathcal{H}_{(t+1)\ell}$,
that is, we assume that the qudit has $(t+1)\ell$ levels.
We introduce an elementary lemma,
which is known in conventional coding theory \cite{hagiwara20c}.
\begin{lemma}
Let $\vec{x} = (0$, $1$, \ldots, $t$, $0$, $1$, \ldots $) \in \mathbf{Z}_{t+1}^n$.
Let $\vec{y}$ be a vector after deletions of at most $t$ components in $\vec{x}$.
Then one can determine all the deleted positions from $\vec{y}$.
\end{lemma}
\noindent\textbf{Proof:}
Let $i = \min \{ j \mid y_j > y_{j+1} \}$.
Then $y_1$, \ldots, $y_i$ are the surviving components among $x_1$, \ldots, $x_{t+1}$.
The set difference $\{ x_1$, \ldots, $x_{t+1} \} \setminus \{ y_1$, \ldots, $y_i \}$
reveals the deleted positions among $x_1$, \ldots, $x_{t+1}$.
Repeat the above procedure from $y_{i+1}$ until the rightmost component in $\vec{y}$
and one gets all the deleted positions.
\rule{1ex}{1ex}
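The recovery procedure in the proof can be sketched in a few lines (an illustrative implementation written for this lemma, assuming for simplicity that $n$ is a multiple of $t+1$, so that no period of the marker can be deleted entirely):

```python
def marker(n, t):
    """The sequence x = (0, 1, ..., t, 0, 1, ...) of length n."""
    return [i % (t + 1) for i in range(n)]

def deleted_positions(y, n, t):
    """Recover the deleted positions from y (at most t deletions),
    assuming n is a multiple of t + 1."""
    # Split y into maximal strictly increasing runs; as in the proof,
    # the r-th run collects the survivors of the r-th period (0,...,t).
    runs, cur = [], [y[0]]
    for a, b in zip(y, y[1:]):
        if b > a:
            cur.append(b)
        else:
            runs.append(cur)
            cur = [b]
    runs.append(cur)
    missing = []
    for r, run in enumerate(runs):
        for s in sorted(set(range(t + 1)) - set(run)):
            missing.append(r * (t + 1) + s)
    return missing

x = marker(6, 2)                                   # [0, 1, 2, 0, 1, 2]
y = [x[i] for i in range(6) if i not in (1, 5)]    # delete positions 1, 5
print(deleted_positions(y, 6, 2))                  # [1, 5]
```

A run can never spill over a period boundary, because the value following the maximum $t$ is at most $t$ even after deletions, so runs and periods are in one-to-one correspondence.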
Let $Q \subset \mathcal{H}_\ell^n$ be a $t$-erasure-correcting $((n,M))_\ell$
quantum code.
A codeword $\ket{\psi_1} \in Q$ can be converted to
a codeword in the proposed $t$-deletion-correcting code as follows:
Firstly, observe $\mathcal{H}_{(t+1)\ell}$ is isomorphic to $\mathcal{H}_\ell \otimes
\mathcal{H}_{t+1}$.
Let $\ket{\psi_2} = \ket{01\cdots t 0 1 \cdots } \in \mathcal{H}_{t+1}^{\otimes n}$.
The sender sends $\ket{\psi_1} \otimes \ket{\psi_2}$ as a codeword
in $\mathcal{H}_{(t+1)\ell}^{\otimes n}$.
The receiver receives $\rho \in \mathcal{S}(\mathcal{H}_{(t+1)\ell}^{\otimes n-t'})$
after $t'$ deletions, where $0 \leq t' \leq t$
and $\mathcal{S}(\mathcal{H}_{(t+1)\ell}^{\otimes n-t'})$ denotes
the set of density matrices on $\mathcal{H}_{(t+1)\ell}^{\otimes n-t'}$.
The quantum system of the received state can be divided into
$\mathcal{H}_\ell^{\otimes n-t'}$ and $\mathcal{H}_{t+1}^{\otimes n-t'}$.
The receiver makes a projective measurement
on the subsystem $\mathcal{H}_{t+1}^{\otimes n-t'}$
defined by
$\{ \ket{\vec{y}}\bra{\vec{y}}$ $ \mid \vec{y} \in \mathbf{Z}_{t+1}^{n-t'} \}$.
Then the receiver knows all the deleted positions.
After that,
the receiver applies the erasure correction procedure of $Q$,
for example, \cite{matsumoto17uni} for quantum stabilizer codes
\cite{ashikhmin00,calderbank97,calderbank98,gottesman96,matsumotouematsu00}.
When $\ell$ is a prime power and $t$ is fixed relative to $n$,
$\lim_{n\rightarrow \infty} (\log_\ell M) / n$ can attain $1$ \cite{feng04},
and by the above construction the information rate
$\lim_{n\rightarrow \infty} (\log_{(t+1)\ell} M) / n$ can attain $\log_{(t+1)\ell} \ell$.
\begin{remark}
Let $\rho \in \mathcal{S}(\mathcal{H}_\ell^{\otimes n})$ be a quantum codeword
of an entanglement-assisted code \cite{brun06}. By using $\rho$ in place of $\ket{\psi_1}$
in this section, one can construct a $t$-deletion-correcting entanglement-assisted code.
\end{remark}
\section{Conclusion}\label{sec5}
This paper proposes two systematic constructions of
quantum deletion-correcting codes.
The first one has the advantage of supporting an arbitrary qudit dimension.
The second one has the advantages of multiple-deletion correction
and asymptotic goodness.
It is a future research agenda to find a construction having
all of the above advantages.
\section*{Acknowledgment}
The authors would like to thank Dr.\ Mikio Nakahara at Kindai University
for the helpful discussions.
\section{Introduction}
Describing chemical reactions which take place on more than just one electronic potential-energy surface
poses one of the primary open challenges in the field of
chemical reaction dynamics. \cite{Althorpe_2016,Lawrence2019rates}
These processes are relevant to many phenomena which we encounter not only in different disciplines of science, but also in our everyday life.
Ranging from redox reactions to photosynthesis, harvesting light in solar cells, molecular switches and many more, the most fundamental step of these processes is a nonadiabatic transition from one electronic state to another,
leading to a breakdown of the
Born--Oppenheimer approximation. \cite{Tully2012perspective}
Due to the great interest in these phenomena,
the study of nonadiabatic transitions is
an important topic for research.
Hence, there exists a plethora of different algorithms to simulate nonadiabatic dynamics,
\cite{Curchod_2018,Tully_2012,Crespo_Otero_2018,Yarkony_2011,Worth_2008,Makri2015QCPI}
from computationally expensive, but accurate methods based on wave-function propagation \cite{Beck_2000,MCTDH,Wang2006flux,Richings2015vMCG} to heuristically motivated, pragmatic methods such as trajectory surface hopping.\cite{Tully_1990,Subotnik_2016,Mai_2018}
Simulating the direct dynamics of a chemical reaction, however, is not usually a practical way to obtain information about the reaction rate, because
the typical time scales of chemical reactions are long.
Instead, a nonadiabatic extension of transition-state theory (TST) is required.
\cite{Rips1995ET}
The thermal rate constant for the transition from the reactant electronic state, with
internal energy levels $E_0^{\lambda}$ and a partition function $Z_0 = \sum_{\lambda} \mathrm{e}^{-\beta E_0^{\lambda}}$,
to the product electronic state, with
internal energy levels $E_1^{\nu}$, can be found by applying perturbation theory.
The result to lowest order in the coupling
$\Delta_{\lambda\nu}$ between these states
is given by the famous golden-rule formula \cite{Dirac1927,Wentzel_1927}
generalized for an initial thermal distribution \cite{Zwanzig,Nitzan}
\begin{align}
\label{equ:k_qs}
k_{\mathrm{QM}} &= \frac{2\pi}{\hbar} \sum_{\lambda} \frac{\mathrm{e}^{-\beta E_0^{\lambda}}}{Z_0} \sum_{\nu}
|\Delta_{\lambda\nu}|^2
\delta(E_0^{\lambda} - E_1^{\nu}) \, ,
\end{align}
whose name (given by Fermi) indicates its overwhelming relevance and applicability in a multitude of different fields,
including nuclear physics, light-matter interactions and nonadiabatic transitions.
\cite{Fermi1974,Schinke_1993,ConicalIntersections2,Parker2005,Ribeiro2018polariton}
One important example of the latter
is the nonadiabatic transfer of an electron from a donor to an acceptor.
\cite{Marcus1993review}
Marcus was awarded the Nobel Prize in Chemistry in 1992 \cite{Marcus1993review}
for his work on electron-transfer rate theory. \cite{Marcus1956ET,Marcus1985review}
One of the great triumphs of his theory was the prediction of
an
inverted regime, in which the rates decrease despite increasing thermodynamic driving force, and which was later confirmed by experiment.\cite{Miller1984inverted}
Because of its simplicity and practicability, Marcus theory remains the most commonly applied approach to describe electron-transfer reactions. \cite{Blumberger2015ET,Rosspeintner2013photochemistry,Koslowski2012ET,Pelzer2016,Antoniou2006,Feher1989,LyndenBell2007}
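The inverted-regime turnover mentioned above follows directly from the classical Marcus expression, $k \propto \lambda^{-1/2}\exp[-(\Delta G + \lambda)^2/4\lambda k_\mathrm{B}T]$, a standard result quoted here for illustration only; the sketch below uses arbitrary reduced units and an arbitrary prefactor:

```python
import math

def marcus_rate(dG, lam, kT=1.0, prefac=1.0):
    """Classical Marcus rate in reduced units: the activation energy is
    (dG + lam)^2 / (4 lam), so the rate is maximal at dG = -lam."""
    return prefac / math.sqrt(lam) * math.exp(-(dG + lam)**2 / (4.0 * lam * kT))

lam = 4.0   # reorganization energy (arbitrary reduced units)
for dG in [0.0, -2.0, -4.0, -6.0, -8.0]:
    print(f"dG = {dG:5.1f}   k = {marcus_rate(dG, lam):.3e}")
# The rate grows with driving force up to dG = -lam (the activationless
# point) and then decreases again: the Marcus inverted regime.
```

Within this classical expression the fall-off past $\Delta G = -\lambda$ is symmetric with the normal regime; nuclear tunnelling, discussed below, breaks this symmetry and enhances the inverted-regime rate.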
However, there are a number of approximations inherent in Marcus theory, \cite{ChandlerET}
which include the assumption of parabolic free-energy curves along the reaction coordinate.
It also employs classical statistics and cannot therefore capture nuclear quantum effects like zero-point energy and quantum tunnelling,
the neglect of which could lead to deviations from the exact rate by several orders of magnitude, especially at low temperatures.
The inclusion of these effects in novel nonadiabatic rate theories
which can be applied to molecular systems without making the parabolic approximation
is therefore a major objective. \cite{Althorpe_2016}
In particular it has been predicted that quantum tunnelling effects can substantially speed up the rate in the inverted regime.\cite{Siders1981inverted}
This was confirmed experimentally in \Ref{Akesson1991,*Akesson1992},
in which
reaction rates were found to be up to 8 orders of magnitude larger than predicted by
Marcus theory,
but which could be explained by including a quantum-mechanical treatment of the vibrational modes. \cite{Jortner1988}
Early approaches such as these for including quantum statistics in Marcus theory
were, however, only possible by
restricting the system to simplistic models such as the spin-boson model. \cite{Siders1981quantum}
On the other hand, classical golden-rule transition-state theory \cite{nonoscillatory,ChandlerET} has no restriction on the complexity of the system,
but does not take account of quantum nuclear effects.
A domain of methods which proved to be particularly successful in describing nuclear quantum effects in more general systems is based on Feynman's path-integral formulation of quantum mechanics.\cite{Feynman}
For instance, Fermi's golden rule has been recast into a semiclassical instanton formulation,\cite{GoldenGreens}
which does not require detailed knowledge about the internal states and can therefore be applied to complex systems.
However, there is a problem with these methods in the inverted regime because the imaginary-time propagator, on which most of these methods rely, diverges.
Hence, many previous semiclassical
\cite{Wolynes1987dissipation,Cao1995nonadiabatic,*Cao1997nonadiabatic,*Schwieters1998diabatic,Ranya2019instanton,GoldenGreens,GoldenRPI,AsymSysBath}
and imaginary-time path-integral approaches \cite{Wolynes1987nonadiabatic,Schwieters1999diabatic,Lawrence2019ET}
did not tackle the inverted regime.
One approach for extending these methods to treat
the inverted regime was suggested by Lawrence and Manolopoulos, \cite{Lawrence2018Wolynes} who analytically continued Wolynes' rate expression \cite{Wolynes1987nonadiabatic} into the inverted regime.
The rate is then obtained by extrapolating the path-integral data collected in the normal regime into the inverted regime.
While their methodology appears to work very well at predicting rates,
the mechanistic view may be lost by this approach,
as the rate is not extracted directly from a simulation of the system in the inverted regime.
The electron-transfer rate in the inverted regime
cannot be tackled directly by standard ring-polymer molecular dynamics \cite{Habershon2013RPMDreview}
where the transferred electron is treated as an explicit particle
\cite{Menzeleev2011ET}
as can be explained by a semiclassical analysis. \cite{Shushkov2013instanton}
However, more recent modifications of ring-polymer molecular dynamics, \cite{Menzeleev2014kinetic,*Kretchmer2016KCRPMD,Duke2016Faraday,Tao2019RPSH}
and golden-rule quantum transition-state theory (GR-QTST) \cite{GRQTST,GRQTST2}
have started to address this problem,
but these still lack the rigour, simplicity of implementation and mechanistic insight of instanton theory.
In this paper, we propose an extension of the semiclassical instanton method
\cite{GoldenGreens,GoldenRPI,AsymSysBath}
for the inverted regime
in the golden-rule limit.
The rate expression in the inverted regime is derived by analytic continuation of the formula in the normal regime, leading to a one-shot method which requires no extrapolation
of results collected in a different regime.
We show excellent agreement with the exact golden-rule rates for the model systems studied.
At the same time it gives direct mechanistic insights, as it automatically locates the dominant tunnelling pathway for the reaction under investigation, which is equivalent to predicting the mechanism.
It can therefore be used to shed light on the role of quantum nuclear tunnelling in electron-transfer reactions,
which is expected to be of substantial importance, especially in the inverted regime.
The instanton approach can be implemented using a ring-polymer discretization,
in which the only change necessary to make the algorithm applicable for the inverted regime is a slight variation in the optimization scheme,
which
turns out to be just as reliable and effective as the optimization in the normal regime.
Hence, the method is conceptually ideally suited for use in conjunction with \textit{ab-initio} potentials and thereby for realistic simulations of molecular systems,
as has been demonstrated for the standard ring-polymer instanton approach. \cite{porphycene,i-wat2,Asgeirsson2018instanton,hexamerprism,Rommel2012enzyme}
\section{Instanton theory in the normal regime}
In this section we summarize our previous derivation of semiclassical instanton theory
for the golden-rule rate in the normal regime.\cite{GoldenGreens}
This follows a similar approach to our derivation of instanton theory on a single Born--Oppenheimer potential. \cite{AdiabaticGreens,InstReview}
We consider a general multidimensional system with two electronic states,
each with a nuclear Hamiltonian of the form
\begin{align}
\label{Hn}
\op{H}_n &= \sum_{j=1}^D \frac{\op{p}_j^2}{2m} + V_n(\op{\mat{x}}),
\end{align}
where
$n\in\{0,1\}$ is the electronic-state index
and
$\mat{x}=(x_1,\dots,x_D)$
are the Cartesian coordinates
of $D$ nuclear degrees of freedom.
These nuclei move on the potential-energy surface $V_n(\mat{x})$
with conjugate momenta $\op{p}_j$.
Without loss of generality, the nuclear degrees of freedom have been mass-weighted such that each has the same mass, $m$.
The electronic states $\ket{n}$ are coupled by $\Delta(\hat{\mat{x}})$ to give the total Hamiltonian
in the diabatic representation, \cite{ChandlerET}
\begin{align}
\op{H} &= \op{H}_0 \ketbra{0}{0} + \op{H}_1 \ketbra{1}{1} + \Delta(\op{\mat{x}}) \big( \ketbra{0}{1} + \ketbra{1}{0} \big).
\end{align}
We shall take the diabatic coupling to be constant, $\Delta(\op{\mat{x}}) = \Delta$, and
assume that it is very weak, i.e.\ $\Delta\rightarrow0$, known as the golden-rule limit,
which is typically the case in electron-transfer reactions. \cite{ChandlerET}
The quantum-mechanical rate is then given by the golden-rule expression, \eqn{equ:k_qs},
which is valid both in the normal and inverted regimes.
However, in order to calculate the rate in this way,
the internal states of both the reactant and product would be required,
which are typically not known and cannot be computed for a complex system.
\subsection{Correlation function formalism}
The purpose of the semiclassical instanton approach is to obtain a good approximation to the golden-rule rate
without detailed knowledge of the internal states.
Therefore, instead of using the expression in \eqn{equ:k_qs},
we employ the alternative exact definition of the quantum rate \cite{Miller1983rate,ChandlerET,nonoscillatory}
\begin{equation}
k_{\mathrm{QM}} Z_0 = \frac{\Delta^2}{\hbar^2} \int_{-\infty}^{\infty} c(\tau+\ensuremath{\mathrm{i}} t) \,\mathrm{d} t \, ,
\label{equ:ExactK}
\end{equation}
where the flux correlation function is
\begin{equation}
c(\tau+\ensuremath{\mathrm{i}} t) = \Tr \left[ \mathrm{e}^{-\hat{H}_0 (\beta\hbar - \tau-\mathrm{i} t )/\hbar}\mathrm{e}^{-\hat{H}_1 (\tau + \mathrm{i} t )/\hbar}\right] \, ,
\label{equ:fcf}
\end{equation}
and the reactant partition function is $Z_0 = \Tr\big[ \mathrm{e}^{-\beta \hat{H}_0} \big]$.
Note that in order to write the expression in this form, it is necessary to assume that the energies of the internal states of both the reactant and product are bounded from below, i.e.\ there exists a finite-energy ground state of $\op{H}_0$ and $\op{H}_1$.
We shall in addition assume that the energies are not bounded from above, which is the typical situation for molecular Hamiltonians.
In this section we shall choose
$\tau$ in the range $0<\tau<\beta\hbar$.
Under these circumstances,
it can be shown that $c(\ensuremath{\mathrm{i}} z)$ is an analytic function of $z=t-\ensuremath{\mathrm{i}}\tau$
and as such the integral $\int_{-\infty}^\infty c(\ensuremath{\mathrm{i}} z) \, \mathrm{d} z$ is independent of the contour of integration
and hence the rate is independent of the choice of $\tau$, at least within its range of validity.
Expanding the trace in a coordinate-space representation gives
\begin{multline}
k_{\mathrm{QM}} Z_0 = \frac{\Delta^2}{\hbar^2} \iiint_{-\infty}^{\infty} K_0(\mat{x}',\mat{x}'',\beta\hbar - \tau - \mathrm{i}t )\\
\times K_1(\mat{x}'',\mat{x}',\tau + \mathrm{i}t) \, \mathrm{d} \mat{x}' \, \mathrm{d} \mat{x}'' \,\mathrm{d} t ,
\label{equ:pi_rate}
\end{multline}
where
the imaginary-time quantum propagators, defined by
\begin{equation}
\label{equ:kernel}
K_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) = \braket{\mat{x}_\text{f} | \mathrm{e}^{-\tau_n\hat{H}_n/\hbar} |\mat{x}_\text{i}} \, ,
\end{equation}
describe the dynamics of the system evolving from the initial position $\mat{x}_\text{i}$ to the final position $\mat{x}_\text{f}$
in imaginary time $\tau_n$
according to the Hamiltonian $\hat{H}_n$.
Real-time dynamics can also be described by
making the third argument complex.
\Eqn{equ:pi_rate} is valid for systems in both the normal and inverted regimes and, for model systems where the propagators are known analytically, can be evaluated by numerical integration.
However, because it is necessary to limit ourselves to the range $0 < \tau < \beta\hbar$,
as we will show,
the semiclassical approximation
described in \secref{SC}
can only be derived directly for the normal regime.
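As an aside, the simplest case in which the propagators are known analytically is the harmonic oscillator, whose imaginary-time propagator is the Mehler kernel. The following minimal sketch (with arbitrary unit parameters, not tied to any system studied in this paper) implements the kernel and checks it by recovering the harmonic-oscillator partition function from a numerical trace:

```python
import numpy as np

# Mehler kernel: the analytic imaginary-time propagator of a harmonic
# oscillator, K(x_i, x_f, tau) = <x_f| exp(-tau H / hbar) |x_i>.
# The argument tau may be taken complex (tau + i t), as needed for the
# flux correlation function.
def K_ho(xi, xf, tau, m=1.0, w=1.0, hbar=1.0):
    s = np.sinh(w * tau)
    c = np.cosh(w * tau)
    pref = np.sqrt(m * w / (2 * np.pi * hbar * s))
    return pref * np.exp(-m * w / (2 * hbar * s)
                         * ((xi**2 + xf**2) * c - 2 * xi * xf))

# sanity check: integrating the diagonal at tau = beta*hbar recovers the
# partition function Z = 1 / (2 sinh(beta hbar w / 2))
beta, hbar, w = 2.0, 1.0, 1.0
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
Z_numeric = np.sum(K_ho(x, x, beta * hbar, w=w, hbar=hbar)) * dx
Z_exact = 1.0 / (2.0 * np.sinh(beta * hbar * w / 2.0))
```

The same kernel with complex third argument can be used to tabulate the integrand of the rate expression for such models.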
\subsection{Semiclassical approximation}
\label{sec:SC}
The instanton expression for the rate is obtained by first replacing the exact quantum propagators by semiclassical van-Vleck propagators \cite{GutzwillerBook} generalized for imaginary-time arguments \cite{Miller1971density,InstReview}
\begin{equation}
K_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) \sim \sqrt{\frac{C_n}{(2\pi\hbar)^D}} \mathrm{e}^{- S_n/\hbar} \, .
\end{equation}
This expression is evaluated using the classical trajectory, $\mat{x}_n(u)$,
which travels
from $\mat{x}_n(0) = \mat{x}_\text{i}$ to $\mat{x}_n(\tau_n) = \mat{x}_\text{f}$
in imaginary time $\tau_n$.
This trajectory is found as the path
which makes the Euclidean action, $S_n$, stationary,
and the action is defined as
\begin{align}
\nonumber S_n &= S_n(\mat{x}_\text{i}, \mat{x}_\text{f}, \tau_n)\\
&= \int_{0}^{\tau_n} \left[ \frac{1}{2} m \| \dot{\mat{x}}_n(u) \|^2 + V_n(\mat{x_n}(u)) \right] \mathrm{d} u \, ,
\label{equ:action}
\end{align}
where $\dot{\mat{x}}_n(u) = \frac{\mathrm{d} \mat{x}_n}{\mathrm{d} u}$ is the imaginary-time velocity.
The prefactor of the semiclassical propagator is given by the determinant
\begin{equation}
C_n = \left| -\frac{\partial^2S_n}{\partial \mat{x}_\text{i} \partial \mat{x}_\text{f}} \right| \, .
\label{equ:C_n}
\end{equation}
Plugging this semiclassical propagator into \eqn{equ:pi_rate}
allows us to perform
the integrals over the
end-points $\mat{x}'$, $\mat{x}''$ and over time $t$
employing the method of steepest descent. \cite{BenderBook}
This leads to the following expression for the golden-rule instanton rate, \cite{GoldenGreens}
\begin{equation}
k_{\mathrm{SCI}} Z_0 = \sqrt{2\pi\hbar} \frac{\Delta^2}{\hbar^2} \sqrt{\frac{C_0 C_1}{- \Sigma}} \mathrm{e}^{-S/\hbar} ,
\label{equ:kinst}
\end{equation}
where the total action is
\begin{equation}
S = S(\mat{x}',\mat{x}'',\tau) = S_0(\mat{x}',\mat{x}'',\beta\hbar-\tau) + S_1(\mat{x}'',\mat{x}',\tau) ,
\label{equ:total_action}
\end{equation}
and the determinant arising from the steepest-descent integration is
\begin{equation}
\Sigma = \begin{vmatrix}
\frac{\partial^2S}{\partial \mat{x}' \partial \mat{x}'} & \frac{\partial^2S}{\partial \mat{x}' \partial \mat{x}''} & \frac{\partial^2S}{\partial \mat{x}' \partial \tau} \\
\frac{\partial^2S}{\partial \mat{x}'' \partial \mat{x}'} & \frac{\partial^2S}{\partial \mat{x}'' \partial \mat{x}''} & \frac{\partial^2S}{\partial \mat{x}'' \partial \tau} \\
\frac{\partial^2S}{ \partial \tau \partial \mat{x}'} & \frac{\partial^2S}{ \partial \tau \partial \mat{x}''} & \frac{\partial^2S}{\partial \tau^2}
\end{vmatrix} \, .
\end{equation}
All these expressions are evaluated at a set of values for $\mat{x}'$, $\mat{x}''$ and $\tau$
which describes the stationary point of the action
defined by $\frac{\partial S}{\partial\mat{x}'} = \frac{\partial S}{\partial\mat{x}''} = \frac{\partial S}{\partial \tau} = 0$.
If $\tau$ is chosen according to this prescription, it is not even necessary to deform the integration contour,
as the saddle point appears at $t=0$ with both $\mat{x}'$ and $\mat{x}''$ purely real.
The minus sign before $\Sigma$ in \eqn{equ:kinst} arises naturally from the Cauchy--Riemann relations \cite{ComplexVariables}
when re-expressing derivatives with respect to $t$ as derivatives with respect to $\tau$.
If there is more than one stationary point of the action,
the rate is given by a sum over each solution. \cite{GRQTST2}
For consistency, the reactant partition function, $Z_0$, should also be evaluated within a semiclassical approximation.\cite{InstReview}
If we instead take the steepest-descent integration over the end-points first and then separately over time,
this leads to the alternative, but equivalent expression
\footnote{This is effectively the same result as was obtained in \Ref{Cao1997nonadiabatic} from an analysis of the ``barrier partition function,''
although
in that paper, the extra approximation that $C_0C_1\approx CZ_0^2$ was also made,
which is exact for the spin-boson model but not in general}
\begin{equation}
k_{\mathrm{SCI}} Z_0 = \sqrt{2\pi\hbar} \frac{\Delta^2}{\hbar^2} \sqrt{\frac{C_0 C_1}{C}} \left( - \frac{\mathrm{d}^2 S}{\mathrm{d}\tau^2} \right)^{-\frac{1}{2}} \mathrm{e}^{-S/\hbar}
\label{equ:kinst2} \, ,
\end{equation}
where
\begin{equation}
C = \begin{vmatrix}
\frac{\partial^2S}{\partial \mat{x}' \partial \mat{x}'} & \frac{\partial^2S}{\partial \mat{x}' \partial \mat{x}''}\\
\frac{\partial^2S}{\partial \mat{x}'' \partial \mat{x}'} & \frac{\partial^2S}{\partial \mat{x}'' \partial \mat{x}''}
\end{vmatrix}.
\end{equation}
Thus the total action, $S(\mat{x}',\mat{x}'',\tau)$ [\eqn{equ:total_action}], is a sum of the actions of two imaginary-time classical trajectories, one for each electronic state.
One trajectory travels on the reactant state from $\mat{x}'$ to $\mat{x}''$ in imaginary time $\tau_0 = \beta\hbar-\tau$
and the other from $\mat{x}''$ to $\mat{x}'$ in imaginary time $\tau_1 = \tau$ on the product state.
This forms a closed path of total imaginary time $\tau_0 + \tau_1 \equiv \beta\hbar$, known as the instanton.
Classical trajectories in imaginary time are described by ordinary classical mechanics but with an upside-down potential. \cite{Miller1971density}
They describe quantum tunnelling by travelling through the classically forbidden region,
and typically ``bounce'' against the potential,
which we define as an encounter with a turning point such that the momentum is instantaneously zero. \cite{GoldenGreens}
As has been shown in \Ref{GoldenGreens} for instantons in the normal regime,
the fact that $\mat{x}'$ and $\mat{x}''$ are chosen as stationary points of $S$
is tantamount to saying that
the imaginary-time
momenta on each surface, $\mat{p}_n(u)=m\dot{\mat{x}}_n(u)$,
must have the same magnitude and point in the same direction at the end-points.
Hence, the two classical trajectories join smoothly into each other.
Furthermore the restriction $\pder{S}{\tau}=0$ ensures
energy conservation,
which implies that the instanton is a periodic orbit.
Typically the two end-points coincide at a single point, which we call the hopping point, $\mat{x}^\ddag$, and we can conclude that it must lie on the crossing seam of the two potentials, where $V_0(\mat{x}^\ddag) = V_1(\mat{x}^\ddag)$.
Further details about these statements will be given in \secref{analysis}.
All the steps in this derivation are asymptotic approximations, which become exact in the $\hbar\rightarrow0$ limit (with $\beta\hbar$ kept constant).
This is therefore known as a semiclassical approximation.
Semiclassical instanton theory gives the exact rate for a system of two linear potentials,
and for more general systems in the limit of high temperature or heavy masses
it approaches a harmonic approximation to classical transition-state theory. \cite{GoldenGreens}
In practice, the theory is applied using a ring-polymer discretization of the imaginary-time trajectories
following the approach described in \Ref{GoldenRPI}.
This method has previously been used to study tunnelling effects and
the influence of asymmetric reorganization energies in a system-bath model in the normal regime. \cite{AsymSysBath}
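To illustrate the idea of such a discretization, the sketch below evaluates a trapezoidal-rule approximation to the total action $S(\mat{x}',\mat{x}'',\tau)$ of \eqn{equ:total_action} from two bead paths joined into a closed ring. The bead layout and quadrature weights here are illustrative choices and do not reproduce the exact conventions of the cited implementation:

```python
import numpy as np

# Minimal sketch of a ring-polymer discretization of the two-state action.
# Each imaginary-time trajectory is represented by N+1 beads; the two
# segments share the end-points x' and x'', forming a closed ring.
def segment_action(path, tau_n, V, m=1.0):
    """Trapezoidal discretization of the Euclidean action for one state.
    path: array of N+1 bead positions from x_i to x_f."""
    N = len(path) - 1
    dt = tau_n / N
    kinetic = m * np.sum((path[1:] - path[:-1])**2) / (2 * dt)
    # trapezoid rule: half weight on the two end beads
    potential = dt * (np.sum(V(path)) - 0.5 * (V(path[0]) + V(path[-1])))
    return kinetic + potential

def total_action(path0, path1, tau, beta=1.0, hbar=1.0,
                 V0=lambda x: 0.0 * x, V1=lambda x: 0.0 * x):
    """S = S_0(x', x'', beta*hbar - tau) + S_1(x'', x', tau)."""
    return (segment_action(path0, beta * hbar - tau, V0)
            + segment_action(path1, tau, V1))
```

For zero potential and a linear bead path, the discretized segment action reproduces the free-particle result $m x_-^2/2\tau_n$ to machine precision, which serves as a basic consistency check before optimizing the beads on real surfaces.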
\subsection{Classical limit}
\label{sec:classical}
Here we consider the classical, high-temperature limit ($\beta\hbar\rightarrow 0$) of a general curve crossing problem.
The instanton for this system corresponds to two very short imaginary-time trajectories describing a periodic orbit which is collapsed onto an infinitesimally short line. \cite{GoldenGreens}
Note that unlike for instanton theory on a single Born--Oppenheimer potential, \cite{Miller1975semiclassical,Perspective}
there is no cross-over temperature for golden-rule instanton theory.
For a one-dimensional system in this classical limit,
the action of a single trajectory can be written in the simpler form
\begin{equation}
\label{Sclassical}
S_n(x_\text{i},x_\text{f},\tau_n) = \frac{m}{2\tau_n} x_-^2 + V_n(x_+) \tau_n \,,
\end{equation}
where $x_- = x_\text{f} - x_\text{i}$ and $x_+ = \tfrac{1}{2} (x_\text{i} + x_\text{f})$.
The stationary points of the total action, \eqn{equ:total_action},
can be found by searching first for the solution to
$\pder{S}{x_-}=0$, which gives $x_-=0$,
and then for the solution to $\pder{S}{\tau}=0$ evaluated at $x_-=0$, which requires that the hopping point, $x_+=x^\ddag$,
obeys $V_0(x^\ddag)=V_1(x^\ddag)$.
Finally $\pder{S}{x_+}=0$ requires that
\begin{align}
\label{equ:tau}
\tau=\beta\hbar \frac{\del V_0(x^\ddag)}{\del V_0(x^\ddag) - \del V_1(x^\ddag)},
\end{align}
where $\del V_n(x^{\ddagger})$ is the derivative of the potential $V_n(x)$ with respect to $x$ evaluated at $x^{\ddagger}$.
These solutions give the simple interpretation that the transition state for the reaction is located at the crossing point between the two diabatic potentials, $x^\ddag$.
Although the value of $\tau$ which makes $S$ stationary does not have a clear interpretation within the classical theory,
it plays an important role in defining the instanton.
We shall therefore consider the behaviour of $\tau$ in various regimes.
The ``inverted'' regime is commonly defined in the context of Marcus theory
as a situation in which the driving force is larger than the reorganization energy.
A more general definition is
that the different regimes are defined by the slope of the potentials at the crossing point;
in the inverted regime the gradients
have the same sign,
whereas in the normal regime the gradients have opposite signs.
An alternative terminology for these two cases
is ``Landau--Zener'' type or ``nonadiabatic-tunnelling'' type. \cite{NakamuraNonadiabatic}
Note that the common definition is equivalent to the more general definition as long as
the driving force is defined as $\varepsilon=V_0(x_\text{min}^{(0)})-V_1(x_\text{min}^{(1)})$
and the (product) reorganization energy as $\Lambda=V_1(x_\text{min}^{(0)})-V_1(x_\text{min}^{(1)})$,
where $x_\text{min}^{(n)}$ is the minimum of $V_n(x)$.
In multidimensional systems, there is a crossing seam, and one would say that the scalar product of the two gradient vectors on this seam is positive only in the inverted regime.
In fact at the minimum-energy crossing point,
which is the location of the hopping point in the classical limit,
these gradient vectors are antiparallel in the normal regime \cite{GoldenGreens} but parallel in the inverted regime.
From \eqn{equ:tau} it can be seen that in the normal regime,
where $\del V_0(x^\ddag)$ and $\del V_1(x^\ddag)$ have opposite signs,
the value of $\tau$ which makes $S$ stationary falls in the range
$0 < \tau < \beta\hbar$ and is therefore always positive.
In the activationless regime, where $\del V_0(x^\ddag) = 0$, the stationary value becomes $\tau = 0$.
In this paper, we shall
consider only one type of inverted regime,
$\del V_1(x^\ddag) / \del V_0(x^\ddag) > 1$,
which is the typical case encountered in Marcus theory.
Here $\tau$ takes a negative value.
Needless to say, all our results could be easily converted
to describe a system in the alternative inverted regime, $\del V_0(x^\ddag) / \del V_1(x^\ddag) > 1$.
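The sign behaviour of $\tau$ can be verified explicitly in a simple one-dimensional model of two shifted parabolas (an illustrative Marcus-like setup with made-up parameters, not one of the systems studied later in this paper):

```python
import numpy as np

# Two shifted parabolas: V0(x) = m w^2 x^2 / 2 and
# V1(x) = m w^2 (x - x0)^2 / 2 - eps, with reorganization energy
# Lam = m w^2 x0^2 / 2 and driving force eps.
def tau_fraction(eps, Lam, m=1.0, w=1.0):
    """tau / (beta*hbar) from Eq. (tau), evaluated at the crossing point."""
    x0 = np.sqrt(2 * Lam / (m * w**2))
    xc = (Lam - eps) / (m * w**2 * x0)   # solves V0(xc) == V1(xc)
    dV0 = m * w**2 * xc
    dV1 = m * w**2 * (xc - x0)
    return dV0 / (dV0 - dV1)

# normal regime (eps < Lam): 0 < tau < beta*hbar
# activationless (eps = Lam): tau = 0
# inverted regime (eps > Lam): tau < 0
```

For this model the result simplifies to $\tau/\beta\hbar=(\Lambda-\varepsilon)/2\Lambda$, which makes the three cases above immediate.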
\Eqn{equ:tau} generalizes to multidimensional systems by
projecting each gradient vector along the same arbitrary direction,
and $\tau$ is thus seen to have the same behaviour.
Therefore, in the context of the high-temperature limit of a general (anharmonic) system of two potential-energy surfaces which cross,
we have shown that %
the sign of $\tau$ is different in the two regimes.
This rule is also known to hold for situations involving quantum tunnelling,
\cite{Weiss,Lawrence2018Wolynes,nonoscillatory}%
$^,$
\footnote{The argument can easily be extended for the archetypal one-dimensional crossed linear system defined by
$V_n(x) = \kappa_n (x-x^\ddag)$, for which
$S_n = m x_-^2/2\tau_n + \kappa_n (x_+-x^\ddag) \tau_n - \kappa_n^2 \tau_n^3/ 24 m$.
Solving for the stationary point
$\pder{S}{x_+}=0$ gives $\tau=\beta\hbar \kappa_0 / (\kappa_0 - \kappa_1)$, which is in agreement with \eqn{equ:tau}.
The other two equations give $x_-=0$ and $x_+=x^\ddag$ as expected
}
which will be confirmed by the analysis later in this paper.
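The footnoted result for the crossed linear system can also be checked numerically: a finite-difference gradient of the total action vanishes at the claimed stationary point in both regimes. This is a sketch with arbitrary unit parameters, and the location of the stationary point is insensitive to the sign convention chosen for the middle term of the linear-potential action:

```python
import numpy as np

beta, hbar, m, xc = 2.0, 1.0, 1.0, 0.0

def S_lin(kappa, xm, xp, tau):
    # Euclidean action for the linear potential V(x) = kappa*(x - xc)
    return m * xm**2 / (2 * tau) + kappa * (xp - xc) * tau \
        - kappa**2 * tau**3 / (24 * m)

def S_total(xm, xp, tau, k0, k1):
    # S = S_0(x-, x+, beta*hbar - tau) + S_1(x-, x+, tau)
    return S_lin(k0, xm, xp, beta * hbar - tau) + S_lin(k1, xm, xp, tau)

def grad_S(xm, xp, tau, k0, k1, h=1e-6):
    """Central-difference gradient of S with respect to (x-, x+, tau)."""
    f = lambda a, b, c: S_total(a, b, c, k0, k1)
    return np.array([f(xm + h, xp, tau) - f(xm - h, xp, tau),
                     f(xm, xp + h, tau) - f(xm, xp - h, tau),
                     f(xm, xp, tau + h) - f(xm, xp, tau - h)]) / (2 * h)

# (k0, k1) = (1, -1): normal regime, tau* = beta*hbar/2 = 1 > 0
# (k0, k1) = (1,  2): inverted regime, tau* = -beta*hbar = -2 < 0
```

In both cases $\tau^*=\beta\hbar\kappa_0/(\kappa_0-\kappa_1)$ together with $x_-=0$, $x_+=x^\ddag$ makes the gradient vanish, confirming that the stationary point moves to negative $\tau$ in the inverted regime.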
When substituting
the action [\eqn{Sclassical}] evaluated at its stationary point
into \eqn{equ:kinst}, one obtains the classical rate
\begin{equation}
\label{clTST}
k_{\text{cl-TST}} Z_0^\text{cl} = \sqrt{\frac{2\pi m }{\beta\hbar^2}}
\frac{\Delta^2}{\hbar |\del V_0(x^\ddagger) - \del V_1(x^\ddagger)|}
\, \mathrm{e}^{-\beta V^\ddagger} \, ,
\end{equation}
where $V^\ddag = V_0(x^\ddag) = V_1(x^\ddag)$ and $Z_0^\text{cl}$ is the classical limit of the reactant partition function.
Note that this expression for the rate is equivalent to that
derived from classical statistical mechanics employing the Landau--Zener hopping probability in the golden-rule limit.\cite{Nitzan,nonoscillatory}
It can also be derived directly from classical golden-rule TST, \cite{nonoscillatory,ChandlerET}
which is proportional to $\int \mathrm{e}^{-\beta V_0}\delta(V_0-V_1)\,\mathrm{d} x$, by noting that
the integral can be performed easily for one-dimensional systems due to the constraint.
For a spin-boson model, this classical expression reduces to Marcus theory.
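This reduction can be verified numerically for a one-dimensional displaced-oscillator model. The parameters below are purely illustrative, and the Marcus expression is quoted in the golden-rule form $k=(\Delta^2/\hbar)\sqrt{\pi\beta/\Lambda}\,\mathrm{e}^{-\beta(\Lambda-\varepsilon)^2/4\Lambda}$:

```python
import numpy as np

# Illustrative displaced-oscillator (Marcus-like) model in the normal regime
hbar, m, w, beta, Delta = 1.0, 1.0, 1.0, 3.0, 1e-2
Lam, eps = 1.0, 0.4                  # reorganization energy, driving force

x0 = np.sqrt(2 * Lam / (m * w**2))   # displacement of the product surface
V0 = lambda x: 0.5 * m * w**2 * x**2
V1 = lambda x: 0.5 * m * w**2 * (x - x0)**2 - eps
xc = (Lam - eps) / (m * w**2 * x0)   # crossing point: V0(xc) == V1(xc)
dV0, dV1 = m * w**2 * xc, m * w**2 * (xc - x0)

Z0_cl = 1.0 / (beta * hbar * w)      # classical reactant partition function
# classical golden-rule TST rate, Eq. (clTST)
k_cl = (np.sqrt(2 * np.pi * m / (beta * hbar**2)) * Delta**2
        / (hbar * abs(dV0 - dV1)) * np.exp(-beta * V0(xc)) / Z0_cl)

# Marcus theory for the same model
k_marcus = (Delta**2 / hbar * np.sqrt(np.pi * beta / Lam)
            * np.exp(-beta * (Lam - eps)**2 / (4 * Lam)))
```

The two rates agree to machine precision, since for shifted parabolas the barrier $V^\ddag=(\Lambda-\varepsilon)^2/4\Lambda$ and the gradient difference $|\del V_0-\del V_1|=\sqrt{2\Lambda m\omega^2}$ reproduce the Marcus prefactor exactly.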
\Eqn{clTST}
is in fact valid, not just in the normal regime, but also for the inverted regime.
That is, it holds
whether $\del V_0(x^\ddag)$ and $\del V_1(x^\ddag)$ have the same sign or opposite signs, as long as they are not equal to each other.
It is noteworthy that we can obtain valid classical formulas for the inverted regime from this approach
even though in the derivation we assumed that $0<\tau<\beta\hbar$, which is only appropriate in the normal regime.
In \secref{inverted} we shall also attempt to generalize the instanton method to the inverted regime in a similar way.
\section{Instanton theory in the inverted regime}
\label{sec:inverted}
In \secref{classical} we defined the inverted regime using the gradients on the crossing seam
and found that the value of $\tau$ which makes $S$ stationary becomes negative.
With the definitions given above this
implies $\tau_1<0$ and $\tau_0 > \beta\hbar$, in such a way that $\tau_0+\tau_1 \equiv \beta\hbar$ still holds.
This has important consequences for the implementation and interpretation of the instanton approach,
which we discuss in this section.
\subsection{Analytic continuation}
The main complication for the derivation of instanton theory in the inverted regime
is that the expression for the
imaginary-time propagator $K_1$
diverges for negative $\tau_1$.
One cannot therefore write the rate in terms of the coordinate-space integral as in \eqn{equ:pi_rate}
using the appropriate value for $\tau$, which would be necessary in order to carry out the steepest-descent integration.
However, we will show that while this path-integral expression does diverge,
Fermi's golden rule remains well defined and can be approximated to good accuracy by an
analytic continuation of the instanton rate formula.
Here we study more carefully the cause of the divergence in the inverted regime.
To do this,
we shall investigate the correlation function written as a
sum over
the contributions of the eigenstates,
$\psi_0^{\lambda}$, of the reactant Hamiltonian, $\op{H}_0$, labelled by the quantum number $\lambda$
and the eigenstates,
$\psi_1^{\nu}$, of the product Hamiltonian, $\op{H}_1$, labelled by $\nu$.
\footnote{In order for the rate to exist, we assume that at least one of the reactant or product has quasi-continuous states. If the states are truly continuous, one should replace the sums by integrals.
}
The flux correlation function, \eqn{equ:fcf}, expanded in the energy basis is thus given by
\begin{subequations}
\label{equ:trace2}
\begin{align}
\tilde{c}(\tau) &= \sum_{\nu} g_{\nu}(\tau) \, ,\\
g_\nu(\tau) &= \sum_{\lambda} |\theta_{\lambda\nu}|^2 \mathrm{e}^{-(\beta\hbar-\tau) E^{\lambda}_0/\hbar - \tau E^{\nu}_1 /\hbar}\, ,
\end{align}
\end{subequations}
where $\theta_{\lambda\nu} = \int \psi_0^{\lambda}(\mat{x})^* \psi_1^{\nu}(\mat{x}) \, \mathrm{d} \mat{x}$ such that
the coupling between states used in \eqn{equ:k_qs} is given by $\Delta_{\lambda\nu}=\Delta \theta_{\lambda\nu}$.
The overlap integral is clearly bounded by $0 \le |\theta_{\lambda\nu}| \le 1$, which follows from the Cauchy--Schwarz inequality assuming the wave functions are square-integrable and normalized.
In order to discuss the convergence of the correlation function,
let us first assume that we have a finite system for which the partition functions exist at any temperature,
i.e.\ the sums
$\sum_\nu \mathrm{e}^{-\tau_n E_n^\nu/\hbar}$ converge for any $\tau_n>0$.
In both the normal and inverted regime we are interested in values of $\tau<\beta\hbar$,
and so
it is clear from the comparison test
that the sum over $\lambda$ converges, making $g_{\nu}(\tau)$ a well-defined quantity in either regime.
By similar arguments,
the sum over $\nu$ is also clearly convergent
when $\tau$ is in the range $0<\tau<\beta\hbar$,
which would be appropriate only for the normal regime.
Let us also consider the
flux correlation function expanded in the position basis,
\begin{equation}
c(\tau) = \iint K_0(\mat{x}',\mat{x}'',\beta\hbar - \tau ) K_1(\mat{x}'',\mat{x}',\tau) \, \mathrm{d} \mat{x}' \, \mathrm{d} \mat{x}'' \, ,
\end{equation}
where $K_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) = \sum_{\nu} \psi_n^{\nu}(\mat{x}_\text{i})^* \psi_n^\nu(\mat{x}_\text{f}) \mathrm{e}^{-\tau_n E_n^\nu/\hbar}$.
We expect that for a physical system, $|\psi_n^\nu(\mat{x})|$
is bounded by a positive number, $\Psi_\text{max}$, for all $\nu$ and $\mat{x}$.
Therefore, the absolute value of any term in the sum
is necessarily less than
$\Psi_\text{max}^2 \mathrm{e}^{-\tau_n E_n^\nu/\hbar}$.
The comparison test
(also known as the ``Weierstrass M test'') \cite{ComplexVariables}
can again be invoked to prove that the propagators converge for $\tau_n>0$
for any values of $\mat{x}'$ and $\mat{x}''$,
which is known as uniform convergence. \cite{ComplexVariables}
Note however that $K_1(\mat{x}_\text{i},\mat{x}_\text{f},\tau_1)$ diverges in the inverted regime
if we choose $\tau_1<0$, as is required to perform the steepest-descent integration.
Inserting the definition for $\theta_{\lambda\nu}$ into $\tilde{c}(\tau)$ and the wave-function expansion for $K_n$ into $c(\tau)$, it is clear that
the two correlation functions are defined in a similar way, the only difference being
that the sums and integrals are taken in a different order.
Only for a uniformly convergent series can we interchange sums and integrals without affecting the result,
and thus it is possible to show that $\tilde{c}(\tau) = c(\tau)$ only for $0<\tau<\beta\hbar$.
Because $\tilde{c}(\ensuremath{\mathrm{i}} z)$ and $c(\ensuremath{\mathrm{i}} z)$ are analytic functions of $z=t-\ensuremath{\mathrm{i}}\tau$ in the regions where they converge,
this analysis can easily be extended to study the correlation functions at any value of $t$.
This simply adds phases to each term which
does not change the convergence behaviour and the fact that they are identical
for $0<\tau<\beta\hbar$.
\begin{figure}
\includegraphics[width=8.5cm]{TraceC-eps-converted-to.pdf}
\caption{Terms contributing to the trace defined by \eqn{equ:trace2}
each of which corresponds to a product eigenstate with quantum number $\nu$.
The terms are computed for the spin-boson model of Sec.~\ref{subsec:SB} with $D=1$ and $\varepsilon/\Lambda=2$
for $\tau\approx-0.26\beta\hbar$, which is the stationary point of the action for this system.
}
\label{fig:traces}
\end{figure}
If we choose $\tau$ to be negative, however, as would be appropriate for the case of the inverted regime,
the sum in $K_1$ is no longer uniformly convergent
and thus $c(\tau)$ diverges.
Interestingly, we find that
the correlation function $\tilde{c}(\tau)$
remains
(at least in some cases)
well defined.
To demonstrate this, we take as an example the one-dimensional version of the spin-boson model defined in Sec.~\ref{subsec:SB},
deep in the inverted regime,
with a driving force equal to twice the reorganization energy.
Using the value of $\tau$ which makes $S$ stationary,
we evaluate the terms in \eqn{equ:trace2}
in the eigenfunction bases of the two harmonic oscillators.
In Fig.~\ref{fig:traces},
we plot the contributions from each term in the series
and demonstrate
that the $g_{\nu}(\tau)$ terms exhibit a distinct peak at some value of $\nu$ and fall off rapidly on either side.
This occurs because $\theta_{\lambda\nu}$ decreases exponentially for states of widely different energies.
Therefore the correlation function $\tilde{c}(\tau)$ converges in this example and is analytic
even for systems in
the inverted regime where $\tau$ is negative.
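The peaked structure of the terms in Fig.~\ref{fig:traces} can be reproduced with a simplified sketch: keeping only the reactant ground state ($\lambda=0$) of a displaced harmonic oscillator, the overlaps are Poissonian, $|\theta_{0\nu}|^2=\mathrm{e}^{-\mathcal{S}}\mathcal{S}^\nu/\nu!$ with Huang--Rhys factor $\mathcal{S}=\Lambda/\hbar\omega$, and the factorial decay eventually beats the exponential growth of $\mathrm{e}^{-\tau E_1^\nu/\hbar}$ for $\tau<0$. The parameters here are illustrative and are not those used to generate the figure:

```python
import math

# Displaced harmonic oscillator, lambda = 0 contribution only (illustrative)
hbar = w = m = 1.0
beta, Lam, eps = 3.0, 1.0, 2.0        # inverted regime: eps / Lam = 2
tau = -0.26 * beta * hbar             # a negative (inverted-regime) tau
S_hr = Lam / (hbar * w)               # Huang-Rhys factor

E0_0 = 0.5 * hbar * w                 # reactant ground-state energy
terms = []
for nu in range(60):
    fc = math.exp(-S_hr) * S_hr**nu / math.factorial(nu)  # |theta_{0,nu}|^2
    E1_nu = (nu + 0.5) * hbar * w - eps
    terms.append(fc * math.exp(-(beta * hbar - tau) * E0_0 / hbar
                               - tau * E1_nu / hbar))
```

The list of terms rises to a maximum at small $\nu$ and then decays factorially, so the sum over $\nu$ converges despite the growing Boltzmann-like factor.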
In the normal regime,
we know how to make good semiclassical approximations to $c(\tau)$ using instanton theory but have no simple approach based on $\tilde{c}(\tau)$.
Therefore, in order to formulate instanton theory in the inverted regime,
we employ the mathematical process of analytic continuation. \cite{ComplexVariables}
This allows us to evaluate an approximation to $c(\ensuremath{\mathrm{i}} z)$ for positive $\tau$,
which, because $c(\ensuremath{\mathrm{i}} z)=\tilde{c}(\ensuremath{\mathrm{i}} z)$ there,
must also be a good approximation to $\tilde{c}(\ensuremath{\mathrm{i}} z)$ in this regime.
Because $\tilde{c}(\ensuremath{\mathrm{i}} z)$ is analytic across both regimes, this approximation will be valid also in the inverted regime.
Accordingly, we propose to analytically continue the instanton method into the inverted regime
and will
employ the semiclassical instanton rate expression of \eqn{equ:kinst},
not just in the normal regime, where it was originally derived, but also for the inverted regime.
Note the important distinction between this proposed approach and previous work.
In effect, the method of \Ref{Lawrence2018Wolynes} analytically continued the function $c(\tau)$ into the region with $\tau<0$ by fitting it numerically to a suitable functional form \cite{Lawrence2019rates}
based on calculations in the normal regime and extrapolating to describe systems in the inverted regime.
We will go one step further to find a semiclassical instanton approach which is directly applicable in the inverted regime and requires no extrapolation.
In the following we analyse this new approach and show that it gives a valid approximation to Fermi's golden-rule rate.
\subsection{Analysis of the inverted-regime instanton orbit}
\label{sec:analysis}
Through analytic continuation, we have a formula [\eqn{equ:kinst}] for the golden-rule rate in the inverted regime
based on an action $S(\mat{x}',\mat{x}'',\tau)$.
This should be evaluated at its stationary point, which defines the instanton and in this regime has a negative value for $\tau$.
In this section, we shall study the behaviour of the instanton in the inverted regime
and establish that it remains a periodic orbit which travels through classically forbidden regions.
We start with the imaginary-time Euclidean action of a single trajectory, $S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n)$, defined by \eqn{equ:action},
and will be careful to ensure that all our formulas are valid for both positive and negative imaginary times, $\tau_n$.
The trajectory of interest, $\mat{x}_n(u)$,
starts at
$\mat{x}_n(0)=\mat{x}_\text{i}$ and travels to $\mat{x}_n(\tau_n)=\mat{x}_\text{f}$ in imaginary time $\tau_n$.
This trajectory has a conserved energy, $E_n$, because the Hamiltonian is time-independent.
We can therefore add a zero under the integral
\begin{subequations}
\begin{multline}
S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) = \int_{0}^{\tau_n} \Big[ {\ensuremath{\tfrac{1}{2}}} m \|\dot{\mat{x}}_n(u)\|^2 + V_n(\mat{x}_n(u))\\
- E(u) + E_n \Big] \mathrm{d} u \, ,
\label{equ:action_E}
\end{multline}
where $E(u) = -\frac{1}{2} m \| \dot{\mat{x}}_n(u) \|^2 + V_n(\mat{x}_n(u))$ is the instantaneous energy,
which is constant (independent of $u$) and equal to $E_n$.
Inserting this definition in \eqn{equ:action_E} leads to
\begin{align}
\label{equ:legendre_1}
S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) &= \int_0^{\tau_n} \left[ m \|\dot{\mat{x}}_n(u)\|^2 + E_n \right] \mathrm{d} u \\
\label{equ:legendre}
&= \int_{\mat{x}_\text{i}}^{\mat{x}_\text{f}} \mat{p}_n \cdot \mathrm{d} \mat{x}_n + E_n\tau_n \, ,
\end{align}
\end{subequations}
where
$\mathrm{d}\mat{x}_n$ is an infinitesimal displacement vector pointing along the path in the direction from $\mat{x}_\text{i}$ to $\mat{x}_\text{f}$.
We call this the direction of particle flow.
Our convention will be to define the imaginary-time momentum, $\mat{p}_n=m\der{\mat{x}}{u}$,
such that it points along the
direction of change of position in a positive imaginary-time interval.
Therefore for a trajectory travelling from $\mat{x}_\text{i}$ to $\mat{x}_\text{f}$ in positive imaginary time $\tau_n>0$,
the momentum, $\mat{p}_n(u)$, will point along the path in the direction from $\mat{x}_\text{i}$ to $\mat{x}_\text{f}$.
However, for a trajectory travelling $\mat{x}_\text{i}$ to $\mat{x}_\text{f}$ in negative imaginary time $\tau_n<0$,
the momentum, $\mat{p}_n(u)$, will point in the opposite direction, i.e.\ along the path in the direction from $\mat{x}_\text{f}$ to $\mat{x}_\text{i}$.
From these equations we can determine that
$S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n) \equiv - S_n(\mat{x}_\text{i},\mat{x}_\text{f},-\tau_n)$.
This can be seen from \eqn{equ:legendre_1} or \eqn{equ:action}
as the integral changes sign when the integration range goes from 0 to a negative number.
Alternatively one can see that both terms in \eqn{equ:legendre} change sign when $\tau_n<0$
because
for negative imaginary times
the momentum vector, $\mat{p}_n$, points in the opposite direction from the particle flow,
i.e.\ it is antiparallel to $\mathrm{d} \mat{x}_n$.
In particular, if the zero of potential energy is chosen below the instanton energy
(for instance at the reactant minimum),
making $E_n\ge0$,
then $S_n$ will be positive when $\tau_n>0$ and negative when $\tau_n<0$.
Therefore, whereas $S_0$ remains positive just as in the normal regime, in the inverted regime the value of $S_1$ becomes negative.
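This antisymmetry can be made explicit with the exact Euclidean action of a harmonic oscillator (a standard closed-form result; the unit parameters below are arbitrary):

```python
import numpy as np

def S_ho(xi, xf, tau, m=1.0, w=1.0):
    """Exact Euclidean action of the classical imaginary-time trajectory on
    V(x) = m w^2 x^2 / 2 from xi to xf in imaginary time tau (tau may be < 0)."""
    return (m * w / (2 * np.sinh(w * tau))
            * ((xi**2 + xf**2) * np.cosh(w * tau) - 2 * xi * xf))
```

Since $\sinh$ is odd and $\cosh$ is even in $\tau_n$, the relation $S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n)=-S_n(\mat{x}_\text{i},\mat{x}_\text{f},-\tau_n)$ is immediate for this model, and the short-time limit recovers the free-particle form $m x_-^2/2\tau_n$.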
As the instanton corresponds to a stationary point of the total action,
we need to know the derivatives of the individual actions of each trajectory
with respect to the initial and final end-points as well as with respect to imaginary time.
These can be found by taking derivatives of \eqn{equ:legendre} to give
\begin{subequations}
\begin{align}
\pder{S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n)}{\mat{x}_\text{i}} &= -\mat{p}_n(0)\, , \\
\pder{S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n)}{\mat{x}_\text{f}} &= \mat{p}_n(\tau_n)\, , \\
\pder{S_n(\mat{x}_\text{i},\mat{x}_\text{f},\tau_n)}{\tau_n} &= E_n \, .
\end{align}
\end{subequations}
Note that the derivative with respect to the
initial point has a different sign from that with respect to the
final point. \cite{GutzwillerBook}
By utilizing these relations in the definition of the total action of the closed orbit, \eqn{equ:total_action}, we arrive at the conditions
\begin{subequations}
\label{equ:conditions}
\begin{align}
\pder{S}{\mat{x}'} &= -\mat{p}'_0 + \mat{p}'_1 \, ,\\
\pder{S}{\mat{x}''} &= \mat{p}''_0 - \mat{p}''_1 \, ,\\
\pder{S}{\tau} &= -E_0 + E_1 \, ,
\end{align}
\end{subequations}
where
$\mat{p}_n'$ is the momentum of the trajectory $\mat{x}_n$ at the end marked $\mat{x}'$
(and likewise for double primes),
i.e.\ $\mat{p}_0' = m\dot{\mat{x}}_0(0)$, $\mat{p}_0'' = m\dot{\mat{x}}_0(\tau_0)$, $\mat{p}_1' = m\dot{\mat{x}}_1(\tau_1)$, $\mat{p}_1'' = m\dot{\mat{x}}_1(0)$.
All of the derivatives in \eqs{equ:conditions} must simultaneously vanish at the instanton configuration,
which effectively imposes energy and momentum conservation at the intersection of the trajectories,
$\mat{p}_0'=\mat{p}_1'$, $\mat{p}_0''=\mat{p}_1''$ and $E_0=E_1$.
The simplest solution to these equations (and typically the only one) is found when
both $\mat{x}'$ and $\mat{x}''$ are located at the same coordinate, which we call the hopping point, $\mat{x}^\ddag$.
The first two conditions
require that at the hopping point
the momenta $\mat{p}_0$ and $\mat{p}_1$ must be vectors with the same direction and the same magnitude.
Because the third condition requires the energies of the two trajectories to match, $E_0=E_1$, and energy is conserved along each path, the potentials at the hopping point must be identical as well.
We can thus conclude that
the hopping point is located somewhere on the crossing seam between the two potential-energy surfaces
such that $V_0(\mat{x}^\ddag) = V_1(\mat{x}^\ddag)$.
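As a concrete sketch, the hopping point for two shifted harmonic diabats (the spin-boson form used in \secref{results}) can be located by simple bisection on $V_0-V_1$; the parameter values below are illustrative:

```python
# Sketch: locate the hopping point x_dag where V0(x) = V1(x) for two
# shifted harmonic diabats. Parameters are illustrative (reduced units);
# here eps > Lambda = 2 m w^2 zeta^2, i.e. the inverted regime.

m, omega, zeta, eps = 1.0, 1.0, 1.0, 3.0

def V0(x):
    return 0.5 * m * omega**2 * (x + zeta)**2

def V1(x):
    return 0.5 * m * omega**2 * (x - zeta)**2 - eps

def f(x):
    return V0(x) - V1(x)       # linear in x for this model

a, b = -10.0, 10.0             # bracket containing the crossing
for _ in range(100):           # plain bisection
    c = 0.5 * (a + b)
    if f(a) * f(c) <= 0:
        b = c
    else:
        a = c
x_dag = 0.5 * (a + b)
print(x_dag)  # analytic value: -eps / (2 m omega^2 zeta) = -1.5
```

In more than one dimension the seam is a $(D-1)$-dimensional surface and the hopping point must additionally be optimized along it.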
These findings are
equivalent to those found in previous work limited to the normal regime \cite{GoldenGreens}
but we have now shown that they also hold in the inverted regime.
There is nonetheless a fundamental difference in the inverted regime when we study the paths followed by the trajectories, because one path travels in negative imaginary time.
\begin{figure}
\includegraphics[width=8.5cm]{Sketch_combine-eps-converted-to.pdf}
\caption{Visualization of the two imaginary-time trajectories forming instantons in (a) the normal and (b) the inverted regime at energy $E=E_0=E_1$.
The reactant trajectory, $x_0$, is given in blue and the product trajectory, $x_1$, in red.
Arrows indicate the direction
of particle flow from the initial point to the final point of each trajectory.
The steepest-descent integration of positions is taken about the crossing point $x'=x''=x^{\ddagger}$ at which $V_0(x^{\ddagger}) = V_1(x^{\ddagger}) = V^{\ddagger}$.
In the normal regime,
the trajectories bounce on either side of the crossing point,
i.e.\ $x_0^\text{b} < x^\ddag < x_1^\text{b}$,
whereas in the inverted regime,
both trajectories bounce on the same side of the crossing seam, i.e.\ $x^\ddag < x_1^\text{b} < x_0^\text{b}$.
}
\label{fig:instvis}
\end{figure}
The imaginary-time classical trajectories, $\mat{x}_n(u)$,
which start and end at the same point $\mat{x}^\ddag$ but travel in a non-zero amount of time $\tau_n$, whether positive or negative,
will typically bounce against the potential.
This happens halfway along the trajectory at the turning point, $\mat{x}_n^\text{b} = \mat{x}_n(\tau_n/2)$, at which the kinetic energy vanishes and the total energy equals the potential, $E_n=V_n(\mat{x}_n^\text{b})$.
These considerations, along with the conditions in \eqn{equ:conditions}, give us the picture shown in Fig.~\ref{fig:instvis}.
In this way, we have discovered the form of the instanton appropriate for describing Fermi's golden rule in the inverted regime.
We did this purely through a consideration of the stationary point of $S$, defined by \eqn{equ:total_action}.
This instanton has a number of important differences when compared with that in the normal regime,
as can be seen from the plots in \fig{feynman} obtained by
joining the two trajectories together to make the instanton orbit.
\begin{figure}
\includegraphics[width=8.5cm]{feynman_combine-eps-converted-to.pdf}
\caption{The two trajectories $x_0$ (blue) and $x_1$ (red) which form the instanton
are plotted as a function of imaginary time, $u$, in (a) the normal and (b) the inverted regime.
In both cases the periodicity of the full instanton is $\beta\hbar$ and three cycles are shown.
In the normal regime the trajectories travel in positive imaginary time and bounce on opposite sides of $x^{\ddagger}$.
In contrast in the inverted regime both trajectories bounce on the same side and
in order to have a continuous path, $x_1$ must travel backwards in imaginary time.
The arrows indicate the direction of particle flow followed by trajectories from their initial point to their final point.
}
\label{fig:feynman}
\end{figure}
In the normal regime,
the dynamics are periodic with a periodicity of $\beta\hbar$.
The motion can be described as
a particle which travels on the reactant state for an imaginary time $\tau_0$,
then suddenly turns into a particle on the product state with the same energy and momentum where it travels for an imaginary time $\tau_1$
before turning back into a reactant-state particle.
In the inverted regime, the instanton formed by joining the two trajectories is not single-valued at certain times.
However, we will still refer to it as an orbit because it remains periodic in imaginary time with periodicity $\beta\hbar$, has continuous position and momentum vectors, and conserves energy along its whole path.
In particular, the energy and momentum are conserved at the two hopping points
even though the path itself takes a sharp change of direction
when the particle starts to travel in negative imaginary time.
These pictures are reminiscent of those used to explain the scattering of particles and antiparticles according to the Feynman--St\"uckelberg interpretation. \cite{Feynman1986antiparticles}
The dynamics of antiparticles are equivalent to those of ordinary particles except that time is reversed,
so that they are found at their final point at an earlier time than their initial point.
\footnote{Imagine watching a movie backwards of a clock falling out of a window.
It still has momentum pointing down, but it travels along a path upwards from the ground to the window and registers a negative elapsed time for this journey.}
Reading Fig.~\ref{fig:feynman}(b) from left to right one sees a single particle travelling on the reactant electronic state.
At imaginary time $u=0$, a new particle/antiparticle pair is created at the hopping point, while the old particle coexists at a different location.
The new particle also travels on the reactant state but the antiparticle on the product state.
At $u=|\tau|$, the antiparticle annihilates with the original reactant particle at the hopping point, $\mat{x}^\ddag$,
leaving only the new reactant particle,
which continues in the role of the original particle when the cycle repeats.
The stationary point which corresponds to the instanton can be classified by an index,
defined as the number of negative eigenvalues of the second-derivative matrix of the action.
Knowing this index will be helpful not only for designing an algorithm to locate the instanton
but also for computing the rate in the inverted regime,
which requires determinants of second derivatives of the action.
In the normal regime, $C_0$ and $C_1$ are always positive because both trajectories are minimum-action paths and $\Sigma$ is negative because the instanton is a single-index saddle point in the space $(\mat{x}',\mat{x}'',\tau)$.
On the other hand, in the inverted regime, the trajectory $\mat{x}_1$ is a maximum-action path and thus all eigenvalues of the matrix $-\pders{S_1}{\mat{x}''}{\mat{x}'}$ are negative.
$C_1$, which is the determinant of this matrix, may thus be positive or negative depending on whether there are an even or an odd number of nuclear degrees of freedom, $D$.
To find the signs of the second derivatives of the total action,
we turn to the classical limit, which was studied in \Ref{GoldenGreens}.
From Eq.~(67) in that paper, one can see that $\Sigma$ has $D+1$ negative eigenvalues and $D$ positive eigenvalues in the inverted regime.
As the instanton moves smoothly from the high-temperature classical limit
to a low-temperature tunnelling mechanism,
there is no reason why the number of negative eigenvalues of $\Sigma$ should change,
so this result holds also in the general case.
Therefore $\Sigma$ will always have the opposite sign from $C_1$,
ensuring that the square root in the prefactor of \eqn{equ:kinst} remains real.
Hence the same instanton rate expression
can be uniformly applied across both the normal and inverted regime.
Finally, we conclude that the instanton is a $(D+1)$-index saddle point of $S(\mat{x}',\mat{x}'',\tau)$,
although due to the fact that $\mat{x}_1$ is a maximum-action path,
it will have an even higher index in the space of paths as explained in \secref{RP}.
\subsection{Hamilton--Jacobi formulation}
Up to this point, we have employed the Lagrangian formulation of classical mechanics to define the imaginary-time trajectories.
An alternative approach is provided by the Hamilton--Jacobi formulation,
which uses an action as a function of energy rather than time. \cite{GutzwillerBook}
This leads to further insight into the behaviour of the instanton in the inverted regime.
To derive the energy-dependent action, we start from \eqn{equ:legendre} and write
$S_1$ in a slightly different way to give
\begin{align}
\label{equ:jacobi}
S_1(\mat{x}_\text{i},\mat{x}_\text{f},\tau_1) &= -\int_{\mat{x}_\text{i}}^{\mat{x}_\text{f}} \bar{\mat{p}}_1 \cdot \mathrm{d} \mat{x} + E_1\tau_1 \, .
\end{align}
By introducing the antiparticle momentum $\bar{\mat{p}}_1 = -\mat{p}_1$,
we align the direction of the momentum with the particle flow and thereby make the integral strictly positive.
The magnitude of the imaginary-time momentum is $p_n(\mat{x},E_n)= \sqrt{2m[V_n(\mat{x}) - E_n]}$, which is always real and positive in the classically forbidden region where the instanton exists.
These definitions enable us to make the transition from the Lagrange to the Hamilton--Jacobi formalism by defining the abbreviated actions as
\begin{align}
W_n(\mat{x}_\text{i},\mat{x}_\text{f},E_n) = \int p_n(\mat{x}(s),E_n) \, \mathrm{d} s \, ,
\label{equ:abbrev_a}
\end{align}
where we have introduced the line element $\mathrm{d} s$ defining a metric in the configuration space
such that $(\mathrm{d} s)^2 = \mathrm{d}\mat{x}\cdot\mathrm{d}\mat{x}$.
This gives a line integral along the path, defined in such a way that $\int\mathrm{d} s$ is the path length.
Note that $W_n$ is positive and independent of the sign of $\tau_n$ or the direction of the trajectory, which has already been accounted for by the sign in \eqn{equ:jacobi}.
This definition is therefore symmetric to an exchange of end-points,
i.e.\ $W_n(\mat{x}_\text{i},\mat{x}_\text{f},E_n) = W_n(\mat{x}_\text{f},\mat{x}_\text{i},E_n)$.
The relations $S_n = \pm W_n + E_n \tau_n$, where the sign is chosen to match the sign of $\tau_n$,
can thus be viewed as Legendre transforms obeying the conditions $\pder{S_n}{\tau_n} = E_n$ and $\pder{W_n}{E_n} = -|\tau_n|$.
It also follows that $\pder{S_n}{\mat{x}_\text{i}} = \pm \pder{W_n}{\mat{x}_\text{i}}$
and $\pder{S_n}{\mat{x}_\text{f}} = \pm \pder{W_n}{\mat{x}_\text{f}}$.
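This Legendre structure can be checked numerically. As an illustrative sketch (a model of ours, not one used elsewhere in this work), consider a single bounce against a linear potential, $V(x)=\kappa x$, for which the bounce time is known in closed form, and verify $\pder{W_n}{E_n} = -|\tau_n|$ by finite differences:

```python
import math

# Sketch: check dW/dE = -|tau| for a single bounce against V(x) = k*x.
# The trajectory runs from x_dag into the forbidden region, bounces at
# x_b = E/k where p = 0, and returns, so W(E) = 2 * int_{x_b}^{x_dag} p dx.
# Parameters are illustrative (reduced units, hbar = 1).

m, k, x_dag = 1.0, 2.0, 3.0

def W(E):
    x_b = E / k
    n = 200000
    dx = (x_dag - x_b) / n
    # midpoint rule; p(x) = sqrt(2 m (V - E)) vanishes smoothly at x_b
    s = sum(math.sqrt(2 * m * (k * (x_b + (i + 0.5) * dx) - E))
            for i in range(n))
    return 2 * s * dx

E = 1.0
tau = 2 * math.sqrt(2 * m * (x_dag - E / k) / k)   # closed-form bounce time
h = 1e-4
dW_dE = (W(E + h) - W(E - h)) / (2 * h)
print(abs(dW_dE + tau) < 1e-3)                     # dW/dE = -|tau|
```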
It is well known that classical trajectories can be defined either as paths which make $S_n$ stationary (known as Hamilton's principle)
or paths which make $W_n$ stationary (known as Maupertuis' principle).
Typically classical trajectories are minimum-action paths, \cite{LandauMechanics}
and in the normal regime, both $S_0$ and $S_1$ are minima with respect to variation of the path.
However, in the inverted regime, the product trajectory will be a maximum of $S_1$.
Trajectories which bounce once have a conjugate point \cite{GutzwillerBook}
and so the associated $W_n$ are single-index saddle points with respect to variation of the path. \cite{GoldenGreens}
There is nothing different in the definition of $W_1$ in the inverted regime, so the product trajectory is also a single-index saddle point.
If we define $W\equiv W(\mat{x}',\mat{x}'',E)$ appropriately
in the normal and inverted regimes,
the total action, \eqn{equ:total_action}, of the instanton can also be written
\begin{equation}
\label{Legendre}
S(\mat{x}',\mat{x}'',\tau) = W(\mat{x}',\mat{x}'',E)
+ \beta\hbar E,
\end{equation}
where either $E=E_0=E_1$ is a function of $(\mat{x}',\mat{x}'',\tau)$ via $E_n=\pder{S_n}{\tau_n}$
or $\beta$ is a function of $(\mat{x}',\mat{x}'',E)$ via $\beta\hbar = - \pder{W}{E}$.
In order to clarify the definitions in each regime, we give an overview of the most important equations:
\begin{widetext}
\begin{align*}
&\textbf{Normal regime} \hspace{0.3\textwidth} & &\textbf{Inverted regime}\\
&0<\tau_0<\beta\hbar; \, 0<\tau_1<\beta\hbar & &\tau_0 > \beta\hbar; \, \tau_1 < 0\\
&S = S_0(\mat{x}',\mat{x}'',\tau_0) + S_1(\mat{x}'',\mat{x}',\tau_1) &
&S = S_0(\mat{x}',\mat{x}'',\tau_0) + S_1(\mat{x}'',\mat{x}',\tau_1)\\
&S_0(\mat{x}_\text{i},\mat{x}_\text{f},\tau_0) = W_0(\mat{x}_\text{i},\mat{x}_\text{f},E_0) + E_0 \tau_0 &
&S_0(\mat{x}_\text{i},\mat{x}_\text{f},\tau_0) = W_0(\mat{x}_\text{i},\mat{x}_\text{f},E_0) + E_0 \tau_0\\
&S_1(\mat{x}_\text{i},\mat{x}_\text{f},\tau_1) = W_1(\mat{x}_\text{i},\mat{x}_\text{f},E_1) + E_1 \tau_1 &
&S_1(\mat{x}_\text{i},\mat{x}_\text{f},\tau_1) = -W_1(\mat{x}_\text{i},\mat{x}_\text{f},E_1) + E_1 \tau_1\\
&W = W_0(\mat{x}',\mat{x}'',E) + W_1(\mat{x}'',\mat{x}',E) &
&W = W_0(\mat{x}',\mat{x}'',E) - W_1(\mat{x}'',\mat{x}',E)
\end{align*}
\end{widetext}
where the definition of $W$ follows in each case from the required relationship in \eqn{Legendre}.
Differentiating \eqn{Legendre} using the chain rule,
we find $\pder{S}{\mat{x}'}=\pder{W}{\mat{x}'}$ and $\pder{S}{\mat{x}''}=\pder{W}{\mat{x}''}$, which shows that
the instanton, which is a stationary point of $S$,
could also be defined as a stationary point of $W$
with respect to $\mat{x}'$ and $\mat{x}''$.
Either the corresponding temperature can be found for a given $E$ using
$\beta\hbar = - \pder{W}{E}$
or $E$ can be varied until this equation is solved for a given $\beta$. \cite{GoldenRPI}
Let us now check that these definitions are consistent with the one-dimensional schematic shown in Fig.~\ref{fig:instvis}.
In the normal regime, it is clear that changing either $x'$ or $x''$ increases the path length of one trajectory while simultaneously decreasing that of the other,
thereby increasing one of the abbreviated actions, $W_n$, and decreasing the other.
In the normal regime,
the derivative of the total abbreviated action, $W = W_0 + W_1$, vanishes at the hopping point
because here
$\pder{W_0}{x'}=\pder{W_0}{x''}=-\pder{W_1}{x'}=-\pder{W_1}{x''} = p^\ddag$,
where $p^\ddag = p_0(x^\ddag,E) = p_1(x^\ddag,E)$.
In the case of the instanton in the inverted regime, changing the positions of the terminal points $x'$ or $x''$ leads to either an elongation or contraction of both trajectories.
In this case, the total abbreviated action has been defined as $W = W_0 - W_1$
and its derivative also vanishes at the hopping point
because here
$\pder{W_0}{x'}=\pder{W_0}{x''}=\pder{W_1}{x'}=\pder{W_1}{x''} = -p^\ddag$.
Furthermore, it is possible to show in this formulation that
in the normal regime,
$\pders{W}{x'}{x'} = \pders{W}{x''}{x''} = 2m[\del V_0(x^\ddag) - \del V_1(x^\ddag)]/p^\ddag>0$
whereas in the inverted regime,
$\pders{W}{x'}{x'} = \pders{W}{x''}{x''} = -2m[\del V_0(x^\ddag) - \del V_1(x^\ddag)]/p^\ddag<0$,
and in both cases $\pders{W}{x'}{x''}=0$.
Therefore the instanton is defined by
a minimum of $W(x',x'')$ in the normal regime, but a maximum in the inverted regime.
\subsection{Interpretation of the mechanism}
Finally, we wish to check that the analytically continued instanton formula gives a reasonable
physical model of tunnelling in the inverted regime.
Neglecting the less-important prefactors, the rate is proportional to $\mathrm{e}^{-S/\hbar} = \mathrm{e}^{-\beta E} \mathrm{e}^{-W/\hbar}$,
where we have used \eqn{Legendre} to separate the exponential into two terms.
In a similar way to the standard instanton approach, \cite{Perspective}
we can interpret the
first term as the probability of
the system acquiring energy $E$ from
thermal fluctuations
and the
second term as being the probability of tunnelling at this energy.
In order to make this interpretation,
we require that $W$ is positive, ensuring that the tunnelling probability is bounded by 1.
Noting that $W_n \ge 0$,
this is clearly the case in the normal regime, where $W=W_0+W_1$.
However, in the inverted regime, where $W=W_0-W_1$, we will need to show that $W_0 \ge W_1$.
This can be justified,
at least for a system similar to the schematic in \fig{instvis}(b),
using the fact that $V_0(\mat{x}) \ge V_1(\mat{x})$ and thus $p_0(\mat{x},E_0) \ge p_1(\mat{x},E_1)$ for any point $\mat{x}$ along the instanton.
Noting also that the path length on the reactant state is longer,
it is easy to see from the definition of $W_n$ in \eqn{equ:abbrev_a} that $W_0 \ge W_1$ in this case.
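This inequality can be confirmed numerically for the one-dimensional geometry of Fig.~\ref{fig:instvis}(b). The following sketch (with illustrative parameters) evaluates $W_0$ and $W_1$ by quadrature along the line joining the hopping point to the two bounce points:

```python
import math

# Sketch: confirm W0 >= W1 (and hence W = W0 - W1 >= 0) for the 1D
# inverted-regime geometry using two shifted harmonic diabats.
# Illustrative reduced units; eps > Lambda = 2 m w^2 zeta^2 (inverted).

m, w, zeta, eps, E = 1.0, 1.0, 1.0, 3.0, 0.05

def V0(x): return 0.5 * m * w**2 * (x + zeta)**2
def V1(x): return 0.5 * m * w**2 * (x - zeta)**2 - eps

x_dag = -eps / (2 * m * w**2 * zeta)            # crossing point, V0 = V1
x0_b = -zeta - math.sqrt(2 * E / m) / w         # bounce point on V0
x1_b = zeta - math.sqrt(2 * (E + eps) / m) / w  # bounce point on V1
# ordering matches Fig. instvis(b): x_dag < x1_b < x0_b

def W(V, x_b, n=100000):
    dx = (x_b - x_dag) / n
    s = sum(math.sqrt(2 * m * max(V(x_dag + (i + 0.5) * dx) - E, 0.0))
            for i in range(n))
    return 2 * s * dx

W0, W1 = W(V0, x0_b), W(V1, x1_b)
print(W0 > W1 > 0.0)   # so exp(-(W0 - W1)/hbar) is bounded by 1
```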
As a corollary to the proof that $W \ge 0$,
if we take the potential to be zero at the reactant minimum, such that
$V_0(\mat{x}^{(0)}_\text{min})=0$,
which ensures that $E\ge0$ for the instanton, it follows that $S\ge0$.
This suggests that, at least within the approximation of neglecting the prefactors,
the fastest rate will be found when $S=0$, which is the activationless regime.
There is a barrier to reaction (relative to the reactant minimum)
in both the normal and inverted regime, which gives a positive action and thus smaller rate.
One thus recovers the Marcus turnover picture.
However, because in the inverted regime, $W$ is a difference rather than a sum of two contributions,
it will typically be smaller than in the normal regime,
leading to a larger tunnelling effect.
For the standard instanton approach in the Born--Oppenheimer limit,
it is known that the $\mathrm{e}^{-W/\hbar}$ factor can be derived from one-dimensional WKB theory
as the probability of reaction at a given energy, $E$. \cite{Miller1975semiclassical,Perspective}
WKB theory has also been applied to the nonadiabatic transfer problem
and the result is known as the Zhu--Nakamura formula. \cite{Zhu1995ZN,NakamuraNonadiabatic}
In the diabatic limit their result (written in our notation) for the inverted regime is $\mathrm{e}^{-(W_0-W_1)/\hbar}$,
where the $W_n$ values are calculated along paths from the crossing point to the turning points and back again.
This is identical to the factor found here from analytic continuation of instanton theory and
confirms that our new formula provides the correct physical description of the reaction
according to the mechanistic interpretation suggested in Fig.~\ref{fig:instvis}.
Note that our formula can be applied to general multidimensional systems,
whereas the Zhu--Nakamura formula is a one-dimensional theory.
Therefore, only with instanton theory will it be possible to study the dominant tunnelling pathways in multidimensional systems,
which in general will involve all degrees of freedom of the system,
and will typically not require that
the $\mat{x}_1$ trajectory follows part of the same path as $\mat{x}_0$.
\section{Ring-polymer instanton formulation}
\label{sec:RP}
In practical implementations of the instanton approach,
we typically
adopt a ring-polymer formulation. \cite{Schwieters1998diabatic,Andersson2009Hmethane,RPInst,Rommel2011locating,Perspective}
For golden-rule instanton calculations, we follow the approach suggested for the normal regime in \Ref{GoldenRPI}
in which
both paths are discretized into $N_n$ equally spaced imaginary-time intervals
of length $\eta_n = \tau_n/N_n$.
This same approach can be adapted for use in the inverted regime,
where
$\tau_1$ and therefore $\eta_1$ will be negative.
The resulting $N=N_0+N_1$ beads describe the
ring polymer
as shown in Fig.~\ref{fig:Beads}.
\begin{figure}
\includegraphics[width=8.5cm]{CombineBeads-eps-converted-to.pdf}
\caption{Schematic showing the ring polymer
corresponding to \eqn{SRP}
for an example with $N=10$ split into $N_0=6$ and $N_1=4$.
There is no fundamental difference in the setup of the ring polymer between the two regimes,
but for clarity we show the configurations into which the beads will automatically arrange themselves
for (a) the normal and (b) the inverted regime.
The beads are shown as circles coloured blue if they feel the reactant potential, $V_0$,
and red if they feel the product potential, $V_1$. Beads 0 and $N_0$ feel an averaged potential.
The springs between beads are represented by lines coloured blue for an imaginary-time interval of $\eta_0$ and red for an imaginary-time interval of $\eta_1$.
In the inverted regime, $\eta_1$ is negative and thus the springs are repulsive.
}
\label{fig:Beads}
\end{figure}
As previously discussed, the instanton corresponds to a stationary point of the action,
which for a path described by a ring-polymer is given by
\begin{multline}
S_\text{RP}(\mat{x}^{(0)}, \ldots, \mat{x}^{(N-1)}; \tau) = \\
\sum_{i=1}^{N_0} \frac{m \|\mat{x}^{(i)} - \mat{x}^{(i-1)}\|^2}{2 \eta_0}
+ \sum_{i=1}^{N_0 - 1} \eta_0 V_0(\mat{x}^{(i)})\\
+ \sum_{i=N_0 + 1}^{N} \frac{m \|\mat{x}^{(i)} - \mat{x}^{(i-1)}\|^2}{2 \eta_1}
+ \sum_{i=N_0 + 1}^{N - 1} \eta_1 V_1(\mat{x}^{(i)})\\
+ \eta_0\frac{V_0(\mat{x}^{(0)}) + V_0(\mat{x}^{(N_0)})}{2} + \eta_1 \frac{V_1(\mat{x}^{(N_0)}) + V_1(\mat{x}^{(N)})}{2}\, ,
\label{SRP}
\end{multline}
where cyclic indices are implied such that $\mat{x}^{(0)} \equiv \mat{x}^{(N)}$ in order to form a closed orbit.
This is defined such that in the $N_0,N_1\rightarrow\infty$ limit the value of the ring-polymer action at its stationary point
is equal to the semiclassical instanton action, $S$.%
\footnote{In the activationless regime, the value of $\tau$ at the stationary point will be 0, and thus $\eta_1=0$, which leads to problems in evaluating the ring-polymer action.
However, this causes no problems as in this case, we know that the stationary point has all beads collapsed at the hopping point, which is also the minimum of $V_0$, so no ring-polymer optimization is necessary.
Derivatives of the action can then be evaluated analytically.
}
As in the normal regime,
we define the instanton as a stationary point of $S_\text{RP}$ with respect to
the bead positions, $\{\mat{x}^{(0)},\dots,\mat{x}^{(N-1)}\}$, and $\tau$ simultaneously.
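For illustration, a minimal implementation of \eqn{SRP} for one-dimensional diabats might look as follows (the variable names are ours, and a practical implementation would also supply analytic gradients to the optimizer):

```python
# Sketch implementation of the discretized action S_RP of Eq. (SRP)
# for one-dimensional diabats; function names are ours, not the paper's.

def S_RP(x, N0, tau0, tau1, m, V0, V1):
    """x holds the N bead positions x[0..N-1]; x[N] wraps to x[0].
    Beads 1..N0-1 feel V0, beads N0+1..N-1 feel V1; beads 0 and N0
    are shared between the trajectories and feel both potentials."""
    N = len(x)
    eta0, eta1 = tau0 / N0, tau1 / (N - N0)   # eta1 < 0 when inverted
    xc = x + [x[0]]                           # cyclic index: x[N] = x[0]
    S = 0.0
    for i in range(1, N0 + 1):                # reactant springs
        S += m * (xc[i] - xc[i - 1])**2 / (2 * eta0)
    for i in range(1, N0):                    # reactant potentials
        S += eta0 * V0(xc[i])
    for i in range(N0 + 1, N + 1):            # product springs
        S += m * (xc[i] - xc[i - 1])**2 / (2 * eta1)
    for i in range(N0 + 1, N):                # product potentials
        S += eta1 * V1(xc[i])
    S += eta0 * (V0(xc[0]) + V0(xc[N0])) / 2  # half-weighted end beads
    S += eta1 * (V1(xc[N0]) + V1(xc[N])) / 2
    return S
```

A quick consistency check: for a collapsed ring polymer with all beads at a point where $V_0=V_1=V^\ddag$, the springs vanish and $S_\text{RP}=\tau_0 V_0 + \tau_1 V_1$, as expected.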
In order to reduce the number of evaluations of the potential energy and its gradient,
one could alternatively employ a half-ring polymer formalism in a similar way to the standard approach. \cite{GoldenRPI,Andersson2009Hmethane}
Although this is valid for both the normal and inverted regimes,
to simplify the explanations of our findings, we shall not take this extra step here.
In the normal regime, the instanton corresponds to a single-index saddle point.
This is because the instanton is formed of two minimum-action paths,
and $S$ is a maximum only in the $\tau$ variable, i.e.\ $\der[2]{S}{\tau}<0$.
However,
in the inverted regime the stationary point of the action which we associate with our semiclassical instanton is not
a single-index saddle point;
instead, we seek a saddle point with $N_0 D$ positive and $N_1 D + 1$ negative eigenvalues.
This can be understood by first considering the situation where
the values of $\tau$ as well as $\mat{x}^{(0)}$ and $\mat{x}^{(N_0)}$ are fixed at the hopping point.
We would need to minimize the action with respect to the beads $\{\mat{x}^{(1)},\dots,\mat{x}^{(N_0-1)}\}$ and maximize it with respect to the beads $\{\mat{x}^{(N_0+1)},\dots,\mat{x}^{(N-1)}\}$ in order to reproduce the instanton configuration.
Note that this gives perfectly reasonable trajectories: because $\eta_1<0$, the springs between the second set of beads are repulsive,
so maximizing the action with respect to these beads is equivalent to minimizing it with attractive springs.
Then, as explained in \secref{analysis}, the variation of the remaining points gives $D$ negative and $D$ positive eigenvalues, and variation of $\tau$ gives one additional negative eigenvalue.
Starting from an initial guess, we carry out a stationary-point search and thereby simultaneously optimize the bead positions, $\{\mat{x}^{(0)},\dots,\mat{x}^{(N-1)}\}$, and $\tau$.
In the normal regime, \cite{GoldenRPI} we use a standard single-index saddle point optimization algorithm.\cite{WalkingOnPES}
However,
due to the nature of the instanton in the inverted regime as a higher-index saddle point we have to adapt the optimization scheme slightly.
Finding such higher-index saddle points can sometimes be a very demanding task.
Standard root-finding algorithms, such as MINPACK's \texttt{hybrd} and \texttt{hybrj} routines (based on a modified Powell method) as implemented, for example, in SciPy, can in principle be used.
Note, however, that these approaches locate stationary points of any index and thus may not find the instanton unless a very good initial guess is given.
Our task is made simpler by the fact that we know exactly the index of the saddle point we are seeking.
This enables us to use eigenvector-following methods
modified so as to reverse the signs of forces
corresponding to the first $N_1 D+1$ modes.
\cite{WalesEnergyLandscapes,Doye2002}
One can alternatively modify a single-index saddle-point optimizer such as that introduced in \Ref{WalkingOnPES}
by reversing all but the first of these modes.
This is the approach used to optimize the instantons in \secref{results}.
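The essence of the mode-reversal trick can be sketched on a toy function whose saddle index is known. In this illustration the two downhill modes are simply coordinate axes; in the instanton application they would be eigenvectors of the action Hessian, recomputed as the optimization proceeds:

```python
# Toy sketch of mode reversal: to converge on a saddle point of known
# index, flip the sign of the force components along the modes that
# should be maximized, then do plain steepest descent on the modified
# force. Target: the index-2 saddle of f = x^2/2 - y^2/2 - z^2/2 at the
# origin, whose two "negative" modes are the y and z axes.

def grad(v):
    x, y, z = v
    return [x, -y, -z]

v = [1.0, -0.8, 0.5]
for _ in range(200):
    g = grad(v)
    g[1], g[2] = -g[1], -g[2]            # reverse the two downhill modes
    v = [vi - 0.1 * gi for vi, gi in zip(v, g)]
print(max(abs(c) for c in v) < 1e-8)     # converged to the saddle
```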
One might worry that
the number of higher-index stationary points is often larger than the number of minima and single-index saddle points as was seen in \Ref{Doye2002} for studies of Lennard--Jones clusters.
However, in our case the stationary points of the action have the direct physical interpretation of two classical trajectories which join together into a periodic orbit.
At least for the simple systems we have studied, it is clear that there is only one
periodic orbit which can exist, and thus only one stationary point of the action.
In fact we ran root-finding algorithms (which optimize to any stationary point regardless of its index) starting from randomly chosen initial conditions and did not find any other stationary points of the action.
We therefore conclude that there is no particular problem in locating ring-polymer instantons in the inverted regime.
In practice, we
make a sophisticated initial guess of the instanton geometry
using our knowledge that the hopping point is located on the crossing seam and of the general shape of the instanton in the inverted regime.
In addition we can
initiate the optimization at a high temperature, where the instanton is known to be collapsed at the
minimum-energy crossing point and then systematically cool down the system,
which ensures an excellent initial guess for each step.
In this way the instanton optimization in the inverted regime can be made just as numerically efficient and reliable as in the normal regime.
Once the instanton orbit is found,
the derivatives of the action with respect to $\mat{x}'$, $\mat{x}''$ and $\tau$,
which are required for the prefactor,
can be evaluated in terms of the bead positions, potentials, gradients and Hessians
using the formulas given in the appendix of \Ref{GoldenRPI}.
This allows the rate to be computed directly using the formula in \eqn{equ:kinst}.
\section{Application to model systems}
\label{sec:results}
The analytic-continuation procedure
ensures that we retain the correct behaviour in the high-temperature classical limit
and also that instanton theory continues to give the exact result for a system of two linear potentials. \cite{GoldenGreens}
Here we shall test the numerical implementation for a multidimensional and an anharmonic system
to check that it remains well behaved.
We chose to apply our method to the same two model systems as Lawrence and Manolopoulos in their work on the extrapolation of Wolynes theory into the inverted regime. \cite{Lawrence2018Wolynes}
\subsection{Spin-boson model}
\label{subsec:SB}
The first model system under consideration is the spin-boson model at $T= \SI{300}{\K}$ defined by the potentials
\begin{subequations}
\begin{align}
V_0(\mat{x}) &= \sum_{j=1}^D \tfrac{1}{2} m \omega_j^2(x_j + \zeta_j)^2 \, ,\\
V_1(\mat{x}) &= \sum_{j=1}^D \tfrac{1}{2} m \omega_j^2(x_j - \zeta_j)^2 - \varepsilon \, ,
\end{align}
\end{subequations}
where $\zeta_j = c_j/m\omega_j^2$ and
\begin{align}
\nonumber \omega_j = \omega_{\mathrm{c}} \, \tan\frac{(j - \frac{1}{2})\pi}{2D}& \, , \qquad
c_j = \sqrt{\frac{m\Lambda}{2D}} \omega_j \, ,
\end{align}
with reorganization energy $\Lambda = \SI{50}{\Calorie\per\mol}$
and characteristic frequency $\omega_{\text{c}} = \SI{500}{\per\cm}$.
This has a
discretized spectral density in $D$ degrees of freedom,
\begin{align}
J(\omega) &= \frac{\pi}{2} \sum_{j=1}^{D} \frac{c_j^2}{m\omega_j} \delta(\omega - \omega_j) \, ,
\end{align}
which reproduces a Debye spectral density in the $D\rightarrow\infty$ limit. \cite{Wang2003RuRu,Berkelbach2012hybrid}
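A quick check of this discretization (in reduced units, $m=\Lambda=\omega_\text{c}=1$) confirms that the discrete reorganization energy, $\sum_j 2c_j^2/m\omega_j^2$, recovers $\Lambda$ exactly for any $D$:

```python
import math

# Sketch: build the discretized bath frequencies and couplings and
# verify that sum_j 2 c_j^2 / (m w_j^2) = Lambda for any D.
# Reduced units (m = Lambda = w_c = 1); D = 12 as in the text.

D, m, Lam, wc = 12, 1.0, 1.0, 1.0
w = [wc * math.tan((j - 0.5) * math.pi / (2 * D)) for j in range(1, D + 1)]
c = [math.sqrt(m * Lam / (2 * D)) * wj for wj in w]
Lam_disc = sum(2 * cj**2 / (m * wj**2) for cj, wj in zip(c, w))
print(abs(Lam_disc - Lam) < 1e-12)   # each term contributes Lambda/D
```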
The exact quantum golden-rule rate for this system with constant diabatic coupling, $\Delta$, can be calculated by numerical integration of the flux correlation function\cite{Weiss}
\begin{align}
\label{equ:kex}
k_{\mathrm{QM}} &= \frac{\Delta^2}{\hbar^2} \int_{-\infty - \mathrm{i}\tau}^{\infty - \mathrm{i}\tau} \mathrm{e}^{-\phi(t)/\hbar} \,\mathrm{d} t \, ,
\end{align}
with
\begin{align}
\phi(t) &= -\mathrm{i}\varepsilon t + \frac{4}{\pi} \int \frac{J(\omega)}{\omega^2} \left[ \frac{1 - \cos{\omega t}}{\tanh{{\ensuremath{\frac{1}{2}}} \beta \hbar \omega}} + \mathrm{i}\, \sin{\omega t} \right] \mathrm{d} \omega \, ,
\end{align}
where the rate is independent of $\tau$, which can therefore be chosen in order to get a faster convergence of the time-integral.
A commonly used approach \cite{Bader1990golden} is to perform a stationary-phase approximation to the rate expression in \eqn{equ:kex} about the point $t = -\mathrm{i}\tau$ such that $\phi'(-\mathrm{i}\tau) = 0$:
\begin{align}
k_{\mathrm{SP}} = \frac{\Delta^2}{\hbar^2} \sqrt{\frac{2\pi\hbar}{\phi''(-\mathrm{i}\tau)}} \, \mathrm{e}^{-\phi(-\mathrm{i}\tau)/\hbar} \, ,
\label{equ:statphase}
\end{align}
where the primes denote derivatives with respect to $t$.
In the case of the spin-boson model the closed-form expressions for the actions of classical trajectories on each surface are known\cite{Kleinert,Feynman} and the instanton rate can be calculated analytically.
\cite{GoldenGreens,Cao1997nonadiabatic}
The derivation in the inverted regime, starting from \eqn{equ:kinst2}, is completely analogous.
Note that the semiclassical partition function is exact for this harmonic system, i.e.\ $Z_0=\prod_{j=1}^D [2\sinh{\beta\hbar\omega_j/2}]^{-1}$.
The action at the stationary point in the spatial coordinates is given by
\begin{equation}
\label{Sspinboson}
S(\tau) = -\varepsilon\tau + \sum_{j=1}^{D} 2m\omega_j \zeta_j^2 \left[ \frac{1 - \cosh{\omega_j\tau}}{\tanh{{\ensuremath{\tfrac{1}{2}}}\beta\hbar\omega_j}} + \sinh{\omega_j\tau} \right],
\end{equation}
which holds for both positive and negative $\tau$.
In the case of the spin-boson model, the prefactor in \eqn{equ:kinst2} can be shown to cancel with the reactant partition function.\cite{GoldenGreens}
Therefore the rate expression reduces to
\begin{equation}
k_{\mathrm{SCI}} = \sqrt{2\pi\hbar} \, \frac{\Delta^2}{\hbar^2} \left( - \frac{\mathrm{d}^2 S}{\mathrm{d}\tau^2} \right)^{-\frac{1}{2}} \mathrm{e}^{-S(\tau)/\hbar} \, ,
\end{equation}
which should be evaluated at a value of $\tau$ found numerically to be the stationary point of the action, \eqn{Sspinboson}.
This expression, however, coincides exactly with the stationary-phase approximation given in \eqn{equ:statphase},
as we identify $S(\tau)\equiv\phi(-\ensuremath{\mathrm{i}}\tau)$.
This therefore shows that, for the spin-boson model, the analytically continued instanton theory is equivalent to the stationary-phase approximation in both the normal and inverted regimes.
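The location of the stationary point can also be sketched numerically. For a single bath mode in reduced units ($m=\hbar=\omega=1$; an illustrative case, not the $D=12$ model used below), bisection on $\der{S}{\tau}$ confirms that $\tau>0$ in the normal regime ($\varepsilon<\Lambda$), $\tau<0$ in the inverted regime ($\varepsilon>\Lambda$) and $\tau=0$ at the activationless point:

```python
import math

# Sketch: locate the stationary point of the one-mode (D = 1) spin-boson
# action S(tau) of Eq. (Sspinboson) by bisection on dS/dtau.
# Reduced units: m = hbar = omega = 1; beta and Lambda are illustrative.

beta, Lam = 1.0, 1.0
t = math.tanh(beta / 2)           # tanh(beta*hbar*omega/2)

def dS(tau, eps):
    # dS/dtau = -eps + Lam*[cosh(tau) - sinh(tau)/tanh(beta/2)],
    # using Lam = 2 m omega^2 zeta^2; monotonically decreasing in tau
    return -eps + Lam * (math.cosh(tau) - math.sinh(tau) / t)

def tau_star(eps, a=-20.0, b=20.0):
    for _ in range(200):          # bisection on the monotonic derivative
        c = 0.5 * (a + b)
        if dS(a, eps) * dS(c, eps) <= 0:
            b = c
        else:
            a = c
    return 0.5 * (a + b)

print(tau_star(0.5) > 0, tau_star(1.5) < 0, abs(tau_star(1.0)) < 1e-6)
```

The sign of the optimal $\tau$ thus switches at the Marcus turnover, $\varepsilon=\Lambda$, in accordance with the definitions of the normal and inverted regimes given above.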
In order to demonstrate that the ring-polymer instanton optimization and rate calculation
are numerically stable, we carried out calculations using a general multidimensional algorithm,
which did not take into account the fact that the problem can be solved analytically.
The bath was discretized into $D=12$ degrees of freedom as in \Ref{Lawrence2018Wolynes}.
We present all results
as functions of the driving force, $\varepsilon$, and
compare the computed rate constants with
those of Marcus theory,
\begin{equation}
k_{\mathrm{MT}}(\varepsilon) = \frac{\Delta^2}{\hbar} \sqrt{\frac{\pi\beta}{\Lambda}}\mathrm{e}^{-\beta(\Lambda - \varepsilon)^2/4\Lambda} \, .
\end{equation}
Note that Marcus theory is equal to classical TST for this system,
to which instanton theory tends in the classical limit. \cite{GoldenGreens}
The inverted regime is defined by $\varepsilon/\Lambda > 1$.
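As a quick illustration of this classical reference point, the Marcus expression above can be evaluated directly; the sketch below (with $\Delta$, $\Lambda$ and $\beta$ as illustrative placeholders) makes the parabolic free-energy dependence explicit, including the symmetry $k_{\mathrm{MT}}(2\Lambda)=k_{\mathrm{MT}}(0)$ about the activationless point $\varepsilon=\Lambda$.

```python
import math

def k_marcus(eps, Lam, beta, Delta=1.0, hbar=1.0):
    """Classical Marcus rate:
    k = (Delta^2/hbar) * sqrt(pi*beta/Lam) * exp(-beta*(Lam - eps)^2 / (4*Lam))."""
    activation = beta * (Lam - eps) ** 2 / (4.0 * Lam)
    return (Delta**2 / hbar) * math.sqrt(math.pi * beta / Lam) * math.exp(-activation)
```

The classical rate therefore falls off symmetrically beyond $\varepsilon=\Lambda$, whereas Fig.~\ref{fig:SB} shows that tunnelling strongly enhances the quantum rate in the inverted regime.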
\begin{figure}
\includegraphics[width=8.5cm]{SB_lrates-eps-converted-to.pdf}
\caption{Marcus theory, semiclassical instanton (SCI) and exact quantum rates for a twelve-dimensional spin-boson model.
Results are presented as a function of the driving force relative to the classical Marcus theory rate of the symmetric ($\varepsilon=0$) system.
}
\label{fig:SB}
\end{figure}
The semiclassical instanton rates
are depicted in Fig.~\ref{fig:SB}
where they can be compared with the exact result and with Marcus theory.
As expected, instanton theory gives very accurate rates for this system since, as explained above, it matches the stationary-phase expression, which in turn is known to be in excellent agreement with the exact rate for this model.
\footnote{The stationary-phase result deviates from the exact rate by 4\% in the activationless regime,
but the errors for the other cases tested are well below 1\%.}
Furthermore Fig.~\ref{fig:SB} confirms that nuclear quantum effects in the inverted regime can be much larger than those in the normal regime,\cite{Siders1981inverted} causing a dramatic orders-of-magnitude increase in the rate compared with the classical Marcus theory.
It is for this reason that we considered it of particular importance
to develop the practical instanton approach
described in this paper.
Table~\ref{table:1} shows how the results converge with the total number of ring-polymer beads, $N$,
for systems in the normal, activationless and inverted regimes.
It can be seen that when the beads are split among the two potentials according to the optimal ratio $N_1/N_0 = |\tau_1|/\tau_0$, the rate converges very quickly.
Even with only $N=128$ beads, all rates are found to be converged to within 2.5\% of the stationary-phase result.
However, in general, $\tau$ is not known prior to the optimization.
Hence we also show the rates optimized with an equal split of the beads, $N_0=N_1$.
Although the instanton rates approach the stationary-phase results more slowly than those calculated with the optimal ratio, convergence is again reached in all cases.
Consequently, a good initial guess speeds up convergence but is not required to obtain accurate results.
Furthermore, a simple two-step approach suggests itself: one could take the optimized $\tau$ value obtained from an instanton search with a small number of beads and then adjust the split of the beads accordingly for more accurate calculations performed with a larger number of beads.
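The bead-splitting step of this two-stage procedure amounts to integer rounding. A minimal sketch is given below; it assumes the convention $\tau_1=\tau$ and $\tau_0=\beta\hbar-\tau$ for the imaginary times spent on the two surfaces, with $\tau$ possibly negative in the inverted regime.

```python
def split_beads(N, tau, beta, hbar=1.0):
    """Split N ring-polymer beads between the two surfaces so that
    N1/N0 approximates |tau_1|/tau_0 to the nearest integers.
    Assumes tau_1 = tau and tau_0 = beta*hbar - tau."""
    tau0 = beta * hbar - tau
    frac1 = abs(tau) / (tau0 + abs(tau))  # fraction of beads on surface 1
    N1 = max(1, round(N * frac1))         # keep at least one bead on each surface
    return N - N1, N1
```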
These results
confirm that
golden-rule instanton theory is not only as accurate,
but also as efficient in the inverted regime as it is in the normal regime.
In fact, convergence to almost perfect quantitative agreement is achieved
even deep in the inverted regime, where the quantum rate exhibits a $10^7$-fold speed-up due to tunnelling.
\begin{table*}
\caption{
Numerical results for the reaction rates of the spin-boson model
(parameters defined in Sec.~\ref{subsec:SB})
computed using various methods
given relative to the Marcus rate for the same system as $\log_{10} [k(\varepsilon)/k_{\mathrm{MT}}(\varepsilon)]$.
The values of $\tau$ given are determined from the calculation of the stationary-phase expression.
We optimize the instanton using two approaches for splitting the $N$ beads into two sets,
one with an optimal bead ratio (to the nearest integers) defined by
$N_1/N_0 = r_{\mathrm{opt}} = |\tau_1|/\tau_0$
(using $\tau$ obtained from the stationary-phase approximation)
and the other with an equal split $N_1/N_0 = r_{1/2} = 1$.
A cell with ``$-$'' indicates failure to find a stationary point with the correct index.
In the limit $N\rightarrow\infty$, the result tends to the stationary-phase approximation (SP), \eqn{equ:statphase}.
Exact rates are calculated by numerically integrating \eqn{equ:kex}.
}
\label{table:1}
\begin{ruledtabular}
\begin{tabular}{L{0.5cm}C{0.1cm} C{0.2cm}C{1.3cm}C{0.1cm} C{1.3cm}C{1.3cm}C{0.1cm} C{1.3cm}C{1.3cm}C{0.1cm} C{1.3cm}C{1.3cm}C{0.1cm} C{1.3cm}C{1.3cm}}
\multicolumn{2}{r}{$\varepsilon/\Lambda$} & & $0.0$ & & \multicolumn{2}{c}{$0.5$} & &
\multicolumn{2}{c}{$1.0$} & & \multicolumn{2}{c}{$1.5$} & & \multicolumn{2}{c}{$2.0$} \\
\multicolumn{2}{r}{\quad$\tau/\beta\hbar$} & & $0.5000$ & & \multicolumn{2}{c}{$0.1589$} & & \multicolumn{2}{c}{$0.0000$} & & \multicolumn{2}{c}{$-0.0430$} & & \multicolumn{2}{c}{$-0.0612$} \\
\cline{4-4} \cline{6-7} \cline{9-10} \cline{12-13} \cline{15-16}
\multicolumn{2}{l}{$N$} & & $r_{\mathrm{opt}}$ & & $r_{\mathrm{opt}}$ & $r_{1/2}$ & & $r_{\mathrm{opt}}$ & $r_{1/2}$ & & $r_{\mathrm{opt}}$ & $r_{1/2}$ & & $r_{\mathrm{opt}}$ & $r_{1/2}$ \\
\hline
\multicolumn{2}{l}{$32$} & & $2.24$ & & $1.12$ & $1.50$ & & $-0.25$ & $0.60$ & & $1.37$ & $-$ & & $7.00$ & $-$ \\
\multicolumn{2}{l}{$64$} & & $2.26$ & & $1.12$ & $1.26$ & & $-0.26$ & $0.04$ & & $1.31$ & $2.17$ & & $7.01$ & $-$ \\
\multicolumn{2}{l}{$128$} & & $2.26$ & & $1.13$ & $1.16$ & & $-0.26$ & $-0.18$ & & $1.31$ & $1.47$ & & $7.04$ & $7.19$ \\
\multicolumn{2}{l}{$256$} & & $2.26$ & & $1.13$ & $1.14$ & & $-0.26$ & $-0.24$ & & $1.32$ & $1.36$ & & $7.05$ & $7.08$ \\
\multicolumn{2}{l}{$512$} & & $2.26$ & & $1.13$ & $1.13$ & & $-0.26$ & $-0.26$ & & $1.32$ & $1.33$ & & $7.05$ & $7.06$ \\
\multicolumn{2}{l}{$1024$} & & $2.26$ & & $1.13$ & $1.13$ & & $-0.26$ & $-0.26$ & & $1.32$ & $1.33$ & & $7.05$ & $7.06$\\
\multicolumn{2}{l}{$2048$}& & $2.26$ & & $1.13$ & $1.13$ & & $-0.26$ & $-0.26$ & & $1.32$ & $1.32$ & & $7.05$ & $7.05$\\
\hline
\multicolumn{2}{l}{SP} & & $2.26$ & & \multicolumn{2}{c}{$1.13$} & & \multicolumn{2}{c}{$-0.26$} & & \multicolumn{2}{c}{$1.32$} & & \multicolumn{2}{c}{$7.05$} \\
\multicolumn{2}{l}{Exact} & & $2.26$ & & \multicolumn{2}{c}{$1.13$} & & \multicolumn{2}{c}{$-0.25$} & & \multicolumn{2}{c}{$1.31$} & & \multicolumn{2}{c}{$7.05$} \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Predissociation model}
In this section, we
show that the approach is not restricted to harmonic systems, but works just as well
for anharmonic potentials,
which is of course the main advantage of the instanton approach.
We consider the predissociation model previously studied in \Refs{nonoscillatory} and \onlinecite{Lawrence2018Wolynes},
which is not only anharmonic and asymmetric but also contains an unbound state.
The potentials are given as
\begin{subequations}
\label{equ:pd_pot}
\begin{align}
V_0(x) &= {\ensuremath{\tfrac{1}{2}}} m\omega^2x^2 \, ,\\
\label{equ:pd_pot1}
V_1(x) &= D_{\text{e}} \mathrm{e}^{-2\alpha(x-\zeta)} - \varepsilon \, ,
\end{align}
\end{subequations}
with reduced units $m=1$, $\omega = 1$, $D_{\text{e}} = 2$, $\alpha = 0.2$, $\zeta = 5$, $\beta = 3$ and $\hbar = 1$,
whereas $\varepsilon$ is varied.
Both states are depicted in Fig.~\ref{fig:predissociation} for one particular choice of the driving force, $\varepsilon$.
We present results for a range of values of driving force relative to
the reorganization energy given by $\Lambda=D_{\text{e}}\mathrm{e}^{2\alpha\zeta}$, which is the key parameter for determining the crossover between normal and inverted regimes.
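For reference, the model and its reorganization energy can be written down in a few lines; the sketch below uses the reduced units quoted above (the constant names are ours).

```python
import math

# Reduced units from the text: m = 1, omega = 1, D_e = 2, alpha = 0.2,
# zeta = 5, beta = 3 and hbar = 1; the driving force eps is varied.
M, OMEGA, DE, ALPHA, ZETA = 1.0, 1.0, 2.0, 0.2, 5.0

def V0(x):
    """Harmonic (bound) reactant state."""
    return 0.5 * M * OMEGA**2 * x**2

def V1(x, eps):
    """Repulsive (dissociative) product state, lowered by the driving force."""
    return DE * math.exp(-2.0 * ALPHA * (x - ZETA)) - eps

# Reorganization energy controlling the normal/inverted crossover
LAMBDA = DE * math.exp(2.0 * ALPHA * ZETA)
```

Note that $V_1(0)-V_0(0)=\Lambda-\varepsilon$, so the vertical gap at the reactant minimum is exactly the quantity separating the normal ($\varepsilon<\Lambda$) and inverted ($\varepsilon>\Lambda$) regimes.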
The exact quantum golden-rule rate expressed in the eigenbases of reactant and product states \cite{nonoscillatory} $\{\psi_0^{\lambda}, \psi_1^{\nu}\}$ initialized by a thermal distribution of reactant states is calculated using \eqn{equ:k_qs}.
Just as in the spin-boson example we give all rates relative to the corresponding classical rate.
For the predissociation model, with $\Delta$ taken to be constant, the classical limit is given by the one-dimensional classical golden-rule transition-state theory (cl-TST) rate,
\eqn{clTST},
where $Z_0^\text{cl} = 1/\beta\hbar\omega$.
\begin{figure}
\includegraphics[width=8.5cm]{PESsketch-eps-converted-to.pdf}
\caption{The diabatic potential-energy curves for the one-dimensional predissociation model
for a particular choice of driving force, $\varepsilon$, in the inverted regime.
The reorganization energy, $\Lambda$, is also indicated.
}
\label{fig:predissociation}
\end{figure}
The results are depicted in Fig.~\ref{fig:PD}, again showing excellent agreement between the exact and instanton rates, with a maximum relative error of 0.1\%.
The order-of-magnitude deviation of the classical rate at large $\varepsilon$ emphasizes the importance of nuclear tunnelling in these systems, especially in the inverted regime, and this effect is captured almost perfectly by our semiclassical instanton approach.
Note that although Lawrence and Manolopoulos were able to achieve similarly accurate results with their approach, \cite{Lawrence2018Wolynes}
it was necessary for them to design a special functional form to treat this dissociative system,
whereas we could apply an identical algorithm to the case of the spin-boson model.
\begin{figure}
\includegraphics[width=8.5cm]{pd_rates-eps-converted-to.pdf}
\caption{Semiclassical instanton (SCI), exact quantum (QM) and classical TST rates for the predissociation model.
Results from the various methods are presented as a function of the driving force, $\varepsilon$,
relative to the classical rate of the $\varepsilon=0$ system.
}
\label{fig:PD}
\end{figure}
Besides the calculation of electron-transfer rates, we want to stress another interesting possible application of the method, for which the predissociation model provides a simple example.
Instead of artificially shifting two potential-energy surfaces in order to simulate different regimes of electron transfer, the shift between the two surfaces could be caused by a variable external field.
For instance, we consider a system with a ground electronic state $\ket{0}$
which is uncoupled to an excited electronic state $\ket{1}$;
these potential-energy surfaces may be well separated in energy.
We can then study the interaction of this uncoupled system with a light field of continuous-wave frequency $\omega_{\mathrm{ex}}$
and interpret the golden-rule result as a photodissociation spectrum.
The two electronic states will now be coupled by the electric dipole operator $\mu(\hat{\mat{x}})$
instead of the nonadiabatic coupling, $\Delta(\hat{\mat{x}})$.
Hence, the golden-rule limit is equivalent to a linear response treatment of the weak-field interaction.
The instanton method can be applied to this problem because, like the rate, the cross section is described by
Fermi's golden rule.
The connection between the rate defined by \eqn{equ:k_qs} and the total photodissociation cross section in the weak-coupling limit becomes apparent from the formula for the cross section starting from a thermal equilibrium distribution\cite{Tannor,Schinke_1993,Manolopoulos1992}
\begin{multline}
\sigma_{\mathrm{tot}} (\omega_{\mathrm{ex}}) = \frac{\pi\omega_{\mathrm{ex}}}{\epsilon_0 c}
\sum_{\lambda} \frac{\mathrm{e}^{-\beta E_0^{\lambda}}}{Z_0}\\
\times \int |\mu_{\lambda\nu}|^2 \delta(E_0^{\lambda} + \hbar\omega_{\mathrm{ex}} - E_1^{\nu}) \, \mathrm{d} E_1^{\nu} \, ,
\end{multline}
where $c$ is the speed of light, $\epsilon_0$ is the vacuum permittivity
and $\mu_{\lambda\nu} = \int \psi_0^{\lambda}(\mat{x})^* \mu(\mat{x}) \psi_1^{\nu}(\mat{x}) \,\mathrm{d} \mat{x}$. Note that in our example of a scattering excited state the energies $E_1^{\nu}$ are continuous. Therefore we have replaced the sum in \eqn{equ:k_qs} by an integral and used energy-normalized continuum wave functions $\psi_1^{\nu}$.
Here we shall assume the transition dipole moment to be constant, also known as the Condon approximation.
Hence the total cross section is directly related to golden-rule rate theory by
\begin{equation}
\sigma_{\mathrm{tot}}(\omega_\text{ex}) = \frac{\hbar\omega_{\mathrm{ex}}}{2\epsilon_0 c} k_\text{QM}(\hbar\omega_\text{ex}) \, ,
\label{equ:sofk}
\end{equation}
where the rate constant, $k_\text{QM}(\hbar\omega_\text{ex})$,
is computed
with
the reactant potential shifted by the photon energy, i.e.\ $V_0(\mat{x}) \rightarrow V_0(\mat{x}) + \hbar\omega_\text{ex}$
and
$\Delta$ is replaced by $\mu$.
The rate thus depends on the photon frequency in the sense that it shifts the potential-energy surfaces relative to each other.
Similar expressions can be given for spontaneous or stimulated emission. \cite{Tannor}
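In practice the conversion in \eqn{equ:sofk} is a thin wrapper around any golden-rule rate routine. The sketch below (with $\epsilon_0=c=1$ as placeholder unit choices) takes the rate as a function of photon energy:

```python
def sigma_tot(omega_ex, k_qm, hbar=1.0, eps0=1.0, c=1.0):
    """Total photodissociation cross section from a golden-rule rate:
    sigma(omega) = (hbar*omega / (2*eps0*c)) * k(hbar*omega).
    `k_qm` must be the rate computed with V0 shifted up by the photon
    energy and Delta replaced by the transition dipole mu."""
    return hbar * omega_ex / (2.0 * eps0 * c) * k_qm(hbar * omega_ex)
```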
Replacing the exact rate in \eqn{equ:sofk} by the instanton or the classical TST rate,
we obtain the various approximate simulated spectra shown in Fig.~\ref{fig:sigma}.
In this case, we used
the predissociation model [\eqs{equ:pd_pot}] with a fixed value of $\varepsilon=-2\Lambda$,
which then describes the typical case of an excited electronic-state potential $V_1$
high above the ground-state potential $V_0$.
The deviation of the classical cross sections again illustrates the sizeable influence of nuclear quantum effects, this time on the line-shape of optical spectra.
On the other hand, semiclassical instanton theory reaches graphical accuracy with the exact result.
\begin{figure}
\includegraphics[width=8.5cm]{sigma-eps-converted-to.pdf}
\caption{Total photodissociation cross sections for the predissociation model obtained by semiclassical instanton (SCI), exact quantum (QM) and classical TST calculations.
The dissociative excited-state potential, $V_1$, is given an asymptotic energy of $-\varepsilon=2\Lambda$
and is coupled to the ground-state potential, $V_0$, by a continuous-wave weak field with
frequency $\omega_{\mathrm{ex}}$.
}
\label{fig:sigma}
\end{figure}
In order to showcase what our method can contribute to the description of spectra,
it is worth making a comparison with
standard approaches used in quantum chemistry.
The simplest and probably most common method
is to calculate the vertical excitation energy from the ground-state minimum, which corresponds to $\Lambda-\varepsilon$ in our model (specifically $3\Lambda$ for our choice of parameters).
This gives a single peak in the spectrum at $\hbar\omega_\text{ex} = \Lambda-\varepsilon$;
this approach completely disregards the statistics (or dynamics) of both states.
This method can be improved by
assuming a classical Boltzmann distribution in the ground state, which can, for example, be sampled by molecular dynamics simulations.
By calculating vertical excitation energies from different sampled configurations the natural width of the absorption bands can be revealed.
This corresponds to our classical TST calculations.
The instanton method improves this description
by including quantum nuclear effects for both the ground and excited state.
As can be seen in Fig.~\ref{fig:sigma} the absorption maximum in both the quantum and semiclassical calculations is slightly shifted to lower energies compared to the classical result. This shift ($\approx 0.03\Lambda$) is a direct consequence of the reactant potential's zero-point energy, which is not accounted for in the classical calculations. Furthermore, due to tunnelling in the normal regime the absorption band exhibits an earlier onset, whereas the equivalent process in the inverted regime causes a slower decay of the band.
The speed-up of the transition rate induced by quantum tunnelling therefore translates directly into longer tails on both sides of the absorption spectrum.
We do, however, ignore the real-time dynamics within the wells,
so our method probes only the time-scale of the fastest decay of the wave packet, which imprints itself on the broadest features of the spectrum.
These features form an envelope for the spectrum that is not altered by the slower dynamics
responsible for vibrational and rotational fine structure. \cite{HellerBook}
Therefore, although we shall not be able to describe vibrational progressions with this approach,
we expect to predict the correct envelope of the spectrum.
We note that this example of a transition to a dissociative state is thus a particularly favourable case for our method.
\section{Conclusions}
We have extended the semiclassical instanton method
for calculating Fermi's golden rule into the Marcus inverted regime.
It can be applied to multidimensional and anharmonic systems and gives a good approximation to the exact result even when nuclear quantum effects are important.
The theory reduces to classical golden-rule TST in the high-temperature limit and hence Marcus theory when the potentials are harmonic,
is exact for a system of two linear potentials,
and
is identical to the stationary-phase approximation in the case of the spin-boson model.
The main difference between the normal and inverted regimes
is the form of the instanton periodic orbit,
although in both cases it is
defined by the stationary point of the total action formed by joining two imaginary-time trajectories together.
In the normal regime, both trajectories are minimum-action paths which travel in positive imaginary time on the reactant or product potential-energy surfaces.
However, in the inverted regime, the product trajectory is a maximum-action path which travels in negative imaginary time.
In both regimes, the energy and momentum are conserved when hopping from one state to the other,
which occurs at a point where the two diabatic potentials are degenerate.
In order to locate the inverted-regime instanton within the ring-polymer formalism,
we search for a high-index saddle point of the total action.
We show that by using the knowledge we have about the expected number of negative eigenvalues
as well as the approximate shape and location of the two trajectories, the algorithm can be made just as efficient as in the normal regime.
Therefore
this approach can be used
to calculate reaction rates across the whole range of electron-transfer reactions
or for simulating spectral line shapes.
In contrast to closed-form expressions for the rate, \cite{NakamuraNonadiabatic,Jang2006ET}
which effectively require a \emph{global} harmonic approximation for the potential-energy surfaces,
the instanton approach locates the instanton without making any assumptions
and takes only a \emph{local} harmonic approximation about the tunnelling path.
All one has to provide to the algorithm
are functions returning the potentials, gradients and Hessians of the two diabatic potential-energy surfaces for a given nuclear configuration.
Moreover, this information is required only in a rather small region
around the crossing point.
This is another reason for the computational efficiency of the method and makes it easily applicable to
molecular systems,
even in conjunction with on-the-fly \textit{ab-initio} electronic-structure codes.
For further enhancements to the efficiency, machine-learning approaches could be applied. \cite{GPR}
Apart from the excellent agreement with exact quantum rates for the model systems studied in this work, one of the main advantages of the method from a qualitative perspective is that it provides direct mechanistic insight into the reaction under investigation.
In this respect, our method appealingly stands out from alternative approaches
which effectively extrapolate the data collected in the normal regime. \cite{Lawrence2018Wolynes}
The instanton orbit can be interpreted as the `dominant tunnelling pathway'
and identifies which nuclear modes are involved in the reaction.
In cases where there are competing reactions, the instanton approach will identify the dominant mechanism.
Comparison to the classical (Marcus or golden-rule TST) rate allows easy identification of the role of quantum nuclear effects (tunnelling and zero-point energy),
which are expected to be particularly important in the inverted regime.
Kinetic isotope effects can also be easily predicted, which often provide the easiest connection to experimental data. \cite{MUSTreview}
A limitation of all instanton methods is that the semiclassical approximation is not valid for
liquid systems.
Nonetheless, a good understanding of the instanton paths has helped in the development of a number of methods based on path-integral sampling which are applicable to reactions in solution. \cite{GRQTST,GRQTST2,QInst}
We hope that the information obtained on the instanton in this work
will help derive novel path-integral-based rate theories which can describe the inverted regime more rigorously.
The method described in this work is, however, well suited to be applied to complex chemical reactions
in the gas-phase, in clusters, on surfaces and in solids.
A wide range of processes involving a transition between two electronic states
can be studied in this way,
so long as the coupling falls in the golden-rule regime.
This encompasses not only electron-transfer reactions, but also spectroscopy, intersystem crossing and electrochemical processes.
Showing the capability of the method in such applications will be an integral part of future work.
\section*{Acknowledgements}
This work was financially supported by the Swiss National Science Foundation through SNSF Project 175696.
\input{arxiv.bbl}
\end{document}
\section{Introduction}
From the very beginning of Galois theory, one problem has stood out. For a given finite group $G$, find a Galois extension $K/\Q$ such that ${\rm Gal}(K/\Q)\simeq G$. This is still an open problem, in spite of the great efforts of a number of mathematicians and substantial progress having been made for specific groups $G$. (See \cite{Se3}.)
A more general problem is to ask the same question over other base fields $F$. This is a challenging problem even for groups $G$ of prime power order.
In this paper we make progress on this classical problem in Galois theory. Moreover this progress fits together well with a new development relating Massey products in Galois cohomology to basic problems in Galois theory.
For all primes $p$ and all fields in the key test case of $n=4$, we construct Galois extensions with the unipotent Galois group $\U_n(\F_p)$ assuming only the existence of some Galois extensions of order $p^3$.
This fits into a program outlined in \cite{MT1} and \cite{MT2}, for the systematic construction of Galois $p$-closed extensions of general fields, assuming only knowledge of Galois extensions of degree less than or equal to $p^3$ and the structure of $p$th power roots of unity in the base field.
Thus both the methods and the results in this paper pave the way to a program for obtaining the structure of maximal pro-$p$-quotients of absolute Galois groups for all fields. We shall now describe some previous work of a number of mathematicians which has influenced our work, as well as its significance for further developments and applications.
Recently there has been substantial progress in Galois cohomology which has changed our perspective on Galois $p$-extensions over general fields. In some remarkable work, M. Rost and V. Voevodsky proved the Bloch-Kato conjecture on the structure of Galois cohomology of general fields. (See \cite{Voe1,Voe2}.)
From this work it follows that there must be enough Galois extensions to make higher degree Galois cohomology decomposable. However the explicit construction of such Galois extensions is completely mysterious.
In \cite{MT1}, \cite{MT2} and \cite{MT5}, two new conjectures, the Vanishing $n$-Massey Conjecture and the Kernel $n$-Unipotent Conjecture were proposed.
These conjectures in \cite{MT1} and \cite{MT2}, and the results in this paper, lead to a program of constructing these previously mysterious Galois extensions in a systematic way.
In these papers it is shown that the truth of these conjectures has some significant implications on the structure of absolute Galois groups.
These conjectures are based on a number of previous considerations. One motivation comes from topological considerations. (See \cite{DGMS} and \cite{HW}.) Another motivation is a program to describe various $n$-central series of absolute Galois groups as kernels of simple Galois representations. (See \cite{CEM, Ef, EM1,EM2, MSp,NQD,Vi}.)
If the Vanishing $n$-Massey Conjecture is true, then by a result in \cite{Dwy}, we obtain a program of building up $n$-unipotent Galois representations of absolute Galois groups by induction on $n$. This is an attractive program because we obtain a procedure of constructing larger Galois $p$-extensions from smaller ones, efficiently using the fact that certain {\it a priori} natural cohomological obstructions to this procedure always vanish.
Recall that for each natural number $n$, $\U_n(\F_p)$ is the group of upper triangular $n\times n$-matrices with entries in $\F_p$ and diagonal entries 1. Then $\U_3(\F_2)$ is isomorphic to the dihedral group of order 8, and if $p$ is odd, then $\U_3(\F_p)$ is isomorphic to the Heisenberg group $H_{p^3}$ of order $p^3$. For all $n\geq 4$ and all primes $p$, we can think of $\U_n(\F_p)$ as ``higher Heisenberg groups'' of order $p^{n(n-1)/2}$.
It is now recognized that these groups play a very special role in current Galois theory. Because $\U_n(\F_p)$ is a Sylow $p$-subgroup of ${\rm GL}_n(\F_p)$, and every finite $p$-group has a faithful linear $n$-dimensional representation over $\F_p$, for some $n$, we see that every finite $p$-group can be embedded into $\U_n(\F_p)$ for some $n$. Besides, the Vanishing $n$-Massey Conjecture and the Kernel $n$-Unipotent Conjecture also indicate some deeper reasons why $\U_n(\F_p)$ is of special interest.
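These group-theoretic facts are easy to verify computationally. The sketch below (our own illustration) enumerates $\U_n(\F_p)$ by brute force, confirming $|\U_n(\F_p)|=p^{n(n-1)/2}$, and for $\U_3(\F_2)$ checks the element-order statistics (two elements of order 4, five of order 2) characteristic of the dihedral group of order 8.

```python
import itertools

def unitriangular_group(n, p):
    """All n x n upper unitriangular matrices over F_p, as tuples of tuples."""
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    group = []
    for vals in itertools.product(range(p), repeat=len(positions)):
        m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), v in zip(positions, vals):
            m[i][j] = v
        group.append(tuple(tuple(row) for row in m))
    return group

def mat_mul(a, b, p):
    """Matrix product over F_p."""
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def element_order(m, p):
    """Multiplicative order of m."""
    n = len(m)
    identity = tuple(tuple(1 if i == j else 0 for j in range(n)) for i in range(n))
    x, k = m, 1
    while x != identity:
        x = mat_mul(x, m, p)
        k += 1
    return k
```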
The constructions of Galois extensions with the Galois group $\U_3(\F_p)$ over fields which admit them, are well-known in the case when the base field is of characteristic not $p$. They are an important basic tool in the Galois theory of $p$-extensions. (See for example \cite[Sections 6.5 and 6.6]{JLY}. Some early papers related to these topics like
\cite{MNg} and \cite{M} now belong to classical work on Galois theory.)
In \cite[Section 4]{GLMS}, a construction of Galois extensions $K/F$, ${\rm char}(F)\not=2$, with ${\rm Gal}(K/F)\simeq \U_4(\F_2)$, was discovered. Already at that time, one reason for searching for this construction was the motivation to find ideas to extend deep results on the characterization of the fixed field of the third 2-Zassenhaus filtration of an absolute Galois group $G_F$ as the compositum of Galois extensions of degree at most 8 (see \cite{Ef, EM2, MSp,Vi}), to a similar characterization of the fixed field of the fourth 2-Zassenhaus filtration of $G_F$.
In retrospect, looking at this construction, one recognizes some elements of the basic theory of Massey products. However at that time the authors of \cite{GLMS} were not familiar with Massey products. It was realized that such a construction would also be desirable for $\U_4(\F_p)$ for all $p$ rather than $\U_4(\F_2)$, but none has been found until now.
In \cite{GLMS}, in the construction of a Galois field extension $K/F$ with ${\rm Gal}(K/F)\simeq \U_4(\F_2)$, a simple criteria was used for an element in $F$ to be a norm from a bicyclic extension of degree 4 modulo non-zero squares in the base field $F$. However in \cite{Me}, A. Merkurjev showed that a straightforward generalization of this criteria for $p$ odd instead of $p=2$, is not true in general.
Therefore it was not clear whether such an \mbox{analogous} construction of Galois extensions $K/F$ with ${\rm Gal}(K/F)\simeq \U_4(\F_p)$ was possible for $p$ odd.
On the other hand, a new consideration in \cite{HW}, \cite{MT1} and \cite{MT2} led us to formulate the Vanishing $n$-Massey Conjecture, and the most natural way to prove this conjecture for $n=3$ in the key non-degenerate case would be through constructing explicit Galois $\U_4(\F_p)$-extensions.
In fact we pursued both cohomological variants of proving the Vanishing 3-Massey Conjecture and the Galois theoretic construction of Galois $\U_4(\F_p)$-extensions.
The story of proving this conjecture and finally constructing Galois $\U_4(\F_p)$-extensions over all fields which admit them is interesting.
First M. J. Hopkins and K. G. \mbox{Wickelgren} in \cite{HW} proved a result which implies that the Vanishing 3-Massey Conjecture with respect to prime 2, is true for all global fields of characteristic not 2. In \cite{MT1} we proved that the result of \cite{HW} is valid for any field $F$.
At the same time, in \cite{MT1} the Vanishing $n$-Massey Conjecture was formulated, and applications on the structure of the quotients of absolute Galois groups were deduced.
In \cite{MT3} we proved that the Vanishing 3-Massey Conjecture with respect to any prime $p$ is true for any global field $F$ containing a primitive $p$-th root of unity.
In \cite{EMa}, I. Efrat and E. Matzri provided alternative proofs for the above-mentioned results in \cite{MT1} and \cite{MT3}.
In \cite{Ma}, E. Matzri proved that for any prime $p$ and for any field $F$ containing a primitive $p$-th root of unity, every defined triple Massey product contains 0. This established the Vanishing 3-Massey Conjecture in the form formulated in \cite{MT1}. Shortly after \cite{Ma} appeared on the arXiv, two new preprints, \cite{EMa2} and \cite{MT5}, appeared nearly simultaneously and independently on the arXiv as well.
In \cite{EMa2}, I. Efrat and E. Matzri replace \cite{Ma} and provide a cohomological approach to the proof of the main result in \cite{Ma}. In \cite{MT5} we also provide a cohomological method of proving the same result. We also extend the vanishing of triple Massey products to all fields, and thus remove the restriction that the base field contains a primitive $p$-th root of unity.
We further provide applications on the structure of some canonical quotients of absolute Galois groups, and also show that some special higher $n$-fold Massey products vanish. Finally in this paper we are able to provide a construction of the Galois $\U_4(\F_p)$-extension $M/F$ for any field $F$ which admits such an extension.
We use this construction to provide a natural new proof, which we were seeking from the beginning of our search for a Galois theoretic proof, of the vanishing of triple Massey products over all fields.
Some interesting cases of ``automatic'' realizations of Galois groups are known. These are cases when the existence of one Galois group over a given field forces the existence of some other Galois groups over this field. (See for example \cite{Je, MS,MSS, MZ, Wh}.) However, nontrivial cases of automatic realization arising from an actual construction embedding smaller Galois extensions into larger ones are relatively rare, and they are difficult to produce.
In our construction we are able, from knowledge of the existence of two Heisenberg Galois extensions of degree $p^3$ over a given base field $F$ as above, to find possibly another pair of Heisenberg Galois extensions whose compositum can be automatically embedded in a Galois $\U_4(\F_p)$-extension. (See also Remark~\ref{rmk:modification}.)
Observe that in all proofs of the Vanishing 3-Massey Conjecture we currently have, constructing Heisenberg Galois extensions of degree $p^3$ has played an important role. For the sake of a possible inductive proof of the Vanishing $n$-Massey Conjecture, it seems important to be able to inductively construct Galois $\U_n(\F_p)$-extensions. This has now been achieved for the induction step from $n=3$ to $n=4$, and it opens up a way to approach the Vanishing 4-Massey Conjecture.
Another motivation for this work, which combines well with the motivation described above, comes from anabelian birational considerations. Roughly speaking, and at various levels of generality and precision, it was observed that small canonical quotients of absolute Galois groups determine surprisingly precise information about base fields, in some cases the entire base field up to isomorphism. (See \cite{BT1,BT2,CEM,EM1,EM2,MSp,Pop}.)
These results suggest that some small canonical quotients of an absolute Galois group, together with knowledge of the roots of unity in the base field, should determine larger canonical quotients of this absolute Galois group.
The Vanishing $n$-Massey Conjecture and the Kernel $n$-Unipotent Conjecture, together with the program of explicit constructions of Galois $\U_n(\F_p)$-extensions, make this project more precise. Thus our main results, Theorems~\ref{thm:construction}, \ref{thm:construction char not p}, \ref{thm:construction char p} and~\ref{thm:U4}, contribute to this project.
A further potentially important application of this work is the theory of Galois $p$-extensions of global fields with restricted ramification and questions surrounding the Fontaine-Mazur conjecture. (See \cite{Ko}, \cite{La},
\cite{McL}, \cite{Ga}, \cite{Se2}.) For example, in \cite[Section 3]{McL} there is a criterion for infinite Hilbert $p$-class field towers over imaginary quadratic number fields relying on the vanishing of certain triple Massey products. The explicit constructions in this paper should be useful for approaching these classical number-theoretic problems.
Investigations of the Galois realizability of some larger $p$-groups among families of small $p$-groups have appeared only relatively recently. (See the very interesting papers \cite{Mi1}, \cite{Mi2}, \cite{GS}.)
In these papers the main concern is understanding cohomological and Brauer group obstructions for the realizability of Galois field extensions with prescribed Galois groups. In our paper the main concern is the explicit constructions and their connections with Massey products.
In other recent papers, \cite{CMS} and \cite{Sch}, the authors succeeded in treating the cases of characteristic equal to $p$ and characteristic different from $p$ nearly uniformly. This is also the case in our paper.
Our paper is organized as follows. In Section 2 we recall basic notions about norm residue symbols and Heisenberg extensions of degree $p^3$. (For convenience we think of the dihedral group of order 8 as the Heisenberg group of order 8.) In Section 3 we provide a detailed construction of Galois $\U_4(\F_p)$-extensions beginning with two ``compatible'' Heisenberg extensions of degree $p^3$. Section 3 is divided into two subsections. In Subsection 3.1 we provide a construction of the required Galois extension $M/F$ over any field $F$ which contains a primitive $p$-th root of unity. In Subsection 3.2 we provide such a construction
for all fields of characteristic not $p$, building on the results and methods of Subsection 3.1.
In Example~\ref{ex:p=2} we illustrate our method with a surprisingly simple construction of Galois $\U_4(\F_2)$-extensions over any field $F$ with ${\rm char}(F)\not=2$.
In Section 4 we provide a required construction for all fields of characteristic $p$.
After the original and classical papers of E. Artin and O. Schreier \cite{ASch} and E. Witt \cite{Wi}, these constructions seem to add new results on the construction of basic Galois extensions $M/F$ with Galois groups $\U_n(\F_p)$, $n = 3$ and $n = 4$. These are aesthetically pleasing constructions of remarkable simplicity. They parallel the constructions in characteristic not $p$, but they are simpler. See also \cite[Section 5.6 and Appendix A1]{JLY} for another procedure to obtain these Galois extensions.
In Section 5 we provide a new, natural Galois-theoretic proof of the vanishing of triple Massey products over all fields in the key non-degenerate case. We also complete the new proof of the vanishing of triple Massey products in the case when a primitive $p$-th root of unity is contained in the base field.
Finally we formulate a necessary and sufficient condition for the existence of a Galois $\U_4(\F_p)$-extension $M/F$ which contains an elementary $p$-extension of any field $F$ (described by three linearly independent characters), and we summarize the main results in Theorem~\ref{thm:U4}.
\\
\\
{\bf Acknowledgements: } We would like to thank M. Ataei, L. Bary-Soroker, S. K. Chebolu, I. Efrat, H. \'{E}snault, E. Frenkel, S. Gille, J. G\"{a}rtner, P. Guillot, D. Harbater, M. J. Hopkins, Ch. Kapulkin, I. K\v{r}\'i\v{z}, J. Labute, T.-Y. Lam, Ch. Maire, E. Matzri, C. McLeman, D. Neftin, J. Nekov\'a\v{r}, R. Parimala, C. Quadrelli, M. Rogelstad, A. Schultz, R. Sujatha, Ng. Q. {\fontencoding{T5}\selectfont Th\'\abreve{}ng}, A. Topaz, K. G. Wickelgren and O. Wittenberg
for having been able to share our enthusiasm for this relatively new subject of Massey products in Galois cohomology, and for their encouragement, support, and inspiring discussions. We are very grateful to the anonymous referee for his/her careful reading of our paper, and for providing us with insightful comments and valuable suggestions which we used to improve our exposition.
\\
\\
{\bf Notation:} If $G$ is a group and $x,y\in G$, then $[x,y]$ denotes the commutator $xy x^{-1}y^{-1}$.
For any element $\sigma$ of finite order $n$ in $G$, we denote by $N_{\sigma}$ the element $1 +\sigma +\cdots +\sigma^{n-1}$ in the integral group ring $\Z[G]$ of $G$.
For a field $F$, we denote by $F_s$ (respectively $G_F$) its separable closure (respectively its absolute Galois group ${\rm Gal}(F_s/F)$). We denote by $F^\times$ the group of non-zero elements of $F$.
For a given profinite group $G$, we call a Galois extension $E/F$ a (Galois) {\it $G$-extension} if the Galois group ${\rm Gal}(E/F)$ is isomorphic to $G$.
For a unital commutative ring $R$ and an integer $n\geq 2$, we denote by $\U_n(R)$ the group of all upper-triangular unipotent $n\times n$-matrices with entries in $R$.
For any (continuous) representation $\rho\colon G \to \U_n(R)$ from a (profinite) group $G$ to $\U_n(R)$ (equipped with the discrete topology), and $1\leq i< j\leq n$, let $\rho_{ij}\colon G \to R$ be the composition of $\rho$ with the projection from $\U_n(R)$ to its $(i,j)$-coordinate.
\section{Heisenberg extensions}
\label{sec:Heisenberg}
The materials in this section have been taken from \cite[Section 3]{MT5}.
\subsection{Norm residue symbols}
\label{subsec:norm residue}
Let $F$ be a field containing a primitive $p$-th root of unity $\xi$.
For any element $a$ in $F^\times$, we shall write $\chi_a$ for the character corresponding to $a$ via the Kummer map $F^\times\to H^1(G_F,\Z/p\Z)={\rm Hom}(G_F,\Z/p\Z)$. From now on we assume that $a$ is not in $(F^\times)^p$. The extension $F(\sqrt[p]{a})/F$ is a Galois extension with Galois group $\langle \sigma_a\rangle\simeq \Z/p\Z$, where $\sigma_a$ satisfies $\sigma_a(\sqrt[p]{a})=\xi\sqrt[p]{a}$.
The character $\chi_a$ defines a homomorphism $\chi^a\in {\rm Hom}(G_F,\frac{1}p\Z/\Z)\subseteq {\rm Hom}(G_F,\Q/\Z)$ by the formula
\[
\chi^a =\frac{1}{p} \chi_a.
\]
Let $b$ be any element in $F^\times$. Then the norm residue symbol may be defined as
\[
(a,b):= (\chi^a,b):= b\cup \delta \chi^a.
\]
Here $\delta$ is the coboundary homomorphism $\delta \colon H^1(G_F,\Q/\Z)\to H^2(G_F,\Z)$ associated to the short exact sequence of trivial $G_F$-modules
\[
0\to \Z\to \Q\to \Q/\Z\to 0.
\]
The cup product $\chi_a\cup \chi_b\in H^2(G_F,\Z/p\Z)$ can be interpreted as the norm residue symbol $(a,b)$. More precisely, we consider the exact sequence
\[
0\longrightarrow \Z/p\Z \longrightarrow F_s^\times \stackrel{x\mapsto x^p}{\longrightarrow} F_s^\times \longrightarrow 1,
\]
where $\Z/p\Z$ has been identified with the group of $p$-th roots of unity $\mu_p$ via the choice of $\xi$. As $H^1(G_F,F_s^\times)=0$, we obtain
\[
0{\longrightarrow} H^2(G_F,\Z/p\Z)\stackrel{i}{\longrightarrow} H^2(G_F,F_s^\times) \stackrel{\times p}{\longrightarrow} H^2(G_F,F_s^\times).
\]
Then one has $i(\chi_a\cup \chi_b)=(a,b)\in H^2(G_F,F_s^\times)$. (See \cite[Chapter XIV, Proposition 5]{Se}.)
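\begin{rmk}
As a standard illustration (the case $p=2$; this is not needed in the sequel): here the norm residue symbol recovers the classical Hilbert symbol. Indeed, $(a,b)=0$ if and only if $b$ is a norm from $F(\sqrt{a})$, that is, if and only if the equation
\[
x^2-ay^2=b
\]
has a solution with $x,y\in F$; equivalently, if and only if the quadratic form $x^2-ay^2-bz^2$ has a nontrivial zero over $F$.
\end{rmk}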
\subsection{Heisenberg extensions}
\label{subsec:Heisenberg}
In this subsection we recall some basic facts about Heisenberg extensions. (See \cite[Chapter 2, Section 2.4]{Sha} and \cite[Sections 6.5 and 6.6]{JLY}.)
Assume that $a,b$ are elements in $F^\times$, which are linearly independent modulo $(F^\times)^p$. Let $K= F(\sqrt[p]{a},\sqrt[p]{b})$. Then $K/F$ is a Galois extension whose Galois group is generated by $\sigma_a$ and $\sigma_b$. Here $\sigma_a(\sqrt[p]{b})=\sqrt[p]{b}$, $\sigma_a(\sqrt[p]{a})=\xi \sqrt[p]{a}$; $\sigma_b(\sqrt[p]{a})=\sqrt[p]{a}$, $\sigma_b(\sqrt[p]{b})=\xi \sqrt[p]{b}$.
We consider a map $\U_3(\Z/p\Z)\to (\Z/p\Z)^2$ which sends $\begin{bmatrix}
1 & x & z\\
0 & 1 & y\\
0 & 0 & 1
\end{bmatrix}$
to $(x,y)$. Then we have the following embedding problem
\[
\xymatrix{
& & &G_F \ar@{->}[d]^{\bar\rho} \\
0\ar[r]& \Z/p\Z \ar[r] &\U_3(\Z/p\Z)\ar[r] &(\Z/p\Z)^2\ar[r] &1,
}
\]
where $\bar\rho$ is the map $(\chi_a,\chi_b)\colon G_F\to {\rm Gal}(K/F)\simeq (\Z/p\Z)^2$. (The last isomorphism ${\rm Gal}(K/F)\simeq (\Z/p\Z)^2$ is the one which sends $\sigma_a$ to $(1,0)$ and $\sigma_b$ to $(0,1)$.)
Assume that $\chi_a\cup \chi_b=0$. Then the norm residue symbol $(a,b)$ is trivial. Hence there exists $\alpha$ in $F(\sqrt[p]{a})$ such that $N_{F(\sqrt[p]{a})/F}(\alpha)=b$ (see \cite[Chapter XIV, Proposition 4 (iii)]{Se}). We set
\[
A_0=\alpha^{p-1} \sigma_a(\alpha^{p-2})\cdots \sigma_a^{p-2}(\alpha)=\prod_{i=0}^{p-2} \sigma_a^{i}(\alpha^{p-i-1}) \in F(\sqrt[p]{a}).
\]
\begin{lem} Let $f_a$ be an element in $F^\times$. Let $A=f_aA_0$. Then we have
\label{lem:operator}
\[
\frac{\sigma_a(A)}{A}=\frac{N_{F(\sqrt[p]{a})/F}(\alpha)}{\alpha^p}=\frac{b}{\alpha^p}.
\]
\end{lem}
\begin{proof}Observe that $\dfrac{\sigma_a(A)}{A}=\dfrac{\sigma_a(A_0)}{A_0}$.
The lemma then follows from the identity
\[
(s-1)\sum_{i=0}^{p-2} (p-i-1)s^{i} = \sum_{i=0}^{p-1} s^i -p s^0.
\qedhere
\]
\end{proof}
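\begin{rmk}
As an illustration, for $p=3$ the identity in the proof above reads
\[
(s-1)(2+s)=s^2+s-2=(1+s+s^2)-3.
\]
Applying the group-ring element $\sum_{i=0}^{p-2}(p-i-1)\sigma_a^{i}$ to $\alpha$ multiplicatively gives $A_0$, so acting by $\sigma_a-1$ and using the identity with $s=\sigma_a$ yields $\sigma_a(A_0)/A_0=N_{F(\sqrt[p]{a})/F}(\alpha)/\alpha^p$, as claimed.
\end{rmk}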
\begin{prop}
\label{prop:Heisenberg extension}
Assume that $\chi_a\cup \chi_b=0$. Let $f_a$ be an element in $F^\times$, and let $A=f_aA_0$ be defined as above. Then the homomorphism $\bar{\rho}:= (\chi_a,\chi_b)\colon G_F\to \Z/p\Z\times \Z/p\Z$ lifts to a homomorphism $\rho\colon G_F\to \U_3(\Z/p\Z)$, which defines a Heisenberg extension of $F$.
\end{prop}
\begin{proof}[Sketch of Proof]
Let $L:=K(\sqrt[p]{A})$. Then $L/F$ is a Galois extension.
Let $\tilde{\sigma}_a\in {\rm Gal}(L/F)$ (resp. $\tilde{\sigma}_b\in {\rm Gal}(L/F)$) be an extension of $\sigma_a$ (resp. $\sigma_b$). Since $\sigma_b(A)=A$, we have $\tilde{\sigma}_b(\sqrt[p]{A})=\xi^j\sqrt[p]{A}$, for some $j\in \Z$.
Hence $\tilde{\sigma}_b^p(\sqrt[p]{A})=\sqrt[p]{A}$. This implies that $\tilde{\sigma}_b$ is of order $p$.
On the other hand, we have
$
\tilde{\sigma}_a(\sqrt[p]{A})^p=\sigma_a(A) =A \dfrac{b}{\alpha^p}.
$
Hence $\tilde{\sigma}_a(\sqrt[p]{A})=\xi^i \sqrt[p]{A}\dfrac{\sqrt[p]{b}}{\alpha}$, for some $i\in \Z$. Then $\tilde{\sigma}_a^{p}(\sqrt[p]{A})=\sqrt[p]{A}$. Thus $\tilde{\sigma}_a$ is of order $p$.
If we set $\sigma_A:=[\tilde{\sigma}_a,\tilde{\sigma}_b]$, then $\sigma_A(\sqrt[p]{A})=\xi^{-1} \sqrt[p]{A}$. This implies that $\sigma_A$ is of order $p$. Also one can check that
\[
[\tilde{\sigma}_a,\sigma_A]=[\tilde{\sigma}_b,\sigma_A]=1.
\]
We can define an isomorphism $\varphi \colon {\rm Gal}(L/F)\to \U_3(\Z/p\Z)$ by letting
\[
\tilde{\sigma}_a \mapsto \begin{bmatrix}
1& 1 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{bmatrix},
\tilde{\sigma}_b\mapsto
\begin{bmatrix}
1& 0 & 0 \\
0& 1 & 1 \\
0& 0 & 1
\end{bmatrix},
\sigma_{A}\mapsto
\begin{bmatrix}
1& 0 & 1 \\
0& 1 & 0 \\
0& 0 & 1
\end{bmatrix}.
\]
Then the composition $\rho\colon G_F\to {\rm Gal}(L/F)\stackrel{\varphi}{\to} \U_3(\Z/p\Z)$ is the desired lifting of $\bar{\rho}$.
Note that $[L:F]=p^3$. Hence there are exactly $p$ extensions of $\sigma_a\in {\rm Gal}(K/F)$ to automorphisms in ${\rm Gal}(L/F)$, since $[L:K]=p^3/p^2=p$. Therefore, for later use, we can choose an extension, still denoted by $\sigma_a\in {\rm Gal}(L/F)$, of $\sigma_a\in {\rm Gal}(K/F)$ in such a way that $\sigma_a(\sqrt[p]{A})= \sqrt[p]{A}\dfrac{\sqrt[p]{b}}{\alpha}$.
\end{proof}
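\begin{rmk}
For the reader's convenience, we record the matrix identity underlying the isomorphism $\varphi$ in the proof above. Writing $e_{ij}$ for the matrix units, one checks directly that
\[
\left[I+e_{12},\,I+e_{23}\right]=I+e_{13},
\]
and that $I+e_{13}$ commutes with both $I+e_{12}$ and $I+e_{23}$, since the relevant products of matrix units vanish. This matches the relations $\sigma_A=[\tilde{\sigma}_a,\tilde{\sigma}_b]$ and $[\tilde{\sigma}_a,\sigma_A]=[\tilde{\sigma}_b,\sigma_A]=1$ established above.
\end{rmk}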
\section{The construction of $\U_4(\F_p)$-extensions: the case of characteristic $\not=p$}
\subsection{Fields containing primitive $p$-th roots of unity}
\label{subsec:with p-primitive}
In this subsection we assume that $F$ is a field containing a primitive $p$-th root $\xi$ of unity.
The following result can be deduced from Theorem~\ref{thm:U4}, but for the convenience of the reader we include a proof here.
\begin{prop}
\label{prop:U4 existence}
Assume that there exists a Galois extension $M/F$ such that ${\rm Gal}(M/F)\simeq \U_4(\F_p)$. Then there exist $a,b,c\in F^\times$ such that $a,b,c$ are linearly independent modulo $(F^\times)^p$ and $(a,b)=(b,c)=0$. Moreover $M$ contains $F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$.
\end{prop}
\begin{proof}
Let $\rho$ be the composite $\rho\colon G_F\surj{\rm Gal}(M/F)\simeq \U_4(\F_p)$. Then $\rho_{12},\rho_{23}$ and $\rho_{34}$ are elements in ${\rm Hom}(G_F,\F_p)$. Hence there are $a,b$ and $c$ in $F^\times$ such that $\chi_a=\rho_{12}$, $\chi_b=\rho_{23}$ and $\chi_c=\rho_{34}$. Since $\rho$ is a group homomorphism, by looking at the coboundaries of $\rho_{13}$ and $\rho_{24}$, we see that
\[
\chi_a\cup \chi_b =\chi_b\cup\chi_c=0 \in H^2(G_F,\F_p).
\]
This implies that $(a,b)=(b,c)=0$ by \cite[Chapter XIV, Proposition 5]{Se}.
Let $\varphi:=(\chi_a,\chi_b,\chi_c)\colon G_F\to (\F_p)^3$. Then $\varphi$ is surjective. By Galois correspondence, we have
\[
{\rm Gal}(F_s/F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c}))= \ker\chi_a\cap \ker\chi_b\cap\ker\chi_c=\ker\varphi.
\]
This implies that ${\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F)\simeq (\F_p)^3$. Hence by Kummer theory, we see that $a,b$ and $c$ are linearly independent modulo $(F^\times)^p$. Clearly, $M$ contains $F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$.
\end{proof}
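\begin{rmk}
The coboundary argument in the proof above can be made explicit. Comparing the $(1,3)$-entries in the matrix identity $\rho(gh)=\rho(g)\rho(h)$ yields
\[
\rho_{13}(gh)=\rho_{13}(g)+\rho_{12}(g)\rho_{23}(h)+\rho_{13}(h),
\]
so the $2$-cocycle $(g,h)\mapsto \rho_{12}(g)\rho_{23}(h)=\chi_a(g)\chi_b(h)$, which represents $\chi_a\cup\chi_b$, is the coboundary of the $1$-cochain $-\rho_{13}$. Hence $\chi_a\cup\chi_b=0$, and the $(2,4)$-entries give $\chi_b\cup\chi_c=0$ in the same way.
\end{rmk}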
Conversely, we shall see in this section that given elements satisfying these necessary conditions for the existence of Galois $\U_4(\F_p)$-extensions over $F$, as in Proposition~\ref{prop:U4 existence}, we can construct a Galois extension $M/F$ with Galois group isomorphic to $\U_4(\F_p)$.
From now on we assume that we are given elements $a$, $b$ and $c$ in $F^\times$ such that
$a$, $b$ and $c$ are linearly independent modulo $(F^\times)^p$ and that
$(a,b)=(b,c)=0$. We shall construct a Galois $\U_4(\F_p)$-extension $M/F$ such that $M$ contains $F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$.
First we note that $F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F$ is a Galois extension with ${\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F)$ generated by $\sigma_a,\sigma_b,\sigma_c$. Here
\[
\begin{aligned}
\sigma_a(\sqrt[p]{a})&=\xi \sqrt[p]{a}, \sigma_a(\sqrt[p]{b})=\sqrt[p]{b}, \sigma_a(\sqrt[p]{c})=\sqrt[p]{c};\\
\sigma_b(\sqrt[p]{a})&=\sqrt[p]{a}, \sigma_b(\sqrt[p]{b})=\xi \sqrt[p]{b}, \sigma_b(\sqrt[p]{c})= \sqrt[p]{c};\\
\sigma_c(\sqrt[p]{a})&=\sqrt[p]{a}, \sigma_c(\sqrt[p]{b})=\sqrt[p]{b}, \sigma_c(\sqrt[p]{c})=\xi \sqrt[p]{c}.
\end{aligned}
\]
Let $E= F(\sqrt[p]{a},\sqrt[p]{c})$. Since $(a,b)=(b,c)=0$, there are $\alpha$ in $F(\sqrt[p]{a})$ and $\gamma$ in $F(\sqrt[p]{c})$ (see \cite[Chapter XIV, Proposition 4 (iii)]{Se}) such that
\[
N_{F(\sqrt[p]{a})/F}(\alpha)=b=N_{F(\sqrt[p]{c})/F}(\gamma).
\]
Let $G$ be the Galois group ${\rm Gal}(E/F)$. Then $G=\langle \sigma_a,\sigma_c \rangle $, where $\sigma_a\in G$ (respectively $\sigma_c\in G$) is the restriction of $\sigma_a\in {\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F)$ (respectively $\sigma_c\in {\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F)$).
Our next goal is to find an element $\delta$ in $E^\times$ such that the Galois closure of $E(\sqrt[p]{\delta})$ is our desired $\U_4(\F_p)$-extension of $F$.
We define
\[
\begin{aligned}
C_0=\prod_{i=0}^{p-2} \sigma_c^{i}(\gamma^{p-i-1}) \in F(\sqrt[p]{c}),
\end{aligned}
\]
and define $B:=\gamma/\alpha$. Then we have the following result, which follows from Lemma~\ref{lem:operator} (see \cite[Proposition 3.2]{Ma} and/or \cite[Lemma 4.2]{MT5}).
\begin{lem}
\label{lem:operators}
We have
\begin{enumerate}
\item $\dfrac{\sigma_a(A_0)}{A_0}=N_{\sigma_c}(B)$.
\item $\dfrac{\sigma_c(C_0)}{C_0}=N_{\sigma_a}(B)^{-1}$.
\QEDB
\end{enumerate}
\end{lem}
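\begin{rmk}
For the reader's convenience, we sketch how this follows from Lemma~\ref{lem:operator}. Since $\sigma_c$ fixes $F(\sqrt[p]{a})$ pointwise, we have
\[
N_{\sigma_c}(B)=\frac{N_{\sigma_c}(\gamma)}{N_{\sigma_c}(\alpha)}=\frac{b}{\alpha^p},
\]
which equals $\sigma_a(A_0)/A_0$ by Lemma~\ref{lem:operator} (applied with $f_a=1$). Part (2) follows in the same way, with the roles of $a$ and $c$ interchanged.
\end{rmk}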
\begin{rmk}
\label{rmk:modification}
We would like to informally explain the meaning of the next lemma.
From our hypothesis $(a,b)=0=(b,c)$ and from Subsection~\ref{subsec:Heisenberg}, we see that we can obtain two Heisenberg extensions $L_1=F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{A_0})$ and $L_2=F(\sqrt[p]{b},\sqrt[p]{c},\sqrt[p]{C_0})$ of $F$. Here we have chosen specific elements $A_0\in F(\sqrt[p]{a})$ and $C_0\in F(\sqrt[p]{c})$. However we may not be able to embed the compositum of $L_1$ and $L_2$ into our desired Galois extension $M/F$ with $\Gal(M/F)\simeq \U_4(\F_p)$.
We know that we can modify the element $A_0$ by any element $f_a\in F^\times$ and the element $C_0$ by any element $f_c\in F^\times$ obtaining elements $A=f_aA_0$ and $C=f_cC_0$ instead of $A_0$ and $C_0$. This new choice of elements may change the fields $L_1$ and $L_2$ but the new fields will still be Heisenberg extensions containing $F(\sqrt[p]{a},\sqrt[p]{b})$ and $F(\sqrt[p]{b},\sqrt[p]{c})$ respectively.
The next lemma will provide us with a suitable modification of $A_0$ and $C_0$. From the proof of Theorem~\ref{thm:construction} we shall see that the compositum of these modified Heisenberg extensions can indeed be embedded into a Galois extension $M/F$ with $\Gal(M/F)\simeq \U_4(\F_p)$. This explains our comment in the introduction in the paragraph related to the ``automatic realization of Galois groups''.
\end{rmk}
\begin{lem}
\label{lem:modification}
Assume that there exist $C_1, C_2\in E^\times$ such that
\[ B=\frac{\sigma_a(C_1)}{C_1} \frac{C_2}{\sigma_c(C_2)}.\]
Then $N_{\sigma_c}(C_1)/A_0$ and $N_{\sigma_a}(C_2)/C_0$ are in $F^\times$. Moreover, if we let $A=N_{\sigma_c}(C_1)\in F(\sqrt[p]{a})^\times$ and $C=N_{\sigma_a}(C_2)\in F(\sqrt[p]{c})^\times$, then there exists
$\delta \in E^\times$ such that
\[
\begin{aligned}
\frac{\sigma_c(\delta)}{\delta}&= A C_1^{-p},\\
\frac{\sigma_a(\delta)}{\delta}&=C C_2^{-p}.
\end{aligned}
\]
\end{lem}
\begin{proof} By Lemma~\ref{lem:operators}, we have
\[
\frac{\sigma_a(A_0)}{A_0}= N_{\sigma_c}(B) = N_{\sigma_c}\left(\frac{\sigma_a(C_1)}{C_1}\right) N_{\sigma_c}\left(\frac{C_2}{\sigma_c(C_2)}\right)
= \frac{\sigma_a(N_{\sigma_c}(C_1))}{N_{\sigma_c}(C_1)}.
\]
This implies that
\[
\frac{N_{\sigma_c}(C_1)}{A_0}= \sigma_a\left(\frac{N_{\sigma_c}(C_1)}{A_0}\right).
\]
Hence
\[
\dfrac{N_{\sigma_c}(C_1)}{A_0} \in F(\sqrt[p]{c})^\times\cap F(\sqrt[p]{a})^\times=F^\times.
\]
By Lemma~\ref{lem:operators}, we have
\[
\frac{\sigma_c(C_0)}{C_0}= N_{\sigma_a}(B^{-1}) = N_{\sigma_a}\left(\frac{C_1}{\sigma_a(C_1)}\right) N_{\sigma_a}\left(\frac{\sigma_c(C_2)}{C_2}\right)
= \frac{\sigma_c(N_{\sigma_a}(C_2))}{N_{\sigma_a}(C_2)}.
\]
This implies that
\[
\frac{N_{\sigma_a}(C_2)}{C_0}= \sigma_c\left(\frac{N_{\sigma_a}(C_2)}{C_0}\right).
\]
Hence
\[
\dfrac{N_{\sigma_a}(C_2)}{C_0} \in F(\sqrt[p]{a})^\times\cap F(\sqrt[p]{c})^\times=F^\times.
\]
Clearly, one has
\[
\begin{aligned}
N_{\sigma_a}(CC_2^{-p})&=1,\\
N_{\sigma_c}(A C_1^{-p})&=1.
\end{aligned}
\]
We also have
\[
\begin{aligned}
\frac{\sigma_a(AC_1^{-p})}{AC_1^{-p}}
\frac{CC_2^{-p}}{\sigma_c(CC_2^{-p})}&= \frac{\sigma_a(A)}{A} \left(\frac{\sigma_a(C_1)}{C_1}\right)^{-p} \frac{C}{\sigma_c(C)}\left(\frac{C_2}{\sigma_c(C_2)}\right)^{-p}\\
&= \frac{b}{\alpha^p}\frac{\gamma^p}{b} B^{-p}\\
&=1.
\end{aligned}
\]
Hence, we have
\[
\frac{\sigma_a(AC_1^{-p})}{AC_1^{-p}}= \frac{\sigma_c(CC_2^{-p})}{CC_2^{-p}}.
\]
From \cite[page 756]{Co} we see that there exists $\delta \in E^\times$ such that
\[
\begin{aligned}
\frac{\sigma_c(\delta)}{\delta}&= A C_1^{-p},\\
\frac{\sigma_a(\delta)}{\delta}&=C C_2^{-p},
\end{aligned}
\]
as desired.
\end{proof}
\begin{rmk}
The result of I. G. Connell which we use in the above proof, is a variant of Hilbert's Theorem 90. This result was independently discovered by S. Amitsur and D. Saltman in \cite[Lemma 2.4]{AS}. (See also \cite[Theorem 2]{DMSS} for the case $p=2$.)
\end{rmk}
\begin{lem}
\label{lem:C1C2}
There exists $e\in E^\times$ such that $B= \dfrac{\sigma_a\sigma_c(e)}{e}$. Furthermore, for such an element $e$ the following statements are true.
\begin{enumerate}
\item If we set $C_1:=\sigma_c(e)\in E^\times$, $C_2:=e^{-1}\in E^\times$, then $B=\dfrac{\sigma_a(C_1)}{C_1} \dfrac{C_2}{\sigma_c(C_2)}$.
\item If we set $C_1:=e \in E^\times$, $C_2:=(eB)\sigma_c(eB)\cdots \sigma_c^{p-2}(eB)\in E^\times$, then $B=\dfrac{\sigma_a(C_1)}{C_1} \dfrac{C_2}{\sigma_c(C_2)}$.
\end{enumerate}
\end{lem}
\begin{proof}
We have
\[ N_{\sigma_a\sigma_c}(B)=\frac{N_{\sigma_a\sigma_c}(\gamma)}{N_{\sigma_a\sigma_c}(\alpha)}=\frac{N_{\sigma_c}(\gamma)}{N_{\sigma_a}(\alpha)}=\frac{b}{b}=1.
\]
Hence by Hilbert's Theorem 90, there exists $e \in E^\times$ such that
$B=\dfrac{\sigma_a\sigma_c(e)}{e}. $
\\
\\
(1) Clearly, we have
\[
\dfrac{\sigma_a(C_1)}{C_1} \frac{C_2}{\sigma_c(C_2)}=\dfrac{\sigma_a(\sigma_c(e))}{\sigma_c(e)} \frac{e^{-1}}{\sigma_c(e^{-1})}=\dfrac{\sigma_a\sigma_c(e)}{e}=B.
\]
(2) From $B=\dfrac{\sigma_a\sigma_c(e)}{e}$, we see that $eB= \sigma_a\sigma_c(e)$. Hence $\sigma_c^{p-1}(eB)=\sigma_a(e)$. Therefore
\[
B=\frac{\sigma_a(e)}{e}\frac{eB}{\sigma_c^{p-1}(eB)}= \frac{\sigma_a(C_1)}{C_1} \frac{C_2}{\sigma_c(C_2)}.
\qedhere
\]
\end{proof}
\begin{thm}
\label{thm:construction}
Let the notation and assumption be as in Lemma~\ref{lem:modification}. Let $M:= E(\sqrt[p]{\delta},\sqrt[p]{A},\sqrt[p]{C},\sqrt[p]{b})$. Then $M/F$ is a Galois extension, $M$ contains $F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$, and ${\rm Gal}(M/F)\simeq \U_4(\F_p)$.
\end{thm}
\begin{proof}
Let $W^*$ be the $\F_p$-vector space in $E^\times/(E^\times)^p$ generated by $[b]_E,[A]_E,[C]_E$ and $[\delta]_E$. Here, for any $x\not=0$ in a field $L$, we denote by $[x]_L$ the image of $x$ in $L^\times/(L^\times)^p$. Since
\begin{align}
\sigma_c(\delta)&= \delta A C_1^{-p} & \text{ (by Lemma~\ref{lem:modification})}, \label{eq:1}\\
\sigma_a(\delta)&=\delta C C_2^{-p} &\text{ (by Lemma~\ref{lem:modification})},\label{eq:2}\\
\sigma_a(A)&=A \frac{b}{\alpha^p} \label{eq:3} &\text{ (by Lemma~\ref{lem:operator})},\\
\sigma_c(C)&=C \frac{b}{\gamma^p} \label{eq:4} &\text{ (by Lemma~\ref{lem:operator})},
\end{align}
we see that $W^*$ is in fact an $\F_p[G]$-module. Hence $M/F$ is a Galois extension by Kummer theory.
\\
\\
{\bf Claim:} $\dim_{\F_p}(W^*)=4$. Hence $[M:F]=[M:E][E:F]=p^4\cdot p^2=p^6.$\\
{\it Proof of Claim:} From our hypothesis that $\dim_{\F_p}\langle [a]_F,[b]_F,[c]_F\rangle=3$, we see that $\langle [b]_E\rangle \simeq \F_p$.
Clearly, $\langle[b]_E\rangle \subseteq (W^*)^G$.
From \eqref{eq:3} one gets the relation
\[
[\sigma_a(A)]_E= [A]_E [b]_E.
\]
This implies that $[A]_E$ is not in $(W^*)^G$. Hence $\dim_{\F_p}\langle [b]_E,[A]_E\rangle=2$.
From \eqref{eq:4} one gets the relation
\[ [\sigma_c(C)]_E= [C]_E [b]_E.
\]
This implies that $[C]_E$ is not in $(W^*)^{\sigma_c}$. But we have $\langle [b]_E,[A]_E\rangle\subseteq (W^*)^{\sigma_c}$. Hence
\[
\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E\rangle =3.
\]
Observe that the element $(\sigma_a-1)(\sigma_c-1)$ annihilates the $\F_p[G]$-module $\langle [b]_E,[A]_E,[C]_E\rangle$, while by \eqref{eq:1} and \eqref{eq:3} one has
\[
(\sigma_a-1)(\sigma_c-1)[\delta]_E= \frac{\sigma_a([A]_E)}{[A]_E}= [b]_E.
\]
Therefore one has
\[
\dim_{\F_p}W^*=\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E,[\delta]_E\rangle =4.
\]
\\
\\
Let $H^{a,b}=F(\sqrt[p]{a},\sqrt[p]{A},\sqrt[p]{b})$ and $H^{b,c}=F(\sqrt[p]{c},\sqrt[p]{C},\sqrt[p]{b})$.
Let
\[N:=H^{a,b}H^{b,c}=F(\sqrt[p]{a},\sqrt[p]{c},\sqrt[p]{b},\sqrt[p]{A},\sqrt[p]{C})=E(\sqrt[p]{b},\sqrt[p]{A},\sqrt[p]{C}).\]
Then $N/F$ is a Galois extension of degree $p^5$.
This is because ${\rm Gal}(N/E)$ is dual to the $\F_p[G]$-submodule $\langle [b]_E,[A]_E,[C]_E \rangle$ via Kummer theory,
and the proof of the claim above shows that $\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E\rangle =3$. We have the following commutative diagram
\[
\xymatrix{
{\rm Gal}(N/F) \ar@{->>}[r] \ar@{->>}[d] & {\rm Gal}(H^{a,b}/F) \ar@{->>}[d]\\
{\rm Gal}(H^{b,c}/F) \ar@{->>}[r] & {\rm Gal}(F(\sqrt[p]{b})/F).
}
\]
So we have a homomorphism $\eta$ from ${\rm Gal}(N/F)$ to the pull-back ${\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\sqrt[p]{b})/F)} {\rm Gal}(H^{a,b}/F)$:
\[
\eta\colon {\rm Gal}(N/F) \longrightarrow {\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\sqrt[p]{b})/F)} {\rm Gal}(H^{a,b}/F),
\]
which makes the obvious diagrams commute.
We claim that $\eta$ is injective. Indeed, let $\sigma$ be an element in $\ker\eta$. Then $\sigma\mid_{H^{a,b}}=1$ in ${\rm Gal}(H^{a,b}/F)$, and $\sigma\mid_{H^{b,c}}=1$ in ${\rm Gal}(H^{b,c}/F)$. Since $N$ is the compositum of $H^{a,b}$ and $H^{b,c}$, this implies that $\sigma=1$, as desired.
Since $|{\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\sqrt[p]{b})/F)} {\rm Gal}(H^{a,b}/F)|=p^5=|{\rm Gal}(N/F)|$, we see that $\eta$ is actually an isomorphism.
As in the proof of Proposition~\ref{prop:Heisenberg extension}, we can choose an extension $\sigma_a\in {\rm Gal}(H^{a,b}/F)$ of $\sigma_a\in {\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b})/F)$ (more precisely, of $\sigma_a\hspace*{-5pt}\mid_{F(\sqrt[p]{a},\sqrt[p]{b})}\in {\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b})/F)$) in such a way that
\[
\sigma_a(\sqrt[p]{A})= \sqrt[p]{A} \frac{\sqrt[p]{b}}{\alpha}.
\]
Since the square commutative diagram above is a pull-back, we can choose an extension $\sigma_a\in {\rm Gal}(N/F)$ of $\sigma_a\in {\rm Gal}(H^{a,b}/F)$ in such a way that
\[
\sigma_a\mid_{H^{b,c}}=1.
\]
Now we can choose any extension $\sigma_a\in {\rm Gal}(M/F)$ of $\sigma_a\in{\rm Gal}(N/F)$. Then we have
\[
\sigma_a(\sqrt[p]{A})= \sqrt[p]{A} \frac{\sqrt[p]{b}}{\alpha} \; \text{ and } \sigma_a\mid_{H^{b,c}}=1.
\]
Similarly, we can choose an extension $\sigma_c\in {\rm Gal}(M/F)$ of $\sigma_c\in {\rm Gal}(F(\sqrt[p]{b},\sqrt[p]{c})/F)$ in such a way that
\[
\sigma_c(\sqrt[p]{C})= \sqrt[p]{C} \frac{\sqrt[p]{b}}{\gamma}, \text{ and } \sigma_c\mid_{H^{a,b}}=1.
\]
We define $\sigma_b\in {\rm Gal}(M/E)$ to be the element which is dual to $[b]_E$ via Kummer theory. In other words, we require that
\[
\sigma_b(\sqrt[p]{b})=\xi \sqrt[p]{b},
\]
and that $\sigma_b$ acts trivially on $\sqrt[p]{A}$, $\sqrt[p]{C}$ and $\sqrt[p]{\delta}$. Considering $\sigma_b$ as an element in ${\rm Gal}(M/F)$, it is clear that $\sigma_b$ is an extension of $\sigma_b\in {\rm Gal}(F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})/F)$.
Let $W={\rm Gal}(M/E)$ and let $H={\rm Gal}(M/F)$. Then we have the following exact sequence
\[
1\to W\to H\to G\to 1.
\]
By Kummer theory, it follows that $W$ is dual to $W^*$, and hence $W\simeq (\Z/p\Z)^4$. In particular, we have $|H|=p^6$.
Recall that from \cite[Theorem 1]{BD}, we know that the group $\U_4(\F_p)$ has a presentation with generators $s_1,s_2,s_3$ subject to the following relations
\[
\label{eq:R}
\tag{R}
\begin{aligned}
s_1^p=s_2^p=s_3^p=1,\\
[s_1,s_3]=1,\\
[s_1,[s_1,s_2]]=[s_2,[s_1,s_2]]=1,\\
[s_2,[s_2,s_3]]=[s_3,[s_2,s_3]]=1,\\
[[s_1,s_2],[s_2,s_3]]=1.
\end{aligned}
\]
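As a sanity check (standard and easily verified), these relations hold for the generators $s_1=I+e_{12}$, $s_2=I+e_{23}$, $s_3=I+e_{34}$ of $\U_4(\F_p)$, where $e_{ij}$ denotes a matrix unit: one has $(I+e_{i,i+1})^p=I+pe_{i,i+1}=I$ over $\F_p$, as well as $[s_1,s_2]=I+e_{13}$ and $[s_2,s_3]=I+e_{24}$, and each of the remaining relations in \eqref{eq:R} holds because the products of the two relevant matrix units vanish in both orders (for instance $e_{12}e_{34}=e_{34}e_{12}=0$), so that the corresponding matrices commute.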
Note that $|\Gal(M/F)|=p^6$.
So in order to show that $\Gal(M/F)\simeq \U_4(\F_p)$, we shall show that $\sigma_a,\sigma_b$ and $\sigma_c$ generate $\Gal(M/F)$ and that they satisfy these above relations.
\\
\\
{\bf Claim:} The elements $\sigma_a,\sigma_b$ and $\sigma_c$ generate $\Gal(M/F)$.\\
{\it Proof of Claim:} Let $K$ be the maximal $p$-elementary subextension of $M$. Note that $\Gal(M/F)$ is a $p$-group. So in order to show that $\sigma_a,\sigma_b$ and $\sigma_c$ generate $\Gal(M/F)$, we only need to show that (the restrictions of) these elements generate $\Gal(K/F)$ by the Burnside basis theorem. (See e.g. \cite[Theorem 12.2.1]{Ha} or \cite[Theorem 4.10]{Ko}.)
We shall now determine the field $K$.
By Kummer theory, $K=F(\sqrt[p]{\Delta})$, where $\Delta= (F^\times\cap (M^\times)^p)/(F^\times)^p$.
Let $[f]_F$ be any element in $\Delta$, where $f\in F^\times \cap (M^\times)^p\subseteq E^\times \cap (M^\times)^p$.
By Kummer theory, one has
$
W^*=(E^\times \cap (M^\times)^p)/(E^\times)^p.
$
Hence we can write
\[
[f]_E=[\delta]_E^{\epsilon_\delta} [A]_E^{\epsilon_A} [C]_E^{\epsilon_C} [b]_E^{\epsilon_b},
\]
where $\epsilon_\delta,\epsilon_A,\epsilon_C,\epsilon_b\in \Z$. By applying $(\sigma_a-1)(\sigma_c-1)$ to $[f]_E$ we get
$
[1]_E=[b]_E^{\epsilon_\delta}.
$
(See the proof of the first claim above.)
Thus $\epsilon_\delta $ is divisible by $p$, and one has
\[
[f]_E=[A]_E^{\epsilon_A} [C]_E^{\epsilon_C} [b]_E^{\epsilon_b}.
\]
By applying $\sigma_a-1$ to both sides of this equation, we get
$
[1]_E=[b]_E^{\epsilon_A}.
$
Thus $\epsilon_A$ is divisible by $p$. Similarly, $\epsilon_C$ is also divisible by $p$. Hence $f=b^{\epsilon_b}e^p$ for some $e\in E$. Since $b$ and $f$ are in $F$, $e^p$ is in $F^\times \cap (E^\times)^p$ and $[e^p]_F$ is in $\langle [a]_F,[c]_F\rangle$. Therefore $[f]_F$ is in $\langle [a]_F,[b]_F,[c]_F\rangle$ and
\[
\Delta= \langle [a]_F,[b]_F,[c]_F\rangle.
\]
So $K=F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$. Then it is clear that $\sigma_a,\sigma_b$ and $\sigma_c$ generate $\Gal(K/F)$ and the claim follows.
\\
\\
{\bf Claim:} The order of $\sigma_a$ is $p$.\\
{\it Proof of Claim:}
As in the proof of Proposition~\ref{prop:Heisenberg extension}, we see that $\sigma_a^p(\sqrt[p]{A})=\sqrt[p]{A}$.
Since $\sigma_a(\delta)=\delta C C_2^{-p}$ (equation~\eqref{eq:2}), one has $\sigma_a(\sqrt[p]{\delta})=\xi^i \sqrt[p]{\delta} \sqrt[p]{C} C_2^{-1}$ for some $i\in \Z$. This implies that
\[
\begin{aligned}
\sigma_a^2(\sqrt[p]{\delta})&= \xi^i\sigma_a(\sqrt[p]{\delta}) \sigma_a(\sqrt[p]{C}) \sigma_a(C_2)^{-1}\\
&= \xi^{2i} \sqrt[p]{\delta} (\sqrt[p]{C})^2 C_2^{-1}\sigma_a(C_2)^{-1}.
\end{aligned}
\]
Inductively, we obtain
\[
\begin{aligned}
\sigma_a^{p}(\sqrt[p]{\delta})&= \xi^{pi}\sqrt[p]{\delta}(\sqrt[p]{C})^p N_{\sigma_a}(C_2)^{-1}\\
&= \sqrt[p]{\delta}\, C\, N_{\sigma_a}(C_2)^{-1}\\
& =\sqrt[p]{\delta}.
\end{aligned}
\]
Therefore, we can conclude that $\sigma_a^p=1$, and $\sigma_a$ is of order $p$.
\\
\\
{\bf Claim:} The order of $\sigma_b$ is $p$.\\
{\it Proof of Claim:} This is clear because $\sigma_b$ acts trivially on $\sqrt[p]{A}$, $\sqrt[p]{C}$, $\sqrt[p]{\delta}$ and $E$, and $\sigma_b(\sqrt[p]{b})=\xi\sqrt[p]{b}$.
\\
\\
{\bf Claim:} The order of $\sigma_c$ is $p$.\\
{\it Proof of Claim:} As in the proof of Proposition~\ref{prop:Heisenberg extension}, we see that $\sigma_c^p(\sqrt[p]{C})=\sqrt[p]{C}$.
Since $\sigma_c(\delta)=\delta A C_1^{-p}$ (equation~\eqref{eq:1}), one has $\sigma_c(\sqrt[p]{\delta})=\xi^j \sqrt[p]{\delta} \sqrt[p]{A} C_1^{-1}$ for some $j\in \Z$. This implies that
\[
\begin{aligned}
\sigma_c^2(\sqrt[p]{\delta})&= \xi^j\sigma_c(\sqrt[p]{\delta}) \sigma_c(\sqrt[p]{A}) \sigma_c(C_1)^{-1}\\
&= \xi^{2j} \sqrt[p]{\delta} (\sqrt[p]{A})^2 C_1^{-1}\sigma_c(C_1)^{-1}.
\end{aligned}
\]
Inductively, we obtain
\[
\begin{aligned}
\sigma_c^{p}(\sqrt[p]{\delta})&= \xi^{pj}\sqrt[p]{\delta}(\sqrt[p]{A})^p N_{\sigma_c}(C_1)^{-1}\\
&= \sqrt[p]{\delta}\, A\, N_{\sigma_c}(C_1)^{-1}\\
& = \sqrt[p]{\delta}.
\end{aligned}
\]
Therefore, we can conclude that $\sigma_c^p=1$, and $\sigma_c$ is of order $p$.
\\
\\
{\bf Claim:} $[\sigma_a,\sigma_c]=1$.\\
{\it Proof of Claim:}
It is enough to check that $\sigma_a\sigma_c(\sqrt[p]{\delta})=\sigma_c\sigma_a(\sqrt[p]{\delta})$.
We have
\[
\begin{aligned}
\sigma_a\sigma_c(\sqrt[p]{\delta})&= \sigma_a(\xi^j \sqrt[p]{\delta} \sqrt[p]{A} C_1^{-1})\\
&= \xi^j \sigma_a(\sqrt[p]{\delta})\sigma_a(\sqrt[p]{A})\sigma_a(C_1)^{-1}\\
&= \xi^j \xi^i \sqrt[p]{\delta} \sqrt[p]{C} C_2^{-1} \sqrt[p]{A} \frac{\sqrt[p]{b}}{\alpha}\sigma_a(C_1)^{-1}\\
&=\xi^{i+j} \sqrt[p]{\delta} \sqrt[p]{C} \sqrt[p]{A} \frac{\sqrt[p]{b}}{\alpha} (\sigma_a(C_1)C_2)^{-1}\\
&= \xi^{i+j} \sqrt[p]{\delta} \sqrt[p]{C} \sqrt[p]{A} \frac{\sqrt[p]{b}}{\alpha} \frac{(C_1\sigma_c(C_2))^{-1}}{B}\\
&=\xi^{i+j} \sqrt[p]{\delta} \sqrt[p]{C} \sqrt[p]{A} \frac{\sqrt[p]{b}}{\gamma}(C_1\sigma_c(C_2))^{-1}.
\end{aligned}
\]
On the other hand, we have
\[
\begin{aligned}
\sigma_c\sigma_a(\sqrt[p]{\delta})&= \sigma_c(\xi^i \sqrt[p]{\delta} \sqrt[p]{C} C_2^{-1})\\
&= \xi^i \sigma_c(\sqrt[p]{\delta})\sigma_c(\sqrt[p]{C})\sigma_c(C_2)^{-1}\\
&= \xi^i \xi^j \sqrt[p]{\delta} \sqrt[p]{A} C_1^{-1} \sqrt[p]{C} \frac{\sqrt[p]{b}}{\gamma}\sigma_c(C_2)^{-1}\\
&=\xi^{i+j} \sqrt[p]{\delta} \sqrt[p]{A} \sqrt[p]{C} \frac{\sqrt[p]{b}}{\gamma} (C_1\sigma_c(C_2))^{-1}.
\end{aligned}
\]
Therefore, $\sigma_a\sigma_c(\sqrt[p]{\delta})=\sigma_c\sigma_a(\sqrt[p]{\delta})$, as desired.
\\
\\
{\bf Claim:} $[\sigma_a,[\sigma_a,\sigma_b]]=[\sigma_b,[\sigma_a,\sigma_b]]=1$.\\
{\it Proof of Claim:} Since $G$ is abelian, it follows that $[\sigma_a,\sigma_b]$ is in $W$. Now both $\sigma_b$ and $[\sigma_a,\sigma_b]$ are in $W$. Hence $[\sigma_b,[\sigma_a,\sigma_b]]=1$ because $W$ is abelian.
Now we show that $[\sigma_a,[\sigma_a,\sigma_b]]=1$.
Since the Heisenberg group $\U_3(\F_p)$ is nilpotent of class 2, we see that $[\sigma_a,[\sigma_a,\sigma_b]]=1$ on $H^{a,b}$ and $H^{b,c}$. So it is enough to check that $[\sigma_a,[\sigma_a,\sigma_b]](\sqrt[p]{\delta})=\sqrt[p]{\delta}$.
From the choice of $\sigma_b$, we see that
\[
\sigma_b\sigma_a(\sqrt[p]{\delta})=\sigma_a(\sqrt[p]{\delta})= \sigma_a\sigma_b(\sqrt[p]{\delta}).
\]
Hence, $[\sigma_a,\sigma_b](\sqrt[p]{\delta})=\sqrt[p]{\delta}$. Since $\sigma_a$ and $\sigma_b$ act trivially on $\sqrt[p]{C}$, and $\sigma_b$ acts trivially on $E$, we see that
\[
[\sigma_a,\sigma_b](\sqrt[p]{C})=\sqrt[p]{C},\;\; \text{ and } [\sigma_a,\sigma_b](C_2^{-1})=C_2^{-1}.
\]
We have
\[
\begin{aligned}
{[\sigma_a,\sigma_b]}\sigma_a (\sqrt[p]{\delta})&=[\sigma_a,\sigma_b](\xi^i\sqrt[p]{\delta}\sqrt[p]{C}C_2^{-1})\\
&= [\sigma_a,\sigma_b](\xi^i) [\sigma_a,\sigma_b](\sqrt[p]{\delta}) [\sigma_a,\sigma_b](\sqrt[p]{C}) [\sigma_a,\sigma_b](C_2^{-1})\\
&= \xi^i \sqrt[p]{\delta}\sqrt[p]{C}C_2^{-1}\\
&=\sigma_a(\sqrt[p]{\delta})\\
&=\sigma_a[\sigma_a,\sigma_b](\sqrt[p]{\delta}).
\end{aligned}
\]
Thus $[\sigma_a,[\sigma_a,\sigma_b]](\sqrt[p]{\delta})=\sqrt[p]{\delta}$, as desired.
\\
\\
{\bf Claim:} $[\sigma_b,[\sigma_b,\sigma_c]]=[\sigma_c,[\sigma_b,\sigma_c]]=1$.\\
{\it Proof of Claim:} Since $G$ is abelian, it follows that $[\sigma_b,\sigma_c]$ is in $W$. Now both $\sigma_b$ and $[\sigma_b,\sigma_c]$ are in $W$. Hence $[\sigma_b,[\sigma_b,\sigma_c]]=1$ because $W$ is abelian.
Now we show that $[\sigma_c,[\sigma_b,\sigma_c]]=1$. Since the Heisenberg group $\U_3(\F_p)$ is nilpotent of class 2, we see that $[\sigma_c,[\sigma_b,\sigma_c]]=1$ on $H^{a,b}$ and $H^{b,c}$. So it is enough to check that $[\sigma_c,[\sigma_b,\sigma_c]](\sqrt[p]{\delta})=\sqrt[p]{\delta}$.
From the choice of $\sigma_b$, we see that
\[
\sigma_b\sigma_c(\sqrt[p]{\delta})=\sigma_c(\sqrt[p]{\delta})= \sigma_c\sigma_b(\sqrt[p]{\delta}).
\]
Hence, $[\sigma_b,\sigma_c](\sqrt[p]{\delta})=\sqrt[p]{\delta}$.
Since $\sigma_b$ and $\sigma_c$ act trivially on $\sqrt[p]{A}$, and $\sigma_b$ acts trivially on $E$, we see that
\[
[\sigma_b,\sigma_c](\sqrt[p]{A})=\sqrt[p]{A},\;\; \text{ and } [\sigma_b,\sigma_c](C_1^{-1})=C_1^{-1}.
\]
We have
\[
\begin{aligned}
{[\sigma_b,\sigma_c]}\sigma_c (\sqrt[p]{\delta})&=[\sigma_b,\sigma_c](\xi^j\sqrt[p]{\delta}\sqrt[p]{A}C_1^{-1})\\
&= [\sigma_b,\sigma_c](\xi^j) [\sigma_b,\sigma_c](\sqrt[p]{\delta}) [\sigma_b,\sigma_c](\sqrt[p]{A}) [\sigma_b,\sigma_c](C_1^{-1})\\
&= \xi^j \sqrt[p]{\delta}\sqrt[p]{A}C_1^{-1}\\
&=\sigma_c(\sqrt[p]{\delta})\\
&=\sigma_c[\sigma_b,\sigma_c](\sqrt[p]{\delta}).
\end{aligned}
\]
Thus $[\sigma_c,[\sigma_b,\sigma_c]](\sqrt[p]{\delta})=\sqrt[p]{\delta}$, as desired.
\\
\\
{\bf Claim:} $[[\sigma_a,\sigma_b],[\sigma_b,\sigma_c]]=1$.\\
{\it Proof of Claim}: Since $G$ is abelian, $[\sigma_a,\sigma_b]$ and $[\sigma_b,\sigma_c]$ are in $W$. Hence $[[\sigma_a,\sigma_b],[\sigma_b,\sigma_c]]=1$ because $W$ is abelian.
An explicit isomorphism $\varphi\colon {\rm Gal}(M/F)\to \U_4(\F_p)$ may be defined as
\[
\sigma_a \mapsto \begin{bmatrix}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \; \;
\sigma_b\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \;\;
\sigma_c\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{bmatrix}.
\]
\end{proof}
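The relations verified in the claims above, together with the fact that the three matrices on the right-hand side of $\varphi$ generate a group of order $p^6$, can be checked by machine. The following Python snippet is a sanity check (not part of the proof), run here for $p=3$:

```python
p = 3

I4 = tuple(tuple(1 if i == j else 0 for j in range(4)) for i in range(4))

def mul(X, Y):
    # matrix multiplication over F_p
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(4)) % p
                       for j in range(4)) for i in range(4))

def power(X, n):
    R = I4
    for _ in range(n):
        R = mul(R, X)
    return R

def inv(X):
    # brute-force inverse inside the finite group generated by X
    R = X
    while mul(R, X) != I4:
        R = mul(R, X)
    return R

def comm(X, Y):
    # the commutator [X, Y] = X Y X^{-1} Y^{-1}
    return mul(mul(X, Y), mul(inv(X), inv(Y)))

def E(i, j):
    # identity matrix with an extra 1 in position (i, j)
    M = [list(row) for row in I4]
    M[i][j] = 1
    return tuple(tuple(row) for row in M)

s1, s2, s3 = E(0, 1), E(1, 2), E(2, 3)   # images of sigma_a, sigma_b, sigma_c

relations_hold = all([
    power(s1, p) == I4, power(s2, p) == I4, power(s3, p) == I4,
    comm(s1, s3) == I4,
    comm(s1, comm(s1, s2)) == I4, comm(s2, comm(s1, s2)) == I4,
    comm(s2, comm(s2, s3)) == I4, comm(s3, comm(s2, s3)) == I4,
    comm(comm(s1, s2), comm(s2, s3)) == I4,
])

# closure of {s1, s2, s3} under multiplication = the generated subgroup
group = {I4}
frontier = [I4]
while frontier:
    X = frontier.pop()
    for g in (s1, s2, s3):
        Y = mul(X, g)
        if Y not in group:
            group.add(Y)
            frontier.append(Y)

print(relations_hold, len(group))  # True 729
```

The closure computation confirms that the three elementary matrices generate all of $\U_4(\F_3)$, of order $3^6=729$.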
\begin{ex}
\label{ex:p=2}
Let the notation and assumptions be as in Lemma~\ref{lem:modification}. Let us consider the case $p=2$. In Lemma~\ref{lem:C1C2}, we can choose $e=\dfrac{\alpha}{\alpha+\gamma}$. (Observe that $\alpha+\gamma\not=0$.) In fact, one can easily check that
\[
\sigma_a\sigma_c(\frac{\alpha}{\alpha+\gamma})=\frac{\gamma}{\alpha} \frac{\alpha}{\alpha+\gamma}.
\]
\\
\\
(1) If we choose $C_1=\sigma_c(e)$ and $C_2=e^{-1}$ as in Lemma~\ref{lem:C1C2} part (1), then we have
\[
\begin{aligned}
A=N_{\sigma_c}(C_1)&=N_{\sigma_c}(e)= \frac{\alpha^2\gamma}{(\alpha+\gamma)(\alpha\gamma+b)},\\
C= N_{\sigma_a}(C_2)&=N_{\sigma_a}(e^{-1})=\frac{(\alpha+\gamma)(\alpha\gamma+b)}{b\alpha}.
\end{aligned}
\]
In Lemma~\ref{lem:modification}, we can choose $\delta=e^{-1}=\dfrac{\alpha+\gamma}{\alpha}$. In fact, we have
\[
\begin{aligned}
\frac{\sigma_c(\delta)}{\delta}&=\sigma_c(e)^{-1}e= \sigma_c(e)^{-2} e\sigma_c(e)= C_1^{-2} N_{\sigma_c}(e)= AC_1^{-2},\\
\frac{\sigma_a(\delta)}{\delta}&=\sigma_a(e^{-1})e= e^{-1}\sigma_a(e^{-1}) e^2= N_{\sigma_a}(e^{-1})C_2^{-2}= CC_2^{-2}.
\end{aligned}
\]
Therefore
\[
\begin{aligned}
M&= F(\sqrt{b},\sqrt{A},\sqrt{C},\sqrt{\delta})= F(\sqrt{b},\sqrt{ \frac{\alpha^2\gamma}{(\alpha+\gamma)(\alpha\gamma+b)}},\sqrt{\frac{(\alpha+\gamma)(\alpha\gamma+b)}{b\alpha}},\sqrt{\dfrac{\alpha+\gamma}{\alpha}})\\
&=F(\sqrt{b},\sqrt{\frac{\alpha+\gamma}{\alpha}},\sqrt{\alpha\gamma+b},\sqrt{\alpha\gamma}).
\end{aligned}
\]
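The simplification of the generators of $M$ modulo squares in part (1) can be double-checked by linear algebra over $\F_2$: record each generator by its exponent vector over the five elements $\alpha,\gamma,b,\alpha+\gamma,\alpha\gamma+b$, treated here (a simplifying assumption, sufficient for the check) as multiplicatively independent, and compare the two spans. A short sanity check in Python:

```python
# exponent vectors mod 2, ordered as (alpha, gamma, b, alpha+gamma, alpha*gamma+b)
def f2_span(gens):
    # span of the given vectors in F_2^5, built one generator at a time
    S = {(0, 0, 0, 0, 0)}
    for g in gens:
        S |= {tuple((x + y) % 2 for x, y in zip(v, g)) for v in S}
    return S

old_gens = [
    (0, 0, 1, 0, 0),   # b
    (0, 1, 0, 1, 1),   # alpha^2 gamma / ((alpha+gamma)(alpha gamma + b))
    (1, 0, 1, 1, 1),   # (alpha+gamma)(alpha gamma + b) / (b alpha)
    (1, 0, 0, 1, 0),   # (alpha+gamma) / alpha
]
new_gens = [
    (0, 0, 1, 0, 0),   # b
    (1, 0, 0, 1, 0),   # (alpha+gamma) / alpha
    (0, 0, 0, 0, 1),   # alpha gamma + b
    (1, 1, 0, 0, 0),   # alpha gamma
]

same = f2_span(old_gens) == f2_span(new_gens)
print(same, len(f2_span(old_gens)))  # True 16
```

Both sets of generators span the same 4-dimensional subspace of $\F_2^5$, so the two descriptions of $M$ agree modulo squares.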
\\
\\
(2) If we choose $C_1=e=\dfrac{\alpha}{\alpha+\gamma}$ and $C_2=eB=\dfrac{\gamma}{\alpha+\gamma}$ as in Lemma~\ref{lem:C1C2} part (2), then we have
\[
\begin{aligned}
A=N_{\sigma_c}(C_1)&=N_{\sigma_c}(e)= \frac{\alpha^2\gamma}{(\alpha+\gamma)(\alpha\gamma+b)},\\
C= N_{\sigma_a}(C_2)&=N_{\sigma_a}(eB)=\frac{\gamma^2\alpha}{(\alpha+\gamma)(\alpha\gamma+b)}.
\end{aligned}
\]
In Lemma~\ref{lem:modification}, we can choose $\delta=(\alpha+\gamma)^{-1}$. In fact, we have
\[
\begin{aligned}
\frac{\sigma_c(\delta)}{\delta}&=\frac{\gamma(\alpha+\gamma)}{\alpha\gamma+b}= AC_1^{-2},\\
\frac{\sigma_a(\delta)}{\delta}&=\frac{\alpha(\alpha+\gamma)}{\alpha\gamma+b}= CC_2^{-2}.
\end{aligned}
\]
Therefore
\[
M= F(\sqrt{b},\sqrt{A},\sqrt{C},\sqrt{\delta})= F(\sqrt{b},\sqrt{\frac{\alpha^2\gamma}{\alpha\gamma+b}},\sqrt{\frac{\alpha\gamma^2}{\alpha\gamma+b}},\sqrt{\alpha+\gamma}).
\]
Observe also that $M$ is the Galois closure of $E(\sqrt{\delta})=F(\sqrt{a},\sqrt{c},\sqrt{\alpha+\gamma})$.
\end{ex}
\subsection{Fields of characteristic not $p$}
Let $F_0$ be an arbitrary field of characteristic $\not=p$. We fix a primitive $p$-th root of unity $\xi$, and let $F=F_0(\xi)$. Then $F/F_0$ is a cyclic extension of degree $d=[F:F_0]$. Observe that $d$ divides $p-1$. We choose an integer $\ell$ such that $d\ell\equiv 1\bmod p$.
Let $\sigma_0$ be a generator of $H:={\rm Gal}(F/F_0)$. Then $\sigma_0(\xi)=\xi^e$ for some $e\in \Z\setminus p\Z$.
Let $\chi_1,\chi_2,\chi_3$ be elements in ${\rm Hom}(G_{F_0},\F_p)=H^1(G_{F_0},\F_p)$. We assume that $\chi_1,\chi_2,\chi_3$ are $\F_p$-linearly independent and
$\chi_1 \cup \chi_2 = \chi_2 \cup \chi_3 =0$. By \cite[Lemma 2.6]{MT4}, the homomorphism $(\chi_1,\chi_2,\chi_3)\colon G_{F_0}\to (\F_p)^3$ is surjective.
Let $L_0$ be the fixed field of $(F_0)^s$ under the kernel of the surjection $(\chi_1,\chi_2,\chi_3)\colon G_{F_0}\to (\F_p)^3$. Then $L_0/F_0$ is Galois with ${\rm Gal}(L_0/F_0)\simeq (\F_p)^3$. We shall construct a Galois extension $M_0/F_0$ such that ${\rm Gal}(M_0/F_0)\simeq \U_4(\F_p)$ and $M_0$ contains $L_0$.
The restrictions $\res_{G_F}(\chi_1),\res_{G_F}(\chi_2),\res_{G_F}(\chi_3)$ are elements in ${\rm Hom}(G_F,\F_p)$. They are $\F_p$-linearly independent and
$\res_{G_F}(\chi_1) \cup \res_{G_F}(\chi_2) =\res_{G_F}(\chi_2) \cup\res_{G_F}(\chi_3) =0$.
By Kummer theory there exist $a,b,c$ in $F^\times$ such that $\res_{G_F}(\chi_1)=\chi_a$, $\res_{G_F}(\chi_2)=\chi_b$, $\res_{G_F}(\chi_3)=\chi_c$. Then we have
$(a,b)=(b,c)=0$ in $H^2(G_F,\F_p)$.
Let $L=L_0(\xi)$. Then $L=F(\sqrt[p]{a},\sqrt[p]{b},\sqrt[p]{c})$, and $L/F$ is Galois with ${\rm Gal}(L/F)\simeq {\rm Gal}(L_0/F_0)\simeq (\F_p)^3$.
\\
\\
{\bf Claim 1:} $L/F_0$ is Galois with ${\rm Gal}(L/F_0)\simeq {\rm Gal}(F/F_0)\times {\rm Gal}(L/F)$.\\
{\it Proof of Claim}: Since $L=L_0F$, and $L_0/F_0$ and $F/F_0$ are Galois extensions of coprime degrees, the claim follows.
\\
\\
We define $\displaystyle \Phi:= \ell[\sum_{i=0}^{d-1}e^i\sigma_0^{-i}]\in \Z[H]$. The group ring $\Z[H]$ acts on the multiplicative group $F^\times$ in the obvious way, and if we let $H$ act trivially on $L_0$, we get an action on $L^\times$ also. Then $\Phi$ determines a map
\[
\Phi\colon L^\times\to L^\times, \quad x\mapsto \Phi(x).
\]
For convenience, we shall denote $\tilde{x}:=\Phi(x)$.
The claim above implies that $\Phi\sigma=\sigma\Phi$ for every $\sigma\in {\rm Gal}(L/F)$.
\\
\\
{\bf Claim 2:} We have $\tilde{a}=a$, $\tilde{b}=b$, and $\tilde{c}=c$ modulo $(F^\times)^p$.\\
{\it Proof of Claim:} A similar argument as in the proof of Claim 1 shows that $F(\sqrt[p]{a})/F_0$ is Galois with ${\rm Gal}(F(\sqrt[p]{a})/F_0)={\rm Gal}(F(\sqrt[p]{a})/F)\times {\rm Gal}(F/F_0)$. Since both groups ${\rm Gal}(F(\sqrt[p]{a})/F)$ and ${\rm Gal}(F/F_0)$ are cyclic and of coprime orders, we see that the extension $F(\sqrt[p]{a})/F_0$ is cyclic. By Albert's result (see \cite[pages 209-211]{Alb} and \cite[Section 5]{Wat}), we have $\sigma_0 a =a^e $ modulo $(F^\times)^p$. Hence for all integers $i$, $\sigma_0^i(a)=a^{e^i}\bmod (F^\times)^p$. Thus $\sigma_0^{-i}(a^{e^i})=a\bmod (F^\times)^p$.
Therefore, we have
\[
\tilde{a}=\Phi(a)=\left[ \prod_{i=0}^{d-1}\sigma_0^{-i}(a^{e^i})\right]^\ell= \left[\prod_{i=0}^{d-1} a \right]^\ell= a^{d\ell}=a \bmod (F^\times)^p.
\]
Similarly, we have $\tilde{b}=b$ and $\tilde{c}=c$ modulo $(F^\times)^p$.
\\
\\
{\bf Claim 3:} For every $x\in L$, we have $\dfrac{\sigma_0\tilde{x}}{\tilde{x}^e}= \sigma_0(x^{\ell(1-e^d)/p})^p \in L^p$.\\
{\it Proof of Claim:} This follows from the following identity in the group ring $\Z[H]$,
\[
(\sigma_0-e)(\sum_{i=0}^{d-1}e^i\sigma_0^{-i})= \sigma_0(1-e^d)\equiv 0\bmod p.
\]
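The identity can be verified by a short computation in $\Z[H]\simeq \Z[x]/(x^d-1)$ with $\sigma_0=x$. The following sanity check (not part of the proof) uses the sample values $p=7$, $d=3$, $e=2$; note that $e=2$ has order $d=3$ modulo $7$:

```python
p, d, e = 7, 3, 2          # d divides p - 1, and e^d = 1 mod p

def convolve(u, v):
    # multiplication in Z[x]/(x^d - 1); index k holds the coefficient of sigma_0^k
    w = [0] * d
    for i in range(d):
        for j in range(d):
            w[(i + j) % d] += u[i] * v[j]
    return w

Sigma = [0] * d            # sum_{i=0}^{d-1} e^i sigma_0^{-i}
for i in range(d):
    Sigma[(-i) % d] += e ** i

sigma0_minus_e = [0] * d
sigma0_minus_e[1] += 1     # sigma_0
sigma0_minus_e[0] -= e     # -e

lhs = convolve(sigma0_minus_e, Sigma)

rhs = [0] * d              # sigma_0 * (1 - e^d)
rhs[1] = 1 - e ** d

print(lhs == rhs, all(c % p == 0 for c in lhs))  # True True
```

The product telescopes to the single term $\sigma_0(1-e^d)$, whose coefficient is divisible by $p$ since $e^d\equiv 1\bmod p$.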
\\
\\
By our construction of Galois $\U_4(\F_p)$-extensions over fields containing a primitive $p$-th root of unity (see Subsection~\ref{subsec:with p-primitive}), we have $\alpha,\gamma,B,\ldots,A,C,\delta$ such that if we let
$M:= L(\sqrt[p]{A},\sqrt[p]{C},\sqrt[p]{\delta})$, then $M/F$ is a Galois $\U_4(\F_p)$-extension.
We set $\tilde{M}:=L(\sqrt[p]{\tilde{A}},\sqrt[p]{\tilde{C}},\sqrt[p]{\tilde{\delta}})$.
\\
\\
{\bf Claim 4:} $\tilde{M}/F$ is Galois with ${\rm Gal}(\tilde{M}/F)\simeq \U_4(\F_p)$.\\
{\it Proof of Claim}: Since $\Phi$ commutes with every $\sigma\in {\rm Gal}(L/F)$, this implies that $\tilde{M}/F$ is Galois.
This, together with Claim 2, implies that ${\rm Gal}(\tilde{M}/F)\simeq \U_4(\F_p)$ because the construction of $\tilde{M}$ over $F$ is obtained in the same way as the construction of $M$, except that we replace the data $\{a,b,c, \alpha,\gamma, B,\ldots\}$ by their ``tilde'' counterparts $\{\tilde{a},\tilde{b},\tilde{c}, \tilde{\alpha},\tilde{\gamma},\tilde{B},\ldots\}$.
\\
\\
{\bf Claim 5:} $\tilde{M}/F_0$ is Galois with ${\rm Gal}(\tilde{M}/F_0)\simeq {\rm Gal}(\tilde{M}/F)\times {\rm Gal}(F/F_0)$.\\
{\it Proof of Claim}: By Claim 3, we see that $\sigma_0 \tilde{x}=\tilde{x}^e $ modulo $(L^\times)^p$ for every $\tilde{x}$ in the $\F_p$-vector subspace $\tilde{W^*}$ of $L^\times/(L^\times)^p$ generated by $\tilde{A}$, $\tilde{C}$, and $\tilde{\delta}$. Hence $\tilde{W^*}$ is an $\F_p[{\rm Gal}(L/F_0)]$-module.
Therefore $\tilde{M}/F_0$ is Galois by Kummer theory.
We also have the following exact sequence of groups
\[
1\to {\rm Gal}(\tilde{M}/F)\to {\rm Gal}(\tilde{M}/F_0)\to {\rm Gal}(F/F_0)\to 1.
\]
Since $|{\rm Gal}(\tilde{M}/F)|$ and $|{\rm Gal}(F/F_0)|$ are coprime, the above sequence splits by the Schur-Zassenhaus theorem (see \cite[IV.7, Theorem 25]{Za}).
The Galois group ${\rm Gal}(\tilde{M}/F_0)$ is the semidirect product of ${\rm Gal}(\tilde{M}/F)$ and $H={\rm Gal}(F/F_0)$, with $H$ acting on ${\rm Gal}(\tilde{M}/F)$ by conjugation. We need to show that this product is in fact direct, i.e., that the action of $H$ on ${\rm Gal}(\tilde{M}/F)$ is trivial.
Note that $H$ has order coprime to $p$, and $H$ acts trivially on ${\rm Gal}(L/F)$ (see Claim 1), which is the quotient of ${\rm Gal}(\tilde{M}/F)$ by its Frattini subgroup.
Then a result of P. Hall (see \cite[Theorem 12.2.2]{Ha}) implies that $H$ acts trivially on ${\rm Gal}(\tilde{M}/F)$.
From the discussion above we obtain the following result.
\begin{thm}
\label{thm:construction char not p}
Let the notation be as above.
Let $M_0$ be the fixed field of $\tilde{M}$ under the subgroup of ${\rm Gal}(\tilde{M}/F_0)$ which is isomorphic to ${\rm Gal}(F/F_0)$.
Then $M_0/F_0$ is Galois with
${\rm Gal}(M_0/F_0)\simeq {\rm Gal}(\tilde{M}/F)\simeq \U_4(\F_p)$, and $M_0$ contains $L_0$.
\end{thm}
\begin{proof} Claim 5 above implies that $M_0/F_0$ is Galois with
${\rm Gal}(M_0/F_0)\simeq {\rm Gal}(\tilde{M}/F)\simeq \U_4(\F_p)$. Since $H\simeq {\rm Gal}(\tilde{M}/M_0)$ acts trivially on $L_0$, we see that $M_0$ contains $L_0$.
Let $\sigma_1:=\sigma_a|_{M_0}$, $\sigma_2:=\sigma_b|_{M_0}$ and $\sigma_3:=\sigma_c|_{M_0}$.
Then $\sigma_1,\sigma_2$ and $\sigma_3$ generate ${\rm Gal}(M_0/F_0)\simeq \U_4(\F_p)$. We also have
\[
\begin{aligned}
\chi_1(\sigma_1)=1, \chi_1(\sigma_2)=0, \chi_1(\sigma_3)=0;\\
\chi_2(\sigma_1)=0, \chi_2(\sigma_2)=1, \chi_2(\sigma_3)=0;\\
\chi_3(\sigma_1)=0, \chi_3(\sigma_2)=0, \chi_3(\sigma_3)=1.
\end{aligned}
\]
(Note that for each $i=1,2,3$, $\chi_i$ is trivial on ${\rm Gal}(\tilde{M}/M_0)$, hence $\chi_i(\sigma_j)$ makes sense for every $j=1,2,3$.)
An explicit isomorphism $\varphi\colon {\rm Gal}(M_0/F_0)\to \U_4(\F_p)$ may be defined as
\[
\sigma_1 \mapsto \begin{bmatrix}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \; \;
\sigma_2\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \;\;
\sigma_3\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{bmatrix}.
\]
\end{proof}
\section{The construction of $\U_4(\F_p)$-extensions: The case of characteristic $p$}
In this section we assume that $F$ is of characteristic $p>0$. Although by a theorem of Witt (see \cite{Wi} and \cite[Chapter 9, Section 9.1]{Ko}), we know that the Galois group of the maximal $p$-extension of $F$ is a free pro-$p$ group, finding specific constructions of Galois $p$-extensions over $F$ can still be challenging. The following construction of an explicit Galois extension $M/F$ with Galois group $\U_4(\F_p)$ is an analogue of the construction in Subsection 3.1 when we assumed that a $p$-th root of unity is in $F$. However we find the details interesting, and therefore for the convenience of the reader, we are including them here. Observe that even the case of the explicit construction of Heisenberg extensions of degree $p^3$ in characteristic $p$ is of interest.
In the case when $F$ has characteristic not $p$, the constructions of Heisenberg extensions of degree $p^3$ are now classical, important tools in Galois theory. We did not find any such constructions in the literature in the case of characteristic $p$. Nevertheless the construction in Subsection 4.2 seems to be simple, useful and aesthetically pleasing. What is even more surprising is that the field construction of Galois $\U_4(\F_p)$-extensions over a field $F$ of characteristic $p$ in Subsection 4.3 is almost equally simple. We have to check more details to confirm the validity of this construction, but the construction of the required Galois extension $M$ itself is remarkably simple.
The possibility of choosing generators in such a straightforward manner (as described in Theorem~\ref{thm:construction char p}) is striking. It is interesting that the main construction in Section 3 carries over with necessary modifications in the case of characteristic $p$.
\subsection{A brief review of Artin-Schreier theory} (For more details and the origin of this beautiful theory, see \cite{ASch}.)
Let $F$ be a field of characteristic $p>0$. Let $\wp(X)=X^p-X$ be the Artin-Schreier polynomial. For each $a\in F$, we let $\theta_a$ be a root of the equation $\wp(X)=a$.
We denote by $[a]_F$ the image of $a$ in $F/\wp(F)$.
For each subgroup $U$ of $F/\wp(F)$, let $F_U:=F(\theta_u: [u]_F\in U)$. Then the map $U\mapsto F_U$ is a bijection between subgroups of $F/\wp(F)$ and abelian extensions of $F$ of exponent dividing $p$. There is a pairing
\[
{\rm Gal}(F_U/F) \times U\to \F_p,
\]
defined by $\langle \sigma, a\rangle = \sigma(\theta_a)-\theta_a$,
which is independent of the choice of root $\theta_a$. Artin-Schreier theory says that this pairing is non-degenerate.
Now assume that $F/k$ is a finite Galois extension. The Galois group ${\rm Gal}(F/k)$ acts naturally on $F/\wp(F)$. As an easy exercise, one can show that such an extension $F_U$, where $U$ is a subgroup of $F/\wp(F)$, is Galois over $k$ if and only if $U$ is actually an $\F_p[{\rm Gal}(F/k)]$-module.
\subsection{Heisenberg extensions in characteristic $p>0$}
For each $a\in F$, let $\chi_a\in {\rm Hom}(G_F,\F_p)$ be the corresponding element associated with $a$ via Artin-Schreier theory. Explicitly, $\chi_a$ is defined by
\[
\chi_a(\sigma)= \sigma(\theta_a)-\theta_a.
\]
Assume that $a,b$ are elements in $F$, which are linearly independent modulo $\wp(F)$. Let $K= F(\theta_a,\theta_b)$. Then $K/F$ is a Galois extension whose Galois group is generated by $\sigma_a$ and $\sigma_b$. Here $\sigma_a(\theta_b)=\theta_b$, $\sigma_a(\theta_a)= \theta_{a}+1$; $\sigma_b(\theta_{a})=\theta_{a}$, $\sigma_b(\theta_{b})= \theta_{b}+1$.
We set $A=b\theta_a$. Then
\[
\sigma_a(A)= A+b,\, \text{ and } \sigma_b(A)=A.
\]
\begin{prop}
\label{prop:Heisenberg char p} Let the notation be as above.
Let $L=K(\theta_A)$. Then $L/F$ is Galois whose Galois group is isomorphic to $\U_3(\F_p)$.
\end{prop}
\begin{proof}
From $\sigma_a(A)- A= b \in \wp(K)$, and $\sigma_b(A)=A$, we see that $\sigma(A)-A \in \wp(K)$ for every $\sigma \in {\rm Gal}(K/F)$. This implies that the extension $L:=K(\theta_{A})/F$ is Galois. Let $\tilde{\sigma}_a\in {\rm Gal}(L/F)$ (resp. $\tilde{\sigma}_b\in {\rm Gal}(L/F)$) be an extension of $\sigma_a$ (resp. $\sigma_b$). Since $\sigma_b(A)=A$, we have $\tilde{\sigma}_b(\theta_A)=\theta_{A}+j$, for some $j\in \F_p$. Hence $\tilde{\sigma}_b^p(\theta_A)=\theta_{A}$. This implies that $\tilde{\sigma}_b$ is of order $p$.
On the other hand, we have
\[
\wp(\tilde{\sigma}_a(\theta_{A}))=\sigma_a(A) =A +b.
\]
Hence $\tilde{\sigma}_a(\theta_{A})= \theta_{A}+\theta_{b}+i$, for some $i\in \F_p$. Then
\[
\tilde{\sigma}_a^p(\theta_{A})= \theta_{A} +p\theta_b+pi=\theta_{A}.
\]
This implies that $\tilde{\sigma}_a$ is also of order $p$. We have
\[
\begin{aligned}
\tilde{\sigma}_a\tilde{\sigma}_b(\theta_{A}) &= \tilde{\sigma}_a(j+\theta_{A})=i+j+\theta_{A}+\theta_{b},\\
\tilde{\sigma}_b\tilde{\sigma}_a(\theta_{A}) &=\tilde{\sigma}_b(i+ \theta_{A}+\theta_{b})={i+j}+\theta_{A}+1+\theta_{b}.
\end{aligned}
\]
We set $ \tilde{\sigma}_{A}:= \tilde{\sigma}_a \tilde{\sigma}_b\tilde{\sigma}_a^{-1}\tilde{\sigma}_b^{-1}$. Then
\[
\tilde{\sigma}_{A}(\theta_{A})=\theta_{A}-1.
\]
This implies that $\tilde{\sigma}_{A}$ is of order $p$ and that ${\rm Gal}(L/F)$ is generated by $\tilde{\sigma}_a$ and $\tilde{\sigma}_b$. We also have
\[
\begin{aligned}
\tilde{\sigma}_a \tilde{\sigma}_{A}= \tilde{\sigma}_{A}\tilde{\sigma}_a, \;\text{ and } \tilde{\sigma}_b \tilde{\sigma}_{A}= \tilde{\sigma}_{A}\tilde{\sigma}_b.
\end{aligned}
\]
We can define an isomorphism $\varphi \colon {\rm Gal}(L/F)\to \U_3(\Z/p\Z)$ by letting
\[
\tilde{\sigma}_a \mapsto \begin{bmatrix}
1& 1 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{bmatrix},
\tilde{\sigma}_b\mapsto
\begin{bmatrix}
1& 0 & 0 \\
0& 1 & 1 \\
0& 0 & 1
\end{bmatrix},
\tilde{\sigma}_{A}\mapsto
\begin{bmatrix}
1& 0 & 1 \\
0& 1 & 0 \\
0& 0 & 1
\end{bmatrix}.
\]
Note that $[L:F]=p^3$. Hence there are exactly $p$ extensions of $\sigma_a\in {\rm Gal}(K/F)$ to the automorphisms in ${\rm Gal}(L/F)$ since $[L:K]=p^3/p^2=p$. Therefore for later use, we can choose an extension of $\sigma_a\in {\rm Gal}(K/F)$, which we shall denote $\sigma_a\in {\rm Gal}(L/F)$ with a slight abuse of notation, in such a way that $\sigma_a(\theta_{A})= \theta_{A}+ \theta_{b}$.
\end{proof}
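The bookkeeping in the proof above can be replayed formally: the chosen lifts act affinely on the $\F_p$-span of $1,\theta_a,\theta_b,\theta_A$, and this is all the computations use. The following sanity check (not part of the proof) takes $p=5$ and, as an extra assumption for the check, the lift of $\sigma_b$ fixing $\theta_A$ (that is, $j=0$):

```python
p = 5
BASIS = ('one', 'ta', 'tb', 'tA')     # 1, theta_a, theta_b, theta_A

def vec(**kw):
    # an F_p-linear combination of the basis symbols
    e = {b: 0 for b in BASIS}
    for k, v in kw.items():
        e[k] = v % p
    return e

def apply(auto, elem):
    # extend the images of the basis symbols F_p-linearly
    out = {b: 0 for b in BASIS}
    for b, c in elem.items():
        for b2, c2 in auto[b].items():
            out[b2] = (out[b2] + c * c2) % p
    return out

def compose(f, g):                    # f after g
    return {b: apply(f, g[b]) for b in BASIS}

ident = {b: vec(**{b: 1}) for b in BASIS}

def power(f, n):
    r = ident
    for _ in range(n):
        r = compose(f, r)
    return r

# sigma_a(theta_a) = theta_a + 1 and sigma_a(theta_A) = theta_A + theta_b
sa = {'one': vec(one=1), 'ta': vec(ta=1, one=1),
      'tb': vec(tb=1), 'tA': vec(tA=1, tb=1)}
# lift of sigma_b fixing theta_A (j = 0, an assumption for this check)
sb = {'one': vec(one=1), 'ta': vec(ta=1),
      'tb': vec(tb=1, one=1), 'tA': vec(tA=1)}

sa_inv, sb_inv = power(sa, p - 1), power(sb, p - 1)
sA = compose(compose(sa, sb), compose(sa_inv, sb_inv))   # the commutator

order_p = power(sa, p) == ident and power(sb, p) == ident
central = (compose(sA, sa) == compose(sa, sA) and
           compose(sA, sb) == compose(sb, sA))
shifts = apply(sA, vec(tA=1)) == vec(tA=1, one=p - 1)    # theta_A - 1
print(order_p, central, shifts)  # True True True
```

The check confirms that both lifts have order $p$, that their commutator sends $\theta_A$ to $\theta_A-1$, and that it commutes with both lifts, exactly as in the proof.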
\begin{rmk} It is interesting to compare our choices of generators $A$ of Heisenberg extensions over given bicyclic extensions in the case of characteristic $p$ and the case when the base field in fact contains a primitive $p$-th root of unity.
See the proofs of Proposition~\ref{prop:Heisenberg extension} and the proposition above. Although the form of $A$ in Proposition~\ref{prop:Heisenberg extension} is more complicated than the strikingly simple choice above, the basic principle for the search of a suitable $A$ is the same in both cases. We need to guarantee that $\sigma_a(A)/A\in (K^\times)^p$ and $\sigma_a(A)-A\in \wp(K)$ in order that $K(\sqrt[p]{A})$ and $K(\theta_A)$ are Galois over $F$.
Further, in order to guarantee that $\sigma_a$ and $\sigma_b$ will not commute, we want $\sigma_a(A)/A=bk^p$, for some $k\in K^\times$, where $\sigma_b$ acts trivially on $k$.
Similarly in the characteristic $p$ case we want $\sigma_a(A)-A=b+\wp(k)$, where $\sigma_b(k)=k$. If ${\rm char}(F)=p$, then this is always possible in the simple way above, even with $k=0$. (See also \cite[Appendix A.1, Example]{JLY}, where the authors discuss a construction of quaternion $Q_8$-extensions over fields of characteristic $2$.)
In the case when $F$ contains a primitive $p$-th root of unity, the search for a suitable $A$ leads to the norm condition $N_{F(\sqrt[p]{a})/F}(\alpha)=b$. For some related considerations see \cite[Section 2]{MS1}.
\end{rmk}
\subsection{Construction of Galois $\U_4(\F_p)$-extensions}
We assume that we are given elements $a$, $b$ and $c$ in $F$ such that
$a$, $b$ and $c$ are linearly independent modulo $\wp(F)$. We shall construct a Galois $\U_4(\F_p)$-extension $M/F$ such that $M$ contains $F(\theta_{a},\theta_{b},\theta_{c})$.
First we note that $F(\theta_{a},\theta_{b},\theta_{c})/F$ is a Galois extension with ${\rm Gal}(F(\theta_{a},\theta_{b},\theta_{c})/F)$ generated by $\sigma_a,\sigma_b,\sigma_c$. Here
\[
\begin{aligned}
\sigma_a(\theta_{a})&=1+ \theta_{a}, \sigma_a(\theta_{b})=\theta_{b}, \sigma_a(\theta_{c})=\theta_{c};\\
\sigma_b(\theta_{a})&=\theta_{a}, \sigma_b(\theta_{b})=1+ \theta_{b}, \sigma_b(\theta_{c})= \theta_{c};\\
\sigma_c(\theta_{a})&=\theta_{a}, \sigma_c(\theta_{b})=\theta_{b}, \sigma_c(\theta_{c})=1+ \theta_{c}.
\end{aligned}
\]
Recall that $A=b\theta_a$. We set $C:= b\theta_c$.
We set $\delta:=(AC)/b=b\theta_a\theta_c \in E:=F(\theta_a,\theta_c)$. Then we have
\[
\begin{aligned}
\sigma_a(\delta)-\delta =b\sigma_a(\theta_a)\sigma_a(\theta_c)-b\theta_a\theta_c= b[\sigma_a(\theta_a)-\theta_a]\theta_c=b\theta_c=C,\\
\sigma_c(\delta)-\delta =b\sigma_c(\theta_a)\sigma_c(\theta_c)-b\theta_a\theta_c= b\theta_a[\sigma_c(\theta_c)-\theta_c]=b\theta_a=A.
\end{aligned}
\]
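The two displayed identities hold already in the polynomial ring $\Z[b,\theta_a,\theta_c]$, so they can be verified with formal polynomials. A small sanity check (not part of the proof):

```python
from collections import defaultdict

def clean(P):
    # drop zero coefficients so dict equality tests polynomial equality
    return {m: c for m, c in P.items() if c}

def poly(*monomials):
    # a monomial is (coeff, e_b, e_ta, e_tc)
    P = defaultdict(int)
    for c, eb, ea, ec in monomials:
        P[(eb, ea, ec)] += c
    return clean(P)

def mul(P, Q):
    R = defaultdict(int)
    for (b1, a1, c1), x in P.items():
        for (b2, a2, c2), y in Q.items():
            R[(b1 + b2, a1 + a2, c1 + c2)] += x * y
    return clean(R)

def add(P, Q):
    R = defaultdict(int, P)
    for m, c in Q.items():
        R[m] += c
    return clean(R)

def sub(P, Q):
    R = defaultdict(int, P)
    for m, c in Q.items():
        R[m] -= c
    return clean(R)

ONE = poly((1, 0, 0, 0))
B   = poly((1, 1, 0, 0))   # b
TA  = poly((1, 0, 1, 0))   # theta_a
TC  = poly((1, 0, 0, 1))   # theta_c

delta = mul(B, mul(TA, TC))          # b * theta_a * theta_c
A, C = mul(B, TA), mul(B, TC)

# sigma_a: theta_a -> theta_a + 1 ; sigma_c: theta_c -> theta_c + 1
sigma_a_delta = mul(B, mul(add(TA, ONE), TC))
sigma_c_delta = mul(B, mul(TA, add(TC, ONE)))

print(sub(sigma_a_delta, delta) == C,
      sub(sigma_c_delta, delta) == A)  # True True
```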
Finally set $G :={\rm Gal}(E/F).$
\begin{thm}
\label{thm:construction char p}
Let $M:= E(\theta_{\delta},\theta_{A},\theta_{C},\theta_{b})$. Then $M/F$ is a Galois extension, $M$ contains $F(\theta_{a},\theta_{b},\theta_{c})$, and ${\rm Gal}(M/F)\simeq \U_4(\F_p)$.
\end{thm}
\begin{proof}
Let $W^*$ be the $\F_p$-subspace of $E/\wp(E)$ generated by $[b]_E,[A]_E,[C]_E$ and $[\delta]_E$. Since
\[
\begin{aligned}
\sigma_c(\delta)&= \delta +A,\\
\sigma_a(\delta)&=\delta+ C,\\
\sigma_a(A)&=A+b,\\
\sigma_c(C)&=C+b,
\end{aligned}
\]
we see that $W^*$ is in fact an $\F_p[G]$-module. Hence $M/F$ is a Galois extension by Artin-Schreier theory.
\\
\\
{\bf Claim:} $\dim_{\F_p}(W^*)=4$. Hence $[M:F]=[M:E][E:F]=p^4\cdot p^2=p^6.$\\
{\it Proof of Claim:} From our hypothesis that $\dim_{\F_p}\langle [a]_F,[b]_F,[c]_F\rangle=3$, we see that $\langle [b]_E\rangle \simeq \F_p$.
Clearly, $\langle[b]_E\rangle \subseteq (W^*)^G$.
From the relation
\[
[\sigma_a(A)]_E= [A]_E+ [b]_E,
\]
we see that $[A]_E$ is not in $(W^*)^G$. Hence $\dim_{\F_p}\langle [b]_E,[A]_E\rangle=2$.
From the relation
\[ [\sigma_c(C)]_E= [C]_E+ [b]_E,
\]
we see that $[C]_E$ is not in $(W^*)^{\sigma_c}$. But we have $\langle [b]_E,[A]_E\rangle\subseteq (W^*)^{\sigma_c}$. Hence
\[
\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E\rangle =3.
\]
Observe that the element $(\sigma_a-1)(\sigma_c-1)$ annihilates the $\F_p[G]$-module $\langle [b]_E,[A]_E,[C]_E\rangle$, while
\[
(\sigma_a-1)(\sigma_c-1)[\delta]_E= \sigma_a([A]_E)-[A]_E= [b]_E.
\]
Hence
\[
\dim_{\F_p}W^*=\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E,[\delta]_E\rangle =4.
\]
\\
\\
Let $H^{a,b}=F(\theta_{a},\theta_{A},\theta_{b})$ and $H^{b,c}=F(\theta_{c},\theta_{C},\theta_{b})$.
Let
\[N:=H^{a,b}H^{b,c}=F(\theta_a,\theta_{c},\theta_{b},\theta_{A},\theta_{C})=E(\theta_{b},\theta_{A},\theta_{C}).\]
Then $N/F$ is a Galois extension of degree $p^5$.
This is because ${\rm Gal}(N/E)$ is dual to the $\F_p[G]$-submodule $\langle [b]_E,[A]_E,[C]_E \rangle$ via Artin-Schreier theory,
and the proof of the claim above shows that $\dim_{\F_p}\langle [b]_E,[A]_E,[C]_E\rangle =3$. We have the following commutative diagram
\[
\xymatrix{
{\rm Gal}(N/F) \ar@{->>}[r] \ar@{->>}[d] & {\rm Gal}(H^{a,b}/F) \ar@{->>}[d]\\
{\rm Gal}(H^{b,c}/F) \ar@{->>}[r] & {\rm Gal}(F(\theta_{b})/F).
}
\]
So we have a homomorphism $\eta$ from ${\rm Gal}(N/F)$ to the pull-back ${\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\theta_{b})/F)} {\rm Gal}(H^{a,b}/F)$:
\[
\eta\colon {\rm Gal}(N/F) \longrightarrow {\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\theta_{b})/F)} {\rm Gal}(H^{a,b}/F),
\]
which makes the obvious diagram commute.
We claim that $\eta$ is injective. Indeed, let $\sigma$ be an element in $\ker\eta$. Then $\sigma\mid_{H^{a,b}}=1$ in ${\rm Gal}(H^{a,b}/F)$, and $\sigma\mid_{H^{b,c}}=1$ in ${\rm Gal}(H^{b,c}/F)$. Since $N$ is the compositum of $H^{a,b}$ and $H^{b,c}$, this implies that $\sigma=1$, as desired.
Since $|{\rm Gal}(H^{b,c}/F)\times_{{\rm Gal}(F(\theta_{b})/F)} {\rm Gal}(H^{a,b}/F)|=p^5=|{\rm Gal}(N/F)|$, we see that $\eta$ is actually an isomorphism.
As in the proof of Proposition~\ref{prop:Heisenberg char p}, we can choose an extension $\sigma_a\in {\rm Gal}(H^{a,b}/F)$ of $\sigma_a\in {\rm Gal}(F(\theta_{a},\theta_{b})/F)$ in such a way that
\[
\sigma_a(\theta_{A})= \theta_{A}+\theta_{b}.
\]
Since the square commutative diagram above is a pull-back, we can choose an extension $\sigma_a\in {\rm Gal}(N/F)$ of $\sigma_a\in {\rm Gal}(H^{a,b}/F)$ in such a way that
\[
\sigma_a\mid_{H^{b,c}}=1.
\]
Now we can choose any extension $\sigma_a\in {\rm Gal}(M/F)$ of $\sigma_a\in{\rm Gal}(N/F)$. Then we have
\[
\sigma_a(\theta_{A})= \theta_{A}+\theta_{b} \; \text{ and } \sigma_a\mid_{H^{b,c}}=1.
\]
Similarly, we can choose an extension $\sigma_c\in {\rm Gal}(M/F)$ of $\sigma_c\in {\rm Gal}(F(\theta_b,\theta_c)/F)$ in such a way that
\[
\sigma_c(\theta_{C})= \theta_{C}+\theta_{b}, \text{ and } \sigma_c\mid_{H^{a,b}}=1.
\]
We define $\sigma_b\in {\rm Gal}(M/E)$ to be the element which is dual to $[b]_E$ via Artin-Schreier theory. In other words, we require that
\[
\sigma_b(\theta_{b})=1+ \theta_{b},
\]
and $\sigma_b$ acts trivially on $\theta_{A}$, $\theta_{C}$ and $\theta_{\delta}$. We consider $\sigma_b$ as an element in ${\rm Gal}(M/F)$; then it is clear that $\sigma_b$ is an extension of $\sigma_b\in {\rm Gal}(F(\theta_{a},\theta_{b},\theta_{c})/F)$.
Let $W={\rm Gal}(M/E)$, and let $H={\rm Gal}(M/F)$, then we have the following exact sequence
\[
1\to W\to H\to G\to 1.
\]
By Artin-Schreier theory, it follows that $W$ is dual to $W^*$, and hence $W\simeq (\Z/p\Z)^4$. In particular, we have $|H|=p^6$.
Recall that from \cite[Theorem 1]{BD}, we know that the group $\U_4(\F_p)$ has a presentation with generators $s_1,s_2,s_3$ subject to the relations \eqref{eq:R} displayed in the proof of Theorem~\ref{thm:construction}.
Note that $|\Gal(M/F)|=p^6$.
So in order to show that $\Gal(M/F)\simeq \U_4(\F_p)$, we shall show that $\sigma_a,\sigma_b$ and $\sigma_c$ generate $\Gal(M/F)$ and they satisfy these relations~\eqref{eq:R}.
The proofs of the following claims are similar to the proofs of analogous claims in the proof of Theorem~\ref{thm:construction}. Therefore we shall omit them.
\\
\\
{\bf Claim:} The elements $\sigma_a$, $\sigma_b$ and $\sigma_c$ generate $\Gal(M/F)$.\\
{\bf Claim:} The order of $\sigma_a$ is $p$.\\
{\bf Claim:} The order of $\sigma_b$ is $p$.\\
{\bf Claim:} The order of $\sigma_c$ is $p$.\\
{\bf Claim:} $[\sigma_a,\sigma_c]=1$.\\
{\bf Claim:} $[\sigma_a,[\sigma_a,\sigma_b]]=[\sigma_b,[\sigma_a,\sigma_b]]=1$.\\
{\bf Claim:} $[\sigma_b,[\sigma_b,\sigma_c]]=[\sigma_c,[\sigma_b,\sigma_c]]=1$.\\
{\bf Claim:} $[[\sigma_a,\sigma_b],[\sigma_b,\sigma_c]]=1$.\\
\\
An explicit isomorphism $\varphi\colon {\rm Gal}(M/F)\to \U_4(\F_p)$ may be defined as
\[
\sigma_a \mapsto \begin{bmatrix}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \; \;
\sigma_b\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \;\;
\sigma_c\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{bmatrix}.
\]
\end{proof}
\section{Triple Massey products}
Let $G$ be a profinite group and $p$ a prime number. We consider the finite field $\F_p$ as a trivial discrete $G$-module. Let $\sC^\bullet=(C^\bullet(G,\F_p),\partial,\cup)$ be the differential graded algebra of inhomogeneous continuous cochains of $G$ with coefficients in $\F_p$ (see \cite[Ch.\ I, \S2]{NSW} and \cite[Section 3]{MT1}). For each $i=0,1,2,\ldots$, we write $H^i(G,\F_p)$ for the corresponding cohomology group. We denote by $Z^1(G,\F_p)$ the subgroup of $C^1(G,\F_p)$ consisting of all 1-cocycles. Because we use trivial action on the coefficients $\F_p$, we have $Z^1(G,\F_p)=H^1(G,\F_p)={\rm Hom}(G,\F_p)$. Let $x,y,z$ be elements in $H^1(G,\F_p)$. Assume that
\[
x\cup y=y\cup z=0\in H^2(G,\F_p).
\]
In this case we say that the triple Massey product $\langle x,y,z\rangle$ is defined. Then there exist cochains $a_{12}$ and $a_{23}$ in $C^1(G,\F_p)$ such that
\[
\partial a_{12}=x\cup y \; \text{ and } \partial a_{23}= y\cup z,
\]
in $C^2(G,\F_p)$. Then we say that $D:=\{x,y,z,a_{12},a_{23}\}$ is a {\it defining system} for the triple Massey product $\langle x, y, z\rangle$. Observe that
\[
\begin{aligned}
\partial (x\cup a_{23}+ a_{12}\cup z) & = \partial x\cup a_{23}-x\cup \partial a_{23}+\partial a_{12}\cup z -a_{12}\cup \partial z\\
&=0 \cup a_{23}-x\cup (y\cup z) +(x\cup y)\cup z - a_{12}\cup 0\\
&=0 \in C^2(G,\F_p).
\end{aligned}
\]
Therefore $x\cup a_{23}+a_{12}\cup z$ is a 2-cocycle. We define the value $\langle x, y, z\rangle_D$ of the triple Massey product $\langle x, y, z\rangle$ with respect to the defining system $D$ to be the cohomology class $[x \cup a_{23}+ a_{12}\cup z]$ in $H^2(G,\F_p)$. The set of all values $\langle x, y, z\rangle_D$, as $D$ runs over all defining systems, is called the triple Massey product $\langle x,y,z \rangle \subseteq H^2(G,\F_p)$. Note that we always have
\[
\langle x,y ,z \rangle = \langle x, y, z\rangle_D + x\cup H^1(G,\F_p) + H^1(G,\F_p)\cup z.
\]
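As an illustration (not needed for the argument), one can check all of this on a finite group: take $G=\U_4(\F_p)$ itself, with $x,y,z$ the three superdiagonal characters. The matrix entries supply an explicit defining system, and the resulting 2-cocycle is even a coboundary, in line with the vanishing result below. A Python sanity check for $p=2$:

```python
from itertools import product

p = 2                      # keeps |G| = p^6 = 64 small

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(4)) % p
                       for j in range(4)) for i in range(4))

# enumerate G = U_4(F_p): unitriangular matrices with 6 free entries
POS = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
G = []
for vals in product(range(p), repeat=6):
    M = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
    for (i, j), v in zip(POS, vals):
        M[i][j] = v
    G.append(tuple(map(tuple, M)))

x = lambda s: s[0][1]            # homomorphisms G -> F_p
y = lambda s: s[1][2]
z = lambda s: s[2][3]
a12 = lambda s: (-s[0][2]) % p   # 1-cochains completing the defining system
a23 = lambda s: (-s[1][3]) % p
m = lambda s: s[0][3]

def d1(f, s, t):                 # coboundary of a 1-cochain (trivial action)
    return (f(s) - f(mul(s, t)) + f(t)) % p

# da12 = x u y and da23 = y u z, so {x, y, z, a12, a23} is a defining system
defining = all(d1(a12, s, t) == (x(s) * y(t)) % p and
               d1(a23, s, t) == (y(s) * z(t)) % p
               for s in G for t in G)

# x u a23 + a12 u z is the coboundary of m, so its class vanishes
cobound = all((x(s) * a23(t) + a12(s) * z(t)) % p == d1(m, s, t)
              for s in G for t in G)
print(defining, cobound)  # True True
```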
We also have the following result.
\begin{lem}
\label{lem:additivity}
If the triple Massey products $\langle x,y,z\rangle $ and $\langle x,y^\prime,z\rangle$ are defined, then the triple Massey product $\langle x,y+y^\prime,z\rangle$ is defined, and
\[
\langle x,y+y^\prime,z\rangle=\langle x,y,z\rangle +\langle x,y^\prime,z\rangle.
\]
\end{lem}
\begin{proof} Let $\{x,y,z, a_{12}, a_{23}\}$ (respectively $\{x,y^\prime,z, a^\prime_{12}, a^\prime_{23}\}$) be a defining system for $\langle x,y,z\rangle$ (respectively $\langle x, y^\prime,z\rangle$). Then $\{x,y+y^\prime,z, a_{12}+a^\prime_{12}, a_{23}+a^\prime_{23}\}$ is a defining system for $\langle x, y+y^\prime,z\rangle$. We also have
\[
\begin{aligned}
\langle x,y,z\rangle+ \langle x, y^\prime,z\rangle = &[x\cup a_{23}+a_{12}\cup z] +x\cup H^1(G,\F_p)+ H^1(G,\F_p) \cup z \\
&+ [x\cup a^\prime_{23}+a^\prime_{12}\cup z] +x\cup H^1(G,\F_p)+ H^1(G,\F_p) \cup z \\
= & [x\cup (a_{23}+a^\prime_{23})+ (a_{12}+a^\prime_{12})\cup z] + x\cup H^1(G,\F_p)+ H^1(G,\F_p)\cup z\\
=&\langle x,y+y^\prime,z\rangle,
\end{aligned}
\]
as desired.
\end{proof}
The following lemma is a special case of a well-known fact (see \cite[Lemma 6.2.4 (ii)]{Fe}) but for the sake of convenience we provide its proof.
\begin{lem}
\label{lem:scalar Massey}
If the triple Massey product $\langle x,y,z\rangle$ is defined, then for any $\lambda\in \F_p$ the triple Massey product $\langle x,\lambda y,z\rangle$ is defined, and
\[
\langle x,\lambda y,z\rangle\supseteq \lambda\langle x,y,z\rangle.
\]
\end{lem}
\begin{proof}
Let $D=\{x,y,z, a_{12}, a_{23}\}$ be any defining system for $\langle x,y,z\rangle$. Clearly $D^\prime:=\{x,\lambda y,z, \lambda a_{12}, \lambda a_{23}\}$ is a defining system for $\langle x,\lambda y,z\rangle$, and
\[\lambda \langle x,y,z\rangle_D=\lambda[x\cup a_{23}+a_{12}\cup z]=[x\cup(\lambda a_{23})+(\lambda a_{12})\cup z]= \langle x,\lambda y,z\rangle_{D^\prime}.
\]
Therefore $\lambda\langle x,y,z\rangle\subseteq \langle x,\lambda y,z\rangle.$
\end{proof}
A direct consequence of Theorems~\ref{thm:construction},~\ref{thm:construction char not p} and~\ref{thm:construction char p} is the following result, which roughly says that every ``non-degenerate'' triple Massey product vanishes whenever it is defined.
\begin{prop}
\label{prop:nondegenerate Massey}
Let $F$ be an arbitrary field. Let $\chi_1,\chi_2,\chi_3$ be elements in ${\rm Hom}(G_F,\F_p)$. We assume that $\chi_1,\chi_2,\chi_3$ are $\F_p$-linearly independent. If the triple Massey product $\langle \chi_1,\chi_2,\chi_3\rangle$ is defined, then it contains 0.
\end{prop}
\begin{proof}Let $L$ be the fixed field of $(F)^s$ under the kernel of the surjection $(\chi_1,\chi_2,\chi_3)\colon G_{F}\to (\F_p)^3$. Then Theorems~\ref{thm:construction}, \ref{thm:construction char not p} and~\ref{thm:construction char p} imply that $L/F$ can be embedded in a Galois $\U_4(\F_p)$-extension $M/F$. Moreover there exist $\sigma_1,\sigma_2,\sigma_3$ in ${\rm Gal}(M/F)$ such that they generate ${\rm Gal}(M/F)$, and
\[
\begin{aligned}
\chi_1(\sigma_1)=1, \chi_1(\sigma_2)=0, \chi_1(\sigma_3)=0;\\
\chi_2(\sigma_1)=0, \chi_2(\sigma_2)=1, \chi_2(\sigma_3)=0;\\
\chi_3(\sigma_1)=0, \chi_3(\sigma_2)=0, \chi_3(\sigma_3)=1.
\end{aligned}
\]
(Note that for each $i=1,2,3$, $\chi_i$ is trivial on ${\rm Gal}(M/M_0)$, hence $\chi_i(\sigma_j)$ makes sense for every $j=1,2,3$.)
An explicit isomorphism $\varphi\colon {\rm Gal}(M/F)\to \U_4(\F_p)$ can be defined as
\[
\sigma_1 \mapsto \begin{bmatrix}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \; \;
\sigma_2\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{bmatrix}, \;\;
\sigma_3\mapsto \begin{bmatrix}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{bmatrix}.
\]
Let $\rho$ be the composite homomorphism $\rho\colon\Gal_F\to \Gal(M/F)\stackrel{\varphi}{\simeq} \U_4(\F_p)$. Then one can check that
\[
\begin{aligned}
\rho_{12} &=\chi_1,\; \rho_{23}=\chi_2, \; \rho_{34} =\chi_3.
\end{aligned}
\]
(Since all the maps $\rho,\chi_1,\chi_2,\chi_3$ factor through $\Gal(M/F)$, it is enough to check these equalities on elements $\sigma_1,\sigma_2,\sigma_3$.)
This implies that $\langle -\chi_1,-\chi_2,-\chi_3\rangle$ contains 0 by \cite[Theorem 2.4]{Dwy}. Hence $\langle \chi_1,\chi_2,\chi_3\rangle$ also contains 0.
\end{proof}
For the sake of completeness we include the following proposition, which together with Proposition~\ref{prop:nondegenerate Massey} immediately yields a complete new proof of a result first proved by E. Matzri \cite{Ma}. Matzri's result says that defined triple Massey products vanish over all fields containing a primitive $p$-th root of unity.
Alternative cohomological proofs of Matzri's result appear in \cite{EMa2} and \cite{MT5}. Our new proof of the crucial ``non-degenerate'' part of this result (Proposition~\ref{prop:nondegenerate Massey}), which relies on explicit constructions of $\U_4(\F_p)$-extensions, is very natural in light of Dwyer's result \cite[Theorem 2.4]{Dwy}.
Observe that in \cite{MT5} we extended this result to all fields.
\begin{prop}
\label{prop:degenerate Massey}
Assume that $\dim_{\F_p}\langle [a]_F,[b]_F,[c]_F\rangle\leq 2$. If the triple Massey product $\langle \chi_a,\chi_b,\chi_c\rangle$ is defined, then it contains 0.
\end{prop}
\begin{proof}
We may also assume that $a$, $b$ and $c$ are not in $(F^\times)^p$. The case $p=2$ was treated in \cite{MT1}, so we shall assume that $p>2$.
\\
\\
{\bf Case 1:} Assume that $a$ and $c$ are linearly dependent modulo $(F^\times)^p$. This case is considered in \cite[Proof of Theorem 4.10]{MT5}. We include a proof here for the convenience of the reader. Let $\varphi=\{\varphi_{ab},\varphi_{bc}\}$ be a defining system for $\langle \chi_a,\chi_b,\chi_c\rangle$. We have
\[
\begin{aligned}
{\rm res}_{\ker \chi_a}(\langle \chi_a,\chi_b,\chi_c\rangle_\varphi)&= \res_{\ker\chi_a}(\chi_a\cup \varphi_{bc}+\varphi_{ab}\cup\chi_c )\\
&= \res_{\ker\chi_a}(\chi_a)\cup \res_{\ker\chi_a}(\varphi_{bc})+ \res_{\ker\chi_a}(\varphi_{ab})\cup\res_{\ker\chi_a}(\chi_c)\\
&= 0\cup \res_{\ker\chi_a}(\varphi_{bc})+ \res_{\ker\chi_a}(\varphi_{ab})\cup 0\\
& = 0.
\end{aligned}
\]
Then by \cite[Chapter XIV, Proposition 2]{Se}, we have $\langle \chi_a,\chi_b,\chi_c\rangle_\varphi=\chi_a\cup\chi_x$ for some $x\in F^\times$. This implies that $\langle \chi_a,\chi_b,\chi_c\rangle$ contains 0.
\\
\\
{\bf Case 2:} Assume that $a$ and $c$ are linearly independent modulo $(F^\times)^p$. Then, since $\dim_{\F_p}\langle [a]_F,[b]_F,[c]_F\rangle\leq 2$, the class $[b]_F$ lies in $\langle [a]_F,[c]_F\rangle$. Hence there exist $\lambda,\mu\in \F_p$ such that
\[
\chi_b = \lambda\chi_a +\mu\chi_c.
\]
Then we have
\[
\langle \chi_a,\chi_b,\chi_c\rangle = \langle \chi_a,\lambda\chi_a,\chi_c\rangle +\langle \chi_a,\mu\chi_c,\chi_c\rangle \supseteq \lambda \langle \chi_a,\chi_a,\chi_c\rangle +\mu \langle \chi_a,\chi_c,\chi_c\rangle.
\]
(The equality follows from Lemma~\ref{lem:additivity} and the inclusion follows from Lemma~\ref{lem:scalar Massey}.) By \cite[Theorem 5.9]{MT5} (see also \cite[Proof of Theorem 4.10, Case 2]{MT5}), $\langle \chi_a,\chi_a,\chi_c\rangle$ and $\langle \chi_a,\chi_c,\chi_c\rangle$ both contain 0. Hence $\langle \chi_a,\chi_b,\chi_c\rangle$ also contains 0.
\end{proof}
\begin{thm}
\label{thm:U4}
Let $p$ be an arbitrary prime and $F$ any field. Then the following statements are equivalent.
\begin{enumerate}
\item There exist $\chi_1,\chi_2,\chi_3$ in ${\rm Hom}(G_F,\F_p)$ such that they are $\F_p$-linearly independent, and if ${\rm char} F\not=p$ then $\chi_1 \cup \chi_2 = \chi_2 \cup \chi_3 =0$.
\item There exists a Galois extension $M/F$ such that ${\rm Gal}(M/F)\simeq \U_4(\F_p)$.
\end{enumerate}
Moreover, assume that (1) holds, and let $L$ be the fixed field of $(F)^s$ under the kernel of the surjection $(\chi_1,\chi_2,\chi_3)\colon G_{F}\to (\F_p)^3$. Then
in (2) we can construct $M/F$ explicitly such that $L$ is embedded in $M$.
If $F$ contains a primitive $p$-th root of unity, then the two above conditions are also equivalent to the following condition.
\begin{enumerate}
\item[(3)] There exist $a,b,c\in F^\times$ such that $[F(\sqrt[p]{a},\sqrt[p]{b}, \sqrt[p]{c}):F]=p^3$ and $(a,b)=(b,c)=0$.
\end{enumerate}
If $F$ is of characteristic $p$, then the two conditions (1)-(2) above are also equivalent to the following condition.
\begin{enumerate}
\item[(3')] There exist $a,b,c\in F^\times$ such that $[F(\theta_{a},\theta_{b}, \theta_{c}):F]=p^3$.
\end{enumerate}
\end{thm}
\begin{proof} The implication that (1) implies (2) follows from Theorems~\ref{thm:construction}, \ref{thm:construction char not p} and~\ref{thm:construction char p}.
\\
\\
Now assume that $(2)$ holds. Let $\rho$ be the composite $\rho\colon G_F\surj{\rm Gal}(M/F)\simeq \U_4(\F_p)$. Let $\chi_1:=\rho_{12}$, $\chi_2:=\rho_{23}$ and $\chi_3:=\rho_{34}$. Then $\chi_1, \chi_2,\chi_3$ are elements in ${\rm Hom}(G_F,\F_p)$, and $(\chi_1,\chi_2,\chi_3)\colon G_F\to (\F_p)^3$ is surjective. This implies that $\chi_1,\chi_2,\chi_3$ are $\F_p$-linearly independent by \cite[Lemma 2.6]{MT4}.
On the other hand, since $\rho$ is a group homomorphism, we see that
\[
\chi_1\cup \chi_2 =\chi_2\cup\chi_3=0.
\]
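(Indeed, for all $\sigma,\tau\in G_F$, comparing the $(1,3)$-entries in the equality $\rho(\sigma\tau)=\rho(\sigma)\rho(\tau)$ gives
\[
\rho_{13}(\sigma\tau)=\rho_{13}(\sigma)+\rho_{12}(\sigma)\rho_{23}(\tau)+\rho_{13}(\tau),
\]
that is, $\partial\rho_{13}=-\rho_{12}\cup\rho_{23}=-\chi_1\cup\chi_2$, so $\chi_1\cup\chi_2$ is a coboundary. The same computation with the $(2,4)$-entries gives $\chi_2\cup\chi_3=-\partial\rho_{24}$.)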
Therefore $(1)$ holds.
\\
\\
Now we assume that $F$ contains a primitive $p$-th root of unity. Note that for any $a,b\in F^\times$, we have $\chi_a\cup\chi_b=0$ if and only if $(a,b)=0$ (see Subsection~\ref{subsec:norm residue}).
Then (1) is equivalent to (3) by Kummer theory, in conjunction with the observation that $[F(\sqrt[p]{a},\sqrt[p]{b}, \sqrt[p]{c}):F]=p^3$ if and only if $\chi_a,\chi_b,\chi_c$ are $\F_p$-linearly independent.
\\
\\
Now we assume that $F$ is of characteristic $p>0$. Then (1) is equivalent to (3') by Artin-Schreier theory, in conjunction with the observation that $[F(\theta_{a},\theta_{b}, \theta_{c}):F]=p^3$ if and only if $\chi_a,\chi_b,\chi_c$ are $\F_p$-linearly independent.
\end{proof}
\section{Introduction}\label{sec:Introduction}
Multi-agent prediction has been attracting great attention from both industry and academic researchers, due to the practical need to deal with complex problems. In most real-world scenarios, multiple agents coordinate in different roles but aim at achieving the same goal.
For example, in a war, an agent wants to recognize whether an approaching person is a hostile enemy or a common civilian, so that the agent and its collaborators can react accordingly. In a basketball game, an agent on the defensive team wants to anticipate the future trajectories of the offensive players, so that she and her teammates can make optimal defensive moves.
Designing a sequential model that captures such highly coordinated and non-deterministic multi-agent behavior remains an open challenge.
\begin{figure}[t]
\centering
\includegraphics[scale=0.17,trim={50 150 0 100},clip]{images/basketball_trajectory.pdf}
\caption{\textbf{Actual basketball game trajectories.} The trajectory data is from a Toronto Raptors NBA game in 2015, which contains offense (red), defense (blue), and ball (green) 2D overhead-view trajectories. \label{fig:basketballtrajectory}}
\end{figure}
In this paper, we focus on solving a multi-agent context-aware prediction problem --- forecasting trajectories in sports games such as basketball and soccer.
In demonstrations of basketball or soccer games, each game segment corresponds to an \emph{unordered} set of $K$ trajectories, which includes $K-1$ players (i.e., agents) and a ball.
For example, in the professional basketball games illustrated in Figure~\ref{fig:basketballtrajectory}, different players play in different roles (e.g., Shooting Guard) but coordinate together to achieve a common goal (i.e., win the game).
Roles may change during a game, and both the roles of the agents and the role-assignment mechanism are unknown (to be inferred) from the demonstrations.
Previous multi-agent trajectory prediction approaches~\cite{zheng2016generating,felsen2018will} use various heuristics (e.g., tree-based role alignment~\cite{sha2017fine}) to assign players to roles, thereby fixing an agent ordering across games. In the most recent research~\cite{yeh2019diverse,sun2019stochastic}, agents are considered as nodes of a fully connected graph, which enables aggregating the players' information by applying GNNs. In particular, these methods use reduction operations such as sum or average to aggregate the agents' information.
However, these methods simply aggregate the agents' information at each time step and treat the dependencies among agents equally. Such dependencies play a key role in multi-agent games, and the dependencies of different positions/players are significantly different. Imagine a basketball game: a player's future movement depends not only on the current and past states of the other agents, but also on the dependencies among those agents.
In a soccer game, the right back is less likely to pass the ball to the left midfielder than to the right midfielder. This kind of dependency, ignored by previous works, needs to be carefully investigated.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.113,trim={0 0 0 0cm},clip]{images/overview.pdf}
\caption{\textbf{Architectural elements in our approach.} (a) \emph{Spatial layer}: at each time step $t$, the players are considered as nodes in a fully connected graph. Hence, we generate a `graph embedding' according to the players information such as their positions at time $t$ by leveraging the attention mechanism that enables modeling the dependencies among players. \emph{Temporal Layer}: We use TCN to model the temporal dependency among the players throughout a demonstration. This convolutional method supports long effective history sizes as well as other features, such as parallelism and stable gradients. (b) \emph{Attention block}: The inputs to the attention block are the tracking data of $K$ players at a time $t$, $\mathbf{x}^t = \{\mathbf{x}^t_1, \mathbf{x}^t_2, . . . , \mathbf{x}^t_K\}$. The output of attention block is the `graph' embedding $g^t$ in the time $t$. Combining with the `graph' embeddings in other time steps, we have $g = {g^1, g^2, ..., g^T}$, which is the input to the residual block in TCN. \emph{TCN Residual block} allows layers to model modifications to the identity mapping, which has repeatedly been shown to benefit very deep networks~\cite{bai2018empirical}. \label{fig:systemoverview}}
\end{figure*}
To address such challenges, in this paper, we propose an attention-based multi-agent trajectory prediction model. To the best of our knowledge, we are the first to propose an approach that is able to capture the fine-grained dependencies among the players. Our model delivers improved forecasts over current trajectory prediction approaches by:
\begin{itemize}
\item Using a fully connected graph structure to achieve permutation equivariance.
\item Introducing the attention mechanism, whose attention coefficients enable modeling dependencies among agents. We are the first to leverage the attention mechanism for multi-agent trajectory prediction.
\item Using a temporal convolutional network (TCN) as the sequential model (instead of recurrent networks such as LSTM) to support a long effective history and provide important features such as parallelism and stable gradients. We are also the first to use a convolutional sequential model for multi-agent trajectory prediction.
\item The proposed approach is trained end-to-end to predict future states of agents.
\end{itemize}
To verify the performance of the proposed model, we have tested it on two different datasets: basketball trajectories (from Toronto Raptors NBA games illustrated by Figure~\ref{fig:basketballtrajectory}), and soccer trajectories (from 45 games in the Premier League). We show that our approach is able to predict the future trajectories more accurately than the state-of-the-art.
\paragraph*{Organization} The rest of the paper is organized as follows.
Section~\ref{sec:ourmethod} describes the main design of our model. Section~\ref{sec:Experimentsetup} shows the baseline models we compare in the experiments, as well as the datasets and how we preprocess the datasets. We report the evaluation results in Section~\ref{sec:evaluation}. Finally, we conclude the paper in Section~\ref{sec:Conclusion}.
\section{Our Method} \label{sec:ourmethod}
In this section, we present the architecture of our model. In particular, we first discuss the problem setting in this paper. Then, we introduce the architecture of the proposed approach (an overview is shown in Figure~\ref{fig:systemoverview}), which includes a spatial module (a graph structure with an attention mechanism) and a temporal module (a TCN). Last, we show how our model is trained end-to-end to infer the future trajectories of agents.
\subsection{Problem Setting}
In our problem setting, we assume that we have $K$ agents (i.e., $K-1$ players and a ball in a sports-game scenario) with a common goal (or sequence of goals) in a set of demonstrations, $D=\{D_1, D_2, ..., D_U\}$.
Let $\mathbf{x}_k^t \in \mathbb{R}^2$ denote the 2-dimensional location of agent $k$ at time $t$, and let $\mathbf{x}^t = \{\mathbf{x}^t_1, \mathbf{x}^t_2, . . . , \mathbf{x}^t_K\}$ be the corresponding states (e.g., positions in a sports game) of the $K$ agents at time $t$. Consequently, $D_i = \{\mathbf{x}^1, \mathbf{x}^2, . . . , \mathbf{x}^T\}$ is a demonstration that includes a set of `snapshots' of the agents' states, corresponding to one segment of a game, where $T$ is the total length of this segment, and $D$ represents all demonstrations from the data. Our goal is to build an end-to-end approach to model/predict the trajectories of the agents over the sequential data $D$.
\subsection{Spatial Module with Attention Mechanism}
Trajectory data consists of $K$ unordered trajectories and does not contain information about the order of the $K$ agents. In order to build a model where the modeled probability is the same across demonstrations in $D$, we need to handle this ordering discrepancy. Prior works \cite{zheng2016generating,felsen2018will,sha2017fine,lucey2013representing} solve this ordering discrepancy problem by `sorting' the agents before (or during) modeling. In other words, similarly behaving agents are placed at the same index across different demonstrations. In this paper, to solve the ordering discrepancy problem, we use a graph-based method to achieve permutation equivariance. In particular, the $K$ agents are considered as $K$ nodes that are fully connected in a graph, and the agents' states are the attributes of the corresponding nodes. Hence, a demonstration $D_i$ can be considered as a graph whose node attributes change dynamically over time. We then use a graph-based neural network, which handles the graph input properly because it propagates on each node independently, ignoring the input order of the nodes. In other words, the output of the spatial module is invariant to the order of the agents.
In addition, we also introduce the attention mechanism to the spatial module. Attention-based graph model are widely discussed in several research areas such as traffic prediction~\cite{guo2019attention,yu2017spatio}, action recognition~\cite{yan2018spatial}, multivariate time series modeling~\cite{wu2020connecting}, and code similarity detection~\cite{buggraph}.
Here our approach is able to model the fine-grained dependencies among the players across the demonstrations. Our approach computes the hidden representation of each node in the graph by attending over its neighbors, following a self-attention strategy. Compared with vanilla GNN-based methods \cite{sun2019stochastic,yeh2019diverse}, our attention-based approach has the following benefits:
\begin{itemize}
\item It employs the attention coefficient that is able to model the fine-grained dependency among the players. For example, the coefficient between node $i$ and node $j$ indicates the importance of node $j$'s features to node $i$, which reveals the relation/dependency between player $i$ and player $j$.
\item The operation of our approach is efficient: we follow the design of the Graph Attention Network (GAT), which is parallelizable across node-neighbor pairs. Moreover, our model allows for (implicitly) assigning different importances to nodes of the same neighborhood, enabling a leap in model capacity~\cite{velivckovic2017graph}.
\end{itemize}
A single round of graph convolution with the attention mechanism is illustrated in Figure~\ref{fig:systemoverview}(b). In particular, we generate a graph embedding $g^t$ at time $t$. The output of the spatial module for a demonstration is a set of graph embeddings $\mathbf{g} = \{g^1, g^2, ..., g^t\}$, given a known input sequence $D_i = \{\mathbf{x}^1, \mathbf{x}^2, . . . , \mathbf{x}^t\}$. Detailed information on the spatial module is given in APPENDIX.A.
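As a concrete illustration, a single attention round over $K$ fully connected agents can be sketched in NumPy with GAT-style pairwise scoring; the dimensions and random weights below are purely illustrative and not the trained model:

```python
import numpy as np

def attention_round(X, W, a, leaky_slope=0.2):
    """One self-attention round over K fully connected agents.

    X : (K, d_in) agent states at one time step (e.g. 2-D positions).
    W : (d_in, d_out) shared linear transform.
    a : (2 * d_out,) attention vector scoring each ordered node pair.
    Returns (K, d_out) updated node embeddings; mean-pooling them gives
    a permutation-invariant graph embedding g^t.
    """
    H = X @ W                                    # (K, d_out)
    K_agents = H.shape[0]
    # e[i, j]: unnormalized importance of agent j's features to agent i
    pair = np.concatenate(
        [np.repeat(H, K_agents, axis=0),         # h_i repeated
         np.tile(H, (K_agents, 1))], axis=1)     # h_j tiled
    e = (pair @ a).reshape(K_agents, K_agents)
    e = np.where(e > 0, e, leaky_slope * e)      # LeakyReLU
    # row-wise softmax -> attention coefficients alpha[i, j]
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ H                             # weighted aggregation

rng = np.random.default_rng(0)
X = rng.normal(size=(11, 2))                     # 10 players + ball, 2-D
W = rng.normal(size=(2, 8))
a = rng.normal(size=(16,))
nodes = attention_round(X, W, a)
g_t = nodes.mean(axis=0)                         # graph embedding at time t
```

Because all agents share the same weights, permuting the input rows permutes the output rows accordingly, so the mean-pooled embedding $g^t$ does not depend on the agent ordering.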
\subsection{Temporal Module with Convolutional Nets}
Instead of using recurrent networks (e.g., LSTM) to model the temporal dependency in a demonstration $D_i$, our temporal module is built on a convolutional network, TCN, which employs very deep networks (augmented with residual layers) and dilated convolutions to support long effective history sizes. In this paper, we use TCN as the sequential model to predict the trajectories. In addition to supporting a long effective history, TCN also has the following advantages: 1) \emph{Parallelism}: unlike in RNNs, where the predictions for later timesteps must wait for their predecessors to complete, convolutions can be done in parallel. 2) \emph{Stable gradients}: unlike recurrent architectures, TCN has a backpropagation path different from the temporal direction of the sequence, and thus avoids exploding/vanishing gradients. 3) \emph{Low memory requirement for training}: RNNs and LSTMs can easily use up a lot of memory to store the partial results for their multiple cell gates, whereas in a TCN the filters are shared across a layer, with the backpropagation path depending only on network depth.
The goal of the temporal module is to model a sequence of graph embeddings $\mathbf{g} = \{g^1, g^2, ..., g^t\}$ generated from the spatial module, and to infer the future unseen states of the agents $\mathbf{x} = \{x^{t+1}, x^{t+2}, ..., x^T\}$, where each $g^i$ is a real-valued embedding vector. Note that the key constraint in this module is that, to predict the output $x^{t+m}$ for time $t+m$, we are only able to use those inputs that have been previously observed: $\{g^1, g^2, ..., g^t, ..., g^{t+m-1}\}$.
By applying the temporal module $T$ on $\mathbf{g} = \{g^1, g^2, ..., g^t\}$, we have the predicted states of the graphs at unseen time steps $t+1, t+2, ..., T$, as shown in Equation~\ref{eq:temporalprediction}:
\begin{equation}
\mathbf{\hat{g}} = \{\hat{g}^{t+1}, \hat{g}^{t+2}, ..., \hat{g}^{T}\} = T(\mathbf{g})
\label{eq:temporalprediction}
\end{equation}
Detailed information of temporal module is discussed in APPENDIX.B.
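The causality constraint above can be made concrete with a toy causal, dilated 1-D convolution over the embedding sequence (a sketch only; the actual TCN stacks such convolutions with many filters inside residual blocks):

```python
import numpy as np

def causal_dilated_conv(g, w, dilation=1):
    """Causal dilated 1-D convolution over a sequence of graph embeddings.

    g : (T, d) sequence g^1..g^T; w : (k, d) filter of width k.
    Output y[t] depends only on g[t], g[t-dilation], ..., never on the
    future -- the constraint needed to predict x^{t+m} from past inputs.
    """
    T, d = g.shape
    k = w.shape[0]
    pad = (k - 1) * dilation
    gp = np.vstack([np.zeros((pad, d)), g])   # left (causal) zero-padding
    y = np.empty(T)
    for t in range(T):
        taps = gp[t + pad - np.arange(k) * dilation]  # current + past taps
        y[t] = np.sum(taps * w)
    return y

rng = np.random.default_rng(1)
g = rng.normal(size=(10, 4))   # T=10 embeddings of dimension 4
w = rng.normal(size=(3, 4))    # filter width k=3
y = causal_dilated_conv(g, w, dilation=2)
```

Changing an input at time $t_0$ can only affect outputs at times $t \ge t_0$, which is exactly the left-padded causal structure that lets the TCN be trained on all time steps in parallel.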
\subsection{End-to-end training}
In order to compare with the actual agents' 2-D states $\mathbf{x}$, we apply a fully connected layer to $\mathbf{\hat{g}}$ to reduce its dimension:
\begin{equation}\label{eq:reducedemention}
\mathbf{\hat{x}} = \{\hat{x}^{t+1}, \hat{x}^{t+2}, ..., \hat{x}^{T}\} = F(\mathbf{\hat{g}})
\end{equation}
Then the goal is to minimize the $\ell_2$ loss between $\mathbf{\hat{x}} = \{\hat{x}^{t+1}, \hat{x}^{t+2}, ..., \hat{x}^{T}\}$ and the agents' actual states $\mathbf{x} = \{x^{t+1}, x^{t+2}, ..., x^{T}\}$:
\begin{equation}\label{eq:Loss}
L_{2}(\mathbf{x}, \mathbf{\hat{x}})
\end{equation}
Here $\mathbf{x}$ includes a set of states $x_k^{t+m}$ that are the positions of agent $k$ at time $t+m$.
Note that it is not a one-time feed-forward prediction for all unseen time steps. To predict a distant position (e.g., $x^{t+i}$), we first decode all of its previous positions, $\{x^{t+1}, x^{t+2}, ..., x^{t+i-1}\}$, to guarantee the accuracy of the prediction.
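This step-by-step decoding can be sketched as an autoregressive loop; the \texttt{spatial}, \texttt{temporal}, and \texttt{readout} callables below are toy stand-ins for the trained modules, not our actual networks:

```python
import numpy as np

def rollout(x_obs, spatial, temporal, readout, horizon):
    """Autoregressive prediction of `horizon` future agent states.

    x_obs : (t, K, 2) observed positions of K agents.
    Each future step t+m is decoded only after all earlier steps
    t+1, ..., t+m-1 have been decoded and fed back as inputs.
    """
    history = list(x_obs)
    preds = []
    for _ in range(horizon):
        g = np.stack([spatial(x) for x in history])  # embeddings so far
        g_next = temporal(g)                         # next graph embedding
        x_next = readout(g_next)                     # (K, 2) positions
        preds.append(x_next)
        history.append(x_next)                       # feed back for next step
    return np.stack(preds)                           # (horizon, K, 2)

# toy stand-ins: mean-pool spatial module, last-step temporal module,
# broadcast read-out (purely illustrative)
K = 5
spatial = lambda x: x.mean(axis=0)       # (K, 2) -> (2,)
temporal = lambda g: g[-1]               # use the latest embedding
readout = lambda g: np.tile(g, (K, 1))   # (2,) -> (K, 2)
x_obs = np.ones((6, K, 2))
future = rollout(x_obs, spatial, temporal, readout, horizon=4)
```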
\section{Experiments Setup} \label{sec:Experimentsetup}
In this section, we discuss the experiment setup, including the baseline models we compare in the evaluation, datasets we use, and the implementation details.
\subsection{Models}\label{sec:models}
We compare our proposed approach with several deep learning baselines:
\begin{itemize}
\item \textbf{Velocity}: We use the velocity inference as a simple baseline for the sanity check, i.e., each of the agent's predictions is linearly inferred using its past observed velocity.
\item \textbf{LSTM~\cite{hochreiter1997long}} is a recurrent sequential-model baseline that uses a set of gates to control the flow of information. The model uses an MLP as the decoder for prediction.
\item \textbf{TCN~\cite{bai2018empirical}} is a convolutional sequential-model baseline that provides several advantages over recurrent sequential models. The model uses an MLP as the decoder for prediction.
\item \textbf{HMM + LSTM~\cite{le2017coordinated}} is a permutation non-equivariant model that leverages HMMs to capture the ordering of agents and uses an LSTM to model the temporal dependency.
\item \textbf{Graph + VRNNs~\cite{yeh2019diverse}} is a graph-based permutation-equivariant model. It leverages vanilla GNNs for permutation equivariance and VRNNs for the temporal dependency.
\item \textbf{Graph + Attention + TCN (Ours)} uses a fully connected graph modeled with the attention mechanism to capture the interactions among agents. A TCN models the temporal dependency, avoiding the drawbacks of recurrent models.
\item \textbf{Other Variants}: We also consider models with different combinations of the neural-network components (i.e., \textbf{Graph + Attention + LSTM} and \textbf{HMM + TCN}) to verify the necessity of each component.
\end{itemize}
\subsection{Datasets}\label{sec:datasets}
In this work, we use two datasets in the experiments:
\begin{itemize}
\item \textbf{Basketball dataset}\footnote{NBA Toronto Raptors Game Records: \url{http://www.cs.toronto.edu/~urtasun/courses/CSC2541_Winter17/project_2.pdf} (Feb, 2017)} is provided with SportVU trajectory and play-by-play data for 42 Toronto Raptors games in Fall 2015. We only use the trajectory data in our experiment. The original dataset includes 10 players and 1 ball, and records the trajectories at 25 frames per second. It covers the whole court throughout the game. For each frame, the positions of the players and the ball are provided. We preprocess the data by the following steps: 1) removing the timeframes where the ball or players are not present on the court; 2) discarding the timeframes that are not single sided; 3) resampling the data to 5~Hz; and 4) normalizing the trajectories to be in the range of [-1, 1].
\item \textbf{Soccer dataset~\cite{le2017coordinated}} contains trajectories of 22 soccer players and 1 ball from multiple anonymous teams in the Premier League. The dataset amounts to approximately 45 games' worth of playing time, with redundant and ``dead'' situations removed. It includes a training set with 7,500 sequences and two separate sets of sequences for testing. The sequences are of different lengths and are sampled at 10~Hz. We follow the work~\cite{yeh2019diverse} to preprocess the data. First, we split the data into segments of length 50 using a sliding window with 50\% overlap on both the training and test sets. The trajectories are centered and normalized to be in the range of [-1, 1]. Note that, although the goalkeepers tend not to move much in a demonstration, we still model them, because we believe the movement of the goalkeepers affects the other players' trajectories.
\end{itemize}
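The resampling and normalization steps above can be sketched as follows; the synthetic trajectory and court extents here are only for illustration:

```python
import numpy as np

def preprocess(traj, src_hz=25, dst_hz=5, lo=None, hi=None):
    """Downsample a trajectory and normalize coordinates to [-1, 1].

    traj : (T, K, 2) positions recorded at src_hz frames per second.
    lo, hi : per-axis extents; estimated from the data if omitted.
    """
    step = src_hz // dst_hz
    traj = traj[::step]                          # e.g. 25 Hz -> 5 Hz
    if lo is None:
        lo = traj.min(axis=(0, 1))
    if hi is None:
        hi = traj.max(axis=(0, 1))
    return 2.0 * (traj - lo) / (hi - lo) - 1.0   # map to [-1, 1]

# synthetic linear drift of 11 objects (10 players + ball) over 250 frames
frames = np.linspace(0, 94, 250)[:, None, None] * np.ones((250, 11, 2))
out = preprocess(frames)
```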
\setlength\extrarowheight{2pt}
\begin{table*}[!htbp]
\centering
\caption{Quantitative results on the \emph{Basketball} and \emph{Soccer} datasets, modeling the offense team and the defense team respectively. Each demonstration includes 10 seconds in total, with 6 seconds of observed data and 4 seconds of data to be predicted. We report the mean and the standard deviation of the mean. Lower numbers are better, and bold indicates the best results. \emph{Average error} and \emph{Max error} have units of feet (in the basketball dataset) or meters (in the soccer dataset); \emph{miss rate} is a percentage.}
\setlength\tabcolsep{3.5pt}
\small
\begin{tabular}{l|c |c|c c c|c c c}
\Xhline{2\arrayrulewidth}
\multirow{2}{2.8cm}{\centering \textbf{Methods}} &\multirow{2}{*}{\textbf{Order}} & \multirow{2}{*}{\textbf{Team}} & \multicolumn{3}{c|}{\textbf{\emph{Basketball} (ft)}} & \multicolumn{3}{c}{\textbf{\emph{Soccer} (m)}} \\ \cline{4-9}
& & & Avg $\ell_2$ error & Max error & Miss rate &Avg $\ell_2$ error & Max error & Miss rate \\ \midrule
Velocity & None & \multirow{8}{*}{Defense} &$09.80 \pm .03$ & $16.10 \pm .05 $ & $73.01 \pm .09 $ & $04.63 \pm .02 $ & $08.84 \pm .02 $ & $81.41 \pm .12 $ \\
LSTM & None & &$10.02 \pm .04$ & $18.33 \pm .05 $ & $75.52 \pm .09 $ & $05.37 \pm .02 $ & $09.57 \pm .02 $ & $84.21 \pm .12 $ \\
TCN & None & &$09.42 \pm .02$ & $15.75 \pm .04 $ & $72.74 \pm .09 $ & $04.17 \pm .02 $ & $07.94 \pm .02 $ & $77.51 \pm .12 $ \\
HMM + LSTM & Role-based & &$09.24 \pm .03$ & $13.59 \pm .04$ & $70.74 \pm .08 $ & $03.34 \pm .01 $ & $07.67 \pm .02 $ & $68.46 \pm .07 $ \\
HMM + TCN & Role-based & &$08.79 \pm .02$ & $12.14 \pm .04$ & $67.81 \pm .08 $ & $03.08 \pm .01 $ & $06.99 \pm .02 $ & $64.21 \pm .07 $ \\
Graph + VRNNs& Equivariant & &$07.22 \pm .01$ & $\pmb{9.88 \pm .02} $ & $63.21\pm.05$ & $03.02 \pm .01 $ & $06.21 \pm .01 $ & $62.11 \pm .04 $ \\
Graph + Attention + LSTM & Equivariant& & $07.21 \pm .01$ & $10.21 \pm .02$ & $64.64 \pm .05$ & $02.91 \pm .01$ & $06.06 \pm .02$ & $58.75 \pm .02$ \\
Graph + Attention + TCN & Equivariant & & $\pmb{06.95 \pm .01}$ & $09.91 \pm .01$ & $\pmb{59.44 \pm .05}$ & $\pmb{02.65 \pm .01}$ & $\pmb{05.81 \pm .02}$ & $\pmb{55.63 \pm .03}$ \\ \midrule
Velocity & None & \multirow{8}{*}{Offense} & $10.21 \pm .04 $ & $16.50 \pm .05 $ & $74.23 \pm .08 $ & $04.67 \pm .02 $ & $08.96 \pm .02 $ & $83.02 \pm .11 $ \\
LSTM & None & &$10.44 \pm .03$ & $18.94 \pm .05 $ & $76.23 \pm .09 $ & $05.89 \pm .02 $ & $10.40 \pm .02 $ & $85.13 \pm .13 $ \\
TCN & None & &$09.94 \pm .03$ & $16.22 \pm .05 $ & $73.58 \pm .09 $ & $04.84 \pm .02 $ & $08.75 \pm .02 $ & $80.14 \pm .12 $ \\
HMM + LSTM & Role-based & & $09.51 \pm .03 $ & $14.23 \pm .03$ & $71.33 \pm .07$ & $03.78 \pm .01 $ & $07.82 \pm .02 $ & $70.22 \pm .05 $ \\
HMM + TCN & Role-based & &$09.23 \pm .03$ & $13.59 \pm .04$ & $70.74 \pm .08 $ & $03.34 \pm .01 $ & $07.68 \pm .02 $ & $68.41 \pm .07 $ \\
Graph + VRNNs & Equivariant & & $08.80 \pm .01 $ & $13.56 \pm .04 $ & $68.71\pm.06$ & $03.17 \pm .01 $ & $06.50 \pm .01 $ & $64.47 \pm .05 $ \\
Graph + Attention + LSTM & Equivariant & & $07.68 \pm .01$ & $11.92 \pm .01$ & $65.33 \pm .05$ & $\pmb{03.04 \pm .02}$ & $06.77 \pm .02$ & $63.10 \pm .03$ \\
Graph + Attention + TCN & Equivariant & & $\pmb{07.54 \pm .01}$ & $\pmb{10.64 \pm .01}$ & $\pmb{61.21 \pm .05}$& $03.08 \pm .01$ & $\pmb{06.20 \pm .02}$ & $\pmb{62.10 \pm .03}$ \\ \Xhline{2\arrayrulewidth}
\end{tabular} \label{tb:basiccomparison}
\end{table*}
\subsection{Implementation details}
The final goal is to predict the future trajectories of the agents. To do so, according to their past trajectories, we first infer the current state (2-D position) of all the objects (players and ball). In particular, we minimize the normalized $\ell_2$ distance between the ground-truth locations and the predicted locations of all objects. As in imitation learning~\cite{le2017coordinated}, we predict the future trajectories by minimizing the $\ell_2$ loss as well.
We train all the models using the standard Adam optimizer. To prevent over-fitting, we select the best-performing model using the log-likelihood on the validation set. The models are trained on 4 V100 GPUs with synchronous training and a batch size of 8 per GPU. The initial learning rate is 0.0005, and it is decayed exponentially by a factor of 0.999 per epoch.
\section{Evaluation}\label{sec:evaluation}
In this section, we compare our approach with various baselines (discussed in Section~\ref{sec:models}) on two datasets: modeling of basketball and soccer game trajectories.
\subsection{Evaluation Metrics}
We evaluate the models on the task of predicting future trajectories, i.e., conditioned on the first $n$ seconds of all agents' trajectories, we predict the following (future) $m$ seconds of trajectories. To demonstrate the efficiency and effectiveness of our approach, we evaluate the following metrics:
\begin{itemize}
\item \textbf{Average error} is the $\ell_2$ error between predicted trajectories and the ground truth, averaged over each time step for each agent (as shown in Equation~\ref{eq:Loss}). For each test run, we randomly sample 20 data points and report the \emph{Average error}.
\item \textbf{Max error} is the maximum $\ell_2$ error between the prediction and ground truth for an agent trajectory, averaged over all agent trajectories. For each test run, we randomly sample 20 data points and report the \emph{Max error}.
\item \textbf{Miss rate} is calculated as the fraction of time steps for which the $\ell_2$ error exceeds 3 ft in the basketball games and 1 meter in the soccer games. This is reported on the best out of 20 data points per test run.
\end{itemize}
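As an illustration, the three metrics can be computed for a single test run as sketched below (the array shapes, function name, and default threshold are our own choices; the 3 ft / 1 m thresholds follow the text):

```python
import numpy as np

def trajectory_metrics(pred, truth, miss_threshold=3.0):
    """pred, truth: (n_agents, n_steps, 2) arrays of 2-D positions."""
    # Per-agent, per-step Euclidean (l2) error.
    err = np.linalg.norm(pred - truth, axis=-1)   # shape (n_agents, n_steps)
    avg_error = err.mean()                        # averaged over agents and steps
    max_error = err.max(axis=1).mean()            # per-trajectory maximum, then averaged
    miss_rate = (err > miss_threshold).mean()     # fraction of steps above the threshold
    return avg_error, max_error, miss_rate
```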
\subsection{Basic comparison}
We first compare our approach with the baselines in terms of \emph{average error}, \emph{max error} and \emph{miss rate}. In particular, we compare the methods run on the offense team as well as on the defense team. Note that in the soccer dataset, although the trajectories of the goalkeepers are not predicted, the model is conditioned on the goalkeepers' information. For the basketball and soccer games, we consider 10 seconds in total, with 6 seconds of data observed and 4 seconds unobserved (to be predicted). Table~\ref{tb:basiccomparison} shows the quantitative results.
Intuitively, predicting the trajectories of defense players is easier than predicting those of offense players, because the defense players react to the actions of the offense team; their motion therefore follows more directly from the observed play. This intuition is confirmed by the evaluation results (Table~\ref{tb:basiccomparison}), where the prediction for the defense team improves by around 8\% compared with the offense team. We also observe that the velocity baseline outperforms the simple LSTM on all three metrics. In addition, similar to the results shown in~\cite{bai2018empirical}, the convolutional sequential model performs better than the recurrent sequential model (i.e., the LSTM). The same conclusion holds in the comparisons where the players' order is taken into account (i.e., the role-based and permutation-equivariant models).
We also perform several ablation studies to verify the effectiveness of each of the components. As shown in Table~\ref{tb:basiccomparison}, graph-based models outperform all of the non-graph-based models. Furthermore, the attention mechanism indeed learns varied dependencies among agents, as most of the metrics are lower than those of the non-attention graph-based model. Note that the datasets used in this evaluation do not differentiate the teams within the offense or defense roles. In other words, the offense model (or defense model) learns the behavior of multiple offense teams (or defense teams), and does not represent the strategy of a single team. As discussed in Section~\ref{sec:Introduction}, the attention mechanism is able to learn the patterns (or strategy) of a specific team. In the next section, the evaluation is therefore performed on two datasets derived from the basketball dataset: \emph{basketball\_tor\_offense}, with the Toronto Raptors as the offense team, and \emph{basketball\_tor\_defense}, with the Toronto Raptors as the defense team.
\setlength\extrarowheight{2pt}
\begin{table*}[!htbp]
\centering
\caption{Quantitative results on the \emph{basketball\_tor\_offense} and \emph{basketball\_tor\_defense} datasets, modeling the offense team and the defense team, respectively. Each demonstration includes 10 seconds in total, with 6 seconds of data observed and 4 seconds to be predicted. We report the mean and the standard deviation of the mean. Lower numbers are better, and bold marks the best results. \emph{Average error} and \emph{Max error} are given in feet (basketball dataset) or meters (soccer dataset); \emph{miss rate} is given in percent.}
\setlength\tabcolsep{3.5pt}
\small
\begin{tabular}{l|c |c|c c c|c c c}
\Xhline{2\arrayrulewidth}
\multirow{2}{2.8cm}{\centering \textbf{Methods}} &\multirow{2}{*}{\textbf{Order}} & \multirow{2}{*}{\textbf{Team}} & \multicolumn{3}{c|}{\textbf{\emph{basketball\_tor\_defense} (ft)}} & \multicolumn{3}{c}{\textbf{\emph{basketball\_tor\_offense} (ft)}} \\ \cline{4-9}
& & & Avg $\ell_2$ error & Max error & Miss rate &Avg $\ell_2$ error & Max error & Miss rate \\ \midrule
HMM + LSTM&Role-based&\multirow{4}{*}{Defense} &$09.24 \pm .03$ & $14.01 \pm .04$ & $70.64 \pm .08 $ &$09.23 \pm .03$ & $14.20 \pm .04$ & $70.72 \pm .08 $ \\
Graph + VRNNs& Equivariant & &$07.02 \pm .01$ & $09.44 \pm .02 $ & $63.01\pm.05$ &$07.22 \pm .01$ & $\pmb{09.80 \pm .02}$ & $63.21\pm.05$ \\
Graph + Attention + LSTM & Equivariant& & $05.41 \pm .01$ & $07.21 \pm .02$ & $53.64 \pm .05$ & $07.01 \pm .01$ & $10.31 \pm .02$ & $62.64 \pm .05$ \\
Graph + Attention + TCN & Equivariant & & $\pmb{04.95 \pm .01}$ & $\pmb{06.12 \pm .01}$ & $\pmb{48.44 \pm .05}$ & $\pmb{06.20 \pm .01}$ & $09.81 \pm .01 $ & $\pmb{59.44 \pm .05}$ \\ \midrule
HMM + LSTM&Role-based&\multirow{4}{*}{Offense}& $09.41 \pm .03 $ & $14.25 \pm .03$ & $71.13 \pm .07$ &$09.33 \pm .03$ & $14.11 \pm .04$ & $70.69 \pm .08 $ \\
Graph + VRNNs & Equivariant & & $08.82 \pm .01 $ & $13.51 \pm .04 $ & $68.11\pm.06$ & $07.62 \pm .01 $ & $12.11 \pm .04 $ & $64.31\pm.06$ \\
Graph + Attention + LSTM & Equivariant & & $07.63 \pm .01$ & $11.92 \pm .01$ & $65.23 \pm .05$ & $07.52 \pm .02$ & $10.64 \pm .02$ & $63.34 \pm .03$ \\
Graph + Attention + TCN & Equivariant & & $\pmb{07.44 \pm .01}$ & $\pmb{10.54 \pm .01}$ & $\pmb{61.11 \pm .05}$& $\pmb{06.21 \pm .01}$ & $\pmb{09.84 \pm .04}$ & $\pmb{58.92 \pm .05}$ \\ \Xhline{2\arrayrulewidth}
\end{tabular} \label{tb:modelingteam}
\end{table*}
\subsection{Comparison in modeling a specific team}
In this section, we verify the ability of our approach to learn (and infer) the strategy of a specific team. In particular, we compare our approach with \emph{HMM + LSTM}, \emph{Graph + VRNNs} and \emph{Graph + Attention + LSTM}, which achieved comparable results in the previous section. The quantitative results are shown in Table~\ref{tb:modelingteam}.
The predictions for offense players in the \emph{basketball\_tor\_defense} dataset and for defense players in the \emph{basketball\_tor\_offense} dataset are similar to the predictions in Table~\ref{tb:basiccomparison}. This is because the datasets for these evaluations mix several different teams, so what the models learn is the general behavior of basketball offense (or defense) players.
However, this is not the case for the predictions of the attention-based approaches when modeling the datasets that include only one team (i.e., the Toronto Raptors) --- predicting defense trajectories in the \emph{basketball\_tor\_defense} dataset and predicting offense trajectories in the \emph{basketball\_tor\_offense} dataset. As shown in Table~\ref{tb:modelingteam}, the attention-based approaches (i.e., \emph{Graph + Attention + LSTM} and \emph{Graph + Attention + TCN}) largely outperform the other baselines (i.e., \emph{HMM + LSTM}, \emph{Graph + VRNNs}). In particular, for \emph{HMM + LSTM} and \emph{Graph + VRNNs}, the performance when predicting a single team is only slightly better than the prediction on mixed teams, while for the attention-based methods, modeling a single team yields much lower $\ell_2$ error and miss rate. This demonstrates that our attention-based permutation-equivariant method is able to model a single team and learn the dependencies within that team. Namely, our approach learns the strategy of the team.
\section{Conclusion} \label{sec:Conclusion}
We study the problem of multi-agent trajectory prediction and propose a spatial-temporal trajectory prediction approach. In particular, we use a fully-connected graph structure to achieve permutation equivariance, and an attention mechanism to model the fine-grained dependencies among the agents. Moreover, instead of utilizing recurrent networks (e.g., VRNN, LSTM), our method uses a TCN as the sequential model to support a long effective history and to provide important features such as parallelism and stable gradients.
The evaluation shows that our approach is able to predict the future trajectories of sports games more accurately than the state-of-the-art.
\section*{Acknowledgment}
The authors would like to thank the anonymous reviewers for their suggestions. This work was supported in part by National Science Foundation grants 1618706 and 1717774.
\bibliographystyle{IEEEtran}
\section{Introduction}
The spin symmetry energy of neutron matter, defined as the difference between
the energy per particle of spin-polarized and unpolarized neutron matter,
is the main ingredient to understand the spin susceptibility of
neutron matter, which is basically proportional to the inverse of this quantity. Microscopic calculations of the spin susceptibility, using realistic
interactions and a variety of many-body methods, show that the correlations
induced by these realistic interactions considerably reduce the spin susceptibility with respect
to that of the underlying non-interacting Fermi sea \cite{qmc,vidana02,vidana02b,vidana06,dbhf,dbhf2,locv}. This reduction implies an increase of the spin symmetry energy of neutron matter.
This prediction also has important consequences in the description of situations of
astrophysical interest, such as, for instance, the calculation of the mean free path of
neutrinos in dense matter and, in general, the study of supernovae and protoneutron stars \cite{reddy1999}.
In contrast with this scenario, it has been theoretically speculated that the spin symmetry energy of neutron matter can become zero, a fact that would indicate the existence of a phase transition to a
ferromagnetic state \cite{BROWNELL,RICE,CLARK,CLARK2,SILVERSTEIN,OST,PEAR,PANDA,BACK,HAENSEL,JACK,KUT,MARCOS}. Notice that, looking at the kinetic energies of the corresponding
underlying Fermi seas, at a given density the kinetic energy of the polarized Fermi sea will always be larger than that of the unpolarized one. Therefore, the hypothetical ferromagnetic transition should be
a consequence of the different role of the interactions in polarized and unpolarized
neutron matter. In fact, many effective nuclear interactions of the Skyrme \cite{VIDAURRE,rios2005} or Gogny \cite{lopez-val2006} type predict
this transition at densities accessible in neutron stars.
However, in accordance with the reduction of the spin susceptibility commented on above, microscopic calculations based on realistic interactions do not predict such a transition,
at least in the wide range of densities that has been explored \cite{qmc,vidana02,vidana02b,vidana06,dbhf,dbhf2,locv}.
The study of spin polarization has also been recently considered for nuclear matter and finite nuclei
using finite-range effective interactions \cite{vinas}. The possibility of a ferromagnetic
transition has also been discussed in the context of hard-sphere systems in connection with ultracold atom systems \cite{pilati2010,arias2012}.
All these facts have motivated the interest for the study of neutron matter and in particular of polarized neutron matter.
It has also been pointed out that, due to the large value of the $^1S_0$ scattering length,
the behaviour of neutron matter, at densities where the physics is dominated by this partial wave,
should show some similarities with the behaviour of a unitary Fermi gas \cite{maieron2008}. At the same time, the
absence, due to the Pauli principle, of the $^1S_0$ channel in polarized neutron matter has raised
the question of up to which density polarized neutron matter can behave as a weakly interacting Fermi gas \cite{kruger2015}.
Motivated by these questions, we have performed a microscopic calculation, in the framework
of the
Brueckner--Hartree--Fock approximation, of the magnetic susceptibility of neutron matter
employing the Argonne V18 (Av18) realistic nucleon-nucleon interaction \cite{WIRINGA}
supplemented with
the Urbana IX three-body force \cite{PUDLINER}, which for the use in the BHF approach is reduced to an effective two-body density-dependent interaction by averaging over the third nucleon \cite{bf99}.
In order to identify the nature of the correlations responsible
for the behavior of the magnetic susceptibility, we have analyzed the contributions of
the different partial waves, and also of the different operatorial parts of the interaction, to the spin symmetry energy of neutron matter.
In addition, the degree of correlation of the two systems, polarized and unpolarized
neutron matter, is discussed by comparing the kinetic energies of the correlated systems with those of the underlying Fermi seas.
The paper is organized in the following way. In Sec. II the Brueckner--Hartree--Fock approach
to spin-polarized neutron matter and the Hellmann--Feynman theorem \cite{hellmann,feynman}
are briefly reviewed. Results for the
magnetic susceptibility or, equivalently, for the spin symmetry energy and its density dependence
are presented in Sec. III, where the contributions of the different partial waves are also discussed.
Finally, a short summary and the main conclusions are given in Sec. IV.
\section{BHF approach of spin-polarized neutron matter}
Spin-polarized neutron matter is an infinite nuclear system made of two different fermionic components: neutrons with
spin up and neutrons with spin down, having densities $\rho_\uparrow$ and $\rho_\downarrow$, respectively. The total density of the system is given by
\begin{equation}
\rho=\rho_\uparrow+\rho_\downarrow \ .
\label{eq:den}
\end{equation}
The degree of spin polarization of the system can be expressed by means of the spin polarization $\Delta$ defined as
\begin{equation}
\Delta=\frac{\rho_\uparrow-\rho_\downarrow}{\rho} \ .
\label{eq:sp}
\end{equation}
Note that the value $\Delta=0$ corresponds to nonpolarized (NP) or paramagnetic ($\rho_\uparrow=\rho_\downarrow$) neutron matter, whereas
$\Delta=\pm 1$ means that the system is totally polarized (TP), {\it i.e.,} all the spins are aligned along the same direction. Partially
polarized states correspond to values of $\Delta$ between $-1$ and $+1$.
The energy per particle of spin-polarized neutron matter does not change when a global flip of the spins is performed. Therefore, it can be
expanded in the spin polarization $\Delta$ as
\begin{equation}
E(\rho,\Delta) = E_{NP}(\rho)+S_{sym}(\rho)\Delta^2 + S_{4}(\rho)\Delta^4 + {\cal O}(6) \ ,
\label{eq:en}
\end{equation}
where $E_{NP}(\rho)\equiv E(\rho,0)$ is the energy per particle of nonpolarized neutron matter, $S_{sym}(\rho)$
is defined as the spin symmetry energy,
\begin{equation}
S_{sym}(\rho)=\frac{1}{2}\frac{\partial^2 E(\rho,\Delta)}{\partial \Delta^2}\Big|_{\Delta=0}
\label{eq:ssym}
\end{equation}
and
\begin{equation}
S_{4}(\rho)=\frac{1}{24}\frac{\partial^4 E(\rho,\Delta)}{\partial \Delta^4}\Big|_{\Delta=0} \ .
\label{eq:s4}
\end{equation}
It has been shown (see {\it e.g.,} Refs. \cite{vidana02,vidana02b,vidana06}) that the energy per particle of spin-polarized neutron matter
is practically parabolic in the full range of spin polarizations. Therefore, contributions from $S_4(\rho)$ and other higher-order
terms can be neglected, and one can, to a good approximation, estimate the spin symmetry energy simply as
the difference between the energy per particle of totally polarized, $E_{TP}(\rho) \equiv E(\rho,\pm 1)$, and nonpolarized
neutron matter, {\it i.e.,}
\begin{equation}
S_{sym}(\rho) \sim E_{TP}(\rho)-E_{NP}(\rho) \ .
\label{eq:ssym2}
\end{equation}
A particularly interesting macroscopic property of spin-polarized neutron matter related to $S_{sym}(\rho)$ is the magnetic susceptibility $\chi(\rho)$,
which, at each density, characterizes the response of the system to an external magnetic field and gives a measure of the energy
required to produce a net spin alignment in its direction. If the strength of the field is small, $\chi(\rho)$ can be obtained simply as (see {\it e.g.,} Ref.\ \cite{vidana02})
\begin{eqnarray}
\chi(\rho)&=&\frac{\mu^2 \rho}{\frac{\partial^2 E(\rho,\Delta)}{\partial \Delta^2}\Big|_{\Delta=0}} \nonumber \\
&=&\frac{\mu^2 \rho}{2S_{sym}(\rho)}
\label{eq:chi1}
\end{eqnarray}
where $\mu$ is the magnetic moment of the neutron and in the second equality we have used Eq.\ (\ref{eq:ssym}).
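As an illustrative numerical sketch (in Python; the function names are ours), Eqs.\ (\ref{eq:ssym2}) and (\ref{eq:chi1}) combine into a two-line recipe; with the Tab.\ \ref{tab1} values $E_{TP}=59.668$ MeV and $E_{NP}=16.777$ MeV at $\rho_0$, one recovers $S_{sym}\simeq 42.9$ MeV:

```python
def spin_symmetry_energy(e_tp, e_np):
    # Parabolic approximation, Eq. (6): S_sym ~ E_TP - E_NP (energies per particle, MeV).
    return e_tp - e_np

def susceptibility(rho, s_sym, mu):
    # Eq. (7): chi = mu^2 * rho / (2 * S_sym); units follow those of the inputs.
    return mu ** 2 * rho / (2.0 * s_sym)
```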
The BHF description of spin-polarized neutron matter starts with the construction of the neutron-neutron $G$-matrix, which
describes in an effective way the interaction between two neutrons for each one of the spin combinations $\uparrow\uparrow,
\uparrow\downarrow, \downarrow\uparrow$ and $\downarrow\downarrow$. This is formally obtained by solving the well-known
Bethe-Goldstone equation, written schematically as
\begin{eqnarray}
G(\omega)_{\sigma_1\sigma_2\sigma_3\sigma_4}&=&
V_{\sigma_1\sigma_2\sigma_3\sigma_4}
+\frac{1}{\Omega}\sum_{\sigma_i\sigma_j}
V_{\sigma_1\sigma_2\sigma_i\sigma_j} \nonumber \\
&\times&\frac{Q_{\sigma_i\sigma_j}}{\omega-\varepsilon_{\sigma_i}-\varepsilon_{\sigma_j}+i\eta}
G(\omega)_{\sigma_i\sigma_j\sigma_3\sigma_4} \ ,
\label{eq:bge}
\end{eqnarray}
where $\sigma=\uparrow, \downarrow$ indicates the spin projection of the two neutrons in the
initial, intermediate and final states, $V$ is the bare nucleon-nucleon interaction, $\Omega$ is the (large) volume
enclosing the system, $Q_{\sigma_i\sigma_j}$ is the Pauli operator taking into account the effect of the
exclusion principle on the scattered neutrons, and $\omega$ is the so-called starting energy, defined
as the sum of the non-relativistic single-particle energies, $\epsilon_{\uparrow(\downarrow)}$, of the interacting neutrons.
We note that Eq.\ (\ref{eq:bge}) is a coupled-channel equation.
The single-particle energy of a neutron with momentum ${\vec k}$ and spin projection $\sigma$ is given by
\begin{equation}
\epsilon_{\sigma}({\vec k})=\frac{\hbar^2
k^2}{2m}+\mbox{Re}[U_{\sigma}({\vec k})] \ ,
\label{spe}
\end{equation}
where the real part of the single-particle potential
$U_{\sigma}({\vec k})$ represents the average potential ``felt'' by
a neutron due to its interaction with the other neutrons of the system. In the BHF approximation
$U_{\sigma}({\vec k})$ is calculated through the ``on-shell'' $G$-matrix, and is given by
\begin{equation}
U_{\sigma}({\vec k})=\frac{1}{\Omega}
\sum_{\sigma'\vec k'}
\langle {\vec k}\sigma{\vec k'}\sigma' |G(\epsilon_{\sigma}(\vec k)+\epsilon_{\sigma'}(\vec k'))|{\vec k}\sigma{\vec k'}\sigma'\rangle_{A} \ ,
\label{eq:spp}
\end{equation}
where the sum runs over all occupied spin-up and spin-down neutron states and the matrix elements
are properly antisymmetrized. Once a self-consistent solution of
Eqs.\ (\ref{eq:bge})-(\ref{eq:spp}) is achieved, the energy per particle in the BHF approximation can be calculated as
\begin{eqnarray}
E_{BHF}(\rho,\Delta)&=&\frac{1}{A}
\sum_{\sigma}\sum_{|\vec k|\leq k_{F_{\sigma}}}
\frac{\hbar^2
k^2}{2m} \nonumber \\
&+&
\frac{1}{2A}\sum_{\sigma}\sum_{|\vec k|\leq k_{F_{\sigma}}}
\mbox{Re}[U_{\sigma}(\vec k)]\ ,
\label{eq:bhf}
\end{eqnarray}
where the first term on the r.h.s. is simply the contribution of the free Fermi gas (FFG), and the second one is sometimes called in the literature the {\it correlation energy}. We note that $E_{BHF}$ represents only the sum of {\it two-hole-line} diagrams and includes only the effect of two-body correlations
through the $G$-matrix. It has been shown by Song {\it et al.,} \cite{SONG} that the contribution to the energy from {\it three-hole-line} diagrams (which account for the effect of three-body correlations) is minimized when the so-called continuous prescription \cite{JEKEUNE}
is adopted for the single-particle potential when solving the Bethe--Goldstone equation. This presumably enhances the convergence of the hole-line expansion of which the BHF approximation represents the lowest order. We adopt this prescription in our BHF calculations which are
done using the Argonne V18 (Av18) potential \cite{WIRINGA} supplemented with
the Urbana IX three-nucleon force \cite{PUDLINER}, which for the
use in the BHF approach is reduced first to an effective
two-nucleon density-dependent force by averaging over the
coordinates of the third nucleon \cite{bf99,TBF}.
The BHF approach does not give direct access to the separate contributions of the kinetic and potential energies because it does not provide the correlated many-body wave function $|\Psi\rangle$. However, it has been shown \cite{muether99, sartor00, sartor01, vidana05} that the Hellmann--Feynman theorem \cite{hellmann,feynman} can be used to estimate the ground-state expectation value of both contributions from the derivative of the total energy with respect to a properly introduced parameter. Writing the nuclear matter Hamiltonian as $H=T+V$, and defining a $\lambda$-dependent Hamiltonian $H(\lambda)=T+\lambda V$, the expectation value of the potential energy is given by
\begin{equation}
\langle V \rangle \equiv \frac{\langle \Psi | V | \Psi \rangle}{\langle \Psi | \Psi \rangle}
=\left(\frac{dE}{d\lambda}\right)_{\lambda=1}
\end{equation}
and the kinetic energy contribution $\langle T \rangle$ can be simply obtained by subtracting $\langle V \rangle$ from $E_{BHF}$.
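Numerically, the procedure amounts to a finite-difference derivative of the total energy with respect to the auxiliary parameter $\lambda$, evaluated at $\lambda=1$; a minimal sketch (the toy linear $E(\lambda)$ used in the usage example is illustrative only, built from the NP values of Tab.\ \ref{tab1}):

```python
def potential_energy_hf(energy_of_lambda, h=1e-4):
    # Hellmann-Feynman theorem: <V> = dE/dlambda evaluated at lambda = 1,
    # here approximated by a central finite difference of step h.
    return (energy_of_lambda(1.0 + h) - energy_of_lambda(1.0 - h)) / (2.0 * h)
```

The kinetic part then follows as $\langle T \rangle = E - \langle V \rangle$.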
\section{Results and Discussion}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.45\textwidth]{fig1.eps}
\caption{(color on-line) Kinetic $\langle T\rangle$ and potential $\langle V \rangle$ energy contributions to the total energy per particle of totally polarized and nonpolarized
neutron matter (panels a and b), and to the spin symmetry energy and its slope parameter (panels c and d), as a function of density. Notice the different scales on the ordinates.}
\label{fig:contri2}
\end{center}
\end{figure}
The discussion of our results starts by showing in Fig.\ \ref{fig:contri2} the density dependence of the kinetic $\langle T \rangle$ and potential $\langle V \rangle$ energy contributions to the
energy per particle of both TP (panel (a)) and NP (panel (b)) neutron matter as well as to the spin symmetry energy (panel (c)) and its slope
parameter (panel (d)) defined as
\begin{equation}
L_S(\rho) = 3\rho\frac{\partial S_{sym}(\rho)}{\partial\rho} \, ,
\label{ls}
\end{equation}
in analogy with the slope parameter of the nuclear symmetry energy, $L(\rho)$. The particular values of these contributions at
the empirical saturation density of symmetric nuclear matter, $\rho_0=0.16$ fm$^{-3}$, are reported in Tab.\ \ref{tab1}.
The results have been obtained by applying the Hellmann--Feynman theorem as explained at the end of the previous section.
As can be seen in the figure, the total energy of TP neutron matter is always more repulsive than the NP one in the whole density range explored. This
additional repulsion of TP neutron matter can be understood, firstly, in terms of the kinetic energy contribution, which is
larger in the TP case than in the NP one. Secondly, in terms of the potential
energy, because, due to symmetry arguments, all partial waves with even orbital angular momentum $L$ (some of them
attractive, such as the important $^1S_0$) are excluded in TP neutron matter (see Tab.\ \ref{tab2}). An interesting conclusion which can
be inferred from here, already pointed out in previous works of the authors \cite{vidana02,vidana02b,vidana06} and in other
studies \cite{qmc,dbhf,dbhf2,locv}, is that a spontaneous phase transition to a ferromagnetic state is not to be expected. If such
a transition existed, a crossing of the energies of the TP and NP systems, with the consequent change of sign of the spin symmetry energy,
would be observed at some density, indicating that the ground state of the system would be ferromagnetic from that density on. Notice that there is
no sign of such a crossing in the figure and that, on the contrary, such a transition becomes less and less favorable as the density increases.
As seen in the figure, the kinetic energy contribution to the spin symmetry energy, although always smaller than the potential energy one, is
not negligible and, in particular, amounts to $\sim 38\%$ of its total value at $\rho_0$. This result is different from what is found in the case of the nuclear symmetry
energy, $E_{sym}(\rho)$. In that case the kinetic energy contribution to $E_{sym}(\rho)$ is very small (and even negative) due to the strong cancellation of the kinetic energies
of neutron and symmetric nuclear matter \cite{providencia,carbone1}. Finally, note that the slope parameter $L_S(\rho)$ is also clearly dominated in the whole density range by the potential
energy contribution ($\sim 75 \%$ at $\rho_0$), except at very low densities where the kinetic energy one is of similar order.
Also interesting is the fact that in a significant density region around $\rho_0$, $L_S(\rho)$ is rather linear, indicating that
the derivative of $S_{sym}(\rho)$ with respect to the density is approximately constant (see Eq.\ (\ref{ls})).
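For any smooth parametrization of $S_{sym}(\rho)$, the slope parameter of Eq.\ (\ref{ls}) can be evaluated with a central difference; a minimal sketch (the power-law test function is illustrative only and not a fit to our results):

```python
def slope_parameter(s_sym, rho, h=1e-6):
    # L_S(rho) = 3 * rho * dS_sym/drho, via a central finite difference.
    return 3.0 * rho * (s_sym(rho + h) - s_sym(rho - h)) / (2.0 * h)
```

For a power law $S_{sym}(\rho)=c\,\rho^\gamma$ one has exactly $L_S=3\gamma S_{sym}$, which provides a simple consistency check.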
\begin{center}
\begin{table}[t]
\begin{tabular}{crrrr}
\hline
\hline
& $E_{TP}$ & $E_{NP}$ & $S_{sym}$ & $L_S$ \\
\hline
$\langle T_{FS} \rangle$ & $55.669$ & $35.069$ & $20.600$ & $ 41.200$ \\
$\langle T \rangle$ & $64.452$ & $47.827$ & $16.625$ & $25.225$ \\
$\langle V \rangle$ & $-4.784$ & $-31.050$ & $26.266$ & $75.914$ \\
Total & $59.668$ & $16.777$ & $42.891$ & $101.139$ \\
\hline
\hline
\end{tabular}
\caption{Kinetic, $\langle T \rangle$, and potential, $\langle V \rangle$, contributions
to the total energy per
particle of totally polarized (TP) and nonpolarized (NP) neutron matter at the
empirical saturation density of symmetric nuclear matter, $\rho_0=0.16$ fm$^{-3}$.
The contributions to the corresponding spin symmetry energy $S_{sym}$ and its
slope parameter $L_S$ are reported in the last two columns, respectively. $\langle T_{FS} \rangle$ corresponds to the results
of the underlying Fermi seas. Results are given in MeV. }
\label{tab1}
\end{table}
\end{center}
To get further physical insight into the role of the potential energy, it is useful to look at the spin-channel and partial-wave decomposition of its contribution to the energies of TP and NP neutron matter, as well as to
the spin symmetry energy and its slope parameter. These contributions are denoted as $\langle V \rangle_{TP}$, $\langle V \rangle_{NP}$, $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$, respectively, and their values at $\rho_0$ are shown in Tab. \ref{tab2}. The main contribution to
$S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$ is that of the $S=0$ channel, acting only in NP neutron matter, and in particular that of the $^1S_0$ and $^1D_2$ partial waves, which at $\rho_0$ amount to $\sim 99\%$ of $S^{\langle V \rangle}_{sym}$ and $\sim 70\%$ of
$L^{\langle V \rangle}_S$. Notice that, at this density, the contribution of the $S=1$ channel to the energies of TP and NP matter is very similar and, therefore, the
contribution of this channel to $S^{\langle V \rangle}_{sym}$ is almost negligible. This is mainly due to the strong compensation of the P- and F-waves, which almost cancel completely, and to
the small contribution of the H-, J- and L-waves. Note also that, for this reason, the contribution from those partial waves where the tensor force is active ($^3P_2, ^3F_2, ^3F_4, ^3H_4, ^3H_6, ^3J_6, ^3J_8, ^3L_8$) represents a small percentage of the total values of $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$. This can be interpreted as an indication that the tensor force plays a minor role in the determination of the spin symmetry energy and its density dependence. This conclusion differs from that drawn in the case of the nuclear symmetry energy, whose value at saturation and density dependence are known to be clearly dominated by the tensor force \cite{carbone,vidana2009} (see also {\it e.g.,} Ref.\ \cite{epja} and references therein).
\begin{center}
\begin{table}[t]
\begin{tabular}{crrrr}
\hline
\hline
& $\langle V \rangle_{TP}$ & $\langle V \rangle_{NP}$ & $S^{\langle V \rangle}_{sym}$ & $L^{\langle V \rangle}_S$ \\
\hline
$S=0$ & $0$ & $-26.875$ & $26.875$ & $56.198$ \\
$S=1$ & $-4.784$ & $-4.175$ & $-0.609$ & $19.716 $ \\
\hline
$^1S_0$ & $0.000$ & $-21.432$ & $21.432$ & $32.086$ \\
$^3P_0$ & $-5.499$ & $-4.624$ & $-0.875$ & $3.313$ \\
$^3P_1$ & $19.644$ & $13.027$ & $6.617$ & $30.927$ \\
$^3P_2$ & $-19.915$ & $-13.299$ & $-6.616$ & $-14.966$ \\
$^1D_2$ & $0.000$ & $-4.787$ & $4.787$ & $21.185$ \\
$^3F_2$ & $-1.263$ & $-0.574$ & $-0.689$ & $-2.655$ \\
$^3F_3$ & $3.109$ & $1.639$ & $1.470$ & $5.253$ \\
$^3F_4$ & $-1.726$ & $-0.597$ & $-1.129$ & $-5.492$ \\
$^1G_4$ & $0.000$ & $-0.607$ & $0.607$ & $3.055$ \\
$^3H_4$ & $-0.042$ & $0.012$ & $-0.054$ & $-0.094$ \\
$^3H_5$ & $0.699$ & $0.186$ & $0.513$ & $1.889$ \\
$^3H_6$ & $-0.028$ & $0.024$ & $-0.052$ & $-0.345$ \\
$^1I_6$ & $0.000$ & $-0.059$ & $0.059$ & $0.116$ \\
$^3J_6$ & $0.051$ & $0.024$ & $0.027$ & $0.370$ \\
$^3J_7$ & $0.107$ & $-0.025$ & $0.132$ & $0.476$ \\
$^3J_8$ & $0.050$ & $0.020$ & $0.030$ & $0.332$ \\
$^1K_8$ & $0.000$ & $0.011$ & $-0.011$ & $0.245$ \\
$^3L_8$ & $0.029$ & $0.011$ & $0.018$ & $0.219$ \\
\hline
\hline
\end{tabular}
\caption{Spin-channel and partial-wave decomposition of the potential energy of TP and NP neutron matter at $\rho_0=0.16$ fm$^{-3}$. The decompositions of the potential energy contribution to the spin symmetry energy and its slope parameter are also shown. Results are given in MeV.}
\label{tab2}
\end{table}
\end{center}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.45\textwidth]{fig2.eps}
\caption{(color on-line) Increase of the kinetic energy per particle due to short-range correlations (SRC) in TP and NP neutron matter as a function of density. The increase in the kinetic energy of symmetric nuclear matter is also shown for comparison.}
\label{fig:kinetic}
\end{center}
\end{figure}
A way of estimating the importance of correlations in a fermionic system is simply to evaluate the difference between the expectation value of the kinetic energy of the system and the energy of a
free Fermi gas with the same density and constituents,
\begin{equation}
\Delta T= \langle T \rangle- E_{FFG}.
\end{equation}
The larger the value of $\Delta T$, the more important the role of the correlations. We show in Fig.\ \ref{fig:kinetic} the density dependence of $\Delta T$ for TP and NP neutron matter as well as for conventional symmetric nuclear matter (SM). The increase of $\Delta T$ in the three cases indicates, as expected, that correlations become more and more important as the density of the system increases. Note that
in the whole range of densities explored, $\Delta T_{SM} > \Delta T_{NP} > \Delta T_{TP}$, reflecting the fact that SM is always more correlated than neutron matter independently of its spin polarization state, and
that NP neutron matter is always more correlated than TP matter. However, the effect of correlations on
the kinetic energy of TP neutron matter cannot be neglected. Note also that the difference $\Delta T_{SM} - \Delta T_{NP}$ is larger than the difference $\Delta T_{NP} - \Delta T_{TP}$ up to $\rho\sim 0.45$
fm$^{-3}$. This can be interpreted as an indication that the spin dependence of the nucleon-nucleon correlations is weaker than their isospin dependence, at least in the low- and medium-density region.
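For reference, the free Fermi gas term entering $\Delta T$ has a closed form; the following sketch (with rounded constants, so it reproduces the $\langle T_{FS}\rangle$ values of Tab.\ \ref{tab1} only to a few $10^{-3}$ MeV) also makes explicit that the TP/NP ratio of the free kinetic energies is exactly $2^{2/3}$ at any density:

```python
import math

HBARC = 197.327   # hbar*c in MeV fm (rounded)
M_N = 939.565     # neutron mass in MeV (rounded)

def ffg_kinetic(rho, delta):
    """Kinetic energy per particle (MeV) of a free neutron gas at total
    density rho (fm^-3) and spin polarization delta."""
    e = 0.0
    for frac in ((1.0 + delta) / 2.0, (1.0 - delta) / 2.0):
        if frac > 0.0:
            # Fermi momentum of each spin species (degeneracy 1 per species).
            kf = (6.0 * math.pi ** 2 * frac * rho) ** (1.0 / 3.0)
            e += frac * 0.6 * (HBARC * kf) ** 2 / (2.0 * M_N)
    return e
```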
To get a more quantitative idea of the spin dependence of the nucleon-nucleon correlations, in the following we analyze the role played by the different terms of the nuclear force, and in particular the spin-dependent ones,
in the determination of $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$. To this end, we apply the Hellmann--Feynman theorem to the separate contributions of the Av18 potential and the Urbana IX three-nucleon force. The Av18 has 18 components of the form $v_p(r_{ij})O^p_{ij}$ with
\begin{widetext}
\begin{eqnarray}
O^{p=1,18}_{ij}&=&1, \,\, \vec \tau_i \cdot \vec \tau_j, \,\, \vec \sigma_i \cdot \vec\sigma_j, \,\,
(\vec \sigma_i \cdot \vec \sigma_j )(\vec \tau_i \cdot \vec \tau_j), \,\,
S_{ij}, \,\, S_{ij}(\vec \tau_i \cdot \vec \tau_j), \,\, \vec L \cdot \vec S, \,\,
\vec L \cdot \vec S(\vec \tau_i \cdot \vec \tau_j), L^2, \nonumber \\
&&L^2(\vec \tau_i \cdot \vec \tau_j), \,\,
L^2(\vec \sigma_i \cdot \vec \sigma_j), \,\, L^2(\vec \sigma_i \cdot \vec \sigma_j)(\vec \tau_i \cdot \vec \tau_j), \,\,
(\vec L \cdot \vec S)^2, \,\,
(\vec L \cdot \vec S)^2(\vec \tau_i \cdot \vec \tau_j), \nonumber \\
&&T_{ij}, \,\, (\vec \sigma_i \cdot \vec \sigma_j)T_{ij}, \,\,
S_{ij}T_{ij}, \,\, (\tau_{z_i}+\tau_{z_j})
\label{eq:av18}
\end{eqnarray}
\end{widetext}
where $S_{ij}$ is the usual tensor operator, $\vec L$ the relative orbital angular momentum,
$\vec S$ the total spin of the nucleon pair, and $T_{ij}=3\tau_{z_i}\tau_{z_j}-\vec\tau_i \cdot \vec\tau_j$ the isotensor operator defined
analogously to $S_{ij}$. Note that the last four operators break the charge independence of the nuclear interaction.
As we said above, the Urbana IX three-body force is reduced to an effective density-dependent two-body force when used in the BHF
approach. For simplicity, in the following we refer to it as reduced Urbana force. This force is made of $3$ components of the
type $u_p(r_{ij},\rho)O^p_{ij}$ where
\begin{equation}
O^{p=1,3}_{ij}=1, \,\, (\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j), \,\, S_{ij}(\vec\tau_i\cdot\vec\tau_j) \ ,
\label{eq:uix}
\end{equation}
introducing additional central, $\sigma\tau$ and tensor terms (see {\it e.g.,} Ref.\ \cite{bf99} for details).
The separate contributions of the various components of the Av18 potential and
the reduced Urbana force to the energy per particle of TP and NP neutron matter, and to $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$ at the empirical value of the nuclear saturation density are given in Tab.\ \ref{tab3}. Note that the largest contribution for both $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$ comes from the $\vec\sigma_i\cdot\vec\sigma_j$, $(\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j)$ and $L^2$ terms.
\begin{center}
\begin{table}
\begin{tabular}{crrrr}
\hline
\hline
& $\langle V \rangle_{TP}$ & $\langle V \rangle_{NP}$ & $S^{\langle V \rangle}_{sym}$ & $L^{\langle V \rangle}_S$ \\
\hline
$\langle V_1 \rangle $ & $-24.856$ & $-26.415$ & $1.559$ & $-3.012$ \\
$\langle V_{\vec\tau_i\cdot\vec\tau_j} \rangle$ & $-3.129$ & $-4.157$ & $1.028$ & $0.506$ \\
$\langle V_{\vec\sigma_i\cdot\vec\sigma_j} \rangle$ & $3.207$ & $-0.438$ & $3.645$ & $9.147$ \\
$\langle V_{(\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $13.046$ & $-5.470$ & $18.516$ & $50.328$ \\
$\langle V_{S_{ij}} \rangle$ & $-0.980$ & $-0.608$ & $-0.372$ & $-1.075$ \\
$\langle V_{S_{ij}(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $-5.725$ & $-4.219$ & $-1.506$ & $-3.625$ \\
$\langle V_{\vec L\cdot \vec S} \rangle$ & $-8.638$ & $-6.076$ & $-2.562$ & $-2.855$ \\
$\langle V_{\vec L\cdot \vec S (\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $-3.090$ & $-2.148$ & $-0.942$ & $-3.303$ \\
$\langle V_{L^2} \rangle$ & $14.090$ & $9.188$ & $4.902$ & $18.735$ \\
$\langle V_{L^2(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $-2.899$ & $-2.142$ & $-0.757$ & $-3.238$ \\
$\langle V_{L^2(\vec\sigma_i\cdot\vec\sigma_j)} \rangle$ & $1.410$ & $1.016$ & $0.394$ & $0.741$ \\
$\langle V_{L^2(\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $-0.787$ & $0.017$ & $-0.804$ & $-5.024$ \\
$\langle V_{(\vec L\cdot \vec S)^2} \rangle$ & $5.652$ & $3.262$ & $2.390$ & $12.803$ \\
$\langle V_{(\vec L\cdot \vec S)^2(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $6.903$ & $4.032$ & $2.871$ & $14.275$ \\
$\langle V_{T_{ij}} \rangle$ & $0.006$ & $0.002$ & $0.004$ & $0.022$ \\
$\langle V_{(\vec\sigma_i\cdot\vec\sigma_j)T_{ij}} \rangle$ & $-0.013 $ & $-0.015$ & $0.002$ & $-0.010$ \\
$\langle V_{S_{ij}T_{ij}} \rangle$ & $0.004$ & $0.003$ & $0.001$ & $-0.102$ \\
$\langle V_{(\tau_{z_i}+\tau_{z_j})} \rangle$ & $-0.055$ & $-0.070 $ & $0.015$ & $-0.054$ \\
\\
$\langle U_1 \rangle$ & $-0.019$ & $1.744$ & $-1.763$ & $-6.967$ \\
$\langle U_{(\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $-0.922$ & $-0.708$ & $-0.214$ & $-0.872$ \\
$\langle U_{S_{ij}(\vec\tau_i\cdot\vec\tau_j)} \rangle$ & $2.011$ & $2.152$ & $-0.141$ & $-0.506$ \\
\hline
\hline
\end{tabular}
\caption{Contributions of the various components of the Av18 potential (denoted as $\langle V_i \rangle$) and the reduced Urbana force (denoted as $\langle U_i \rangle$) to the total energy per particle of TP and NP neutron matter and to the spin symmetry energy and its slope parameter at the empirical saturation density of symmetric nuclear matter $\rho_0=0.16$ fm$^{-3}$. Results are given in MeV.}
\label{tab3}
\end{table}
\end{center}
As we have already seen in Table II, the total interaction energy for TP neutron matter is in absolute
value much smaller than for the NP one. This is the result of strong cancellations among the contributions of the different pieces of the potential. The contributions to $S^{\langle V \rangle}_{sym}$ and $L^{\langle V \rangle}_S$
are important when the interaction behaves differently in TP and NP neutron matter. For instance, the contribution of the central part $\langle V_1\rangle$ is very similar in TP and NP neutron matter, and therefore its contribution to $S^{\langle V \rangle}_{sym}$ is small. Relevant contributions are associated with
$\langle V_{\vec\sigma_i\cdot\vec\sigma_j} \rangle$,
$\langle V_{(\vec\sigma_i\cdot\vec\sigma_j)(\vec\tau_i\cdot\vec\tau_j)} \rangle$, and also
with $\langle V_{L^2} \rangle$. On the other hand, the contributions of the three-body forces to the
spin symmetry energy at $\rho_0$ are moderately small and negative.
\section{Summary and conclusions}
We have calculated the kinetic and potential energy contributions to the spin symmetry energy of neutron matter using the realistic Argonne Av18 two-body interaction supplemented with the Urbana IX three-body force, averaged to provide a density-dependent two-body force suitable for use in BHF calculations. It has been shown that this realistic interaction does not favor a ferromagnetic transition of neutron matter. Like the symmetry energy, the spin symmetry energy is an increasing function of density, at least in the range of densities considered. Both the kinetic and the potential energy contributions, {\it i.e.,} the differences of these energies between polarized and normal neutron matter, are positive in the full range of densities considered.
The contributions of the different pieces of the interaction and
its partial-wave decomposition allow us to understand the origin of the different role
of the interaction in TP and NP neutron matter. In most cases, the Pauli principle, which forbids the interaction in certain partial waves in totally polarized neutron matter, is at the origin of most of the differences. The main contribution comes from the $S=0$ channels forbidden in TP neutron matter, in particular from the $^1$S$_0$ and $^1$D$_2$ partial waves. On the other hand, three-body forces play a secondary role in the determination of the spin symmetry energy.
Finally, we have quantitatively established that NP neutron matter is more correlated than the TP one by comparing their kinetic energies with those of the corresponding free Fermi seas. In spite of being less correlated, however, the role of correlations in totally polarized neutron matter cannot be ignored when using realistic interactions.
\section*{Acknowledgements}
This work is supported by Grant No. FIS2014-54672-P from MICINN (Spain), and Grant No. 2014SGR-401 from Generalitat de Catalunya (Spain), the GSI-TU Darmstadt bilateral
cooperation agreement and by the Helmholtz International Center for FAIR, and by NewCompstar, COST Action MP1304.
\section{Rashba-Anderson Hamiltonian}
The complete Hamiltonian $H=H_c+H_d+H_h$, as described in the main text is given by,
\begin{equation}{\mylabel{eqn:Hamiltonian}}
\begin{split}
H = \sum_{\bk \alpha} \varepsilon_\alpha(\bk) c^\dagger_{\bk \alpha} c_{\bk \alpha} + \sum_{\sigma}{\tilde{\varepsilon}_{d}d_{\sigma}^{\dagger}d_{\sigma}} + U n_{d \uparrow}n_{d \downarrow} + \\
\frac{V}{\sqrt{\Omega}} \sum_{\bk, \sigma,\alpha}({f^{\alpha}_{\sigma}(\bk)}^*d_{\sigma}^{\dagger}c_{\bk \alpha} + {f^\alpha_{\sigma}(\bk)}c_{\bk \alpha}^{\dagger}d_{\sigma}).
\end{split}
\end{equation}
Here, ${f^{\alpha}_\sigma(\bk)} = \langle \bk \alpha|\bk \sigma \rangle$, where $|\bk \alpha \rangle$ and $| \bk \sigma \rangle$ are the eigenkets of the Rashba spin-orbit-coupled (RSOC) and the non-RSOC Fermi gas, respectively. Explicitly, $\langle\bk\sigma=\uparrow|\bk{\alpha=1}\rangle=\cos(\frac{\theta}{2})$, $\langle\bk\sigma=\downarrow|\bk{\alpha=1}\rangle=\sin(\frac{\theta}{2})e^{i\phi}$, $\langle\bk\sigma=\uparrow|\bk{\alpha=-1}\rangle=-\sin(\frac{\theta}{2})$ and $\langle\bk\sigma=\downarrow|\bk{\alpha=-1}\rangle=\cos(\frac{\theta}{2})e^{i\phi}$, where $\theta$ and $\phi$ are the polar and azimuthal angles of $\bk$ in spherical polar coordinates. In the presence of RSOC (of strength $\lambda$), the two helicity bands ($\alpha=\pm1$) have the following dispersion (adding a constant energy shift of $\lambda^2/2$),
\begin{equation}
\varepsilon_\alpha(\bk)= \varepsilon_\alpha(k)= (\frac{k}{\sqrt{2}} - \alpha \frac{\lambda}{\sqrt{2}})^2.
\end{equation}
$V$ is the strength of the hybridization of the impurity state $d$ with the conduction bath fermions $c_{\bk \alpha}$ and $\Omega$ is the volume. $\tilde{\varepsilon}_d$ is the impurity onsite energy and $U$ is the repulsive Hubbard interaction strength at the impurity site between two fermions.
The ``bath'' density of states, i.e. of the RSOC fermions is $\rho(\omega) = \frac{1}{\pi^2}(\frac{\lambda^2}{\sqrt{2\omega}} + \sqrt{2 \omega})$. Given a density of particles ($n_o=k_F^3/3\pi^2$), the chemical potential $\mu$ depends on $\lambda$ as \mycite{Jayantha_PRB2_2011},
\begin{equation}
\mylabel{eqn:chempot}
\sqrt{\frac{\mu}{E_F}}(\frac{3\lambda ^2}{k_F^2}+ \frac{\mu}{E_F}) = 1.
\end{equation}
\section{Ultraviolet Regularization and Impurity Spectral Function}
\mylabel{sec:imp}
The non-interacting impurity Green's function ($U=0$) is given by,
\begin{equation}
{\cal G}_{d \sigma}(\omega) = \frac{1}{(\omega- \tilde{\varepsilon}_{d}- \sum_{\bk,\alpha} \frac{V^2}{2\Omega} \frac{1}{(\omega- \varepsilon_{\alpha}(\bk))})}.
\end{equation}
The third term in the denominator of the above expression has an ultraviolet divergence. We describe the procedure of regularization mentioned in the main text. $\tilde{\varepsilon}_d$ is treated as a bare parameter and replaced by the corresponding physical parameter $\varepsilon_d$ using,
\begin{equation}
\varepsilon_d = \tilde{\varepsilon}_d - \frac{V^2}{\Omega} \sum_{|\bk| \le \Lambda} \frac{1}{|\bk|^2/2}=
\tilde{\varepsilon}_d - \frac{V^2 \Lambda}{\pi^2}.
\end{equation}
The regularized Green's function is,
\begin{equation}
{\cal G}_{d \sigma}(\omega) = \frac{1}{(\omega- \varepsilon_{d}- (-\frac{V^2 \lambda^2}{2\sqrt{2}\pi\sqrt{-\omega}} + \frac{V^2\sqrt{-\omega}}{\sqrt{2}\pi}))}.
\end{equation}
The corresponding impurity spectral function is,
\begin{equation}
A_d(\omega) = 2\pi Z \delta(\omega-\varepsilon_b) + \frac{2 \left(\frac{\lambda ^2 V^2}{2 \sqrt{2} \pi \sqrt{\omega }}+\frac{V^2 \sqrt{\omega }}{\pi \sqrt{2}}\right)}{(\omega - \varepsilon_d)^2+\left(\frac{\lambda ^2 V^2}{2 \sqrt{2} \pi \sqrt{\omega }}+\frac{V^2 \sqrt{\omega }}{\pi \sqrt{2}}\right)^2}
\mylabel{eqn:Spec}
\end{equation}
where $\frac{1}{2\pi}\int_{-\infty}^{\infty} A_d(\omega) d\omega= 1$. $\varepsilon_b$ is the pole of the Green's function and $Z$ is the weight of the $d$ state in the $b$ bound state. $Z$ is evaluated by the following procedure: for any impurity Green's function of the form ${\cal G}_{d\sigma}(\omega) = \frac{1}{f(\omega)}$, if $\varepsilon_b$ solves for the pole ({\it i.e.,} $f(\omega=\varepsilon_b)=0$), then $Z=\frac{1}{f'(\omega)}|_{\omega=\varepsilon_b}$.
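This pole-and-residue procedure can be carried out numerically on the regularized Green's function above. The sketch below (dimensionless toy units with $E_F=1$; the parameter values are illustrative, not those used in the paper) locates $\varepsilon_b$ on the $\omega<0$ axis by bisection and extracts $Z=1/f'(\varepsilon_b)$ by a central difference:

```python
import math

def f(w, eps_d=0.0, V=1.0, lam=1.0):
    """Inverse impurity Green's function on the bound-state side (w < 0):
    f(w) = w - eps_d - Sigma(w), with the regularized self-energy."""
    s = math.sqrt(-w)
    sigma = -V**2 * lam**2 / (2.0 * math.sqrt(2.0) * math.pi * s) \
            + V**2 * s / (math.sqrt(2.0) * math.pi)
    return w - eps_d - sigma

def pole_and_Z(eps_d=0.0, V=1.0, lam=1.0):
    """Bisect for the pole eps_b (f diverges to +inf as w -> 0^- through
    the lambda^2 term, and f < 0 for w very negative), then Z = 1/f'."""
    lo, hi = -50.0, -1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if f(mid, eps_d, V, lam) < 0 else (lo, mid)
    eb = 0.5 * (lo + hi)
    h = 1e-7 * abs(eb)
    Z = 1.0 / ((f(eb + h, eps_d, V, lam) - f(eb - h, eps_d, V, lam)) / (2 * h))
    return eb, Z
```

Since the bound state carries only part of the spectral weight, $0<Z<1$ for any finite hybridization.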
\section{Hartree-Fock Method}
Under the Hartree-Fock (HF) method, the interaction term (see \eqn{eqn:Hamiltonian}) is treated as,
\begin{equation}
U n_{d\uparrow} n_{d\downarrow} \rightarrow U (\langle n_{d \uparrow} \rangle n_{d \downarrow} + n_{d \uparrow} \langle n_{d \downarrow} \rangle - \langle n_{d \uparrow} \rangle \langle n_{d \downarrow} \rangle).
\end{equation}
The occupancy of $d$ state for both spin labels can now be self consistently found by solving,
\begin{equation}
\langle n_{d \sigma} \rangle = \int_{-\infty}^{\frac{\mu(\lambda)}{E_F}} \frac{-1}{\pi} \Im [ {\cal G}_{d\sigma}(\frac{\omega^+}{E_F},\frac{\varepsilon_d + U \langle n_{d\bar{\sigma}} \rangle}{E_F}, \frac{V}{E_F^{1/4}},\frac{\lambda}{k_F})] d(\frac{\omega}{E_F}).
\end{equation}
This then allows us to find impurity moment $M=\langle n_{d\uparrow} - n_{d\downarrow}\rangle$ as a function of $U$ and $\lambda$, as is shown in the main text.
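The structure of the HF self-consistency loop is easy to see in a simplified setting. The sketch below replaces the RSOC bath by a hypothetical flat wide band, for which the non-interacting impurity spectral function is a Lorentzian of half-width $\Gamma$ and the occupation integral has the closed form $\langle n_\sigma\rangle = 1/2 - \arctan[(\varepsilon_d+U\langle n_{\bar\sigma}\rangle-\mu)/\Gamma]/\pi$; this is only an illustration of the iteration, not the calculation performed with the Rashba bath:

```python
import math

def hf_occupations(eps_d, U, Gamma, mu=0.0, n_up=0.9, n_dn=0.1,
                   mixing=0.5, tol=1e-10, max_iter=10000):
    """Self-consistent HF occupations for a toy wide-band Anderson
    impurity (Lorentzian spectral function of half-width Gamma).
    The loop is seeded asymmetrically so a magnetic solution, when it
    exists, can be reached; linear mixing stabilizes the iteration."""
    occ = lambda shift: 0.5 - math.atan((eps_d + shift - mu) / Gamma) / math.pi
    for _ in range(max_iter):
        new_up = occ(U * n_dn)
        new_dn = occ(U * n_up)
        if abs(new_up - n_up) + abs(new_dn - n_dn) < tol:
            break
        n_up += mixing * (new_up - n_up)
        n_dn += mixing * (new_dn - n_dn)
    return n_up, n_dn
```

At the symmetric point the loop relaxes to a magnetic solution when $U/\pi\Gamma>1$ and to the nonmagnetic one otherwise, mirroring the moment formation tracked by $M=\langle n_{d\uparrow}-n_{d\downarrow}\rangle$.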
\section{Variational Calculation}
To build the variational calculation (VC), we first look at the resolution of identity in the non-RSOC basis,
\begin{equation}
1 = \frac{\Omega}{8\pi^3} ( \sum_{l,m,\sigma} \int_0^{\infty} dk k^2 |k,l,m,\sigma \rangle \langle k,l,m,\sigma| )
\end{equation}
where $k=|\bk|$, and $l,m$ and $\sigma=\pm1/2$ are the azimuthal, magnetic and spin quantum numbers, respectively. $|k,l,m,\sigma\rangle$ are therefore the free-particle spherical-wave states. Now $\lambda$ couples the $l, m$ and $\sigma$ states to form {\it helicity} states, $(l, m, \sigma) \rightarrow (j, m_j, \alpha)$. The resolution of identity in this basis is,
\begin{equation}
1= \frac{\Omega}{8\pi^3} ( \sum_{j,m_j,\alpha} \int_0^{\infty} dk k^2 |k,j,m_j,\alpha \rangle \langle k,j,m_j,\alpha| )
\mylabel{ResolIden}
\end{equation}
where for any $k$,
\begin{equation}
|l=0,m=0,\uparrow\rangle= \frac{1}{\sqrt{2}}|j=\frac{1}{2},m_j=\frac{1}{2},\alpha=-1\rangle - \frac{1}{\sqrt{2}}|j=\frac{1}{2},m_j=\frac{1}{2},\alpha=1\rangle
\end{equation}
\begin{equation}
|l=0,m=0,\downarrow\rangle=\frac{1}{\sqrt{2}}|j=\frac{1}{2},m_j=-\frac{1}{2},\alpha=-1\rangle - \frac{1}{\sqrt{2}}|j=\frac{1}{2},m_j=-\frac{1}{2},\alpha=1\rangle.
\end{equation}
The Hamiltonian $H$ can therefore be written as,
\begin{equation}{\mylabel{eqn:SphericalHamiltonian}}
\begin{split}
H =\sum_{j,m_j,\alpha} \frac{\Omega}{8\pi^3} \int_{k}k^2dk \varepsilon_{\alpha}(k) |k,j,m_j,\alpha\rangle\langle k,j,m_j,\alpha|
+ U |d,\sigma\rangle\langle d,\sigma| |d, \bar{\sigma}\rangle\langle d,\bar{\sigma}|
+ \sum_{\sigma} \tilde{\varepsilon}_d |d,\sigma\rangle\langle d,\sigma| \\ + \sum_{\sigma,\alpha} V \frac{ \sqrt{\Omega} \sqrt{4\pi}}{8\pi^3}
( \int_{k}k^2 dk \frac{1}{\sqrt{2}}( \bar{\alpha}|k,j=1/2,m_j=\sigma,\alpha\rangle \langle d, \sigma| + h.c.) )
\end{split}
\end{equation}
Since $k \in (0,\infty)$, we transform $k=\tan(\frac{\pi x}{2})$ such that $dk= jac(x) dx$, where $jac(x)=\sec^2(\frac{\pi x}{2})\frac{\pi}{2}$. The $x$-interval $(0,1)$ is further divided into discrete Gauss-Legendre points, $\int_0^{1} dx \rightarrow \sum_{i} wt(x_i)$, such that the resolution of identity can be rewritten as,
\begin{equation}
1= \sum_{i,j,m_j,\alpha} g(x_i) |k(x_i),j,m_j,\alpha \rangle \langle k(x_i),j,m_j,\alpha|
\end{equation}
where, $g(x_i)=(\frac{\Omega}{8\pi^3}) wt(x_i) jac(x_i) k(x_i)^2 $. Defining $|\tilde{k}_i\rangle=|\tilde{k}(x_i)\rangle \equiv \sqrt{g(x_i)} |k(x_i)\rangle$ the complete discretized Hamiltonian is,
\begin{equation}
\begin{split}
H = \sum_{i,j,m_j,\alpha} \varepsilon_{\alpha}(k_i)| \tilde{k}_i,j, m_j, \alpha\rangle\langle \tilde{k}_i,j, m_j, \alpha| \\ + Un_{d\uparrow} n_{d\downarrow} + \sum_{\sigma} \tilde{\varepsilon}_d |d,{\sigma}\rangle \langle d, \sigma| \\+ V \sum_i \frac{1}{\sqrt{\Omega}} \sqrt{2\pi g(x_i)} (-|\tilde{k}_i , j=\frac{1}{2}, m_j=\frac{1}{2}, \alpha=1 \rangle \\ + |\tilde{k}_i, j=\frac{1}{2}, m_j=\frac{1}{2}, \alpha=-1 \rangle ) \langle d \uparrow| \\ + V\sum_i \frac{1}{\sqrt{\Omega}} \sqrt{2\pi g(x_i)} (-| \tilde{k}_i , j=\frac{1}{2}, m_j=-\frac{1}{2}, \alpha=1 \rangle \\+ | \tilde{k}_i , j=\frac{1}{2},m_j=-\frac{1}{2}, \alpha=-1 \rangle ) \langle d \downarrow|
\end{split}
\end{equation}
The system is numerically diagonalized in the non-interacting sector ($U=0$), where the regularization of $\tilde{\varepsilon}_d$ is included. A rigid Fermi sea is implemented by discarding states which have $\varepsilon_{\alpha}(\bk) < \mu(\lambda)$. The $U$ term of the Hamiltonian is further diagonalized in the two-particle sector using the product of one particle states. Various observables can then be calculated by taking expectation on the ground state wavefunction. Typically $\approx 10^4$ states in the two-particle sector may be necessary to find accurate solutions.
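The radial discretization used above can be reproduced in a few lines. The sketch below (omitting the $\Omega/8\pi^3$ volume factor and the $k^2$ weight, which enter $g(x_i)$ in the same way) builds the $k=\tan(\pi x/2)$ Gauss-Legendre grid and checks it against a known integral:

```python
import numpy as np

def radial_grid(n):
    """Discretize k in (0, inf) via k = tan(pi*x/2), x in (0, 1), on
    Gauss-Legendre points, returning nodes k_i and weights w_i such
    that sum_i w_i f(k_i) approximates int_0^inf f(k) dk."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)          # map [-1, 1] -> [0, 1]
    w = 0.5 * w
    k = np.tan(0.5 * np.pi * x)
    jac = 0.5 * np.pi / np.cos(0.5 * np.pi * x) ** 2
    return k, w * jac

k, wk = radial_grid(64)
approx = float(np.sum(wk * np.exp(-k**2)))   # should approach sqrt(pi)/2
```

The mapped integrand is smooth on $(0,1)$, so the quadrature converges rapidly with the number of nodes.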
\section{Hirsch-Fye Quantum Monte Carlo}
Hirsch-Fye quantum Monte Carlo numerics are performed following \mycite{Hirsch_PRL_1986} where the susceptibility is obtained by,
\begin{equation}
\begin{split}
\chi = \int_0^{\beta} d\tau \langle [d^{\dagger}_{\uparrow}(\tau)d_{\downarrow}(\tau) + d^{\dagger}_{\downarrow}(\tau)d_{\uparrow}(\tau)] \times \\ [d^{\dagger}_{\uparrow}(0)d_{\downarrow}(0) + d^{\dagger}_{\downarrow}(0)d_{\uparrow}(0)] \rangle .
\end{split}
\end{equation}
The starting Green's function can be obtained from the non-interacting impurity spectral function (see \eqn{eqn:Spec}). Throughout the calculations, the chemical potential is kept fixed at its zero-temperature value ($\mu(T, \lambda)=\mu(\lambda))$. Our formulation can be readily used to obtain quantities of interest to experiments using realistic (temperature/system dependent) values of parameters.
\section{Infrared Divergence of Density of States determines {\it Z}}
In order to understand the origin of $Z=2/3$, we construct conduction baths with an infrared divergence in the density of states of the form,
\begin{equation}
\rho (\omega)= \frac{1}{\pi^2}(\sqrt{2 \omega} + \frac{\lambda}{\sqrt{2}}\frac{\omega^r}{\lambda^{2r}})
\end{equation}
The infrared divergence is characterized by the exponent $r$ $(-1<r<0)$. For a given density of particles $n_o$, one can obtain the dependence of $\mu$ on both $r$ and $\lambda$ (similar to \eqn{eqn:chempot}). The impurity Green's function and $Z$ are obtained as illustrated in \sect{sec:imp}. It is found that for large $\lambda$, $Z \rightarrow \frac{1}{1-r}$.
This can be obtained analytically, by discarding the $\sim \sqrt{\omega}$ term in $\rho(\omega)$ and considering $\rho(\omega)= \omega^r$ $(-1<r<0)$. The impurity Green's function in this case is given by,
\begin{equation}
{\cal G}_d(\omega) = \frac{1}{\omega -\varepsilon_d - V^2 \pi (-\frac{1}{\omega})^{-r} \csc(\pi r)}
\end{equation}
with $Z=\frac{1}{1-r}$ for all values of $V(\ne 0)$ and $\varepsilon_d=0$.
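The analytic result $Z=1/(1-r)$ can be verified numerically from this Green's function. The sketch below (illustrative parameters) finds the pole of $f(\omega)=\omega-V^2\pi(-\omega)^r\csc(\pi r)$ on the $\omega<0$ axis and differentiates at the pole:

```python
import math

def Z_of_r(r, V=1.0):
    """Numerical check that Z = 1/(1-r) for a bath with density of
    states rho(w) = w^r (-1 < r < 0) and eps_d = 0.  For w < 0 the
    inverse Green's function is f(w) = w - V^2*pi*(-w)^r*csc(pi*r),
    which diverges to +inf as w -> 0^- since csc(pi*r) < 0."""
    c = V**2 * math.pi / math.sin(math.pi * r)
    f = lambda w: w - c * (-w) ** r
    lo, hi = -100.0, -1e-12      # f(lo) < 0, f(hi) > 0 bracket the pole
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if f(mid) < 0 else (lo, mid)
    eb = 0.5 * (lo + hi)
    h = 1e-6 * abs(eb)
    return 1.0 / ((f(eb + h) - f(eb - h)) / (2.0 * h))
```

The Rashba case, whose $\rho(\omega)\sim\omega^{-1/2}$ at small $\omega$, corresponds to $r=-1/2$ and hence $Z\to 2/3$.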
\clearpage
\setcounter{page}{1}
\setcounter{figure}{0}
\end{document}
\section{Introduction} \label{intro}
Most young stars are surrounded by circumstellar disks, the natural by-product of star formation.
After a protostar has formed, its disk plays a crucial role in the evolution of the system,
first by serving as reservoir for mass accretion, and later by becoming the birthplace of the planetary system.
At early evolutionary stage the mass of the {\sl primordial disk} is dominated by gas,
with a few percent of mass in small dust grains. The gaseous component
plays an important role in controlling the dust dynamics in the disk \citep{beckwith2000},
and its dispersal determines the time available to form Jupiter- and Saturn-like giant planets.
As the disk evolves, the gas content is removed
by viscous accretion \citep{lynden1974} and/or by photoevaporation \citep[e.g.,][]{alexander2006}.
Current observational results
imply that the primordial gas becomes largely depleted in the first 10\,Myr
\citep{zuckerman1995,pascucci2006,fedele2010}.
A gas-poor disk appears,
where the lifetime of individual dust grains under the influence of radiative forces -- without the stabilization effect of
the surrounding gas --
is far less than the stellar age.
The dust grains are believed to be continuously replenished
by collisions and/or evaporation of previously formed planetesimals
\citep[][and references therein]{wyatt2008}.
In these {\sl debris disks} only a small amount of gas is expected.
Similarly to the dust, this gas could be {\sl second generation},
produced by sublimation of planetesimals,
photodesorption from dust grains \citep{grigorieva2007}, or vaporization of colliding dust
particles \citep{czechowski2007}.
So far only a few debris disks are known with detectable gas component.
The edge-on orientation of the disk around $\beta$\,Pic
allowed the detection of a very small amount of circumstellar gas
($N_{\rm CO}\sim6\times10^{14}$\,cm$^{-2}$)
through the presence of
absorption/emission lines \citep[][]{roberge2000,brandeker2004}. \citet{redfield2007} successfully exploited the
favorable edge-on geometry of the disk around \object[HD 32297]{HD32297} to detect gas via \ion{Na}{1}.
In contrast to the disks mentioned above, the debris disk around the young main-sequence star \object[49 Ceti]{49\,Ceti}
seems to have substantial ($\sim$13\,$M_{\oplus}$) molecular gas \citep{zuckerman1995,dent2005,hughes2008}.
The origin of the gas in the above mentioned systems is currently under debate.
It can be residual primordial gas that survived longer in the outer disks than usually assumed
\citep{krivov2009} or
it may have formed or been released recently.
Based on dynamical arguments, \citet{fernandez2006} suggested that the gas in $\beta$\,Pic is secondary.
Analyzing high-resolution data obtained with the SMA interferometer at 230\,GHz,
\citet{hughes2008} proposed that \object[49 Ceti]{49\,Ceti} is in a late stage of its evolution, possibly representing
the link between gas-rich primordial disks and gas-poor optically thin
debris disks. \citet{rhee2007} suggested an age of 20\,Myr for \object[49 Ceti]{49\,Ceti}, while \citet{thi2001} derived $\sim$8\,Myr as
the age of the star. In the case of the older age, \object[49 Ceti]{49\,Ceti} would challenge the current picture
of gas disk evolution, while the lower value could still be marginally consistent with a primordial disk phase.
Confirming the existence of debris disks containing a significant amount of gas would require finding and studying
more \object[49 Ceti]{49\,Ceti}-like systems with reliable age estimates.
It might well be possible that
a number of similar systems exist, since most debris disks that are
similar to \object[49 Ceti]{49\,Ceti} in terms of age and fractional
luminosity have never been observed in molecular lines (most such candidates are in the southern hemisphere).
Motivated by this fact, we carried out a survey with the APEX\footnote{This publication is based on data acquired with the Atacama
Pathfinder EXperiment (APEX). APEX is a collaboration between
the Max-Planck-Institut f\"ur Radioastronomie, the European Southern
Observatory, and the Onsala Space Observatory.} radio telescope
to detect molecular gas in 20 infrared-luminous debris disks.
Here we review the results of this survey and report on the discovery of a second \object[49 Ceti]{49\,Ceti}-like disk around
the 30\,Myr-old star \object[HD 21997]{HD21997}.
\section{Sample description} \label{sample}
For candidate selection we adopted the \object[49 Ceti]{49\,Ceti} system as a template. This A1-type star harbors a dusty
disk that re-emits a fraction $f_{\rm dust}\sim8-9\times10^{-4}$ of the star's radiation at infrared
wavelengths. This fractional luminosity is an order of magnitude lower than the corresponding value
in primordial disks.
We used the following target selection criteria:
(1) spectral type between A0 and F6;
(2) $f_{\rm dust}$ in the range of
$5\times 10^{-4}$ to $5\times 10^{-3}$, which excludes both primordial disks and very tenuous debris disks;
(3) the excess emission is confirmed by an instrument independent of {\sl IRAS};
(4) ages between 10 and 100\,Myr, the age estimate is based on kinematic group membership or other
reliable dating methods.
In total, 20 candidates were selected from the lists of \citet{chen2005} and \citet{moor2006,moor2010}.
Their basic properties are given in Table~\ref{table1}.
For most sources, disk parameters (temperature, radius, and
fractional luminosity) were adopted from the literature (Table~\ref{table1}).
In all cases, disk radii were estimated by
adopting blackbody-like dust emission. For \object[HD 121617]{HD121617} no previous literature
data were found, thus we collected infrared photometry from the {\sl IRAS} FSC,
{\sl AKARI} IRC, {\sl AKARI} FIS and {\sl WISE} \citep{wright2010}. For \object[HD 21997]{HD21997}, we also compiled a new
spectral energy distribution
based on literature data and our own photometry derived from archival observations obtained with the
Multiband Imaging Photometer for Spitzer \citep[see Figure~\ref{fig1}, 55.3$\pm$2.2\,mJy at 24\,{\micron}
and 662$\pm$47\,mJy at 70\,{\micron}; for the description of the processing see ][]{moor2010}.
Disk parameters for these two targets were derived following \citet{moor2010}.
For sources where submillimeter observations were available,
we computed dust masses assuming optically thin emission with $\kappa_{870\micron}=2$\,cm$^2$\,g$^{-1}$ and
$\beta = 1$ \citep{nilsson2010}, and dust temperature from Table~\ref{table1}.
For comparison of the fundamental parameters, \object[49 Ceti]{49\,Ceti} is also added to Table~\ref{table1}.
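The blackbody radii quoted above follow from the standard equilibrium grain temperature relation $T=278.3\,{\rm K}\,(L_*/L_\odot)^{1/4}(r/{\rm AU})^{-1/2}$. A one-line sketch of the inversion (the generic relation, not tied to the specific fits of Table~\ref{table1}):

```python
def blackbody_radius_au(t_dust_K, lstar_lsun):
    """Radius (AU) at which a blackbody grain around a star of
    luminosity L* (solar units) reaches equilibrium temperature
    T_dust:  T = 278.3 K * (L*/Lsun)^(1/4) * (r/AU)^(-1/2)."""
    return (278.3 / t_dust_K) ** 2 * lstar_lsun ** 0.5
```

As expected, cooler dust around a given star sits at a larger blackbody radius, which is why the coldest disks in the sample are also the most extended.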
\section{Observations and data reduction} \label{obsanddatared}
Our survey was carried out
with the APEX 12\,m telescope \citep{gusten2006} in service mode, between 2008 October and 2009 November.
All objects were observed at the 345.796\,GHz $^{12}$CO $J$=3$-$2 line using the
SHeFI/APEX2 receiver
\citep{vassilev2008}.
One source, \object[HD 21997]{HD21997} was also
observed in the $J$=2$-$1 transition of $^{12}$CO (at 230.538\,GHz) with the SHeFI/APEX1 receiver.
For the backend, we used the
Fast Fourier Transform Spectrometer with 2048 channels providing a velocity resolution of 0.42 and 0.64\,km\,s$^{-1}$ in the
$J$=3$-$2 and $J$=2$-$1 transitions, respectively. An on-off observing pattern was utilized with beam switching.
The total on-source integration time for most sources ranged between 10 and 30\,minutes. For \object[HD 21997]{HD21997},
we integrated
longer for a better characterization of the line profile.
The weather conditions were generally dry,
the precipitable water vapor was below 1.3\,mm for most of the $J$=3$-$2 observations and ranged between
1.3 and 2.7\,mm during the $J$=2$-$1 measurements.
The data reduction was performed using the GILDAS/CLASS package\footnote{\url{http://iram.fr/IRAMFR/GILDAS/}}.
For the final average spectrum, we omitted noisy scans,
and a linear baseline was subtracted from each individual scan.
\section{Results and analysis} \label{analysis}
Among the 20 targets, one system, \object[HD 21997]{HD21997}, was detected at $>$5\,$\sigma$ level in both CO lines.
Figure~\ref{fig2} shows the baseline-corrected CO profiles for \object[HD 21997]{HD21997}.
Both lines display a double-peaked profile with identical peak positions.
The central velocities of both lines are consistent with the systemic velocity of the star \citep[$+17.3\pm0.8$\,km\,s$^{-1}$;][]{kharchenko2007}.
We integrated the intensities/fluxes over an interval of
8\,km\,s$^{-1}$ that covers the whole line profile.
The beam efficiencies and Kelvin-to-Jansky conversion factors were taken from the APEX web
page\footnote{\url{http://www.apex-telescope.org/telescope/efficiency/}}.
For the non-detected sources, upper limits were estimated
as $T_{\rm rms} \Delta{v} \sqrt{N}$, where $T_{\rm rms}$ is the measured noise, $\Delta{v}$ is the velocity channel width,
and $N$ is the number of velocity channels over an interval of 10\,km\,s$^{-1}$.
The total mass (or upper limit) of CO molecules ($M_{\rm CO}$) was estimated assuming optically thin emission and local thermodynamic equilibrium (LTE).
The excitation temperature ($T_{\rm ex}$) was assumed to be equal to the dust temperature in Table~\ref{table1} (i.e.,
gas and dust are sufficiently coupled).
The obtained line intensities/fluxes as well as the estimated CO masses are listed in Table~\ref{table1}.
Note that for \object[HD 21997]{HD21997} the CO masses computed independently from the (2$-$1) and (3$-$2) transitions
are significantly different.
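For reference, the optically thin LTE conversion from an integrated CO(3--2) flux to a CO mass can be sketched as follows (cgs constants and standard CO line data; a simplified illustration with a rigid-rotor partition function, not the exact pipeline used for Table~\ref{table1}):

```python
import math

H, KB, C = 6.626e-27, 1.381e-16, 2.998e10              # cgs constants
B0 = 57.635e9                                           # CO rotational constant, Hz
NU32, A32, EU32, GU32 = 345.796e9, 2.497e-6, 33.19, 7   # CO(3-2) line data

def co_mass_earth(flux_jy_kms, dist_pc, t_ex):
    """Optically thin LTE CO mass (Earth masses) from an integrated
    CO(3-2) flux (Jy km/s) at distance dist_pc and excitation
    temperature t_ex (K)."""
    d = dist_pc * 3.086e18                         # cm
    # Jy km/s -> erg s^-1 cm^-2: x1e-23 (Jy), x1e5 (km->cm), x nu/c (dv->dnu)
    f_cgs = flux_jy_kms * 1e-23 * 1e5 * NU32 / C
    n_u = 4.0 * math.pi * d**2 * f_cgs / (H * NU32 * A32)   # molecules in J=3
    q = KB * t_ex / (H * B0) + 1.0 / 3.0           # rigid-rotor partition fn
    n_tot = n_u * q * math.exp(EU32 / t_ex) / GU32
    return n_tot * 28.0 * 1.6605e-24 / 5.972e27    # CO mass in Earth masses
```

At the distance and dust temperature of \object[HD 21997]{HD21997}, a flux of order 1\,Jy\,km\,s$^{-1}$ translates into a CO mass of a few times $10^{-5}$\,$M_{\oplus}$, consistent in order of magnitude with the values discussed in the text.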
Figure~\ref{fig3}(left panel) shows the integrated CO(3$-$2) fluxes and upper limits,
normalized to 100\,pc, plotted against the fractional luminosities of the disks.
For comparison, additional protoplanetary/debris disks around A0-F6-type pre-main/main-sequence stars, including \object[49 Ceti]{49\,Ceti},
are also displayed \citep{dent2005,greaves2000a,greaves2000}.
Our observations fill the gap between Herbig Ae/Be disks and older debris disks.
Note that the fractional luminosities of \object[HD 21997]{HD21997} and
\object[49 Ceti]{49\,Ceti} are modest even within the debris sample, and significantly lower than those of the primordial disks.
Thus, $f_{\rm dust}$ does not appear to be a good proxy for the presence of CO gas in debris disks.
Figure~\ref{fig3}(right panel) presents
the integrated CO(3$-$2) fluxes versus
the (blackbody) radii of the dust disks. Interestingly, the two definite detections, \object[HD 21997]{HD21997}
and \object[49 Ceti]{49\,Ceti},
harbor the most extended disks, suggesting that large radius and low dust temperature may be essential for CO detection.
Although the radii in this analysis rely on the assumption of blackbody grains, the conclusion holds in the case of realistic
grain size distributions. Using the blackbody assumption, we may systematically underestimate the true radius, due to
the presence of inefficiently emitting small particles. However, assuming similar
grains in all disks, the relative distribution of disk radii would not differ from the blackbody case
\citep[see e.g.,][]{wyatt2008}.
\object[HD 21997]{HD21997} is an A3-type star
at a distance of 72\,pc \citep{vanleeuwen07}, a member of the $\sim$30\,Myr old Columba group \citep{moor2006,torres2008}.
Fitting an ATLAS9 atmosphere model \citep{castelli2003} to the optical and near-IR ({\sl Hipparcos, Tycho-2, Two Micron All Sky Survey}) data,
assuming solar metallicity and $\log{g} = 4.25$ yields $T_{\rm eff}$ = 8300\,K. The evolutionary tracks of \citet{siess2000}
imply a stellar mass of 1.85\,$M_{\sun}$.
We modeled the measured line profiles of \object[HD 21997]{HD21997} with a simple disk geometry assuming
a combination of a radial power-law and a vertical Gaussian profile for the density distribution:
\begin{equation}
n_{\rm CO}(r,z)=n_{\rm CO,in}(r/R_{\rm in})^{-\alpha}e^{-z^2/2H^2}.
\end{equation}
We fixed the following parameters: $H=0.1r$, $\alpha=-2.5$, $R_{\rm in}=63$\,AU (Table~\ref{table1}), and
$R_{\rm out}=200$\,AU \citep[typical for Herbig Ae/Be disks;][]{panic2009}.
We assumed an H$_2$ abundance relative to CO of 10$^{4}$, and that gas and dust grains -- the latter acting like blackbodies -- are well mixed,
prescribing that the gas kinetic temperature and dust temperature distributions are identical.
The velocity of the material in the disk was derived by assuming
Keplerian orbits around a star of 1.85\,$M_\sun$ mass.
Then the CO level populations at each disk position, and the resulting emission line
profiles were calculated using the non-LTE spectral line radiation transfer code LIME \citep{brinch2010}.
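The characteristic double-peaked shape produced by such a model comes from the Keplerian velocity field alone. The toy Monte Carlo below (geometry only, no radiative transfer; the power-law emissivity exponent $p$ is a hypothetical stand-in for the full density model) histograms line-of-sight velocities $v=v_K(r)\sin i\cos\phi$:

```python
import numpy as np

V_EARTH = 29.78  # km/s: Keplerian circular speed at 1 AU around 1 Msun

def line_profile(mstar=1.85, r_in=63.0, r_out=200.0, incl_deg=45.0,
                 p=2.5, nbins=41, n=200000, vmax=5.0, seed=1):
    """Monte Carlo line profile of an optically thin Keplerian disk.
    Radii are drawn with weight r^(1-p) dr (ring area times an assumed
    r^-p emissivity, p != 2) via inverse-CDF sampling."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a, b = r_in ** (2.0 - p), r_out ** (2.0 - p)
    r = (a + u * (b - a)) ** (1.0 / (2.0 - p))
    phi = 2.0 * np.pi * rng.random(n)
    v_k = V_EARTH * np.sqrt(mstar / r)
    vlos = v_k * np.sin(np.radians(incl_deg)) * np.cos(phi)
    hist, _ = np.histogram(vlos, bins=nbins, range=(-vmax, vmax))
    return hist
```

The resulting histogram is symmetric with horns near $\pm v_K(R_{\rm out})\sin i$ and a central dip, qualitatively reproducing the observed CO profiles.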
First, we fitted the (3--2) line by adjusting $n_{\rm CO,in}$ and the disk inclination.
The best-fitting model spectrum with $n_{\rm CO,in}=10$\,cm$^{-3}$ and $i=45^\circ$ is overplotted with dashed line
in Figure~\ref{fig2}.
With the same parameters we also computed a CO (2--1) profile. As Figure~\ref{fig2} shows,
this model significantly underestimates the observed CO(2--1) feature.
The reason for this discrepancy
is the same as what causes the difference in the CO mass estimates from (2--1) and (3--2)
lines: the ratio of integrated CO(3$-$2) flux to the
integrated CO(2$-$1) flux is only 1.43$\pm$0.37, significantly
lower than the ratio of 3.8, expected for $T_{\rm ex}\sim60$\,K in LTE condition.
This low line flux ratio corresponds to an excitation temperature of
13.1$\pm$2.7\,K, which is unrealistically low (subthermal) for an LTE value,
suggesting that the density of collision partners (H$_2$)
is lower than the critical density and that the excitation
is not collisionally dominated.
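For reference, for optically thin emission the ratio of the velocity-integrated flux densities in LTE is
\begin{equation}
\frac{S_{32}\,\Delta v}{S_{21}\,\Delta v} = \frac{A_{32}\,g_{3}}{A_{21}\,g_{2}}\,
e^{-(E_{3}-E_{2})/kT_{\rm ex}} \approx 5.1\,e^{-16.6\,{\rm K}/T_{\rm ex}} \; ,
\end{equation}
where $A_{ul}$ are the Einstein coefficients, $g_{J}=2J+1$ are the statistical weights, and $E_{3}-E_{2}=h\nu_{32}$ corresponds to 16.6\,K (standard CO molecular constants, quoted here for clarity rather than taken from the observations). This expression yields 3.8 at $T_{\rm ex}=60$\,K, and inverting it for the measured ratio of 1.43 reproduces the 13.1\,K quoted above.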
In order to provide a model that can fit both lines simultaneously, we gradually decreased the
H$_2$/CO abundance ratio and repeated the above-mentioned modeling process.
We found that with H$_2$/CO=1000$\pm$500, $n_{\rm CO,in}=22\pm5$\,cm$^{-3}$, and $i=(45^{+15}_{-10})^{\circ}$, both line profiles
can be fitted (solid line in Figure~\ref{fig2}). Note that in this non-LTE model the
kinetic and excitation temperatures of the gas are different.
The total CO mass predicted by this model is $M_{\rm CO}=3.5\times10^{-4}$\,$M_{\oplus}$.
\section{Discussion} \label{discussion}
Its reliable age determination makes \object[HD 21997]{HD21997} the oldest known example of a gas-bearing debris disk.
In many aspects it resembles the somewhat younger \object[49 Ceti]{49\,Ceti}.
Both systems contain an A-type central star that produces energetic UV radiation capable of dissociating CO molecules in the vicinity of the star.
\object[49 Ceti]{49\,Ceti} and \object[HD 21997]{HD21997} clearly stand out from our sample in terms of
disk radius, and of harboring a large amount of relatively cold dust ($T_{\rm dust}\leq80$\,K and $M_{\rm dust}\sim0.1$\,$M_{\oplus}$).
{Note also that these two systems exhibit very similar $M_{\rm CO}$/$M_{\rm dust}$ ratios ($\sim$0.003).}
Based on their similarities,
we speculate that \object[HD 21997]{HD21997} and \object[49 Ceti]{49\,Ceti}
may be the first representatives of a so far
undefined new class of relatively old ($\gtrsim$8\,Myr), gaseous dust disks.
\citet{hughes2008} claim that the disk around \object[49 Ceti]{49\,Ceti} contains predominantly primordial gas
and may represent the link between gas-rich primordial disks and gas-poor debris disks.
It is an open question whether the \object[HD 21997]{HD21997} system is of similar origin.
The gas clearing process in primordial disks is expected to progress outwards
due to photoevaporation driven by the central star
\citep[e.g.,][]{alexander2006,pascucci2010},
thus the last reservoir of gas will be the outermost part of the circumstellar disk.
Indeed, \object[HD 21997]{HD21997} and \object[49 Ceti]{49\,Ceti} possess the largest disks and consequently have the longest expected
survival time for gas in our sample (though the evaporation timescale also depends on the high-energy flux
of the central star). A confirmed primordial origin of the gas in the \object[HD 21997]{HD21997} system
would pose a serious question to the current paradigm,
since its age of $\sim$30\,Myr significantly exceeds both the model predictions
for disk clearing and the ages of the oldest T\,Tauri-like or transitional gas disks in the literature \citep{kastner2008}.
Primordial CO gas can survive only in the case of efficient shielding from the stellar/interstellar high-energy photons.
We determined the stellar UV flux from the fitted ATLAS9 atmosphere model (see Section~\ref{analysis}) and the UV component of
the interstellar radiation field (ISRF) from \citet{habing1968}.
For each disk position, where the H$_2$ and CO column densities are provided by our model, we
analyzed the shielding efficiency using the photodissociation model of \citet{visser2009}
for different $N_{\rm CO}$ and $N_{\rm H_2}$ pairs (shielding by dust grains is negligible in this tenuous disk).
We found that no region in the disk is shielded enough to provide a CO lifetime longer than 500\,yr.
Adopting a lower scale height would lead to higher radial column densities but would not
significantly affect the vertical column densities; thus the UV photons of the ISRF -- which dominate over the stellar UV flux
almost everywhere in the disk -- could efficiently photodissociate CO molecules.
In the course of the modeling we assumed that the gas and dust are sufficiently coupled to share a common
temperature, but in tenuous debris disks this may not be true \citep{kamp2001,zagorovsky2010}.
Assuming a lower gas temperature similar to the measured excitation temperature would allow the existence of a larger amount of
hydrogen gas. However, the gas is not likely to cool down to such a low temperature all over the disk.
{Thus based on this result, as well as on the obtained H$_2$/CO ratio of $\sim$1000 that is lower
than expected for a primordial composition,
the scenario of primordial origin of gas in \object[HD 21997]{HD21997} is unlikely.}
Is it possible then, that the gas in this disk is of secondary origin, being continuously replenished from icy planetesimals?
In this scenario the gas may have a very low H$_2$/CO ratio.
Without the presence of a large amount of H$_2$,
shielding against UV photons is weak.
Our modeling predicts CO photodissociation timescales of less than 500 years in the whole disk.
In order to reproduce the observed CO mass of $\sim$3.5$\times10^{-4}$\,{$M_\oplus$}, CO has to be released from solids with a rate of
$>$7$\times10^{-7}$\,{$M_\oplus$}yr$^{-1}$.
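This lower limit is simply the observed CO mass divided by the photodissociation timescale, i.e.\ the production rate needed to sustain the reservoir in steady state:
\begin{equation}
\dot{M}_{\rm CO} \gtrsim \frac{M_{\rm CO}}{t_{\rm phd}}
= \frac{3.5\times10^{-4}\,M_{\oplus}}{500\,{\rm yr}}
= 7\times10^{-7}\,M_{\oplus}\,{\rm yr}^{-1} \; .
\end{equation}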
Pure CO ice evaporates at temperatures above 18\,K and is thus volatile
even far from the luminous central star, meaning that the surfaces of planetesimals are very likely already depleted
in CO ice. However, CO ice can persist in deeper layers and/or can be trapped in mixed H$_2$O--CO ices even on the
surface at temperatures below $\sim$140\,K.
Destructive collisions between planetesimals can lead to the release of subsurface ices.
In addition, frequent collisions of icy grains with smaller particles -- mainly with $\beta$ meteoroids --
could produce a continuous flux of CO via vaporization. Photodesorption from solids can also
significantly contribute to the gas production.
Extrapolation of the current production rate for the last 20\,Myr (assuming a primordial gas-rich disk phase in the first 10\,Myr
and a steady-state disk evolution afterwards)
would yield a total of $>$14\,{$M_\oplus$} of CO released from
planetesimals/grains. Adopting a CO mass abundance of 10\% in the planetesimals
\citep[see the composition of the comet Hale-Bopp;][]{huebner1999},
this scenario would require the complete destruction of more than 140\,{$M_\oplus$} of planetesimals in the outer disk.
This would significantly exceed
the full initial dust content of a typical protoplanetary disk, making the steady-state scenario questionable.
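For clarity, the mass budget quoted above is simple arithmetic:
\begin{equation}
M_{\rm CO}^{\rm released} \gtrsim 7\times10^{-7}\,M_{\oplus}\,{\rm yr}^{-1}
\times 2\times10^{7}\,{\rm yr} = 14\,M_{\oplus} \; , \qquad
M_{\rm planetesimals} \gtrsim \frac{14\,M_{\oplus}}{0.1} = 140\,M_{\oplus} \; .
\end{equation}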
A more satisfactory explanation would be that the system is currently undergoing a phase of temporarily high CO production.
The origin of this contemporaneous gas production might be imprinted in the spatial distribution
of the gas and dust, and could be revealed with future interferometers.
Our results indicate that neither primordial origin nor steady secondary production
can unequivocally explain the presence of CO gas in the disk of \object[HD 21997]{HD21997}.
An ongoing phase of temporarily high CO production may be more likely.
Detection of other gas components and transitions with {\sl Herschel} and {\sl ALMA},
as well as a better characterization of the disk structure, may lead to a deeper understanding
of this enigmatic system and clarify whether \object[49 Ceti]{49\,Ceti} and \object[HD 21997]{HD21997}
are the first examples of a so far little-studied phase of disk evolution.
\acknowledgments
We thank an anonymous referee whose comments significantly improved the manuscript.
We are grateful to the APEX staff, in particular to Carlos De Breuck (ESO), for their assistance.
This research was partly funded by the Hungarian OTKA grant K81966. The research of
\'A.K. is supported by the Netherlands Organisation for Scientific Research.
{\it Facilities:} \facility{APEX}.
\section{Introduction}
Stellar interiors are obvious sites for interesting dynamical phenomena,
with strong potential effects of convection and other forms of instabilities,
rotation and its evolution, as well as magnetic fields,
forming a fascinating playing field for theoretical investigations.
However, `classical' observations of stellar surfaces provide only limited
observational tests of the resulting models.
Observations of stellar oscillations, on the other hand, are sensitive to
many aspects of the structure and dynamics of stellar interiors.
In addition, the oscillations are themselves interesting dynamical phenomena.
The diagnostic potential of oscillations
has been evident for more than a decade in the solar case,
where helioseismic investigations have yielded very
detailed information about the structure and rotation of the solar interior
\citep[e.g.,][]{Gough1991, Christ2002}
and the detailed properties of the solar near-surface
region through local helioseismology \citep[for a review, see][]{Gizon2010}.
For a comprehensive review of global helio- and asteroseismology,
see also \citet{Aerts2010}.
The extension of such detailed investigations to other stars has been eagerly
sought since the beginning of helioseismology, given the expected
general presence of the oscillations in all cool stars.
There is strong evidence that the solar modes are excited stochastically
by the near-surface convection, and hence similar modes are expected in all
stars with such vigorous convective motions \citep{Christ1983}.
However, the predicted amplitudes of a few parts per million in relative
intensity or at most tens of centimeters per second in velocity have made
their detection extremely challenging.
Ground-based observations have had some success for a limited number of stars
\citep[for early examples, see][]{Kjelds1995a, Bouchy2001, Bouchy2002,
Frands2002},
but the required efforts in terms of manpower and valuable
telescope time have been very considerable.
However, in the last few years observations of stellar oscillations, and
other types of variability, have made tremendous progress, largely as a result
of the CoRoT and {\it Kepler} space missions, with significant contributions
also from the Canadian MOST satellite \citep{Walker2003, Matthe2007}.
Here we provide a brief overview of some of the results from these missions
which promise to revolutionize the study of stellar internal structure
and dynamics.
\section{CoRoT and Kepler}
Both CoRoT and {\it Kepler} carry out photometric observations,
with the dual purpose of detecting extra-solar planets (exoplanets)
using the transit technique and making asteroseismic investigations.
In fact, the observational requirements for these two types of investigation
are very similar.
A central goal of the exoplanet studies is to characterize the population
of Earth-like planets in orbit around Sun-like stars.
The transit of such a planet corresponds to a reduction in stellar intensity
of around $10^{-4}$.
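This depth follows from simple geometry: the fractional dimming during a transit equals the ratio of the projected disc areas,
\begin{equation}
\frac{\Delta F}{F} = \left(\frac{R_{\rm p}}{R_{\star}}\right)^{2}
= \left(\frac{R_{\oplus}}{R_{\odot}}\right)^{2} \approx 8\times10^{-5}
\end{equation}
for an Earth-size planet crossing a Sun-like star.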
This level of sensitivity allows the detection of solar-like oscillations in
sufficiently extended observations.
Also, both types of investigations require very long
and continuous observations.
Finally, asteroseismology has the potential to characterize the properties
of the central stars in planetary systems detected by the transit technique.
This has proven to be very useful in a few cases for the {\it Kepler} mission
and is central to the PLATO mission proposed to ESA.
The CoRoT satellite \citep{Baglin2006, Baglin2009, Michel2008a}
was launched on 27 December 2006 into an orbit around the Earth.
The satellite has an off-axis telescope with a diameter of 28\,cm,
and a focal plane with 4 CCD detectors, two of which
(defining the exoplanet field) are optimized for
studying planet transits and the other two, with slightly defocussed images,
optimized for asteroseismology (the asteroseismic field).
The field of view of each CCD is $1.3^\circ \times 1.3^\circ$.
Given the low-Earth orbit, great care was taken to minimize effects of
scattered light.
Further details on the design and operations of the mission were provided by
\citet{Auverg2009}.
CoRoT's orbit allows extended observations, up to 5 months, in two regions
with a diameter of around $20^\circ$
near the Galactic plane, in the directions of
the Galactic centre and anticentre, respectively;
these are known as the `CoRoT eyes'.
The observed fields are selected within these regions, such that the
asteroseismic detectors contain a suitable sample
of relatively bright asteroseismic targets, up to a total of 10 targets,
with an adequate density of stars in the corresponding exo field.
It should be noted that while the observations in the latter are probably not
sufficiently sensitive to study solar-like oscillations in main-sequence
stars, they do provide very extensive data on other types of stellar
variability, including solar-like oscillations in red giants
(see Section~\ref{sec:redgiant}).
The photometric analysis is carried out on the satellite, with only
the photometric signal being transmitted
to the ground; a typical cadence for the astero field is 32\,s.
The mission suffered a loss of two of the four CCD detectors (one each
in the exo and astero fields) on 8 March 2009\footnote{Coincidentally
a day after the launch of the {\it Kepler} mission!}
but is otherwise performing flawlessly.
Early results on solar-like oscillations from CoRoT data were
presented by \citet{Michel2008b}.
The mission has now been extended until March 2013.
The {\it Kepler} mission \citep{Boruck2009, Koch2010}
was launched on 7 March 2009 into an Earth-trailing heliocentric
orbit, with a period of around 53 weeks.
This provides a very stable environment for the observations, in terms
of stray light and other disturbances, although with the drawback of
a gradually decreasing rate of data transmission with the increasing distance
from Earth.
Even so, it is hoped to keep the mission operating well beyond the current
nominal lifetime of $3\frac{1}{2}$ years.
The {\it Kepler} photometer consists of a Schmidt telescope with a corrector
diameter of 95\,cm and a $16^\circ$ diameter field of view.
The detector at the curved focal plane has 42 CCDs with a total
of 95 megapixels.
This provides a field of around 105 square degrees.
The data are downlinked in the form of small images around each of
the target stars, with the detailed photometry being carried out on the
ground.
This allows up to 170,000 targets to be observed at a 30-minute cadence,
while up to 512 stars can be observed at a 1-minute cadence;
the latter are the prime targets for asteroseismology, although for
more slowly varying stars the long-cadence data are also extremely valuable.
The spacecraft is rotated by $90^\circ$ four times per orbit to keep the solar
arrays pointed towards the Sun.
Thus the observations are naturally divided into quarters.
New target lists are uploaded for each quarter, although targets can
be observed for as little as one month each;
typically, most targets are in fact observed for several quarters,
in many cases throughout the mission.
For further details on the spacecraft and the operations,
see \citet{Koch2010}.
A detector module, corresponding to two of the 42 CCD detectors, failed in
January 2010.
Otherwise, the mission has been operating successfully, reaching very close
to the expected performance.
\citet{Boruck2010} provided an early overview of {\it Kepler}
results on exoplanets.
{\it Kepler} observes a fixed field in the region of the constellations of
Cygnus and Lyra,
centred $13.5^\circ$ above the Galactic plane and chosen to
secure a sufficient number of targets of the right type while avoiding
excessive problems with background confusion.
A very detailed characterization of the field was carried out before
launch, resulting in the Kepler Input Catalog \citep[KIC;][]{BrownT2011}.
To avoid problems with highly saturated trailing images, the field is
located such that the brightest stars are placed in gaps between the CCDs.
In addition, the CCDs are positioned such that a star is located
at approximately the same point on a CCD following each quarterly rotation.
{\it Kepler} asteroseismology \citep[e.g.,][]{Christ2008}
is carried out in the Kepler Asteroseismic Science Consortium (KASC),
which at the time of writing has around 450 members.
The members are organized into 13 working groups, generally dealing with
different classes of pulsating stars.
The KASC is responsible for proposing targets, and for the analysis and
publication of the results.
Data for the KASC are made available through the Kepler Asteroseismic Science
Operations Centre (KASOC) in Aarhus, Denmark, which also organizes
the sharing and discussion of manuscripts before publication.
The structure of the Kepler Asteroseismic Investigation (KAI) was presented
by \citet{Kjelds2010}.
In the early phases of the KAI a survey was made of a very large number of
stars, to characterize their oscillation properties and provide the basis
for selecting targets for more extended observations.
Initial results of this survey phase were discussed by \citet{Gillil2010}.
\section{Pulsating stars}
As a background for the discussion below of specific types of pulsating stars
we provide a brief overview of the properties of stellar pulsations.
For more detail, see \citet{Aerts2010}.
We restrict the discussion to slowly rotating stars and oscillations of
modest amplitude.
In this case the oscillations can, to leading order, be characterized
as small perturbations around a spherically symmetric equilibrium structure;
individual modes depend on co-latitude $\theta$ and longitude $\phi$ as
spherical harmonics $Y_l^m(\theta, \phi)$, where $l$ measures the total number
of nodal lines on the stellar surface
and $m$ the number of nodal lines crossing the equator, with $|m| \le l$.
Modes with $l = 0$ are typically described as {\it radial} oscillations.
For each $l, m$ the star has a set of modes distinguished by the radial order
$n$.
{}From a dynamic point of view there are two basic types of stellar mode
\footnote{In addition, modes corresponding to {\it surface gravity waves}
can be distinguished, but at degrees so far only relevant for spatially
resolved observations of the Sun.}:
acoustic modes (or p modes) where the restoring force is pressure,
and internal gravity waves (or g modes) where the restoring force is
buoyancy variations across spherical surfaces.
Thus g modes are only found for $l > 0$.
Being acoustic, the p modes have properties that predominantly depend
on the sound-speed variation within the star.
In many cases the result is that the frequencies $\nu$ approximately scale with
the inverse dynamical time scale, or
\begin{equation}
\nu \propto \left({G M \over R^3} \right)^{1/2}
\propto \langle \rho \rangle^{1/2} \; ,
\label{eq:nuscale}
\end{equation}
where $M$ and $R$ are the mass and radius of the star, $G$ is the gravitational
constant and $\langle \rho \rangle$ is the mean density.
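A familiar application of this scaling is the large frequency separation $\Delta\nu$ between acoustic modes of consecutive radial order; calibrated to the Sun it is commonly written as
\begin{equation}
\Delta\nu \simeq \Delta\nu_{\odot}
\left(\frac{M}{M_{\odot}}\right)^{1/2}
\left(\frac{R}{R_{\odot}}\right)^{-3/2} \; ,
\qquad \Delta\nu_{\odot} \approx 135\,\mu{\rm Hz} \; ,
\end{equation}
where the solar reference value is the customary calibration.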
The g-mode frequencies depend on the variation of the gravitational
acceleration and density gradient throughout the star,
the latter in turn being very sensitive to the variation of composition.
In unevolved stars typical g-mode frequencies are lower than the frequencies
of p modes.
However, as the star evolves, the core contracts, leading to a very high
local gravitational acceleration; in addition, strong composition gradients
are built up by the nuclear burning and possibly convective mixing.
In this case the local g-mode frequency may become large, and as a result the
modes can take on a mixed character, with p-mode behaviour in the outer
parts of the star and g-mode character in the deep interior.
We discuss examples of this in Sections \ref{sec:solar-like} and
\ref{sec:redgiant}.
An important measure of the properties of a mode of oscillation is its
normalized mode inertia
\begin{equation}
E = {\int_V \rho |\bolddelr|^2 \dd V \over M |\bolddelr_{\rm s}|^2} \; ,
\label{eq:inertia}
\end{equation}
where $\bolddelr$ is the displacement vector, $\bolddelr_{\rm s}$
the surface displacement, and the integral is over the volume $V$ of the star.
Acoustic modes have their largest amplitude in the outer layers of the star
and hence typically have relatively low inertias, whereas g modes are generally
confined in the deep interiors of stars, with correspondingly high inertia.
\begin{figure}[b]
\begin{center}
\includegraphics[width=3.5in]{jcd_mjt_fig_1.eps}
\caption{Schematic location of classes of pulsating stars in the
Hertzsprung-Russell diagram.
The diagonal dashed line marks the main sequence, where stars undergo
central hydrogen burning.
Evolution tracks following the main sequences for a few stellar masses
are shown by solid lines,
the triple-dot-dashed line indicates the location of horizontal-branch stars
with central helium burning, and the dotted line sketches the white dwarf
cooling track, after stars have stopped nuclear burning.
The hatching indicates the excitation mechanism:
slanted lines from lower right
to upper left for heat-engine excitation of p modes,
slanted lines from lower left
to upper right for heat-engine excitation of g modes,
and horizontal lines for stochastic excitation.
The two nearly vertical dashed lines mark the limits of the Cepheid
instability strip, where stars are generally excited by an opacity-driven
heat engine operating in the second helium ionization zone.
Stars discussed in the present paper are shown with bolder lines:
RR Lyrae stars (RR Lyr; Section~\ref{sec:rrlyr}),
massive main-sequence stars ($\beta$ Ceph; Section~\ref{sec:betaceph}),
long-period subdwarf B variables (sdBV; Section~\ref{sec:sdbv}),
solar-like pulsators (Section~\ref{sec:solar-like})
and red giants (RG; Section~\ref{sec:redgiant}).
}
\label{fig:pulshr}
\end{center}
\end{figure}
Energetically, two fundamentally different mechanisms may excite the
oscillations.
Some stars function as a heat engine where the relative phase of compression
and heating in a critical layer of the star is such that thermal energy
is converted into mechanical energy, contributing to driving of
the oscillations, and dominating over the other parts of the star
which have the opposite effect.
This is typically associated with opacity variations of specific
elements;
an important example is the effect of the second ionization of helium
\citep{Cox1958},
which causes instability in stars where the corresponding region in the star
is located at the appropriate depth beneath the stellar surface.
The driving leads to initially exponential growth of the oscillations,
with so far poorly understood mechanisms setting in to control the limiting
amplitudes of the modes.
In stars where the oscillations are not self-excited in this manner the modes
may be excited stochastically, through driving from other dynamical phenomena
in the star.
This is likely the case in the Sun where near-surface convection at nearly
sonic speed is a strong source of acoustic noise which excites the resonant
modes of the star \citep[e.g.,][]{Goldre1977, Houdek1999}.
In this case the oscillation amplitudes result from a balance between
the stochastic energy input and the damping of the modes.
Such excitation is expected in all stars with a
significant outer convection zone,
i.e., stars with effective temperature below around 7000\,K.
In principle, it excites all modes in a star;
however, typically only modes with low inertia
(cf.\ Eq. \ref{eq:inertia}), i.e., acoustic modes of rather high radial order,
are excited to sufficiently high amplitudes to be readily observable.
The stochastic excitation of acoustic oscillations leads to a characteristic
bell-shaped distribution of mode amplitudes
(see Fig.~\ref{fig:gemmaspec} below).
It has been found
\citep{Brown1991, Brown1994, Kjelds1995b, Beddin2003a, Stello2008}
that the frequency $\nu_{\rm max}$ at
maximum power scales as the acoustic cut-off frequency
\citep{Lamb1909}, leading to
\begin{equation}
\nu_{\rm max} \propto M R^{-2} T_{\rm eff}^{-1/2} \; ,
\label{eq:numax}
\end{equation}
where $T_{\rm eff}$ is the effective temperature.
This relation so far lacks a solid theoretical underpinning
\citep[see, however,][]{Belkac2011},
but it has proven to be very useful in characterizing stars observed to
have solar-like oscillations (see Section~\ref{sec:redgiant}).
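For practical estimates Eq.~(\ref{eq:numax}) is usually anchored to the Sun, with the customary reference values $\nu_{\rm max,\odot}\approx3050\,\mu{\rm Hz}$ and $T_{{\rm eff},\odot}=5777$\,K:
\begin{equation}
\nu_{\rm max} \simeq \nu_{\rm max,\odot}\,
\frac{M/M_{\odot}}{(R/R_{\odot})^{2}\,(T_{\rm eff}/T_{{\rm eff},\odot})^{1/2}} \; .
\end{equation}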
As illustrated in Fig.~\ref{fig:pulshr} pulsating stars are found throughout
the Hertzsprung-Russell diagram, in all phases of stellar evolution.
Thus there are excellent possibilities for learning about a broad range
of stars.
Most of these classes have been observed by CoRoT and {\it Kepler}.
In the following we discuss a few important examples.
\begin{figure}[b]
\begin{center}
\includegraphics[width=13.5cm]{jcd_mjt_fig_2.ps}
\caption{Lightcurves of RR Lyr.
The left panel shows the combined results of observations
with 6 ground-based telescopes \citep{Kolenb2006}, compared with the
first two quarters of {\it Kepler} data \citep{Kolenb2011}.
}
\label{fig:rrlyr}
\end{center}
\end{figure}
\section{RR Lyrae stars}
\label{sec:rrlyr}
The RR Lyrae stars are amongst the `classical' pulsating stars that have been
studied extensively from the ground, since the discovery in 1900 of
the variability of RR Lyr, the prototype of the class which
happens to be in the {\it Kepler} field
\citep[see][for more information on these stars]{Smith1995}.
They are low-mass stars with a low content of heavy elements, in the core
helium-burning phase of evolution.
As they have relatively well-defined luminosities they serve a very useful
purpose as distance indicators to nearby galaxies, such as the Magellanic
Clouds.
Their pulsations are excited by the heat-engine mechanism,
operating as a result of
opacity variations in the second helium ionization zone.
They oscillate predominantly in one or two low-order radial modes,
with amplitudes large enough to allow observations with even very modest
equipment.
What makes these stars interesting in the context of space asteroseismology
are the very interesting, and poorly understood, dynamical properties of
the oscillations in a substantial fraction of the class.
\citet{Blazko1907}%
\footnote{In the discovery paper the original Cyrillic name was
written as `Bla{\v z}ko'.
However, traditionally the name of the effect is now written as
`the Blazhko effect' with a slightly different transliteration;
we follow that tradition here.}
discovered in one member of the class
that the maximum amplitude varied cyclically with a period of 40.8\,d.
This phenomenon has since been found in a number of RR Lyrae stars,
including RR Lyr itself \citep{Kolenb2006}.
A centennial review, including a discussion of the so far questionable
attempts at explaining the effect, was provided by \citet{Kolenb2008}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=2in, angle=-90]{jcd_mjt_fig_3a.ps}
\includegraphics[width=2in, angle=-90]{jcd_mjt_fig_3b.ps}
\caption{Phase plots of RR Lyr at maximum and minimum amplitude,
from {\it Kepler} observations. Figure courtesy of R. Szab\'o.
}
\label{fig:rrphase}
\end{center}
\end{figure}
The continuity, duration and precision of space-based observations offer obvious
advantages in investigating such long-term phenomena.
This is illustrated in Fig.~\ref{fig:rrlyr} which compares results
of an extensive ground-based campaign on RR Lyr with {\it Kepler} observations.
The latter obviously provide a far better phase coverage throughout
the Blazhko cycle.
Phase plots at maximum and minimum amplitude in the cycle are
illustrated in Fig.~\ref{fig:rrphase}.
Similar results have been obtained by CoRoT \citep{Porett2010}.
An early survey of 28 RR Lyrae stars by {\it Kepler} \citep{Kolenb2010}
found the Blazhko effect in 40 \% of the stars, a rather higher fraction than
previously suspected.
One may hope that these vastly improved data will bring us closer
to an understanding of this enigmatic phenomenon.
As an interesting piece of evidence \citet{Szabo2010},
using {\it Kepler} data, found that
three Blazhko RR Lyrae stars, including RR Lyr itself, showed
period doubling in certain phases of the Blazhko cycle, with slight
variations in the maximum amplitude between alternating pulsation cycles.
Also, from CoRoT observations \citet{Chadid2011} investigated cycle-to-cycle
changes in the Blazhko modulation.
Such results evidently provide constraints on, and inspiration for,
the attempts at theoretical modelling of these stars.
\section{Massive main-sequence stars}
\label{sec:betaceph}
The pulsations in hot main-sequence stars, the so-called $\beta$ Cephei stars,
have been known for a century,
but the cause of the pulsations was only definitely identified around 1990,
when substantial revisions in opacity calculations produced opacities which
allowed excitation of the observed modes through the heat-engine mechanism
operating through the opacity from iron-group elements \citep{Moskal1992}.
This causes excitation of both p and g modes, with a tendency towards g modes
at lower effective temperature and hence mass, and a transition to the
slowly pulsating B stars (SPB stars) dominated by high-order g modes with
very long periods.
An excellent overview of the excitation of oscillations in the B stars
was provided by \citet{Pamyat1999}.
These massive stars are of very considerable astrophysical interest as
precursors for the core-collapse supernovae.
Their structure and evolution depend strongly on the properties of the
convective core, including additional mixing outside the convectively unstable
region caused by overshoot or `semiconvection',
as well as other dynamical phenomena associated, for example, with rotation.
Thus the potential for asteroseismic investigation is very valuable,
provided that adequate data can be obtained.
Particularly useful diagnostics can be obtained from observations of g modes.
High-order g modes are approximately uniformly spaced in period, with
a period spacing $\Delta \Pi$ given, to leading order, by
\begin{equation}
\Delta \Pi = {2 \pi^2 \over \sqrt{l(l+1)}}
\left( \int_{r_1}^{r_2} N {\dd r \over r} \right)^{-1} \;
\label{eq:gper}
\end{equation}
\citep{Tassou1980};
here $N$ is the buoyancy frequency and $[r_1, r_2]$ is the interval
where the modes are trapped, with $N^2 > 0$.
Assuming an ideal gas the buoyancy frequency is determined by
\begin{equation}
N^2 \simeq {g^2 \rho \over p} ( \nabla_{\rm ad} - \nabla + \nabla_\mu) \; ,
\label{eq:buoy}
\end{equation}
where $g$ is the local gravitational acceleration, $\rho$ is density and $p$
is pressure;
also, following the usual convention,
\begin{equation}
\nabla = {\dd \ln T \over \dd \ln p} \; , \qquad
\nabla_{\rm ad} =
\left({\partial \ln T \over \partial \ln p} \right)_{\rm ad} \; , \qquad
\nabla_\mu = {\dd \ln \mu \over \dd \ln p} \; ,
\end{equation}
where $T$ is temperature, $\mu$ is the mean molecular weight and
the derivative in $\nabla_{\rm ad}$ is taken corresponding to an adiabatic
change.
In a detailed analysis \citet{Miglio2008} pointed out the diagnostic
potential of {\it departures} from the uniform period spacing.
Owing to the presence of the term in $\nabla_\mu$ in Eq.~(\ref{eq:buoy})
the buoyancy frequency is very sensitive to the detailed composition profile,
such as may result outside a convective core.
The resulting sharp features in the buoyancy frequency
introduce perturbations to $\Delta \Pi$ with a characteristic
oscillatory behaviour, the details of which depend strongly on conditions
at the edge of the core.
\begin{figure}[b]
\begin{center}
\includegraphics[width=3in]{jcd_mjt_fig_4.eps}
\caption{Period spacings in the B3V star HD\,50230, from CoRoT observations.
The variations in $\Delta \Pi$, here fitted with a decaying sinusoid,
reflect the properties of the buoyancy frequency just outside the convective
core in the star.
Adapted from \citet{Degroo2010b}.}
\label{fig:betaceph}
\end{center}
\end{figure}
As for the RR Lyrae stars (Section~\ref{sec:rrlyr}) the oscillations
can readily be detected in ground-based observations.
The difficulty is to obtain adequate frequency resolution and precision,
given the long periods and generally fairly dense spectra.
Substantial successes have been achieved with coordinated multi-site
observations over several months \citep{Handle2004},
leading to interesting information about convective core overshoot
\citep{Aussel2004, Pamyat2004} and internal rotation
\citep{Aerts2003, Dziemb2008}.
However, it is evident that such massive campaigns can only be carried out
in very special cases, and even then they do not provide the full desired
data continuity or sensitivity to low-amplitude modes.
Observations from CoRoT and {\it Kepler} have the potential to secure very
long continuous observations of these stars \citep{Degroo2010a, Balona2011}.
A very interesting case was discussed by
\citet{Degroo2010b}, for the star HD\,50230.
This is a massive main-sequence star, of spectral type B3V, which was observed
by CoRoT for 137 days.
The resulting power spectrum showed a large number of g-mode frequencies,
with periods up to a few days, in addition to several high-frequency p modes.
In the long-period part of the spectrum the authors were able to identify a
group of eight modes with almost constant period spacing, arguing that such
a sequence is very unlikely to be found by chance.
As illustrated in Fig.~\ref{fig:betaceph} these period spacings showed a
highly regular variation with period, of precisely the form
predicted by \citet{Miglio2008} to result from a sharp feature in
the buoyancy frequency.
As pointed out by Miglio et al.\ the decrease in the amplitude with increasing
period is a sensitive diagnostic of the properties of the feature.
A more detailed interpretation of the results will require more stringent
characterization of other properties of the star, through `classical'
observations as well as more extensive asteroseismic analyses of the rich
spectrum.
However, the result clearly demonstrates the potential of such observations
for characterizing the properties of convective cores in massive main-sequence
stars.
\section{Subdwarf B stars}
\label{sec:sdbv}
The subdwarf B stars (sdB stars) are very hot core helium burning stars,
at the blue end of the horizontal branch \citep[for a review, see][]{Heber2009}.
The high effective temperature is the result of the stars having lost most
of their hydrogen envelope, through processes that are so far not fully
understood.
Pulsations were first observed in such stars by \citet{Kilken1997},
at very high frequencies.
That the stars might be unstable to acoustic modes was found in parallel,
and independently, by \citet{Charpi1996}.
The driving arises from the heat-engine mechanism operating on opacity
from the iron-group elements.
Subsequently \citet{Green2003} also observed long-period oscillations in
somewhat cooler sdB stars, corresponding to g modes of high order.
A detailed analysis of the excitation was carried out by \citet{Fontai2003}.
To be effective, the iron-group opacity must be enhanced through
enrichment of the elements in the critical region through radiative
levitation \citep[see also][]{Fontai2006};
owing to the high gravitational acceleration such processes of settling
and levitation are quite efficient in these stars.
\begin{figure}[b]
\begin{center}
\includegraphics[width=2in]{jcd_mjt_fig_5a_ed.eps}
\includegraphics[width=2in]{jcd_mjt_fig_5b_ed.eps}
\caption{The merit function $S^2$ (cf. Eq.~\ref{eq:sdbfit}) in a fit
of the observed {\it Kepler} periods for the star KPD~1943+4058 to a set
of models; the colour scale for $\log S^2$ is indicated to the right.
The left panel shows the fit in terms of the total mass of the star, in solar
units, and the logarithm of the fraction of the mass in the hydrogen-rich
envelope.
The right panel plots the merit as a function of the logarithm of the mass
fraction outside the mixed core and the combined abundance, by mass, of
carbon and oxygen in the core.
{}From \citet{VanGro2010a}.}
\label{fig:sdbchi}
\end{center}
\end{figure}
As in the previous cases discussed, major ground-based efforts have been
made to study these pulsations, involving coordinated observations between
several observatories over extended periods
\citep[e.g.,][]{Randal2006a, Randal2006b, Baran2009}.
The difficulties of such observations are particularly severe for the
g-mode pulsators, with periods of order hours and amplitudes of only a
few parts per thousand.
Yet these modes are particularly interesting as diagnostics of the
deep interiors of the stars.
Thus the sdB variables have been prime targets for asteroseismology
with CoRoT and {\it Kepler}.
The {\it Kepler} initial survey has included a substantial number of sdB stars
and so far led to the discovery of 14 pulsating stars,
all except one of which are long-period pulsators, with substantial numbers
of g modes of intermediate and high order
\citep{Ostens2010, Ostens2011, Baran2011}.
Thus the periods may be expected to satisfy the asymptotic relation
(\ref{eq:gper}).
\citet{Reed2011} showed that this is indeed the case and noted its importance
for mode identification.
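A minimal sketch of such a test for near-uniform period spacing follows, using synthetic periods built with an assumed spacing (the numbers are illustrative, not measurements of any of the observed stars): one checks that the consecutive period differences scatter little about their mean.

```python
# Sketch: test whether a set of g-mode periods (synthetic here) is consistent
# with a near-uniform period spacing, as expected from the asymptotic relation.
# The spacing and base period below are illustrative assumptions.

delta_pi = 9400.0  # assumed mean period spacing in seconds
periods = [20000.0 + n * delta_pi for n in range(8)]  # eight consecutive orders

spacings = [b - a for a, b in zip(periods, periods[1:])]
mean_spacing = sum(spacings) / len(spacings)
scatter = max(abs(s - mean_spacing) for s in spacings)

# For a genuinely uniform sequence the scatter is tiny compared with the mean.
print(mean_spacing, scatter)
```

For real data the spacings show small departures from the mean, and, as discussed in Section~\ref{sec:sdbv} above for main-sequence B stars, it is precisely those departures that carry the diagnostic information.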
As a specific example of the power of these data
we consider a detailed analysis of the star
KPD~1943+4058 based on the initial {\it Kepler} data.
Here \citet{Reed2010} detected 21 modes, with periods between 0.7 and 2.5\,h,
and in addition three modes in the p-mode region.
Similar observational results were obtained by \citet{VanGro2010a},
who in addition carried out a fit of the g-mode periods to models of the star.
The models were described by their total mass, the mass in the thin
envelope and an assumed fully mixed core, and the composition of the core,
characterized by the combined abundance of carbon and oxygen.
In addition, the models were constrained to be consistent with the
spectroscopically inferred effective temperature and surface gravity.
The fit to the observations was measured by a merit function $S^2$ defined by
\begin{equation}
S^2 = \sum_i (\Pi_i^{\rm (obs)} - \Pi_i^{\rm (mod)} )^2 \; ,
\label{eq:sdbfit}
\end{equation}
where $\Pi_i^{\rm (obs)}$ and $\Pi_i^{\rm (mod)}$ are the observed and
model periods, respectively, and the identification of the computed modes
with the observed modes is part of the fitting procedure.
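A minimal sketch of this merit function is given below, with a simple nearest-neighbour mode identification; the actual analyses explore the identification more carefully, and all periods here are hypothetical.

```python
# Sketch of the merit function of Eq. (sdbfit): S^2 = sum_i (P_obs - P_mod)^2,
# identifying each observed period with the closest model period.
# Greedy nearest-neighbour matching is only one simple choice of
# identification; all periods below are hypothetical.

def merit_s2(obs_periods, model_periods):
    s2 = 0.0
    for p_obs in obs_periods:
        # Identify the observed mode with the nearest model mode.
        p_mod = min(model_periods, key=lambda p: abs(p - p_obs))
        s2 += (p_obs - p_mod) ** 2
    return s2

obs = [2500.0, 4100.0, 5800.0]      # hypothetical observed periods (s)
model_a = [2510.0, 4090.0, 5805.0]  # model close to the observations
model_b = [2300.0, 4400.0, 6100.0]  # poorer model

print(merit_s2(obs, model_a), merit_s2(obs, model_b))
```

Minimizing $S^2$ over a grid of models then singles out the best-fitting parameters, as in Fig.~\ref{fig:sdbchi}.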
Some results are illustrated in Fig.~\ref{fig:sdbchi}.
The analysis provided very precise determinations of the properties of the
star, including strong evidence for a mixed core region significantly larger
than the expected convectively unstable region, indicating substantial core
overshoot.
It should be noted that the periods of the best-fitting
model did not agree with
the observed periods to within their observational uncertainty.
This evidently shows that further improvements of the modelling, beyond
the parameters included in the fit, are needed.
It seems likely that the number of observed periods is sufficient that a
formal inversion of the differences can be attempted, as has been
applied with great success in the solar case \citep[e.g.,][]{Gough1996}.
The results may provide a direct indication of the aspects of the models
where improvements are needed.
Similar analyses will be possible for the remaining stars for which
extensive g-mode data have been obtained with {\it Kepler}, and they will
evidently improve as the stars continue to be observed.
Also, an sdB star showing extensive g-mode pulsations has been observed
by the CoRoT mission \citep{Charpi2010}.
Model fitting to the resulting periods by \citet{VanGro2010b} yielded
results rather similar to those discussed above.
\section{Solar-like oscillations in main-sequence stars}
\label{sec:solar-like}
Solar-like oscillations are predominantly acoustic in nature and excited by
turbulent convection in the star's outer convective envelope. As already
noted, although this broadband excitation mechanism excites all modes in
principle, because of their low mode inertias it tends to be the high-order
p modes that are excited to observable amplitude. The first star in which such
oscillations were detected was of course the Sun, and the study of the Sun's
oscillations has led to the rich field of helioseismology, in which Juri Toomre
has played a leading role \citep[see, {\it e.g.},][]{Gough1991, Christ2002}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4.5in]{jcd_mjt_fig_6.eps}
\caption{
Location of stars observed by {\it Kepler} to show solar-like oscillations,
plotted as a function of effective temperature and the logarithm of the surface
gravity.
The size of the symbols measures the oscillation amplitude, while the colour
(in the electronic version)
indicates the apparent brightness of the stars, with red for the
brightest stars.
For comparison, evolution tracks at the indicated masses are also shown.
Adapted from \citet{Chapli2011} and \citet{Verner2011}.
Figure courtesy of C. Karoff.
}
\label{fig:solar-like}
\end{center}
\end{figure}
The asteroseismic investigation of solar-type stars has taken a major step
forward thanks to the {\it Kepler} mission \citep{Chapli2010}.
To date, {\it Kepler} has yielded
clear detections of oscillations in 500 solar-type stars \citep{Chapli2011}.
A plot of
the distribution in the HR diagram of {\it Kepler} stars with detected solar-like
oscillations is shown in Fig.~\ref{fig:solar-like}.
This represents an increase by more than a factor of 20 of the number of
known stars on and near the main sequence which show solar-like oscillations.
The high-order, low-degree p modes in solar-type stars
occupy resonant acoustic cavities that extend from the surface to
a region close to the stellar core.
Their cyclic frequencies $\nu_{nl}$ satisfy a simple expression:
\begin{equation}
\nu_{nl} \simeq \Delta \nu \left( n + {l \over 2} + \epsilon \right)
- d_{nl} \; .
\label{eq:pasymp}
\end{equation}
Here
\begin{equation}
\Delta \nu = \left(2 \int_0^R {\dd r \over c} \right)^{-1} \; ,
\label{eq:largesep}
\end{equation}
where $c$ is the adiabatic sound speed and the integral is over the
distance $r$ to the centre of the star;
also
\begin{equation}
d_{nl} = {l(l+1)\Delta\nu\over 4\pi^2\nu_{nl}}
\left[
{c(R)\over R} - \int_0^R{{\rm d}c\over{\rm d}r}{{\rm d}r\over r}
\right]
\end{equation}
\citep{Tassou1980, Gough1986}.
Accordingly, the frequencies of such modes of the same degree are separated by
{\it large separations}
\begin{equation}
\Delta \nu_{nl} = \nu_{n\,l} - \nu_{n-1\, l}
\approx \Delta\nu\;,
\label{eq:large_sep}
\end{equation}
while the small correction $d_{nl}$
gives rise to
{\it small separations}
\begin{equation}
\delta \nu_{nl} = \nu_{nl} - \nu_{n-1 \, l+2}
\approx
(4l+6){\Delta\nu\over 4\pi^2\nu_{nl}}
\left[
{c(R)\over R} - \int_0^R{{\rm d}c\over{\rm d}r}{{\rm d}r\over r}
\right]
\end{equation}
between the frequencies of modes that differ in degree by 2 and in order by 1.
Finally $\epsilon$ is a slowly varying function of frequency which
is predominantly determined by the properties of the near-surface region.
The quantity $\Delta\nu$ is a measure of the acoustic radius of the star. It
shares the scaling (Eq.~\ref{eq:nuscale}) of the frequencies
with the mean density, and hence so too do the large separations.
For main-sequence stars $d_{nl}$, and thus the small separations, are
mainly determined by the central regions of the star, being
sensitive in particular to the sound-speed gradient in the core,
and hence they provide
a measure of the star's
evolutionary state. Thus measuring the large and small
frequency separations gives a measure of the mean density and evolutionary
state of the star.
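These relations can be illustrated with synthetic frequencies generated from Eq.~(\ref{eq:pasymp}), here using the simple parametrization $d_{nl} = l(l+1)\,d_0$ with constant $d_0$; the numbers are roughly solar-like illustrative assumptions, not a fit to any star.

```python
# Sketch: synthetic p-mode frequencies from the asymptotic expression
# nu_{nl} ~ dnu*(n + l/2 + eps) - d_{nl}, with d_{nl} = l(l+1)*d0 (d0 constant),
# and the resulting large and small separations. Numbers are illustrative.

dnu = 135.0    # large separation, muHz (roughly solar)
eps = 1.4      # slowly varying surface phase, here taken constant
d0 = 1.5       # muHz; sets the scale of the small separations

def nu(n, l):
    return dnu * (n + l / 2.0 + eps) - l * (l + 1) * d0

# Large separation: consecutive radial orders of the same degree.
large = nu(20, 0) - nu(19, 0)
# Small separation: an l = 0 mode and the l = 2 mode of one lower order.
small_02 = nu(20, 0) - nu(19, 2)

print(large, small_02)   # large == dnu; small_02 == (4*0 + 6)*d0
```

With this parametrization the $l=0$--$2$ small separation is exactly $6\,d_0$, matching the $(4l+6)$ factor in the expression for $\delta\nu_{nl}$ above.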
A useful seismic diagnostic is the
asteroseismic HR diagram, in which the star's average large separation is plotted
against its average small separation: this is illustrated in
Fig.~\ref{fig:asteroHR}.
For main-sequence stars, the asteroseismic HR diagram allows the mass and
age of the star to be estimated, assuming that other physical inputs (such as
initial chemical composition and the convective mixing-length parameter) are
known.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4in]{jcd_mjt_fig_7.eps}
\caption{Asteroseismic HR diagram for a homogeneous set of
stellar models
of different masses and ages. The solid lines show how a star of given mass
evolves in terms of its large and small frequency separations. The dashed
lines connect stars which have aged by the same fraction of their total
main-sequence lifetime.
}
\label{fig:asteroHR}
\end{center}
\end{figure}
The existence of the large separation also motivates another diagnostic, which
is to plot the frequencies of the star in a so-called {\it \'echelle diagram}.
Here the frequencies $\nu_{nl}$ are reduced modulo $\Delta\nu$ and
$\bar\nu_{nl} \equiv \nu_{nl} \bmod \Delta\nu$ is
plotted on the x-axis while $\nu_{nl}-\bar\nu_{nl}$ is plotted on the y-axis.
If the spacing of frequencies according to asymptotic expression
(\ref{eq:pasymp}) were exact, the modes of like degree would be aligned as
nearly vertical lines in the \'echelle diagram, with the lines corresponding to
$l=0,2$ being separated from one another by the small separation and
the line corresponding to $l=1$ being offset from those by an
amount corresponding to half the large separation. Deviations from such a
simple picture reveal deviations from the simple asymptotic relation and
contain physically interesting information about the star. An example of an
\'echelle diagram for the star KIC~11026764 is shown in Fig.~\ref{fig:gemmaechl}.
The ridges corresponding to $l=0,2$ are evident.
The ridge corresponding to $l=1$
is more irregular: this is due to avoided crossings in this relatively
evolved star, an issue which is discussed further below.
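The construction of the \'echelle diagram can be sketched as follows, using synthetic $l=0$ frequencies; for an exactly asymptotic sequence the reduced frequencies collapse onto a single vertical ridge, and real stars show the diagnostically interesting departures from that behaviour.

```python
# Sketch: echelle-diagram coordinates. Each frequency is reduced modulo the
# large separation dnu; modes of like degree then line up in nearly vertical
# ridges. The frequencies below are synthetic, built from the asymptotic
# relation with illustrative, roughly solar-like numbers.

dnu = 135.0                                         # large separation, muHz
freqs = [dnu * (n + 1.4) for n in range(15, 25)]    # synthetic l = 0 modes

# (x, y) = (nu mod dnu, nu - nu mod dnu) for each mode.
echelle = [(f % dnu, f - f % dnu) for f in freqs]

xs = [x for x, _ in echelle]
# A perfectly asymptotic sequence collapses onto one vertical line:
print(max(xs) - min(xs))
```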
\begin{figure}[b]
\begin{center}
\includegraphics[width=3in]{jcd_mjt_fig_8.eps}
\caption{An \'echelle diagram for the solar-type star KIC~11026764
(Gemma) indicating both the observed frequencies (filled symbols) and
those of a stellar model (open symbols).
Modes of degree $l=0, 1, 2$ are denoted respectively by circles, triangles
and squares. From \citet{Metcal2010}.}
\label{fig:gemmaechl}
\end{center}
\end{figure}
\begin{figure}[b]
\begin{center}
\includegraphics[width=4.5in]{jcd_mjt_fig_9.eps}
\caption{Observed power spectrum of the star KIC~11026764, known as
Gemma, based on early {\it Kepler} observations.
The solid line shows the smoothed spectrum, separated into the oscillation
signal (dash-dotted line) and background components from granulation and
faculae (dashed and dotted lines, respectively).
Figure, adapted from \citet{Chapli2010}, courtesy of C. Karoff.
}
\label{fig:gemmaspec}
\end{center}
\end{figure}
Even without a measurement of the small separations, it is possible to
make useful seismic estimates of stellar masses and radii using observational
measures of $\Delta \nu$ and of the frequency $\nu_{\rm max}$ of maximum
mode amplitude. From Eq.~(\ref{eq:largesep}) the former scales as
$(M/R^3)^{1/2}$, whereas by
Eq.~(\ref{eq:numax}) the latter scales essentially as $M/R^2$: hence a measurement of these
two yields an estimate of both $M$ and $R$. This has been applied to an
ensemble of 500
stars in the {\it Kepler} field by \citet{Chapli2011}.
This paper by Working Group 1 (WG1) of the KASC
concludes that while the
estimated radii of the 500 stars
are similar to those expected from stellar population models,
the observed distribution of masses is
wider at its peak than the model distribution, and is offset towards slightly
lower masses.
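A sketch of this inversion is shown below, retaining the weak effective-temperature dependence usually included in the $\nu_{\rm max}$ relation; the solar reference values are assumptions of roughly the commonly used magnitudes, not the calibration adopted by any particular paper.

```python
# Sketch: stellar mass and radius from the direct scaling relations,
#   dnu     ~ (M/R^3)^(1/2),
#   nu_max  ~ M R^-2 T_eff^(-1/2),
# normalized to assumed solar reference values (illustrative calibration).

DNU_SUN = 135.0      # muHz, assumed solar large separation
NUMAX_SUN = 3050.0   # muHz, assumed solar nu_max
TEFF_SUN = 5777.0    # K

def mass_radius(dnu, numax, teff):
    """Return (M, R) in solar units from the inverted scaling relations."""
    r = (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (numax / NUMAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

# Self-consistency check: solar inputs must return one solar mass and radius.
m, r = mass_radius(135.0, 3050.0, 5777.0)
print(m, r)   # -> 1.0 1.0
```

Because $M$ enters with the third power of the $\nu_{\rm max}$ ratio, mass estimates from this direct method are considerably more sensitive to the observational errors than the radius estimates.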
\citet{Chapli2010} published observations of three bright solar-type stars,
which were monitored during the first month of {\it Kepler}
science operations. This paper was the first
to establish the asteroseismic potential
of {\it Kepler} observations of solar-type stars:
about 20 modes were distinguished
in each star, and the frequencies and frequency separations allowed radii,
masses and ages of the stars to be estimated. The three stars that were
the objects of the study, KIC~6603624, KIC~3656476 and KIC~11026764, were
given the working names Saxo, Java and
Gemma\footnote{The working group allocated the names of pet cats to the stars
that have been the early objects of study: this idiosyncrasy is due to the
WG1 lead, Bill Chaplin.}
by the WG1 members. One of these stars, Gemma, was revealed to have evolved
off the main sequence
and proved more challenging to model and constrain asteroseismologically:
this interesting case is discussed further now.
Gemma is one of the best-studied solar-like stars to be investigated
thus far with {\it Kepler} data.
The observed power spectrum, as obtained by \citet{Chapli2010},
is shown in Fig.~\ref{fig:gemmaspec}.
The analysis of the frequencies of the star was the subject of the paper by
\citet{Metcal2010}. Gemma is 10--20 per cent more massive than the Sun and also
somewhat older. The core of Gemma is therefore more chemically evolved than is
the core of the Sun;
the models indicate that the star has a small compact helium core,
surrounded by a hydrogen-burning shell.
This leads to interesting behaviour of the
star's frequencies which provides a powerful diagnostic of the star's
evolutionary state. As a solar-like star evolves at the end of its main-sequence
life, it continues to grow in radius while forming a strong gradient in
mean-molecular weight at the edge of its core.
Also, the core contracts, increasing the central condensation and
the gravitational acceleration in the deep interior of the star.
These effects in turn cause a strong peak to form in the buoyancy frequency
(cf.\ Eq.~\ref{eq:buoy}),
which supports g modes at frequencies representative of the
stochastically excited solar-like oscillations.
Thus at such frequencies the star has two resonant cavities supporting
nonradial modes:
one in the deep envelope of the star where the modes behave like p modes,
increasingly confined to the outer regions of the star with increasing degree,
and one in the core where the modes behave like g modes.
These two regions are separated by an intermediate region where the modes
are evanescent.
With increasing age the star undergoes an overall expansion which
causes the frequencies of the p modes to decrease, while the increase in
the central condensation and hence the
buoyancy frequency causes the frequencies of g modes to increase. Although the
g modes are not in general
themselves observable directly, at times in the star's
evolution the frequencies of a g mode and a p mode get sufficiently close
for a strong coupling between the two modes to be possible, giving rise to a
so-called mixed mode. This evolution of the frequency spectrum is illustrated
in Fig.~\ref{fig:gemmaevol} for a representative stellar model of Gemma.
This shows how the radial ($l=0$) and $l=1$ p-mode frequencies change as
a function of age of the star. By their nature, g modes are nonradial, and
hence the radial modes cannot couple to them: thus the overall expansion of the
star simply causes the $l=0$ frequencies to decrease monotonically with
increasing age. The $l=1$ frequencies also tend to decrease with age; but
occasionally a given $l=1$ mode approaches the frequency of an $l=1$ g mode:
at that point, a strong coupling between the two modes occurs if the
evanescent region between their two resonant cavities is not too large.
The frequencies of the
two modes
never actually cross: instead, the modes
undergo an avoided crossing, which results in the observable mode increasing
in frequency as the star evolves, for the duration of the strong mode
coupling.
The evolving frequencies of the physical g~modes can
be discerned
in Fig.~\ref{fig:gemmaevol}
as the loci of the avoided crossings that take place for the $l=1$ p modes.
The frequency spectrum of $l=0$ and $l=1$ modes at any particular stellar age
can be read off
Fig.~\ref{fig:gemmaevol} by taking a vertical cut through the ridges: an
example at an age of about 5.98\,Gyr is indicated. It is evident that the
$l=0$ modes will be essentially evenly spaced in frequency, consistent
with the asymptotic expression (Eq.~\ref{eq:pasymp}), whereas the
series of avoided crossings will cause the $l=1$ mode frequencies to be
nonuniformly spaced.
Moreover, the location in frequency space where avoided crossings are
``caught in the act'' is strongly dependent on the age of the star and so
can enable the age of the star to be determined rather precisely.
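The essence of an avoided crossing can be captured by a toy two-level model: a p-mode frequency that decreases with age, a g-mode frequency that increases, and a fixed coupling strength $\alpha$; the eigenfrequencies of the coupled system never cross and approach no closer than $2\alpha$. All numbers here are illustrative.

```python
# Toy model of an avoided crossing: two levels (a p mode whose frequency
# decreases with an age-like parameter and a g mode whose frequency increases)
# coupled with fixed strength alpha. The eigenvalues of the 2x2 symmetric
# matrix [[nu_p, alpha], [alpha, nu_g]] never cross; the minimum separation
# of the two branches is 2*alpha. All numbers are illustrative.
import math

def eigenfrequencies(nu_p, nu_g, alpha):
    """Eigenvalues of [[nu_p, alpha], [alpha, nu_g]]."""
    mean = 0.5 * (nu_p + nu_g)
    split = math.hypot(0.5 * (nu_p - nu_g), alpha)
    return mean - split, mean + split

alpha = 2.0                           # coupling strength (muHz), hypothetical
ages = [i / 10.0 for i in range(21)]  # arbitrary evolution parameter
gaps = []
for t in ages:
    nu_p = 1000.0 - 20.0 * t          # p-mode frequency drifts down
    nu_g = 960.0 + 20.0 * t           # g-mode frequency drifts up
    lo, hi = eigenfrequencies(nu_p, nu_g, alpha)
    gaps.append(hi - lo)

print(min(gaps))   # closest approach: 2*alpha = 4.0
```

Tracking either eigenfrequency branch through the closest approach reproduces the temporary frequency increase of the observable mode seen in Fig.~\ref{fig:gemmaevol}.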
This behaviour is consistent with measured frequencies of Gemma, illustrated
in the \'echelle diagram in
Fig.~\ref{fig:gemmaechl}. The $l=0$ and $l=2$ frequencies are approximately
uniformly spaced, whereas the $l=1$ frequencies are more irregularly
spaced, consistent with avoided crossings.
(The $l=2$ and higher-degree p modes are affected much less
by coupling to the g modes than are the $l=1$ modes,
because their lower turning points are further from the core and
hence the evanescent region between the resonant cavities of the p and g modes
is wider.)
\citet{Metcal2010} modelled Gemma, fitting to the individual measured
frequencies and hence exploiting in particular the non-uniform distribution
of the $l=1$ modes. The frequencies of one of their resulting models are
shown in the \'echelle diagram. They found two families of solutions,
one with stellar masses around $1.1\,M_\odot$ and the other with stellar
masses around $1.2\,M_\odot$,
that fitted the observed frequencies equally well. Notwithstanding this
10\% ambiguity in mass, the radius and age of the star were determined
with a precision of about 1\%, and an estimated accuracy of about 2\% for the
radius and about 15\% for the age. The mass ambiguity would be resolved if
the range of measured frequencies could be extended to higher frequencies,
at which point the model frequencies not only of the $l=1$ modes but also
of the $l=0,2$ modes diverge between the two families of models.
\begin{figure}[b]
\begin{center}
\includegraphics[width=12cm]{jcd_mjt_fig_10.eps}
\caption{Evolution of the $l=0$ (dotted) and $l=1$ (solid) mode frequencies
as a function of age for a representative stellar model of
KIC~11026764 (Gemma). The vertical line indicates the age of 5.77\,Gyr of
one good-fitting model to Gemma's observed frequencies.
{}From \citet{Metcal2010}.
\label{fig:gemmaevol}}
\end{center}
\end{figure}
\begin{figure}[b]
\begin{center}
\includegraphics[width=8cm]{jcd_mjt_fig_11.eps}
\caption{ Panel a) shows a blow-up of Fig.~\ref{fig:gemmaevol} around
the model illustrated in Fig.~\ref{fig:gemmaechl}.
Panel b) shows mode inertias for two of the $l = 1$ modes
(solid lines, identified in both panels by triangles and squares, respectively),
and the intervening radial mode (dashed line).
The increase in the inertia, relative to the radial mode,
is an indication of a predominant g-mode character of the modes.
\label{fig:gemmainertia}}
\end{center}
\end{figure}
The properties of the modes in the vicinity of the avoided crossings are
illustrated in more detail in terms of the mode inertia
(cf.\ Eq.~\ref{eq:inertia}) in Fig.~\ref{fig:gemmainertia}.
When the dipolar modes behave predominantly as acoustic modes,
with frequencies decreasing with increasing age, their inertia is
close to that of the neighbouring radial mode.
It increases when a mode behaves predominantly as a g mode;
at the point of closest approach in an avoided crossing the two modes
have the same inertia, intermediate between the g-mode and the p-mode
behaviour.
As discussed below for red giants, the inertia has an important influence
on the mode visibility.
For the dipolar modes in the Gemma models the contrast between
the p- and g-mode behaviour is modest and the modes are readily observable,
even when they are most g-mode-like.
On the other hand, for modes of degree $l = 2$ and higher the intermediate
evanescent region is substantially broader and the distinction between the
p- and g-mode behaviour correspondingly stronger.
Thus mixed modes are less likely to be observed at these higher degrees,
as confirmed by the interpretation of the observed spectrum.
Apart from the intrinsic interest in the asteroseismic studies of solar-like
stars, these studies provide an important possibility for characterizing
stars that host extra-solar planetary systems.
When a planet is detected using the transit technique the variation
in the detected stellar brightness depends on the ratio between the diameters
of the planet and the central star.
A reliable determination of the planet radius, of great importance to
the characterization of the nature of the planet, therefore depends on
determining the stellar radius.
As discussed above this can be provided by the asteroseismic analysis
of the oscillations of the star;
indeed, the target stars for the {\it Kepler} search for extra-solar
planets are generally in a range of parameters where solar-like oscillations
are expected, although most stars are too faint for reliable observations
to be possible.
However, the potential of the technique was demonstrated by
\citet{Christ2010} who analysed {\it Kepler} asteroseismic data for
a known exoplanet host.
More recently, asteroseismic analysis was used in the characterization
of the first rocky planet detected by {\it Kepler} \citep{Batalh2011}.
We finally note that the frequencies of solar-like oscillation are sensitive
to stellar magnetic activity.
In the solar case this has been studied extensively
\citep[e.g.,][]{Woodar1985, Libbre1990}.
Some evidence for such variation was found by \citet{Metcal2007} for the
star $\beta$ Hyi.
As discussed by \citet{Karoff2009} the long observing sequences possible
with CoRoT and in particular
{\it Kepler} provide a rich possibility for detecting similar effects
in other stars.
In fact, in a solar-like star observed by CoRoT
\citet{Garcia2010} detected variations in oscillation frequencies
and amplitudes which appeared to be the result of a rather short
stellar activity cycle.
\section{Red giants}
\label{sec:redgiant}
Assuming that the solar oscillations are excited stochastically by convection
\citep{Goldre1977} one would expect that all stars with vigorous outer
convection zones exhibit such oscillations.
A rough estimate of the amplitudes \citep{Christ1983} suggested that the
amplitude increases with increasing luminosity.
Thus red giants are obvious targets for the search for, and investigation of,
solar-like oscillations.
The first definite detection of individual modes in a red
giant was obtained by \citet{Frands2002} in the star $\xi$ Hya.
The frequency spectrum showed very clearly a series of uniformly spaced
peaks with a separation $\Delta \nu \simeq 7 \muHz$.
Simple modelling, given also the location in the HR diagram, strongly
suggested that only radial modes had been observed;
the alternative identification, of alternating $l = 0$ and $l = 1$ modes,
would correspond to a true large separation twice as big and hence
a radius smaller by roughly a factor 1.3, entirely inconsistent with the
observed luminosity and effective temperature.
Evidence for solar-like oscillations in giants had also been obtained from
more statistical analyses.
\citet{Christ2001} noted that the relation between the standard deviation
and mean of the amplitude variations in the so-called semi-regular variables,
based on visual observations carried out by the American Association of
Variable Star Observers (AAVSO), was consistent with the expectations for
stochastically excited oscillations.
The solar-like nature of the oscillations of selected semi-regular
variables was confirmed by
\citet{Beddin2003b} through analysis of their power spectra.
Also, \citet{Kiss2003, Kiss2004} analysed large sets of
OGLE\footnote{Optical Gravitational Lensing Experiment}
observations of red giants,
obtaining clear indications of several modes of oscillation which can
plausibly be identified as solar-like.
Detailed analyses of the OGLE data were carried out by
\citet{Soszyn2007} and \citet{Dziemb2010}, confirming the solar-like
nature of the observed oscillations.
These investigations extend the realm of solar-like oscillations to stars with
a luminosity of up to $10\,000\, L_\odot$ and periods of several months.
The red-giant phase \citep[see][for a review]{Salari2002}
follows after the phase exemplified by Gemma, discussed
in Section~\ref{sec:solar-like} above.
The stars ascend the Hayashi region at almost constant effective temperature
and strongly increasing radius and luminosity,
with a very compact helium core and an extended, mostly convective,
envelope.
The energy production takes place through hydrogen fusion in a thin shell
around the helium core.
The tip of the red-giant branch is defined by the ignition of helium near the
centre.
Stars with central helium fusion are located in the so-called `red clump'
in the HR diagram (see Fig.~\ref{fig:huber_hr});
even for these, however, most of the energy is produced by the hydrogen shell.
The strongly centralized helium fusion gives rise to a small convective core,
although the bulk of the helium core remains radiative.
In both the ascending red giant and the clump phase the small extent of the
core gives rise to a very high gravitational acceleration and hence
buoyancy frequency, further amplified by the presence of strong
composition gradients (cf.\ Eq. \ref{eq:buoy}).
Thus all nonradial modes have the character of high-order g modes in the core.
The resulting mixed nature of the modes, and the high density of modes
of predominantly g-mode character, are illustrated in Fig.~\ref{fig:inertia}
which shows the mode inertia $E$ (cf.\ Eq.~\ref{eq:inertia}) for a
typical model on the ascending red-giant branch.
Most of the modes with $l = 1$ and $2$ clearly have much higher inertias than
the radial modes, and hence are predominantly g modes.
However, there are resonances where the modes are largely trapped in
the outer acoustic cavity.
These p-dominated modes have inertias close to the inertia of the radial
modes, reflecting their small amplitudes in the core.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4in]{jcd_mjt_fig_12.eps}
\caption{Mode inertias (cf.\ Eq.~\ref{eq:inertia}) against cyclic
frequency in a red-giant model of mass $1.4 \, M_\odot$ and
radius $5 R_\odot$.
Modes of degree $l = 0$ (circles connected by a solid line),
$l = 1$ (triangles connected by a dashed line) and
$l = 2$ (squares connected by a dot-dashed line)
are illustrated.}
\label{fig:inertia}
\end{center}
\end{figure}
\citet{Dziemb2001} considered the excitation and damping of modes
in red giants.
They found that the very high radial order of the nonradial modes in the core
led to substantial radiative damping, even for modes that were predominantly
acoustic.
On this basis \citet{Christ2004} concluded that nonradial oscillations
were unlikely to be observed in red giants.
(This would be consistent with the results of \citet{Frands2002} on
$\xi$ Hya where apparently only radial modes were found.)
Fortunately this conclusion was wrong: nonradial modes are indeed observed
in red giants and provide fascinating diagnostics of their interiors.
A first indication of the presence of nonradial oscillations in red giants
came from line-profile observations by \citet{Hekker2006}.
However, the major breakthrough came with the observation of a substantial
number of red giants by CoRoT, as presented by \citet{DeRidd2009}.
A selection of the resulting power spectra are shown in Fig.~\ref{fig:corotrg}.
The presence of solar-like oscillations is obvious, the peaks shifting to lower
frequency with increasing radius (cf.\ Eq.~\ref{eq:numax}).
Also, \citet{DeRidd2009} showed an example of an \'echelle diagram
which beyond any doubt identified modes of degree 0, 1 and 2.
\begin{figure}[b]
\begin{center}
\includegraphics[width=2in]{jcd_mjt_fig_13.eps}
\caption{Power spectra of solar-like oscillations in red giants,
from five months of observations with CoRoT.
The stars are identified by their CoRoT identification number,
with radius increasing towards the top.
{}From \citet{DeRidd2009}.}
\label{fig:corotrg}
\end{center}
\end{figure}
The potential visibility of nonradial modes in red giants
was made very much clearer by \citet{Dupret2009},
following an analysis by \citet{Chapli2005}.
The observational visibility of a mode is determined by the peak
height $H$ in the power spectrum.
This is related to the total observed power $P$ of the mode
by $P \propto H \Delta$, where $\Delta$ is the width of the peak.
If the mode is observed for much longer than the natural damping time,
the width is given by the damping rate, i.e., the imaginary part
$|\omega_{\rm i}|$ of the frequency.
If the damping is dominated by the near-surface layers, as is often the
case, at a given frequency $\omega_{\rm i}$ is related to the mode inertia $E$
by
\begin{equation}
\omega_{\rm i} \propto E^{-1} \; .
\label{eq:omegai}
\end{equation}
Thus those modes that are predominantly g modes, with high inertia
(cf. Fig.~\ref{fig:inertia}) have much smaller widths than
the p-dominated modes.
The power in the mode is determined by a balance between the energy
input from the stochastic noise near the stellar surface and the damping.
Assuming again Eq.~(\ref{eq:omegai}) the outcome is that $P \propto E^{-1}$
at fixed frequency.
It follows that the peak height $H$ is independent of $E$ at a given frequency
and hence that the g-dominated modes should be observed at the same height
as the p-dominated modes.
This, however, assumes that the duration ${\cal T}$ of the observation is
much longer than the lifetime $|\omega_{\rm i}|^{-1}$ of the mode.
If this is not the case, the peaks are broader and the height consequently
smaller.
As an approximate scaling of this dependence \citet{Fletch2006}
proposed
\begin{equation}
H \propto {P \over |\omega_{\rm i}| + 2/{\cal T}} \; ;
\label{eq:height}
\end{equation}
for ${\cal T} \ll |\omega_{\rm i}|^{-1}$,
in particular,
$H \propto P \propto E^{-1}$
and the g-dominated modes are essentially invisible.
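As a numerical sketch of this behaviour — assuming, purely for illustration, $P \propto E^{-1}$ and $|\omega_{\rm i}| \propto E^{-1}$ at fixed frequency with unit proportionality constants (arbitrary units, not fitted to any star):

```python
def peak_height(E, T, c_damp=1.0, c_power=1.0):
    """Schematic peak height H = P / (|omega_i| + 2/T), with
    P ~ 1/E and |omega_i| ~ 1/E at fixed frequency (arbitrary units)."""
    omega_i = c_damp / E   # damping rate, inversely proportional to inertia
    power = c_power / E    # stochastically excited mode power
    return power / (omega_i + 2.0 / T)

# Long observations (T much longer than the lifetimes): the height is
# essentially independent of inertia, so g-dominated (high-E) and
# p-dominated (low-E) modes reach similar heights.
h_p_long = peak_height(E=1.0, T=1e6)
h_g_long = peak_height(E=1e3, T=1e6)

# Short observations (T much shorter than the high-E mode lifetime):
# H ~ P*T/2 ~ 1/E, and the g-dominated modes are strongly suppressed.
h_p_short = peak_height(E=1.0, T=1.0)
h_g_short = peak_height(E=1e3, T=1.0)
```

The two limits reproduce the argument in the text: the same expression interpolates between equal heights for long runs and $E^{-1}$-suppressed heights for short runs.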
As is clear from Fig.~\ref{fig:inertia} the g-dominated modes in practice often
have inertias much higher than the p-dominated modes and hence
correspondingly longer lifetimes;
thus they may be expected to have small observed peak heights,
unless observations of very long duration are analysed.
However, modes of mixed character, particularly those with $l = 1$,
may have damping times comparable with or shorter than the very long
observations made available by CoRoT and {\it Kepler} and hence may be visible.
Concerning $\xi$ Hya,
the apparent absence of nonradial modes in the observations
was probably caused by the
relatively short observing run of around one month,
compared with the five-month observations by \citet{DeRidd2009}.
Even the most p-mode-like dipolar modes have somewhat higher mode inertias,
and hence longer lifetimes, than the radial modes; thus the peak height of these modes
was likely suppressed in the observations by \citet{Frands2002}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4in]{jcd_mjt_fig_14_ed.eps}
\caption{Stacked power spectra of red giants observed with {\it Kepler},
with the frequency of maximum power decreasing towards the top.
The numbers at the right edge provide an estimate of the corresponding
ratio between luminosity and mass, in solar units.
Figure courtesy of T. Kallinger.
}
\label{fig:stackspec}
\end{center}
\end{figure}
Very extensive results on red-giant oscillations have been obtained by
CoRoT and {\it Kepler}
\citep[e.g.,][]{Hekker2009, Beddin2010, Mosser2010, Kallin2010a, Kallin2010b,
Stello2010, Hekker2011}.
These confirm the acoustic nature of the observed spectra,
with a clear detection of modes of degree 0, 1 and 2.
This is illustrated in Fig.~\ref{fig:stackspec}
\citep[see also][]{Gillil2010},
for a large sample of stars
observed with {\it Kepler} during the first 16 months of the mission;
the observations are shown as stacked spectra ordered according
to decreasing large
frequency separation $\Delta \nu$ and hence increasing radius and luminosity.
This is characterized at the right-hand edge of the figure
by the ratio between luminosity and mass,
estimated from the oscillation parameters (see below).
Stellar `noise' from granulation is evident in the low-frequency region.
The frequencies approximately satisfy an asymptotic relation similar to
Eq.~(\ref{eq:pasymp}),
with a closely spaced pair of bands of $l = 0, 2$ modes and an intermediate
band of $l = 1$ modes.
However, since the acoustic propagation region is generally confined to
the convection zone or the region just beneath it,
the small separation between $l = 0$ and 2 is not directly related to the
properties of the stellar core, let alone the age of the star, unlike
the situation on the main sequence.
\citet{Montal2010a} carried out an extensive analysis of the overall properties
of the oscillation frequencies for a large sample of red-giant models.
They noted that the outer convective zone in the red-clump phase is not quite as
deep as for the stars ascending the red-giant branch;
the extent of the acoustic propagation region beyond the convective
envelope was found to have a
potentially measurable effect on the small frequency separations.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4in]{jcd_mjt_fig_15.eps}
\caption{`Hertzsprung-Russell diagram' of red giants observed with
{\it Kepler}, using the frequency $\nu_{\rm max}$ at maximum power
as a proxy for luminosity.
The curves show evolution tracks for models at the indicated ages.
Figure courtesy of D. Huber \citep[see][]{Huber2010}.}
\label{fig:huber_hr}
\end{center}
\end{figure}
The mean large frequency separation $\Delta \nu$ and the frequency
$\nu_{\rm max}$ at maximum power satisfy the scaling relations
(\ref{eq:nuscale}) and (\ref{eq:numax}).
Thus the stellar properties can be characterized by these quantities.
This is used in Fig.~\ref{fig:huber_hr} to plot a `Hertzsprung-Russell'
diagram of red giants observed with {\it Kepler}, replacing the
luminosity by $\nu_{\rm max}$ as a measure of radius and hence luminosity.
The observations are compared with evolution tracks for a range of masses,
using scaling from the solar value of $\nu_{\rm max}$.
The distribution of stars clearly shows the higher density in the region
of the helium-burning red clump.
The scaling relations can also be used to determine the stellar parameters
from the observed $\Delta \nu$, $\nu_{\rm max}$ and $T_{\rm eff}$
\citep[e.g.][]{Kallin2010a}.
This provides a unique possibility for population studies of red giants
\citep[e.g.,][]{Miglio2009, Kallin2010b, Mosser2010, Hekker2011}.
The CoRoT results are particularly interesting in this regard,
given that they allow a comparison of the populations in the centre and
anti-centre directions of the Galaxy (Miglio et al., in preparation)
and hence provide information about the evolution and dynamics of the
Galaxy.
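The determination of mass and radius from $\Delta \nu$, $\nu_{\rm max}$ and $T_{\rm eff}$ can be sketched as follows; the power-law forms and the solar reference values ($\nu_{\rm max,\odot} \simeq 3090\,\mu{\rm Hz}$, $\Delta\nu_\odot \simeq 135.1\,\mu{\rm Hz}$, $T_{\rm eff,\odot} \simeq 5777$\,K) are the commonly adopted ones, assumed here for illustration rather than taken from the works cited above:

```python
# Solar reference values (assumed; commonly used in the literature)
NUMAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1      # muHz
TEFF_SUN = 5777.0    # K

def seismic_mass_radius(numax, dnu, teff):
    """Mass and radius (solar units) from the standard scaling relations:
    M ~ numax^3 dnu^-4 Teff^(3/2),  R ~ numax dnu^-2 Teff^(1/2)."""
    x, y, t = numax / NUMAX_SUN, dnu / DNU_SUN, teff / TEFF_SUN
    mass = x**3 * y**-4 * t**1.5
    radius = x * y**-2 * t**0.5
    return mass, radius

# A typical red giant: numax ~ 30 muHz, dnu ~ 4 muHz, Teff ~ 4800 K
m, r = seismic_mass_radius(30.0, 4.0, 4800.0)  # roughly 0.9 Msun, 10 Rsun
```

Inverting the two observables in this way yields mass and radius without any reference to stellar models, which is what makes the population studies mentioned above possible.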
A more precise determination of the stellar parameters can be obtained
with the so-called grid-based methods, where stellar modelling is used
to relate the effective temperature, mass and radius
\citep[e.g.,][]{Gai2011}.
This was used by \citet{Basu2011} to investigate the properties of
two open clusters in the {\it Kepler} field.
To the extent that the modes are trapped in the convective envelope,
whose structure is very similar amongst different stars
apart from a scaling depending on the stellar mass and radius, one would
expect a corresponding similarity between the oscillation frequencies.
This is confirmed by the so-called `universal red-giant oscillation pattern'
of the oscillations, a term introduced by Mosser et al.
\citep{Huber2010, Mosser2011}.
To illustrate this, Fig.~\ref{fig:huber_echl} shows a
normalized and stacked \'echelle diagram.
Here collapsed \'echelle diagrams for the individual stars,
normalized by the large separation,
have been stacked after taking out the variation with stellar parameters
of the average $\epsilon$ (cf.\ Eq. \ref{eq:pasymp}).
Clearly there is very little variation with $\nu_{\rm max}$
and hence stellar properties
in the location of the ridges and hence the scaled small separations.
This is emphasized by the collapsed version of the diagram in the lower panel;
note that this also provides, as indicated, weak evidence
of modes of degree $l = 3$.
A lower limit to the width of the ridges is provided by the
natural width of the peaks, corresponding to the lifetime of the modes.
For $l = 0$ and 2 \citet{Huber2010} found a width of around
$0.2 \muHz$, essentially independent of the stellar parameters and
corresponding to a mode
lifetime\footnote{defined as the e-folding time of the displacement}
of around 18\,d.
Similar results were obtained by \citet{Baudin2011}
based on CoRoT observations.
Interestingly, this value of the lifetime is similar to the
estimate obtained by \citet{Houdek2002} from modelling of $\xi$ Hya.
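These numbers can be checked directly using the standard relation between a Lorentzian linewidth and the e-folding lifetime of the amplitude, $\tau = (\pi\Gamma)^{-1}$ with $\Gamma$ the FWHM linewidth in Hz:

```python
import math

gamma = 0.2e-6                   # mode linewidth (FWHM): 0.2 muHz in Hz
tau = 1.0 / (math.pi * gamma)    # e-folding lifetime of the amplitude, in s
tau_days = tau / 86400.0         # about 18 days, as quoted in the text
```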
\begin{figure}[b]
\begin{center}
\includegraphics[width=2.5in]{jcd_mjt_fig_16.eps}
\caption{The upper panel shows stacked collapsed, rescaled and
shifted \'echelle diagrams for red giants observed with {\it Kepler}.
These have been collapsed in the lower panels, where the
peaks corresponding to $l = 0 - 3$ are indicated.
The thick lines correspond to the full set, and the thin blue and red lines
(in the electronic version)
correspond to stars with $\nu_{\rm max} > 100 \muHz$ and
$\nu_{\rm max} < 50 \muHz$, respectively.
Figure courtesy of D. Huber \citep[see][]{Huber2010}.}
\label{fig:huber_echl}
\end{center}
\end{figure}
In Fig.~\ref{fig:huber_echl} it is evident that the ridge for
$l = 1$ appears substantially wider than for $l = 0$ and 2.
This can be understood from the analysis of \citet{Dupret2009},
discussed above, which showed that several dipolar modes may reach
observable amplitudes in the vicinity of an acoustic resonance
\citep[see also][]{Montal2010b}.
In a major breakthrough in red-giant asteroseismology
\citet{Beck2011} and \citet{Beddin2011} demonstrated
that the frequencies in such groups of peaks showed clear
g-mode-like behaviour; this allowed a determination
of the uniform g-mode period spacing (cf.\ Eq.~\ref{eq:gper}).
It was further demonstrated by \citet{Beddin2011} that the
inferred value of $\Delta \Pi$ allowed one to distinguish between stars on
the ascending red-giant branch and stars in the helium-burning
clump phase.
With further analyses these observations will undoubtedly provide
very valuable diagnostics of the central properties of red giants.
The frequencies of purely acoustic modes also contain information beyond the
basic parameters $\Delta \nu$ and $\nu_{\rm max}$.
Sharp features in the sound speed lead to systematic departures from
the simple asymptotic behaviour in Eq.~(\ref{eq:pasymp}), with
characteristic properties which provide information about the location and
strength of the feature \citep[e.g.,][]{Gough1990}.
An important example is the effect on the sound speed of the localized
reduction in the adiabatic exponent caused by the second ionization of
helium, which provides information about the helium abundance
\citep[e.g.,][]{Voront1991, Montei2005, Houdek2007}.
\citet{Carrie2010} and \citet{Miglio2010} found the signature of this
effect in the red giant HR~7349 observed by CoRoT.
Miglio et al.\ noted that the inferred location of the second helium
ionization zone provided a strong constraint on the properties of the star.
Such analyses will undoubtedly be possible for a large number of red
giants observed by CoRoT and {\it Kepler}.
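Schematically, such a sharp feature adds an oscillatory component to the frequencies; the functional form below is the commonly assumed leading-order one, and the parameter values are invented purely for illustration:

```python
import math

def glitch_signature(nu, amp=0.1e-6, tau_g=800.0, phi=0.3):
    """Oscillatory frequency perturbation (Hz) from an acoustic glitch
    at acoustic depth tau_g (s); all parameter values are illustrative."""
    return amp * math.sin(4.0 * math.pi * nu * tau_g + phi)

# The signature is periodic in frequency with period 1/(2*tau_g),
# which is how the location of the glitch is read off the data.
period = 1.0 / (2.0 * 800.0)   # Hz
```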
\section{Concluding remarks}
The last few years have seen amazing progress in the availability of data
for asteroseismic investigations of stellar interiors.
The full scientific exploitation of these data is just starting.
The community is realizing the challenges provided by actual observations,
compared with the simulations that preceded the missions,
and we are hard at work at
optimizing the techniques for the data analysis and interpretation, with the
goal of addressing specific aspects of stellar interiors.
The ability to study solar-like oscillations in hundreds of stars with
{\it Kepler} has been
a positive surprise, allowing comparative studies in
{\it ensemble asteroseismology}%
\footnote{or, according to D.~O.\ Gough, perhaps better
{\it synasteroseismology}}
\citep{Chapli2011};
however, we still need to extend the investigations
to unevolved stars of lower masses, as will surely be possible as ever
longer timeseries for individual stars are accumulated.
The present authors, at least, had not foreseen the possibility
of detailed analyses
of the mixed modes in subgiants, providing archaeological information about
the properties of the mixed cores during the main-sequence phase
\citep{Deheuv2010}.
The very recent detection and analysis of features in the pulsations
directly related to the g modes in the cores of red giants
\citep{Beck2011, Beddin2011} have also been
a major surprise, with huge potentials both for characterizing the evolutionary
state of the stars and for investigating the properties of the deep
interiors of these stars.
The detection in the {\it Kepler} field of a dozen subdwarf B stars
showing long-period oscillations, with additional cases being found
by CoRoT, is providing tight constraints on the overall properties of stars
in this very late phase of evolution and there is certainly a potential for
much more detailed investigations.
And the list goes on.
We are fortunate currently to have access to three largely complementary
space missions with asteroseismic potentials.
The MOST mission is perhaps somewhat overshadowed by CoRoT and {\it Kepler},
but it continues to provide excellent data on a variety of bright stars
with the possibility of selecting targets over a large part of the sky;
the recent combined analysis of MOST data and ground-based radial-velocity
data for Procyon \citep{Huber2011} demonstrates the potential of MOST
even in the area of solar-like pulsations.
CoRoT has fully demonstrated the required sensitivity to study
solar-like oscillations in main-sequence stars.
The mission has the very substantial advantage of being able to observe both in
the direction towards and away from the Galactic centre which allows
comparative studies of stellar populations.
Also, the stars observed in the asteroseismology field are relatively bright,
facilitating the ground-based support observations of these targets,
and the CoRoT `eyes' contain a very broad sample of interesting targets
of most types.
Finally, {\it Kepler} can observe stars for the duration of the mission,
optimizing the precision and sensitivity of the observations and allowing
the uninterrupted study of potential changes in stellar properties.
The heliocentric orbit provides a more stable and quiet environment
than the low-earth orbit of CoRoT, in terms of scattered light and
magnetospheric disturbances.
Also, the {\it Kepler} field has been found to be extremely rich in a variety
of interesting stars, now to a large extent characterized through the
KAI survey phase.
Even so, there is a continued need to improve the observational situation
and strong prospects that this will happen.
The BRITE Constellation mission,
under development by Austria, Canada and Poland \citep{Kuschn2009}
will fill a very important niche by carrying out photometric observations
of the brightest stars across the sky in two colours.
On a longer timescale the ESA PLATO mission, if selected, will
greatly extend the {\it Kepler} results \citep{Catala2011}.
Like {\it Kepler}, PLATO has the dual purpose of exoplanet investigations
and asteroseismology.
However, PLATO will look at substantially brighter stars in much larger fields.
This is important for the crucial ground-based follow-up observations to
confirm the detection of a planet in an apparent transit, particularly
for earth-size planets which will be a key goal for PLATO as it is for
{\it Kepler}.
Also, as a result PLATO will allow asteroseismic characterization of
a substantial fraction of the stars around which potential planets are
found, unlike {\it Kepler} where this is an exception.
PLATO will be placed in an orbit around the ${\rm L_2}$ point which
shares the advantage, in terms of stability, of {\it Kepler's}
heliocentric orbit.
The planned observing schedule consists of continuous observations of two
fields for two or three years each, followed by a `stop-and-stare' phase
where each field is observed for a few months.
The latter part of the mission will allow investigation of a substantial
fraction of the sky, providing a survey of a far more wide-ranging and varied
set of stellar pulsations and other types of stellar variability than has
been possible even with {\it Kepler}.
The great advances provided by the space asteroseismic observations
should not blind us to the continued usefulness of ground-based
observations.
In photometry, ground-based observations allow the study of rare objects
that may not be available to the space observatories.
Also, it provides greater flexibility, e.g., in carrying out observations
in several colours, of importance to mode identification.
Even more important is the use of ground-based radial-velocity observations,
particularly for solar-like oscillations.
Observations of solar oscillations from the SOHO mission have clearly
demonstrated that the solar background, from granulation and activity,
is substantially higher relative to the oscillations for photometric
observations than for radial-velocity observations,
as already noted by \citet{Harvey1988}
\citep[see also][]{Grunda2007}.
This background is evident in the {\it Kepler} observations of Gemma
illustrated in Fig.~\ref{fig:gemmaspec}.
This puts a natural limit to the precision and mode selection possible
in photometric observations.
Thus radial-velocity observations of carefully selected stars are still
required to reach the ultimate precision and level of detail in asteroseismic
investigations.
Such observations can be carried out from the ground, as has been
done successfully for a small sample of stars
\citep[e.g.,][]{Bouchy2002, Beddin2004, Bazot2005, Kjelds2005, Beddin2007,
Arento2008};
to reduce the gaps in the data they have been carried out in a coordinated
fashion, involving two or more observatories.
However, the required state-of-the-art instrumentation is only available
for limited periods and certainly not for the month-long observations
from several observatories that
are needed to secure adequate frequency resolution.
This is the motivation for the development of the Stellar Observations
Network Group (SONG) network \citep{Grunda2009, Grunda2011}.
SONG is planned to consist of 7 -- 8 nodes with a suitable geographical
distribution in the northern and southern hemispheres.
Each node will consist of a 1\,m telescope, equipped with a high-resolution
spectrograph for Doppler-velocity observations and a so-called lucky-imaging
camera for photometry in crowded fields.
With the use of an iodine cell as reference, and with careful optimization
of the optics, it is estimated that SONG will be able to study
velocity oscillations in a star like the Sun at magnitude 6.
The lucky-imaging camera is designed for characterization of exoplanet
systems through gravitational micro-lensing \citep[e.g.,][]{Domini2010}.
At present a prototype SONG node is under construction with Danish funding,
to be placed at the Iza{\~n}a Observatory on Tenerife,
with expected deployment and start of operations in 2011.
A Chinese node is being designed and is expected to be operational in 2013,
and funding and support for further nodes will be sought through a network
of international collaborations.
The data from these projects will provide excellent possibilities for testing
present stellar evolution calculations and indicating where improvements
should be made.
Such improvements are certainly required, particularly when it comes to the
treatment of stellar internal dynamics.
Impressive progress is being made in the application of complex, yet
unavoidably highly simplified, treatments of the interplay between rotation,
circulation, and magnetic fields,
including also the evolution of stellar internal angular velocity
\citep[e.g.,][]{Palaci2003, Palaci2006, Mathis2004, Mathis2005}
\citep[see also][]{Maeder2009}.
Indeed, full stellar evolution calculations will undoubtedly require such
simplifications in the foreseeable future.
However, it is equally important that these simplified treatments be tested
by detailed simulations of the hydrodynamical phenomena, albeit for conditions
that often do not fully reflect the stellar internal conditions.
An important example is the simulation of near-surface convection, where
computations under reasonably realistic conditions are in fact possible
\citep[e.g.,][]{Nordlu2009, Trampe2010}.
Simulations of the deeper parts of the convective envelope and the region
below \citep[see][for reviews]{Miesch2005, Miesch2009}
unavoidably require simplification, but are providing deep insights
into the interaction between convection and rotation, and the generation
of magnetic fields \citep{Brun2004, Browni2006, Miesch2008}.
Such simulations are now being extended to
stars other than the Sun \citep{Brun2009, BrownB2011},
including the dynamics of stellar convective cores
\citep{Brun2005, Feathe2009}.
The observations by CoRoT, {\it Kepler} and projects to follow provide excellent
prospects for testing the results of such modelling and hence improve our
understanding of stellar internal dynamics.
\medskip\noindent
{\it Acknowledgement}: We wish to take this occasion to thank Juri for many
years of enjoyable collaboration, as well as for his inspiration and
constant friendship.
We are very grateful to
P. Degroote,
J. De Ridder,
G. Do{\u g}an,
D. Huber,
T. Kallinger,
C. Karoff,
K. Kolenberg,
R. Szab\'o and
V. Van Grootel
for the provision of, or help with, the figures, and to Travis Metcalfe for
comments that helped improve the paper.
We thank Sacha Brun and Nick Brummell for the excellent organization of an
exciting conference, and for their patience with the present authors.
The National Center for Atmospheric Research is sponsored by the
National Science Foundation (NSF).
\section{INTRODUCTION}
A topological insulator (TI) is a material with the insulating bulk, but with topologically protected gapless surface or edge states
\cite{PhysRevLett.95.146802,PhysRevLett.95.226801,bernevig2006quantum,PhysRevLett.98.106803,PhysRevB.76.045302,PhysRevB.78.045426,PhysRevB.78.195424,RevModPhys.82.3045,RevModPhys.83.1057}.
TIs are classified in terms of $\mathbb{Z}_{2}$ topological invariants and their gapless surface states are
topologically protected.
Two- and three-dimensional TIs have one-dimensional (1D) and two-dimensional (2D) topological gapless states, respectively.
Namely, topological nature in the $n$-dimensional bulk of the system is associated with ($n-1$)-dimensional gapless states in TIs.
In recent years, second-order topological insulators (SOTI) have been proposed as a new class of topological insulators
\cite{PhysRevLett.108.126807,PhysRevLett.110.046404,PhysRevB.92.085126, benalcazar2017quantized, PhysRevB.96.245115,PhysRevLett.119.246402, fang2017rotation,PhysRevB.97.241405, schindler2018higher, PhysRevLett.119.246401,PhysRevLett.120.026801,PhysRevLett.121.116801,PhysRevB.97.155305, PhysRevB.97.241402, PhysRevB.97.205136, schindler2018higherbismuth, PhysRevB.98.205129, wang2018higher, PhysRevLett.122.256402,ezawa2019scientificrepo,PhysRevB.98.201114, PhysRevB.97.205135, PhysRevB.98.081110, PhysRevB.98.245102,PhysRevX.9.011012,okugawa2019second,PhysRevB.99.041301,yue2019symmetry,zhang2019second,PhysRevLett.122.076801,PhysRevB.98.205147,PhysRevLett.121.196801,PhysRevB.98.235102,PhysRevLett.123.016805,PhysRevLett.123.073601,serra2018observationnature555,peterson2018quantizedNature7695,imhof2018topolectricalnatphys,PhysRevB.99.195431,PhysRevLett.123.016806,sheng2019two,agarwala2019higher,PhysRevLett.123.036802,chen2019higher,PhysRevB.98.035147,wieder2018axioninsulatorpump}. In three dimensions, SOTIs are insulating both in the bulk and in the surface. However,
they have anomalous gapless states at an intersection of two surfaces, called hinge states.
In the SOTIs, the topological nature of $n$-dimensional bulk manifests itself not as $(n-1)$- but as $(n-2)$-dimensional gapless states.
Among various classes of SOTIs, one class of SOTIs is protected by inversion symmetry
\cite{wang2018higher,ezawa2019scientificrepo,PhysRevLett.122.256402, PhysRevB.98.205129,PhysRevB.97.205136,schindler2018higherbismuth}, and this class of SOTIs is characterized by a $\mathbb{Z}_{4}$ index of symmetry-based indicators \cite{schindler2018higherbismuth, wang2018higher,ezawa2019scientificrepo, PhysRevX.7.041069, po2017symmetry,PhysRevX.8.031070, PhysRevB.98.115150, PhysRevLett.122.256402, PhysRevB.98.205129,tang2019comprehensive,tang2019efficient}.
Appearance of the hinge states in a SOTI is usually understood in terms of the surface Dirac Hamiltonian with a symmetry-respecting mass term. The surface energy spectrum is gapped by adding this mass term. However, due to the symmetry constraint, the mass term changes its sign depending on the surface direction. Therefore, at the intersection of two surfaces having mass terms with opposite signs, the mass term must vanish. This allows the electrons at the hinge to be described by a massless Dirac Hamiltonian, and the energy spectrum becomes gapless at the hinge.
As described above, the appearance of a hinge state is topologically protected by symmetry. However, the above discussion based on the surface Dirac Hamiltonian does not directly explain how the $\mathbb{Z}_{4}$ index of symmetry-based indicators is related to the hinge state.
In this paper, our purpose is to show the emergence of the gapless hinge states when the $\mathbb{Z}_{4}$ index of symmetry-based indicators is nontrivial without relying upon specific models.
The previous work discussing the connection between symmetry-based indicators and hinge states \cite{PhysRevX.8.031070} is based on the $\boldsymbol{k}\cdot \boldsymbol{p}$ Dirac Hamiltonian. Therefore, this approach cannot be applied to systems whose surfaces are not described by a Dirac model. In order to complete the proof, it is necessary to establish a theory on the connection between the $\mathbb{Z}_{4}$ index and hinge states for general systems.
There have been studies based on tight-binding models of SOTIs with a nontrivial $\mathbb{Z}_{4}$ index of symmetry-based indicators \cite{schindler2018higherbismuth, PhysRevB.98.205129, wang2018higher, PhysRevLett.122.256402,ezawa2019scientificrepo}.
However, these model-based arguments do not lead to a general proof that hinge states appear in any model with a nontrivial $\mathbb{Z}_{4}$ index of symmetry-based indicators.
In this paper, we propose a new method to understand the hinge state only from the $\mathbb{Z}_{4}$ index of symmetry-based indicators.
This method is applicable to a broad range of systems.
In this method, we change boundary conditions in two directions by changing hopping amplitude across the boundaries. When the hopping amplitudes across the two boundaries become zero, the system is cut along two planes, giving rise to a hinge. Then by tracing the spectral flow along the change, we can see whether and how hinge states appear.
From this discussion, we show that when the
$\mathbb{Z}_{4}$ topological index $\mu_{1}$ is $\mu_{1}=2$ for class A, gapless states appear inevitably at the hinges of three-dimensional insulators with inversion symmetry.
We note that the gapless states may not be localized at the hinges if surfaces are gapless. Therefore we restrict ourselves to the case with no gapless surface states throughout the present paper. In the main text of this paper, we consider systems in class A, and we extend our theory to systems in class AII in Appendix \ref{section:timereversal}.
A similar method, changing the hopping amplitude across only one boundary, has been applied to characterize TIs \cite{PhysRevB.78.045426} and SOTIs \cite{bulkhinge}; this method is called the cutting procedure.
In Ref.~\cite{bulkhinge}, the boundary condition is changed only along one direction, in contrast with the present paper. Through this change, the three-dimensional system is related to a two-dimensional slab, and the indicators characterizing three-dimensional inversion-symmetric SOTIs are shown to be directly related to the indicators of two-dimensional inversion-symmetric systems, i.e., the parity of the Chern number. In the present paper, we study the spectral flows in the band gap, i.e., the behaviors of gapless states, while continuously changing the boundary conditions along the two directions. In addition, we find the spectral flows related to the appearance of the hinge states.
This paper is organized as follows. In Sec.~\ref{topological index and cutting procedure}, we explain the $\mathbb{Z}_{4}$ topological index and cutting procedure. In addition, we show appearance of hinge states by the applying the cutting procedure to one of the models with $\mathbb{Z}_{4}=2$. In Sec.~\ref{section:Tight-binding model}, we confirm our theory in Sec.~\ref{topological index and cutting procedure} by calculations on a tight-binding model of a SOTI. In Sec.~\ref{sec:positionofhingestates}, we discuss which of the hinges have hinge states. Our conclusion is given in Sec.~\ref{section:conclusion}.
\section{\label{topological index and cutting procedure}$\mathbb{Z}_{4}$ topological index and cutting procedure}
In this section, we will establish the relationship between the hinge states and the $\mathbb{Z}_{4}$ topological index by the cutting procedure.
\subsection{Strong $\mathbb{Z}_{4}$ index and weak $\mathbb{Z}_{2}$ indices}
\begin{figure}
\includegraphics{z4paritypic.pdf}
\caption{\label{zfourparitypicture}(Color online) Parity eigenvalues at TRIM.~(a, b) Two examples of parity eigenvalues at TRIM to realize $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$. They are transformed to each other by a shift of an inversion center by $\boldsymbol{a}_{3}/2$.}
\end{figure}
We consider a noninteracting centrosymmetric system on a three-dimensional lattice in class A, one of the Altland-Zirnbauer symmetry classes [\onlinecite{PhysRevB.55.1142}].
For three-dimensional systems, there are eight time-reversal invariant momenta (TRIM) denoted by $\Gamma_{j}$.
The eight TRIM $\Gamma_{j}$ can be indexed by three integers $n_{l}=0,1$ defined mod 2,
\begin{equation}
\Gamma_{j=(n_{1},n_{2},n_{3})}=\frac{1}{2}(n_{1}\boldsymbol{b}_{1}+n_{2}\boldsymbol{b}_{2}+n_{3}\boldsymbol{b}_{3}),
\end{equation}
where $\boldsymbol{b}_{l}$ are primitive reciprocal lattice vectors.
According to Ref. [\onlinecite{PhysRevB.98.115150}], the symmetry indicator for class A is found to be $X_{\rm BS}=\mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z}_{4}$.
The three factors of $\mathbb{Z}_{2}$ are the weak topological indices
\begin{equation}\label{weakz2index}
\nu_{a}\equiv \sum_{\Gamma_{j}:{\rm TRIM}\land n_{a}=1}n_{-}(\Gamma_{j})\ \ \ ({\rm mod}\ 2)\ (a=1,2,3),
\end{equation}
where $n_{-}(\Gamma_{j})$ is the number of occupied states with odd parity at the TRIM $\Gamma_{j}$, and the summation is taken over the TRIM on the plane $n_{a}=1$.
The factor of $\mathbb{Z}_{4}$ is the strong topological index, defined as
\begin{align}\label{z4index}
\mu_{1}\equiv
&\frac{1}{2}\sum_{\Gamma_{j}:{\rm TRIM}}\Bigl(n_{+}(\Gamma_{j})-n_{-}(\Gamma_{j})\Bigr) \ \ \ ({\rm mod}\ 4)\nonumber \\
=&-\sum_{\Gamma_{j}:{\rm TRIM}}n_{-}(\Gamma_{j})\ \ \ ({\rm mod}\ 4),
\end{align}
where $n_{+}(\Gamma_{j})$ is the number of occupied states with even parity at the TRIM $\Gamma_{j}$.
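The second equality holds because the number of occupied bands, $n_{\rm occ}=n_{+}(\Gamma_{j})+n_{-}(\Gamma_{j})$, is the same at every TRIM for a gapped system:

```latex
\frac{1}{2}\sum_{\Gamma_{j}:{\rm TRIM}}\Bigl(n_{+}(\Gamma_{j})-n_{-}(\Gamma_{j})\Bigr)
=\frac{1}{2}\sum_{\Gamma_{j}:{\rm TRIM}}\Bigl(n_{\rm occ}-2n_{-}(\Gamma_{j})\Bigr)
=4n_{\rm occ}-\sum_{\Gamma_{j}:{\rm TRIM}}n_{-}(\Gamma_{j})
\equiv -\sum_{\Gamma_{j}:{\rm TRIM}}n_{-}(\Gamma_{j}) \quad ({\rm mod}\ 4),
```

since $4n_{\rm occ}\equiv 0\ ({\rm mod}\ 4)$.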
Therefore,
for systems with inversion symmetry, topological phases are characterized by the symmetry indicator $X_{\rm BS}=(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})$ with $\nu_{a}=0, 1$ and $\mu_{1}=0, 1, 2, 3$.
In Sec.~\ref{cuttingprocedure}, we will show that the gapless hinge states appear in a three-dimensional insulator when $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$.
Here, for that purpose, let us consider the numbers of occupied states with odd parity at each TRIM in the case of $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$.
As shown in Fig.~\ref{zfourparitypicture}(a), one of the simplest examples to realize $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$ is $n_{-}(\Gamma)=2$, $n_{-}(\Gamma_{j})=0$ ($\Gamma_{j}\neq \Gamma$), where $\Gamma=(0, 0, 0)$.
The example in Fig.~\ref{zfourparitypicture}(b) gives the same set of topological invariants $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$,
but it can be reduced to the case of Fig.~\ref{zfourparitypicture}(a) by a gauge transformation that shifts the inversion center.
Namely, by shifting the inversion center $\bm{R}$ of the system to $\bm{R}+\bm{a}_{i}/2\ (i=1, 2, 3)$, where $\boldsymbol{a}_{i}$ are translation vectors, the parity at every TRIM $\Gamma_{j}$ on the plane $n_{i}=1$ is multiplied by $(-1)$. Thus, shifting the inversion center $\bm{R}$ to $\bm{R}+\bm{a}_{3}/2$ transforms Fig.~\ref{zfourparitypicture}(b) into Fig.~\ref{zfourparitypicture}(a), and the two cases are equivalent.
Many other patterns of odd-parity states at the TRIM also give $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$ besides those in Figs.~\ref{zfourparitypicture}(a) and (b); for simplicity, in this section we consider the case of Fig.~\ref{zfourparitypicture}(a).
Nevertheless, the cutting procedure discussed in the next subsection applies to every case with $(\nu_{1}, \nu_{2}, \nu_{3}, \mu_{1})=(0,0,0,2)$ (see Appendix~\ref{section:extension to the general cases}).
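The effect of the inversion-center shift is simple bookkeeping: moving the center by $\bm{a}_{3}/2$ replaces $n_{-}(\Gamma_{j})\rightarrow \nu-n_{-}(\Gamma_{j})$ on the plane $n_{3}=1$, where $\nu$ is the number of occupied bands. The Python sketch below (helper names are ours; the pattern assigned to Fig.~\ref{zfourparitypicture}(b) is the one implied by the reduction just described) checks that the shift maps it to the pattern of Fig.~\ref{zfourparitypicture}(a) while leaving $\mu_{1}$ unchanged:

```python
# Bookkeeping sketch (helper names are ours): shifting the inversion center
# by a_3/2 flips the parity at every TRIM on the plane n_3 = 1, i.e.
# n_-(Gamma_j) -> nu - n_-(Gamma_j) there (nu = number of occupied bands).
from itertools import product

def shift_center(n_minus, axis, nu=2):
    return {j: (nu - n if j[axis] == 1 else n) for j, n in n_minus.items()}

def mu1(n_minus):
    return (-sum(n_minus.values())) % 4   # strong Z_4 index

# pattern consistent with the reduction described in the text:
# odd-parity pairs at Gamma and at the four TRIM on the n_3 = 1 plane
n_b = {j: (2 if j == (0, 0, 0) or j[2] == 1 else 0)
       for j in product((0, 1), repeat=3)}
n_a = shift_center(n_b, axis=2)           # shift the center by a_3/2
# reduces to the pattern with two odd-parity states at Gamma only
assert n_a == {j: (2 if j == (0, 0, 0) else 0) for j in product((0, 1), repeat=3)}
assert mu1(n_b) == mu1(n_a) == 2          # the Z_4 index is gauge invariant
print("reduction verified")
```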
\subsection{Cutting procedure}
\begin{figure}
\includegraphics{cuttingprocedurefig.pdf}
\caption{\label{cuttingprocedurefig}(Color online) Cutting procedure. (a) Our boundary conditions for the cutting procedure. Each gray box represents the entire system, and copies of the system are shown to illustrate the boundary conditions. The boundary conditions along the $x_{1}$ and $x_{2}$ directions change with $\lambda_{1}$ and $\lambda_{2}$: we replace every hopping amplitude $t$ for the bonds that cross the boundary at $x_{i}=$ const by $\lambda_{i}t$ $(i=1, 2)$. (b) Black points represent the possible wave vectors for the periodic and anti-periodic boundary conditions in the $x_{1}$ direction, with $\lambda_{2}$ set to unity.
When $\lambda_{1}=1$ and $\lambda_{2}=1$, among the four TRIM, only $\bm{k}=(0,0)$ is among the possible wave vectors. Likewise, when $\lambda_{1}=-1$ and $\lambda_{2}=1$, only $\bm{k}=(\pi, 0)$ is among them.
}
\end{figure}
In order to understand the relationship between the $\mathbb{Z}_{4}$ topological index and the hinge states, we introduce a cutting procedure, which is used in the appendix of Ref.~[\onlinecite{PhysRevB.78.045426}] in the context of the time-reversal invariant $\mathbb{Z}_{2}$ topological insulators.
Here, we consider the system to be large but finite, with periodic boundary conditions in the $x_{1}$ and $x_{2}$ directions that are parallel to the primitive reciprocal lattice vectors $\bm{b}_{1}$ and $\bm{b}_{2}$, respectively.
We set the system size as $L_{1}\times L_{2},\ L_{1}=L_{2}=2M+1$ ($M$ is an integer) for simplicity; the case of an even system size is discussed in Appendix~\ref{section:appendixd}.
Here, the length of the system along the $x_{j}$ direction ($j=1,2,3$) is measured in the unit of the lattice constant, and each unit cell is inversion-symmetric with its inversion center at $x_{j}={\rm integer}$.
Along the $x_{3}$ direction that is parallel to $\bm{b}_{3}$, we set the periodic boundary condition and set the system size as $L_{3}\rightarrow \infty$.
Thereby, the Bloch wave-vector $k_{3}$ in the $x_{3}$ direction can be defined.
In this subsection, we focus on $k_{3}=0$. That is, we will consider the four TRIM $\Gamma_{j=(n_{1},n_{2},0)}$ shown in Fig.~\ref{zfourparitypicture}(a).
Along the $x_{1}$ and $x_{2}$ directions, instead of the periodic boundary conditions, we multiply all the hopping amplitudes across the boundary between $x_{i}=-M$ and $x_{i}=M$ by a real parameter $\lambda_{i}$.
This means that the boundary conditions of the finite system in the $x_{1}$ and $x_{2}$ directions change as $\lambda_{1}$ and $\lambda_{2}$ are varied, as shown in Fig.~\ref{cuttingprocedurefig}(a). The case $\lambda_{1}=1$ corresponds to a periodic boundary condition in the $x_{1}$ direction, and $\lambda_{1}=0$ to an open boundary condition.
For any values of $\lambda_{1}$ and $\lambda_{2}$, the system is inversion symmetric with its inversion center at $(x_{1},x_{2},x_{3})=(0,0,0)$.
\subsection{Spectral flows in the band gap\label{cuttingprocedure}}
In the following, we show the existence of gapless hinge states when $(\nu_{1},\nu_{2},\nu_{3},\mu_{1})=(0,0,0,2)$, i.e.\ when the $\mathbb{Z}_{4}$ index is nontrivial. In the cutting procedure with the parameters $\lambda_{1}$ and $\lambda_{2}$, the hinge states appear at $\lambda_{1}=\lambda_{2}=0$, while the bulk topological invariants $(\nu_{1},\nu_{2},\nu_{3},\mu_{1})$ determine the parity eigenvalues of the states at $(\lambda_{1},\lambda_{2})=(1,\pm1)$, $(-1,\pm1)$, as we show below. To relate the information on the wave functions at $(\lambda_{1},\lambda_{2})=(1,\pm1)$, $(-1,\pm1)$ to that at $\lambda_{1}=\lambda_{2}=0$, we utilize the symmetry of the spectral flows under $\lambda_{1}\leftrightarrow -\lambda_{1}$ and under $\lambda_{2}\leftrightarrow -\lambda_{2}$, which holds as long as the surface is gapped.
When $\lambda_{1}=1$, the wave vector in the $x_{1}$ direction is
\begin{equation}
k_{1}=\frac{2\pi}{L_{1}}m_{1}\ \ \ (-M\leq m_{1}\leq M),
\end{equation}
because of the periodic boundary condition in the $x_{1}$ direction.
Because $L_{1}$ is an odd number, $k_{1}$ can be $0$ but not $\pi$.
When $(\lambda_{1},\lambda_{2})=(1,1)$,
$(k_{1}, k_{2})$ can take a value $(0, 0)$, but not
$(\pi, 0),\ (0, \pi)$ or $(\pi, \pi)$ as shown in Fig.~\ref{cuttingprocedurefig}(b).
\begin{figure}
\includegraphics{energylambda1.pdf}
\caption{\label{energylambda}(Color online) Energy spectra as $\lambda_{1}$ is changed from $1$ to $-1$ with $\lambda_{2}$ held constant. The energy spectra are symmetric with respect to $\lambda_{1}\leftrightarrow -\lambda_{1}$, and states at $\lambda_{1}$ and $-\lambda_{1}$ have opposite parity eigenvalues. (a-d) are four representative examples for $\lambda_{2}=1$; (e) and (f) are two examples for $\lambda_{2}=-1$.}
\end{figure}
Next, we consider the case with $\lambda_{1}=-1$. In this case, wave functions are multiplied by $-1$ across the boundary between $x_{1}=M$ and $x_{1}=-M$, corresponding to an anti-periodic boundary condition in the $x_{1}$ direction.
This anti-periodic boundary condition is converted into the periodic boundary condition in the $x_{1}$ direction by a unitary transformation $U_{1}={\rm exp}[i\pi \hat{x}_{1}/L_{1}]$, where $\hat{x}_{1}$ is the position operator for the coordinate $x_{1}$. Under this transformation, the Bloch wave vector is shifted as $k_{1} \rightarrow k_{1}+\frac{\pi}{L_{1}}$ (see Appendix \ref{apendixa}).
Thus, the Bloch wave vector in the $x_{1}$ direction is
\begin{equation}
k_{1}=\frac{2\pi}{L_{1}}m_{1}+\frac{\pi}{L_{1}}\ \ \ (-M\leq m_{1}\leq M).
\end{equation}
In this case, because $L_{1}$ is an odd number, $k_{1}$ can be $\pi$ but not $0$.
When $(\lambda_{1},\lambda_{2})=(-1,1)$,
$(k_{1}, k_{2})$ can take a value $(\pi, 0)$, but not
$(0, 0),\ (0, \pi),\ (\pi, \pi)$ as shown in Fig.~\ref{cuttingprocedurefig}(b).
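These statements about the allowed wave vectors can be verified numerically. The sketch below (our own check; the 1D chain with unit hopping is a stand-in, not the model of this paper) confirms that for odd $L_{1}$ the periodic grid contains $k_{1}=0$ but not $\pi$, the anti-periodic grid contains $\pi$ but not $0$, and the anti-periodic spectrum equals the periodic dispersion $2\cos k$ evaluated on the shifted grid:

```python
import numpy as np

M = 5
L = 2 * M + 1                                   # odd system size, as in the text

# allowed k for periodic (lambda = 1) and anti-periodic (lambda = -1) BCs
k_pbc = 2 * np.pi * np.arange(-M, M + 1) / L
k_apbc = k_pbc + np.pi / L

assert np.any(np.isclose(k_pbc, 0)) and not np.any(np.isclose(np.abs(k_pbc), np.pi))
assert np.any(np.isclose(np.abs(k_apbc), np.pi)) and not np.any(np.isclose(k_apbc, 0))

# 1D chain with unit hopping: the wrap-around bond carries the factor lambda = -1
H = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
H[0, -1] = H[-1, 0] = -1.0
ev = np.sort(np.linalg.eigvalsh(H))
# anti-periodic spectrum = periodic dispersion 2 cos(k) on the shifted grid
assert np.allclose(ev, np.sort(2 * np.cos(k_apbc)), atol=1e-10)
print("checks passed")
```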
Now, we calculate $N_{-}(\lambda_{1},\lambda_{2})$, the number of occupied states with odd parity at $k_{3}=0$.
Inversion operation $\hat{I}$ changes $(k_{1}, k_{2}, k_{3})$ to $(-k_{1}, -k_{2}, -k_{3})$.
Each wave-function $\psi_{m}(\boldsymbol{k})$ at non-TRIM points $\boldsymbol{k}=(k_{1},k_{2},k_{3})$ with $k_{3}=0$ can always be paired with one at $-\boldsymbol{k}$ to construct two states, one with even-parity $\phi_{+}$ and the other with odd-parity $\phi_{-}$:
\begin{equation}
\phi_{\pm}\equiv \frac{1}{\sqrt{2}}\biggl(\psi_{m}(\boldsymbol{k})\pm \hat{I}\psi_{m}(\boldsymbol{k})\biggr),
\end{equation}
where $\hat{I}\psi_{m}(\boldsymbol{k})\propto \psi_{m}(-\boldsymbol{k})$.
Therefore, for each occupied band, each non-TRIM pair ($\boldsymbol{k}$, $-\boldsymbol{k}$) with $k_{3}=0$ contributes 1 to $N_{-}(\lambda_{1},\lambda_{2})$.
On the other hand, a contribution to $N_{-}(\lambda_{1},\lambda_{2})$ from the TRIM depends on $\lambda_{1}$ and $\lambda_{2}$. First, we consider the case with $\lambda_{2}=1$.
When $(\lambda_{1},\lambda_{2})=(1, 1)$, the number of odd-parity states at the TRIM that contribute to $N_{-}(\lambda_{1},\lambda_{2})$ is
$n_{-}(0,0,0)$, where $n_{-}(k_{1}, k_{2}, k_{3})$ is the number of occupied states with odd parity at TRIM $\Gamma_{j=(n_{1},n_{2},n_{3})}=(n_{1}\boldsymbol{b}_{1}+n_{2}\boldsymbol{b}_{2}+n_{3}\boldsymbol{b}_{3})/2$.
Let $\nu$ be the number of occupied bands. Then $N_{-}(\lambda_{1}=1, \lambda_{2}=1)$ can be expressed as follows:
\begin{equation}\label{noddlambda1}
N_{-}(1, 1)=\frac{(L_{1}L_{2}-1)\nu}{2}+n_{-}(0,0,0).
\end{equation}
Similarly, when $(\lambda_{1}, \lambda_{2})=(-1, 1)$, among the TRIM only $\Gamma_{j=(1,0,0)}=(\pi,0,0)$ contributes to $N_{-}(\lambda_{1},\lambda_{2})$.
From this, $N_{-}(\lambda_{1}=-1,\lambda_{2}=1)$ can be expressed as follows:
\begin{equation}\label{noddlambdaminus1}
N_{-}(-1, 1)=\frac{(L_{1}L_{2}-1)\nu}{2}+n_{-}(\pi,0,0).
\end{equation}
Therefore,
from Eqs.~(\ref{noddlambda1}) and (\ref{noddlambdaminus1}), the total change in $N_{-}(\lambda_{1}, \lambda_{2}=1)$ between $\lambda_{1}=1$ and $\lambda_{1}=-1$ can be expressed as follows:
\begin{align}\label{kyusiki}
&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=1}_{\lambda_{1}=-1}
\nonumber \\
=& n_{-}(0,0,0)-n_{-}(\pi,0,0)=2,
\end{align}
where
\begin{align}
\bigl[N_{\pm}(\lambda_{1}, \lambda_{2})\bigr]^{\lambda_{1}=a}_{\lambda_{1}=b}\equiv N_{\pm}(a,\lambda_{2})
- N_{\pm}(b,\lambda_{2}).
\end{align}
That is, in the process of changing from $\lambda_{1}=1$ to $-1$,
the number of occupied states with odd parity is reduced by 2.
In addition, we can show that the energy spectrum is symmetric with respect to $\lambda_{1}\leftrightarrow -\lambda_{1}$, and the bound states $\ket{\psi_{l}(\lambda_{1})}$ and $\ket{\psi_{l}(-\lambda_{1})}$ have opposite parity eigenvalues (see Appendix \ref{apendixa}).
Thus, as we show some examples in Fig.~\ref{energylambda}(a-d), two states with odd-parity move from the valence bands for $\lambda_{1}=1$ to the conduction bands for $\lambda_{1}=-1$.
In addition, two states with even-parity move from the conduction bands for $\lambda_{1}=1$ to the valence bands for $\lambda_{1}=-1$.
This means that the following relation generally holds:
\begin{equation}\label{atarasikigou}
\bigl[N_{\pm}(\lambda_{1},\lambda_{2}=1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=1}=\bigl[N_{\mp}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=-1}.
\end{equation}
Now, we calculate $N_{+-}(\lambda_{1},\lambda_{2})\equiv N_{+}(\lambda_{1},\lambda_{2})-N_{-}(\lambda_{1},\lambda_{2})$ at the four points A $(\lambda_{1}=1,\lambda_{2}=1)$, B $(\lambda_{1}=0,\lambda_{2}=1)$, C $(\lambda_{1}=1,\lambda_{2}=-1)$ and D $(\lambda_{1}=0, \lambda_{2}=-1)$ shown in Fig.~\ref{fig:appearing_hinge}(a). In the following, we calculate the differences of this quantity between (i) A and B, (ii) C and D, and (iii) A and C.
(i) First, we calculate the change in $N_{+-}(\lambda_{1},\lambda_{2})$ between the point A and the point B.
From Eqs.~(\ref{kyusiki}) and (\ref{atarasikigou}), we obtain the following relation.
\begin{align}\label{hukusen_appearance_ichi}
&\bigl[N_{+-}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=1}\nonumber \\
=&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=-1}-
\bigl[N_{-}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=1}\nonumber \\
=&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=1)\bigr]^{\lambda_{1}=1}_{\lambda_{1}=-1}=2.
\end{align}
(ii) Next, we analyze the case with $\lambda_{2}=-1$.
In this case,
$N_{-}(\lambda_{1}, \lambda_{2}=-1)$ can be expressed as follows:
\begin{equation}
N_{-}(1, -1)=\frac{(L_{1}L_{2}-1)\nu}{2}+n_{-}(0,\pi,0),
\end{equation}
\begin{equation}
N_{-}(-1, -1)=\frac{(L_{1}L_{2}-1)\nu}{2}+n_{-}(\pi,\pi,0),
\end{equation}
in the same way as in the case of $\lambda_{2}=1$.
Therefore,
the total change in $N_{-}(\lambda_{1}, \lambda_{2}=-1)$ between the point C and the point E ($\lambda_{1}=-1, \lambda_{2}=-1$), can be expressed as follows:
\begin{align}\label{minusichinikoteisiteichikaraminuichi}
&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=-1)\bigr]^{\lambda_{1}=1}_{\lambda_{1}=-1}\nonumber \\
=&n_{-}(0,\pi,0)-n_{-}(\pi,\pi,0)=0.
\end{align}
That is, the number of occupied states with odd parity does not change in the process of changing $\lambda_{1}$ from $1$ to $-1$.
Thus, since states with even and odd parity are interchanged under $\lambda_{1}\leftrightarrow -\lambda_{1}$, the energy spectra change as shown in Figs.~\ref{energylambda}(e) and (f), so as to satisfy $N_{-}(1, -1)=N_{-}(-1, -1)$.
From Eqs.~(\ref{atarasikigou}) and (\ref{minusichinikoteisiteichikaraminuichi}), the change in $N_{+-}(\lambda_{1},\lambda_{2}=-1)$ between the point C and the point D, can be expressed as follows:
\begin{align}\label{N(0,-1)nosa}
&\bigl[N_{+-}(\lambda_{1}, \lambda_{2}=-1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=1}\nonumber \\
=&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=-1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=-1}
-\bigl[N_{-}(\lambda_{1}, \lambda_{2}=-1)\bigr]^{\lambda_{1}=0}_{\lambda_{1}=1}\nonumber \\
=&\bigl[N_{-}(\lambda_{1}, \lambda_{2}=-1)\bigr]^{\lambda_{1}=1}_{\lambda_{1}=-1}=0.
\end{align}
(iii) So far, we have considered the energy spectra with $\lambda_{2}=1$ or $\lambda_{2}=-1$ fixed while varying $\lambda_{1}$. A similar conclusion holds when $\lambda_{1}=1$ or $\lambda_{1}=-1$ is fixed and $\lambda_{2}$ is varied.
Therefore, similarly to Eq.~(\ref{kyusiki}), the total change in $N_{-}(\lambda_{1}=1,\lambda_{2})$ between the point A and the point C in Fig.~\ref{fig:appearing_hinge}(a), can be expressed as follows:
\begin{align}
&\bigl[ N_{-}(\lambda_{1}=1,\lambda_{2}) \bigr]^{\lambda_{2}=1}_{\lambda_{2}=-1}\nonumber \\
=&n_{-}(0,0,0)-n_{-}(0,\pi,0)=2.
\end{align}
That is, two states with odd parity move from the valence bands of $\lambda_{2}=1$ to the conduction bands of $\lambda_{2}=-1$.
In the same way as when $\lambda_{1}$ varies,
we can show that the energy spectra are symmetric with respect to $\lambda_{2} \leftrightarrow -\lambda_{2}$, and the bound states $\ket{\psi(\lambda_{2})}$ and $\ket{\psi(-\lambda_{2})}$ have opposite parity.
Therefore, two states with even-parity move from the conduction bands of $\lambda_{2}=1$ to the valence bands of $\lambda_{2}=-1$.
From this discussion, we conclude that $N_{+-}(\lambda_{1},\lambda_{2})$ increases by 4 in going from the point A to the point C.
This means that the following relation holds:
\begin{align}\label{N(1,-1)nosa}
\bigl[ N_{+-}(\lambda_{1}=1,\lambda_{2})\bigl]^{\lambda_{2}=-1}_{\lambda_{2}=1}=4.
\end{align}
To summarize (i)-(iii),
from Eqs.~(\ref{hukusen_appearance_ichi}), (\ref{N(0,-1)nosa}) and (\ref{N(1,-1)nosa}), we obtain the following equation.
\begin{align}\label{mainresult2b}
&\bigl[N_{+-}(\lambda_{1}=0,\lambda_{2}) \bigr]^{\lambda_{2}=-1}_{\lambda_{2}=1}=2.
\end{align}
This equation is the main result of this subsection and this is closely related to the appearance of hinge states as described in the next subsection.
In the next subsection, we will consider the energy spectrum when $\lambda_{1}=0$ through the change of $\lambda_{2}$ from $\lambda_{2}=1$ to $\lambda_{2}=-1$, and for this purpose Eq.~(\ref{mainresult2b}) is important.
\subsection{Appearance of hinge states}\label{subsec:Appearance of hinge states}
\begin{figure}
\includegraphics{hinge_appear_fig2.pdf}
\caption{\label{fig:appearing_hinge}(Color online) Appearance of the hinge states. (a) The differences in $N_{+-} (\lambda_{1},\lambda_{2})\equiv N_{+}(\lambda_{1},\lambda_{2})-N_{-}(\lambda_{1},\lambda_{2})$ at the points A, B, C and D, measured from the point A, for $k_{3}=0$. From this we conclude that the values of $N_{+-}(\lambda_{1},\lambda_{2})$ at the points B and D differ by two. (b) The energy spectrum for $\lambda_{1}=0$ and $k_{3}=0$ as $\lambda_{2}$ is changed from $1$ to $-1$. (c) The energy spectrum for $\lambda_{1}=0$ and $k_{3}=\pm \Delta$, where $\Delta$ is a small real number. Because $k_{3}=\pm\Delta$ is away from $k_{3}=0$, inversion symmetry is absent and the degeneracy between the even-parity and odd-parity states at $k_{3}=0$ is lifted. (d) The energy spectrum for $\lambda_{1}=0$ and $k_{3}=\pi$. In this case, states with even and odd parity do not cross.
(e) The band structure when $(\lambda_{1},\lambda_{2})=(0,0)$.
Two gapless states are degenerate at $k_{3}=0$ corresponding to the yellow point at $\lambda_{2}=0$ in (b). This degeneracy is lifted when $k_{3}\neq 0$ corresponding to the yellow points in (c).
These two states move to conduction bands and valence bands from $k_{3}=\Delta$ to $k_{3}=\pi$, corresponding to (d).
(f) An example of the band structure crossing the Fermi level in a more complicated manner than (e).
}
\end{figure}
Here we consider the energy spectrum at $k_{3}=0$ when $\lambda_{1}=0$.
From Eq.~(\ref{mainresult2b}), we find that the number $N_{+-}(\lambda_{1},\lambda_{2})\equiv N_{+}(\lambda_{1},\lambda_{2})-N_{-}(\lambda_{1},\lambda_{2})$ increases by 2 when $\lambda_{2}$ is changed from $\lambda_{2}=1$ to $-1$.
Here we note that the energy spectra are symmetric with respect to $\lambda_{2}\leftrightarrow -\lambda_{2}$ even at $\lambda_{1}=0$, and the states $\ket{\psi_{l}(\lambda_{2})}$ and $\ket{\psi_{l}(-\lambda_{2})}$ have opposite parities, assuming that no gapless state appears on the surface.
From this result, we find that in this process of changing $\lambda_{2}$, the exchange of states with odd and even parity takes place once as shown in Fig.~\ref{fig:appearing_hinge}(b).
From this argument, we conclude that even-parity states and odd-parity ones must be degenerate at $(\lambda_{1},\lambda_{2})=(0,0)$ as shown in Fig.~\ref{fig:appearing_hinge}(b).
In the arguments so far, we considered the case with $k_{3}=0$.
On the other hand, when $k_{3}=\pm \Delta$, where $\Delta$ is a small nonzero real number, the wave vector $\boldsymbol{k}$ is not a TRIM. Therefore, the states with even and odd parity hybridize, and a gap opens at $\boldsymbol{k}$. The resulting energy spectrum as $\lambda_{2}$ is changed from $1$ to $-1$ is shown in Fig.~\ref{fig:appearing_hinge}(c).
\begin{figure}
\includegraphics{tuikahingefig.pdf}
\caption{\label{tuikahingefig}The numbers of hinge states at four hinges. (a) Four hinges $\rm A, A', B, B'$ produced by the cutting procedure. (b, c) Let $n_{i}$ ($i={\rm A, A', B, B'}$) denote the number of hinge modes at the $i$-hinge. We find that $n_{\rm A}=-n_{\rm A'}$, $n_{\rm B}=-n_{\rm B'}$ and $n_{\rm A}+n_{\rm B}$ is an odd number. One of $n_{\rm A}(=-n_{\rm A'})$ and $n_{\rm B}(=-n_{\rm B'})$ is odd while the other is even. (b-1) and (b-2) are minimal configurations. (c-1) and (c-2) are general ones.}
\end{figure}
\begin{figure*}
\includegraphics{example3.pdf}
\caption{\label{systems_4band}Band structures of the tight-binding model (\ref{bhzmodel_ziba})
with parameters $t=c=1$, $m=2$, $B=1/2$ and $\theta=\pi/4$. The boundary condition in the $x_{3}$ direction is periodic, and those in the $x_{1}$ and $x_{2}$ directions are characterized by $\lambda_{1}$ and $\lambda_{2}$. The boundary parameters are (a) $\lambda_{1}=1$, $\lambda_{2}=1$, (b) $\lambda_{1}=1$, $\lambda_{2}=0$, (c) $\lambda_{1}=0$, $\lambda_{2}=1$ and (d) $\lambda_{1}=0$, $\lambda_{2}=0$. The system sizes along the $x_{1}$ and $x_{2}$ directions are $L_{1}=L_{2}=45$.
}
\end{figure*}
Next, we consider the spectrum at $k_{3}=\pi$, following the discussion in Sec.~\ref{cuttingprocedure}. In contrast with $k_{3}=0$,
there is no difference in the number of odd-parity eigenstates at the four TRIM on $k_{3}=\pi$.
Then, we can conclude that states with odd parity and states with even parity are not exchanged on $k_{3}=\pi$ as shown in Fig.~\ref{fig:appearing_hinge}(d).
Namely, the degeneracy of states at $(\lambda_{1},\lambda_{2})=(0,0)$ is present only at $k_{3}=0$ but not at $k_{3}=\pi$.
Therefore gapless states appear when $(\lambda_{1},\lambda_{2})=(0,0)$ as shown in Fig.~\ref{fig:appearing_hinge}(e).
These gapless states are hinge states because they appear only when there are no bonds across the two boundaries along the $x_{1}$ and $x_{2}$ directions (see Fig.~\ref{cuttingprocedurefig}(a)). Therefore, this system is a SOTI.
From the above discussion, we can conclude the following for $(\lambda_{1},\lambda_{2})=(0,0)$:
(i) the two hinge states are degenerate at $k_{3}=0$; (ii) in going from $k_{3}=0$ to $k_{3}=\pi$, one of the degenerate states moves to the valence band and the other moves to the conduction band.
A band structure of the hinge states of the SOTI can therefore look like Fig.~\ref{fig:appearing_hinge}(e).
We note that Fig.~\ref{fig:appearing_hinge}(e) is only an example, and the actual band structure can differ from it.
In such cases, we can conclude from (i) and (ii) that the number of states crossing the Fermi level between $k_{3}=0$ and $k_{3}=\pi$ in the band structure of the SOTI will always be an odd number (for example see Fig.~\ref{fig:appearing_hinge}(f)).
Note that the arguments so far assume the parity eigenvalues at the TRIM shown in Fig.~\ref{zfourparitypicture}(a); we extend the discussion to general cases of parity eigenvalues at the TRIM in Appendix~\ref{section:extension to the general cases}.
In addition, while a single pair of hinge states with positive and negative velocities appears in the particular example of this section, we conclude from the discussion in Appendix~\ref{section:extension to the general cases} that, in general, an odd number of pairs of hinge states always appears when $(\nu_1,\nu_2,\nu_3,\mu_1)=(0,0,0,2)$.
When $\lambda_{1}=\lambda_{2}=0$, the cutting produces four hinges, and the hinge states reside on these hinges. If, for example, one hinge state lies on the hinge A with a positive velocity along $x_{3}$ in Fig.~\ref{tuikahingefig}(a), inversion symmetry imposes that the hinge $\rm A'$ supports a hinge state with a negative velocity along $x_{3}$. The same is true for the pair of hinges B and $\rm B'$. From these considerations, we conclude that the hinges A and $\rm A'$ (and likewise B and $\rm B'$) have the same number of hinge states, and their hinge states form pairs under inversion symmetry, i.e.\ they have opposite signs of velocities. Let $n_{i}$ ($i={\rm A},{\rm A'}, {\rm B}, {\rm B'}$) denote the number of hinge modes at the $i$-hinge, defined as the number of hinge modes with positive velocity minus that with negative velocity. Then, from the above argument, we conclude that $n_{\rm A}=-n_{\rm A'}$, $n_{\rm B}=-n_{\rm B'}$, and $n_{\rm A}+n_{\rm B}$ is an odd number. Minimal configurations for $n_{i}$ are shown in Figs.~\ref{tuikahingefig}(b-1) and (b-2). In general, one of $n_{\rm A}(=-n_{\rm A'})$ and $n_{\rm B}(=-n_{\rm B'})$ is odd while the other is even, as shown in Figs.~\ref{tuikahingefig}(c-1) and (c-2).
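The parity constraint on $(n_{\rm A}, n_{\rm B})$ is elementary: if $n_{\rm A}+n_{\rm B}$ is odd, exactly one of the two is odd. A quick exhaustive check over a small range of net mode numbers (our own sketch):

```python
# exhaustive check over a small range of net hinge-mode numbers n_A, n_B
for nA in range(-3, 4):
    for nB in range(-3, 4):
        if (nA + nB) % 2 == 1:          # total number of pairs is odd
            # exactly one of n_A and n_B is odd
            assert (nA % 2 == 1) != (nB % 2 == 1)
print("parity constraint verified")
```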
So far, we have shown that $(\nu_{1},\nu_{2},\nu_{3},\mu_{1})=(0,0,0,2)$ leads to the existence of hinge states. Here we explain that in a case with $(\nu_{1},\nu_{2},\nu_{3})\neq (0,0,0)$, i.e.\ a three-dimensional Chern insulator, the existence of hinge states does not follow.
For example, when $(\nu_{1},\nu_{2},\nu_{3})=(0,1,0)$, the system is a Chern insulator, and chiral surface states exist on the $x_{2}$-$x_{3}$ surface.
In this case, along the $\lambda_{1}=0$ line in Fig.~\ref{fig:appearing_hinge}(a), the spectral symmetry under $\lambda_{2}\leftrightarrow -\lambda_{2}$ does not hold, because the existence of the gapless surface states invalidates the proof of the $\lambda_{2}\leftrightarrow -\lambda_{2}$ symmetry in Appendix \ref{appendixa1proofofeforlofalized}. Physically, this means that the chiral surface states hide the hinge states, if any. To summarize, when $(\nu_{1},\nu_{2},\nu_{3})\neq(0,0,0)$, $\mu_{1}=2$ does not imply the existence of hinge states.
\section{\label{section:Tight-binding model}Tight-binding model}
Here, we perform a model calculation to verify the arguments in the previous section.
We start from a tight-binding model of a SOTI on a simple-cubic lattice with inversion symmetry [\onlinecite{PhysRevB.98.205129}].
We construct the SOTI model by adding a uniform Zeeman magnetic field $\boldsymbol{B}=B(-\sin{\theta}, \cos{\theta},0)$, i.e.\ a term $-\tau_{0}\otimes \boldsymbol{B}\cdot \boldsymbol{\sigma}$, to the model of the three-dimensional TI in Ref.~[\onlinecite{PhysRevB.78.195424}].
The resulting four-band tight-binding model is given by
\begin{align}\label{bhzmodel_ziba}
H(\boldsymbol{k})=&-t\sum_{j}\sin{k_{j}}\tau_{1}\otimes \sigma_{j}\nonumber \\
&-(m-c\sum_{j}\cos{k_{j}})\tau_{3}\otimes \sigma_{0}
-\tau_{0}\otimes \boldsymbol{B}\cdot \boldsymbol{\sigma},
\end{align}
where $\tau_{j}$ and $\sigma_{j}\ (j=1, 2, 3)$ are Pauli matrices, and $\tau_{0}$ and $\sigma_{0}$ are the $2\times 2$ identity matrices.
In this model,
the term $-\tau_{0}\otimes \boldsymbol{B}\cdot \boldsymbol{\sigma}$ breaks symmetry under the time-reversal operator $T=-i\tau_{0}\otimes \sigma_{2}$ but respects symmetry under the inversion operator $I=\tau_{3}\otimes \sigma_{0}$.
To realize a SOTI phase,
we set $t=c=1,\ m=2,\ B=1/2$ and $\theta=\pi/4$ in the following.
We set the Fermi energy to be $E_{F}=0$.
\begin{figure}
\includegraphics{model_calculation.pdf}
\caption{\label{fig:lamz_lamy}Energy spectra as $\lambda_{1}$ is changed from $-1$ to $1$
for the model (\ref{bhzmodel_ziba}).
Parameters are set as $t=c=1$, $m=2$, $B=1/2$ and $\theta=\pi/4$.
(a) $\lambda_{2}=1$ and $k_{3}=0$. (b) $\lambda_{2}=-1$ and $k_{3}=0$. (c) $\lambda_{2}=0$ and $k_{3}=0$. (d) $\lambda_{2}=0$ and $k_{3}=0.05$. (e) $\lambda_{2}=0$ and $k_{3}=0.1$. (f) $\lambda_{2}=0$ and $k_{3}=\pi$.}
\end{figure}
In this model, the surface Dirac cones on the surfaces perpendicular to the $x_{1}$ or $x_{2}$ axis are gapped by the uniform magnetic field.
Between the surfaces with an inward magnetic field and those with an outward magnetic field, the signs of the mass term of the surface Dirac cones are opposite. Therefore, at the intersections of these two surfaces, gapless states necessarily appear.
In this model, only the $\Gamma$ point [$\boldsymbol{k}=(0, 0, 0)$] has two odd-parity states, and the other TRIM have only even-parity states, as in Fig.~\ref{zfourparitypicture}(a).
From these parity eigenvalues,
we get the weak indices $\nu_{1}=\nu_{2}=\nu_{3}=0$, and the strong index $\mu_{1}=2$.
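These parity eigenvalues and indices can be checked directly from Eq.~(\ref{bhzmodel_ziba}). The Python sketch below (ours, not part of the original calculation) builds $H(\boldsymbol{k})$, verifies that inversion symmetry is preserved while time-reversal symmetry is broken, and counts the odd-parity occupied states at the eight TRIM:

```python
import numpy as np
from itertools import product

# Pauli matrices
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]

t, c, m, B, theta = 1.0, 1.0, 2.0, 0.5, np.pi / 4
b = B * np.array([-np.sin(theta), np.cos(theta), 0.0])

def H(k):
    """Bloch Hamiltonian of the four-band model; basis order tau x sigma."""
    h = -sum(t * np.sin(k[j]) * np.kron(sx, sig[j]) for j in range(3))
    h -= (m - c * sum(np.cos(k[j]) for j in range(3))) * np.kron(sz, s0)
    h -= np.kron(s0, sum(b[j] * sig[j] for j in range(3)))
    return h

I = np.kron(sz, s0)              # inversion operator  I = tau_3 x sigma_0
T = -1j * np.kron(s0, sy)        # time reversal (together with complex conjugation)

k = np.array([0.3, -0.7, 1.1])   # generic test momentum
assert np.allclose(I @ H(k) @ I, H(-k))                       # inversion preserved
assert not np.allclose(T @ H(k).conj() @ T.conj().T, H(-k))   # TRS broken by Zeeman

n_minus = {}
for j in product((0, 1), repeat=3):
    evals, evecs = np.linalg.eigh(H(np.pi * np.array(j)))
    occ = evecs[:, evals < 0]                   # occupied states, E_F = 0
    parity = np.real(np.diag(occ.conj().T @ I @ occ))
    n_minus[j] = int(np.sum(parity < 0))
print(n_minus[(0, 0, 0)], sum(n_minus.values()))  # -> 2 2: odd parity only at Gamma
assert ((-sum(n_minus.values())) % 4) == 2        # strong index mu_1 = 2
```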
Here we set a periodic boundary condition in the $x_{3}$ direction.
The system has a size $L_{1}\times L_{2}$ along the $x_{1}$ and $x_{2}$ directions, and we first set periodic boundary conditions along these directions with $L_{1}=L_{2}=L=2M+1$.
In the calculation we set the system size as $L\times L=45\times 45$.
We then replace the hopping amplitudes $t$ and $c$ for all bonds across the boundary between $x_{1}=-M$ and $x_{1}=M$ by $\lambda_{1}t$ and $\lambda_{1}c$, where $\lambda_{1}$ is real. In addition we similarly replace the hopping amplitudes for all bonds that cross the boundary between $x_{2}=-M$ and $x_{2}=M$ by $\lambda_{2}t$ and $\lambda_{2}c$, where $\lambda_{2}$ is real.
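Before turning to the full $45\times 45$ calculation, the spectral-flow counting of Sec.~\ref{cuttingprocedure} can be checked on a small system. The sketch below (our own real-space implementation, using $L=7$ to keep the diagonalization cheap) builds the $k_{3}=0$ Hamiltonian with the boundary factors $\lambda_{1}$ and $\lambda_{2}$, counts $N_{-}(\lambda_{1},\lambda_{2})$ with the real-space inversion operator, and reproduces $N_{-}(1,1)-N_{-}(-1,1)=2$ of Eq.~(\ref{kyusiki}):

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]

t, c, m, B, theta = 1.0, 1.0, 2.0, 0.5, np.pi / 4
b = B * np.array([-np.sin(theta), np.cos(theta), 0.0])

M = 3
L = 2 * M + 1                         # small odd size (the text uses L = 45)
xs = list(range(-M, M + 1))
site = {(x1, x2): i for i, (x1, x2) in
        enumerate((x1, x2) for x1 in xs for x2 in xs)}

# k_3 = 0: cos k_3 = 1 enters the on-site mass term
onsite = -(m - c) * np.kron(sz, s0) - np.kron(s0, sum(b[j] * sig[j] for j in range(3)))
# forward hopping blocks along +x_1 and +x_2 (sin -> i/2, cos -> 1/2)
hop = [1j * t / 2 * np.kron(sx, sig[d]) + c / 2 * np.kron(sz, s0) for d in (0, 1)]

def H_real(lam1, lam2):
    N = len(site)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    for (x1, x2), i in site.items():
        H[4*i:4*i+4, 4*i:4*i+4] = onsite
        for d, lam in ((0, lam1), (1, lam2)):
            y = [x1, x2]; y[d] += 1
            f = 1.0
            if y[d] > M:              # wrap-around bond carries the factor lambda
                y[d] = -M; f = lam
            j = site[tuple(y)]
            H[4*i:4*i+4, 4*j:4*j+4] += f * hop[d]
            H[4*j:4*j+4, 4*i:4*i+4] += f * hop[d].conj().T
    return H

# real-space inversion: (x1, x2) -> (-x1, -x2) combined with tau_3 x sigma_0
N = len(site)
P = np.zeros((4 * N, 4 * N), dtype=complex)
for (x1, x2), i in site.items():
    j = site[(-x1, -x2)]
    P[4*j:4*j+4, 4*i:4*i+4] = np.kron(sz, s0)

def N_minus(lam1, lam2):
    evals, evecs = np.linalg.eigh(H_real(lam1, lam2))
    V = evecs[:, evals < 0]           # occupied states, E_F = 0
    return int(np.sum(np.linalg.eigvalsh(V.conj().T @ P @ V) < 0))

print(N_minus(1, 1), N_minus(-1, 1))  # -> 50 48: difference 2, as in Eq. (kyusiki)
```

The counts agree with Eq.~(\ref{noddlambda1}) and (\ref{noddlambdaminus1}): $(L^{2}-1)\nu/2=48$ from the non-TRIM pairs plus $n_{-}(0,0,0)=2$ or $n_{-}(\pi,0,0)=0$ at the remaining TRIM.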
\begin{figure*}
\includegraphics{location_hinge.pdf}
\caption{\label{fig:location_hinge}(Color online)
Positions of hinge states. Blue lines represent an odd number of pairs of hinge states, and red lines represent an even number of pairs of hinge states.
(a) Hinge states appear along the $x_{3}$ direction.
(b, c) By introducing the cutting procedure along the $x_{2}$ and $x_{3}$ directions, we find that hinge states appear along the $x_{1}$ direction.
(d-g) Hinge states along the $x_{1}$, $x_{2}$ and $x_{3}$ directions. The numbers of hinge modes directed toward the corner $X$ of the system are defined as $N_{1}$, $N_{2}$ and $N_{3}$, respectively.
The realizable positions of hinge modes are only (d) and (g) because of charge conservation at the corner $X$.}
\end{figure*}
First, we calculate the band structures of the model for ($\lambda_{1},\lambda_{2}$)=($1,1$), ($1,0$), ($0,1$) and ($0,0$). The results are shown in Figs.~\ref{systems_4band}(a-2), (b-2), (c-2) and (d-2), respectively, and the corresponding schematic figures are Figs.~\ref{systems_4band}(a-1), (b-1), (c-1) and (d-1). From these results, we find that gapless states appear only in Fig.~\ref{systems_4band}(d-2).
In Fig.~\ref{systems_4band}(a) ($\lambda_{1}=\lambda_{2}=1$), the system has no boundary, and the eigenstates are bulk states and are gapped. In Fig.~\ref{systems_4band}(b) ($\lambda_{1}=1$, $\lambda_{2}=0$) and (c) ($\lambda_{1}=0$, $\lambda_{2}=1$) the system has surfaces, and the results in (b-2) and (c-2) show that the surface spectrum is also gapped. In Fig.~\ref{systems_4band}(d) ($\lambda_{1}=\lambda_{2}=0$), the system has surfaces and hinges. Therefore, the gapless states in Fig.~\ref{systems_4band}(d-2) are hinge states.
Next, we calculate the change in the energy spectra as $\lambda_{1}$ varies from $-1$ to $1$ with $k_{3}$ and $\lambda_{2}$ fixed. The results are shown in Fig.~\ref{fig:lamz_lamy}.
When $\lambda_{1}$ changes from $\lambda_{1}=-1$ to $\lambda_{1}=1$, two states are interchanged between the conduction and the valence bands when $\lambda_{2}=1$ and $k_{3}=0$ (Fig.~\ref{fig:lamz_lamy}(a)) but not when $\lambda_{2}=-1$ and $k_{3}=0$ (Fig.~\ref{fig:lamz_lamy}(b)).
The results in Figs.~\ref{fig:lamz_lamy}(a) and (b) correspond to Figs.~\ref{energylambda}(d) and (f) respectively.
Therefore, the results of this model calculation are consistent with the discussion in Sec.~\ref{cuttingprocedure}.
We discuss here the results for $\lambda_{2}=0$ with various values of $k_{3}$.
States are interchanged between the conduction and the valence bands in Fig.~\ref{fig:lamz_lamy}(c) ($k_{3}=0$),
but not in Fig.~\ref{fig:lamz_lamy}(d) ($k_{3}=0.05$) and (e) ($k_{3}=0.1$).
The result in Fig.~\ref{fig:lamz_lamy}(c) corresponds to Fig.~\ref{fig:appearing_hinge}(b) and the results in
Figs.~\ref{fig:lamz_lamy}(d) and (e) correspond to Fig.~\ref{fig:appearing_hinge}(c).
Fig.~\ref{fig:lamz_lamy}(f) is the energy spectrum when $k_{3}=\pi$.
As mentioned in Sec.~\ref{subsec:Appearance of hinge states}, an
interchange of states between the conduction and the valence bands does not occur.
From the above, we have confirmed that all the results of the model calculations are consistent with the discussion in Sec.~\ref{topological index and cutting procedure}.
\section{Positions of hinge states}\label{sec:positionofhingestates}
In this section, we consider a crystal with a parallelepiped shape with its edges along the $x_{i}$ axes (see Fig.~\ref{fig:location_hinge}) and discuss which hinges support gapless hinge states.
From the previous discussion, we find that pairs of hinge states always appear in insulators with $\mathbb{Z}_{4}=2$ when $\lambda_{1}=\lambda_{2}=0$.
Each pair consists of a hinge state with positive velocity and one with negative velocity which are related by inversion symmetry as shown in Fig.~\ref{tuikahingefig}.
In addition, the number of the pairs is odd.
When the system size is very large, every hinge state should be localized at one of the four hinges (see Fig.~\ref{tuikahingefig}). From the inversion symmetry of the whole system, when one hinge state is at one hinge, its partner should reside at the diagonally opposite hinge in Fig.~\ref{tuikahingefig}.
Therefore we conclude that gapless states appear at hinges of the system as shown in Fig.~\ref{fig:location_hinge}(a).
In general, in Fig.~\ref{fig:location_hinge}(a), an odd number of pairs of hinge states appears at two hinges facing each other (blue lines), and an even number of pairs appears at the other two hinges facing each other (red lines), because the total number of pairs is odd.
In the arguments so far, we have established the appearance of hinge states along the $x_{3}$ direction by introducing the cutting procedure along the $x_{1}$ and $x_{2}$ directions.
We similarly find hinge states along the $x_{1}$ direction by introducing the cutting procedure along the $x_{2}$ and $x_{3}$ directions.
From this, we can consider two cases, shown in Figs.~\ref{fig:location_hinge}(b) and (c), as patterns of the positions and directions in which hinge states appear.
We furthermore consider hinge states in the $x_{2}$ direction by introducing the cutting procedure along the $x_{1}$ and $x_{3}$ directions.
Then, we can consider four cases of Figs.~\ref{fig:location_hinge}(d-g).
\begin{figure}
\includegraphics{hoti.pdf}
\caption{\label{fig:hoti}Second-order topological insulator. (a) Let $N_{1}$, $N_{2}$ and $N_{3}$ be the numbers of $2M_{1}$, $2M_{2}$ and $2M_{1}+2M_{2}$ hinge states, respectively.
One can attach two 2D Chern insulators with the same Chern number on two surfaces of opposite sides of the crystal, while preserving inversion symmetry.
By attaching two 2D Chern insulators with Chern numbers $\mathcal{C}=2M_{1}$ and $\mathcal{C}=2M_{2}$ on the $x_{1} x_{3}$- and $x_{2}x_{3}$-surfaces respectively,
one can make the number of the hinge modes at the red hinges zero.
(b) Hinge states form a closed loop. In addition, the number of hinge states is odd.}
\end{figure}
Here we should discard unphysical cases among the four cases of Figs.~\ref{fig:location_hinge}(d-g).
In these figures, three hinges meet together at each corner of the crystal.
At each corner, the number of incoming hinge modes should be equal to that of outgoing hinge modes, where ``incoming'' and ``outgoing'' refer to the signs of the velocities of hinge states.
This is shown as follows. In equilibrium, a current flows along the hinge modes, and at each corner the incoming current is equal to the outgoing current, because otherwise charge would accumulate at the corner, growing linearly in time. Now suppose we increase the chemical potential by $\Delta \mu$ within the gap. Each hinge mode then acquires an additional current $\frac{e^{2}}{h}\Delta \mu$. For the current to remain conserved at each corner after the shift $\Delta \mu$, the number of incoming hinge modes must equal that of outgoing hinge modes.
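In symbols: if $N_{\mathrm{in}}$ and $N_{\mathrm{out}}$ denote the numbers of incoming and outgoing hinge modes at a given corner, the additional currents produced by the shift $\Delta\mu$ must cancel,
\[
\left(N_{\mathrm{in}}-N_{\mathrm{out}}\right)\frac{e^{2}}{h}\Delta\mu=0
\quad\Longrightarrow\quad N_{\mathrm{in}}=N_{\mathrm{out}}.
\]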
This argument is similar to the one in Ref.~\cite{berrypahaseinelectronic} for proving that the chiral edge currents are determined by the bulk orbital magnetization in a 2D insulating ferromagnet.
From these discussions, we conclude that the only possible positions of hinge states are Figs.~\ref{fig:location_hinge}(d) and (g).
Because Fig.~\ref{fig:location_hinge}(g) is reduced to Fig.~\ref{fig:location_hinge}(d) by flipping the sign of $x_{3}$, Fig.~\ref{fig:location_hinge}(d) is essentially the only possibility for hinge modes.
In addition, we have some freedom in modifying the hinge modes without closing the bulk gap.
One can attach two-dimensional Chern insulators on the surfaces of the system, which modifies the number of hinge modes at each hinge while keeping the bulk unchanged.
This discussion is similar to the one in Refs.~\cite{schindler2018higher,PhysRevB.97.205136,PhysRevB.98.205129}.
Note that this operation should preserve inversion symmetry. Therefore, we should simultaneously attach two 2D Chern insulators with the same Chern number on two surfaces of the opposite sides of the crystal. Then in Fig.~\ref{fig:hoti}, one can make the number of the hinge modes at the hinges with an even number of hinge modes (shown in red) to be zero.
To show this let us put $N_{1}=2M_{1}$ and $N_{2}=2M_{2}$ ($M_{1}, M_{2}$: integer), and we get $N_{3}=-2(M_{1}+M_{2})$. Then we attach 2D Chern insulators with Chern number $2M_{1}$ onto $x_{1} x_{3}$-surfaces, and those with Chern number $2M_{2}$ onto $x_{2}x_{3}$-surfaces.
As a result, there is no longer a hinge state along the red lines, and the remaining gapless hinge states form a closed loop as shown in Fig.~\ref{fig:hoti}(b).
In addition, the number of hinge modes is an odd number.
Thus, to summarize, we have shown that the distribution of the gapless hinge states is as shown in Fig.~\ref{fig:hoti}(b), by using the freedom to attach 2D Chern insulators while preserving inversion symmetry.
In previous papers \cite{PhysRevB.97.205136,PhysRevB.98.205129}, the same distribution has been proposed for particular examples of SOTIs realized as $\mathbb{Z}_{2}$ topological insulators with magnetic field or magnetization.
However, it is not obvious whether it holds for general SOTIs.
Here, we have shown that hinge states appear as shown in Fig.~\ref{fig:hoti}(b) when surfaces are gapped and $\mathbb{Z}_{4}=2$ without relying upon specific models.
\section{CONCLUSION}\label{section:conclusion}
In this paper, we have given a general proof that any insulator with inversion symmetry and gapped surfaces has hinge states when the $\mathbb{Z}_{4}$ topological index $\mu_{1}$ is $\mu_{1}=2$.
In the proof, we introduce the cutting procedure: we change boundary conditions along two directions by changing hopping amplitudes across the boundaries, and study the behavior of gapless states through this change.
We then show that this behavior of the gapless states is dictated by the strong $\mathbb{Z}_{4}$ topological index.
From this discussion, we show that when the strong
$\mathbb{Z}_{4}$ topological index $\mu_{1}$ is $\mu_{1}=2$ and the weak topological indices $\nu_{1}$, $\nu_{2}$ and $\nu_{3}$ are $\nu_{1}=\nu_{2}=\nu_{3}=0$, gapless states appear inevitably at the hinges of three-dimensional insulators with gapped surfaces.
We also identify the only possible configuration for the hinge modes as in Fig.~\ref{fig:location_hinge}(d).
Together with a freedom to attach 2D Chern insulators on surfaces, it can always be reduced to Fig.~\ref{fig:hoti}(b) with an odd number of chiral hinge states.
\begin{acknowledgments}
This work was supported by JSPS KAKENHI Grant numbers JP18H03678 and JP16J07354; by JST - CREST Grant number JP-MJCR14F1; and by the MEXT Elements Strategy Initiative to Form Core Research Center (TIES). R. T. was also supported by JSPS KAKENHI Grant number JP18J23289.
\end{acknowledgments}
\subsection{Anosov maps} Anosov maps are fundamental objects
in the field of dynamical systems. A $C^1$ diffeomorphism $f$ of a
compact Riemannian manifold $M$ is called \textit{Anosov} if there
exist constants $\lambda$ in $(0,1)$ and $c > 0$ along with a
$df$-invariant splitting $TM = E^s \oplus E^u$ of the tangent bundle
of $M$ so that for all $n \ge 0$,
\begin{align*} \| df^n_x\bfv\| &\le c \lambda^n \| \bfv \| \quad
\text{for all $\bfv$ in $E^s(x)$, and} \\ \| df^{-n}_x\bfv \| &\le c
\lambda^n \| \bfv \| \quad \text{for all $\bfv$ in
$E^u(x)$.} \label{expanding definition}
\end{align*} The standard example of an Anosov map is a toral map
defined by a unimodular hyperbolic automorphism of $\boldR^n$ that
preserves an integer lattice.
The only other known examples of Anosov maps arise from automorphisms
of nilpotent groups. A hyperbolic automorphism of a simply connected
nilpotent Lie group $N$ that fixes a torsion-free lattice $\Gamma < N$
descends to an Anosov diffeomorphism of the compact nilmanifold
$N/\Gamma$. It is also possible that such an $N/\Gamma$ has a finite
quotient, called an {\em infranilmanifold}, and that the Anosov map
on $N/\Gamma$ finitely covers an Anosov map of the infranilmanifold.
An automorphism of a Lie algebra that descends to an Anosov map of a
compact quotient of the corresponding Lie group is called an {\em
Anosov automorphism.} Nilpotent Lie algebras are the only Lie algebras
that admit Anosov automorphisms. A Lie algebra $\frakn$ is called
{\em Anosov} if there exists a basis $\calB$ for $\frakn$ with
rational structure constants, and a hyperbolic
automorphism $f$ of $\frakn$ that is
represented relative to $\calB$ by a matrix in $GL_n(\boldZ).$ A
simply connected Lie group admits a hyperbolic automorphism preserving
a lattice if and only if its Lie algebra is Anosov
(\cite{auslander-scheuneman}).
In this paper we study the properties of Anosov automorphisms and
Anosov Lie algebras. There has already been some progress in this area.
S. G. Dani showed that the free $r$-step nilpotent Lie algebra
$\frakf_{n,r}$ on $n$ generators admits an Anosov automorphism when $r<n$
(\cite{dani-78}). Dani and Mainkar considered when
two-step nilpotent Lie algebras defined by graphs
admit Anosov automorphisms (\cite{dani-mainkar-05}). Real Anosov Lie
algebras and all their rational forms
have been classified in dimension eight and less (\cite{ito-89},
\cite{lauret-will-05}).
Lauret observed that
the classification problem for Anosov Lie algebras
contains within it the problem of classifying all Lie algebras
admitting $\boldZ^+$ derivations (\cite{lauret-03c}, \cite{lauret-03}).
Auslander and Scheuneman established the correspondence between
Anosov automorphisms of nilpotent Lie algebras and semisimple
hyperbolic automorphisms of free nilpotent Lie algebras preserving
ideals of a certain type (\cite{auslander-scheuneman}). A matrix
$A$ in $GL_n(\boldZ),$ together with a rational basis $\calB$ of
$\frakf_{n,r}$ induces
an automorphism $f^A$ of $\frakf_{n,r}.$ Suppose that
$\fraki$ is an ideal of $\frakf_{n,r}$ such that
\begin{enumerate}
\item{$\fraki$ is invariant under ${f^A},$}\label{prop0}
\item{the restriction of ${f^A}$ to $\fraki$ is unimodular,}
\item{$\fraki$ has a basis that consists of $\boldZ$-linear
combinations of elements of $\calB$, and}\label{prop1}
\item{ all eigenspaces for ${f^A}$ for eigenvalues with modulus one
are contained in $\fraki$.}\label{prop2}
\end{enumerate}
If we let $\frakn = \frakf_{n,r} / \fraki$ and
let $p : \frakf_{n,r} \to \frakn$ be the projection map, there is
an Anosov automorphism $\overline{f} : \frakn \to \frakn$ such that
$\overline{f} p = p f^A$. We will call the four conditions
the {\em Auslander-Scheuneman conditions.}
Auslander and Scheuneman showed that
any semisimple Anosov automorphism $f$ of an $r$-step
nilpotent Lie algebra $\frakn$ may be represented in the manner just
described, relative to a rational basis $\calB$ of a free nilpotent Lie algebra
$\frakf_{n,r}$, a semisimple matrix $A$ in $GL_n(\boldZ),$
and an ideal $\fraki$ in $\frakf_{n,r}$ satisfying the
four conditions. We will always assume without loss of generality that
$\fraki < [\frakf_{n,r},\frakf_{n,r}].$
In order to
understand general properties of Anosov Lie algebras, one must understand
the kinds of ideals of free nilpotent Lie algebras
that satisfy the Auslander-Scheuneman conditions
for some automorphism $f^A$ defined by a matrix $A \in GL_n(\boldZ).$
The dynamical properties of a
toral Anosov automorphism of $\boldR^n/\boldZ^n$
are closely related to the algebraic properties
of the characteristic polynomial $p$ of the matrix $A$ in
$GL_n(\boldZ)$ used to define the automorphism (See \cite{everest-ward}).
We show in this work that, similarly, the algebraic properties of the
characteristic polynomial $p$ of the matrix $A$ defining the
automorphism
$f^A$ of a free nilpotent Lie algebra
determine the structure of the ideals that satisfy the
Auslander-Scheuneman conditions for $f^A.$
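To fix ideas, recall the simplest toral example: the classical Anosov diffeomorphism of $\boldR^2/\boldZ^2$ defined by
\[ A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \in GL_2(\boldZ) \]
has characteristic polynomial $p(x) = x^2 - 3x + 1,$ whose roots $(3 \pm \sqrt{5})/2$ are algebraic units, neither of modulus one.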
\subsection{Summary of results} Now we summarize the main ideas of the
paper.
We associate to any automorphism $f^A,
A \in GL_n(\boldZ),$ of a free $r$-step nilpotent Lie algebra
$\frakf_{n,r}$ an $r$-tuple of polynomials $(p_1, p_2, \ldots, p_r).$
For $i=1, \ldots,r,$ the
polynomial $p_i$ is the characteristic polynomial of a matrix representing
the automorphism $f^A$
on an $i$th step $V_i$ of $\frakf_{n,r}.$
Let $K$ denote the splitting field of $p_1$ over $\boldQ,$
and let $G$ denote the Galois group for $K$ over $\boldQ.$
We associate to the
automorphism the action of the finite group $G$ on $\frakf_{n,r}(K),$
the free nilpotent
Lie algebra over $K.$ We show in Theorem
\ref{actions} that $G$ orbits in $\frakf_{n,r}(K)$ correspond to
rational invariant subspaces for $f^A,$ and the
characteristic polynomial for the restriction of $f^A$ to such an invariant
subspace is a power of an irreducible polynomial.
We analyze Anosov Lie algebras using the following general approach.
We fix a free
$r$-step nilpotent Lie algebra $\frakf_{n,r}.$ We consider the
class of automorphisms of $\frakf_{n,r}$ whose associated polynomial
$p_1$ has Galois group $G,$ where $G$ is isomorphic to a subgroup of
the symmetric group $S_n.$
We let $(p_1, p_2, \ldots, p_r)$ be the $r$-tuple of polynomials
associated to such a polynomial $p_1.$
Our goal is to determine the factorizations of the polynomials
$p_1, \ldots, p_r;$ this will tell us what rational invariant subspaces
for $f$ are. Such subspaces generate any ideal satisfying the
Auslander-Scheuneman conditions.
First we analyze the factorizations of $p_2, \ldots, p_r$ into
powers of irreducibles by understanding
orbits of the action of the Galois group of $p_1$ on $\frakf_{n,r}(K).$
Then we determine whether the corresponding
rational invariant subspaces are minimal and whether there are eigenvalues
of modulus one
using ideas from number theory (See Proposition \ref{full rank} and
Lemma \ref{how it can factor}).
We extend the classification of
Anosov Lie algebras to some new classes of two-step Lie
algebras.
\begin{thm}\label{main} Suppose that $\frakn$ is a
two-step Anosov
Lie algebra of type $(n_1, n_2)$
with associated polynomials $(p_1,p_2).$
Let $G$ denote the Galois group of $p_1.$
\begin{enumerate}
\item{If
$n_1 = 3, 4$ or $5,$ then $\frakn$ is one of the
Anosov Lie algebras listed in Table \ref{r=2}.}\label{classify-lowdim}
\item{If $p_1$ is irreducible and
the action of $G$ on the roots of $p_1$ is doubly transitive,
then $\frakn$ is isomorphic to the free nilpotent Lie algebra
$\frakf_{n,2}$. }\label{classify-full rank}
\end{enumerate}
\end{thm}
We can also classify Anosov Lie
algebras admitting automorphisms whose polynomials $p_1$
have certain specified Galois groups.
\begin{thm}\label{Sn-Cn}
Let $\overline{f}$ be a semisimple Anosov automorphism of
an $r$-step Anosov Lie algebra $\frakn.$
Let $(p_1, \ldots, p_r)$ be the $r$-tuple of polynomials associated
to the automorphism $f$ of the free nilpotent Lie algebra
$\frakf_{n,r}$ induced by $\overline{f}.$ Suppose that $p_1$ is irreducible.
\begin{enumerate}
\item{If the polynomial $p_1$ is of prime degree with
cyclic Galois group, then $\frakn$ is one of the Lie algebras
of type $C_n$ defined over $\boldR$
as in Definition \ref{ideal-i}. Conversely,
if $n$ is prime, and $\fraki$ is an ideal
of $\frakf_{n,r}$ of cyclic type defined over $\boldR$ containing the
ideal $\frakj_{n,r}$ defined in Definition \ref{j-def}, then the
Lie algebra $\frakn = \frakf_{n,r}/\fraki$
is Anosov. }\label{classify-prime}
\item{If the Galois group of $p_1$ is
symmetric, then
\begin{enumerate}
\item{If $r = 2,$ then $\frakn$ is isomorphic to
$\frakf_{n,2},$}
\item{If $r = 3,$ then $\frakn$ is isomorphic to one of
the following five Lie algebras:
$\frakf_{n,3},$ $\frakf_{n,3}/F_1, \frakf_{n,3}/F_2,$
$\frakf_{n,3}/(F_1 \oplus F_{2a}),$ and $\frakf_{n,3}/F_{2a},$
where the ideals $F_1$ and $F_2$ are as defined in Equation
\eqref{F defs} of Section \ref{p2p3} and $F_{2a}$ is as
in Proposition \ref{Fdecomp}. }
\end{enumerate}}
\end{enumerate}
\end{thm}
Matrices in $GL_n(\boldZ)$ having characteristic
polynomial with symmetric Galois group are dense in the sense of
thick and thin sets (\cite{serre-92}); hence, the second part of
the previous theorem describes
Anosov automorphisms of two- and three-step Lie algebras
that are generic in this sense.
We investigate some general properties of Anosov
automorphisms. We can describe the dimensions of
minimal nontrivial rational invariant subspaces.
Out of such analyses we obtain the following special case.
\begin{thm}\label{general-properties} Suppose that $\frakn$ is an
Anosov Lie algebra of type $(n_1, \ldots, n_r).$ If $n_1=3,$
then $n_i$ is a multiple of $3$ for all $i=2, \ldots, r,$ and if $n_1=4,$
then $n_i$ is even for all $i=2, \ldots, r.$
If $n_1$ is prime and
the polynomial $p_1$ is irreducible,
then $n_1$ divides $n_i$ for all $i = 2, \ldots, r,$ where $r < n.$
\end{thm}
The results of \cite{mainkar-06b} give
an alternate proof of this theorem.
One way to approach the classification problem is to fix the field in
which the spectrum of an Anosov automorphism
lies. The following theorem describes all
Anosov automorphisms whose spectrum lies in a quadratic extension of
$\boldQ.$
\begin{thm}\label{spectrum}
Let $f$ be a semisimple Anosov automorphism of a two-step nilpotent Lie
algebra $\frakn.$ Let $\Lambda \subset \boldR$ denote the spectrum of $f,$
and let $K$ denote the finite extension $\boldQ(\Lambda)$ of $\boldQ.$
If $K$ is a quadratic extension of $\boldQ,$
then $\frakn$ is one of the Anosov
Lie algebras defined in Definition \ref{graph}.
\end{thm}
The paper is organized as follows.
In Section \ref{preliminaries},
we review background material on nilpotent Lie algebras,
Anosov automorphisms and algebraic numbers, and we define the
$r$-tuple of polynomials associated to an Anosov automorphism
of a Lie algebra. In Section \ref{polynomials}, we describe properties
of the $r$-tuple of polynomials, such
as their reducibility and their Galois groups. In Proposition \ref{full rank},
we consider the set of roots of an Anosov polynomial
and describe multiplicative relationships among them; this
number-theoretic result may be
interesting in its own right.
In Section \ref{action-section}, we associate to an
automorphism $f$ of a free nilpotent Lie algebra $\frakf_{n,r}$
the action of a Galois group $G,$ and in Theorem \ref{actions} we
relate rational invariant subspaces of $\frakf_{n,r}$ to the orbits of $G.$ In Section \ref{symmetric-cyclic} we consider
Anosov Lie algebras for which the associated
Galois group is symmetric or cyclic.
Finally, in Section \ref{2&3}, we apply the results from
previous sections to the problem of classification
of Anosov Lie algebras whose associated polynomial $p_1$
has small degree.
Although the theorems we have stated above follow from various
results distributed throughout this work, for the sake of clarity,
in Section \ref{summary} we provide
self-contained proofs of the theorems.
\thanks{ This
project was initiated through discussions with Ralf Spatzier during a
visit to the University of Michigan funded by the University of
Michigan's NSF Advance grant \#0123571. The author is grateful to Ralf
Spatzier for his interest and encouragement, and many useful questions and
comments. Peter Trapa, Roger Alperin and Raz Stowe
also provided helpful input. }
\section{Preliminaries}\label{preliminaries} In this section, we
describe the structure of free nilpotent Lie algebras and their
automorphisms, and we review
some concepts from number theory that we will use later. We conclude with
some examples to illustrate the concepts presented.
\subsection{Nilpotent Lie algebras}\label{free-section} Let $\frakn$
be a Lie algebra defined over field $K.$
The \textit{central descending series} for
$\frakn$ is defined by $\frakn^0 = \frakn,$ and $\frakn^i =
[\frakn,\frakn^{i-1}]$ for $i \ge 1$. If $\frakn^r = 0$ and
$\frakn^{r-1} \ne 0,$ then $\frakn$ is said to be $r$-\textit{step
nilpotent}. When $\frakn$ is a nilpotent Lie
algebra defined over field $K$ and $n_i$ is the dimension of
the vector space $\frakn^{i-1} / \frakn^{i}$
over $K,$ then $(n_1, n_2, \ldots, n_r)$ is called
the \textit{type} of $\frakn$.
The \textit{free $r$-step nilpotent Lie algebra on $n$ generators over the field
$K$,} denoted
$\frakf_{n,r}(K),$ is defined to be the quotient algebra $\frakf_n(K) /
\frakf_n^{r}(K),$ where $\frakf_n(K)$ is the free Lie algebra
on $n$ generators over $K.$ Given a set $\calB_1$
of $n$ generators, the free nilpotent Lie algebra $\frakf_{n,r}(K)$
can be written as the direct sum $V_1(K) \oplus \cdots \oplus V_r(K),$ where
$V_1(K)$ is defined to be the span of $\calB_1$
over $K$ and for $i = 2, \ldots, r,$ the
subspace $V_i(K)$ is defined to be the span over $K$ of
$i$-fold brackets of the generators. We will call the space
$V_i(K)$ the {\em $i$th step} of $\frakf_{n,r}(K)$ without always
explicitly mentioning the dependence on $\calB_1.$
When the field $K$ has characteristic zero, we
identify the prime subfield of $K$ with $\boldQ.$
For our purposes, fields that we consider
will be intermediate to $\boldQ$ and $\boldC:$
one of $\boldQ, \boldR, \boldC,$ or the splitting field for a
polynomial in $\boldZ[x].$
We will always
assume that a generating set $\calB_1$ for
a free nilpotent Lie algebra $\frakf_{n,r}(K)$ has cardinality $n.$
The most natural basis to use for a free nilpotent Lie algebra
is a Hall basis. Let $\bfx_1, \bfx_2, \ldots, \bfx_n$ be $n$ generators
for $\frakf_{n,r}(K)$. We call these the {\em standard monomials of
degree one.} \textit{Standard monomials of degree} $k$ are defined
inductively: after the standard monomials of degree $k-1$ and less have
been defined, we define an order relation $<$ on them so that if
$\degree u < \degree v,$ then $u < v.$ Any linear combination of
monomials of degree $i$ will be said to be of degree $i.$ If $u$ has
degree $i$ and $v$ has degree $j,$ with $i+j=k,$ we define $[u,v]$ to
be a standard monomial of degree $k$ if $u$ and $v$ are standard
monomials with $u > v,$ and if, writing the standard monomial $u$ in
the form $u = [x,y],$ we also have $v \ge y.$ The standard monomials of degree $r$ or
less form a basis for $\frakf_{n,r}(K),$ called the {\em Hall basis}
(\cite{hall50}). For $i=1, \ldots, r,$ the subset $\calB_i = \calB
\cap V_i(K)$ of the basis $\calB$ is a basis for the $i$th step
$V_i(K)$ of $\frakf_{n,r}(K)$ consisting of
elements of the Hall basis
of degree $i.$ To each monomial of degree $i$ we can also associate a
{\em Hall word} of length $i$ from a given alphabet
$\alpha_1, \ldots, \alpha_n$ of
$n$ letters; for example, $[[\bfx_3,\bfx_1], \bfx_2]$ becomes
the word $\alpha_3 \alpha_1 \alpha_2.$
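As a small illustration of these rules, take $n = 2$ generators $\bfx_1, \bfx_2$ and $r = 3.$ The only standard monomial of degree two is $[\bfx_2,\bfx_1],$ and the standard monomials of degree three are
\[ [[\bfx_2,\bfx_1],\bfx_1] \quad \text{and} \quad [[\bfx_2,\bfx_1],\bfx_2], \]
with Hall words $\alpha_2\alpha_1\alpha_1$ and $\alpha_2\alpha_1\alpha_2;$ thus $\frakf_{2,3}(K)$ has type $(2,1,2).$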
Suppose $\frakg$ is a Lie algebra defined over a field
$K$ of characteristic zero. Suppose that $\calB$ is a basis of
$\frakg$ having rational structure constants. The basis $\calB$ determines
a {\em rational structure} on $\frakg.$ A subspace $E$ of $\frakg$ spanned by
$\boldQ$-linear combinations of elements of $\calB$ is called a {\em
rational subspace} for this rational structure.
Since the structure constants for the free nilpotent
Lie algebra $\frakf_{n,r}(K)$ relative to
a Hall basis $\calB$ are rational, a
Hall basis $\calB$ for $\frakf_{n,r}(K)$ defines a rational structure
on $\frakf_{n,r}(K).$
\begin{example}\label{2,3-Hall words} Let $\calC_1 = \{
\bfz_i\}_{i=1}^n$ be a set of $n$ generators for the free $r$-step
nilpotent Lie algebra $\frakf_{n,r}(K)$
on $n$ generators over a field $K$
and let $\calC =
\cup_{i=1}^r \calC_i$ be the Hall basis determined by $\calC_1.$
Elements of $ \calC_2,$ where $r \ge 2,$ are of
the form $[\bfz_i,\bfz_j]$ with $i > j,$ hence the dimension of
$V_2(K)$ over $K$ is $ (\begin{smallmatrix} n
\\ 2 \end{smallmatrix}).$
When $r \ge 3,$ from the definition of Hall monomial, elements in the
set $ \calC_3$ for $\frakf_{n,r}(K)$ are of the form $[[\bfz_i,\bfz_j],
\bfz_i]$ or $[[\bfz_i,\bfz_j],
\bfz_j]$ with $i > j$ or, if $n \ge 3,$ of the form
$[[\bfz_i,\bfz_j],\bfz_k]$ with $i,j,k$ distinct and $i$ and $k$
greater than $j.$ There are $ n(n-1)$ standard Hall monomials of the
first type, and when $n \ge 3,$ there are $2 (\begin{smallmatrix} n
\\ 3 \end{smallmatrix})$ standard Hall monomials of the second type,
for a total dimension of $\smallfrac{1}{3}(n+1)n(n-1)$ for the
third step $V_3(K)$ of $\frakf_{n,r}(K).$ We let
$ \calC_3^\prime$ denote the set of standard Hall monomials of the
first type, and let $ \calC_3^{\prime \prime}$ denote the set of
standard Hall monomials of the second type:
\begin{align*}
\calC_3^\prime &= \cup_{1 \le j < i \le n} \{[[\bfz_i,\bfz_j],\bfz_i],
[[\bfz_i,\bfz_j],\bfz_j] \},
\quad \text{and}\\
\calC_3^{\prime \prime} &= \cup_{1 \le j < i < k \le n} \{[[\bfz_i,\bfz_j],\bfz_k],
[[\bfz_k,\bfz_j],\bfz_i] \}.
\end{align*} Define subspaces $F_1(K)$ and $F_2(K)$ of $\frakf_{n,r}(K)$ by
\begin{equation}
\label{F defs}
F_1(K) = \myspan_K \calC_3^\prime, \qquad \text{and}
\quad F_2(K) = \myspan_K \calC_3^{\prime
\prime}. \end{equation} The subspace $V_3(K)$ is the direct sum of
$F_1(K)$ and $F_2(K)$ since $\calC_3$ spans $V_3(K)$ and is the
disjoint union of $\calC_3^\prime$ and $\calC_3^{\prime \prime}.$
\end{example}
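The dimension count in Example \ref{2,3-Hall words} is a short computation:
\[
|\calC_3| = |\calC_3^\prime| + |\calC_3^{\prime\prime}|
= n(n-1) + 2 \cdot \frac{n(n-1)(n-2)}{6}
= n(n-1)\left(1 + \frac{n-2}{3}\right)
= \frac{1}{3}(n+1)n(n-1).
\]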
\subsection{Anosov automorphisms}\label{setup}
As we discussed previously, every Anosov automorphism can be represented
in terms of a matrix $A$ in $GL_n(\boldZ),$ an automorphism $f^A$
of $\frakf_{n,r}$ induced by $A$ and an ideal $\fraki < \frakf_{n,r}$
satisfying the four Auslander-Scheuneman conditions. In this section
we spell out some of the details involved in such a representation.
Let
$\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K)$ be the free $r$-step
nilpotent Lie algebra over field $K$ with
a set $\calB_1$ of $n$ generators.
Let $\calB = \cup_{i=1}^r \calB_i$ be the Hall basis
determined by $\calB_1.$
Let $A$ be a matrix in
$GL_{n}(\boldZ)$ having no eigenvalues of modulus one. Together
the matrix $A$ and the basis $\calB_1$ define a linear map $f_1: V_1(K) \to V_1(K).$ The
map $f_1$ induces an automorphism $ f^A_K$
of $\frakf_{n,r}(K)$ that when
restricted to $V_1(K)$ equals $f_1.$ For all $i=1, \ldots, r,$ the
restriction $f_i$ of $f^A_K$ to $V_i(K)$ can be represented with respect
to the basis $\calB_i$ of $V_i(K)$ by a matrix $A_i$ having integer
entries that are independent of the field $K.$
For $i=1, \ldots, r,$ let $p_i$ denote the characteristic
polynomial of $A_i.$ We define the {\em $r$-tuple of polynomials
associated to $f$} to be $(p_1, \ldots, p_r).$ Note that
all of the polynomials are monic with integer coefficients, and
there is no dependence on $K$ in defining the polynomials: either
the matrix $A$ or the polynomial $p_1$ alone is enough to
uniquely define the $r$-tuple $(p_1, \ldots, p_r).$
A Lie algebra admits an Anosov automorphism if and only if it admits
a semisimple Anosov automorphism (\cite{auslander-scheuneman}).
Assume
that the linear map $f_1: V_1(L) \to V_1(L)$ defined by
$A \in GL_n(\boldZ)$ and rational basis $\calB$
is diagonalizable over the field $L$ (where $\mychar L = 0$).
The vector space $V_1(L)$
can be decomposed into the direct sum of minimal
nontrivial rational $f_1$-invariant subspaces $E_1, \ldots, E_s$.
For each rational invariant subspace $E_j, j = 1, \ldots, s,$
the restriction of $f_1$ to $E_j$ is diagonalizable over $L.$
Hence there is a basis $\calC_1 = \{\bfz_1, \ldots, \bfz_n\}$ of
$V_1(L)$ consisting of eigenvectors of $f_1$
with each eigenvector contained in
one of the subspaces $E_1, \ldots, E_s.$ Let
$\calC$ be the Hall basis of $\frakf_{n,r}(L)$ determined by $\calC_1.$
We will call such an eigenvector basis for an automorphism
$f$ of $\frakf_{n,r}(L)$ {\em compatible with the
rational structure}, and in the future, when we use
eigenvector bases for free nilpotent Lie algebras we will always
choose them to be compatible with the rational structure determined
by a fixed Hall basis.
\begin{notation}
We shall use $\calB$ to
denote the Hall basis of a free nilpotent Lie algebra
$\frakf_{n,r}(K)$ that determines the
rational structure and that, together with a matrix in $GL_n(\boldZ),$
defines the Anosov automorphism,
while we will use $\calC$ to denote a Hall basis that
diagonalizes the Anosov automorphism.
Suppose that $K$ has characteristic zero
and $\calB$ is a fixed Hall basis of $\frakf_{n,r}(K),$ and identify the
prime subfield of $K$ with $\boldQ.$
We will use $\frakf_{n,r}(\boldQ)$
to denote the subset of $\frakf_{n,r}(K)$ that is
the $\boldQ$-span of $\calB$ in $\frakf_{n,r}(K).$
\end{notation}
At times we will move between free nilpotent Lie algebras $\frakf_{n,r}(K)$
and $\frakf_{n,r}(L)$ defined over different field extensions
$K$ and $L$ of $\boldQ.$ We define a correspondence
between rational $f^A_K$-invariant subspaces of $\frakf_{n,r}(K)$
and rational $f^A_L$-invariant subspaces of $\frakf_{n,r}(L):$
\begin{definition}\label{correspond}
Let $K$ and $L$ be fields with characteristic zero.
Let $\calB_1(K)$ and $\calB_1(L)$ be generating sets,
both of cardinality $n,$ for free nilpotent Lie algebras
$\frakf_{n,r}(K)$ and $\frakf_{n,r}(L)$
respectively, and let $\calB(K)$ and $\calB(L)$ be the Hall bases
defined by $\calB_1(K)$ and $\calB_1(L)$ respectively.
A bijection $i_1: \calB_1(K) \to \calB_1(L)$ of the generating sets
naturally induces a bijection $i: \calB(K) \to \calB(L)$
of the Hall bases, and this in turn defines an isomorphism
$\overline{i}$ from $\frakf_{n,r}(\boldQ) < \frakf_{n,r}(K)$
to $\frakf_{n,r}(\boldQ) < \frakf_{n,r}(L),$ where
$\frakf_{n,r}(\boldQ)$ denotes the $\boldQ$-span of the fixed Hall
basis.
Endow $\frakf_{n,r}(K)$ and $\frakf_{n,r}(L)$ with the rational structures
defined by $\calB(K)$ and $\calB(L)$ respectively.
Given a matrix $A \in GL_n(\boldZ),$
let the maps $f^A_K \in \Aut(\frakf_{n,r}(K))$ and $f^A_L \in \Aut(\frakf_{n,r}(L))$
be defined by $A$ together with
$\calB_1(K)$ and $\calB_1(L)$ respectively. Observe that
$[f^A_K]_{\calB(K)} = [f^A_L]_{\calB(L)} \in M_N(\boldZ),$ where
$N = \dim \frakf_{n,r}.$
Let $E$ be a rational $f^A_K$-invariant subspace of $\frakf_{n,r}(K)$
spanned by vectors $\bfv_1, \ldots, \bfv_m$ in $\frakf_{n,r}(K);$ i.e.
coordinates of $\bfv_1, \ldots, \bfv_m$ with respect to $\calB(K)$
are in $\boldQ.$
Define the subspace $E^L$ of $\frakf_{n,r}(L)$ to be the
$L$-span of the vectors $\overline{i}(\bfv_1), \ldots,
\overline{i}(\bfv_m)$ in $\frakf_{n,r}(L).$ Clearly
$E^L$ is a rational, $f^A_L$-invariant subspace of $\frakf_{n,r}(L).$
\end{definition}
\begin{remark}\label{f^2}
Observe that if $\fraki$ satisfies the Auslander-Scheuneman
conditions for a semisimple automorphism $f$ of a free nilpotent Lie algebra,
then it satisfies the conditions for $f^2.$ Therefore,
when seeking ideals of a free nilpotent Lie algebra satisfying the
four conditions for an automorphism $f,$
by moving to $f^2$ if necessary, we may assume
that the eigenvalues of the automorphism have product 1, and
that all the real eigenvalues are positive.
\end{remark}
The next example clarifies some of our definitions and notation.
\begin{example}\label{free-3,2}
Let $\frakf_{3,2}(\boldR) = V_1(\boldR) \oplus V_2(\boldR)$ be
the free two-step nilpotent Lie algebra on three generators $\bfx_1,
\bfx_2,$ and $\bfx_3.$ These three generators span the subspace
$V_1(\boldR).$ The Hall words of length two are $\bfx_1^\prime = [\bfx_3,
\bfx_2], \bfx_2^\prime = [\bfx_3,\bfx_1],$ and $\bfx_3^\prime =
[\bfx_2,\bfx_1];$ they span $V_2(\boldR).$ The union $\calB$ of $\calB_1 =
\{ \bfx_1, \bfx_2, \bfx_3\}$ and $\calB_2 = \{ \bfx_1^\prime,
\bfx_2^\prime, \bfx_3^\prime\}$ is the Hall basis determined by
$\bfx_1, \bfx_2, \bfx_3.$
Now let $A = A_1$ be a $3 \times 3$ matrix in $SL_3(\boldZ)$ that has
eigenvalues $\alpha_1, \alpha_2, \alpha_3,$ none of which
has modulus one. The matrix $A$ and the basis
$\calB_1$ define the linear map $f_1 : V_1 \to V_1.$ The linear map
$f_1$ induces an automorphism ${f^A}$ of $\frakf_{3,2}(\boldR).$ Let
$A_2$ denote the matrix representing the restriction of
$f^A$ to ${V_2(\boldR)}$ with respect to
the basis ${\calB_2}.$
The matrix $A_1$ has characteristic polynomial
\[ p_1(x) = (x - \alpha_1)(x - \alpha_2)(x-\alpha_3). \]
A short calculation shows that $A_2$ is similar to $A_1^{-1}$ and
has characteristic polynomial
\[
p_2(x) = (x - \alpha_2 \alpha_3)(x - \alpha_1
\alpha_3)(x-\alpha_1 \alpha_2) = (x - \alpha_1^{-1})(x - \alpha_2^{-1})
(x-\alpha_3^{-1}) . \]
Neither $A_1$ nor $A_2$ has eigenvalues of modulus one, so ${f^A}$
is an Anosov automorphism of $\frakf_{3,2}(\boldR).$
\end{example}
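The short calculation behind $p_2$ in Example \ref{free-3,2} can be made explicit with an eigenvector basis. If $\bfz_1, \bfz_2, \bfz_3$ is a basis of eigenvectors of $f_1$ over $\boldC,$ with $f_1 \bfz_i = \alpha_i \bfz_i,$ then because $f^A$ is an automorphism,
\[ f^A [\bfz_i, \bfz_j] = [f^A \bfz_i, f^A \bfz_j] = \alpha_i \alpha_j \, [\bfz_i, \bfz_j], \]
so the eigenvalues of $A_2$ are the products $\alpha_2\alpha_3,$ $\alpha_1\alpha_3$ and $\alpha_1\alpha_2.$ Since $A \in SL_3(\boldZ)$ gives $\alpha_1\alpha_2\alpha_3 = 1,$ each product equals the inverse of the remaining eigenvalue, which is exactly the factorization of $p_2$ displayed above.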
\subsection{Polynomials and algebraic
numbers}\label{polynomials-numbers}
We will call a monic polynomial \textit{Anosov} if it has integer
coefficients, it has constant term
$\pm 1$, and it has no roots with modulus one.
The roots of an Anosov polynomial are algebraic units.
We can identify each monic polynomial $p$ in $\boldZ[x]$ of degree $n$
with an automorphism of the free nilpotent Lie algebra $\frakf_{n,r}(\boldR)
= \oplus_{i = 1}^r V_i(\boldR)$ with generating set $\calB_1 = \{ \bfx_1, \ldots,
\bfx_n\}.$ Suppose $p = q_1 q_2 \cdots q_s$ is a factorization of
$p$ into irreducibles. Let $A_{i}$ be the companion matrix for
$q_i,$ for $i=1, \ldots, s,$
and define the matrix $A_p$ to be block diagonal with the
matrices $A_{1}, \ldots, A_{s}$ down the diagonal. As
already described, the matrix $A_p$ and the basis $\calB_1$
together define an automorphism of the free
$r$-step nilpotent Lie algebra on $n$ generators.
If $E$ is a nontrivial rational invariant subspace for an Anosov
automorphism $f$ of an Anosov Lie algebra,
we will let $p_E$ denote the characteristic polynomial for the
restriction of $f$ to $E.$ If $p$ and $q$ are polynomials in
$\boldZ[x],$ we define the polynomial $p \wedge q$ to be the
characteristic polynomial of the matrix $A_p \wedge A_q.$
Next we illustrate how an Anosov polynomial determines a class of
Anosov automorphisms.
\begin{example}\label{qr} Let $p$ be an Anosov polynomial of degree
$n$ that is a product of two irreducible factors $r_1$ and $r_2$ of
degrees $d_1$ and $d_2$ respectively. The companion matrices $B_1$
and $B_2$ to the polynomials $r_1$ and $r_2$ are in $GL_{d_1}(\boldZ)$
and $GL_{d_2}(\boldZ)$ respectively. Putting these matrices together
in a block diagonal matrix gives a matrix
\[ A_p = A_1 = \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix}\] in
$GL_{n}(\boldZ)$ with characteristic polynomial $p.$
Let $\frakf_{n,2}(\boldR) = V_1(\boldR)
\oplus V_2(\boldR)$ be the real free two-step nilpotent
Lie algebra on $n$ generators with generating set $\calB_1.$ The
matrix $A_1$ and the basis $\calB_1$ of $V_1(\boldR)$ define a linear map
$f_1: V_1(\boldR) \to V_1(\boldR)$ which induces an automorphism $f^A$ of
$\frakf_{n,2}(\boldR).$ Let $f_2 = f^A|_{V_2(\boldR)}.$ The map $f_2$ may be
represented by a matrix $A_2$ that is block diagonal with matrices
$B_1 \wedge B_1, B_1 \wedge B_2$ and $B_2 \wedge B_2$ along the
diagonal. Let $\alpha_1, \ldots, \alpha_{d_1}$ denote the roots of
$r_1$ and let
$\beta_1, \ldots, \beta_{d_2}$ denote the roots of $r_2.$
It can be shown that the matrix $A_2$ has characteristic
polynomial
\[ p_2 = (r_1 \wedge r_1)(r_1 \wedge r_2)(r_2 \wedge r_2),\] where
\begin{align*}
(r_1 \wedge r_1)(x) &=
\prod_{1 \le i < j \le d_1} (x -\alpha_i \alpha_j), \\
(r_1 \wedge r_2)(x) &=
\prod_{\substack{1 \le i \le d_1 \\ 1 \le j \le d_2}} (x - \alpha_i \beta_j), \quad \text{and} \\
(r_2 \wedge r_2)(x) &= \prod_{1 \le i < j \le d_2} (x - \beta_i \beta_j).
\end{align*}
As long as none of the roots of $p_2$ have modulus one, the map
$f^A$ is Anosov.
\end{example}
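The block structure of $A_2$ in this example can also be checked numerically. In the Python sketch below (our illustration; the factors $r_1 = x^2 - x - 1$ and $r_2 = x^3 - x - 1$ are our choice), each wedge factor is built from root products, has numerically integer coefficients because it is Galois-invariant, and the product of the three factors agrees with $p_2$.

```python
import numpy as np
from itertools import combinations, product

# Illustrative choice (ours, not from the text): r1 = x^2 - x - 1 and
# r2 = x^3 - x - 1, both irreducible with unit constant term.
r1_roots = np.roots([1, -1, -1])
r2_roots = np.roots([1, 0, -1, -1])

# Characteristic polynomials of the three diagonal blocks of A_2,
# built from products of pairs of roots.
r1r1 = np.poly([a * b for a, b in combinations(r1_roots, 2)])
r1r2 = np.poly([a * b for a, b in product(r1_roots, r2_roots)])
r2r2 = np.poly([a * b for a, b in combinations(r2_roots, 2)])

# Each factor has integer coefficients (it is Galois-invariant).
for f in (r1r1, r1r2, r2r2):
    assert np.allclose(f, np.round(f.real), atol=1e-6)

# Their product is p_2, whose roots are all pairwise products of the
# five roots of p = r1 * r2.
all_roots = np.concatenate([r1_roots, r2_roots])
p2 = np.poly([a * b for a, b in combinations(all_roots, 2)])
prod_factors = np.polymul(np.polymul(r1r1, r1r2), r2r2)
assert np.allclose(p2, prod_factors, atol=1e-6)
```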
Later we will need to know when polynomials in $\boldZ[x]$
have roots of modulus one and will use the following observation.
\begin{remark}\label{roots of modulus one}
Suppose that an irreducible polynomial $p$ in $\boldZ[x]$ of degree $n$
has a root $\alpha$ with modulus one.
Then the complex conjugate $\bar \alpha = \alpha^{-1}$ is also a
root of $p,$ so $p$ is self-reciprocal and $n$ is even. If $q$ is the
minimal polynomial of $\alpha + \alpha^{-1},$ then $p(x) = x^{n/2} q(x +
1/x).$ Hence, the Galois group for $p$ embeds in the wreath product of $C_2$
and the Galois group for the polynomial $q;$ in particular, complex
conjugation acts as an element of order two, so the Galois group for $p$
has even order.
\end{remark}
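The identity $p(x) = x^{n/2} q(x + 1/x)$ holds for any self-reciprocal polynomial of even degree, and is easy to test numerically. In the Python sketch below (our illustration), $p(x) = x^4 + 3x^2 + 1$ is self-reciprocal with $q(y) = y^2 + 1$; note that this $p$ has no roots of modulus one, so it does not satisfy the remark's hypothesis, and the check only exercises the factorization identity.

```python
import numpy as np

# p is self-reciprocal of degree n = 4; q has degree n/2 = 2.
def p(x):
    return x**4 + 3 * x**2 + 1

def q(y):
    return y**2 + 1

# Check the identity p(x) = x^(n/2) * q(x + 1/x) at sample points.
xs = np.linspace(0.5, 2.0, 25)
assert np.allclose(p(xs), xs**2 * q(xs + 1 / xs))

# Here p has no roots of modulus one: the root moduli are
# approximately 0.618 and 1.618.
moduli = np.abs(np.roots([1, 0, 3, 0, 1]))
assert np.all(np.abs(moduli - 1) > 0.1)
```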
\section{Polynomials associated to automorphisms}\label{polynomials}
\subsection{Properties of the $r$-tuple of characteristic polynomials}
\label{properties} Now we present some properties of the tuple of
polynomials associated to an automorphism of a free nilpotent Lie
algebra.
\begin{proposition}\label{p-properties} Let $f^A$ be a semisimple
automorphism of the free nilpotent Lie algebra $\frakf_{n,r}(\boldR)
= \oplus_{i=1}^r V_i(\boldR)$ defined by a matrix $A$ in
$GL_n(\boldZ)$ and the Hall basis $\calB$ defined by generating set
$\calB_1 = \{ \bfx_i\}_{i=1}^{n}.$ Let $(p_1, p_2, \ldots, p_r)$
be the $r$-tuple of polynomials associated to $f,$ let $\alpha_1,
\ldots, \alpha_n$ denote the roots of $p_1,$ and let $K$ denote the
splitting field for $p_1.$ Let $\calC_1 = \{\bfz_i\}_{i=1}^n$ be a
$f^A_K$-eigenvector basis of $V_1(K) < \frakf_{n,r}(K)$ compatible with the
rational structure defined by $\calB$ and let $\calC= \cup_{i=1}^r
\calC_i$ be the Hall basis of $\frakf_{n,r}(K)$ associated to
$\calC_1.$
\begin{enumerate}
\item{Each standard Hall monomial of degree $i$ on $\bfz_1, \ldots,
\bfz_n$ in the set $\calC_i$ is an eigenvector for
$ f_{K}^A|_{V_i(K)}$ whose
eigenvalue is the corresponding Hall word in $\alpha_1,
\ldots, \alpha_n.$ }\label{eval-tilde-pi}
\item{For $i=1, \ldots, r,$ let $ p_i = r_{i,1} \cdots r_{i,d_i}$ be
a factorization of $ p_i$ into $d_i$ irreducible monic polynomials in
$\boldZ[x]$, and let $V_i(\boldR) = \oplus_{j=1}^{e_i}E_{i,j}$ be a
decomposition of $V_i(\boldR)$ into $e_i$ minimal nontrivial rational
$f^A$-invariant subspaces. For all $i =1, \ldots, r,$
$d_i = e_i$ and the
map that sends $E_{ij}$ to the characteristic
polynomial of $f|_{E_{ij}}$ is a one-to-one correspondence between
the set of rational subspaces $\{E_{ij}\}_{j = 1}^{d_i}$ and the set of
factors $\{ r_{i,j} \, : \, j =1, \ldots,
e_i \}$ of $p_i.$}\label{rat-subsp}
\end{enumerate}
\end{proposition}
It follows from Part \eqref{eval-tilde-pi} of the proposition
that if the matrix $A$ is diagonalizable over $\boldC,$
then the automorphism $f^A$ is semisimple. In particular, if the polynomial
$p_1$ is separable over $\boldQ,$ then $f^A$ is semisimple.
\begin{remark}\label{as cond 2} As a consequence of Part \eqref{rat-subsp} of
the proposition, if $f$ is unimodular, the
characteristic polynomial for the restriction of $f$ to a rational invariant
subspace $E$ has a
unit constant term, hence the restriction of $f$ to
any rational invariant subspace $E$ is unimodular.
Therefore, the second of the four
Auslander-Scheuneman conditions is automatic.
\end{remark}
It is well known that there are no Anosov Lie
algebras of type $(n_1, \ldots, n_r),$ where $n_1 = 2$ and
$r > 1.$ This follows from Part \ref{eval-tilde-pi}
of the proposition. Henceforth we shall only consider
nilpotent Lie algebras where $n_1 \ge 3$ and $r \ge 2.$
\begin{proof}[Proof of Proposition \ref{p-properties}.]
The first part is elementary.
Hall words of degree $i$ in $\bfz_1, \ldots, \bfz_n$ span $V_i(K),$
for $i = 1, \ldots,
r.$ Because $f_{K}^A$ is an automorphism of $\frakf_{n,r}(K),$
a Hall word in $\bfz_1, \ldots,
\bfz_n$ is an eigenvector for $f_{K}^A$ whose
eigenvalue is the same Hall word in $\alpha_1, \ldots, \alpha_n.$
The second part of the proposition follows from the existence of
the elementary-divisor form of the rational canonical form for matrices.
The fact that the matrix is semisimple implies that the elementary
divisors are irreducible.
\end{proof}
\subsection{The polynomials $p_2$ and
$p_3$}\label{p2p3}
Let $A$ be a semisimple matrix in $GL_n(\boldZ),$ and let $K$ be the
splitting field for the characteristic polynomial $p_1$ of $A.$
Let
$f^A$ be the semisimple automorphism of
$\frakf_{n,r}(\boldR) = \oplus_{i = 1}^r V_i(\boldR),$ where $n \ge 3,$
induced by $A$ and a basis
$\calB_1$ for $V_1(\boldR).$
Let $\calC_1 = \{\bfz_i\}_{i=1}^n$ be an
eigenvector basis for $ V_1(K) < \frakf_{n,r}(K)$
compatible with the rational structure determined by
$\calB_1,$ where $f^A(\bfz_j) = \alpha_j \bfz_j,$ for $j = 1, \ldots, n;$
and let $\calC = \cup_{i=1}^r
\calC_i$ be the Hall basis of $\frakf_{n,r}(K)$ determined by $\calC_1.$
In Example \ref{2,3-Hall words} we described Hall words of length two
and three. By Proposition \ref{p-properties}, the eigenvalue for
an element $[\bfz_j,\bfz_i]$ of $\calC_2$ is $\alpha_i\alpha_j,$
so
\begin{equation}\label{p2-def}
p_2(x) = \prod_{1 \le j < i \le n} (x - \alpha_i \alpha_j).\end{equation}
Let $\calC_3^\prime$ and $\calC_3^{\prime
\prime}$ be as defined in Example \ref{2,3-Hall words}.
By Proposition \ref{p-properties}, an element
$[[\bfz_j,\bfz_i],\bfz_j]$
of $\calC_3^\prime$ is an eigenvector for
$f_{K}^A$ with eigenvalue $\alpha_i \alpha_j^2,$ and an element
$[[\bfz_i,\bfz_j],\bfz_k]$ of $\calC_3^{\prime \prime}$ is an
eigenvector for $f_{K}^A$ with eigenvalue $\alpha_i \alpha_j \alpha_k.$
Define the polynomials $q_1$ and $q_2$ by
\begin{equation}\label{q1,q2} q_1(x) = \prod_{1 \le i , j \le n, i \ne j}(x -
\alpha_i \alpha_j^2), \qquad \text{and} \qquad q_2(x) = \prod_{1 \le
i< j < k \le n}(x - \alpha_i \alpha_j \alpha_k).
\end{equation}
Because they are invariant under the action of the Galois group of $K$
over $\boldQ,$ the polynomials $q_1$ and $q_2$ have integer coefficients.
The polynomial $ p_3 = q_1
q_2^2$ is the characteristic polynomial for the restriction of the
automorphism $f^A_L$
to $V_3(L)$ for any extension $L$ of $\boldQ.$
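As a sanity check on the degrees in this construction (a Python sketch of our own; the function names are ours), the dimension of $V_i$ given by Witt's formula in the proof of Proposition \ref{anosov poly} should equal $\deg p_2 = \binom{n}{2}$ for $i = 2$ and $\deg q_1 + 2\deg q_2 = n(n-1) + 2\binom{n}{3}$ for $i = 3$:

```python
from math import comb

def mobius(m):
    # Möbius function by trial division; adequate for small m.
    result, k = 1, 2
    while k * k <= m:
        if m % k == 0:
            m //= k
            if m % k == 0:
                return 0
            result = -result
        k += 1
    if m > 1:
        result = -result
    return result

def dim_V(n, i):
    # Witt's formula for the dimension of the i-th step of the free
    # nilpotent Lie algebra on n generators.
    return sum(mobius(d) * n ** (i // d)
               for d in range(1, i + 1) if i % d == 0) // i

for n in range(3, 12):
    # deg p_2 = number of pairs {i, j} with i < j.
    assert dim_V(n, 2) == comb(n, 2)
    # deg q_1 = n(n-1), deg q_2 = C(n, 3), and p_3 = q_1 * q_2^2.
    assert dim_V(n, 3) == n * (n - 1) + 2 * comb(n, 3)
```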
\subsection{Anosov polynomials and their roots} In this section, we
discuss Anosov polynomials and their properties.
\begin{proposition}\label{anosov poly}
Let $p_1$ be an Anosov polynomial in
$\boldZ[x]$ of degree $n \ge 3.$ Let $(p_1, \ldots,
p_r)$ be the associated $r$-tuple of polynomials.
\begin{enumerate}
\item{
If $ p_1$ has
constant term one, then its reciprocal polynomial $( p_1)_R$ is a
factor of $ p_{n-1}$ and $( p_2)_R$ is a factor of $ p_{n-2}$. If
the constant term of $ p_1$ is $-1,$ then $( p_1)_R(-x)$ is a
factor of $ p_{n-1}$ and $( p_2)_R(-x)$ is a factor of $ p_{n-2}$.
}\label{recip}
\item{The polynomial
$p_i$ has at least one root of modulus greater than one
and at least one root of modulus less than one,
for all $i \ge 2.$ }\label{root}
\item{If the roots $\alpha_1, \ldots, \alpha_n$
of $p_1$ are viewed as indeterminates, then the constant
term of $p_i$ is $(\alpha_1 \cdots \alpha_n)^{D(i)},$ with the
exponent $D(i)$ given by
\[ D(i) = \frac{1}{ni}\sum_{d|i} \mu(d)
n^{i/d},\] where $\mu$ is the M\"obius function.}\label{dimVi}
\end{enumerate}
\end{proposition}
\begin{proof} Let $\alpha_1, \ldots, \alpha_n$ denote the roots of $p_1.$
Suppose that the constant term of $ p_1$ is
$(-1)^{n},$ so $\alpha_1 \cdots \alpha_{n} = 1$. The reciprocal
of $\alpha_j,$ for $j =1, \ldots, n,$ is $
\alpha_{1} \cdots \hat \alpha_{j} \cdots \cdots\alpha_{n},$
the Hall word $ \alpha_{n}
\cdots \hat \alpha_{j} \cdots \cdots\alpha_{1}$ of length $n-1.$ By
Proposition \ref{p-properties}, Part \eqref{eval-tilde-pi}, this number is a
root of $p_{n-1}.$ Thus $(p_1)_R$ is a factor of $ p_{n-1}.$
Similarly, the reciprocal of the root $\alpha_{j_1} \alpha_{j_2}$ of
$p_2,$ where $1 \le j_1 < j_2 \le n$, is $\alpha_1 \cdots
\hat \alpha_{j_1} \cdots \hat \alpha_{j_2} \cdots\alpha_n,$ which is
a permutation of a Hall word on $n-2$ letters, and therefore is a
root of $ p_{n-2}.$ Thus $(p_2)_R$ is a factor of $ p_{n-2}.$ If the
constant term of $p_1$ is $(-1)^{n+1},$ then $\alpha_1 \cdots
\alpha_n = -1$ and the same argument shows that whenever $\alpha$ is a
root of $ p_1,$ $-\alpha^{-1}$ is a root of $ p_{n-1},$ and when
$\alpha$ is a root of $ p_2,$ then $-\alpha^{-1}$ is a root of $
p_{n-2}.$
Now we prove Part \eqref{root}. Fix $i \ge 2.$ By
hypothesis, $\alpha_1 \cdots \alpha_n = \pm 1$ and no root has modulus
one. Order the roots so that $|\alpha_1| \le |\alpha_2| \le \cdots \le
|\alpha_n|;$ then $|\alpha_1| < 1 < |\alpha_n|.$ If $|\alpha_1 \alpha_2|
\ge 1,$ then $|\alpha_k| \ge 1/|\alpha_1|$ for all $k \ge 2,$ so
\[ 1 = |\alpha_1 \cdots \alpha_n| \ge |\alpha_1| \cdot
(1/|\alpha_1|)^{n-1} = (1/|\alpha_1|)^{n-2} > 1 \]
because $n \ge 3,$ a contradiction. Hence $|\alpha_1 \alpha_2| < 1,$
and the Hall word $\alpha_1^{i-1} \alpha_2,$ which is a root of $p_i,$
has modulus $|\alpha_1|^{i-2} \, |\alpha_1 \alpha_2| < 1.$
By the symmetric argument applied to $\alpha_{n-1} \alpha_n,$ the root
$\alpha_n^{i-1} \alpha_{n-1}$ of $p_i$ has modulus greater than one.
The dimension of $V_i(\boldR)$ is given by Witt's formula:
$\dim V_i(\boldR) = \frac{1}{i}\sum_{d|i}
\mu(d) n^{i/d},$ where $\mu$ is the M\"obius function (Corollary
4.14, \cite{reutenauer}).
Each of $\alpha_1, \ldots, \alpha_n$ must occur the same number of times
in the constant term of $p_i$ by Lemma 1 of \cite{auslander-scheuneman}.
Therefore, the constant term of $p_i,$ for
$1 \le i \le r,$ is $\alpha_1 \cdots \alpha_n$ to the power
$1/n \cdot \dim(V_i(\boldR)),$ as claimed.
\end{proof}
The next lemma helps identify roots of modulus one
for automorphisms whose first polynomial $p_1$ has Galois group of
odd order.
\begin{lemma}\label{odd-good} Suppose that the characteristic
polynomial $p_1$ of a semisimple hyperbolic matrix $A$ in $GL_n(\boldZ),$
where $n \ge 3,$ has
Galois group of odd order. Let
$f^A: \frakf_{n,r}(\boldR) \to \frakf_{n,r}(\boldR)$
be the automorphism of the free $r$-step nilpotent Lie algebra on
$n$ generators induced by $A$. If $\lambda$ is an eigenvalue of
$f^A$ with modulus
one, then $\lambda = 1$ or $\lambda = -1.$
\end{lemma}
\begin{proof} Let $p_1$ and $f^A$ be as in the statement of the lemma. Let
$(p_1, \ldots, p_r)$ be the $r$-tuple of polynomials associated
to $f^A.$
If a monic irreducible nonlinear polynomial in $\boldZ[x]$ has a root
of modulus one, by Remark \ref{roots of modulus one}, its Galois
group $G$ has even order.
By Proposition \ref{p-properties}, if $\alpha$ is an eigenvalue of
$f^A,$ it is a root of an irreducible factor $q$ of $p_i$ for
some $i = 1, \ldots, r.$
Since the splitting field for $q$ is a subfield of the splitting
field for $p_1,$ the Galois group $H$ for $q$ is the quotient of $G$ by a
normal subgroup, so $H$ has odd order. If $q$ were nonlinear with a
root of modulus one, then $H$ would have even order by the observation
above. Therefore, either $q$ is linear and $\alpha = \pm 1,$ or
$q$ is nonlinear and $|\alpha| \ne 1.$
\end{proof}
\subsection{The full rank condition}
Suppose that $p_1$ is an Anosov polynomial in $\boldZ[x]$ with
roots $\alpha_1, \ldots, \alpha_n.$
We will want to know
when the equation
\begin{equation}\label{system} \alpha_1^{d_1} \alpha_2^{d_2} \cdots
\alpha_n^{d_n} = 1\end{equation}
has integer solutions $d_1, \ldots, d_n.$ Note that if
$p_1$ has constant term $(-1)^n,$ then $\alpha_1 \cdots \alpha_n =1,$
and $d_1 = \cdots = d_n = d$ is a solution for any integer $d.$
\begin{definition}\label{full rank-def}
Let $\Lambda= \{\alpha_1, \ldots, \alpha_n\}$
be the set of roots of a polynomial $p$ in $\boldZ[x]$
with constant term $(-1)^n$
and degree $n \ge 2.$ The set $\Lambda$ is said to be of {\em full rank}
if the only integral
solutions to Equation \eqref{system} are of form $d_1 = d_2 = \cdots = d_n.$
\end{definition}
The next proposition describes how multiplicative relationships among the
roots of some polynomials in $\boldZ[x]$ depend on their Galois groups.
\begin{proposition}\label{full rank} Suppose that $\alpha_1, \ldots
,\alpha_n$ are roots of a degree $n$ irreducible monic polynomial $p$ in
$\boldZ[x]$ with constant term $(-1)^n,$ and suppose that none of
$\alpha_1, \ldots
,\alpha_n$ are roots of unity. Let $G$ denote the Galois group for
$p.$
The set $\{\alpha_1, \ldots,\alpha_n\}$ is of full rank
in the following situations.
\begin{enumerate}
\item{When the permutation representation of
$G$ on $\boldQ^n$ is the sum of the principal representation and a
representation that is irreducible over $\boldQ.$
}\label{irr-rep}
\item{When the action of $G$ on the set of roots of $p$
is doubly transitive.}\label{2t->fr}
\item{When $p$ is Anosov,
and precisely one of its roots $\alpha_1$
has modulus greater than one.}\label{bh-gen}
\end{enumerate}
\end{proposition}
An algebraic number $\alpha_1$ as in Part \eqref{bh-gen} of the
proposition is a \textit{P.-V.\ number}.
Properties of P.-V.\ numbers were
first investigated by Pisot and Vijayaraghavan. (See \cite{meyer-72}
and \cite{bertin-92} for background on P.-V.\ numbers.)
The proof of
Part \eqref{bh-gen} of the proposition is due to
Bell and Hare (\cite{bell-hare-05}); we repeat it here for the
sake of completeness.
The action of the Galois group
$G$ of $p_1 \in \boldZ[x]$ on the set $\{\alpha_1, \ldots, \alpha_n\}$
of enumerated roots
of $p_1$ gives an identification of $G$ with a subgroup of $S_n,$ and
we can define a permutation representation $\rho$ of $G$ on $\boldQ^n,$
with
\begin{equation}\label{rho-def}
\rho(g) (\beta_1, \ldots, \beta_n) = (\beta_{g(1)},
\ldots, \beta_{g(n)})\end{equation}
for $g \in G$ and $(\beta_1, \ldots, \beta_n) \in \boldQ^n.$
\begin{proof} Fix $\alpha_1, \ldots, \alpha_n$ as in the statement of the
theorem.
If $(d_1, \ldots, d_n) \in \boldZ^n$ is a solution to
Equation \eqref{system}, and $\sigma$ is in $G,$ then
\begin{equation*} \sigma(\alpha_1)^{d_1} \sigma(\alpha_2)^{d_2}
\cdots \sigma(\alpha_n)^{d_n} = 1,\end{equation*}
which may be alternately expressed as
\begin{equation}\label{system-2}
\alpha_1^{d_{\sigma^{-1}(1)}} \alpha_2^{d_{\sigma^{-1}(2)}}
\cdots \alpha_n^{d_{\sigma^{-1}(n)}} = 1.\end{equation}
Therefore, the set of integral solutions to Equation
\eqref{system} is invariant under
the permutation representation $\rho:$ for all $\sigma$ in $G$,
$(d_1, \ldots, d_n) \in \boldZ^n$ is a solution to Equation
\eqref{system} if and only if $(d_{\sigma^{-1}(1)}, \ldots, d_{\sigma^{-1}(n)})$ is a solution
to Equation \eqref{system}.
It is easy to see that the set $S$ of solutions to Equation \eqref{system}
in $\boldZ^n$ is closed
under addition and subtraction. Therefore, if
$(d_1, \ldots, d_n)$ is an integral solution to Equation \eqref{system},
then any vector in
\[\myspan_{\boldZ} \{ \rho(\sigma)(d_1, \ldots, d_n) \, : \, \sigma \in G \}\]
is also a solution to Equation \eqref{system}.
Suppose that $\rho$
decomposes as the sum of the trivial representation on $\boldQ(1,1, \ldots, 1)$
and an irreducible representation on $W = (1,1, \ldots, 1)^\perp.$
We will show by contradiction that the set $\{\alpha_1, \ldots, \alpha_n\}$
has full rank.
Suppose that $(d_1, \ldots, d_n)$ is an $n$-tuple of integers so that
Equation \eqref{system} holds and that $(d_1, \ldots, d_n)$
is not a scalar multiple of $(1,1, \ldots, 1).$ After
subtracting the appropriate multiple of $(1,1, \ldots, 1),$
we may assume that the solution
$(d_1, \ldots, d_n)$ is a nontrivial vector in $W.$ The
representation of $G$ on $W$ is irreducible over $\boldQ,$
and $(d_1, \ldots, d_n)$
is a nontrivial element of $W,$ so the invariant subspace
$\myspan_{\boldQ}
\{ \rho(\sigma)(d_1, \ldots, d_n) \, : \, \sigma \in G \}$ is
all of $W.$
Then $\myspan_{\boldQ} S = \boldQ^n,$
implying that
$(1, 0, \ldots, 0)$ is a $\boldQ$-linear combination of
solutions $(d_1, \ldots, d_n)$ to Equation \eqref{system}.
But then there exists an integer $N$ such that $\alpha_1^N = 1,$
a contradiction. Hence, every solution to Equation \eqref{system}
is a scalar multiple of $(1,1, \ldots, 1).$
If the action of $G$ is doubly transitive, then the permutation
representation of $G$ on $\boldC^n$ is the sum of the trivial
representation on $\boldC(1,1, \ldots, 1)$ and a representation on the
orthogonal complement that is irreducible over $\boldC$
(\cite{serre-77}, Exercise 2.6). A fortiori, the permutation
representation of $G$ on $\boldQ^n$ is the sum of the trivial
representation on $\boldQ(1,1, \ldots, 1)$ and a representation on the
orthogonal complement that is irreducible over $\boldQ.$ By Part
\eqref{irr-rep}, the set of roots is of full rank, which proves
Part \eqref{2t->fr}.
Finally, we prove Part \eqref{bh-gen}. The polynomial $p$ is
irreducible by hypothesis; without loss of generality, $\alpha_1 > 0.$
Suppose that the roots of $p$ satisfy the condition that
\[ \alpha_1 > 1 > |\alpha_2| \ge \cdots \ge |\alpha_n|, \]
and that Equation \eqref{system} holds for $d_1, d_2, \ldots, d_n.$
Let $m$ be the index so that
$d_m$ achieves the minimum of the set
$\{ d_i \, : \, i=1, \ldots, n\}.$ Since
$(d_m, \ldots, d_m)$ is a solution to Equation \eqref{system},
the $n$-tuple
\[ (e_1, \ldots, e_n) = (d_1 - d_m, d_2 - d_m, \ldots, d_n -d_m)\]
is a solution
to Equation \eqref{system} with $e_m = 0$ and $e_i \ge 0$ for all
$i = 1, \ldots, n.$
Because $p$ is irreducible, there exists a
permutation $\sigma$ in $G$ such
that $\sigma(\alpha_1) = \alpha_m,$ or if we identify
$G$ with a subgroup of $S_n$ in the natural way, $\sigma(1) = m.$ We
then have
\[ \rho(\sigma)(e_1, \ldots, e_n) =
(e_{\sigma(1)}, e_{\sigma(2)},
\ldots, e_{\sigma(n)}) = (0, e_{\sigma(2)}, \ldots,
e_{\sigma(n)})\]
is also a solution to the equation, so
\[ \alpha_2^{e_{\sigma(2)}}
\alpha_3^{e_{\sigma(3)}} \cdots
\alpha_n^{e_{\sigma(n)}} = 1. \] But
$|\alpha_i| < 1$ for $i = 2, \ldots, n,$ and
all the exponents are nonnegative, hence all the
exponents $e_i$ must be zero. Then $d_1 = d_2 = \cdots = d_n$ as desired,
and Part \eqref{bh-gen} holds.
\end{proof}
The next lemma shows that
when the set of roots of an Anosov polynomial $p_1$ is of full rank,
there are strong restrictions on the Galois groups for irreducible
factors of polynomials $p_2, p_3, \ldots$ associated to $p_1.$
\begin{lemma}\label{stabilizer}
Let $p_1$ be an Anosov polynomial of degree $n$ with
constant term $(-1)^n$. Suppose that the set of roots
$\{\alpha_1, \ldots, \alpha_n\}$ of $p_1$ has full rank.
Let $G$ denote the Galois group of $p_1,$ and let $(p_1, \ldots, p_r)$
be the $r$-tuple of polynomials associated to $p_1$ for some $r>1.$
Fix $p_i$ for some $i = 2, \ldots, r,$ and
suppose that $q$ is an irreducible nonlinear factor of $p_i$ over $\boldZ$
with root
$\beta = \alpha_1^{d_1} \cdots \alpha_s^{d_s}$ for $s \le n-1.$
The Galois group $G$ acts on the splitting
field $\boldQ(p_1)$ for $p_1,$ and the splitting
field $\boldQ(q)$ for $q$ is a subfield of $\boldQ(p_1).$
Let $H < G$ be the stabilizer of
$\boldQ(q).$
Any element $\sigma$ in $H$ has the properties
\begin{enumerate}
\item{$\sigma$ permutes the set $\{ \alpha_{s+1}, \ldots, \alpha_n\}.$}\label{permute-set}
\item{For $j, k = 1, \ldots, s,$
$\sigma(\alpha_j) = \alpha_k$ only if $d_j = d_k.$ That is,
$\sigma$ permutes the sets of roots having the same exponent in
the expression for $\beta$ in terms of $\alpha_1, \ldots, \alpha_n.$}
\end{enumerate}
Thus, $H$ is isomorphic to a subgroup of the direct product
\[S_{k_1} \times S_{k_2} \cdots \times S_{k_{m-1}} \times S_{n - s}, \qquad
(k_1 + \cdots + k_{m-1} = s) \]
of $m \ge 2$ symmetric groups.
\end{lemma}
\begin{proof} Let $\beta = \alpha_1^{d_1} \cdots \alpha_s^{d_s}$ be as
in the statement of the theorem, and let $\sigma$ be in the stabilizer $H$ of
$\boldQ(q).$ Then $\sigma(\beta) = \beta$ implies that
\[ \alpha_1^{d_{\sigma^{-1}(1)}} \alpha_2^{d_{\sigma^{-1}(2)}} \cdots
\alpha_s^{d_{\sigma^{-1}(s)}} \alpha_n^0 = \alpha_1^{d_1} \cdots \alpha_s^{d_s},\]
so then
\[ \alpha_1^{d_{\sigma^{-1}(1)} - d_1} \alpha_2^{d_{\sigma^{-1}(2)} - d_2} \cdots
\alpha_s^{d_{\sigma^{-1}(s)} - d_s} \alpha_n^0 = 1.\]
By definition of full rank,
\[ d_{\sigma^{-1}(1)} - d_1 = d_{\sigma^{-1}(2)} - d_2 = \cdots =
d_{\sigma^{-1}(s)} - d_s = 0.\]
Therefore, $d_{\sigma^{-1}(i)} = d_i$ for all $i = 1, \ldots, s;$ in other
words, if $\sigma(i) = j,$ then $d_i = d_j.$ Hence
$\sigma$ permutes each set of $\alpha_i$'s for which the exponents $d_i$
agree, including the nonempty
set $\{ \alpha_{s+1}, \ldots, \alpha_n\}$ where $d_i = 0.$
\end{proof}
The following lemma describes how the polynomials in the
$r$-tuple of polynomials $(p_1, p_2, \ldots, p_r)$
can factor, in terms of the properties of $p_1.$
\begin{lemma}\label{how it can factor} Suppose $p_1$
is a monic polynomial in $\boldZ[x]$ of degree $n \ge 3$ with constant
term $(-1)^n.$
Let $G$ denote the Galois group for $p_1.$ Let $(p_1, p_2, \ldots, p_r)$
be an $r$-tuple of polynomials associated to $p_1.$
Let $q_1$ and $q_2$ be as defined
in Equation \eqref{q1,q2}.
\begin{enumerate}
\item{
Assume that $p_1$ is separable.
If $p_2$ or $q_1$ factors over $\boldZ$ as a power of an irreducible
polynomial $r,$ then the degree of $r$ is $n-1$ or more.
If $q_2$ factors over $\boldZ$ as a power of an
irreducible polynomial $r,$ then the degree of $r$ is $n-2$ or more.}\label{degree-r}
\item{Suppose that
the set of roots of $p_1$ is of full rank.
\begin{enumerate}
\item{For $i =2, \ldots, r,$
the degree $k$ of any factor of $p_i,$ satisfies
$k = d (n/i) $ for some positive integer $d \le D(i),$
where
$D(i)$ is as defined in Proposition \ref{anosov poly}.
Therefore, if $\gcd(n,i)=g,$ then $n/g$ divides $k.$ }\label{factor-degree}
\item{
If $q$ is a nonlinear irreducible factor of $p_i$ over $\boldZ$
for some $i = 2, \ldots, r,$
then the normal subgroup $N$ of $G$ of automorphisms
fixing $\boldQ(q)$ does not act
transitively on the roots of $p_1.$ }\label{N-not-trans}
\item{If $p_2 = r^s$ or $q_1 = r^s$ for an irreducible monic
polynomial $r \in \boldZ[x],$
then $s = 1$ when $n \ge 3;$ and when $n \ge 4,$
if $q_2 = r^s$ for an irreducible $r,$ then $s=1.$}\label{fr-p2-irr}
\end{enumerate}}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\alpha_1,
\ldots, \alpha_n$ denote the roots of $p_1.$
If a polynomial $r$ is
irreducible with $m$ distinct roots, then $r^s$ has $m$ distinct roots
each of multiplicity $s.$ Thus, to prove
the first part, we simply count the number of roots of each polynomial
$p_2, q_1,$ and $q_2$ that are guaranteed to be distinct, and the
degree of $r$ is necessarily greater or equal to that number if
$p_2, q_1,$ or $q_2$ is of form $r^s.$ For
$p_2,$ roots of form $\alpha_1 \alpha_j, j = 2, \ldots, n$ are distinct; for
$q_1,$ roots of form $\alpha_1 \alpha_j^2, j = 2, \ldots, n$ are distinct;
and for
$q_2,$ roots of form $\alpha_1 \alpha_2 \alpha_j,$
$j = 3, \ldots, n,$ are distinct. This proves Part \eqref{degree-r}.
Now suppose that the set of roots of $p_1$ has full rank.
Let $q$ be a degree $k$
factor of $p_i$ over $\boldZ[x]$ for some $i = 2, \ldots, r.$
The constant term of $q$ is $\pm 1,$ and by the full rank property, of
form $(\alpha_1 \alpha_2 \cdots \alpha_n)^d$
for some positive integer $d.$ The constant term of $p_i$ is
$(\alpha_1 \cdots \alpha_n)^{D(i)},$ by Proposition \ref{anosov poly},
so $d \le D(i).$
On the other hand, roots of $p_i$ are Hall words of degree $i$ in
$\alpha_1, \alpha_2, \ldots, \alpha_n,$ so the constant term of $q$ is
the product of $k$ $i$-letter words in
$\alpha_1, \alpha_2, \ldots, \alpha_n.$ Thus, $ki = nd,$
and the degree $k$ is an integral multiple of $n/i$ as desired.
This proves Part \eqref{factor-degree}.
Now suppose that $q$ is nonlinear and irreducible.
Let $\beta$ be a root of $q.$ Using the identity
$\alpha_1 \cdots \alpha_n = 1,$ we may write $\beta$ in the form
$\alpha_1^{d_1} \cdots \alpha_{n-1}^{d_{n-1}}$ in which no $\alpha_n$ appears,
and Lemma \ref{stabilizer} applies.
Then by the lemma, the action of $N$ on $\{\alpha_1, \ldots, \alpha_n\}$
is not transitive.
Finally, to show irreducibility of $p_2, q_1$ and $q_2,$ use
the full rank condition to show that the roots of $p_2$ and
$q_1$ are distinct when $n \ge 3,$ and the roots of $q_2$ are
distinct when $n \ge 4.$
\end{proof}
We obtain a corollary that describes the dimensions of the steps
of certain sorts of Anosov Lie algebras.
\begin{corollary}\label{prime-dim}
Let $\frakn$ be an $r$-step nilpotent Lie algebra of type $(n_1, \ldots, n_r),$
where $n_1$ is prime and
$1 < r < n_1,$ that admits an Anosov automorphism $\overline{f}.$
Let $(p_1, p_2, \ldots, p_r)$ be the $r$-tuple of polynomials associated
to an automorphism $f$ of $\frakf_{n_1,r}(\boldR)$ that has $\overline{f}$ as a
quotient.
Suppose that the polynomial $p_1$ is irreducible.
Then $n_1$ divides $n_i$ for all $i =2,\ldots, r.$
\end{corollary}
\begin{proof}
Let $G$ denote the Galois group of the polynomial $p_1$
associated to the Anosov automorphism $f.$
Because $p_1$ is irreducible, its roots are distinct,
and $f$ is semisimple.
The group $G$ acts transitively on the $n_1$ roots of $p_1,$ so the
prime $n_1$ divides the order of $G,$ and by Cauchy's Theorem, there
is a subgroup of $G$ isomorphic to $C_{n_1},$ necessarily generated
by an $n_1$-cycle. The permutation representation of $C_{n_1}$ on
$\boldQ^{n_1}$ is the sum of the principal representation and an
irreducible representation of dimension $n_1 - 1$ (the cyclotomic
polynomial $\Phi_{n_1}$ being irreducible over $\boldQ$), and every
$G$-invariant subspace is in particular $C_{n_1}$-invariant. Hence the
permutation representation of $G$ on $\boldQ^{n_1}$ is the sum of the
principal representation and a representation that is irreducible over
$\boldQ,$ and the set of roots of $p_1$ has full rank by
Proposition \ref{full rank}.
Now let $q$ be the characteristic polynomial of a rational $f$-invariant
subspace $E$ contained in $V_i(\boldR)$ for some $2 \le i \le r.$ Because $n_1$ is
prime, and $i < n_1,$ the numbers $i$ and $n_1$
are coprime. Therefore, the dimension $k$ of
a rational invariant subspace for $f$ is an integral multiple of $n_1,$
by Part \eqref{factor-degree} of Lemma \ref{how it can factor}.
Therefore, the dimension $n_i$ of the $i$th step of $\frakn$ is a multiple of $n_1$
for all $i = 2, \ldots, r.$
\end{proof}
\subsection{The existence of Anosov polynomials with given Galois group}
In the next proposition, we summarize some results on the existence
of Anosov polynomials with certain properties.
\begin{proposition}\label{existence}
There exist irreducible Anosov polynomials in
$\boldZ[x]$ satisfying the following conditions.
\begin{enumerate}
\item{For all $n \ge 2$ and all $r = 1, \ldots, n-1,$
there exists an irreducible Anosov polynomial
$p$ of degree $n$ such that precisely $r$ of the roots have modulus larger
than one.
}\label{pisot}
\item{For all $n \ge 2,$ there exists an irreducible Anosov
polynomial of degree $n$ with Galois group $S_n.$ }\label{G=Sn}
\item{For all prime $n \ge 2,$ there exists an irreducible Anosov
polynomial of degree $n$ with Galois group $C_n.$}
\item{Suppose that the group $G$ acts transitively on some set
of cardinality 2, 3, 4 or 5 (hence is the Galois group of an irreducible
polynomial in $\boldZ[x]$ of degree $n \le 5$), and $G$ is not isomorphic
to the alternating group $A_5.$
Then there exists an Anosov polynomial of
degree $n$ having Galois group $G.$}\label{low-degree}
\end{enumerate}
\end{proposition}
D.\ Fried showed how to use the geometric version of the
Dirichlet Unit Theorem and
the results from \cite{auslander-scheuneman} to construct
Anosov automorphisms with given spectral properties (\cite{fried-81});
these methods can also be used to prove the first part of the proposition.
We provide an alternate proof that shows the actual polynomials defining
the automorphisms.
\begin{proof}
The polynomial $p(x) = x^n + a_1x^{n-1} + \cdots + a_{n-1}x \pm 1$
has $r$ roots greater than one in modulus and $n-r$ roots less
than one in modulus when
\[ |a_r| > 2 + |a_1| + \cdots + |a_{r-1}| + |a_{r+1}| + \cdots + |a_{n-1}|\]
(\cite{mayer}, see \cite{tajima}).
By letting $a_r=3$ and $a_i = 0$ for $i \ne r$ in $\{1, \ldots, n-1\},$
we get a polynomial of degree $n$ with precisely
$r$ roots greater than one in modulus
that is irreducible by Eisenstein's Criterion. By Remark
\ref{roots of modulus one}, if $n \ne 2r,$
the polynomial can not have any roots of
modulus one. If $n=2r,$ then the polynomial is
$(x^r)^2 + 3x^r \pm 1;$ one can check by hand that neither of these polynomials
has roots of modulus one.
This proves
Part \eqref{pisot}.
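As a concrete instance of this construction (a numerical spot check of our own, with $n = 5$ and $r = 2$), the polynomial $x^5 + 3x^3 + 1$ should have exactly two roots of modulus greater than one. Indeed, on the unit circle $|3x^3| = 3 > 2 \ge |x^5 + 1|,$ so Rouch\'e's theorem places exactly three roots inside the unit disk and none on the circle.

```python
import numpy as np

# p(x) = x^5 + 3x^3 + 1: here n = 5, a_2 = 3, and all other a_i = 0,
# so |a_2| = 3 > 2 satisfies the displayed inequality with r = 2.
roots = np.roots([1, 0, 3, 0, 0, 1])
moduli = np.abs(roots)

assert int(np.sum(moduli > 1)) == 2           # exactly r = 2 large roots
assert np.all(np.abs(moduli - 1) > 1e-6)      # no roots of modulus one
```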
The irreducible polynomial $x^n - x -1$ has Galois group $S_n$
(\cite{serre-92}). As it is not self-reciprocal, by Remark
\ref{roots of modulus one}, it has no roots of modulus one.
Now suppose that $K$ is a Galois
extension of $\boldQ$ with Galois group $C_n,$ where $n \ge 3$ is prime.
It is well known that such a field exists (see the Kronecker-Weber Theorem
in \cite{jensen-ledet-yui}).
Let $\eta$ be a Dirichlet fundamental unit for $K,$ and let
$p$ denote its minimal polynomial. Since $\eta$ is a unit, the constant
term of $p$ is $\pm 1.$ Because $\eta \not \in \boldQ,$
the degree $m$ of $\eta$ is greater than one.
Because $m$ divides $n,$ and $n$ is prime,
$m$ must equal $n.$ Thus, the minimal polynomial
$p$ for $\eta$ has degree $n$ and splitting field $K.$ The
Galois group of $p$ is $C_n,$ which has odd order, so by
Remark \ref{roots of modulus one},
$p$ has no roots of modulus one. Thus we have shown that $p$ is
an Anosov polynomial.
Now we prove Part \eqref{low-degree}.
It is simplest to list examples
of Anosov polynomials of each kind: See Table \ref{low-degree-list}.
By Remark
\ref{roots of modulus one}, the only polynomial in the
table that could possibly have roots of modulus one is the self-reciprocal
polynomial with Galois group $V_4.$ An easy calculation shows
that it does not have roots of modulus one.
\renewcommand{\arraystretch}{2}
\begin{table}
\begin{tabular}{|c|c|l|} \hline
Degree & Galois group & Anosov polynomial
\\ \hline \hline
2 & $C_2$ & $p(x) = x^2 - x - 1$ \\
\hline \hline
3 & $C_3$ & $p(x) = x^3 - 3x - 1$ \\
\hline
3 & $S_3$ & $p(x) = x^3 - x - 1$ \\
\hline \hline
4 & $C_4$ & $p(x) = x^4 + x^3 - 4x^2 - 4x + 1$\\
\hline
4 & $V_4$ & $p(x) = x^4 + 3x^2 + 1$\\
\hline
4 & $D_8$ & $p(x) = x^4 -x^3 -x^2 + x + 1$\\
\hline
4 & $A_4$ & $p(x)= x^4 + 2x^3 + 3x^2 - 3x + 1$ \\
\hline
4 & $S_4$ & $p(x)= x^4 - x - 1$ \\
\hline \hline
5 & $C_5$ & $p(x) = x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1$ \\
\hline
5 & $D_{10}$ & $p(x) = x^5 - x^3 -2x^2 - 2x -1$ \\
\hline
5 & $F_{20}$ & $p(x) = x^5 + x^4 + 2x^3 + 4x^2 + x + 1$ \qquad \strut \\
\hline 5 & $A_5$ & Q: What is an example with small coefficients? \qquad \strut \\
\hline 5 & $S_5$ & $p(x)= x^5 - x - 1$ \\
\hline
\end{tabular}
\bigskip
\caption{\label{low-degree-list} The inverse Galois problem for Anosov polynomials of low
degree}
\end{table}
\end{proof}
Many of the examples in Table \ref{low-degree-list} were
taken from the appendix of \cite{malle-matzat}; the reader may find there
a great many more examples of Anosov polynomials of degree $n \ge 6$
with a variety of Galois groups. To our knowledge, it
is not known whether the existence of a polynomial in $\boldZ[x]$
with Galois group $G$ guarantees the existence of
a polynomial in $\boldZ[x]$ with
constant term $\pm 1$ and Galois group $G.$ In addition, we do not know of
an example of an Anosov polynomial with Galois group $A_5.$
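The entries of Table \ref{low-degree-list} can be spot-checked numerically. The sketch below (an illustrative check, assuming NumPy; the open $A_5$ row is omitted) verifies that each listed polynomial is monic with constant term $\pm 1$ and has no roots of modulus one:

```python
import numpy as np

# Polynomials from the low-degree table, keyed by Galois group,
# as coefficient lists with the highest-degree coefficient first.
table = {
    "C2":  [1, -1, -1],
    "C3":  [1, 0, -3, -1],
    "S3":  [1, 0, -1, -1],
    "C4":  [1, 1, -4, -4, 1],
    "V4":  [1, 0, 3, 0, 1],
    "D8":  [1, -1, -1, 1, 1],
    "A4":  [1, 2, 3, -3, 1],
    "S4":  [1, 0, 0, -1, -1],
    "C5":  [1, 1, -4, -3, 3, 1],
    "D10": [1, 0, -1, -2, -2, -1],
    "F20": [1, 1, 2, 4, 1, 1],
    "S5":  [1, 0, 0, 0, -1, -1],
}

min_gap = {}
for group, coeffs in table.items():
    assert coeffs[0] == 1 and abs(coeffs[-1]) == 1  # monic, constant +-1
    mods = np.abs(np.roots(coeffs))
    # Distance from the closest root modulus to the unit circle.
    min_gap[group] = float(np.min(np.abs(mods - 1.0)))
```

A strictly positive gap for every entry is consistent with each polynomial being Anosov.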
\section{Actions of the Galois group}\label{action-section}
\subsection{Definitions of actions}
In this section, we associate the $\boldQ$-linear action of a finite group
to an automorphism of a free nilpotent Lie algebra that
preserves the rational structure defined by a Hall basis.
\begin{definition}\label{action}
Let $A$ be a matrix in $GL_n(\boldZ),$ let $p_1$ be the
characteristic polynomial of $A$ and let $K$ be the splitting field for
$p_1.$ Suppose that $f$ is the automorphism of
$\frakf_{n,r}(K)$ determined by $A$
and a set $\calB_1= \{ \bfx_i\}_{i=1}^n$ of generators for
$\frakf_{n,r}(K).$ Let $\calB = \cup_{i=1}^r \calB_i$
be the Hall basis determined by $\calB_1.$
Write $\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K),$ and
for $i = 1, \ldots, r,$ let $n_i = \dim_K V_i(K).$
Let $G$ denote the Galois group for the field $K.$
Let
$\bfx_1^i, \ldots, \bfx_{n_i}^i$ denote the $n_i$ elements of the basis
$\calB_i$ of $V_i(K);$ they determine an identification
$V_i(K) \cong K^{n_i}.$
For $i=1, \ldots, r,$ the $G$ action on $K$
extends to a diagonal $G$ action on $V_i(K) \cong K^{n_i}.$
In particular, if $\bfw = \sum_{j=1}^{n_i} \beta_j \bfx_j^i$ is
an element in $V_i(K),$ where $\beta_1, \ldots, \beta_{n_i} \in K,$
and $g \in G,$ then $g \cdot \bfw$ is defined by
\[ g \cdot \bfw = \sum_{j=1}^{n_i} (g \cdot \beta_j) \,
\bfx_j^i \in V_i(K). \]
A $G$ action on the
free nilpotent Lie algebra $\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K)$
is defined by
extending each of the $G$ actions on $V_i(K),$ for $i = 1, \ldots, r.$
\end{definition}
Next we describe properties of the action.
\begin{proposition}\label{action-properties} Let
$A$ be a semisimple matrix in $GL_n(\boldZ),$
let $p_1$ be the characteristic polynomial of $A,$ and let
$K$ be the splitting field for $p_1$ over $\boldQ.$
Let $\calB_1$ be a generating set for the free nilpotent
Lie algebra $\frakf_{n,r}$ and let $\calB$ be the Hall
basis that it determines.
Let $f$ be the automorphism of $\frakf_{n,r}(K),$ induced by Hall basis
$\calB$ and the matrix $A.$ Let $G$ be the
Galois group for $K,$ and let $G \times
\frakf_{n,r}(K) \to \frakf_{n,r}(K)$ be the action defined in
Definition \ref{action}. Then
\begin{enumerate}
\item{The $G$ action is $\boldQ$-linear, preserving
$\frakf_{n,r}(\boldQ) < \frakf_{n,r}(K),$ and it preserves
the decomposition $\frakf_{n,r}(K) =
\oplus_{i=1}^r V_i(K)$ of $\frakf_{n,r}(K)$ into steps.}\label{preserve-Vi}
\item{The $G$ action on $\frakf_{n,r}(K)$ commutes with the Lie
bracket. }\label{action-bracket}
\item{The function $f: \frakf_{n,r}(K) \to \frakf_{n,r}(K)$
is $G$-equivariant.}\label{action-fK}
\item{The $G$ action permutes the eigenspaces of $f:$
an element $g \in G$
sends the $\alpha$ eigenspace for $f$ to the $g
\cdot \alpha$ eigenspace for $f.$}\label{action-eigenspace}
\end{enumerate}
\end{proposition}
Note also that because it preserves each step $V_i(K)$ of the decomposition
$\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K),$
the $G$ action commutes with the grading automorphism $B$
defined by $B (\bfw) = e^i \bfw$ for $\bfw \in V_i(K).$
\begin{proof} The first assertion follows from the definition of the
action.
The action of $G$ commutes with the Lie bracket because
structure constants for the Hall basis $\calB$
are rational. Let $\bfy_1, \ldots, \bfy_N$ be an enumeration of
the Hall basis $\calB,$ and denote the structure constants relative to
$\calB$ by
$\alpha_{ij}^k.$ Consider the Lie bracket of two
arbitrary vectors $\sum_{i=1}^N a_i \bfy_i$
and $\sum_{j=1}^N b_j \bfy_j$ in $\frakf_{n,r}(K):$
\begin{align*} g \cdot \left[\sum_{i=1}^N a_i \bfy_i, \sum_{j=1}^N
b_j \bfy_j\right] &= g \cdot \sum_{k=1}^N \left( \sum_{i=1}^N
\sum_{j=1}^N \alpha_{ij}^k a_i b_j \right) \bfy_k \\ &= \sum_{k=1}^N
\left( \sum_{i=1}^N \sum_{j=1}^N \alpha_{ij}^k (g \cdot a_i) (g
\cdot b_j) \right) \, \bfy_k \\ & = \left[ g \cdot \sum_{i=1}^N a_i
\bfy_i, g \cdot \sum_{j=1}^N b_j \bfy_j \right].
\end{align*}
Now we show that the $G$ action on $\frakf_{n,r}(K)$ commutes with
the automorphism $f.$ We can
write $\bfw$ in $\frakf_{n,r}(K)$ as a linear combination
$\sum_{j=1}^{N} \beta_j \bfy_j$ of elements in the Hall basis
$\calB$ of $\frakf_{n,r}(K).$ Take $g \in G.$
Because $f$ is linear,
\[ f (g \cdot \bfw) = f \left(\sum_{j=1}^{N} (g \cdot \beta_j)
\bfy_j \right) = \sum_{j=1}^{N} (g \cdot \beta_j) f (\bfy_j) .\]
An integral matrix $(c_{ij})$ represents $f$ with respect to
the basis $\calB.$
Therefore, for each $j=1, \ldots, N,$ the vector
$f(\bfy_j)$ is a $\boldZ$-linear combination $ \sum_{i=1}^{N}
c_{ij} \bfy_i$ of $\bfy_1, \ldots, \bfy_N.$ All of the
integer entries of the matrix $(c_{ij})$ are fixed by the
$G$ action. Hence, $ f (g \cdot \bfw)$ equals
\begin{align*}
\sum_{j=1}^{N} (g \cdot \beta_j) f (\bfy_j) &=
\sum_{j=1}^{N} (g \cdot \beta_j) \sum_{i=1}^{N} c_{ij}\, \bfy_i\\
&= g \cdot \sum_{j=1}^{N} \sum_{i=1}^{N} \beta_j
c_{ij} \, \bfy_i\\
&= g \cdot f \left(\sum_{j=1}^{N} \beta_j
\bfy_j \right) \\
&= g \cdot (f \bfw). \end{align*}
Thus we have
shown that $f (g \cdot \bfw) = g \cdot (f \bfw) $, so the $G$ action
on $\frakf_{n,r}(K)$ commutes with
$f$ as asserted.
Consider an eigenvector $\bfz$ for $f$
with eigenvalue $\alpha.$ Because
$\bfz$ is in $\ker (f - \alpha \Id),$ and $\alpha \in K,$
the vector $\bfz$ is a $K$-linear
combination of elements of the rational basis.
For $g$ in $G,$
\[ f(g \cdot \bfz) = g \cdot f(\bfz) = g \cdot \alpha \bfz = (g
\cdot \alpha) (g \cdot \bfz).\] Hence an automorphism $g$ in $G$
sends the $\alpha$-eigenspace to the $(g \cdot
\alpha)$-eigenspace. Therefore, Part \eqref{action-eigenspace}
of the proposition holds.
\end{proof}
Now we use the action defined in Definition \ref{action}
to describe certain important kinds of
subspaces and ideals of Anosov Lie algebras.
\begin{definition}\label{Ez}
Let $G$ be a finite group that acts on the free nilpotent
Lie algebra $\frakf_{n,r}(K)$ over field $K,$ and let
$\bfz \in \frakf_{n,r}(K).$ Define the
subspace $E_G^K(\bfz)$ of $\frakf_{n,r}(K)$ to be the
$K$-span of the $G$ orbit of $\bfz:$
\begin{equation}\label{eg-def}
E_G^K(\bfz) =
\myspan_{K} \{ g \cdot \bfz \, : \, g \in G \}.\end{equation}
\end{definition}
\begin{example}\label{type-cn}
Let $\frakf_{n,2}(K) = V_1(K) \oplus V_2(K)$
be the free two-step Lie algebra on $n$ generators over field $K.$
Let $\calC_1 = \{\bfz_i\}_{i=1}^n$ be a set of $n$ generators for
$\frakf_{n,2}(K).$
Let $G$ be the cyclic group of order $n$ that acts on
$\frakf_{n,2}(K)$ through the natural action of $G \cong C_n$ on $\calC_1.$
For each $j = 2, \ldots, \lfloor n/2 \rfloor + 1,$
define the ideal $\fraki_j < V_2(K)$ by
\begin{align*}
\fraki_j = E_G^K([\bfz_j, \bfz_1])
= \myspan_{K} \{ [\bfz_{s},\bfz_{t}] \, : \, s - t = j - 1 \mod n \}.
\end{align*}
For example, when $n =4,$
\begin{align*}
\fraki_2 &= \myspan_{K} \{ [\bfz_2, \bfz_1], [\bfz_3, \bfz_2],
[\bfz_4, \bfz_3], [\bfz_1, \bfz_4] \},
\quad \text{and}\\
\fraki_3 &= \myspan_{K} \{ [\bfz_3, \bfz_1], [\bfz_4, \bfz_2] \}.
\end{align*}
For distinct $j_1$ and $j_2$ in $\{2, \ldots, \lfloor n/2 \rfloor + 1 \},$
the subspaces
$\fraki_{j_1}$ and $\fraki_{j_2}$ are independent, hence
$V_2(K) = \oplus_{j=2}^{ \lfloor n/2 \rfloor + 1}\fraki_j.$
When $n$ is odd, there are $\frac{1}{2}(n-1)$ such subspaces,
each of dimension
$n.$ When $n$ is even, the
subspaces $\fraki_j, j = 2, \ldots, n/2,$ are of dimension $n,$
and the subspace $\fraki_{n/2+1}$ is of dimension $n/2.$
For any proper subset $S$ of $\{2, \ldots, \lfloor n/2 \rfloor + 1\},$
there is an
ideal $\fraki_S$ of $\frakf_{n,2}(K)$
defined by $\fraki_S = \oplus_{j \in S} \fraki_j,$ and this ideal defines a
two-step Lie algebra
$\frakn_S = \frakf_{n,2}(K)/\fraki_S.$
\end{example}
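The orbit decomposition in this cyclic example can be reproduced computationally. The sketch below (illustrative, pure Python) computes the $C_n$ orbits of the unordered index pairs labeling the brackets $[\bfz_s,\bfz_t];$ each orbit spans one ideal $\fraki_j,$ and the orbit size is its dimension:

```python
def cyclic_orbit_sizes(n):
    """Orbit sizes of unordered pairs {s, t} in Z/n under the shift s -> s+1 (mod n).

    Each orbit spans one of the ideals i_j inside V_2, so the orbit size
    equals the K-dimension of that ideal."""
    pairs = {frozenset((s, t)) for s in range(n) for t in range(n) if s != t}
    sizes = []
    while pairs:
        p = next(iter(pairs))
        orbit = set()
        while p not in orbit:          # follow the shift until the orbit closes up
            orbit.add(p)
            p = frozenset((x + 1) % n for x in p)
        pairs -= orbit
        sizes.append(len(orbit))
    return sorted(sizes)
```

For $n=4$ this returns orbit sizes $2$ and $4,$ matching the ideals $\fraki_3$ and $\fraki_2$ above; for odd $n$ all orbits have size $n.$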
We define ideals of free nilpotent Lie algebras arising from
group actions, as the Lie algebra $\frakn_S$ in the previous example
arises from the action of a cyclic group.
\begin{definition}\label{ideal-i}
Let $G$ be a finite group that acts on the free $r$-step nilpotent Lie algebra
$\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K)$ over field $K,$ where the field $K$
is an extension of $\boldQ.$
Let $L$ be another extension of $\boldQ,$ typically $\boldR$ or $\boldC.$
Suppose that Hall bases $\calB \subset \frakf_{n,r}(K)$
and $\calB^\prime \subset \frakf_{n,r}(L)$ define
rational structures on $\frakf_{n,r}(K)$ and $\frakf_{n,r}(L)$
respectively, and let $E \to E^L$ be the correspondence of rational invariant
subspaces defined in Definition \ref{correspond}
resulting from an identification of $\calB^\prime$ and $\calB.$
A rational ideal of $\frakf_{n,r}(L)$ generated by sets of the form
$(E_G(\bfw))^L,$ where $E_G^K(\bfw)$ is
as defined in Definition \ref{Ez}, for $\bfw \in \frakf_{n,r}(K),$
is called {\em an ideal of type $G$ defined over $L$}.
We use $\fraki(G,\bfw)$ to denote the ideal of $\frakf_{n,r}(L)$
generated by the subspace
$(E_G^K(\bfw))^L.$
A nilpotent Lie algebra of form $\frakf_{n,r}(L)/\fraki,$ where
$\fraki$ is of type $G$ will be called a {\em
nilpotent Lie algebra of type $G.$}
\end{definition}
Now we describe ideals of symmetric type for two- and three-step
free nilpotent Lie algebras.
\begin{example}\label{type-sn}
Suppose that $\frakf_{n,r}(K)$ has generating set
$\calC_1 = \{\bfz_j\}_{j=1}^n$ and that the action of $G \cong S_n$
on $\frakf_{n,r}(K)$ is defined by permuting elements
$\bfz_1, \ldots, \bfz_n$ of the generating set $\calC_1.$
Let $\calC = \cup_{i=1}^r \calC_i$ be the Hall basis determined by
$\calC_1.$
For any $\bfw = [\bfz_i,\bfz_j]$ in $\calC_2,$
the subspace $E_G^K(\bfw)$ is all of $V_2(K):$
\[ E_G^K(\bfw) = \myspan_{K} \{ [\bfz_j,\bfz_i]\}_{1 \le i < j \le n} = V_2(K).\]
Hence, an ideal $\fraki < \frakf_{n,r}(K)$
of type $S_n$ that intersects $V_2(K)$ nontrivially must contain
all of $V_2(K).$
When $r \ge 3,$ there are two sets of form $E_G^K(\bfw)$
with $\bfw \in \calC_3,$
the subspaces $F_1$ and $F_2$ defined
in Equation \eqref{F defs}:
\[ F_1(K) = E_G^K([[\bfz_2,\bfz_1], \bfz_1]) \quad
\text{and} \quad F_2(K) = E_G^K([[\bfz_2,\bfz_1], \bfz_3]). \]
Therefore, for any ideal $\fraki$ of type $S_n,$
the subspace $\fraki \cap V_3(K)$ is one of the following:
$\{0\},$ $F_1(K),$ $F_2(K),$ or $V_3(K).$
\end{example}
Next is an example showing nilpotent Lie algebras
arising from dihedral groups.
\begin{example}\label{D2n} Let $A$ be a semisimple
matrix in $GL_n(\boldZ)$ whose
characteristic polynomial has splitting field $K$
and Galois group $G$ isomorphic to the
dihedral group $D_{2n}$ of order $2n.$ Let
$f$ be the automorphism of $\frakf_{n,2}(K)$ induced by $A$ and a Hall
basis $\calB.$ Let $\bfz_1, \ldots, \bfz_n$ denote a set of
eigenvectors of $f|_{V_1(K)}$ spanning
$V_1(K)$ compatible with the rational structure,
and let $\alpha_1, \ldots, \alpha_n$ denote
corresponding eigenvalues.
The group $D_{2n}$ is isomorphic to the group of symmetries of a
regular $n$-gon. Enumerate the vertices of such an $n$-gon in
counterclockwise order so that $D_{2n} \cong \la r, s\ra,$ where $r$ is
counterclockwise rotation by $2\pi/n$ and $s$ is reflection
through the line through the center of the $n$-gon and the first vertex.
Let $X_n$ be the
complete graph on $n$ vertices obtained by adding edges connecting all
distinct vertex pairs of the $n$-gon. Identify the roots of
$p_1$ with the $n$ vertices and the roots of $p_2$ with the
$\binom{n}{2}$ edges in such a way
that the eigenvalue $\alpha_i \alpha_j$ corresponds to the edge
connecting vertices corresponding to eigenvalues $\alpha_i$ and
$\alpha_j.$
The $G$ action on $\frakf_{n,2}(K)$ can then be visualized through the
$D_{2n}$ action on the graph $X_n.$ For example, the $G$ orbit of the
eigenvector $[\bfz_2,\bfz_1]$ is
\[ G \cdot [\bfz_2,\bfz_1] = \{ [\bfz_2,\bfz_1], [\bfz_3,\bfz_2],
\ldots, [\bfz_1,\bfz_n] \},\] corresponding to the $n$ ``external''
edges of the graph $X_n.$ The subspace $E_G^K([\bfz_2,\bfz_1])$ defined
by Definition \ref{Ez} is the $K$-span of this set, an
$n$-dimensional subspace of
$V_2(K).$ Other orbits depend on the value of $n:$ if $n=3$ there are
no other orbits, and if $n=4$ or $5,$ there is one more orbit coming
from ``interior'' edges on the graph,
yielding a subspace $E_G^K([\bfz_3,\bfz_1])$
of $V_2(K)$ that is complementary to
$E_G^K([\bfz_2,\bfz_1]).$ When $n \ge 6,$ there are at least two more
orbits coming from interior edges.
\end{example}
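The dihedral orbit structure described above can be checked directly. The sketch below (illustrative, pure Python) computes the orbits of the edges of the complete graph $X_n$ under the $D_{2n}$ action generated by rotation and reflection:

```python
def dihedral_edge_orbits(n):
    """Orbit sizes of the edges of the complete graph on vertex set Z/n
    under D_2n, generated by the rotation i -> i+1 and reflection i -> -i."""
    rot = lambda e: frozenset((i + 1) % n for i in e)
    ref = lambda e: frozenset((-i) % n for i in e)
    edges = {frozenset((s, t)) for s in range(n) for t in range(n) if s != t}
    orbits = []
    while edges:
        orbit = {next(iter(edges))}
        frontier = set(orbit)
        while frontier:                 # saturate under both generators
            new = {g(e) for e in frontier for g in (rot, ref)} - orbit
            orbit |= new
            frontier = new
        edges -= orbit
        orbits.append(len(orbit))
    return sorted(orbits)
```

For $n = 3$ there is a single orbit; for $n = 4$ and $5$ there is one interior orbit; for $n = 6$ the interior edges split into orbits of sizes $6$ and $3.$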
\section{Rational invariant subspaces}\label{Rational invariant subspaces}
Given an automorphism of a free nilpotent Lie algebra, the
next theorem describes how orbits of the $G$ action on
$\frakf_{n,r}(\boldR)$ relate to the factorization of the polynomials $p_1, \ldots,
p_r.$ These restrictions on factorizations yield restrictions
on the existence of Anosov quotients. Roughly speaking,
when the Galois group of $p_1$ is highly transitive, the
rational invariant subspaces for associated Anosov automorphisms
tend to be big also, and when the group is small, the rational
invariant subspaces are small.
The larger rational invariant subspaces are, the
fewer Anosov quotients there may be. The field $L$ in the theorem
is typically $\boldR$ or $\boldC.$
\begin{thm}\label{actions} Let $A$ be a semisimple matrix in $GL_n(\boldZ),$
let $K$ be the splitting field of the characteristic polynomial $p_1$ of $A,$
and let $G$ be the Galois group for $K.$
Let $f$ be the
semisimple automorphism of $\frakf_{n,r}(K)$
defined by Hall basis $\calB = \cup_{i=1}^r \calB_i$ of $\frakf_{n,r}(K)$
and the matrix $A,$ and let $(p_1, \ldots, p_r)$ be the
$r$-tuple of polynomials associated to $f.$
Let $\calC = \cup_{i=1}^r
\calC_i$ be the Hall basis for $\frakf_{n,r}(K)$ determined by a set
$\calC_1 = \{\bfz_j\}_{j=1}^{n}$ of eigenvectors for $f|_{V_1(K)}$
that is compatible with the rational structure defined by $\calB.$
For all $i = 1, \ldots, r,$ the vector subspace $V_i(K)$ of $\frakf_{n,r}(K)$
decomposes as
the direct sum of rational invariant subspaces of the form $E_G^K(\bfz),$
where $\bfz \in \calC_i,$ and $E_G^K(\bfz)$ is as defined in
Definition \ref{Ez}. The characteristic polynomial $p_E$ for
the restriction of $f$ to $E_G^K(\bfz)$ is of the form $p_E = q^s,$
where $q$ is a polynomial that is
irreducible over $\boldZ.$
Suppose that the field $L$ is an extension of $\boldQ,$
and that $f$ is a
semisimple automorphism of $\frakf_{n,r}(L)$
defined by Hall basis $\calB^\prime = \cup_{i=1}^r \calB_i^\prime$
and the matrix $A.$
Since the subspaces $E_G^K(\bfz)$ of $\frakf_{n,r}(K)$ are rational,
for all $i = 1, \ldots, r,$ there is a
decomposition $V_i(L) = \oplus (E_G^K(\bfz))^L$
of $V_i(L) < \frakf_{n,r}(L)$ into rational $f$-invariant
subspaces, through the correspondence defined in Definition
\ref{correspond} induced by the identification of the
rational Hall bases $\calB$
and $\calB^\prime$ of $\frakf_{n,r}(K)$ and $\frakf_{n,r}(L)$.
\end{thm}
We illustrate the theorem by considering a special case
of Example \ref{D2n}.
\begin{example}\label{D10} Let $A$ be a semisimple hyperbolic matrix in
$GL_5(\boldZ)$ such that the splitting field $K$ for the
characteristic polynomial $p_1$ for $A$ has Galois group $G$ isomorphic to the
dihedral group $D_{10}$ of order $10.$ Let $f_K$ be the automorphism
of $\frakf_{5,2}(K) = V_1(K) \oplus V_2(K)$
induced by $A$ and a Hall basis $\calB.$ Let
$\calC_1 = \{\bfz_1, \ldots, \bfz_5\}$
be the set of eigenvectors of
$f_K|_{V_1(K)}$ and let $\alpha_1, \ldots, \alpha_5$ denote the corresponding
eigenvalues. Let $\calC = \calC_1 \cup \calC_2$ be the Hall basis
of $\frakf_{5,2}(K)$ determined by
$\calC_1.$
We saw in Example \ref{D2n} that the $D_{10}$ action on $\frakf_{5,2}(K)$
has two
orbits each of $K$-dimension five. By Theorem \ref{actions}, these
two orbits yield rational $f_K$-invariant subspaces
\[ \fraki_1(K) = E_{G}^K([\bfz_2, \bfz_1]),
\quad \text{and}
\quad \fraki_2(K) =
E_{G}^K([\bfz_3, \bfz_1])\]
and a decomposition $V_2(K) = \fraki_1(K) \oplus \fraki_2(K) $
of $V_2(K) < \frakf_{5,2}(K)$ into rational invariant subspaces.
Let $f_{\boldR}$ be the automorphism of $\frakf_{5,2}(\boldR)$ induced by
$A$ and Hall basis $\calB^\prime = \cup_{i=1}^2 \calB_i^\prime$
of $\frakf_{5,2}(\boldR).$
Letting $L = \boldR$ in Theorem \ref{actions},
and using the correspondence between $\frakf_{5,2}(K)$
and $\frakf_{5,2}(\boldR)$ as in Definition \ref{correspond},
we get a decomposition of
$V_2(\boldR) < \frakf_{5,2}(\boldR)$ into rational $f_\boldR$-invariant subspaces
$ \fraki_1(\boldR) = (\fraki_1(K))^\boldR $
and $\fraki_2(\boldR) = (\fraki_2(K))^\boldR .$
The
polynomial $p_2(x)$ factors as $p_2 = r_1 r_2,$
where the two quintic factors $r_1$ and $r_2$
are characteristic polynomials of the restriction of $f$ to the
rational invariant ideals
$\fraki_1(K)$ and $\fraki_2(K)$
respectively. Neither
$r_1$ nor $r_2$ has
roots of modulus one by Remark \ref{roots of modulus one}.
By Lemma \ref{how it can factor}, Part \ref{factor-degree},
both $r_1$ and $r_2$ are irreducible.
Thus, the only
ideals $\fraki$ of $\frakf_{5,2}(\boldR)$
satisfying the Auslander-Scheuneman conditions
for $f_\boldR: \frakf_{5,2}(\boldR) \to \frakf_{5,2}(\boldR)$ and
$\calB^\prime,$
and defining a two-step Anosov quotient $\frakf_{5,2}(\boldR)/\fraki$
of type $(5,n_2),$
are the trivial ideal, the ideal $\fraki_1(\boldR),$
and the ideal $\fraki_2(\boldR).$
Now we'd like to write out the ideals $\fraki_1(\boldR)$
and $\fraki_2(\boldR)$ in terms of generators and relations.
We need to consider two cases. In the first case, all of the eigenvalues
of $f_\boldR$ are real, so that the
quotient algebras $\frakn_1 = \frakf_{5,2}(\boldR)/\fraki_1(\boldR)$
and $\frakn_1^\prime= \frakf_{5,2}(\boldR)/\fraki_2(\boldR)$
may be written with generators $\bfz_1,\bfz_2,\bfz_3,\bfz_4,\bfz_5$
in $\frakf_{5,2}(\boldR)$
and relations
\[ [\bfz_1,\bfz_2]= [\bfz_2,\bfz_3]= [\bfz_3,\bfz_4]=
[\bfz_4,\bfz_5]= [\bfz_5,\bfz_1]=0 \]
for the first Lie algebra $\frakn_1,$
and relations
\[ [\bfz_1,\bfz_3]= [\bfz_2,\bfz_4]= [\bfz_3,\bfz_5]= [\bfz_4,\bfz_1]= [\bfz_5,\bfz_2]=0 \]
for the second Lie algebra. These Lie algebras are clearly isomorphic.
In the second case, there is an eigenvector
$\bfz_1$ with a real eigenvalue, and there
are two complex eigenvalue pairs yielding eigenvector pairs
$\bfz_2, \bfz_3 = \bfx_2 \pm i \bfy_2$ and
$\bfz_4, \bfz_5= \bfx_3 \pm i \bfy_3,$ where
the vectors $\bfx_i, \bfy_i \in \frakf_{5,2}(\boldR)$ ($i=2,3$).
(If there were
only one complex eigenvalue pair, the Galois group would be $S_5.$)
The reader may check that the ideal $\fraki_1(\boldR)$ is generated by
the elements
\[ [\bfz_1, \bfx_2], [\bfx_2, \bfx_3] - [\bfy_2,\bfy_3],
[\bfx_2,\bfy_3] + [\bfy_2,\bfx_3], [\bfx_3, \bfy_3], [\bfz_1, \bfy_2] \]
of $V_2(\boldR) < \frakf_{5,2}(\boldR).$
The ideal $\fraki_2(\boldR)$ has an analogous set of generators.
These ideals yield isomorphic Lie algebras
$\frakn_2$ and $\frakn_2^\prime.$
It can be shown that $\frakn_1$ and $\frakn_2$ are not isomorphic,
because the $D_{10}$ symmetry of $\frakf_{5,2}(K)/ \fraki_1(K)$
is preserved when moving to $\frakn_1,$
but it is lost when moving to
$\frakn_2.$ (In particular, the
$\bfz_1$ coset in $\frakf_{5,2}(\boldR)/\fraki_2(\boldR)$
is the unique element having a three-dimensional
centralizer.)
In summary,
in addition to $\frakf_{5,2}(\boldR),$
there are exactly two proper two-step
nilpotent quotients, isomorphic to each other and both of type $(5,5),$
to which
$f_\boldR$ descends as an Anosov map, and there are exactly
two nonisomorphic Anosov Lie algebras of type $(5,5)$ with
Anosov automorphisms yielding Galois group $D_{10}.$
\end{example}
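As a numerical illustration of this example (assuming the $D_{10}$ quintic from Table \ref{low-degree-list} plays the role of $p_1$), one can form $p_2$ from the pairwise products of the roots of $p_1$ and observe that it is monic of degree $\binom{5}{2}=10$ with integer coefficients:

```python
import numpy as np
from itertools import combinations

# p_1 = x^5 - x^3 - 2x^2 - 2x - 1, the D_10 entry from the table.
p1 = [1, 0, -1, -2, -2, -1]
alpha = np.roots(p1)

# The roots of p_2 are the pairwise products alpha_i * alpha_j for i < j.
p2_roots = [a * b for a, b in combinations(alpha, 2)]
p2 = np.real(np.poly(p2_roots))    # monic polynomial with those roots
p2_int = np.round(p2).astype(int)  # coefficients should be (near) integers
```

The two quintic factors $r_1$ and $r_2$ of $p_2$ correspond to the two $D_{10}$ orbits of edges; identifying them requires matching roots to vertices, which we do not attempt here.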
\begin{proof}[Proof of Theorem \ref{actions}.]
Fix an element $\bfw$ in the
basis $ \calC_i$ for $V_i(K) < \frakf_{n,r}(K).$
By Proposition \ref{p-properties},
it is an eigenvector for $f;$ let $\alpha$
denote its eigenvalue. When represented with respect
to the rational basis $\calB_i$ for $V_i(K),$
the vector $\bfw$ has coordinates in
$K^{n_i},$ where $n_i = \dim V_i.$ Let $E_G^K(\bfw)$ be the subspace of $V_i(K)$
generated by $\bfw$ and $G$ as in Definition \ref{Ez}.
First we show that $E_G^K(\bfw)$ is invariant under $f.$ By Proposition
\ref{action-properties}, part \eqref{action-eigenspace}, for all
$g$ in $G,$ the vector $g \cdot \bfw$ is an eigenvector with
eigenvalue $g \cdot \alpha.$ An element
$\bfu$ of $V_i(K)$ is in $E_G^K(\bfw)$ if and only if it is of the form
$\bfu = \sum_{g \in G} c_g \, (g \cdot \bfw),$ where $c_g \in K$
for $g \in G.$ Then
\[ f(\bfu) = \sum_{g \in G} c_g \, f(g \cdot \bfw) = \sum_{g \in G}
c_g \, (g \cdot \alpha) \, g \cdot \bfw ,\] so $f(\bfu)$ is also in
$E_G^K(\bfw).$
The vector space $E_G^K(\bfw)$ is spanned by vectors with
coordinates in $K,$ so there exists
a polynomial function $\phi : V_i(\boldR) \cong \boldR^{n_i} \to \boldR$
with coefficients in $K$ such that $E_G^K(\bfw)$ is the zero set of
$\phi.$ Because $E_G^K(\bfw)$ is invariant under the $G$ action,
for all $g$ in $G,$
$E_G^K(\bfw)$ is also the zero set of the function $\phi_g(x) = \phi (g
\cdot x).$ Let $\bar \phi = \prod_{g \in G}
\phi_g.$ The function $\bar \phi$ has
rational coefficients, so $E_G^K(\bfw)$ is a rational $G$-invariant subspace.
By Maschke's Theorem, the subspace
$V_i(K)$ may be written as the direct sum of subspaces of form
$E_G^K(\bfw).$
Now we show that the characteristic polynomial for the
restriction of $f$ to $E_G^K(\bfw)$ is a power of an irreducible. Let
$q$ denote the minimal polynomial for $\alpha.$ By Proposition
\ref{p-properties}, the splitting field $\boldQ(q)$
is intermediate to $\boldQ$ and the splitting field $\boldQ(p_1)$ of $p_1.$
The $G$-orbit of $\alpha$ is precisely the set of roots of $q.$
Since $g \cdot \alpha$ is a root of $q$ for all $g \in G,$
the space $E_G^K(\bfw)$ is contained in the direct sum of the
eigenspaces for eigenvalues $g \cdot \alpha, g \in G.$ Therefore, the
characteristic polynomial $p_E$ for the restriction of $f$ to $E_G^K(\bfw)$ is a
power of the irreducible polynomial $q.$
\end{proof}
We now describe some rational invariant subspaces
that exist for any automorphism of a free nilpotent Lie algebra
preserving a rational structure.
First we need to make a definition.
\begin{definition}\label{j-def} Let $K$ be a field.
Let $\calC_1 = \{\bfz_j\}_{j=1}^n$ be a
generating set for the free $r$-step nilpotent
Lie algebra $\frakf_{n,r}(K),$ and let $\calC$ be the associated Hall
basis.
Define the ideal $\frakj_{n,r}$ of $\frakf_{n,r}(K)$ to be the ideal
generated by all elements $\bfw$ of the Hall basis $\calC$
having the property that there is a single number $k$ such that
for all $j = 1, \ldots, n,$ the letter $\bfz_j$
occurs exactly $k$ times in the Hall word $\bfw.$
\end{definition}
For example, when $n = 3,$ the ideal $\frakj_{3,2} < \frakf_{3,2}(K)$
is $\{0\},$ and
the ideal $\frakj_{3,3} < \frakf_{3,3}(K)$ is
given by
\[ \frakj_{3,3} = \myspan_K \{ [[\bfz_2,\bfz_1],\bfz_3],
[[\bfz_3,\bfz_1],\bfz_2]\} = F_2(K) < V_3(K),\]
where $F_2(K)$ is as defined in Equation \eqref{F defs}, and
the ideal $\frakj_{4,3} < \frakf_{4,3}(K)$ is
given by
\[ \frakj_{4,3} = \frakj_{3,3} \oplus [\frakj_{3,3},\frakf_{4,3}],\]
where we map $\frakj_{3,3}$ into $\frakf_{4,3}$ in the natural way.
\begin{remark}\label{j in i}
Since the product of the roots of an Anosov polynomial is
always $\pm 1,$ any ideal $\fraki < \frakf_{n,r}$
satisfying the Auslander-Scheuneman conditions for some $f$
must contain the ideal $\frakj_{n,r}$ (defined relative to
an eigenvector basis).
\end{remark}
\begin{proposition}\label{Fdecomp}
Let $A$ be a semisimple
matrix in $GL_n(\boldZ)$ whose characteristic polynomial
has splitting field $K.$
Let $\frakf_{n,r}(K) = \oplus_{i=1}^r V_i(K)$ be the free $r$-step nilpotent
Lie algebra on $n \ge 3$ generators over $K$,
endowed with the rational structure
defined by a Hall basis $\calB.$ Let $f$ be the semisimple automorphism
of $\frakf_{n,r}(K)$ defined by the matrix $A$ and the Hall
basis $\calB.$
Let $\calC$ be the Hall basis of $\frakf_{n,r}(K)$ determined by
a set of eigenvectors
$\calC_1$ for $f|_{V_1(K)}$ that is compatible with the rational structure.
The ideal $\frakj_{n,r}$ defined in Definition \ref{j-def}
is a rational invariant subspace; and
when $r \ge 3,$ the subspace $V_3(K)$ is the
direct sum $F_1(K) \oplus F_{2}(K)$ of rational invariant subspaces,
where $F_1(K)$ and $F_2(K)$ are as in Equation \eqref{F defs},
and $F_2(K)$ decomposes further as the direct sum
$F_2(K) = F_{2a}(K) \oplus F_{2b}(K)$ of
rational invariant subspaces $F_{2a}(K)$ and $F_{2b}(K),$ each of dimension
$\binom{n}{3}.$
The characteristic polynomial for the restriction of $f$ to
$F_1(K)$ is $q_1,$ and the characteristic polynomials for the restrictions
of $f$ to $F_{2a}(K)$ and $F_{2b}(K)$ are both $q_2.$
\end{proposition}
\begin{proof} Let $G$ denote the Galois group of
the polynomial $p_1$ associated to $f.$ Suppose that $\bfw$ is
a $k$-fold bracket of elements in $\calC_1 = \{\bfz_j\}_{j=1}^{n}.$
Recall from
Example \ref{2,3-Hall words}
that $\calC_3$ is the union of the set $\calC_3^\prime$
of standard Hall monomials of the first
type and the set $\calC_3^{\prime \prime}$ of
standard Hall monomials of the second type. Let $g \in S_n.$
It is easy to see that if
$\bfw \in \calC_3^\prime$ then $g \cdot \bfw \in \calC_3^\prime$
or $-(g \cdot \bfw) \in \calC_3^\prime,$
and if $\bfw \in \calC_3^{\prime \prime}$
then $g \cdot \bfw \in \calC_3^{\prime \prime},$
$-(g \cdot \bfw) \in \calC_3^{\prime \prime},$ or
$g \cdot \bfw$ is a linear combination of elements of
$\calC_3^{\prime \prime}$ through the Jacobi Identity.
The action of the group $G$ on
$\frakf_{n,r}(K)$ therefore preserves the subspaces $F_1(K)$ and $F_2(K),$ so
that if $\bfw$ in $\calC_3$ is in $F_1(K)$ or $F_2(K),$ then $E_G^K(\bfw) <
F_1(K)$ or $E_G^K(\bfw) < F_2(K)$ respectively. The space $F_1(K)$
is the sum of the rational invariant spaces $E_G^K(\bfw)$ as $\bfw$ varies
over elements of $\calC_3^\prime$, so is rational and
invariant. By the same reasoning $F_2(K)$ is
rational and $f$-invariant also.
The characteristic polynomial for the restriction of $f$ to
$F_2(K)$ is $q_2^2,$ where $q_2$ is as defined in Equation \eqref{q1,q2}.
The pair of elements of form $[[\bfz_i,\bfz_j],\bfz_k]$ and
$[[\bfz_k,\bfz_j],\bfz_i],$
where $1 \le j < i < k \le n,$ in $ \calC_3$ have the same eigenvalue,
$\alpha_i \alpha_j \alpha_k,$ where $\alpha_i, \alpha_j, \alpha_k$ are
the eigenvalues of $\bfz_i, \bfz_j,$ and $\bfz_k$ respectively.
These yield one basis vector for $F_{2a}$ and one basis vector for
$F_{2b}.$
Each factor $q_2$ in the characteristic polynomial
$q_2^2$ for $F_2(K)$ yields one rational invariant subspace of
$F_2(K)$ of dimension
$\deg q_2 = \binom{n}{3},$
each spanned by elements of $\calC_3^{\prime \prime}.$
Call these subspaces $F_{2a}(K)$ and $F_{2b}(K),$ so
$F_{2}(K) = F_{2a}(K) \oplus F_{2b}(K).$
Similarly, the set of $k$-fold brackets $\bfw$ of $\calC$ having the
property that each element $\bfz_i$ occurs the same number of times
in $\bfw$ is clearly invariant under the action of $G$, so the
ideal that it generates, $\frakj_{n,r},$
is $G$-invariant, hence rational.
\end{proof}
The following elementary proposition yields restrictions on
possible dimensions of rational invariant
subspaces for semisimple automorphisms of nilpotent Lie algebras.
\begin{proposition} Let $\frakf_{n,r}(\boldR)$ be a
free nilpotent Lie algebra, and
let $f : \frakf_{n,r}(\boldR) \to \frakf_{n,r}(\boldR)$
be a semisimple automorphism
defined by a matrix $A$ in $GL_n(\boldZ)$ and a Hall basis
$\calB$ of $\frakf_{n,r}(\boldR).$
Suppose that the characteristic polynomial $p_1$ of $A$
is irreducible with Galois group $G.$
Let $m$ be the dimension of a minimal nontrivial rational invariant subspace
$E < \frakf_{n,r}(\boldR)$ for $f.$
Then $G$ has a normal subgroup $N$ such that $G/N$ acts
faithfully and transitively on a set of $m$ elements.
\end{proposition}
Note that the subspace $E$ is one-dimensional if and only if $N = G.$
\begin{proof} Suppose that $E$ is a minimal nontrivial invariant
subspace of dimension $m.$
The characteristic polynomial $p_E$ for the restriction of
$f$ to $E$ is irreducible. Since
the roots of $p_E$ are contained in the splitting field $\boldQ(p_1),$
the Galois group for $p_E$
is the quotient $G/N$ of $G$ by the normal
subgroup of elements of $G$ fixing $\boldQ(p_E).$
The group $G/N$ acts faithfully and
transitively on the $m$ roots of $p_E$ since it is the Galois group of
an irreducible polynomial.
\end{proof}
The previous proposition can be used to find all possible dimensions
of minimal nontrivial rational invariant subspaces for any
Anosov automorphism whose associated Galois group is some
fixed group $G.$ One simply needs to find all numbers $m$ such that
there exists a normal subgroup
$N$ of $G$ and there exists a faithful transitive
action of $H = G/N$ on a set of $m$ elements.
Every faithful transitive action of a group $H$ on a set $X$ is conjugate
to the action of $H$ on the cosets $X^\prime = \{hH_0\}_{h \in H}$ of a subgroup
$H_0$ of $H$ such that $H_0$ contains no nontrivial normal subgroups of
$H.$ To find all faithful transitive actions of a
group $H = G/N,$ one must
list all subgroups of $H$ and eliminate any that contain
nontrivial normal subgroups. The cardinalities $|H| / |H_0|$ of the sets
$\{hH_0\}_{h \in H}$ are admissible values for the cardinality $m$ of a
set $X$ on which $H$ acts faithfully and transitively. In our situation,
where $G$ is the Galois group of $p_1,$
the number $m$ could be the degree of a polynomial
having Galois group $G/N,$ and $m$ could be the
dimension of a rational invariant subspace of the
corresponding automorphism of $\frakf_{n,r}(\boldR).$
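The procedure just described can be carried out mechanically for small groups. The following self-contained sketch (illustrative; permutations represented as tuples and composed directly) enumerates the subgroups of $S_3$ with trivial normal core and reports the admissible coset-space sizes, reproducing the values $3, 6$ for $G/N = S_3$ in Table \ref{degrees}:

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    # Subgroup generated by gens (finite group, so inverses come for free).
    n = len(gens[0])
    elems = {tuple(range(n))}
    changed = True
    while changed:
        changed = False
        for a in list(elems):
            for b in gens:
                c = compose(a, b)
                if c not in elems:
                    elems.add(c)
                    changed = True
    return frozenset(elems)

G = list(permutations(range(3)))          # the group S_3
identity = tuple(range(3))

# Enumerate all subgroups as closures of subsets of G.
subgroups = {frozenset({identity})}
for k in range(1, len(G) + 1):
    for gens in combinations(G, k):
        subgroups.add(closure(list(gens)))

def core(H):
    # Largest normal subgroup of G inside H: intersect all conjugates of H.
    c = set(H)
    for g in G:
        gi = inverse(g)
        c &= {compose(compose(g, h), gi) for h in H}
    return c

# Admissible sizes m of faithful transitive G-sets: indices of core-free subgroups.
faithful_sizes = sorted({len(G) // len(H) for H in subgroups
                         if core(H) == {identity}})
```

Here the trivial subgroup gives $m = 6$ and the three order-two subgroups (which are not normal, so core-free) give $m = 3;$ the normal subgroup $A_3$ is excluded.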
In Table \ref{degrees} we analyze the possible dimensions of
rational invariant subspaces of Anosov Lie algebras $\frakn$ for which
$p_1$ is irreducible of degree 3 or 4.
The first two columns in the table give all possible Galois
groups $G$ for irreducible polynomials of degrees three and four,
grouped by degree.
The fourth and fifth columns list the isomorphism classes of
proper normal subgroups $N$ of each group $G,$ and the quotients
$G/N.$ (Since $\frakn$ is Anosov, there are no one-dimensional rational
invariant subspaces, and we omit the case $N = G$.)
The quotient groups $G/N$ are potential
Galois groups for
characteristic polynomials of rational invariant subspaces
of an Anosov Lie algebra with polynomial $p_1$ having Galois group $G.$
The last column gives the cardinalities of sets
on which each $G/N$ can act faithfully and transitively, found by
the procedure described above.
The numbers in the last column are listed in the same order
as the quotients $G/N$ in the fifth column, with the numbers for different
quotients separated by semicolons.
In the third column
we show when the roots of a polynomial with given Galois group
must have full rank by Proposition \ref{full rank}.
When the set of roots has full rank,
some possibilities for the normal subgroup $N$ may be prohibited
by Lemma \ref{how it can factor}: these subgroups and the corresponding
dimensions are indicated in the table with asterisks.
\renewcommand{\arraystretch}{2}
\begin{table}
\begin{tabular}{|c|c||c|l|l|l|} \hline $\deg p_1$ & $G$ & full rank?
&$N \ne G$ &
$G/N$ & dimension $m$ of $E$ \\
\hline \hline
$3$ & $S_3$ & yes & $C_3^\ast, \{1\}$ & $C_2^\ast, S_3$ & $2^\ast;3,6$ \\
\hline
$3$ & $C_3$ & yes & $\{1\}$ & $C_3$ & $3$ \\
\hline \hline
$4$ & $S_4$ & yes & $A_4^\ast, V_4^\ast, \{1\}$ & $C_2^\ast,S_3^\ast,S_4$ &
$2^\ast; 3^\ast, 6^\ast; 4, 6, 8, 12, 24$ \\
\hline $4$ & $A_4$ & yes & $V_4^\ast, \{1\}$ &
$C_3^\ast,A_4$ & $3^\ast; 4, 12 $ \\
\hline $4$ & $D_8$ & no & $C_4, C_2, \{1\}$ & $C_2, V_4, D_8$ & $2; 4; 4, 8$ \\
\hline $4$ & $C_4$ & no & $C_2, \{1\}$ & $C_2, C_4$ & $2;4$ \\
\hline $4$ & $V_4$ & no & $C_2, \{1\}$ & $C_2, V_4$ & $2;4$ \\
\hline
\end{tabular} \bigskip
\caption{\label{degrees} Possible dimensions $m > 1$ for
rational invariant subspaces $E$ for an
Anosov automorphism of an $r$-step nilpotent
Lie algebra $\frakn = \frakf_{n,r}(\boldR)/\fraki$
when $n = 3$ or $4$ and $p_1$ is irreducible. An
asterisk indicates that the marked values of $N, G/N$ and
$\dim E$ cannot occur by Lemma \ref{how it can factor}, Part \ref{N-not-trans}. }
\end{table}
From the table we obtain the following corollary to Theorem
\ref{actions}.
\begin{corollary}\label{dimensions}
Let $f$ be a semisimple automorphism of $\frakf_{n,r}(\boldR)$ induced by
a hyperbolic matrix in $GL_n(\boldZ)$ and a Hall basis $\calB.$
Let $(p_1, \ldots, p_r)$ be the $r$-tuple of polynomials associated to
$f.$ If $p_1$ is irreducible, then the dimension
of any minimal nontrivial invariant subspace of $\frakf_{n,r}(\boldR)$ is $3$ or $6$ if
$n=3$ and is one of $2, 4, 6, 8, 12, 24$ if $n =4.$
\end{corollary}
\section{Automorphisms with cyclic and symmetric Galois
groups}\label{symmetric-cyclic}
In this section we
use Theorem
\ref{actions} to analyze the structure of Anosov Lie algebras whose
associated polynomial $p_1$ has either a small Galois group, such as a
cyclic group, or a large Galois group, such as a symmetric group.
The following theorem describes Anosov automorphisms associated to
Galois groups whose action on the roots of $p_1$ is highly transitive.
\begin{thm}\label{Sn} Suppose that $\frakn$ is a real $r$-step Anosov Lie
algebra admitting an Anosov automorphism
defined by a semisimple matrix $A$ in $GL_n(\boldZ),$ a Hall basis
$\calB,$ and an ideal $\fraki < \frakf_{n,r}(\boldR)$
satisfying the Auslander-Scheuneman conditions, and let $f$ denote
the automorphism of $\frakf_{n,r}(\boldR)$ determined by $A$ and $\calB.$
Suppose that the polynomial $p_1$ associated to $f$ is irreducible with
Galois group $G,$ and let $(p_1, \ldots, p_r)$ be the $r$-tuple of
polynomials associated to $f.$
\begin{enumerate}
\item{If the action of $G$ on the roots of $p_1$ is two-transitive, then
the polynomial $p_2$ is irreducible and Anosov, and
if $r=2,$ then $\frakn$ is isomorphic to the free nilpotent
algebra $\frakf_{n,2}(\boldR).$}\label{2,2}
\item{If the action of $G$ on the roots of $p_1$ is three-transitive
and $r=3,$ then $\frakn$ is isomorphic to
$\frakf_{n,3}(\boldR)/\fraki,$ where $\fraki$ is trivial or
a sum of $F_1(\boldR),$ $F_{2a}(\boldR),$ and $F_{2b}(\boldR),$
where $F_1(\boldR)$
is as defined in Equation \eqref{F defs},
and $F_{2a}(\boldR)$ and $F_{2b}(\boldR)$ are as
in Proposition \ref{Fdecomp}.
If $n=3,$ then $\fraki$ contains $F_2(\boldR).$
}
\item{A Lie algebra
$\frakf_{n,r}(\boldR)/\fraki$ of type $S_n$ is Anosov
so long as the ideal $\fraki$ contains the
ideal $\frakj_{n,r}$ as defined in Definition \ref{j-def}.}\label{sn is this}
\end{enumerate}
\end{thm}
Note that when $G$ is $S_n,$ the Anosov Lie algebra need
not be of type $S_n$ as in Definition \ref{ideal-i}: the Lie algebra
$\frakf_{n,3}/F_{2a}(\boldR)$ is not of type $S_n$ although it admits
an Anosov automorphism with symmetric Galois group.
\begin{proof}
Let $\alpha_1, \ldots, \alpha_{n}$ denote the roots of
$p_1,$ and let $\calC_1 = \{\bfz_1, \ldots, \bfz_n\}$ be a set of
corresponding eigenvectors
of $f|_{V_1(K)}$ that is compatible with the rational structure
defined by $\calB.$
Let $\calC = \cup_{i=1}^r \calC_{i}$ be the Hall basis
of $\frakf_{n,r}(K)$ determined by $\calC_1.$
Suppose that the action of
the Galois group $G$ on $\{\alpha_1, \ldots, \alpha_n\}$
is two-transitive. Then the set of roots of $p_1$
has full rank by Lemma \ref{how it can factor}.
Since
the action of the group $G$ sends an eigenvector $\bfz_i$
to a multiple of another eigenvector $\bfz_j,$ the $G$ action
sends an element of
$\calC_2 = \{ [\bfz_k, \bfz_j]\}_{1 \le j < k \le n}$ to a scalar multiple of
an element of $\calC_2.$
Because the action is doubly transitive,
for any $\bfw$ in $ \calC_2,$ the rational invariant
subspace $E_G^K(\bfw)$ is all of $V_2(\boldR).$ By Theorem
\ref{actions}, the characteristic polynomial $p_2$
for the restriction of $f$ to $V_2(\boldR)$ is a
power of an irreducible polynomial. In fact, $p_2$ itself
is irreducible by Part \eqref{fr-p2-irr} of
Lemma \ref{how it can factor}.
Thus, the only proper rational invariant subspace of $V_2(\boldR)$ is
trivial, and when $r=2,$ the only two-step Anosov quotient of
$\frakf_{n,2}(\boldR)$ is itself.
This proves the first part of the theorem.
Now suppose $r=3$ and that the action of
the Galois group $G$ on $\{\alpha_1, \ldots, \alpha_n\}$
is three-transitive. Then $G$ is two-transitive, and by the
argument above,
the only proper rational invariant subspace of $V_2(\boldR)$ is $\{0\}.$
Recall from Proposition \ref{Fdecomp} that $V_3(\boldR)$ is the direct sum
$V_3(\boldR) = F_1 \oplus F_{2a} \oplus F_{2b}$ of the rational invariant
subspaces
$F_1, F_{2a},$ and $F_{2b}$ where $F_2 = F_{2a} \oplus F_{2b},$ and
the characteristic polynomial for the restrictions of $f$ to
$F_1$ is $q_1$ and the characteristic polynomials for the restriction
of $f$ to $F_{2a}$ and $F_{2b}$ are both $q_2.$
By the three-fold transitivity of $G$, the subspaces
$F_1$ and $F_{2}$ are each spanned by a single $G$-orbit; hence
$q_1$ and $q_2^2$ are powers of irreducibles. But by
Lemma \ref{how it can factor}, when $n\ge 3,$
$q_1$ is irreducible, so $F_1$ has
no nontrivial rational invariant subspaces, and when $n>3,$ the polynomial
$q_2$ is irreducible; hence the subspaces $F_{2a}$ and $F_{2b}$
are minimal nontrivial invariant subspaces.
If $n=3,$ then $\alpha_1 \alpha_2 \alpha_3 =\pm 1, $ so $q_2(x) =
x \pm 1,$ and $\fraki$ must contain $F_2.$
Therefore, in order for an ideal $\fraki$ of $\frakf_{n,3}(\boldR)$
to satisfy the Auslander-Scheuneman conditions relative
to $f$, it is necessary for $\fraki$ to be a sum of $\{0\}, F_1, F_{2a},$
and $F_{2b}.$ Thus, the
second part of the theorem holds.
Now we consider the case that $G$ is symmetric.
Assume that $\fraki_0$ is an ideal of
$\frakf_{n,r}(\boldR)$ of type $S_n$ relative to some Hall
basis $\calD,$ and $\frakf_{n,r}(\boldR)/\fraki_0$ is a Lie
algebra of type $S_n.$ Assume also that
$\fraki_0$ contains $\frakj_{n,r}.$ We need to show that $\fraki_0$
satisfies the Auslander-Scheuneman conditions relative to some automorphism
$f$ of $\frakf_{n,r}(\boldR).$
Let $A$ be the companion matrix to an
Anosov polynomial with Galois group $S_n,$ such as $p_1(x) = x^n - x - 1$
as in Proposition \ref{existence}.
Together, $A$ and a set of generators $\calB_1$ determine an
automorphism $f$ of $\frakf_{n,r}(\boldR)$ that is rational relative to the
Hall basis $\calB$ determined by $\calB_1.$ Let $\calC_1$ denote
the set of eigenvectors for $f$ in $V_1(\boldR),$ and let $\calC$ be the
corresponding Hall basis.
There is an ideal $\fraki$ isomorphic to $\fraki_0$ that is the
image of $\fraki_0$ under the isomorphism $g: \frakf_{n,r}(\boldR) \to \frakf_{n,r}(\boldR)$
defined by a bijection from $\calD$ to $\calC.$
The ideal $\fraki$ is invariant under
$f$ by Theorem \ref{actions}. By Remark \ref{as cond 2},
the restriction of $f$ to $\fraki$
is unimodular. The third of the
Auslander-Scheuneman conditions holds by the theory of
rational canonical forms.
The last condition is that all eigenvectors whose eigenvalues have
modulus one lie in $\fraki:$ this holds because
the set of
roots of $p_1$ has full rank by Proposition \ref{full rank},
and $\frakj_{n,r} < \fraki.$
Thus, $f$ descends to an Anosov automorphism of
$\frakf_{n,r}(\boldR)/\fraki \cong \frakf_{n,r}(\boldR)/\fraki_0.$
\end{proof}
We can completely describe Anosov Lie algebras whose associated
polynomials $p_1$ are irreducible of prime degree $n \ge 3$
with cyclic Galois group.
\begin{thm}\label{Cn}
Suppose that
$\frakn = \frakf_{n,r}(\boldR)/ \fraki$ is an $r$-step Anosov Lie
algebra of type $(n_1, \ldots, n_r)$
admitting an Anosov automorphism defined by
a semisimple hyperbolic matrix in $GL_n(\boldZ),$
rational Hall basis $\calB,$ the resulting automorphism
$f$ in $\Aut(\frakf_{n,r}),$
and an ideal $\fraki$ satisfying Auslander-Scheuneman conditions.
Let $(p_1, \ldots, p_r)$
be the associated $r$-tuple of polynomials. Suppose that $p_1$
is irreducible of prime degree
$n = n_1 \ge 3,$ and that $p_1$ has cyclic Galois
group $G.$
Then the ideal $\fraki$ is of cyclic type as in Definition \ref{ideal-i}, and
$\fraki$ contains $\frakj_{n,r},$ where
$\frakj_{n,r}$ is as defined in Definition \ref{j-def}.
Furthermore, $n = n_1$ divides $n_i$ for all $i=2, \cdots, r.$
Conversely, for any prime $n \ge 3,$ a Lie algebra
$\frakf_{n,r}(\boldR)/\fraki$
of cyclic type is Anosov, as long as the ideal $\fraki$ contains
$\frakj_{n,r}.$
\end{thm}
\begin{proof}
Let $\fraki, \frakf_{n,r}$ and $(p_1,\ldots, p_r)$
be as in the statement of the theorem.
By Remark \ref{j in i}, the ideal
$\frakj_{n,r}$ is contained in $\fraki.$
Recall that any irreducible polynomial in $\boldZ[x]$
of prime degree $n \ge 3$
and cyclic Galois group has totally real roots; hence $f$
has real spectrum.
Let $\calC_1$ be a basis of eigenvectors for $f|_{V_1(\boldR)}$ that is
compatible with the rational structure, and let
$\calC = \cup_{i=1}^r \calC_i$ be the Hall basis defined by
$\calC_1.$
By Theorem \ref{actions}, for any $\bfw$ in $\calC_i,$
the orbit
\[E_G^K(\bfw)= \myspan_\boldR \{ \sigma \cdot \bfw \, : \, \sigma \in G \}\]
is a
rational invariant subspace whose
characteristic polynomial is a power $r^s$ of
an irreducible polynomial $r.$ The
dimension $d$ of
$E_G^K(\bfw) $
is $n$ or less because $|G| = n.$
The dimension $d$ also satisfies $d = s \cdot \deg r.$
Because the splitting field for $r$ is a subfield of $\boldQ(p_1)$
and the Galois group $G$ for $p_1$ is isomorphic to the simple group
$C_n,$ either
$r$ is linear or $r$ is irreducible of degree $n$ and has Galois group $G.$
Hence, either
$\fraki(G,\bfw)$ is contained in $\fraki,$ or it is $n$-dimensional
and its intersection with $\fraki$ is trivial.
The ideal $\fraki$ is then a direct sum of subspaces
of form $E_G^K(\bfw),$ hence is of cyclic type.
Since
each step $V_i$ decomposes as the direct sum of $\fraki \cap V_i$
and $n$-dimensional subspaces
of the form $E_G^K(\bfw),$ for $\bfw \in \calC_i,$
the dimension of the $i$th step of the quotient
$\frakn$ is divisible by $n,$ for all $i =2,\ldots,r.$
Now let $\frakn$ be the quotient
of $\frakf_{n,r}(\boldR) = \oplus_{i=1}^r V_i$ by an ideal
$\fraki_0$ of cyclic type relative to some Hall basis
$\calD = \cup_{i=1}^r \calD_i$,
where $n \ge 3$ is prime, and suppose that
$\fraki_0 > \frakj_{n,r}$ where $\frakj_{n,r}$ is defined relative to $\calD$.
We will show that $\frakn$ is Anosov. By
Proposition \ref{existence}, there exists an Anosov polynomial
$p_1$ whose Galois group is cyclic of order $n.$ By using the companion
matrix to $p_1$ and a Hall basis $\calB$
we can define an automorphism $f$ of $\frakf_{n,r}(\boldR).$
Let $\calC_1$ be an eigenvector basis for $V_1$ that is compatible with
the rational structure defined by $\calB,$ and let $\calC$ be the
Hall basis defined by $\calC_1.$ Then there is an ideal $\fraki$
of $\frakf_{n,r}(\boldR)$ that is cyclic relative to the $G$ action on the Hall basis
$\calC$ and is
isomorphic to $\fraki_0.$
The ideal $\fraki$ is rational and invariant by Theorem \ref{actions}.
All we need to show is that the quotient map $\overline{f}$
on $\frakf_{n,r}(\boldR)/\fraki$
has no roots of modulus one. But we have already shown in the first
part of the proof that if a rational
invariant subspace $\fraki(G,\bfw)$ is not contained in $\fraki,$ then
the characteristic polynomial $r$ for the restriction of $f$ to
$\fraki(G,\bfw)$ has odd degree $n.$ By Remark \ref{roots of modulus one},
$r$ has no roots of modulus one. Hence $\overline{f}$ is Anosov.
\end{proof}
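A concrete polynomial satisfying the hypotheses of Theorem \ref{Cn} is the minimal polynomial of $2\cos(2\pi/11)$; this is a standard example, and we do not assume it appears in Table \ref{low-degree-list}.

```latex
The minimal polynomial of $2\cos(2\pi/11),$
\[ p_1(x) = x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1, \]
is irreducible of prime degree $5$ with cyclic Galois group $C_5,$ and
its roots $2\cos(2\pi k/11),$ $k = 1, \ldots, 5,$ are all real, with
none equal to $\pm 1.$ Its companion matrix therefore lies in
$GL_5(\boldZ)$ and is hyperbolic and semisimple, as the theorem requires.
```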
\section{Anosov automorphisms in low dimensions}
\label{2&3}
In this section we describe Anosov automorphisms of some
nilpotent Lie algebras that arise from Anosov polynomials of low degree.
\subsection{When $p_1$ is a product of quadratics}
We will analyze Anosov automorphisms for which the associated polynomial
$p_1$ is a product of quadratic polynomials. To do this we
need to define a family of two-step nilpotent Lie algebras.
\begin{definition}\label{graph} Let $\frakf_{2n,2}(\boldR) = V_1(\boldR) \oplus V_2(\boldR)$ be the
free two-step Lie algebra on $2n$ generators, where $n \ge 2.$ Let $\calC_1 =
\{\bfz_1, \ldots \bfz_{2n}\}$ be a set of generating vectors spanning $V_1(\boldR),$
and let $\calC$ be the Hall basis determined by $\calC_1.$
Let $S_1$ and $S_2$ be subsets of the set $\{ \{i,j\} \, : 1 \le i < j \le n\}$ of subsets of
$\{1, 2, \ldots, n\}$ of cardinality two.
To the subsets $S_1$ and $S_2$ associate the ideal
$\fraki(S_1, S_2)$ of $\frakf_{2n,2}(\boldR)$ defined by
\begin{multline}
\fraki(S_1, S_2) = \bigoplus_{i=1}^n \myspan \{ [\bfz_{2i-1},\bfz_{2i}] \}
\oplus \bigoplus_{\{i,j\} \in S_1} \myspan \{ [\bfz_{2i-1}, \bfz_{2j-1}],
[\bfz_{2i}, \bfz_{2j}]\}
\oplus \\
\bigoplus_{\{i,j\} \in S_2} \myspan \{ [\bfz_{2i-1}, \bfz_{2j}],
[\bfz_{2i}, \bfz_{2j-1}]\} \end{multline}
Define the two-step nilpotent Lie algebra $\frakn(S_1,S_2)$ to be
$\frakf_{2n,2}(\boldR)/ \fraki(S_1,S_2).$ A two-step Lie algebra
of this form will be said to be of {\em quadratic type}.
\end{definition}
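For a minimal concrete instance of the definition, consider the smallest allowed value $n = 2$:

```latex
With $n = 2,$ $S_1 = \{\{1,2\}\},$ and $S_2 = \emptyset,$ the ideal is
\[ \fraki(S_1, S_2) = \myspan \{ [\bfz_1,\bfz_2], [\bfz_3,\bfz_4] \}
\oplus \myspan \{ [\bfz_1,\bfz_3], [\bfz_2,\bfz_4] \}, \]
a four-dimensional subspace of the six-dimensional space $V_2(\boldR),$
so the quotient $\frakn(S_1, S_2)$ is of type $(4,2).$
```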
In the next theorem we classify two-step Anosov Lie algebras such that
the polynomial $p_1$ is a product of quadratics.
\begin{thm}\label{2+2+...}
Suppose that the polynomial $p_1$ associated to a semisimple
Anosov automorphism
$f$ of a two-step
Anosov Lie algebra $\frakn$ is the product of quadratic polynomials.
Then $\frakn$ is of quadratic type, as defined in Definition \ref{graph},
and all the eigenvalues of $f$ are real.
Furthermore, every two-step nilpotent Lie algebra of quadratic type is Anosov.
\end{thm}
\begin{proof}
Let $\frakf_{2n,2}(\boldR) = V_1(\boldR) \oplus V_2(\boldR)$ be the free
two-step nilpotent Lie algebra on $2n$ generators.
Let $\overline{f}$ be an Anosov automorphism
of $\frakn = \frakf_{2n,2}(\boldR)/\fraki$ defined by an
automorphism $f$ of $\frakf_{2n,2}(\boldR),$ a Hall basis $\calB,$
and an ideal $\fraki$ satisfying the Auslander-Scheuneman conditions.
Without loss of generality,
assume that $\fraki < V_2(\boldR).$
Let $(p_1,p_2)$ denote the pair of polynomials
associated to $f,$ and assume that the polynomial
$p_1$ of degree $2n$ is the product
of $n$ quadratic Anosov polynomials $r_1, \ldots, r_n.$
The roots of a quadratic Anosov polynomial are real: a nonreal
conjugate pair of roots would have modulus one, since its product is $\pm 1.$
All the eigenvalues of the Anosov automorphism $\overline{f}$
are products of roots of $p_1$ indexed by Hall words, hence are real.
The subspace $V_1(\boldR)$ decomposes as the direct sum
$\oplus_{i=1}^n E_i$ of rational invariant subspaces such that
the characteristic polynomial for $f|_{E_i}$ is $r_i.$
For $i=1, \ldots, n,$ let $\bfz_{2i -1}$ and $\bfz_{2i}$ denote eigenvectors
in $E_i$ with eigenvalues $\alpha_{2i -1}$ and $\alpha_{2i}$
respectively. We may assume without loss of generality
that $\alpha_{2i -1} > 1 > \alpha_{2i} = \alpha_{2i-1}^{-1}.$
As in Example \ref{qr}, the polynomial $p_2$ may be written as
\[ p_2 = \prod_{i=1}^n (r_i \wedge r_i) \, \times \prod_{1 \le i < j \le n}
(r_i \wedge r_j), \]
and this factorization corresponds to a decomposition
$V_2(\boldR) = \oplus_{1 \le i \le j \le n}[E_i,E_j]$ of $V_2(\boldR)$
into rational invariant subspaces.
For all $i=1, \ldots, n,$ the polynomial
$r_i \wedge r_i$ is linear with
root $\alpha_{2i-1} \alpha_{2i} = 1,$ so
$(r_i \wedge r_i)(x) = x - 1.$ Therefore, the
ideal $\fraki$ must contain the $n$-dimensional subspace
$\oplus_{i=1}^n [E_i,E_i]$ of $V_2(\boldR).$
For $i \ne j,$
the polynomial $r_i \wedge r_j$ is given by
\[ (r_i \wedge r_j)(x) = (x -\alpha_{2i-1}\alpha_{2j-1})
(x-\alpha_{2i-1}
\alpha_{2j-1}^{-1})(x-\alpha_{2i-1}^{-1}\alpha_{2j-1})
(x-\alpha_{2i-1}^{-1}\alpha_{2j-1}^{-1}). \]
Minimal nontrivial invariant subspaces of
$[E_i,E_j]$ correspond to factorizations of $r_i \wedge r_j$
over $\boldZ.$
If the splitting fields $\boldQ(r_i)$ and $\boldQ(r_j)$
do not coincide, then they are linearly disjoint,
and $\boldQ(r_i \wedge r_j) = \boldQ(r_i)\boldQ(r_j)$
is a biquadratic extension of $\boldQ.$
Therefore, $r_i \wedge r_j$ is irreducible, and $[E_i,E_j]$
has no nontrivial rational invariant subspaces.
Now suppose that the splitting fields
$\boldQ(r_i)$ and $\boldQ(r_j)$ are equal, for $i \ne j$.
Since the roots of Anosov quadratics are real, the field $\boldQ(r_i)$ is
a totally real quadratic extension of $\boldQ.$
By Dirichlet's unit theorem, there is a fundamental unit $\eta$
in $\boldQ(r_i)$ such
that any unit $\beta$ in $\boldQ(r_i)$ can be expressed
as $\beta = \zeta^a \eta^{b},$
where $\zeta = \pm 1,$ $a \in \{0, 1\},$ and $b \in \boldZ.$
We may choose $\eta > 1.$
The Galois group for $r_i$ is generated by the automorphism of
$\boldQ(\eta)$ mapping $\eta$ to $\eta^{-1}.$
We can write
\[ \alpha_{2i-1} = \eta^{b_i}, \alpha_{2i} = \eta^{-b_i},
\alpha_{2j-1} = \eta^{b_j}, \text{and} \quad \alpha_{2j} =
\eta^{-b_j}, \] where $b_i$ and $b_j$ are in $\boldZ^+.$
The four roots
of $ (r_i \wedge r_j)(x)$ are then the numbers $\eta^{\pm b_i \pm b_j}.$
Therefore, $r_i \wedge r_j$ factors over $\boldZ[x]$ as the product of
two quadratics in $\boldZ[x]$
\begin{align*}
(x-\eta^{b_i + b_j})(x-\eta^{-b_i - b_j}) &= (x-\alpha_{2i-1}\alpha_{2j-1})
(x-\alpha_{2i}\alpha_{2j}), \quad \text{and} \\
(x-\eta^{b_i - b_j})(x-\eta^{-b_i + b_j}) &= (x-\alpha_{2i-1}\alpha_{2j-1}^{-1})(x-\alpha_{2i-1}^{-1}\alpha_{2j-1}).
\end{align*}
The first polynomial is irreducible since $b_i + b_j > 0,$ and $\eta$
is not a root of unity.
If $\alpha_{2i-1} = \alpha_{2j-1},$
then $b_i = b_j$ and the second polynomial is equal to $(x-1)^2.$
This analysis shows that $\fraki \cap [E_i,E_j]$
must be one of the subspaces $\{0\},$ $[E_i,E_j],$
\[
E_{i,j} = \myspan \{ [\bfz_{2i-1},\bfz_{2j-1}], [\bfz_{2i}, \bfz_{2j}]\}
\quad \text{and} \quad
E_{i,j}^\prime = \myspan \{ [\bfz_{2i-1},\bfz_{2j}], [\bfz_{2i}, \bfz_{2j-1}]\}. \]
Then $\fraki$ is the direct sum of $\oplus_{i=1}^n [E_i,E_i],$
subspaces of the form $E_{i,j}$ and subspaces of the form
$E_{i,j}^{\prime}.$ Therefore, $\frakn$ is of quadratic type.
Conversely, we show that
for any ideal $\fraki$ of quadratic type in $\frakf_{2n,2}(\boldR),$
the quotient $\frakn= \frakf_{2n,2}(\boldR)/\fraki$ is an Anosov Lie algebra.
Suppose that $\fraki(S_1,S_2)$ is an ideal as in the definition
of quadratic type. Fix a fundamental unit $\eta$ in a totally real
quadratic extension of $\boldQ.$ Let $b_1, \ldots, b_n$ be
distinct positive integers.
For each $i=1, \ldots, n,$
define the polynomial $r_i$ in $\boldZ[x]$ by
\[ r_i(x) = (x - \eta^{b_i})(x - \eta^{-b_i})\]
and let $p_1 = r_1 \cdots r_n.$ Then for all $i \ne j,$
the polynomial
$r_i \wedge r_j$ factors over
$\boldZ$ as the product of pairs of irreducible quadratic polynomials
\[ (x- \eta^{b_i + b_j})(x - \eta^{-b_i - b_j}) \quad \text{and} \quad
(x- \eta^{b_i - b_j})(x - \eta^{-b_i + b_j}) \]
in $\boldZ[x],$ neither having roots of modulus one.
The two factors give two rational invariant subspaces of the form
$E_{i,j}$ and $E_{i,j}^\prime.$
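As a numerical sketch of this construction (our choice of field, unit, and exponents, made only for illustration): take $\boldQ(\sqrt 2),$ let $\eta = 3 + 2\sqrt 2$ be the smallest unit greater than one whose conjugate is its inverse, and take $b_1 = 1,$ $b_2 = 2.$ Then

```latex
\[ r_1(x) = x^2 - 6x + 1, \qquad r_2(x) = x^2 - 34x + 1, \]
since $\eta + \eta^{-1} = 6$ and $\eta^2 + \eta^{-2} = 34.$ The roots
of $r_1 \wedge r_2$ are $\eta^{\pm 3}$ and $\eta^{\pm 1},$ and since
$\eta^3 + \eta^{-3} = 198,$
\[ (r_1 \wedge r_2)(x) = (x^2 - 198x + 1)(x^2 - 6x + 1), \]
a product of two irreducible quadratics in $\boldZ[x],$ neither having
a root of modulus one.
```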
Therefore the ideal
\[ \fraki = \bigoplus_{i=1}^n [E_i,E_i] \oplus
\bigoplus_{\{i,j\} \in S_1} E_{i,j} \oplus
\bigoplus_{\{i,j\} \in S_2} E_{i,j}^\prime\]
satisfies the four Auslander-Scheuneman conditions and is of the
form $\fraki(S_1,S_2)$ with respect to the appropriate Hall basis
of $\frakf_{2n,2}(\boldR).$
This completes the proof of the theorem.
\end{proof}
\subsection{When $p_1$ is a cubic}
We can classify all Anosov Lie algebras
of type $(3, \ldots, n_r)$ with $r=2$ or $r=3.$
\renewcommand{\arraystretch}{2}
\begin{table}
\begin{tabular}{|l | l | l | l |}
\hline
$\frakn = \frakf_{n,r}(\boldR)/\fraki$
& type & ideal $\fraki$ & reference for definition of $\fraki$ \\
\hline
\hline
$\frakf_{3,2}$ & $(3,3)$ & $\{0\}$ &\\
\hline
\hline
$\frakf_{4,2}$ & $(4,6)$ & $\{0\}$ & \\
\hline
$\frakf_{4,2}/\fraki$
& $(4,4)$ & $\fraki(V_4, [\bfz_2,\bfz_1])$
& Definition \ref{ideal-i} \\
\hline
$\frakf_{4,2}/\fraki \cong \frakh_3 \oplus \frakh_3$
& $(4,2)$ & $\fraki(C_4, [\bfz_2,\bfz_1])$ &
Definition \ref{ideal-i}\\
\hline
\hline
$\frakf_{5,2}$ & $(5,10)$ & $\{ 0 \}$ & \\
\hline
$\frakf_{5,2}/\fraki$ & $(5,9)$ & $\fraki_1$ & Definition \ref{2+3-i}\\
\hline
$\frakf_{5,2}/\fraki$ & $(5,6)$ & $\fraki_1 \oplus \fraki_3$ & Definition
\ref{2+3-i} \\
\hline
$\frakf_{5,2}/\fraki$ & $(5,5)$ & $\fraki(C_5, [\bfz_2,\bfz_1])$ &
Definition \ref{ideal-i} \\
\hline
$\frakf_{5,2}/\fraki$ & $(5,5)$ & $\fraki_2$ &
Example \ref{D10} \\
\hline
$\frakf_{5,2}/\fraki \cong \boldR^2 \oplus \frakf_{3,2}$
& $(5,3)$ & $\fraki_1 \oplus \fraki_2$
& Definition \ref{2+3-i} \\
\hline
\end{tabular}
\bigskip
\caption{\label{r=2}
Two-step Anosov Lie algebras of type $(n_1,n_2)$ with $n_1 \le 5$}
\end{table}
\begin{thm}\label{n1=3}
If $\frakn$ is a two-step Anosov Lie algebra
of type $(3, n_2),$ then $\frakn \cong \frakf_{3,2}.$
If $\frakn$ is a three-step Anosov Lie algebra of type
$(3, n_2, n_3),$ then
$\frakn$ is isomorphic to $\frakf_{3,3}/\fraki,$ where either
$\fraki = F_2,$ with $F_2$ as defined in Equation
\eqref{F defs}, giving type $(3,3,6),$ or
$\fraki = F_2 \oplus \fraki(C_3, [[\bfz_2, \bfz_1],\bfz_2]),$
giving type $(3,3,3),$
where $\fraki(C_3, [[\bfz_2,\bfz_1],\bfz_2])$ is as defined in
Definition \ref{ideal-i}.
\end{thm}
\begin{proof}
Suppose that $A$ is a semisimple hyperbolic
matrix in $GL_n(\boldZ)$ with associated triple of polynomials
$(p_1, p_2, p_3).$ Let $\calB_1$ be a generating set for
$\frakf_{3,3}$ and let $\calB$ be the Hall basis determined by
$\calB_1.$ Let
$\alpha_1, \alpha_2, \alpha_3$ denote the roots of $p_1.$
Since
$p_1$ cannot have $1$ or $-1$ as a root, and any rational root of $p_1$
would be $\pm 1,$ the cubic $p_1$ is irreducible over
$\boldZ,$ and the Galois group $G$ of
the splitting field of $p_1$ is either $C_3$ or $S_3.$
As demonstrated in Example \ref{free-3,2}, the cubic polynomial
$p_2$ is irreducible and Anosov,
so there are no Anosov Lie algebras of type $(3,n_2)$
other than $\frakf_{3,2}.$
Let $\fraki$ be an ideal so that $f$ descends to an Anosov
automorphism of a three-step nilpotent Lie algebra $\frakf_{3,2}/\fraki.$
If $G$ is symmetric, then $\fraki = F_2$ by Theorem \ref{Sn}.
If $G$ is cyclic,
by Theorem \ref{Cn}, $\fraki$ is either $F_2$ or it is
an ideal of form
$F_2 \oplus \fraki(C_3, \bfw),$ for $\bfw \in \calC_3^{\prime}.$
Each Lie algebra that is listed may be realized
by choosing an appropriate Anosov polynomial from Table \ref{low-degree-list}
and using its companion matrix $A$ to define an automorphism of
$\frakf_{3,2}$ or $\frakf_{3,3}.$ When $r=3,$ by Lemma \ref{odd-good}, all
vectors with eigenvalue $\pm 1$ will be in the kernel $F_2 = \frakj_{3,3}.$
\end{proof}
\subsection{When $p_1$ is a quartic}
Now we consider the case that $p_1$ is a quartic Anosov polynomial.
The next lemma is useful for understanding Anosov
Lie algebras of type $(4,n_2).$
\begin{lemma}\label{4-galois} Let $(p_1, p_2)$ be the
pair of polynomials associated to an irreducible
Anosov polynomial $p_1$ of degree four.
Let $G$ denote the Galois group of the splitting field for
$p_1.$ Then
\begin{enumerate}
\item{$G\cong S_4$ or $G \cong A_4$ if and only if $ p_2$ is irreducible.}
\item{$G \cong C_4$ or $G \cong D_8$ if and only if $ p_2$ has an irreducible
quartic factor. }
\item{$G \cong V_4$ if and only if $ p_2$ has no irreducible factors of
degree three or more.}
\end{enumerate}
Furthermore, roots of $p_2$ come in reciprocal pairs $\beta$ and
$\pm \beta^{-1}.$
\end{lemma}
\begin{proof} Let $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ denote the
four distinct roots of $p_1.$ Then the roots of $ p_2$ are the six
numbers $\alpha_i \alpha_j,$ where $1 \le i< j \le 4.$
Because $\alpha_1\alpha_2\alpha_3\alpha_4 = \pm 1,$
roots of $p_2$ come in pairs such as $\alpha_1\alpha_2$
and $\alpha_3\alpha_4 = \pm (\alpha_1\alpha_2)^{-1}.$
The resolvent cubic $r$ for $p_1$ has roots
\[ \beta_{1}=\alpha_1 \alpha_2 + \alpha_3 \alpha_4, \quad \beta_2= \alpha_1
\alpha_3 + \alpha_2 \alpha_4, \quad \text{and} \quad\beta_3 = \alpha_1
\alpha_4 + \alpha_2 \alpha_3.\]
Recall that one of three things must occur:
(1) that none of $\beta_1,\beta_2,\beta_3$ lies in
$\boldQ,$ $r$ is irreducible, and $G\cong S_4$ or $G \cong A_4,$
(2) exactly one
of $\beta_1,\beta_2,\beta_3$ lies in $\boldQ,$ $r$ is the product of
an irreducible quadratic and a linear factor, and $G\cong C_4$ or $G \cong D_8,$
and (3) $\beta_1,\beta_2,\beta_3$ all lie in $\boldQ,$ $r$
splits over $\boldQ,$ and $G\cong V_4.$
Since $\alpha_1\alpha_2\alpha_3\alpha_4 = \pm 1,$
$\beta_1 = \alpha_1 \alpha_2 + \alpha_3 \alpha_4$ is in $\boldQ$
if and only if
$(x - \alpha_1 \alpha_2)(x - \alpha_3 \alpha_4)$ is a quadratic factor
of $p_2$ over $\boldQ.$
Factors that are the counterparts from $\beta_2$ and $\beta_3,$
\begin{equation}\label{factorsabove}
(x - \alpha_1 \alpha_3)(x - \alpha_2 \alpha_4) \qquad \text{and} \qquad
(x - \alpha_1 \alpha_4)(x - \alpha_2 \alpha_3),
\end{equation}
are in $\boldQ[x]$ according to whether $\beta_2$ and $\beta_3,$
respectively, are in $\boldQ.$
Therefore, when all of $\beta_1, \beta_2, \beta_3$ lie in
$\boldQ,$ $p_2$ factors as the product of quadratics in
$\boldQ[x],$ establishing the claim in Case (3).
In Case (1), $p_2$ is irreducible by Theorem \ref{Sn}, Part \eqref{2,2}.
In the second case, when $G$ is $C_4$ or $D_8,$ there is a $G$-orbit
of cardinality four, say
$\{\alpha_1\alpha_3, \alpha_1\alpha_4,\alpha_2\alpha_3,
\alpha_2\alpha_4\},$ that by Theorem \ref{actions} yields
a factor
\[ q(x)=(x - \alpha_1 \alpha_3)(x - \alpha_2 \alpha_4)(x - \alpha_1
\alpha_4) (x - \alpha_2 \alpha_3) \]
of $p_2,$
and an orbit of cardinality two
corresponding to a factor
$(x - \alpha_1 \alpha_2)(x - \alpha_3 \alpha_4)$ of $p_2.$
Then $\beta_1 \in \boldQ$ and $\beta_2, \beta_3 \not \in \boldQ.$
By Theorem \ref{actions},
if $q$ were to factor, it would be a power of an irreducible.
Knowing that $\beta_2, \beta_3 \not \in \boldQ$ rules out
the polynomials in Equation \eqref{factorsabove} as factors of $q.$
The only other
options for factors yield contradictions to the distinctness of roots of
$p_1.$ Thus, $q$ is an irreducible quartic factor of $p_2.$
\end{proof}
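As a check on the trichotomy in the proof, consider the quartic $p_1(x) = x^4 - x - 1$; this is a standard example, and we do not assume it appears in Table \ref{low-degree-list}.

```latex
The quartic $p_1(x) = x^4 - x - 1$ is irreducible over $\boldQ,$ has
constant term $-1,$ and has no roots of modulus one, so its companion
matrix is a hyperbolic element of $GL_4(\boldZ).$ Its resolvent cubic is
\[ r(x) = x^3 + 4x - 1, \]
which has no rational root, since $r(1) = 4$ and $r(-1) = -6;$ hence
none of $\beta_1, \beta_2, \beta_3$ lies in $\boldQ$ and we are in
Case (1). The discriminant of $p_1$ is $-283,$ which is not a square,
so $G \cong S_4,$ and by the lemma $p_2$ is an irreducible sextic.
```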
Now we are ready to
classify two-step Anosov Lie algebras of type $(4,n_2)$ using
the methods we have established.
\begin{thm}\label{n=4,r=2}
If $\frakn$ is an Anosov Lie algebra of type $(4, n_2),$
then $\frakn$ is one of the Anosov Lie algebras listed in Table \ref{r=2}.
\end{thm}
\begin{proof}
Suppose that $f$ is a semisimple automorphism of $\frakf_{4,2}(\boldR)
= V_1(\boldR) \oplus V_2(\boldR)$
that projects to an Anosov automorphism of
a two-step quotient $\frakn = \frakf_{4,2}/\fraki.$
Let $(p_1,p_2)$ be the polynomials associated to
$f,$ and let $K$ be the splitting field
for $p_1.$ Let $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ be the roots
of $p_1$ and let $\bfz_1, \bfz_2, \bfz_3, \bfz_4$
be corresponding eigenvectors for $f_K$ in $V_1(K).$
If $p_1$ is reducible, since it is Anosov, it is a product of
quadratics, and Theorem \ref{2+2+...} describes the Anosov Lie
algebras in that situation: $\fraki$ is one of $\fraki(V_4,[\bfz_2,\bfz_1])$
and $\fraki(C_4,[\bfz_2,\bfz_1]).$
Assume that $p_1$ is
irreducible.
The Galois group $G$ of $p_1$ is then a transitive
permutation group of degree four.
By Lemma \ref{4-galois}, if $G$ is $S_4$ or $A_4,$ the polynomial
$p_2$ is irreducible, so $V_2(\boldR)$ has no nontrivial
proper rational invariant subspaces; hence,
$\fraki = \{0\},$ and $\frakn$ is free.
If $G$ is $D_8$ or $C_4,$ then by considering the
action of $G$ on the set of roots it can be seen
that $\frakf_{4,2}(K)$ is the direct sum of a
four-dimensional rational $f_K$-invariant
subspace $\fraki_1$ and a two-dimensional rational $f_K$-invariant subspace
$\fraki_2.$ If the roots are real, these may be represented
as follows, renumbering the roots if necessary,
\[
\fraki_1 = \fraki(C_4,[\bfz_2,\bfz_1]) = \fraki(D_8,[\bfz_2,\bfz_1]), \quad \text{and} \quad
\fraki_2 = \fraki(C_4,[\bfz_3,\bfz_1]) = \fraki(D_8,[\bfz_3,\bfz_1]). \]
By Lemma \ref{4-galois}, the minimal polynomial for
$\fraki_1$ is irreducible, so $\fraki_1$ is
a minimal nontrivial invariant subspace. If there are complex roots,
a short computation shows that the rational invariant
subspaces $\fraki_1^\prime$ and $\fraki_2^\prime$ of $\frakf_{4,2}(K)$
yield rational invariant subspaces $(\fraki_1^\prime)^\boldR$ and
$(\fraki_2^\prime)^\boldR$ of $\frakf_{4,2}(\boldR)$ isomorphic
to $\fraki_1$ and $\fraki_2.$
The characteristic polynomial for
$\fraki_2$ is either irreducible or it has roots
$\pm 1,$ in which case $\fraki_2$ must be
contained in the ideal $\fraki.$ Thus, when $G = C_4$
or $D_8,$ the ideal $\fraki$ is $\{0\}, \fraki_1$
or $\fraki_2.$
Finally, if $G$ is the Klein four-group, by the same reasoning,
$V_2(K)$ is the direct sum of three two-dimensional ideals that are
either minimal or must be contained in $\fraki.$ Then $\frakn$
is of quadratic type.
Anosov automorphisms of all types $(4,n_2)$ may be realized
by choosing an appropriate polynomial $p_1$ from Table \ref{low-degree-list}.
By Remark \ref{roots of modulus one}, the polynomial $p_2$ defined by such
a $p_1$ cannot have any nonreal roots of modulus one unless it is
self-reciprocal.
The only polynomial listed in Table \ref{low-degree-list} that is
self-reciprocal is the one for $V_4,$ so
$p_2$ will not have roots of modulus one unless $G = V_4,$ in which case
the eigenspaces of those roots lie in the ideal $\fraki.$
\end{proof}
\subsection{When $p_1$ is a quintic}
First we define some two-step nilpotent
Lie algebras on five generators.
\begin{definition}\label{2+3-i}
Let $\frakf_{5,2} = V_1(\boldR) \oplus V_2(\boldR)$ be a free Lie algebra on five generators
$\{ \bfz_i \}_{i=1}^5.$ Define subspaces $E_1$ and $E_2$ of
$V_1(\boldR)$ by $E_1 = \myspan_{\boldR} \{ \bfz_1, \bfz_2 \}$
and let $E_2 = \myspan_{\boldR} \{\bfz_3, \bfz_4, \bfz_5\},$
and define ideals of $\frakf_{5,2}$ by
\[
\fraki_1 = [E_1,E_1], \quad
\fraki_2 = [E_1,E_2], \quad \text{and} \quad
\fraki_3 = [E_2,E_2]. \]
Define two-step Lie algebras by
\[
\frakn_1 = \frakf_{5,2}/ \fraki_1, \quad
\frakn_2 = \frakf_{5,2}/ (\fraki_1 \oplus \fraki_2), \quad \text{and} \quad
\frakn_3 = \frakf_{5,2}/ (\fraki_1 \oplus \fraki_3).
\]
These are
of types $(5,9)$, $(5,3)$ and $(5,6)$ respectively. Note that
$\frakn_2 \cong \boldR^2 \oplus \frakf_{3,2}.$
\end{definition}
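The types stated in the definition follow from a dimension count in $V_2(\boldR),$ which has dimension $\binom{5}{2} = 10$:

```latex
\[ \dim \fraki_1 = 1, \qquad \dim \fraki_2 = \dim E_1 \cdot \dim E_2 = 6,
\qquad \dim \fraki_3 = \binom{3}{2} = 3, \]
so the second steps of $\frakn_1,$ $\frakn_2,$ and $\frakn_3$ have
dimensions $10 - 1 = 9,$ $10 - 1 - 6 = 3,$ and $10 - 1 - 3 = 6,$
respectively.
```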
\begin{thm}\label{n1=5}
Suppose that $\frakn$ is a two-step
nilpotent Lie algebra
of type $(5, n_2)$ admitting an Anosov automorphism $f.$
Then $\frakn$ is one of the Lie algebras listed in Table \ref{r=2}.
Furthermore, all of the Lie algebras of
type $(5, n_2)$ in Table \ref{r=2} are Anosov.
\end{thm}
\begin{proof}
Let $(p_1,p_2)$ be the pair of polynomials associated to an
automorphism $f$ of $\frakf_{5,2} = V_1(\boldR) \oplus V_2(\boldR)$ that projects
to an Anosov automorphism of an Anosov Lie algebra
$\frakn = \frakf_{5,2}/\fraki,$
for some ideal $\fraki$ of $\frakf_{5,2}$ satisfying the
Auslander-Scheuneman conditions.
Without loss of generality we assume that the
roots of $p_1$ have product 1.
First suppose that $p_1$ is irreducible, so that its Galois group $G$ is
isomorphic to $S_5, A_5, D_{10}, C_{5}$ or the holomorph $\Hol(C_5)$ of $C_5.$
If $G$ is isomorphic to one of
$S_5,$ $A_5,$ or $\Hol(C_5),$ then the action of
$G$ on the roots of $p_1$ is two-transitive, so
by Theorem \ref{Sn},
$\frakn$ is isomorphic to $\frakf_{5,2}.$
The case that the Galois group is $D_{10}$ was considered in Example
\ref{D10}, where it was found that $\frakn$ is either free or
isomorphic to one of the Lie algebras $\frakn_1$ and $\frakn_2$ of
Example \ref{D10}, both of type
$(5,5).$ The case of $C_5$ is covered by Theorem \ref{Cn}.
Thus, in all possible cases, $\frakn$ is one of the
Lie algebras listed in Table \ref{r=2}. Each example may be realized
by choosing a polynomial $p_1$ from
Table \ref{low-degree-list}, and using its companion matrix to
define an automorphism of $\frakf_{5,2}.$
To get $\frakn_1,$ one needs to choose $p_1$ with all real
roots, and to get $\frakn_2,$ one needs $p_2$ to have
four nonreal roots. Examples of both kinds are in the table.
The associated polynomials
$p_2$ have no roots of modulus one by Lemma \ref{odd-good}.
Now suppose that the Anosov polynomial $p_1$ is the product of a
quadratic Anosov polynomial $r_1$
and a cubic Anosov polynomial $r_2$. Let $E_1$ and $E_2$
denote the rational invariant subspaces of $V_1(\boldR)$ corresponding
to $r_1$ and $r_2$ respectively.
Because $r_1$ is quadratic,
$ r_1 \wedge r_1 = x \pm 1,$ so $\fraki_1 = [E_1,E_1]$ must be
contained in $\fraki.$ As seen in Example \ref{free-3,2}, since
$r_2$ is a cubic, the polynomial
$r_2 \wedge r_2 $ is irreducible and Anosov.
Therefore, the subspace $\fraki_3 = [E_2,E_2]$ is a minimal nontrivial invariant subspace of $V_2(\boldR).$
Let $\alpha_1$ and $\alpha_2$ denote the roots of $r_1,$ while
$\alpha_3, \alpha_4, \alpha_5$ are the roots of $r_2.$ Then
$\alpha_1\alpha_3$ is a root of
$r_1 \wedge r_2.$ By standard arguments, $[\boldQ(\alpha_1\alpha_3): \boldQ] = 6,$ so $r_1 \wedge r_2$ is the minimal polynomial of $\alpha_1\alpha_3.$
Therefore $r_1 \wedge r_2$ is
irreducible and $\fraki_2 = [E_1, E_2]$ is a minimal nontrivial
rational invariant subspace.
The subspace
$V_2(\boldR)$ decomposes as the sum $\fraki_1 \oplus \fraki_2 \oplus \fraki_3$
of minimal nontrivial invariant subspaces, where
$\fraki_1, \fraki_2$ and $\fraki_3$ are as in Definition \ref{2+3-i},
and the only possibilities for an ideal $\fraki$ defining a
two-step Anosov quotient are $\fraki_1, \fraki_1 \oplus \fraki_2$ and
$\fraki_1 \oplus \fraki_3,$ as claimed.
Choosing $r_1$
and $r_2$ to be arbitrary Anosov polynomials of degree two and three
respectively will yield an Anosov polynomial $p_1 = r_1 r_2$ such that
the corresponding
automorphism $f$ of $\frakf_{5,2}$ admits quotients of all types
listed in the table. If $r_2$ is chosen to have real roots, then
the roots of $p_2$ are real, and
$r_1 \wedge r_2$ has no roots of modulus one.
\end{proof}
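The final step of the proof can be illustrated numerically. In the sketch below, $r_1 = x^2 - 3x + 1$ and $r_2 = x^3 - 3x + 1$ are our own sample choices of a quadratic and a totally real cubic with constant term $\pm 1$ and no roots of modulus one (assumed here to be Anosov); the script forms the polynomial $r_1 \wedge r_2$ whose roots are the pairwise products of the roots of $r_1$ and $r_2,$ and checks that it is a monic integer polynomial of degree six with constant term $\pm 1,$ all roots real, and no roots of modulus one.

```python
import numpy as np
from itertools import product

# Sample choices (assumed Anosov): monic integer polynomials with constant
# term +-1, only real roots, and no roots of modulus one.
r1 = [1, -3, 1]      # x^2 - 3x + 1  (quadratic)
r2 = [1, 0, -3, 1]   # x^3 - 3x + 1  (totally real cubic)

alphas = np.roots(r1)
betas = np.roots(r2)

# r1 /\ r2: the monic polynomial whose roots are all pairwise products.
prods = [a * b for a, b in product(alphas, betas)]
coeffs = np.poly(prods)
int_coeffs = np.rint(coeffs.real).astype(int)

assert len(prods) == 6                                # degree 2 * 3 = 6
assert np.allclose(coeffs, int_coeffs, atol=1e-8)     # integer coefficients
assert abs(int_coeffs[-1]) == 1                       # constant term +-1
assert all(abs(abs(z) - 1) > 1e-6 for z in prods)     # no roots of modulus one
assert np.allclose(np.imag(prods), 0, atol=1e-8)      # all roots real
print(int_coeffs)
```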
\section{Proofs of main theorems}\label{summary}
Now we provide proofs for the theorems presented in Section \ref{introduction}.
\begin{proof}[Proof of Theorem \ref{main}.]
Suppose that $\frakn$ is a two-step Anosov Lie algebra of type $(n_1,n_2)$
with associated polynomials $(p_1,p_2).$
If $n_1 = 3, 4,$ or $5,$ then $\frakn$ is one of the Lie algebras in
Table \ref{r=2}, by Theorems \ref{n1=3}, \ref{n=4,r=2}
and \ref{n1=5}. Therefore,
Part \eqref{classify-lowdim} of Theorem \ref{main} holds.
The second part follows immediately from Part \eqref{2,2} of
Theorem \ref{Sn}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Sn-Cn}.]
The first part of the theorem follows immediately from Theorem \ref{Cn}.
The second part is a consequence of Theorem \ref{Sn}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{general-properties}.]
Corollaries \ref{prime-dim} and \ref{dimensions} imply
the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{spectrum}.]
Suppose that the spectrum of $f$ is in $\boldQ(\sqrt{b}).$ Then
the polynomial $p_1$ associated to $f$
is a product of quadratics, each of whose roots lie in $\boldQ(\sqrt{b}).$
Theorem \ref{2+2+...} implies that $\frakn$ is one of the
Lie algebras defined in Definition \ref{graph}.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\label{section:intro}
\subsection{Jordan property}
The \textit{Cremona group of rank $n$} is the group $\Cr_n(\Bbbk)$ of birational transformations of
the projective space $\mathbb P^n$ over a field $\Bbbk$.
It has been actively studied from various points of view for many years
(see~\cite{Hudson1927}, \cite{CantatLamy}, \cite{Deserti2012}, \cite{Dolgachev-Iskovskikh}, \cite{Serre-2008-2009}, \cite{Cantat2016}, and references therein).
One of the approaches to this huge group is to
try to understand its finite subgroups.
It turned out to be possible to obtain a complete
classification of finite subgroups of $\Cr_2(\Bbbk)$ over an algebraically closed
field $\Bbbk$ of characteristic $0$ (see~
\cite{Bayle-Beauville-2000}, \cite{Beauville2004}, \cite{Blanc2009}, \cite{Dolgachev-Iskovskikh},
\cite{Tsygankov2011e}, and~\cite{Prokhorov-stable-conjugacy-II}),
and to obtain partial classification results
for~\mbox{$\Cr_3(\Bbbk)$}
(see~\cite{Prokhorov2009e},
\cite{Prokhorov2011a},
\cite{Prokhorov-2-elementary},
\cite{Prokhorov-2013-im}, and~\cite{Prokhorov-planes}).
Some results are also known over fields that are not algebraically closed;
see e.g. \cite{Serre2009}, \cite{Dolgachev-Iskovskikh-2009}, and~\cite{Yasinsky2016}.
In general, it is partially known and partially expected that
the collection of finite subgroups of a Cremona group
shares certain features with the collection of finite subgroups
of a group~\mbox{$\GL_m(\Bbbk)$}.
\begin{theorem}[C.\,Jordan, {see e.\,g.~\cite[Theorem 36.13]{Curtis-Reiner-1962}}]
\label{theorem:Jordan}
There is a constant~\mbox{$I=I(n)$} such that for any
finite subgroup $G\subset\GL_n(\C)$
there exists
a normal abelian subgroup~\mbox{$A\subset G$} of index at most $I$.
\end{theorem}
This leads to the following definition
(cf.~\cite[Definition~2.1]{Popov2011}).
\begin{definition}
\label{definition:Jordan}
A group $\Gamma$ is called \emph{Jordan}
(alternatively, we say
that~$\Gamma$ \emph{has
Jordan property})
if there is a constant $J$ such that
for any finite subgroup $G\subset\Gamma$ there exists
a normal abelian subgroup~\mbox{$A\subset G$} of index at most $J$.
\end{definition}
Theorem~\xref{theorem:Jordan} implies that
all linear algebraic groups
over an arbitrary field $\Bbbk$ of characteristic $0$
are Jordan.
Jordan property was also studied recently for groups
of birational automorphisms of algebraic varieties.
The starting point here was the following result of
J.-P.\,Serre.
\begin{theorem}[J.-P.\,Serre
{\cite[Theorem~5.3]{Serre2009}, \cite[Th\'eor\`eme~3.1]{Serre-2008-2009}}]
The Cremona group~\mbox{$\Cr_2(\Bbbk)$} over a field $\Bbbk$ of characteristic $0$ is Jordan.
\end{theorem}
\begin{remark}
Note that the assumption about characteristic is indispensable.
Indeed,
the group $\Cr_2(\Bbbk)$ contains $\PGL_2(\Bbbk)$, so that
if the characteristic of the field
$\Bbbk$ equals~\mbox{$p>0$} and $\Bbbk$ is algebraically closed,
then $\Cr_2(\Bbbk)$ contains
a series of simple subgroups~\mbox{$\PSL_2(\F_{p^k})$} of increasing order.
\end{remark}
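This can be made quantitative: $|\PSL_2(\F_q)| = q(q^2-1)/\gcd(2,q-1)$ for a prime power $q,$ so the orders of these simple subgroups are unbounded and no constant as in Definition~\xref{definition:Jordan} can exist. A quick illustration (a side computation only):

```python
from math import gcd

def psl2_order(q):
    """Order of PSL_2(F_q) for a prime power q."""
    return q * (q * q - 1) // gcd(2, q - 1)

# Along q = 2^k these simple groups have strictly increasing order,
# so no uniform index bound J can work in characteristic 2.
orders = [psl2_order(2 ** k) for k in range(2, 7)]
print(orders)
assert orders == sorted(set(orders))
assert psl2_order(4) == 60    # PSL_2(F_4) is isomorphic to A_5
assert psl2_order(9) == 360   # PSL_2(F_9) is isomorphic to A_6
```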
It also turned out that there are surfaces with
non-Jordan groups of birational selfmaps (see~\cite{Zarhin10}).
V.\,Popov managed to give a complete
classification of surfaces with Jordan groups
of birational automorphisms.
\begin{theorem}[V.\,Popov {\cite[Theorem~2.32]{Popov2011}}]
Let $S$ be a surface over a field $\Bbbk$ of characteristic $0$.
Then the group
$\Bir(S)$ of birational automorphisms of~$S$
is Jordan if and only if~$S$ is not
birational to~\mbox{$E\times\P^1$}, where $E$ is an
elliptic curve.
\end{theorem}
In dimension $3$ Jordan property is known for
groups of birational automorphisms of
rationally connected varieties (see e.\,g.~\cite[\S\,IV.3]{Kollar-1996-RC}
for definition and basic background).
\begin{theorem}[{see~\cite[Theorem~1.8]{ProkhorovShramov-RC}}]
\label{theorem:RC-Jordan}
Fix a field $\Bbbk$ of characteristic~$0$.
Then there is a constant~$J$ such that
for any rationally connected
variety $X$ of dimension~$3$ defined over~$\Bbbk$
and any finite subgroup $G\subset\Bir(X)$ there exists
a normal abelian subgroup~\mbox{$A\subset G$} of index at most $J$.
In particular, for any rationally connected threefold $X$ the group~\mbox{$\Bir(X)$} is Jordan.
\end{theorem}
Actually, by~\cite[Theorem~1.8]{ProkhorovShramov-RC}
the assertion of Theorem~\xref{theorem:RC-Jordan} holds in
arbitrary dimension modulo boundedness of terminal Fano varieties
(see e.\,g.~\cite{Borisov-1996} or~\cite[Conjecture~1.7]{ProkhorovShramov-RC});
the latter boundedness was recently proved in~\cite[Theorem~1.1]{Birkar}.
For other results (in particular, for birational automorphisms of non rationally connected varieties,
and for automorphisms of varieties of different types) see~\cite{Prokhorov-Shramov-J},
\cite{Popov2011}, \cite{Popov-Jordan},
\cite{Zarhin2015},
\cite{BandmanZarhin2015},
\cite{BandmanZarhin2015a}, \cite{MengZhang},
\cite{Popov-Diff},
\cite{Zimmermann2014}, \cite{TurullRiera2015}, \cite{Riera2016},
and~\cite{Yasinsky2017}.
\subsection{Jordan constants}
Given a Jordan group $\Gamma$, one may get interested
in the minimal value of the constant involved in
Definition~\xref{definition:Jordan}, and in the values of
other relevant constants.
\begin{definition}
\label{definition:Jordan-constant}
Let $\Gamma$ be a Jordan group.
The \emph{Jordan constant}
$J(\Gamma)$ of the group $\Gamma$
is the minimal number $J$ such that
for any finite subgroup $G\subset\Gamma$ there exists
a normal abelian subgroup
$A\subset G$ of index at most $J$.
The \emph{weak Jordan constant}
$\bar{J}(\Gamma)$ of the group~$\Gamma$
is the minimal number $\bar{J}$ such that
for any finite subgroup $G\subset\Gamma$ there exists
a (not necessarily normal) abelian subgroup
$A\subset G$ of index at most $\bar{J}$.
\end{definition}
\begin{remark}\label{remark:Pyber}
It is more traditional to study Jordan constants than weak Jordan
constants of Jordan groups, although there is no big difference between
them. Indeed, one has~\mbox{$\bar{J}(\Gamma)\le J(\Gamma)$} for any Jordan
group $\Gamma$ for obvious reasons.
Moreover, if $G$ is a finite group and $A$ is an abelian
subgroup of $G$, then by~\cite[Theorem~1.41]{Isaacs2008}
(see also~\cite{ChermakDelgado}) one can find
a normal abelian subgroup $N$ of $G$ such that
\begin{equation*}
[G:N]\le [G:A]^2.
\end{equation*}
Therefore, if $\Gamma$ is a Jordan group, one always
has $J(\Gamma)\le\bar{J}(\Gamma)^2$. On the other hand, the advantage of
the weak Jordan constant is that it allows easy estimates
using subgroups of the initial group. Namely, if $\Gamma_1$ is a subgroup
of finite index in a group $\Gamma_2$, and $\Gamma_1$ is Jordan,
then $\Gamma_2$ is Jordan with
\begin{equation*}
\bar{J}(\Gamma_2)\le [\Gamma_2:\Gamma_1]\cdot\bar{J}(\Gamma_1).
\end{equation*}
Also, if $\Delta_1$ and $\Delta_2$ are Jordan groups,
the group $\Delta_1\times\Delta_2$ is Jordan with
\begin{equation*}
\bar{J}(\Delta_1\times\Delta_2)= \bar{J}(\Delta_1)\cdot\bar{J}(\Delta_2).
\end{equation*}
In particular, if $\Gamma$ is a subgroup of $\Delta\times A$, where $\Delta$ is a Jordan group
and $A$ is an abelian group, then $\Gamma$ is Jordan with~\mbox{$\bar{J}(\Gamma)\le\bar{J}(\Delta)$}.
\end{remark}
Jordan constants are known for example for the groups $\GL_n(\C)$
(see~\cite{Collins2007}).
In~\cite{Serre2009} J.-P.\,Serre gave an explicit
bound for the Jordan constant
of the Cremona group $\Cr_2(\Bbbk)$
(see Remark~\xref{remark:dim-2-cool} below).
Our first result also concerns the group~\mbox{$\Cr_2(\Bbbk)$}.
\begin{proposition}\label{proposition:Cr-2}
Suppose that the field $\Bbbk$ has characteristic~$0$. Then one has
\begin{equation*}
\bar{J}\big(\Cr_2(\Bbbk)\big)\le 288,\quad J\big(\Cr_2(\Bbbk)\big)\le 82944.
\end{equation*}
The first of these bounds becomes an equality if $\Bbbk$ is algebraically closed.
\end{proposition}
The main goal of this paper is to present a bound
for Jordan constants of the groups of birational automorphisms
of rationally connected threefolds, in particular, for the
group~\mbox{$\Cr_3(\Bbbk)=\Bir(\P^3)$}.
\begin{theorem}\label{theorem:constant}
Let $X$ be a rationally connected threefold over a field
$\Bbbk$ of characteristic~$0$. Then one has
\begin{equation*}
\bar{J}\big(\Bir(X)\big)\le 10\,368,\quad
J\big(\Bir(X)\big)\le 107\,495\,424.
\end{equation*}
If moreover $X$ is rational and $\Bbbk$ is algebraically closed,
then the first of these bounds becomes an equality.
\end{theorem}
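We note in passing (an observation, not part of the theorem) that in Proposition~\xref{proposition:Cr-2} and in Theorem~\xref{theorem:constant} the bound on $J$ is exactly the square of the bound on $\bar{J},$ in line with the general inequality $J(\Gamma)\le\bar{J}(\Gamma)^2$ of Remark~\xref{remark:Pyber}:

```python
# The two pairs of bounds: (Jbar bound, J bound) for Cr_2 and for
# rationally connected threefolds; in both cases J equals Jbar squared.
pairs = [(288, 82944), (10368, 107495424)]
for jbar, j in pairs:
    assert j == jbar ** 2
print("both J bounds are the squares of the Jbar bounds")
```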
It is known (see~\cite[Theorem~1.10]{ProkhorovShramov-RC})
that if $X$ is a rationally connected threefold over a field
of characteristic~$0$, then there is a constant $L$ such that for
any prime $p>L$ any finite $p$-group
$G\subset\Bir(X)$ is abelian.
An immediate consequence of Theorem~\xref{theorem:constant}
is an explicit bound for the latter constant $L$.
\begin{corollary}\label{corollary:prime}
Let $X$ be a rationally connected threefold over a field
$\Bbbk$ of characteristic~$0$, and let $p>10\,368$ be a prime.
Let $G\subset\Bir(X)$ be a finite $p$-group.
Then $G$ is abelian.
\end{corollary}
We believe that one can significantly improve the bound given by
Corollary~\xref{corollary:prime}.
\begin{remark}\label{remark:dim-2-cool}
J.-P.\,Serre showed (see the remark made after
Theorem~5.3 in~\cite{Serre2009})
that any finite subgroup $G$ of the Cremona group $\Cr_2(\Bbbk)$
over a field $\Bbbk$ of characteristic~$0$
has a normal abelian subgroup $A\subset G$ such that
the index $[G:A]$ \emph{divides} the number~\mbox{$2^{10}\cdot 3^4\cdot 5^2\cdot 7$}.
The result of Theorem~\xref{theorem:constant}
is not that precise: we cannot say much about the primes that divide
the index~\mbox{$[G:A]$} in our case.
This is explained by the fact that to obtain the bound we
have to deal with terminal singularities
on threefolds as compared to smooth surfaces.
See Remark~\xref{remark-P1-P1-P1}
for our expectations on possible improvements of the bounds given by Proposition~\xref{proposition:Cr-2} and
Theorem~\xref{theorem:constant}, and Remark~\xref{remark:dim-4-fail}
for a further disclaimer in higher dimensions.
\end{remark}
\smallskip
The plan of the paper is as follows.
In~\S\xref{section:GL} we compute weak Jordan constants for some
linear groups.
In~\S\xref{section:dim-2} we compute certain relevant constants for
rational surfaces, and in particular prove Proposition~\ref{proposition:Cr-2}.
In~\S\xref{section:terminal} we study groups
of automorphisms of three-dimensional terminal singularities
and estimate their weak Jordan constants;
then we use these estimates to bound weak Jordan constants for
groups of automorphisms of non-Gorenstein terminal Fano threefolds.
In~\S\xref{section:Mfs} we estimate weak Jordan constants
for groups acting on three-dimensional $G$-Mori fiber spaces.
In~\S\xref{section:Fano} and~\S\xref{section:smooth-Fano} we bound weak Jordan constants for
groups of automorphisms of Gorenstein terminal (and in particular smooth) Fano threefolds.
Finally, in~\S\xref{section:proof} we summarize the above partial results
and complete the proof of Theorem~\xref{theorem:constant}, and also make concluding remarks.
In Appendix~\xref{section:appendix}
we collect some information about automorphism groups of two particular classes
of smooth Fano varieties: complete intersections of quadrics, and
complete intersections in weighted projective spaces; these results are well known to experts,
but we decided to include them because we do not know proper references.
\smallskip
\textbf{Acknowledgments.} We would like to thank J.-P.\,Serre
who attracted our attention
to the questions discussed in this paper.
We are also grateful to
A.\,Anan'in, J.\,Hausen, A.\,Kuznetsov, A.\,Laface, L.\,Pyber, V.\,Popov, L.\,Rybnikov,
and H.\,Suess for useful discussions, and to the referee for his helpful comments.
\smallskip
\textbf{Notation and conventions.}
In what follows
we denote by $\mumu_m$ a cyclic group of order $m$.
We denote by $m.\Gamma$ a central extension
of a group $\Gamma$ by a group isomorphic to $\mumu_m$.
Starting from this point we always work over an \emph{algebraically closed}
field of characteristic~$0$.
\section{Linear groups}
\label{section:GL}
Now we are going to find weak Jordan constants~\mbox{$\bar{J}\big(\GL_n(\Bbbk)\big)$}
for small values of $n$.
Note that the values of Jordan constants~\mbox{$J\big(\GL_n(\Bbbk)\big)$}
were computed in~\cite{Collins2007} for any~$n$.
\subsection{Preliminaries}
\label{subsection:linear-prelim}
The following remark is elementary but rather useful.
\begin{remark}\label{remark:surjection}
Suppose that $\Gamma_1$ is a Jordan group, and there is
a surjective homomorphism~\mbox{$\Gamma_1\to\Gamma_2$} with finite kernel. Then
$\Gamma_2$ is also Jordan. Moreover, one has~\mbox{$\bar{J}(\Gamma_1)\ge \bar{J}(\Gamma_2)$}
and~\mbox{$J(\Gamma_1)\ge J(\Gamma_2)$}.
In particular, for any $n$ the group
$\PGL_n(\Bbbk)$ is Jordan with
\begin{equation*}
\bar{J}\big(\PGL_n(\Bbbk)\big)\le\bar{J}\big(\SL_n(\Bbbk)\big)=\bar{J}\big(\GL_n(\Bbbk)\big),
\quad
J\big(\PGL_n(\Bbbk)\big)\le J\big(\SL_n(\Bbbk)\big)=J\big(\GL_n(\Bbbk)\big).
\end{equation*}
\end{remark}
We will also need the following well-known observation.
Let $U$ be an arbitrary variety and~$P$ be a point of~$U$.
Denote by $\Aut_P(U)$ the stabilizer of $P$ in $\Aut(U)$.
Let~\mbox{$T_P(U)$} be the Zariski tangent space
to the variety~$U$ at the point~$P$.
\begin{lemma}[{see e.\,g. \cite[Lemma 2.4]{Bialynicki-Birula1973}, \cite[Lemma 4]{Popov-Jordan}}]
\label{lemma:Aut-P}
Suppose that $U$ is an irreducible variety.
For any finite group~\mbox{$G\subset\Aut_P(U)$} the
natural representation~\mbox{$G\to\GL\big(T_P(U)\big)$}
is faithful.
In particular, one has
\begin{equation*}
\bar{J}\big(\Aut_P(U)\big)\le \bar{J}\Big(\GL\big(T_P(U)\big)\Big),
\quad
J\big(\Aut_P(U)\big)\le J\Big(\GL\big(T_P(U)\big)\Big).
\end{equation*}
\end{lemma}
\begin{remark}
One does not necessarily have an embedding~\mbox{$\Gamma\hookrightarrow\GL\big(T_P(U)\big)$}
for a non-reductive subgroup $\Gamma\subset\Aut_P(U)$.
This is not the case already for $U\cong\mathbb{A}^2$
and~\mbox{$\Gamma=\Aut_P(U)$}.
\end{remark}
\subsection{Dimension $2$}
\label{subsection:GL2}
The following easy result will be used both to find the weak Jordan constant
of the group $\GL_2(\Bbbk)$, and also later in the proof of Corollary~\xref{corollary:genus-6}.
\begin{lemma}\label{lemma:2-PGL2}
Let $G$ be a group that fits into an exact sequence
\begin{equation*}
1\to \Gamma\longrightarrow G\stackrel{\phi}\longrightarrow\PGL_2(\Bbbk),
\end{equation*}
where $\Gamma\cong\mumu_2$.
Then $G$ is Jordan with~\mbox{$\bar{J}(G)\le 12$}.
\end{lemma}
\begin{proof}
Note that $\Gamma$ is contained in the center of the group~$G$.
We may assume that $G$ is finite.
By the well-known classification of finite subgroups
in $\PGL_2(\Bbbk)$, we know that the group $\bar{G}=\phi(G)$ is
either cyclic, or dihedral, or isomorphic to one of the groups $\A_4$,
$\SS_4$, or~$\A_5$.
If $\bar{G}$ is cyclic, then the group $G$ is abelian.
If $\bar{G}$ is dihedral, then the group $G$ contains an abelian subgroup of index~$2$.
If $\bar{G}\cong\A_4$, then $\bar{G}$ contains a cyclic subgroup of order~$3$,
so that $\bar{J}(G)\le 4$; the inequality here is due to the fact that
in the case when $G\cong\mumu_2\times\A_4$ one has $\bar{J}(G)=3$, but for a non-trivial
central extension $G\cong 2.\A_4$ one has $\bar{J}(G)=4$.
If $\bar{G}\cong\SS_4$, then $\bar{G}$ contains a cyclic subgroup of order~$4$, and $\bar{J}(G)=6$.
Finally, if $\bar{G}\cong\A_5$, then $\bar{G}$ contains a cyclic subgroup of order~$5$, and $\bar{J}(G)=12$.
\end{proof}
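The indices obtained in the proof can be double-checked by dividing each group order by the order of the maximal abelian subgroup used there; the orders recorded below are standard facts, and the sketch is a sanity check only:

```python
# (group, |G|, order of a maximal abelian subgroup, index Jbar(G))
cases = [
    ("mu_2 x A_4", 24, 8, 3),     # abelian subgroup mu_2 x V_4
    ("2.A_4",      24, 6, 4),     # cyclic preimage of an order-3 element
    ("2.S_4",      48, 8, 6),     # cyclic preimage of an order-4 element
    ("2.A_5",      120, 10, 12),  # cyclic preimage of an order-5 element
]
for name, order, abelian, index in cases:
    assert order % abelian == 0 and order // abelian == index, name
print("all indices check out")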
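The indices obtained in the proof can be double-checked by dividing each group order by the order of the maximal abelian subgroup used there; the orders recorded below are standard facts, and the sketch is a sanity check only:

```python
# (group, |G|, order of a maximal abelian subgroup, index Jbar(G))
cases = [
    ("mu_2 x A_4", 24, 8, 3),     # abelian subgroup mu_2 x V_4
    ("2.A_4",      24, 6, 4),     # cyclic preimage of an order-3 element
    ("2.S_4",      48, 8, 6),     # cyclic preimage of an order-4 element
    ("2.A_5",      120, 10, 12),  # cyclic preimage of an order-5 element
]
for name, order, abelian, index in cases:
    assert order % abelian == 0 and order // abelian == index, name
print("all indices check out")
```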
As an easy application of Lemma~\xref{lemma:2-PGL2}, we can
find the weak Jordan constants of the groups $\GL_2(\Bbbk)$ and~\mbox{$\Aut(\P^1)\cong\PGL_2(\Bbbk)$}.
\begin{corollary}
\label{corollary:GL2}
One has
\begin{equation*}
\bar{J}\big(\GL_2(\Bbbk)\big)=\bar{J}\big(\PGL_2(\Bbbk)\big)=12.
\end{equation*}
\end{corollary}
\begin{proof}
Let $V$ be a
two-dimensional
vector space over $\Bbbk$, and let $G\subset\GL(V)$ be a finite subgroup.
It is enough to study
the weak Jordan constant
$\bar{J}(G)$. Moreover, for this we may assume that
$G\subset\SL(V)\cong\SL_2(\Bbbk)$, and that $G$ contains the scalar matrix
acting by~$-1$ on~$V$. Therefore, the bound $\bar{J}(G)\le 12$
follows from Lemma~\xref{lemma:2-PGL2}, so that~\mbox{$\bar{J}(\GL_2(\Bbbk))\le 12$}.
The inequality
\begin{equation*}
\bar{J}\big(\PGL_2(\Bbbk)\big)\le\bar{J}\big(\GL_2(\Bbbk)\big)
\end{equation*}
holds by Remark~\xref{remark:surjection}.
The value $\bar{J}(\PGL_2(\Bbbk))=12$ is given by the group~\mbox{$\A_5\subset\PGL_2(\Bbbk)$},
and the value $\bar{J}(\GL_2(\Bbbk))=12$ is given by the group~\mbox{$2.\A_5\subset\GL_2(\Bbbk)$}.
\end{proof}
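The extremal role of $\A_5$ can be confirmed by brute force. Every abelian subgroup of $\A_5$ is cyclic or a Klein four-group, hence generated by two elements, so a search over commuting pairs suffices; the sketch below (an independent check, not part of the proof) finds that the largest abelian subgroup of $\A_5$ has order $5,$ of index $12$:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):
    inversions = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return inversions % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]
assert len(A5) == 60

def generated(gens):
    # Closure of gens under composition; terminates since A_5 is finite.
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            for x in (compose(g, h), compose(h, g)):
                if x not in group:
                    group.add(x)
                    frontier.append(x)
    return group

# Every abelian subgroup of A_5 is cyclic or a Klein four-group, hence
# 2-generated, so searching over commuting pairs finds the largest one.
max_abelian = max(
    len(generated((g, h)))
    for g in A5 for h in A5 if compose(g, h) == compose(h, g)
)
print(max_abelian, 60 // max_abelian)  # -> 5 12
```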
\begin{remark}\label{remark:elliptic}
Suppose that $C$ is an irreducible curve
such that the normalization
$\hat{C}$ of~$C$ has genus $g$.
Since the action of the group $\Aut(C)$ lifts to $\hat{C}$, one has
$$
\bar{J}\big(\Aut(C)\big)\le \bar{J}\big(\Aut(\hat{C})\big).
$$
On the other hand, it is well known that
$\bar{J}\big(\Aut(\hat{C})\big)\le 6$
if $g=1$, and the Hurwitz bound implies that
\begin{equation*}
\bar{J}\big(\Aut(\hat{C})\big)\le |\Aut(\hat{C})|\le 84(g-1)
\end{equation*}
if $g\ge 2$.
\end{remark}
We can use the classification of finite subgroups
of $\PGL_2(\Bbbk)$ to find the weak Jordan
constants of the automorphism groups
of the projective line and of a smooth two-dimensional quadric.
More precisely, we have the following result.
\begin{lemma}\label{lemma:weak-constant-quadric}
The following assertions hold.
\begin{enumerate}
\item \label{lemma:weak-constant-quadric-i}
Let $G\subset\Aut(\P^1)$ be a finite group.
Then there exists an abelian subgroup $A\subset G$ of index
at most $12$ acting on $\P^1$ with a fixed point.
\item\label{lemma:weak-constant-quadric-ii}
Let $G\subset\Aut\big(\P^1\times\P^1\big)$ be a finite group.
Then there exists an abelian subgroup~\mbox{$A\subset G$} of index
at most $288$ that acts on $\P^1\times\P^1$ with a fixed point,
and does not interchange the rulings of $\P^1\times\P^1$.
\item\label{lemma:weak-constant-quadric-iii}
One has
\begin{equation*}
\bar{J}\Big(\Aut\big(\P^1\times\P^1\big)\Big)=288.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
Assertion (i) follows from the classification of finite subgroups of $\PGL_2(\Bbbk)$.
Observe that
\begin{equation*}
\Aut\big(\P^1\times\P^1\big)\cong
\big(\PGL_2(\Bbbk)\times\PGL_2(\Bbbk)\big)\rtimes\mumu_2.
\end{equation*}
Therefore, assertion (i) implies assertion (ii).
In particular, we get the bound
\begin{equation*}
\bar{J}\Big(\Aut\big(\P^1\times\P^1\big)\Big)\le 288.
\end{equation*}
The required equality is given by the group
\begin{equation*}
\big(\A_5\times\A_5\big)\rtimes\mumu_2\subset\Aut\big(\P^1\times\P^1\big).
\end{equation*}
This proves assertion~(iii).
\end{proof}
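The equality in assertion (iii) can be checked arithmetically: the group $(\A_5\times\A_5)\rtimes\mumu_2$ has order $2\cdot 60^2 = 7200$ and contains the abelian subgroup $\mumu_5\times\mumu_5$ of order $25$; since the largest abelian subgroup of $\A_5$ has order $5,$ the index is exactly $288 = 2\cdot 12^2.$ In short:

```python
# |(A_5 x A_5) : mu_2| and the abelian subgroup mu_5 x mu_5 inside A_5 x A_5.
order = 2 * 60 * 60
abelian = 5 * 5
assert order // abelian == 288 == 2 * 12 * 12
print(order // abelian)
```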
\subsection{Dimension $3$}
\label{subsection:GL3}
\begin{lemma}\label{lemma:weak-GL3}
One has
\[
\bar{J}\big(\PGL_3(\Bbbk)\big)=40,
\qquad \bar{J}\big(\GL_3(\Bbbk)\big)=72.
\]
\end{lemma}
\begin{proof}
Let $V$ be a three-dimensional vector
space over $\Bbbk$, and let $G\subset\GL(V)$ be a finite subgroup.
It is enough to study the weak Jordan constant
$\bar{J}(G)$. Moreover, for this we may assume that
$G\subset\SL(V)\cong\SL_3(\Bbbk)$.
Recall that there are the following possibilities
for the group $G$ (see~\cite[Chapter~XII]{MBD1916}
or~\cite[\S8.5]{Feit}):
\begin{enumerate}
\item the $G$-representation $V$ is reducible;
\item there is a homomorphism $h\colon G\to\SS_{3}$ with transitive image such that
$V$ splits into a sum of three one-dimensional representations of
the subgroup $H=\Ker(h)$;
\item the group $G$ is generated by some subgroup
of scalar matrices in $\SL_3(\Bbbk)$ and a group $\hat{G}$ that
is one of the groups $\A_5$ or $\PSL_2(\F_7)$;
\item one has $G\cong 3.\A_6$;
\item one has $G\cong\mathcal{H}_3\rtimes \Sigma$,
where $\mathcal{H}_3$ is the Heisenberg group of order $27$, and
$\Sigma$ is some subgroup of $\SL_2(\F_3)$.
\end{enumerate}
Let us denote by $\bar{G}$ the image of $G$ in the group $\PGL_3(\Bbbk)$.
One always has~\mbox{$\bar{J}(\bar{G})\le\bar{J}(G)$}.
In case~(i) there is an embedding $G\hookrightarrow A\times\Gamma$,
where $A$ is a finite abelian group and $\Gamma$ is a finite subgroup
of $\GL_2(\Bbbk)$. Thus
\begin{equation*}
\bar{J}(\bar{G})\le\bar{J}(G)=\bar{J}(\Gamma)\le
\bar{J}\big(\GL_2(\Bbbk)\big)=\bar{J}\big(\PGL_2(\Bbbk)\big)=12
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In case~(ii) the group $H$ is an abelian subgroup
of $G$, so that
\begin{equation*}
\bar{J}(\bar{G})\le\bar{J}(G)\le [G:H]\le |\SS_3|=6.
\end{equation*}
In case~(iii) it is easy to check that
$\bar{J}(\bar{G})\le\bar{J}(G)=\bar{J}(\hat{G})\le 24$.
In case (iv) one has $G\cong 3.\A_6$ and $\bar{G}\cong\A_6$. The
abelian subgroup of maximal order in $\bar{G}$
is a Sylow $3$-subgroup,
so that $\bar{J}(\bar{G})=40$.
The abelian subgroup of maximal order in $G$ is~$\mumu_{15}$
that is a preimage of a Sylow $5$-subgroup with respect
to the natural projection $G\to\bar{G}$.
This gives $\bar{J}(G)=72$.
In case~(v) one has
\begin{equation*}
\bar{J}(\bar{G})\le\bar{J}(G)\le
\bar{J}\big(\mathcal{H}_3\rtimes\SL_2(\F_3)\big)
=24.
\end{equation*}
Therefore, we see that $\bar{J}(\PGL_3(\Bbbk))=40$
and $\bar{J}\big(\GL_3(\Bbbk)\big)=72$.
\end{proof}
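The two constants obtained in case (iv) can be confirmed computationally. Every abelian subgroup of $\A_6$ is generated by at most two elements (the Sylow $3$-subgroup is $\mumu_3\times\mumu_3$ and the Sylow $2$-subgroup is dihedral of order $8$), so a brute-force search over commuting pairs finds the maximal abelian order $9$ and hence the index $360/9 = 40$; for $3.\A_6$ the subgroup $\mumu_{15}$ gives $1080/15 = 72.$ A sketch (an independent check only):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return inv % 2 == 0

A6 = [p for p in permutations(range(6)) if is_even(p)]
assert len(A6) == 360

def generated(gens):
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            for x in (compose(g, h), compose(h, g)):
                if x not in group:
                    group.add(x)
                    frontier.append(x)
    return group

# Abelian subgroups of A_6 are 2-generated (the Sylow 3-subgroup is
# mu_3 x mu_3, the Sylow 2-subgroup is dihedral of order 8), so a search
# over commuting pairs finds the largest abelian subgroup.
max_abelian = max(
    len(generated((g, h)))
    for g in A6 for h in A6 if compose(g, h) == compose(h, g)
)
assert max_abelian == 9 and 360 // max_abelian == 40  # Jbar(A_6) = 40
assert 1080 // 15 == 72                               # Jbar(3.A_6) = 72
print(max_abelian, 360 // max_abelian)
```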
\begin{lemma}\label{lemma:PGL3-small}
Let $\bar{G}\subset\PGL_3(\Bbbk)$
be a finite subgroup of order
$|\bar{G}|>360$.
Then~\mbox{$\bar{J}(\bar{G})\le 12$}.
\end{lemma}
\begin{proof}
Let $G\subset\SL_3(\Bbbk)$ be a preimage
of $\bar{G}$ with respect to the natural
projection
\begin{equation*}
\SL_3(\Bbbk)\to\PGL_3(\Bbbk).
\end{equation*}
Then one has $|\bar{G}|=|G|/3$, and
$\bar{J}(\bar{G})\le\bar{J}(G)$.
Let us use the notation introduced in the proof
of Lemma~\xref{lemma:weak-GL3}.
If $G$ is a group of type~(i) or~(ii), then
$\bar{J}(G)\le 12$. If $G$ is a group of type~(iii) or~(iv), then
\begin{equation*}
|G|\le |3.\A_6|=1080,
\end{equation*}
and $|\bar{G}|\le 360$.
Finally, if $G$ is a group of type~(v), then
\begin{equation*}
|G|\le |\mathcal{H}_3\rtimes\SL_2(\F_3)|=648,
\end{equation*}
and $|\bar{G}|\le 216$.
\end{proof}
\begin{lemma}\label{lemma:three-generators}
Let $B$ be a (non-trivial) finite abelian subgroup of $\PGL_3(\Bbbk)$. Then
$B$ is generated by at most three elements.
\end{lemma}
\begin{proof}
Recall that a finite abelian subgroup of $\GL_n(\Bbbk)$ is generated by at most $n$ elements.
Let $\tilde{B}\subset\SL_3(\Bbbk)$ be the preimage of $B$ with respect to the natural projection~\mbox{$\SL_3(\Bbbk)\to\PGL_3(\Bbbk)$}.
Let $\tilde A\subset \tilde B$ be a maximal abelian subgroup and let $A\subset B$
be its image. Then $A$ has an isolated fixed point on $\P^2$, and
the number of its isolated fixed points is at most~$3$.
Therefore,
the group $B$ has an orbit of length at most $3$ on $\P^2$. Let~$P$ be a point of
such orbit, and let $B'\subset B$ be the stabilizer of $P$.
By Lemma~\xref{lemma:Aut-P} there is a faithful representation of the group
$B'$ in the Zariski tangent space $T_P(\P^2)\cong\Bbbk^2$, so that~$B'$ is generated by at most
two elements.
The group $B$ is generated by its subgroup~$B'$ and an arbitrary element from
$B\setminus B'$, if any.
\end{proof}
The following fact is a refinement of~\cite[Lemma~2.8]{Prokhorov-Shramov-J}
(cf.~\cite[Remark~2.4]{Prokhorov-Shramov-J}).
\begin{lemma}\label{lemma:4-PGL3}
Let $G$ be a group that fits into an exact sequence
\begin{equation*}
1\to \Gamma\longrightarrow G\stackrel{\phi}\longrightarrow\PGL_3(\Bbbk),
\end{equation*}
where $\Gamma\cong\mumu_2^m$ with $m\le 2$.
Then $G$ is Jordan with
\begin{equation*}
\bar{J}(G)\le 2304.
\end{equation*}
\end{lemma}
\begin{proof}
We may assume that $G$ is finite.
If the order of the group $\phi(G)\subset\PGL_3(\Bbbk)$
is at most $360$, then one has
\begin{equation*}
\bar{J}(G)\le [G:\Gamma]=|\phi(G)|\le 360.
\end{equation*}
Therefore, we may assume that $|\phi(G)|>360$.
By Lemma~\xref{lemma:PGL3-small}
we can find an abelian subgroup $B$ in $\phi(G)$ of index
$[\phi(G):B]\le 12$. Put $\tilde{G}=\phi^{-1}(B)$.
Then
\begin{equation*}
[G:\tilde{G}]=[\phi(G):B]\le 12,
\end{equation*}
so that by Remark~\xref{remark:Pyber} we are left
with the task to bound $\bar{J}(\tilde{G})$.
We have an exact sequence of groups
\begin{equation*}
1\to \Gamma\to \tilde{G}\to B\to 1.
\end{equation*}
For an element $g\in\tilde{G}$ denote by $Z(g)$ the centralizer
of $g$ in $\tilde{G}$. Since $B$ is an abelian quotient of $\tilde{G}$,
we see that the commutator subgroup of $\tilde{G}$ is contained in $\Gamma$
and thus has order at most $|\Gamma|$,
so that for any $g\in\tilde{G}$ one has $[\tilde{G}:Z(g)]\le |\Gamma|$.
Since $B$ is an abelian subgroup of $\PGL_3(\Bbbk)$, it is generated
by at most three elements by Lemma~\xref{lemma:three-generators}.
Choose three generators of $B$, and let $g_1$, $g_2$ and $g_3$ be elements
of $\tilde{G}$ that project to these three generators.
Put
\begin{equation*}
I=Z(g_1)\cap Z(g_2)\cap Z(g_3).
\end{equation*}
Then one has
\begin{equation*}
[\tilde{G}:I]\le |\Gamma|^3\le 64.
\end{equation*}
Let $C$ be the centralizer of $\Gamma$ in $\tilde{G}$.
Since $\Gamma$ is a normal subgroup of $\tilde{G}$, we see that $C$
is a normal subgroup of $\tilde{G}$ as well.
Moreover, since $\Gamma\subset C$, we have an inclusion $\tilde{G}/C\subset B$,
so that $\tilde{G}/C$ is an abelian group generated by
three elements.
Also, one has an inclusion
\begin{equation*}
\tilde{G}/C\subset\Aut(\Gamma)\subset\GL_2(\F_2)\cong\SS_3.
\end{equation*}
Therefore, we conclude that $|\tilde{G}/C|\le 3$.
Let $Z$ be the center of $\tilde{G}$. Then $Z$ contains the intersection $C\cap I$, so that
\begin{equation*}
\bar{J}(\tilde{G})\le J(\tilde{G})\le [\tilde{G}:Z]\le [\tilde{G}:C\cap I]\le [\tilde{G}:C]\cdot [\tilde{G}:I]\le 3\cdot 64=192,
\end{equation*}
and thus
\begin{equation*}
\bar{J}(G)\le [G:\tilde{G}]\cdot\bar{J}(\tilde{G})\le 2304.\qedhere
\end{equation*}
\end{proof}
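For transparency, the constant assembled in the proof is the product of three contributions: the index $[G:\tilde{G}]\le 12,$ the factor $[\tilde{G}:C]\le 3,$ and the factor $[\tilde{G}:I]\le |\Gamma|^3\le 4^3.$ A bookkeeping check:

```python
# 2304 = [G : tilde-G] * [tilde-G : C] * [tilde-G : I]
#      <= 12 * 3 * |Gamma|^3  with |Gamma| <= 2^2 = 4.
assert 3 * 4 ** 3 == 192
assert 12 * 192 == 2304
print(12 * 3 * 4 ** 3)
```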
\subsection{Dimension $4$}
\label{subsection:GL4}
\begin{lemma}\label{lemma:weak-GL4}
One has
\begin{equation*}
\bar{J}\big(\PGL_4(\Bbbk)\big)=\bar{J}\big(\GL_4(\Bbbk)\big)= 960.
\end{equation*}
\end{lemma}
\begin{proof}
Let $V$ be a four-dimensional vector
space over $\Bbbk$, and let $G\subset\GL(V)$
be a finite subgroup. It is enough to study the weak Jordan constant
$\bar{J}(G)$. Moreover, for this we may assume that
$G\subset\SL(V)\cong\SL_4(\Bbbk)$.
Then there are the following possibilities
for the group $G$ (see~\cite[Chapter~VII]{Blichfeldt}
or~\cite[\S8.5]{Feit}):
\begin{enumerate}
\item the $G$-representation $V$ is reducible;
\item there is a homomorphism $h\colon G\to\SS_{k}$ with transitive image
such that
$V$ splits into a sum of $k$ representations of
the subgroup $H=\Ker(h)$ of dimension $4/k$ for
some $k\in\{2,4\}$;
\item
the group $G$ contains a subgroup
$H$ of index at most $2$, such that $H$ is a quotient
by a certain central subgroup of a group $\Gamma_1\times\Gamma_2$,
where $\Gamma_1$ and $\Gamma_2$ are finite subgroups
of $\GL_2(\Bbbk)$;
\item
the group $G$ is generated by some subgroup
of scalar matrices in $\SL_4(\Bbbk)$ and a group $\hat{G}$ that is
one of the
groups $\A_5$, $\SS_5$, $2.\A_5$, $2.\SS_5$, or $\SL_2(\F_7)$;
\item
the group $G$ is generated by some subgroup
of scalar matrices in $\SL_4(\Bbbk)$ and a group $\hat{G}$ that is
one of the
groups $2.\A_6$, $2.\SS_6$, $2.\A_7$,
or $\Sp_4(\F_3)$;
\item
the group $G$ contains an extra-special group~$\mathcal{H}_4$
of order $32$ and
is contained in the normalizer
of $\mathcal{H}_4$ in $\SL(V)$.
\end{enumerate}
In case~(i) there is an embedding
$G\hookrightarrow \Gamma_1\times\Gamma_2$,
where $\Gamma_i$ is a finite subgroup
of $\GL_{n_i}(\Bbbk)$ for $i\in\{1,2\}$, and
$n_1\le n_2$ are positive integers such that $n_1+n_2=4$.
One has
\begin{equation*}
\bar{J}(G)\le\bar{J}(\Gamma_1\times\Gamma_2)\le
\bar{J}\big(\GL_{n_1}(\Bbbk)\big)\cdot
\bar{J}\big(\GL_{n_2}(\Bbbk)\big).
\end{equation*}
If
$(n_1,n_2)=(1,3)$, this gives
$\bar{J}(G)\le 72$
by Lemma~\xref{lemma:weak-GL3}.
If
$(n_1,n_2)=(2,2)$, this gives
\begin{equation*}
\bar{J}(G)\le 12\cdot 12=144
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In case~(ii) the group $H$ is a subgroup
of $G$ of index
\begin{equation*}
[G:H]\le |\SS_k|=k!.
\end{equation*}
Moreover, there is an embedding
$H\hookrightarrow \Gamma_1\times\ldots\times\Gamma_{k}$,
where $\Gamma_i$ are finite subgroups
of~\mbox{$\GL_{4/k}(\Bbbk)$}.
Thus
\begin{equation*}
\bar{J}(G)\le
[G:H]\cdot\bar{J}(H)\le
k!\cdot\bar{J}(\Gamma_1)\cdot\ldots\cdot\bar{J}(\Gamma_k)\le
k!\cdot\bar{J}\big(\GL_{4/k}(\Bbbk)\big)^k.
\end{equation*}
If $k=2$, this gives
$\bar{J}(G)\le 288$
by Corollary~\xref{corollary:GL2}.
If $k=4$, this gives
$\bar{J}(G)\le 24$.
In case~(iii) we obtain the bound
$\bar{J}(G)\le 288$ in a similar way.
In case (iv) one has
\begin{equation*}
\bar{J}(G)=\bar{J}\big(\hat{G}\big)\le |\hat{G}|\le 336.
\end{equation*}
In case (v) one has
\begin{equation*}
\bar{J}(G)=\bar{J}\big(\hat{G}\big)\le\bar{J}\big(\Sp_4(\F_3)\big)=960.
\end{equation*}
In case (vi) one has $\bar{J}(G)\le\bar{J}(N)$,
where $N$ is the normalizer of $\mathcal{H}_4$ in $\SL(V)$.
The group $N$ fits into the exact sequence
\begin{equation*}
1\to\tilde{\mathcal{H}}_4\to N\to \SS_6\to 1,
\end{equation*}
where $\tilde{\mathcal{H}}_4$ is a group generated by
$\mathcal{H}_4$ and a scalar matrix
\begin{equation*}
\sqrt{-1}\cdot\mathrm{Id}\in\SL(V).
\end{equation*}
Recall that
\begin{equation*}
\mathcal{H}_4\cong Q_8\times Q_8/\mumu_2,
\end{equation*}
where $Q_8$ is a quaternion group of order~$8$.
Being viewed as a subgroup of $\SL_2(\Bbbk)$, the group
$Q_8$ is normalized by a binary octahedral group
$2.\SS_4$. Thus the group $N$ contains a subgroup
\begin{equation*}
R\cong 2.\SS_4\times 2.\SS_4/\mumu_2,
\end{equation*}
and also a subgroup $\tilde{R}$ generated by $R$ and $\sqrt{-1}\cdot\mathrm{Id}$.
One has
\begin{equation*}
\bar{J}\big(\tilde{R}\big)=\bar{J}(R)=\bar{J}(2.\SS_4\times 2.\SS_4)=\bar{J}(2.\SS_4)^2=36.
\end{equation*}
On the other hand, we compute
the index $[N:\tilde{R}]=20$. This gives
\begin{equation*}
\bar{J}(N)\le [N:\tilde{R}]\cdot\bar{J}\big(\tilde{R}\big)=20\cdot 36=720.
\end{equation*}
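For the reader's convenience, let us verify this index directly. Since the scalar matrix $\sqrt{-1}\cdot\mathrm{Id}$ is contained neither in $\mathcal{H}_4$ nor in $R$, one has~\mbox{$|\tilde{\mathcal{H}}_4|=2|\mathcal{H}_4|=64$} and $|\tilde{R}|=2|R|$, so that
\begin{equation*}
|N|=|\tilde{\mathcal{H}}_4|\cdot|\SS_6|=64\cdot 720=46080,\qquad
|\tilde{R}|=2\cdot\frac{|2.\SS_4|^2}{2}=48^2=2304,
\end{equation*}
and indeed $[N:\tilde{R}]=46080/2304=20$.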
Therefore, we see that
$\bar{J}(G)\le 960$, and thus $\bar{J}\big(\GL_4(\Bbbk)\big)\le 960$.
The inequality
\begin{equation*}
\bar{J}\big(\PGL_4(\Bbbk)\big)\le\bar{J}\big(\GL_4(\Bbbk)\big)
\end{equation*}
holds by Remark~\xref{remark:surjection}.
The value $\bar{J}(\PGL_4(\Bbbk))=960$ is given by the
group~\mbox{$\PSp_4(\F_3)\subset\PGL_4(\Bbbk)$}
whose abelian subgroup of maximal order is~$\mumu_3^3$
(cf.~\mbox{\cite[Table~2]{Vdovin}}).
The value $\bar{J}(\GL_4(\Bbbk))=960$ is given by the group~\mbox{$\Sp_4(\F_3)\subset\GL_4(\Bbbk)$}
whose abelian subgroup of maximal order is~$\mumu_2\times\mumu_3^3$
that is a preimage of a subgroup~\mbox{$\mumu_3^3\subset\PSp_4(\F_3)$} with respect
to the natural projection~\mbox{$\Sp_4(\F_3)\to\PSp_4(\F_3)$}.
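Indeed, since $|\PSp_4(\F_3)|=25920$ and $|\Sp_4(\F_3)|=51840$, the corresponding indices are
\begin{equation*}
\frac{|\PSp_4(\F_3)|}{|\mumu_3^3|}=\frac{25920}{27}=960,
\qquad
\frac{|\Sp_4(\F_3)|}{|\mumu_2\times\mumu_3^3|}=\frac{51840}{54}=960.
\end{equation*}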
\end{proof}
\begin{remark}
The group $2.\SS_5$ listed in case~(iv) of Lemma~\xref{lemma:weak-GL4}
is omitted in the list
given in~\cite[\S8.5]{Feit}. It is still listed by some other classical
surveys, see e.g.~\cite[\S119]{Blichfeldt}.
\end{remark}
Recall that, for a group $G$ with a representation on a
vector space $V$, a \emph{semi-invariant} of $G$ of degree $n$
is an eigenvector of $G$ in
$\Sym^n V^{\vee}$.
\begin{lemma}\label{lemma:GL4-quadric}
Let $V$ be a four-dimensional vector
space over $\Bbbk$, and let $G\subset\GL(V)$
be a finite subgroup.
If $G$ has a semi-invariant of degree $2$, then
$\bar{J}(G)\le 288$.
\end{lemma}
\begin{proof}
Let $q$ be a semi-invariant of $G$ of degree $2$.
We consider the possibilities for the rank of the quadratic form $q$
case by case.
Suppose that $V$ has a one-dimensional subrepresentation of $G$.
Then~\mbox{$G\subset\Bbbk^*\times\GL_3(\Bbbk)$}, so that
$\bar{J}(G)\le 72$ by Lemma~\xref{lemma:weak-GL3}.
This applies whenever the rank of $q$ equals $1$ or $3$: if the rank is $3$,
then the kernel of $q$ is a one-dimensional $G$-invariant subspace of $V$,
while if the rank is $1$, then $q$ is the square of a semi-invariant linear form,
whose kernel is a $G$-invariant hyperplane, and complete reducibility provides
a one-dimensional invariant complement.
Therefore we may assume that the rank of $q$ equals $2$ or $4$.
Suppose that the rank of $q$ is $2$, so that $q$ is a product of two linear forms.
Then there is a subgroup $G_1\subset G$ of index at most $2$
such that these linear forms are semi-invariant with respect to~$G_1$.
Hence $V$ splits as a sum of a two-dimensional and two one-dimensional
representations of $G_1$.
This implies that~\mbox{$G_1\subset\Bbbk^*\times\Bbbk^*\times\GL_2(\Bbbk)$}, so that
\begin{equation*}
\bar{J}(G)\le 2\cdot\bar{J}(G_1)\le 2\cdot\bar{J}\big(\GL_2(\Bbbk)\big)=24
\end{equation*}
by Corollary~\xref{corollary:GL2}.
Finally, suppose that the rank of $q$ is $4$, so that
the quadric $Q\subset \P(V)\cong\P^3$ given by the equation $q=0$ is smooth,
i.e. $Q\cong\P^1\times\P^1$.
By Lemma~\xref{lemma:weak-constant-quadric} there is a subgroup~\mbox{$H\subset G$}
of index $[G:H]\le 288$ that acts on $Q$ with a fixed point $P$ and does not
interchange the lines $L_1$ and $L_2$ passing through $P$ on $Q$.
As a representation of $H$, the vector space~$V$ splits as a sum of the one-dimensional
representation corresponding to the point~$P$, two one-dimensional
representations arising from the lines $L_1$ and $L_2$, and
one more one-dimensional representation.
Therefore, $H$ is an abelian group (note that Lemma~\xref{lemma:weak-constant-quadric}
asserts only that the image of $H$ in $\PGL_4(\Bbbk)$ is abelian).
This shows that~\mbox{$\bar{J}(G)\le 288$} and completes the proof of the lemma.
\end{proof}
\subsection{Dimension $5$}
\label{subsection:GL5}
\begin{lemma}\label{lemma:weak-GL5}
One has
\begin{equation*}
\bar{J}\big(\PGL_5(\Bbbk)\big)=\bar{J}\big(\GL_5(\Bbbk)\big)= 960.
\end{equation*}
\end{lemma}
\begin{proof}
Let $V$ be a five-dimensional vector
space over $\Bbbk$, and let $G\subset\GL(V)$
be a finite subgroup.
It is enough to study the weak Jordan constant
$\bar{J}(G)$. Moreover, for this we may assume that
$G\subset\SL(V)\cong\SL_5(\Bbbk)$.
Recall that there are the following possibilities
for the group $G$ (see~\cite{Brauer67} or~\cite[\S8.5]{Feit}):
\begin{enumerate}
\item the $G$-representation $V$ is reducible;
\item there is a homomorphism $h\colon G\to\SS_{5}$
with transitive image such that
$V$ splits into a sum of five one-dimensional representations of
the subgroup $H=\Ker(h)$;
\item
the group $G$ is generated by some subgroup
of scalar matrices in $\SL_5(\Bbbk)$ and a group
$\hat{G}$ that is
one of the groups
$\A_5$, $\SS_5$, $\A_6$, $\SS_6$, $\PSL_2(\F_{11})$, or $\PSp_4(\F_3)$;
\item one has $G\cong\mathcal{H}_5\rtimes \Sigma$,
where $\mathcal{H}_5$ is the Heisenberg group of order $125$, and
$\Sigma$ is some subgroup of $\SL_2(\F_5)$.
\end{enumerate}
In case~(i) there is an embedding
$G\hookrightarrow \Gamma_1\times\Gamma_2$,
where $\Gamma_i$ is a finite subgroup
of $\GL_{n_i}(\Bbbk)$ for $i\in\{1,2\}$, and
$n_1\le n_2$ are positive integers such that $n_1+n_2=5$.
One has
\begin{equation*}
\bar{J}(G)\le\bar{J}(\Gamma_1\times\Gamma_2)\le
\bar{J}\big(\GL_{n_1}(\Bbbk)\big)\cdot
\bar{J}\big(\GL_{n_2}(\Bbbk)\big).
\end{equation*}
If
$(n_1,n_2)=(1,4)$, this gives
$\bar{J}(G)\le 960$
by Lemma~\xref{lemma:weak-GL4}.
If
$(n_1,n_2)=(2,3)$, this gives
\begin{equation*}
\bar{J}(G)\le 12\cdot 72=864
\end{equation*}
by Corollary~\xref{corollary:GL2} and Lemma~\xref{lemma:weak-GL3}.
In case~(ii) the group $H$ is an abelian subgroup
of $G$, so that
\begin{equation*}
\bar{J}(G)\le [G:H]\le |\SS_5|=120.
\end{equation*}
In case~(iii) it is easy to check that
$\bar{J}(G)=\bar{J}(\hat{G})\le 960$,
cf. the proof of Lemma~\xref{lemma:weak-GL4}.
In case~(iv) one has
\begin{equation*}
\bar{J}(G)\le
\bar{J}\big(\mathcal{H}_5\rtimes\SL_2(\F_5)\big)
=120.
\end{equation*}
Therefore, we see that
$\bar{J}(G)\le 960$, and thus $\bar{J}\big(\GL_5(\Bbbk)\big)\le 960$.
The inequality
\begin{equation*}
\bar{J}\big(\PGL_5(\Bbbk)\big)\le\bar{J}\big(\GL_5(\Bbbk)\big)
\end{equation*}
holds by Remark~\xref{remark:surjection}.
The value $\bar{J}(\PGL_5(\Bbbk))=960$ is given by the group~\mbox{$\PSp_4(\F_3)\subset\PGL_5(\Bbbk)$},
cf. the proof of Lemma~\xref{lemma:weak-GL4}.
Similarly,
the value~\mbox{$\bar{J}(\GL_5(\Bbbk))=960$} is given by the group
$\PSp_4(\F_3)\subset\GL_5(\Bbbk)$.
\end{proof}
We summarize the main results of \S\S\ref{subsection:GL2}--\ref{subsection:GL5}
in Table~\ref{table:constants}. In the first column we list the dimensions we will need in the sequel.
In the second column we give the values of the weak Jordan constants~\mbox{$\bar{J}(\PGL_n(\Bbbk))$},
and in the third column we give the groups that attain these constants.
Similarly, in the fourth column we give the values of the weak Jordan constants~\mbox{$\bar{J}(\GL_n(\Bbbk))$},
and in the fifth column we give the groups that attain the constants.
In the sixth column we list the actual values of the usual Jordan constants~\mbox{$J(\GL_n(\Bbbk))$},
which can be found in~\cite[Proposition~C]{Collins2007}.
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{Y|Y|Y|Y|Y|Y}
\Xhline{1\arrayrulewidth}
$n$ & $\bar{J}(\PGL_n(\Bbbk))$ & group & $\bar{J}(\GL_n(\Bbbk))$ & group & $J(\GL_n(\Bbbk))$
\\\Xhline{2\arrayrulewidth}
$2$ & $12$ & $\A_5$ & $12$ & $2.\A_5$ & $60$ \\
$3$ & $40$ & $\A_6$ & $72$ & $3.\A_6$ & $360$ \\
$4$ & $960$ & $\PSp_4(\F_3)$ & $960$ & $\Sp_4(\F_3)$ & $25920$ \\
$5$ & $960$ & $\PSp_4(\F_3)$ & $960$ & $\PSp_4(\F_3)$ & $25920$ \\
\Xhline{1\arrayrulewidth}
\end{tabularx}
\vspace{7pt}
\caption{Jordan constants for linear groups}\label{table:constants}
\end{table}
\subsection{Dimension $7$}
We start with a general observation concerning finite groups
with relatively large abelian subgroups.
\begin{lemma}\label{lemma:isotypical}
Let $G$ be a group,
and $\tilde{\Gamma}\subset G$
be a normal finite abelian subgroup.
Suppose that $\tilde{\Gamma}$ cannot be generated by fewer than $m$ elements.
Let $V$ be an $N$-dimensional vector space over $\Bbbk$.
Suppose that $V$ is a faithful representation of $G$.
Then there exist positive integers~\mbox{$t$, $m_1,\ldots, m_t$, $d_1,\ldots,d_t$}
such that
\begin{itemize}
\item
$m_1d_1+\ldots+m_td_t=N$;
\item
$m_1+\ldots+m_t\ge m$;
\item
the group $G$ is Jordan with
\begin{equation*}
\bar{J}(G)\le \Big(\prod\limits_{i=1}^t m_i!\Big)
\cdot\Big(\prod\limits_{i=1}^t\bar{J}\big(\GL_{d_i}(\Bbbk)\big)^{m_i}\Big).
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
Let
\begin{equation}\label{eq:splitting}
V=V_1\oplus\ldots\oplus V_s
\end{equation}
be the splitting of $V$ into isotypical
components with respect to $\tilde{\Gamma}$.
Since $V$ is
a faithful representation of $\tilde{\Gamma}$, and $\tilde{\Gamma}$ is an abelian group,
we have an injective homomorphism~\mbox{$\tilde{\Gamma} \hookrightarrow (\Bbbk^*)^s$}.
By assumption one has $s\ge m$.
Suppose that the splitting~\eqref{eq:splitting} contains~$m_1$ summands of dimension $d_1$,
$m_2$ summands of dimension $d_2$, \ldots, and $m_t$ summands of dimension $d_t$.
Then one has $m_1d_1+\ldots+m_td_t=N$. Moreover, the total number of summands
in~\eqref{eq:splitting} equals $m_1+\ldots+m_t=s\ge m$.
Since $\tilde{\Gamma}\subset G$ is a normal subgroup, the group $G$
permutes the summands in~\eqref{eq:splitting}.
Moreover, $G$ can interchange only those subspaces
$V_i$ and $V_j$ that have the same dimension.
Therefore, we get a homomorphism
\begin{equation*}
\psi\colon G\to\prod\limits_{i=1}^t \SS_{m_i}.
\end{equation*}
Let $\Delta\subset G$ be the kernel of the homomorphism
$\psi$. Then each summand of~\eqref{eq:splitting}
is invariant with respect to $\Delta$. Since $V$ is a
faithful representation of $\Delta$, one has an inclusion
\begin{equation*}
\Delta\hookrightarrow\prod_{j=1}^s \GL(V_j)\cong
\prod_{i=1}^t \big(\GL_{d_i}(\Bbbk)\big)^{m_i}.
\end{equation*}
Note that
\begin{equation*}
[G:\Delta]\le \Big|\prod\limits_{i=1}^t \SS_{m_i}\Big|=\prod\limits_{i=1}^t m_i!.
\end{equation*}
Recall that the groups $\GL_{d_i}(\Bbbk)$ are Jordan by Theorem~\xref{theorem:Jordan}.
Thus the group $G$ is Jordan with
\begin{equation*}
\bar{J}(G)\le [G:\Delta]\cdot\bar{J}(\Delta)\le \Big(\prod\limits_{i=1}^t m_i!\Big)\cdot
\Big(\prod\limits_{i=1}^t\bar{J}\big(\GL_{d_i}(\Bbbk)\big)^{m_i}\Big)
\end{equation*}
by Remark~\xref{remark:Pyber}.
\end{proof}
Lemma~\xref{lemma:isotypical} allows us to provide a bound for Jordan constants
of some subgroups of~\mbox{$\GL_7(\Bbbk)$}.
This bound will be used in the proof of Lemma~\xref{lemma:intersection-of-three-quadrics}.
\begin{lemma}\label{lemma:isotypical-7}
Let $G$ be a group,
and $\tilde{\Gamma}\subset G$ be a normal finite abelian subgroup such that
$\tilde{\Gamma}\cong\mumu_2^m$ with $m\ge 4$.
Suppose that $G$ has a faithful seven-dimensional representation.
Then $G$ is Jordan with
\begin{equation*}
\bar{J}(G)\le 10368.
\end{equation*}
\end{lemma}
\begin{proof}
Since $\tilde{\Gamma}\cong\mumu_2^m$ has a faithful seven-dimensional representation,
we have $m\le 7$.
By Lemma~\xref{lemma:isotypical} there
exist positive integers $t$, $m_1,\ldots, m_t$, $d_1,\ldots,d_t$,
such that
\begin{equation*}
m_1d_1+\ldots+m_td_t=7,
\end{equation*}
while $m_1+\ldots+m_t\ge m$ and
\begin{equation}\label{eq:J-isotypical-7}
\bar{J}(G)\le \Big(\prod\limits_{i=1}^t m_i!\Big)
\cdot\Big(\prod\limits_{i=1}^t\bar{J}\big(\GL_{d_i}(\Bbbk)\big)^{m_i}\Big).
\end{equation}
In particular, one has $4\le m_1+\ldots+m_t\le 7$. Also, we may assume
that $d_1<\ldots<d_t$. We consider several possibilities for $m_1+\ldots+m_t$ case by case.
If $m_1+\ldots+m_t=7$, then $t=1$, $d_1=1$ and $m_1=7$,
so that~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 7!=5040.
\end{equation*}
If $m_1+\ldots+m_t=6$, then $t=2$, $d_1=1$, $m_1=5$, $d_2=2$, $m_2=1$,
so that~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 5!\cdot \bar{J}\big(\GL_2(\Bbbk)\big)=120\cdot 12=1440
\end{equation*}
by Corollary~\xref{corollary:GL2}.
If $m_1+\ldots+m_t=5$, then $t=2$, $d_1=1$, and either $m_1=4$, $d_2=3$, $m_2=1$,
or $m_1=3$, $d_2=2$, $m_2=2$.
In the former case~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 4!\cdot \bar{J}\big(\GL_3(\Bbbk)\big)=24\cdot 72=1728
\end{equation*}
by Lemma~\xref{lemma:weak-GL3}.
In the latter case~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 3!\cdot 2!\cdot\bar{J}\big(\GL_2(\Bbbk)\big)^2=6\cdot 2\cdot 12^2=1728
\end{equation*}
by Corollary~\xref{corollary:GL2}.
Finally, if $m_1+\ldots+m_t=4$, then either
\begin{equation*}
t=2, \ d_1=1, \ m_1=3, \ d_2=4, \ m_2=1,
\end{equation*}
or
\begin{equation*}
t=2, \ d_1=1, \ m_1=1, \ d_2=2, \ m_2=3,
\end{equation*}
or
\begin{equation*}
t=3, \ d_1=1, \ m_1=2, \ d_2=2, \ m_2=1, \ d_3=3, \ m_3=1.
\end{equation*}
In the first case~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 3!\cdot \bar{J}\big(\GL_4(\Bbbk)\big)=6\cdot 960=5760
\end{equation*}
by Lemma~\xref{lemma:weak-GL4}.
In the second case~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 3!\cdot \bar{J}\big(\GL_2(\Bbbk)\big)^3=6\cdot 12^3=10368
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In the third case~\eqref{eq:J-isotypical-7}
gives
\begin{equation*}
\bar{J}(G)\le 2!\cdot \bar{J}\big(\GL_2(\Bbbk)\big)\cdot \bar{J}\big(\GL_3(\Bbbk)\big)=
2\cdot 12\cdot 72=1728
\end{equation*}
by Corollary~\xref{corollary:GL2} and Lemma~\xref{lemma:weak-GL3}.
Therefore, in all cases one has $\bar{J}(G)\le 10368$.
\end{proof}
\section{Surfaces}
\label{section:dim-2}
The goal of this section is to estimate weak Jordan constants for automorphism
groups of rational surfaces, as well as some other constants of similar nature.
In the sequel for any variety $X$ we will denote by $\Phi(X)$ the
minimal positive integer $m$ such that for any finite group
$G\subset\Aut(X)$ there is a subgroup $F\subset G$ with $[G:F]\le m$ acting
on $X$ with a fixed point. If there does not exist an integer $m$
with the above property, we put $\Phi(X)=+\infty$.
Note that $\Phi(X)$ is bounded by some universal constant for rationally
connected varieties~$X$ of dimension at most $3$ by~\cite[Theorem~4.2]{ProkhorovShramov-RC}.
\subsection{Preliminaries}
\label{subsection:surf-prelim}
We start with the one-dimensional case.
\begin{lemma}
\label{lemma:dim-1}
One has $\Phi(\P^1)=12$.
Moreover, if $T$ is a finite union of rational curves
such that its dual graph $T^{\vee}$ is a tree,
then $\Phi(T)\le 12$.
\end{lemma}
\begin{proof}
The inequality $\Phi(\P^1)\le 12$ is given by
Lemma~\xref{lemma:weak-constant-quadric}\xref{lemma:weak-constant-quadric-i}.
The equality~\mbox{$\Phi(\P^1)=12$} is given by the icosahedral group~\mbox{$\A_5\subset\Aut(\P^1)$}.
Since for any rational curve $C$ one has
\begin{equation*}
\Aut(C)\subset\Bir(C)\cong\Bir(\P^1)=\Aut(\P^1),
\end{equation*}
we also see that $\Phi(C)\le 12$.
Let $T$ be a finite union of rational curves
such that its dual graph $T^{\vee}$ is a tree. Then
there is a natural homomorphism
of $\Aut(T)$ to the finite group $\Aut(T^{\vee})$.
It is easy to show by induction on the number of vertices
that either there is an
edge of $T^{\vee}$ that is invariant under $\Aut(T^{\vee})$,
or there is a vertex of
$T^{\vee}$ that is invariant under $\Aut(T^{\vee})$.
In the former case there is a point $P\in T$ fixed by
$\Aut(T)$, so that $\Phi(T)=1$.
In the latter case there is a
rational curve $C\subset T$
that is invariant under $\Aut(T)$, so that
\begin{equation*}
\Phi(T)\le\Phi(C)\le 12.\qedhere
\end{equation*}
\end{proof}
Now we proceed with the two-dimensional case.
In a sense, we are going to carry out, in a more systematic way,
the same analysis as in Lemma~\xref{lemma:weak-constant-quadric}.
For a variety $X$ with an action of a finite group
$G$, we will denote by $\Phi_a(X,G)$ the
minimal positive integer $m$ such that there is an \emph{abelian}
subgroup $A\subset G$ with $[G:A]\le m$ acting
on $X$ with a fixed point.
The main advantage of this definition is the following property.
\begin{lemma}\label{lemma:Phi-a-lift}
Let $X$ and $Y$ be smooth surfaces acted on by a finite group
$G$. Suppose that there
is a $G$-equivariant birational morphism $\pi\colon Y\to X$.
Then $\Phi_a(Y,G)=\Phi_a(X,G)$.
\end{lemma}
\begin{proof}
The assertion is implied by the results of~\cite{Kollar-Szabo-2000}
in arbitrary dimension. We give the proof for dimension $2$ for
the reader's convenience.
The inequality $\Phi_a(Y,G)\ge\Phi_a(X,G)$ is obvious.
To prove the opposite inequality
choose an abelian subgroup $A\subset G$ such that
there is a point $P\in X$ fixed by $A$.
We are going to produce a point $Q\in Y$ fixed by $A$ such that
$\pi(Q)=P$.
The birational morphism $\pi$ is a composition
of blow ups of smooth points.
Since $\pi$ is $G$-equivariant and thus $A$-equivariant,
we may replace $X$ by a neighborhood of the point~$P$
and thus suppose that $\pi$ is a sequence of blow ups
of points lying over the point~$P$. If~$\pi$ is an isomorphism,
then there is nothing to prove. Otherwise, by induction
on the number of blow ups, we see that it is enough to
consider the case when $\pi$ is a single blow up
of the point~$P$. In this case the exceptional
divisor $E=\pi^{-1}(P)$ is identified with
the projectivization of the Zariski tangent space
$T_P(X)$, and the action of $A$ on $E$ comes from a linear action of $A$ on~\mbox{$T_P(X)$}.
Since the group~$A$ is abelian, it has a one-dimensional
invariant subspace in $T_P(X)$, which gives an $A$-invariant
point~\mbox{$Q\in E\subset Y$}.
\end{proof}
\subsection{Del Pezzo surfaces}
\begin{lemma}\label{lemma:Phi-a-P2}
Let $G\subset\Aut(\P^2)$ be a finite group.
Then one has $\Phi_a(\P^2, G)\le 72$.
\end{lemma}
\begin{proof}
One has $\Aut(\P^2)\cong\PGL_3(\Bbbk)$.
By the holomorphic Lefschetz fixed-point formula any cyclic group
acting on a rational variety has a fixed point.
Now the required bound is obtained from the classification of finite subgroups
of $\GL_3(\Bbbk)$
(see~\cite[Chapter~XII]{MBD1916}
or~\cite[\S8.5]{Feit},
and also the proof of Lemma~\xref{lemma:weak-GL3}).
\end{proof}
\begin{remark}
Note that the bound given by Lemma~\xref{lemma:Phi-a-P2} is actually attained for the
group~\mbox{$\A_6\subset\PGL_3(\Bbbk)$} whose abelian subgroup of maximal order acting on~$\P^2$
with a fixed point is~$\mumu_5$.
\end{remark}
\begin{lemma}\label{lemma:Phi-a-DP}
Let $X$ be a smooth del Pezzo surface.
Let $G\subset\Aut(X)$ be a finite group.
Then one has
\begin{equation*}
\Phi_a(X, G)\le 288.
\end{equation*}
Moreover, if $X$ is not isomorphic to $\P^1\times\P^1$,
then~\mbox{$\Phi_a(X, G)\le 144$}.
\end{lemma}
\begin{proof}
If $X\cong\P^2$, then $\Phi_a(X, G)\le 72$ by Lemma~\xref{lemma:Phi-a-P2}.
Suppose that $X\cong\P^1\times\P^1$. Then one has
$\Phi_a(X,G)\le 288$ by Lemma~\xref{lemma:weak-constant-quadric}(ii). Note that this value is attained
for the group
\begin{equation*}
G\cong\big(\A_5\times\A_5\big)\rtimes\mumu_2\subset\Aut(\P^1\times\P^1).
\end{equation*}
Suppose that $X$ is a blow up $\pi\colon X\to\P^2$ at one or two
points. Then $\pi$ is an $\Aut(X)$-equivariant birational
morphism, so that $\Phi_a(X,G)\le 72$ by Lemmas~\xref{lemma:Phi-a-P2}
and~\xref{lemma:Phi-a-lift}.
Put $d=K_X^2$. We may assume that $d\le 6$.
Suppose that $d=6$. Then
\begin{equation*}
\Aut(X)\cong\big(\Bbbk^*\times\Bbbk^*\big)\rtimes\mathrm{D}_{6},
\end{equation*}
where $\mathrm{D}_{6}$ is the dihedral group of order $12$
(see~\cite[Theorem~8.4.2]{Dolgachev-book}). The subgroup~\mbox{$\Bbbk^*\times\Bbbk^*\subset\Aut(X)$}
acts on $X$ with a fixed point by Borel's theorem (see e.\,g.~\mbox{\cite[VIII.21]{Humphreys1975}}).
From this one can easily deduce that~\mbox{$\Phi_a(X, G)\le 12$}
for any finite subgroup~\mbox{$G\subset\Aut(X)$}.
If $d\le 5$, then the group $\Aut(X)$ is finite,
and it is enough to show that~\mbox{$\Phi_a\big(X,\Aut(X)\big)\le 144$}.
Suppose that $d=5$. Then $\Aut(X)\cong\SS_5$
(see~\cite[Theorem~8.5.6]{Dolgachev-book}).
Hence one has
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le |\Aut(X)|=120.
\end{equation*}
Suppose that $d=4$.
Then
\begin{equation*}
\Aut(X)\cong\mumu_2^4\rtimes\Gamma,
\end{equation*}
where $|\Gamma|\le 10$
(see~\cite[Theorem~8.6.6]{Dolgachev-book}).
Representing $X$ as an intersection of two quadrics with
equations in diagonal form, one can see that there
is a subgroup~\mbox{$\mumu_2^2\subset\Aut(X)$}
acting on $X$ with a fixed point.
Therefore, one has
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le \frac{|\Aut(X)|}{|\mumu_2^2|}\le
\frac{160}{4}=40.
\end{equation*}
Suppose that $d=3$. Then either
$\Aut(X)\cong \mumu_3^3\rtimes\SS_4$
and $X$ is the Fermat cubic, or~\mbox{$|\Aut(X)|\le 120$}
(see~\cite[Theorem~9.5.6]{Dolgachev-book}).
In the former case it is easy to see that there is a subgroup
$\mumu_3^2\subset\Aut(X)$ acting on $X$ with a fixed point,
so that
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le \frac{|\Aut(X)|}{|\mumu_3^2|}=
\frac{648}{9}=72.
\end{equation*}
In the latter case one has
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le |\Aut(X)|\le 120.
\end{equation*}
Suppose that $d=2$. Then either
$|\Aut(X)|\le 96$,
or $\Aut(X)\cong\mumu_2\times (\mumu_4^2\rtimes\SS_3)$,
or~\mbox{$\Aut(X)\cong\mumu_2\times\PSL_2(\F_7)$}
(see~\cite[Table~8.9]{Dolgachev-book}).
In the first case one has
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le |\Aut(X)|\le 96.
\end{equation*}
To estimate $\Phi_a\big(X,\Aut(X)\big)$ in the latter two cases,
recall that the anticanonical
linear system $|-K_X|$ defines a double cover
\begin{equation*}
\varphi_{|-K_X|}\colon X\to\P^2
\end{equation*}
branched over a smooth quartic curve $C\subset\P^2$.
The subgroup $\mumu_2$ acts by the Galois
involution of the corresponding double cover.
In particular, the curve $\varphi_{|-K_X|}^{-1}(C)$ consists
of $\mumu_2$-fixed points. If $\Aut(X)\cong\mumu_2\times (\mumu_4^2\rtimes\SS_3)$,
this gives
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le\frac{|\Aut(X)|}{|\mumu_2|}=\frac{192}{2}=96.
\end{equation*}
If $\Aut(X)\cong\mumu_2\times\PSL_2(\F_7)$, then the group
$\PSL_2(\F_7)\subset\Aut(X)$ contains a subgroup~$\mumu_7$,
and $\mumu_7$ acts on the curve $\varphi_{|-K_X|}^{-1}(C)\cong C$ with a fixed point
(this can be easily seen, for example, from the Riemann--Hurwitz formula since $C$ is a smooth curve of genus $3$).
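Indeed, if the action of $\mumu_7$ on $C$ were free, the Riemann--Hurwitz formula applied to the quotient map~\mbox{$C\to C/\mumu_7$} would give
\begin{equation*}
2g(C)-2=4=7\cdot\big(2g(C/\mumu_7)-2\big),
\end{equation*}
which has no solutions in integers; hence $\mumu_7$ has a fixed point on $C$.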
Thus
\begin{equation*}
\Phi_a\big(X,\Aut(X)\big)\le\frac{|\Aut(X)|}{|\mumu_2\times\mumu_7|}=\frac{336}{14}=24.
\end{equation*}
Finally, suppose that $d=1$. Then
\begin{equation*}
\Phi_a\big(X, \Aut(X)\big)\le |\Aut(X)|\le 144
\end{equation*}
(see~\cite[Table~8.14]{Dolgachev-book}).
\end{proof}
\begin{remark}\label{remark:pohuj}
In several cases
(say, for a del Pezzo surface of degree $d=5$)
one can produce better upper bounds for~\mbox{$\Phi_a(X, G)$}
than those given in the proof of
Lemma~\xref{lemma:Phi-a-DP}, but we do not pursue
this goal.
\end{remark}
Lemma~\xref{lemma:Phi-a-DP} immediately implies the following.
\begin{corollary}[{cf. Lemma~\xref{lemma:weak-constant-quadric}(iii)}]
\label{corollary:barJ-for-DP}
Let $X$ be a smooth del Pezzo surface.
Then one has $\bar{J}\big(\Aut(X)\big)\le 288$.
Moreover, if $X$ is not isomorphic to $\P^1\times\P^1$,
then~\mbox{$\bar{J}\big(\Aut(X)\big)\le 144$}.
\end{corollary}
\subsection{Rational surfaces}
Now we pass to the case of arbitrary rational surfaces.
\begin{lemma}\label{lemma:dim-2-constants}
Let $X$ be a smooth rational surface, and $G\subset\Aut(X)$ be a finite subgroup.
Then there exists an abelian subgroup $H\subset G$
of index $[G:H]\le 288$ that acts on $X$ with a fixed point.
\end{lemma}
\begin{proof}
Let $Y$ be a smooth projective rational
surface, and $G\subset\Aut(Y)$
be a finite group. Let
$\pi\colon Y\to X$ be the result of a $G$-Minimal Model Program
run on $Y$. One has
\begin{equation*}
\Phi_a(Y,G)=\Phi_a(X,G)
\end{equation*}
by Lemma~\xref{lemma:Phi-a-lift}. Moreover, $X$ is either a del Pezzo surface,
or there is a
$G$-equivariant conic bundle structure on~$X$
(see~\cite[Theorem~1G]{Iskovskikh-1979s-e}).
If $X$ is a del Pezzo surface, then~\mbox{$\Phi_a(X,G)\le 288$} by Lemma~\xref{lemma:Phi-a-DP},
so that $\Phi_a(Y,G)\le 288$.
Therefore, we assume that there is a $G$-equivariant conic bundle structure
\begin{equation*}
\phi\colon X\to B\cong \P^1.
\end{equation*}
There is an exact sequence of groups
\begin{equation*}
1\to G_{\phi}\longrightarrow G\stackrel{u}\longrightarrow
G_{B}\to 1,
\end{equation*}
where $G_{\phi}$ acts by fiberwise automorphisms with respect to
$\phi$, and $G_{B}\subset\Aut(\P^1)$.
By Lemma~\xref{lemma:dim-1} we find a subgroup
$G_{B}'\subset G_{B}$ of index $[G_{B}:G_{B}']\le 12$
acting on $\P^1$ with a fixed point $P\in\P^1$.
The group
\begin{equation*}
G'=u^{-1}(G_{B}')\subset G
\end{equation*}
acts by automorphisms of the fiber $C=\phi^{-1}(P)$.
Note that $C$ is a reduced conic, i.e.
it is either isomorphic to $\P^1$, or is a union
of two copies of $\P^1$ meeting at one point.
Suppose that $C\cong \P^1$. Then there is a point $Q\in C$ that is invariant
with respect to some subgroup~\mbox{$G''\subset G'$} of index
$[G':G'']\le 12$ by Lemma~\xref{lemma:dim-1}.
The morphism $\phi\colon X\to B$ is smooth at $Q$.
Hence the map $d\phi\colon T_Q(X)\to T_P(B)$ is surjective.
By Lemma~\xref{lemma:Aut-P}
the group $G''$ acts faithfully
on the Zariski tangent space $T_Q(X)$, and
the group $G'_{B}$ acts faithfully
on the Zariski tangent space $T_P(B)$.
The map $d\phi$ is $G''$-equivariant, and so $G''$ has a one-dimensional
invariant
subspace $\Ker (d\phi)\subset T_Q(X)\cong \Bbbk^2$.
Hence $G''$ is a finite group of upper-triangular matrices; every such group is
conjugate to a group of diagonal matrices, so $G''$ is abelian with $[G:G'']\le 12\cdot 12=144$.
Now consider the case when $C$ is a reducible conic, i.e.
it is a union
of two copies of $\P^1$ meeting at one point, say $Q$.
Then $Q$ is $G'$-invariant. There exists a subgroup $G''\subset G'$ of index
$[G':G'']\le 2$ such that both irreducible components $C_1,\, C_2\subset C$ are invariant with
respect to~$G''$. In this case the subspaces $T_Q(C_i)\subset T_Q(X)$ are $G''$-invariant,
and as above $G''$ is abelian with $[G:G'']\le 12\cdot 2=24$.
Therefore, one has
\begin{equation*}
\Phi_a(Y,G)=\Phi_a(X,G)\le [G:G'']\le 144.\qedhere
\end{equation*}
\end{proof}
\begin{corollary}\label{corollary:barJ-for-surfaces}
Let $X$ be a smooth rational surface.
Then one has $\bar{J}\big(\Aut(X)\big)\le 288$.
\end{corollary}
\begin{corollary}\label{corollary:Cr-2}
One has $\bar{J}\big(\Cr_2(\Bbbk)\big)=288$.
\end{corollary}
\begin{proof}
Let $G\subset\Cr_2(\Bbbk)$ be a finite group.
It is enough to study the weak Jordan constant $\bar{J}(G)$.
Regularizing the action of $G$ and taking an equivariant
desingularization (see e.\,g.~\mbox{\cite[Lemma-Definition~3.1]{Prokhorov-Shramov-J}}),
we may assume that~\mbox{$G\subset\Aut(X)$} for a smooth rational surface~$X$.
Now the bound $\bar{J}\big(\Cr_2(\Bbbk)\big)\le 288$ follows from Corollary~\xref{corollary:barJ-for-surfaces}.
The equality is due to Lemma~\xref{lemma:weak-constant-quadric}\xref{lemma:weak-constant-quadric-iii}.
\end{proof}
A direct consequence of Corollary~\ref{corollary:Cr-2} is that the weak Jordan constant of
the Cremona group of rank~$2$ is bounded by~$288$ for an arbitrary (not necessarily algebraically closed)
base field. Together with Remark~\ref{remark:Pyber}
this gives a proof of Proposition~\ref{proposition:Cr-2}.
\subsection{Non-rational surfaces}
We conclude this section with three easy observations concerning automorphism groups
of certain non-rational surfaces.
\begin{lemma}\label{lemma:ruled-surface}
Let $C$ be a smooth curve of genus $g\ge 2$, and let $S$ be a
ruled surface over~$C$.
Then the group $\Aut(S)$ is Jordan with $\bar{J}\big(\Aut(S)\big)\le 1008(g-1)$.
\end{lemma}
\begin{proof}
Let $G\subset\Aut(S)$ be a finite group. It is enough to prove the corresponding
bound for~$\bar{J}(G)$.
Denote by $\phi\colon S\to C$ the ruling.
There is an exact sequence of groups
\begin{equation*}
1\to G_{\phi}\longrightarrow G\longrightarrow G_C\to 1,
\end{equation*}
where $G_{\phi}$ acts by fiberwise automorphisms with
respect to $\phi$, and $G_C\subset\Aut(C)$.
One has
\begin{equation*}
|G_C|\le 84(g-1)
\end{equation*}
by the Hurwitz bound.
On the other hand, the group $G_{\phi}$ is a subgroup
of~\mbox{$\Aut(\P^1)\cong\PGL_2(\Bbbk)$},
so that $G_{\phi}$ contains an abelian subgroup
$H$ of index
\begin{equation*}
[G_{\phi}:H]\le 12
\end{equation*}
by Corollary~\xref{corollary:GL2}.
Thus one has
\begin{equation*}
\bar{J}(G)\le [G:H]=[G:G_{\phi}]\cdot [G_{\phi}:H]=
|G_C|\cdot [G_{\phi}:H]\le 84(g-1)\cdot 12=1008(g-1).\qedhere
\end{equation*}
\end{proof}
\begin{lemma}[{cf. \cite[Corollary~2.15]{Prokhorov-Shramov-J}}]
\label{lemma:abelian-surface}
Let $S$ be an abelian surface.
Then the group~\mbox{$\Aut(S)$} is Jordan with $\bar{J}\big(\Aut(S)\big)\le 5760$.
\end{lemma}
\begin{proof}
One has $\Aut(S)\cong A\rtimes\Gamma$, where $A$ is an abelian group
(that is identified with the group of points on $S$), and
$\Gamma$ is a subgroup of $\GL_4(\Z)$.
Thus $\Aut(S)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(S)\big)\le [\Aut(S):A]=|\Gamma|\le 5760
\end{equation*}
by the Minkowski bound for $\GL_4(\Z)$ (see e.\,g.~\cite[\S1.1]{Serre2007}).
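Recall that the Minkowski bound for $\GL_n(\Z)$ equals $M(n)=\prod_p p^{m(n,p)}$,
where~\mbox{$m(n,p)=\sum_{i\ge 0}\lfloor n/(p^i(p-1))\rfloor$}; for $n=4$ this gives
\begin{equation*}
M(4)=2^{4+2+1}\cdot 3^{2}\cdot 5^{1}=2^7\cdot 3^2\cdot 5=5760.
\end{equation*}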
\end{proof}
To obtain a bound for a weak Jordan constant in the last case we will use some
purely group-theoretic facts.
\begin{proposition}[{see Corollary~2 of Theorem~1.17 in Chapter~2 of~\cite{Suzuki82}}]
\label{proposition:Suzuki}
Let $p$ be a prime number, $G$ be a group of order $p^n$, and $A\subset G$ be an abelian normal subgroup of maximal possible order~$p^a$. Then $2n\le a(a+1)$.
\end{proposition}
\begin{lemma}\label{lemma:group-theory}
Let $G$ be a finite group with $|G|\le 79380$.
Then
\begin{equation*}
\bar{J}(G)\le 9922.
\end{equation*}
\end{lemma}
\begin{proof}
Suppose that $|G|$ is divisible by a
prime number $p$. Then $G$ contains a cyclic subgroup of order $p$, so that
\begin{equation*}
\bar{J}(G)\le \frac{|G|}{p}.
\end{equation*}
In particular, if $|G|$ is divisible by a prime $p\ge 11$, then
\begin{equation*}
\bar{J}(G)\le \frac{|G|}{11}<7217.
\end{equation*}
Similarly, suppose that $p$ is a prime such that $|G|$ is divisible by $p^2$.
Let $G_p\subset G$ be a Sylow $p$-subgroup.
Then $|G_p|\ge p^2$. If $|G_p|=p^2$, then $G_p$ is abelian, so that
\begin{equation*}
\bar{J}(G)\le [G:G_p]=\frac{|G|}{p^2}.
\end{equation*}
If $|G_p|\ge p^3$, then $G_p$ contains an abelian subgroup
$A$ of order $|A|\ge p^2$ by Proposition~\ref{proposition:Suzuki}, and we again have
\begin{equation*}
\bar{J}(G)\le [G:A]\le \frac{|G|}{p^2}.
\end{equation*}
In particular, if there is a prime $p\ge 3$ such that $|G|$ is divisible
by $p^2$, then
\begin{equation*}
\bar{J}(G)\le \frac{|G|}{p^2}\le \frac{|G|}{9}\le 8820.
\end{equation*}
Now suppose that $|G|$ is not divisible by any prime greater than $7$,
and $|G|$ is not divisible by a square of any prime greater than $2$. This means
that
\begin{equation*}
|G|=2^{\alpha}\cdot 3^{\beta}\cdot 5^{\gamma}\cdot 7^{\delta},
\end{equation*}
where $\beta,\gamma,\delta\in\{0,1\}$. If $\alpha\le 3$, then
\begin{equation*}
\bar{J}(G)\le |G|\le 2^3\cdot 3\cdot 5\cdot 7=840.
\end{equation*}
Thus we assume that $\alpha\ge 4$. Let $G_2\subset G$ be a Sylow $2$-subgroup.
Applying Proposition~\ref{proposition:Suzuki} once again, we see that $G_2$ contains an abelian
subgroup~$A$ of order $|A|\ge 8$. Hence one has
\begin{equation*}
\bar{J}(G)\le [G:A]\le \frac{|G|}{8}<9923.\qedhere
\end{equation*}
\end{proof}
Now we are ready to bound a weak Jordan constant for automorphism groups
of surfaces of general type of low degree.
\begin{lemma}
\label{lemma:general-type-45}
Let $S$ be a smooth minimal surface of general type of degree
$K_S^2\le 45$. Then the group~\mbox{$\Aut(S)$} is Jordan with
$\bar{J}\big(\Aut(S)\big)\le 9922$.
\end{lemma}
\begin{proof}
By
\cite{Xiao1995}
one has
\begin{equation*}
|\Aut(S)|\le 42^2\cdot K_S^2\le 79380.
\end{equation*}
Thus the group~\mbox{$\Aut(S)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(S)\big)\le 9922
\end{equation*}
by Lemma~\xref{lemma:group-theory}.
\end{proof}
\section{Terminal singularities}
\label{section:terminal}
In this section we study Jordan property for automorphism groups of
germs of three-dimensional terminal singularities,
and derive some conclusions about automorphism groups
of non-Gorenstein terminal Fano threefolds.
\subsection{Local case}
Recall from~\S\xref{subsection:linear-prelim} that for an arbitrary variety $U$ and a point $P\in U$
we denote by $\Aut_P(U)$ the stabilizer of $P$ in $\Aut(U)$.
Now we are going to estimate a weak Jordan constant of a
group~\mbox{$\Aut_P(U)$}, where~\mbox{$P\in U$} is a three-dimensional
terminal singularity.
\begin{lemma}\label{lemma:dim-3-terminal}
Let $U$ be a threefold, and $P\in U$ be a terminal singular point of~$U$.
Let~\mbox{$G\subset\Aut_P(U)$} be a finite subgroup.
Then for some positive integer $r$ there is an extension
\begin{equation}
\label{equation-terminal-singular-point-extension}
1\longrightarrow \mumu_r
\longrightarrow \tilde{G} \longrightarrow G
\longrightarrow 1
\end{equation}
such that the following assertions hold.
\begin{enumerate}
\item There is an embedding $\tilde{G}\subset\GL_4(\Bbbk)$,
and the group $\tilde{G}$ has a semi-invariant of degree~$2$.
\item If $(U,P)$ is a cyclic
quotient singularity, then there is an embedding $\tilde{G}\subset\GL_3(\Bbbk)$.
\item Let $D$ be a $G$-invariant boundary on $U$ such that the log pair $(U,D)$
is log canonical, and suppose that some minimal center~$C$ of log canonical singularities
of~$(U,D)$ is a $G$-invariant curve containing~$P$
\textup(see \cite[Proposition~1.5]{Kawamata1997}\textup).
Then $\tilde{G}\subset\Bbbk^*\times\GL_3(\Bbbk)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $r\ge 1$ be the index of $U\ni P$, i.\,e.
$r$ equals the minimal positive
integer~$t$ such that~$tK_U$ is Cartier at~$P$.
Replacing $U$ by a smaller $G$-invariant neighborhood of $P$ if necessary,
we may assume that $rK_U\sim 0$. Consider the index-one cover
\begin{equation*}
\pi\colon (U^\sharp,P^\sharp)\to (U,P)
\end{equation*}
(see \cite[Proposition~3.6]{Reid-YPG1987}).
Then $U^\sharp\ni P^\sharp$ is a terminal singularity of index~$1$,
and~\mbox{$U\cong U^\sharp /\mumu_r$}.
Note that $U^\sharp\ni P^\sharp$ is a hypersurface singularity,
\mbox{i.\,e.~$\dim T_{P^\sharp}(U^\sharp)\le 4$} (see~\mbox{\cite[Corollary~3.12(i)]{Reid-YPG1987}}).
Moreover, $U^\sharp$ is smooth at $P^\sharp$ if~\mbox{$(U,P)$}
is a cyclic quotient singularity.
By construction of the index-one cover
every element of $\Aut_P(U)$ admits~$r$
lifts to~\mbox{$\Aut(U^\sharp, P^\sharp)$}. Thus
we have a natural exact sequence~\eqref{equation-terminal-singular-point-extension},
where $\tilde{G}$ is some subgroup of $\Aut_{P^\sharp}(U^\sharp)$.
Furthermore, by Lemma~\xref{lemma:Aut-P} we know that $\tilde{G}\subset\GL_3(\Bbbk)$
if $U^\sharp$ is smooth at $P^\sharp$. This gives assertion~(ii).
Now suppose that $\dim T_{P^\sharp}(U^\sharp)= 4$.
By Lemma~\xref{lemma:Aut-P} one has an embedding~\mbox{$\tilde{G}\subset\GL_4(\Bbbk)$}.
Moreover, $U^\sharp\ni P^\sharp$ is a hypersurface singularity
of multiplicity $2$ by~\mbox{\cite[Corollary~5.38]{Kollar-Mori-1988}}.
This means that the kernel of the natural map
\[
\Sym^2\big(\mathfrak m_{P^{\sharp}, U^{\sharp}}/\mathfrak m_{P^{\sharp}, U^{\sharp}}^2\big)
\longrightarrow
\mathfrak m_{P^{\sharp}, U^{\sharp}}^2/\mathfrak m_{P^{\sharp}, U^{\sharp}}^3
\]
is generated by an element of degree $2$.
Therefore,
the group $\tilde{G}$ has a semi-invariant polynomial of degree $2$.
This completes the proof of assertion~(i).
Finally, let $C$, $D$, and $G$ be as in assertion~(iii).
Put $D^\sharp=\pi^*D$ and $C^\sharp=\pi^{-1}(C)$.
One can show that $C^\sharp$ is again a minimal center of log canonical singularities of
$(U^\sharp, D^\sharp)$ (cf. \cite[Proposition~5.20]{Kollar-Mori-1988}).
In particular, $C^\sharp$ is smooth (see~\cite[Theorem~1.6]{Kawamata1997}).
As above, one has an embedding $\tilde{G}\subset\GL\big(T_{P^\sharp}(U^\sharp)\big)$.
Moreover, since $C^{\sharp}$ is $\tilde{G}$-invariant,
we have a decomposition
of $\tilde{G}$-representations
\begin{equation*}
T_{P^\sharp}(U^\sharp)=T_1\oplus T_3,
\end{equation*}
where $T_1=T_{P^\sharp}(C^\sharp)\cong\Bbbk$
and $\dim T_3=3$.
Hence, one has
\begin{equation*}
\tilde{G}\subset \GL(T_1)\times \GL(T_3)\cong\Bbbk^*\times\GL_3(\Bbbk),
\end{equation*}
which proves assertion~(iii).
\end{proof}
\begin{corollary}\label{corollary:dim-3-terminal}
Let $U$ be a threefold, and $P\in U$ be a terminal singularity.
Then the following assertions hold.
\begin{enumerate}
\item The group
$\Aut_P(U)$ is Jordan with
\begin{equation*}
\bar{J}(\Aut_P(U))\le 288.
\end{equation*}
\item If $(U,P)$ is a cyclic
quotient singularity, then $\Aut_P(U)$ is Jordan with
\begin{equation*}
\bar{J}(\Aut_P(U))\le 72.
\end{equation*}
\item Let $C\ni P$ be a curve contained in $U$
and $\Gamma\subset\Aut_P(U)$
be a subgroup such that $C$ is $\Gamma$-invariant.
Assume that $C$ is a minimal center of log canonical singularities
of the log pair $(U,D)$ for some $\Gamma$-invariant boundary $D$.
Then $\Gamma$ is Jordan with
\begin{equation*}
\bar{J}(\Gamma)\le 72.
\end{equation*}
\end{enumerate}
\end{corollary}
\begin{proof}
Suppose that $G\subset\Aut_P(U)$ is a finite subgroup.
It is enough to prove the corresponding bounds for the constant $\bar{J}(G)$.
One has $\bar{J}(G)\le\bar{J}(\tilde{G})$, where $\tilde{G}$ is the extension
of $G$ given by Lemma~\xref{lemma:dim-3-terminal}.
Thus, assertion~(i) follows from Lemma~\xref{lemma:dim-3-terminal}(i) and
Lemma~\xref{lemma:GL4-quadric}, while
assertion~(ii) follows from Lemma~\xref{lemma:dim-3-terminal}(ii) and
Lemma~\xref{lemma:weak-GL3}.
Suppose that $\Gamma$ is as in assertion~(iii), and $G\subset\Gamma$.
Then
\[
\bar{J}(G)\le\bar{J}(\tilde{G})\le
\bar{J}\big(\Bbbk^*\times\GL_3(\Bbbk)\big)=
\bar{J}\big(\GL_3(\Bbbk)\big)
\]
by Lemma~\xref{lemma:dim-3-terminal}(iii). Therefore, assertion~(iii) follows from
Lemma~\xref{lemma:weak-GL3}.
\end{proof}
\subsection{Non-Gorenstein Fano threefolds}
Now we will use Corollary~\xref{corollary:dim-3-terminal} to study automorphism
groups of non-Gorenstein terminal Fano threefolds.
\begin{lemma}\label{lemma:non-Gorenstein-Fano-3-fold}
Let $X$ be a Fano
threefold with terminal singularities.
Suppose that $X$ has a non-Gorenstein singular point.
Then the group~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar J\big(\Aut(X)\big)\le 4608.
\end{equation*}
\end{lemma}
\begin{proof}
We use the methods of \cite[\S6]{Prokhorov2009e}. Let $P_1$ be a non-Gorenstein point
and~\mbox{$P_1,\ldots, P_N\in X$} be its $\Aut(X)$-orbit.
Let $r$ be the index of points $P_1,\ldots, P_N\in X$.
By the orbifold Riemann--Roch theorem and
the Bogomolov--Miyaoka inequality
we have
\begin{equation*}
\frac{3}{2} N\le \Big( r-\frac{1}{r}\Big)N\le 24
\end{equation*}
(see \cite{Kawamata-1992bF}, \cite{KMMT-2000}).
This immediately implies that $N\le 16$.
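The numerical step here can be unpacked as follows; the only input is that the index of a non-Gorenstein point satisfies $r\ge 2$:

```latex
% The function r - 1/r is increasing in r, so r >= 2 gives r - 1/r >= 3/2. Hence
\begin{equation*}
\frac{3}{2}\,N\le\Big(r-\frac{1}{r}\Big)N\le 24
\qquad\Longrightarrow\qquad
N\le\frac{2\cdot 24}{3}=16.
\end{equation*}
```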
The subgroup $\Aut_{P_1}(X)\subset\Aut(X)$ stabilizing the point
$P_1$ has index
\begin{equation*}
[\Aut(X):\Aut_{P_1}(X)]\le N.
\end{equation*}
Thus we have
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le
[\Aut(X):\Aut_{P_1}(X)]\cdot\bar{J}\big(\Aut_{P_1}(X)\big)\le
N\cdot \bar{J}\big(\Aut_{P_1}(X)\big)\le 16\cdot 288=4608
\end{equation*}
by Corollary~\xref{corollary:dim-3-terminal}(i).
\end{proof}
\begin{remark}
It is known that terminal non-Gorenstein Fano threefolds are bounded,
i.\,e. they belong to an algebraic family (see \cite{Kawamata-1992bF}, \cite{KMMT-2000}).
However, it is expected that the class of these varieties is huge \cite{GRD}.
There are only a few results concerning some special types of these Fano varieties
(see e.\,g. \cite{Brown-Suzuki-2007mm}, \cite{Prokhorov-e-QFano7}).
\end{remark}
\section{Mori fiber spaces}
\label{section:Mfs}
Recall that a $G$-equivariant morphism $\phi\colon X\to S$ of normal
varieties acted on by a finite group
$G$ is a \emph{$G$-Mori fiber space},
if $X$ has terminal $G\Q$-factorial singularities, one has~\mbox{$\dim(S)<\dim(X)$},
the fibers of $\phi$ are connected, the anticanonical divisor~$-K_X$
is $\phi$-ample, and the relative $G$-invariant Picard number
$\rho^G(X/S)$ equals~$1$.
If the dimension of~$X$ equals~$3$, there are three cases:
\begin{itemize}
\item
$S$ is a point, $-K_X$ is ample; in this case $X$ is said to be a $G\Q$-Fano threefold,
and $X$ is a $G$-Fano threefold provided that the singularities of $X$ are Gorenstein;
\item
$S$ is a curve, a general fiber of $\phi$ is a del Pezzo surface; in this case $X$ is said to be a $G\Q$-del Pezzo fibration;
\item
$S$ is a surface, a general fiber of $\phi$ is a rational curve; in this case $X$ is said to be a $G\Q$-conic bundle.
\end{itemize}
The goal of this section is to estimate weak Jordan constants for
automorphism groups of total spaces of $G\Q$-conic bundles and $G\Q$-del Pezzo fibrations.
\subsection{Conic bundles}
We start with automorphism groups of $G\Q$-conic bundles.
\begin{lemma}\label{lemma:conic-bundle}
Let $G$ be a finite group, and $\phi\colon X\to S$ be a
three-dimensional $G$-equivariant fibration into rational curves
over a rational surface $S$.
Then $\bar{J}(G)\le 3456$.
\end{lemma}
\begin{proof}
By~\cite{Avilov2014}
we may assume that
$X$ and $S$ are smooth, and any fiber of $\phi$ is a (possibly reducible or non-reduced) conic.
There is an exact sequence of groups
\begin{equation*}
1\to G_{\phi}\longrightarrow G\stackrel{\gamma}\longrightarrow G_S\to 1,
\end{equation*}
where $G_{\phi}$ acts by fiberwise automorphisms with
respect to $\phi$, and $G_S\subset\Aut(S)$.
By Lemma~\xref{lemma:dim-2-constants} there is an abelian subgroup $G_S'\subset G_S$
of index
\begin{equation*}
[G_S:G_S']\le 288
\end{equation*}
such that $G_S'$
acts on $S$ with a fixed point. Let $P\in S$ be one
of the fixed points of $G_S'$, and let
\begin{equation*}
C=\phi^{-1}(P)\subset X
\end{equation*}
be the fiber of $\phi$ over the point $P$. Put $G'=\gamma^{-1}(G_S')$.
Then $G'$ is a subgroup of $G$ of index
\begin{equation*}
[G:G']=[G_S:G_S']\le 288,
\end{equation*}
and the fiber $C$ is $G'$-invariant.
The fiber $C$ is a conic, so that its reduction is either isomorphic to $\P^1$
(this is the case, in particular, when $C$ is non-reduced), or is a union
of two copies of $\P^1$ meeting at one point.
In the former case there is a point $Q\in C$ that is invariant
with respect to some subgroup~\mbox{$G''\subset G'$} of index
\begin{equation*}
[G':G'']\le 12
\end{equation*}
by Lemma~\xref{lemma:dim-1}. In the latter case the
intersection point $Q$ of the irreducible components~$C_1$ and~$C_2$
of~$C$ is invariant with respect to the group $G'$,
and there exists a subgroup~\mbox{$G''\subset G'$} of index
$[G':G'']\le 2$
such that~$C_1$ and~$C_2$ are invariant with
respect to~$G''$.
By Lemma~\xref{lemma:Aut-P} the group $G''$ acts faithfully
on the Zariski tangent space $T_Q(X)$, and
the group $G_S'$ acts faithfully
on the Zariski tangent space $T_P(S)$. As we have seen,
the group $G''$ preserves
the point $Q$ and a tangent direction
\begin{equation*}
v\in T_Q(X)\cong\Bbbk^3
\end{equation*}
that lies in the kernel of the natural projection
$T_Q(X)\to T_P(S)$.
Moreover, there is an embedding
\begin{equation*}
G''\hookrightarrow\Gamma_1\times\Gamma_2,
\end{equation*}
where $\Gamma_1\subset\Bbbk^*$ is given by the action of $G''$ on the line spanned by~$v$,
and $\Gamma_2\subset G_S'$ is given by the action on~$T_P(S)$.
Since $G_S'$ and $\Bbbk^*$ are abelian groups, we conclude
that so is~$G''$. Therefore, one has
\begin{equation*}
\bar{J}(G)\le [G:G'']=[G:G']\cdot [G':G'']\le 288\cdot 12=3456.\qedhere
\end{equation*}
\end{proof}
\subsection{Del Pezzo fibrations}
Before we pass to the case of $G\Q$-del Pezzo fibrations we will
establish some auxiliary results.
Recall \cite[Definition~3.7]{Kollar-ShB-1988} that a surface singularity
is said to be \textit{of type $T$}
if it is a quotient singularity and admits a $\Q$-Gorenstein one-parameter smoothing.
\begin{lemma}\label{lemma-T-singularities}
Let $X$ be a normal threefold with at worst isolated singularities
and let~\mbox{$S\subset X$} be an effective Cartier divisor
such that the log pair $(X,S)$ is purely log terminal
(see \cite[\S2.3]{Kollar-Mori-1988}). Then $S$ has only singularities of type $T$.
\end{lemma}
\begin{proof}
Regard $X$ as the total space of a deformation of $S$. By our assumptions
divisors~\mbox{$K_X+S$} and~$S$ are $\Q$-Cartier. Hence
$X$ is $\Q$-Gorenstein.
By the inversion of adjunction (see \cite[Theorem~5.20]{Kollar-Mori-1988})
the surface $S$ has only Kawamata log terminal (i.\,e. quotient) singularities
(see \cite[Theorem~5.50]{Kollar-Mori-1988}).
Hence the singularities of $S$ are of type~$T$.
\end{proof}
\begin{lemma}\label{lemma:P1xP1-degeneration-1}
Let $S$ be a singular del Pezzo surface with $T$-singularities.
Assume that~$S$ has at least one non-Gorenstein point.
Then $\Aut(S)$ has an orbit of length at most~$2$ on~$S$.
\end{lemma}
\begin{proof}
Assume that $\Aut(S)$ has no orbits of length at most $2$
on $S$. By \cite[Proposition~2.6]{Hacking-Prokhorov-2010} one has
\begin{equation*}
\dim |-K_S|=K_S^2\ge 1.
\end{equation*}
Write $|-K_S|=F+|M|$, where $|M|$ is a linear system without fixed components
and $F$ is the fixed part of $|-K_S|$, so that
\begin{equation*}
\dim |M|=\dim |-K_S|=K_S^2.
\end{equation*}
By \cite[Theorem~4.2]{Prokhorov-degenerations-del-Pezzo} the log pair $(S, M+F)$ is
log canonical for a general member~\mbox{$M\in |M|$}.
In particular, $F$ is reduced.
Let $\Sing'(S)$ be the set
of non-Du Val points of~$S$.
By our assumptions $\Sing'(S)\neq \varnothing$.
Clearly,
any member of $|-K_S|$ contains~\mbox{$\Sing'(S)$}; otherwise~$-K_S$ is Cartier
at some point of $\Sing'(S)$, so that this point is Du Val on~$S$.
Since the log pair $(S,F+M)$ is
log canonical and $K_S+F+M$ is Cartier, by the classification of two-dimensional log canonical singularities
(\cite[Theorem~4.15]{Kollar-Mori-1988}) the divisor~\mbox{$F+M$} has two analytic branches
at each point of $\Sing(S) \cap \Supp(F+M)$.
In particular, we have
\begin{equation*}
\Sing'(S)\subset \Sing(F+M).
\end{equation*}
Thus by our assumption $\Sing(F+M)$ contains
at least three points.
Furthermore, since the support of $F+M$ is connected, by adjunction one has~\mbox{$p_a(F+M)=1$},
all irreducible components of $F+M$
are smooth rational curves and
the corresponding dual graph is a combinatorial cycle. Moreover,
the number of these irreducible components is at least $3$.
First assume that $F\neq 0$.
By Shokurov's connectedness theorem (see e.\,g.
\cite[Theorem~5.48]{Kollar-Mori-1988})
we know that $F$ is connected.
Hence $F$ is a connected chain of rational curves.
In this situation $\Aut(S)$ acts on $F$ so that there exists either
a fixed point~\mbox{$P\in \Sing(F)$} or an invariant irreducible component $F_1\subset F$
(cf. the proof of Lemma~\xref{lemma:dim-1}).
In the first case we have a contradiction with our assumption
and in the second case $\Aut(S)$ permutes two points of intersection
of $F_1$ with~\mbox{$\Supp(F-F_1)$}, again a contradiction.
Thus $F=0$ and so $\Sing'(S)\subset \Bs|M|$ and $\Sing'(S)\subset \Sing(M)$.
Since $\Sing'(S)$ contains at least three points and $p_a(M)=1$, the divisor~$M$ is reducible.
By Bertini's theorem the linear system $|M|$ is composed of a pencil, which means
that there is a pencil~$|L|$ such that~\mbox{$|M|=n|L|$} for some $n\ge 2$, and $\Sing'(S)\subset \Bs|L|$.
Since the log pair~\mbox{$(S,M)$} is log canonical, there are exactly two irreducible
components of $M$ passing through
any point $P\in \Sing'(S)$, see \cite[Theorem~4.15]{Kollar-Mori-1988}.
Since $\Sing'(S)$ contains at least three points,
the dual graph of $M$ cannot be a combinatorial cycle,
a contradiction.
\end{proof}
\begin{lemma}\label{lemma:DP-smooth-fiber}
Let $X$ be a threefold, and $G\subset\Aut(X)$
be a finite subgroup.
Suppose that there is a $G$-invariant smooth del Pezzo surface
$S$ contained in the smooth locus of $X$.
Then~\mbox{$\bar{J}(G)\le 288$}.
\end{lemma}
\begin{proof}
There is an exact sequence of groups
\begin{equation*}
1\to K\longrightarrow G\stackrel{\beta}\longrightarrow H\to 1,
\end{equation*}
where $K$ acts on $S$ trivially, and
$H\subset\Aut(S)$.
By Lemma~\xref{lemma:Phi-a-DP} there is a point
$Q\in S$ fixed by an abelian subgroup $H_Q\subset H$ of
index $[H:H_Q]\le 288$.
Put $G_Q=\beta^{-1}(H_Q)$. Then $G_Q\subset G$ is a subgroup
that fixes the point $Q$,
such that the index
\begin{equation*}
[G:G_Q]=[H:H_Q]\le 288.
\end{equation*}
By Lemma~\xref{lemma:Aut-P} the group $G_Q$ acts
faithfully on the Zariski tangent space
$T_Q(X)\cong\Bbbk^3$. The two-dimensional Zariski tangent space
$T_Q(S)\subset T_Q(X)$ is $G_Q$-invariant, and thus
$G_Q$ is contained in a subgroup
\begin{equation*}
\Bbbk^*\times\GL\big(T_Q(S)\big)\cong
\Bbbk^*\times\GL_2(\Bbbk)
\subset\GL_3(\Bbbk)\cong\GL\big(T_Q(X)\big).
\end{equation*}
Hence $G_Q\subset A\times H_Q$, where $A\subset\Bbbk^*$ is some
cyclic group.
Therefore, the group $G_Q$ is abelian,
so that one has
\begin{equation*}
\bar{J}(G)\le [G:G_Q]\le 288.\qedhere
\end{equation*}
\end{proof}
\begin{remark}\label{remark:invariant-set}
Let $G\subset\Aut(X)$ be a finite subgroup,
and $\Sigma\subset X$ be a non-empty finite $G$-invariant subset.
Then a stabilizer $G_P\subset G$ of a point $P\in\Sigma$ has
index $[G:G_P]\le |\Sigma|$, so that by Remark~\xref{remark:Pyber} one has
\begin{equation*}
\bar{J}(G)\le |\Sigma|\cdot\bar{J}(G_P)\le |\Sigma|\cdot\bar{J}
\big(\Aut_P(X)\big).
\end{equation*}
\end{remark}
Now we are ready to bound weak Jordan constants for
rationally connected three-dimensional $G\Q$-del Pezzo fibrations.
\begin{lemma}\label{lemma:dP-fibration}
Let $G$ be a finite group, and $\phi\colon X\to B\cong\P^1$ be a
three-dimensional $G\Q$-del Pezzo fibration.
Then $\bar{J}(G)\le 10368$.
\end{lemma}
\begin{proof}
There is an exact sequence of groups
\begin{equation*}
1\to G_{\phi}\longrightarrow G\stackrel{\alpha}\longrightarrow G_{B}\to 1,
\end{equation*}
where $G_{\phi}$ acts by fiberwise automorphisms with
respect to $\phi$, and
\begin{equation*}
G_{B}\subset\Aut(B)\cong\PGL_2(\Bbbk).
\end{equation*}
By Lemma~\xref{lemma:dim-1} there is a subgroup $G_{B}'\subset G_{B}$
of index $[G_{B}:G_{B}']\le 12$ such that~$G_{B}'$
acts on $B$ with a fixed point.
Let $P\in B$ be one
of the fixed points of $G_{B}'$, let $F= \phi^*(P)$
be the scheme fiber over~$P$, and let $S=F_{\red}$.
Put $G'=\alpha^{-1}(G_{B}')$.
Then $G'$ is a subgroup of $G$ of index
\begin{equation*}
[G:G']=[G_{B}:G_{B}']\le 12,
\end{equation*}
and the fiber $S$ is $G'$-invariant. In particular, one has
\begin{equation*}
\bar{J}(G)\le [G:G']\cdot\bar{J}(G').
\end{equation*}
Suppose that $F$ is a multiple fiber of $\phi$, i.\,e. $S\neq F$.
Then by~\cite{MoriProkhorov-multfibers}
there is a $G$-invariant set $\Sigma\subset S$ of singular
points of $X$ such that either $|\Sigma|\le 3$, or $|\Sigma|=4$ and
$\Sigma$ consists of cyclic quotient singularities.
In the former case Remark~\xref{remark:invariant-set}
and Corollary~\xref{corollary:dim-3-terminal}(i) imply that
\begin{equation*}
\bar{J}(G)\le 12\cdot 3\cdot 288=10368.
\end{equation*}
In the latter case Remark~\xref{remark:invariant-set}
and Corollary~\xref{corollary:dim-3-terminal}(ii) imply that
\begin{equation*}
\bar{J}(G)\le 12\cdot 4\cdot 72=3456.
\end{equation*}
Therefore, we may assume that $F$ is not a multiple fiber of $\phi$.
In particular, $S=F$ is a Cartier divisor on $X$.
Suppose that the log pair $(X, S)$ is not purely log terminal
(see \cite[\S2.3]{Kollar-Mori-1988}).
Let $c$ be the log canonical threshold
of the log pair $(X,S)$
(cf. the proof of~\cite[Lemma~3.4]{ProkhorovShramov-RC}).
Let~\mbox{$Z\subset S$} be a minimal center of log
canonical singularities of the log pair~\mbox{$(X, cS)$},
see \cite[Proposition~1.5]{Kawamata1997}.
Since $(X,S)$ is not purely log terminal, we conclude
that $c<1$, so that $\dim(Z)\le 1$.
It follows from~\cite[Lemma~2.5]{ProkhorovShramov-RC}
that $Z$ is $G'$-invariant. If~$Z$ is a point, then
\begin{equation*}
\bar{J}(G)\le [G:G']\cdot\bar{J}(G')\le
12\cdot 288=3456
\end{equation*}
by Corollary~\xref{corollary:dim-3-terminal}(i).
Thus we assume that $Z$ is a curve.
Using~\cite[Lemma~2.5]{ProkhorovShramov-RC} once again,
we see that $Z$ is smooth and rational.
By Lemma~\xref{lemma:dim-1} there is a subgroup
$G''\subset G'$ of index $[G':G'']\le 12$ such that
$G''$ has a fixed point on $Z$.
Hence
\begin{equation*}
\bar{J}(G)\le [G:G'']\cdot\bar{J}(G'')\le
144\cdot 72=10368
\end{equation*}
by Corollary~\xref{corollary:dim-3-terminal}(iii).
Therefore, we may assume that the log pair
$(X,S)$ is purely log terminal.
Then by~\mbox{\cite[Theorem 5.50]{Kollar-Mori-1988}} the surface $S$
is a del Pezzo surface with only Kawamata log terminal
singularities. Moreover, the singularities of $S$ are of type $T$
(see Lemma \xref{lemma-T-singularities}).
If~$K_S$ is not Cartier, Lemma~\xref{lemma:P1xP1-degeneration-1}
implies
that there is a $G'$-orbit of length at most~$2$
contained in~$S$. In this case we have
\begin{equation*}
\bar{J}(G)\le [G:G']\cdot\bar{J}(G')\le
12\cdot 2\cdot 288=6912
\end{equation*}
by Remark~\xref{remark:invariant-set}
and Corollary~\xref{corollary:dim-3-terminal}(i).
Therefore, we may assume that $K_S$ is Cartier and so $S$ has at worst Du Val
singularities. Denote their number by $m(S)$.
Then by Noether formula applied to the minimal resolution
we have
\begin{equation*}
m(S)\le 9-K_S^2\le 8.
\end{equation*}
Thus by Remark~\xref{remark:invariant-set} and Corollary~\xref{corollary:dim-3-terminal}(i)
we have
\begin{equation*}
\bar{J}(G)\le [G:G']\cdot\bar{J}(G')\le
2\cdot 9\cdot 288=5184.
\end{equation*}
Therefore, we are left with the case when $S$ is smooth.
Now Lemma~\xref{lemma:DP-smooth-fiber}
implies that
\begin{equation*}
\bar{J}(G)\le [G:G']\cdot\bar{J}(G')\le
12\cdot 288=3456,
\end{equation*}
which completes the proof.
\end{proof}
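For convenience, here is a recap of the case-by-case bounds obtained in the proof (no new argument; the numbers are exactly those displayed above, and the maximum is attained in the first and fourth cases):

```latex
% multiple fiber, |Sigma| <= 3:            12 * 3 * 288 = 10368
% multiple fiber, |Sigma| = 4 (cyclic):    12 * 4 * 72  =  3456
% non-plt, point center:                   12 * 288     =  3456
% non-plt, curve center:                   144 * 72     = 10368
% plt, K_S not Cartier:                    12 * 2 * 288 =  6912
% plt, Du Val singularities:               2 * 9 * 288  =  5184
% plt, S smooth:                           12 * 288     =  3456
\begin{equation*}
\bar{J}(G)\le\max\{10368,\,3456,\,3456,\,10368,\,6912,\,5184,\,3456\}=10368.
\end{equation*}
```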
\section{Gorenstein Fano threefolds}
\label{section:Fano}
Let $X$ be a Fano threefold
with at worst terminal Gorenstein singularities.
In this case, the number
\begin{equation*}
\g(X)=\frac{1}{2}(-K_X)^3+1
\end{equation*}
is called the \textit{genus} of $X$.
By the Riemann--Roch theorem and Kawamata--Viehweg vanishing one has
\begin{equation*}
\dim |-K_X|=\g(X)+1
\end{equation*}
(see e.\,g. \cite[2.1.14]{Iskovskikh-Prokhorov-1999}).
In particular, $\g(X)$ is an integer, and $\g(X)\ge 2$.
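The displayed equality follows from a standard computation (see the reference above); we sketch it, using that $X$ has terminal Gorenstein singularities, so that $\chi(\O_X)=1$, $(-K_X)\cdot c_2(X)=24$, and Kawamata--Viehweg vanishing applies:

```latex
% h^i(X, -K_X) = 0 for i > 0 by Kawamata--Viehweg vanishing, and
% Riemann--Roch for the Cartier divisor -K_X gives
\begin{equation*}
h^0\big(X,\O_X(-K_X)\big)=\chi\big(X,\O_X(-K_X)\big)=\frac{1}{2}(-K_X)^3+3,
\end{equation*}
% so that
\begin{equation*}
\dim|-K_X|=\frac{1}{2}(-K_X)^3+2=\big(\g(X)-1\big)+2=\g(X)+1.
\end{equation*}
```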
The maximal number $\iota=\iota(X)$
such that $-K_X$ is divisible by $\iota$ in
$\Pic(X)$ is called the \textit{Fano index}, or sometimes just \emph{index}, of~$X$.
Recall that~\mbox{$\Pic(X)$} is a finitely generated torsion free abelian group,
see e.g.~\mbox{\cite[Proposition 2.1.2]{Iskovskikh-Prokhorov-1999}}.
The rank $\rho(X)$ of the free abelian group
$\Pic(X)$ is called the \emph{Picard rank} of~$X$.
Let $H$ be a divisor class such that
$-K_X\sim\iota(X) H$.
The class $H$ is unique since $\Pic(X)$ is torsion free.
Define the \textit{degree} of $X$ as~\mbox{$\dd(X)=H^3$}.
The goal of this section is to bound weak Jordan constants for
automorphism groups of singular terminal Gorenstein Fano threefolds.
\subsection{Low degree}
\label{subsection:low-degree}
We start with the case of small anticanonical degree.
We will use notation and results of~\S\xref{subsection:WCI}.
\begin{proposition}[{cf. \cite[Lemma~4.4.1]{Kuznetsov-Prokhorov-Shramov}}]
\label{proposition:double-cover}
Let $X$ be a Fano threefold with terminal
Gorenstein singularities such that $\rho(X)=1$.
Suppose that $H$ is not very ample,
i.\,e. one of the following
possibilities holds
\textup(see \cite[Theorem~0.6]{Shin1989}, \cite[Corollary~0.8]{Shin1989},
\cite[Theorem 1.1]{Jahnke-Radloff-2006},
\cite[Theorem 1.4]{Przhiyalkovskij-Cheltsov-Shramov-2005en}\textup):
\begin{enumerate}
\renewcommand\labelenumi{(\roman{enumi})}
\renewcommand\theenumi{(\roman{enumi})}
\item\label{enumerate-(ii)}
$\iota(X)=2$ and $\dd(X)=1$;
\item\label{enumerate-(iii)}
$\iota(X)=2$ and $\dd(X)=2$;
\item\label{enumerate-(i)}
$\iota(X)=1$ and $\g(X)=2$;
\item\label{enumerate-(iv)}
$\iota(X)=1$, $\g(X)=3$, and $X$ is a double cover of a three-dimensional quadric.
\end{enumerate}
Suppose that $G\subset\Aut(X)$ is a finite group.
Then for some positive integer $r$ there is a central extension
\begin{equation*}
1\to \mumu_r\to\tilde{G}\to G\to 1
\end{equation*}
such that one has an embedding
$\tilde{G}\subset\GL_3(\Bbbk)\times\Bbbk^*$ in case~\xref{enumerate-(ii)},
an embedding $\tilde{G}\subset\GL_4(\Bbbk)$
in cases~\xref{enumerate-(iii)} and~\xref{enumerate-(i)}, and an embedding~\mbox{$\tilde{G}\subset\GL_5(\Bbbk)$}
in case~\xref{enumerate-(iv)}.
\end{proposition}
\begin{proof}
According to \cite[Corollary 0.8]{Shin1989} and
\cite[Theorem~1.5]{Przhiyalkovskij-Cheltsov-Shramov-2005en},
in cases \xref{enumerate-(ii)}, \xref{enumerate-(iii)} and~\xref{enumerate-(i)}
our Fano variety $X$ is naturally embedded as a weighted hypersurface in
the weighted projective space
$\P=\P(a_0,\ldots,a_4)$,
where
\begin{equation*}
(a_0,\ldots,a_4)=(1^3,2,3), (1^4,2), (1^4,3),
\end{equation*}
respectively.
In case \xref{enumerate-(iv)} our $X$ is naturally embedded as a weighted complete intersection of multidegree $(2,4)$
in $\P=\P(1^5,2)$.
Let $\O_X(1)$ be the restriction of the (non-invertible) divisorial sheaf
$\O_{\P}(1)$ to $X$ (see~\mbox{\cite[1.4.1]{Dolgachev-1982}}).
Since $X$ is Gorenstein, in all cases it is contained in the smooth
locus of~$\P$, and thus $\O_X(1)$ is an invertible divisorial sheaf on~$X$.
Moreover, under the above embeddings we have
\begin{equation*}
\O_X(1)=\O_X(-K_X)
\end{equation*}
in cases \xref{enumerate-(i)} and \xref{enumerate-(iv)}, while
\begin{equation*}
\O_X(1)=\O_X(-\textstyle{\frac{1}{2}}K_X)
\end{equation*}
in cases \xref{enumerate-(ii)} and \xref{enumerate-(iii)}.
Since the group $\Pic(X)$ has no torsion, in all cases
the class of $\O_X(1)$ in $\Pic(X)$
is invariant with respect to the whole automorphism group~$\Aut(X)$.
Also, the line bundle $\O_X(1)$ is ample, so that the algebra $R(X,\O_X(1))$
is finitely generated.
Therefore, by Lemma~\xref{lemma:action-on-algebra-and-WPS} for any finite subgroup $\Gamma\subset\Aut(X)$
the action of $\Gamma$ on $X$ is induced by its action on
\begin{equation*}
\P\cong \operatorname{Proj} R\big(X,\O_X(1)\big).
\end{equation*}
Thus the assertion follows from Lemma~\xref{lemma:WPS}.
\end{proof}
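The identifications of $\O_X(1)$ made in the proof can be checked by adjunction; the degrees of the weighted hypersurfaces below ($6$, $4$, $6$ in cases~(ii), (iii), (i), respectively) can be read off from the references cited in the proof:

```latex
% For a hypersurface X_d in P(a_0,...,a_4) contained in the smooth locus,
% adjunction gives O_X(-K_X) = O_X(a_0 + ... + a_4 - d). Thus:
\begin{align*}
\text{(ii)}:&\quad X_6\subset\P(1,1,1,2,3), & {\textstyle\sum} a_i-d&=8-6=2,\\
\text{(iii)}:&\quad X_4\subset\P(1,1,1,1,2), & {\textstyle\sum} a_i-d&=6-4=2,\\
\text{(i)}:&\quad X_6\subset\P(1,1,1,1,3), & {\textstyle\sum} a_i-d&=7-6=1,\\
\text{(iv)}:&\quad X_{2,4}\subset\P(1,1,1,1,1,2), & {\textstyle\sum} a_i-(2+4)&=7-6=1,
\end{align*}
% in agreement with O_X(1) = O_X(-(1/2)K_X) in cases (ii), (iii)
% and O_X(1) = O_X(-K_X) in cases (i), (iv).
```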
\begin{remark}
Assume the setup of Proposition~\xref{proposition:double-cover}.
Then using the notation of the proof of Lemma~\xref{lemma:action-on-algebra-and-WPS} one can argue
that a central extension of the group $G$ acts on the vector space
\begin{equation*}
V=\bigoplus\limits_{m=1}^N V_m,
\end{equation*}
which immediately gives its embedding into $\GL_{k_1+\ldots+k_N}(\Bbbk)$.
This would allow us to avoid using Lemma~\xref{lemma:WPS}, but would give a slightly
weaker result.
\end{remark}
Using a more explicit geometric approach, one can strengthen the assertion of
Proposition~\xref{proposition:double-cover}\xref{enumerate-(ii)}.
\begin{corollary}
In the assumptions of Proposition~\xref{proposition:double-cover}\xref{enumerate-(ii)}
one has~\mbox{$G\subset\GL_3(\Bbbk)$}.
\end{corollary}
\begin{proof}
The base locus of the linear system $|H|$ is a single point $P$ which is contained in
the smooth part of $X$ (see e.g. \cite[Theorem 0.6]{Shin1989}). Clearly,
the point $P$ is $\Aut(X)$-invariant.
Therefore, Lemma~\xref{lemma:Aut-P} implies that
$G\subset\GL_3(\Bbbk)$.
\end{proof}
\begin{lemma}\label{lemma:double-cover}
Let $X$ be a Fano threefold with Gorenstein terminal singularities.
Suppose that $\rho(X)=1$, and one of the following
possibilities holds:
\begin{enumerate}
\renewcommand\labelenumi{(\roman{enumi})}
\renewcommand\theenumi{(\roman{enumi})}
\item\label{lemma:double-cover-enumerate-(ii)}
$\iota(X)=2$ and $\dd(X)=1$;
\item\label{lemma:double-cover-enumerate-(iii)}
$\iota(X)=2$ and $\dd(X)=2$;
\item\label{lemma:double-cover-enumerate-(i)}
$\iota(X)=1$ and $\g(X)=2$;
\item\label{lemma:double-cover-enumerate-(iv)}
$\iota(X)=1$, $\g(X)=3$, and $X$ is a double cover of a three-dimensional quadric.
\end{enumerate}
Then the group $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$.
\end{lemma}
\begin{proof}
Apply Proposition~\xref{proposition:double-cover} together with Lemma~\xref{lemma:weak-GL5}.
\end{proof}
\begin{lemma}\label{lemma:hypersurface}
Let $X\subset\P^4$ be a
hypersurface of degree at least~$2$.
Then the group $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$.
\end{lemma}
\begin{proof}
There is an embedding
$\Aut(X)\subset\PGL_5(\Bbbk)$,
see e.g. \cite[Corollary~3.1.4]{Kuznetsov-Prokhorov-Shramov}. Thus the assertion follows
from Lemma~\xref{lemma:weak-GL5}.
\end{proof}
\subsection{Complete intersection of a quadric and a cubic}
\label{subsection:X23}
Now we will describe some properties of finite subgroups of automorphisms
of a complete intersection of a quadric and a cubic in~$\P^5$.
\begin{lemma}\label{lemma:X23}
Let $X\subset\P^5$ be a
Fano threefold with terminal Gorenstein
singularities
such that $\rho(X)=1$, $\iota(X)=1$, and $\g(X)=4$,
i.\,e. $X$ is a complete intersection of a quadric and a cubic
in $\P^5$ \textup(see \cite[Proposition~IV.1.4]{Iskovskikh-1980-Anticanonical},
\cite[Theorem 1.6 or Remark~4.2]{Przhiyalkovskij-Cheltsov-Shramov-2005en}\textup).
Let~\mbox{$Q\subset\P^5$} be the \textup(unique\textup) quadric passing through $X$.
Then one of the following possibilities occurs:
\begin{enumerate}
\item
the quadric $Q$ is smooth; in this case
there is a subgroup~\mbox{$\Aut'(X)\subset\Aut(X)$}
of index at most $2$ such that~\mbox{$\Aut'(X)\subset\PGL_4(\Bbbk)$};
\item
the quadric $Q$ is a cone with an isolated singularity; in this case
for any finite subgroup
$G\subset\Aut(X)$ there is an embedding
\begin{equation*}
G\subset\SO_5(\Bbbk)\times\Bbbk^*\subset\GL_5(\Bbbk);
\end{equation*}
\item the quadric $Q$ is a cone whose singular locus is a line; in this case
for any finite subgroup
$G\subset\Aut(X)$ there
is a subgroup $F\subset G$ of index~\mbox{$[G:F]\le 3$}
such that there is an embedding
\begin{equation*}
F\subset\Bbbk^*\times\left(\SO_4(\Bbbk)\times\Bbbk^*/\mumu_2\right)\subset\Bbbk^*\times\GL_4(\Bbbk).
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
The embedding $X\hookrightarrow\P^5$ is given by the anticanonical linear system
on~$X$. Hence there is an action of the group $\Aut(X)$ on $\P^5$ that
agrees with the action of $\Aut(X)$ on~$X$, see e.g.~\cite[Lemma~3.1.2]{Kuznetsov-Prokhorov-Shramov}.
The quadric $Q$ is $\Aut(X)$-invariant, and
the action of~\mbox{$\Aut(X)$} on $Q$ is faithful.
Since the singularities of $X$ are terminal and thus isolated, we see that
the singular locus of $Q$ is at most one-dimensional.
Suppose that $Q$ is non-singular. Then $Q$ is isomorphic to
the Grassmannian $\Gr(2,4)$, so that
\begin{equation*}
\Aut(Q)\cong\PGL_4(\Bbbk)\rtimes\mumu_2,
\end{equation*}
which gives
case~(i).
Therefore, we may assume that $Q$ is singular.
Then $\Sing(Q)$ is a linear subspace of~$\P^5$
of dimension $\delta\le 1$.
Suppose that $\delta=0$, so that $\Sing(Q)$ is a single point $P$.
Then the point $P$ is $\Aut(Q)$-invariant,
and thus also $\Aut(X)$-invariant. Let $G\subset\Aut(X)$
be a finite subgroup.
By Lemma~\xref{lemma:Aut-P} there is an embedding
\begin{equation*}
G\subset\GL\big(T_P(Q)\big)=\GL\big(T_P(\P^5)\big)\cong\GL_5(\Bbbk).
\end{equation*}
Moreover, the group
$G$ acts by a character on a quadratic polynomial on $T_P(\P^5)$
that corresponds to the quadric $Q$. Hence $G$ is contained in
the subgroup
\begin{equation*}
\pi^{-1}\left(\PSO_5(\Bbbk)\right)\subset \GL_5(\Bbbk),
\end{equation*}
where $\pi\colon\GL_5(\Bbbk)\to\PGL_5(\Bbbk)$ is the natural projection.
This gives case~(ii).
Finally, suppose that $\delta=1$.
Let $L\cong\P^1$ be the vertex of $Q$. Then $L$ is $\Aut(Q)$-invariant,
and thus also $\Aut(X)$-invariant. Let $G\subset\Aut(X)$ be a finite
subgroup.
Note that $X\cap L$ is non-empty and consists of at most three points.
Hence there
is a subgroup~\mbox{$F\subset G$} of index $[G:F]\le 3$
such that $F$ has a fixed point on $L$. Denote this point
by $P$. By Lemma~\xref{lemma:Aut-P} there is an embedding
\begin{equation*}
F\hookrightarrow \GL\big(T_P(\P^5)\big)\cong\GL_5(\Bbbk).
\end{equation*}
Moreover, the representation of $F$ in
$T_P(\P^5)$ splits as a sum of a one-dimensional and a four-dimensional
representation, since $F$ preserves the tangent
direction $T_P(L)$ to $L$. Put
\begin{equation*}
V=T_P(\P^5)/T_P(L).
\end{equation*}
Then there is an embedding
$F\hookrightarrow F_1\times F_2$, where $F_1$ is a finite cyclic
group, and $F_2$ is a finite subgroup of $\GL(V)\cong\GL_4(\Bbbk)$.
The last thing we need to observe is that
$F_2$ preserves the quadric in $\P(V)$ corresponding to the intersection of the tangent cone to~$Q$
at $P$ with the image of the $F$-equivariant embedding $V\hookrightarrow T_P(\P^5)$ provided by the above splitting.
Therefore, $F_2$ is contained in
the subgroup
\begin{equation*}
\pi^{-1}\left(\PSO_4(\Bbbk)\right)\subset \GL_4(\Bbbk),
\end{equation*}
where $\pi\colon\GL_4(\Bbbk)\to\PGL_4(\Bbbk)$ is the natural projection.
Since
\begin{equation*}
\pi^{-1}\left(\PSO_4(\Bbbk)\right)\cong \SO_4(\Bbbk)\times\Bbbk^*/\mumu_2,
\end{equation*}
this gives case~(iii) and completes the proof of the lemma.
\end{proof}
\begin{corollary}\label{corollary:X23}
Let $X$ be a Fano threefold with Gorenstein terminal singularities.
Suppose that $\rho(X)=1$, $\iota(X)=1$, and $\g(X)=4$.
Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 1920.
\end{equation*}
\end{corollary}
\begin{proof}
By Lemma~\xref{lemma:X23} one of the following possibilities holds:
\begin{enumerate}
\item
there is a subgroup $\Aut'(X)\subset\Aut(X)$
of index at most $2$ such that~\mbox{$\Aut'(X)\subset\PGL_4(\Bbbk)$};
\item
for any finite subgroup
$G\subset\Aut(X)$ there is an embedding $G\subset\GL_5(\Bbbk)$;
\item for any finite subgroup
$G\subset\Aut(X)$ there
is a subgroup $F\subset G$ of index~\mbox{$[G:F]\le 3$}
such that there is an embedding
\begin{equation*}
F\subset \Bbbk^*\times\left(\SO_4(\Bbbk)\times\Bbbk^*/\mumu_2\right).
\end{equation*}
\end{enumerate}
In particular, the group $\Aut(X)$ is Jordan.
In case~(i) one has
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 2\cdot\bar{J}\big(\Aut'(X)\big)\le
2\cdot\bar{J}\big(\PGL_4(\Bbbk)\big)=2\cdot 960=1920
\end{equation*}
by Lemma~\xref{lemma:weak-GL4}.
In case~(ii) one has
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le \bar{J}\big(\PGL_5(\Bbbk)\big)= 960
\end{equation*}
by Lemma~\xref{lemma:weak-GL5}.
In case~(iii) one has
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 3\cdot\bar{J}\left(\Bbbk^*\times\left(\SO_4(\Bbbk)\times\Bbbk^*/\mumu_2\right)\right)=
3\cdot \bar{J}\big(\SO_4(\Bbbk)\big)\le
3\cdot 288=864
\end{equation*}
by Lemma~\xref{lemma:GL4-quadric}.
\end{proof}
\subsection{General case}
The results of \S\xref{subsection:low-degree}
and~\S\xref{subsection:X23} imply the following
\begin{corollary}\label{corollary:Fano-3-fold-small-g}
Let $X$ be a Fano threefold with Gorenstein terminal singularities.
Suppose that $\rho(X)=1$, $\iota(X)=1$, and $\g(X)\le 4$.
Then $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 1920.
\end{equation*}
\end{corollary}
\begin{proof}
Recall that $\g(X)\ge 2$.
If $\g(X)=2$, then $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:double-cover}.
If $\g(X)=3$ and $-K_X$ is not very ample,
then $\Aut(X)$ is also Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:double-cover}.
If $\g(X)=3$ and $-K_X$ is very ample,
then~$X$ is a quartic hypersurface in $\P^4$ (because $\dim |-K_X|=4$ and $-K_X^3=4$),
so that~\mbox{$\Aut(X)$} is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:hypersurface}.
Finally, if $\g(X)=4$, then the group~\mbox{$\Aut(X)$}
is Jordan with $\bar{J}\big(\Aut(X)\big)\le 1920$ by Corollary~\xref{corollary:X23}.
\end{proof}
Now we are ready to study automorphism groups of arbitrary \emph{singular}
Gorenstein $G$-Fano threefolds.
\begin{lemma}\label{lemma:Gorenstein-G-Fano-3-fold}
Let $G$ be a finite group, and let $X$ be a singular Gorenstein
$G$-Fano threefold.
Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 9504.
\end{equation*}
\end{lemma}
\begin{proof}
Let $P_1,\ldots, P_N\in X$ be all singular points of $X$.
The group $\Aut(X)$ acts on the set $\{P_1,\dots,P_N\}$.
The subgroup $\Aut_{P_1}(X)\subset\Aut(X)$ stabilizing the point
$P_1$ has index
\begin{equation*}
[\Aut(X):\Aut_{P_1}(X)]\le N.
\end{equation*}
We have
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le
N\cdot\bar{J}\big(\Aut_{P_1}(X)\big).
\end{equation*}
According to~\cite{Namikawa-1997} there exists a \textit{smoothing} of $X$, that is,
a one-parameter deformation
\begin{equation*}
\mathfrak{X}\to B\ni 0
\end{equation*}
such that
a general fiber $\mathfrak{X}_b$ is smooth and the central fiber~$\mathfrak{X}_0$
is isomorphic to~$X$.
One has
\begin{equation}\label{eq:Namikawa}
N\le 21-\frac 12 \chi_{\operatorname{top}}(\mathfrak{X}_b) = 20-\rho(\mathfrak{X}_b) + h^{1,2}(\mathfrak{X}_b)
\end{equation}
by \cite[Theorem~13]{Namikawa-1997}. Moreover, there is an identification
$\Pic(\mathfrak{X}_b)\cong\Pic(X)$, see~\mbox{\cite[Theorem~1.4]{Jahnke2011}}.
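Note that the second equality in~\eqref{eq:Namikawa} is simply the standard Betti number
computation for a smooth Fano threefold: one has $b_0=b_6=1$, $b_1=b_5=0$,
$b_2=b_4=\rho(\mathfrak{X}_b)$, and $b_3=2h^{1,2}(\mathfrak{X}_b)$, so that
\begin{equation*}
\chi_{\operatorname{top}}(\mathfrak{X}_b)=2+2\rho(\mathfrak{X}_b)-2h^{1,2}(\mathfrak{X}_b),
\qquad
21-\frac12\,\chi_{\operatorname{top}}(\mathfrak{X}_b)=20-\rho(\mathfrak{X}_b)+h^{1,2}(\mathfrak{X}_b).
\end{equation*}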
Suppose that $\rho(X)\ge 2$.
Smooth Fano threefolds $V$ whose Picard group admits an action of a finite group $G$ such
that $\rho(V)^G=1$ and $\rho(V)>1$ are classified in \cite{Prokhorov-GFano-2}.
Applying this classification to $V=\mathfrak{X}_b$
we obtain $h^{1,2}(\mathfrak{X}_b)\le 9$.
Suppose that $\rho(X)=1$.
If $\iota(X)=2$ and $\dd(X)\le 2$, then the group $\Aut(X)$
is Jordan with $\bar{J}\big(\Aut(X)\big)\le 960$
by Lemma~\xref{lemma:double-cover}.
If $\iota(X)=1$ and $\g(X)\le 4$, then $\Aut(X)$
is Jordan with $\bar{J}\big(\Aut(X)\big)\le 1920$ by
Corollary~\xref{corollary:Fano-3-fold-small-g}.
In all other cases by the classification of
smooth Fano threefolds (see~\cite[\S12.2]{Iskovskikh-Prokhorov-1999})
we have $h^{1,2}(\mathfrak{X}_b)\le 14$.
Therefore, we are left with several possibilities with $h^{1,2}(\mathfrak{X}_b)\le 14$.
In this case~\eqref{eq:Namikawa} implies that
$N\le 33$ (and in some cases this bound can be significantly improved, see~\cite{Prokhorov-factorial-Fano-e}).
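Explicitly, since $\rho(\mathfrak{X}_b)\ge 1$, the bound~\eqref{eq:Namikawa} gives
\begin{equation*}
N\le 20-\rho(\mathfrak{X}_b)+h^{1,2}(\mathfrak{X}_b)\le 20-1+14=33.
\end{equation*}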
Now Corollary~\xref{corollary:dim-3-terminal}(i)
implies that $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 33\cdot 288=9504.\qedhere
\end{equation*}
\end{proof}
\section{Smooth Fano threefolds}
\label{section:smooth-Fano}
In this section we bound weak Jordan constants for automorphism
groups of smooth Fano threefolds.
\subsection{Complete intersections of quadrics}
It turns out that
we can obtain a reasonable bound for the weak Jordan constant
of the automorphism group of a smooth complete intersection of two quadrics
of arbitrary dimension.
Here we will use the results of~\S\xref{subsection:int-two-quadrics-finite}.
\begin{lemma}\label{lemma:int-2-quadrics}
Let $X\subset\P^n$, $n\ge 4$, be a smooth complete intersection of $2$ quadrics.
Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le (n+1)!
\end{equation*}
\end{lemma}
\begin{proof}
By Proposition~\xref{proposition:int-2-quadrics} there is an exact sequence
\[
1\longrightarrow \Gamma\longrightarrow \Aut(X)\longrightarrow G_{\mathcal{P}}\longrightarrow 1
\]
where $\Gamma\cong\mumu_2^n$
and $G_{\mathcal{P}}\subset\SS_{n+1}$.
Therefore, the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Gamma]\le |\SS_{n+1}|=(n+1)!\qedhere
\end{equation*}
\end{proof}
In dimension $3$ we can also bound weak Jordan constants for automorphism
groups of smooth complete intersections of three
quadrics.
\begin{lemma}\label{lemma:intersection-of-three-quadrics}
Let $X\subset\P^6$ be a smooth complete intersection of $3$ quadrics.
Then
the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368.
\end{equation*}
\end{lemma}
\begin{proof}
There is an exact sequence
\begin{equation*}
1\to \Gamma\to\Aut(X)\to\PGL_{3}(\Bbbk),
\end{equation*}
where $\Gamma\cong\mumu_2^m$ with $m\le 6$,
see~\eqref{eq:sequence-aut-quadrics} and
Corollary~\xref{corollary:gamma-finite}.
If $m\le 2$, then~\mbox{$\bar{J}\big(\Aut(X)\big)\le 2304$} by Lemma~\xref{lemma:4-PGL3}.
Therefore, we assume that $m\ge 3$.
Put
\begin{equation*}
V=H^0\big(\P^6, \mathcal{O}_{\P^6}(H)\big)^{\vee},
\end{equation*}
so that $\P^6$ is identified with $\P(V)$.
Since the anticanonical class of $X$ is linearly equivalent
to a hyperplane section of $X$ in $\P^6$, the group $\Aut(X)$ acts on~$V$,
see e.g.~\cite[Corollary~3.1.3]{Kuznetsov-Prokhorov-Shramov}. Thus we may assume that $\Aut(X)\subset\GL(V)$.
Let $-\Id\in\GL(V)$ be the scalar matrix $\diag(-1,\ldots,-1)$.
Let $\tilde{\Gamma}\subset\GL(V)$ be the subgroup generated by $\Gamma$ and $-\Id$,
and let $G\subset\GL(V)$ be the subgroup generated by $\Aut(X)$ and $-\Id$.
Since $\Aut(X)\subset\GL(V)$ acts faithfully on $\P(V)$ and thus does not contain non-trivial scalar matrices,
we see that
\begin{equation*}
\tilde{\Gamma}\cong\mumu_2\times\Gamma\cong \mumu_2^{m'}
\end{equation*}
with $m'=m+1\ge 4$.
We conclude that $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le\bar{J}(G)\le 10368
\end{equation*}
by Lemma~\xref{lemma:isotypical-7}.
\end{proof}
\begin{remark}\label{remark:int-3-quadrics-nonrational}
Let $X\subset\P^6$ be a smooth complete intersection of $3$ quadrics.
Then $X$ is non-rational, see \cite[Theorem~5.6]{Beauville1977}.
Therefore, automorphism groups of varieties of this type cannot
provide examples of subgroups in~$\Cr_3(\Bbbk)$ whose Jordan constants
attain the bounds given by Theorem~\xref{theorem:constant}, cf. Remark~\xref{remark-P1-P1-P1} below.
\end{remark}
\subsection{Fano threefolds of genus $6$}
\label{subsection:Gushel}
Recall that a smooth Fano threefold $X$ with~\mbox{$\rho(X)=1$}, $\iota(X)=1$, and $\g(X)=6$
may be either an intersection of the Grassmannian~\mbox{$\Gr(2,5)\subset \P^9$}
with a quadric and two hyperplanes,
or a double cover of a smooth Fano threefold
\begin{equation*}
Y=\Gr(2,5)\cap \P^6\subset \P^9
\end{equation*}
with the branch divisor $B\in |-K_Y|$ (see \cite{Gushelcprime1982}).
We will refer to the former varieties as \emph{Fano
threefolds of genus $6$ of the first type}, and to the latter varieties as \emph{Fano
threefolds of genus $6$ of the second type}.
\begin{remark}
In~\cite{Debarre-Kuznetsov2015}
these were called ordinary and special varieties, respectively.
\end{remark}
\begin{lemma}[{cf.~\cite[Corollary~4.2]{DIM12}, \cite[Proposition~3.12]{Debarre-Kuznetsov2015}}]
\label{lemma:smooth-Fano-3-fold-g-6}
Let $X$ be a smooth Fano threefold with~\mbox{$\rho(X)=1$}, $\iota(X)=1$, and $\g(X)=6$.
If $X$ is of the first type,
then there is an embedding
\begin{equation*}
\Aut(X)\hookrightarrow\Aut\big(\Gr(2,5)\big) \cong\PGL_5(\Bbbk).
\end{equation*}
If $X$ is of the second type,
then there is a normal subgroup $\Gamma\subset\Aut(X)$ such that
$\Gamma\cong\mumu_2$ and there is an exact sequence
\begin{equation*}
1\to\Gamma\to\Aut(X)\to\PGL_2(\Bbbk).
\end{equation*}
\end{lemma}
\begin{proof}
By definition, we have a natural morphism $\gamma\colon X \to \Gr(2,5)$.
By~\cite[Theorem~2.9]{Debarre-Kuznetsov2015} the morphism $\gamma$ is functorial.
Note that $\gamma$ is completely determined by what is called GM data in~\cite{Debarre-Kuznetsov2015};
in particular, it is equivariant with respect to the action of the group $\Aut(X)$.
Consider the corresponding map
\begin{equation*}
\theta \colon \Aut(X) \to \Aut(\Gr(2,5)) \cong \PGL_5(\Bbbk).
\end{equation*}
Suppose that $X$ is a Fano threefold of genus $6$ of the first type.
Then functoriality of~$\gamma$ implies that $\theta$ is an embedding.
This proves the first assertion of the lemma.
Now suppose that $X$ is a Fano threefold of genus $6$ of the second type.
Then the morphism $\gamma$ is a double cover, and its image is a Fano threefold $Y$
with $\rho(Y)=1$, $\iota(Y)=2$, and~\mbox{$\dd(Y)=5$},
see~\cite[Proposition~2.20]{Debarre-Kuznetsov2015}.
Let $\Gamma\subset\Aut(X)$ be the subgroup generated by the Galois
involution of the double cover~\mbox{$\gamma\colon X\to Y$}.
Then $\Gamma\cong\mumu_2$ is a normal subgroup
of $\Aut(X)$, and $\Aut(X)/\Gamma$ embeds into~\mbox{$\Aut(Y)$}.
On the other hand, one has $\Aut(Y)\cong\PGL_2(\Bbbk)$,
see e.g.~\cite[Proposition~4.4]{Mukai-1988}
or~\cite[Proposition~7.1.10]{CheltsovShramov2016}.
This gives the second assertion of the lemma.
\end{proof}
\begin{corollary}\label{corollary:genus-6}
Let $X$ be a smooth Fano threefold with
$\rho(X)=1$, $\iota(X)=1$ and~\mbox{$\g(X)=6$}.
Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 960.
\end{equation*}
\end{corollary}
\begin{proof}
Suppose that $X$ is a Fano threefold of genus $6$ of the first type.
Then there is an embedding $\Aut(X)\subset\PGL_5(\Bbbk)$
by Lemma~\xref{lemma:smooth-Fano-3-fold-g-6}, so that $\Aut(X)$ is Jordan with~\mbox{$\bar{J}(\Aut(X))\le 960$}
by Lemma~\xref{lemma:weak-GL5}.
Now suppose that $X$ is a Fano threefold of genus $6$ of the second type.
Then there is an exact sequence
\begin{equation*}
1\to\Gamma\to\Aut(X)\to\PGL_2(\Bbbk)
\end{equation*}
by Lemma~\xref{lemma:smooth-Fano-3-fold-g-6}.
Therefore, $\Aut(X)$ is Jordan with~\mbox{$\bar{J}\big(\Aut(X)\big)\le 12$} by Lemma~\xref{lemma:2-PGL2}.
\end{proof}
\subsection{Large degree and index}
Now we consider the cases with large anticanonical degree
and large index.
\begin{lemma}\label{lemma:high-genus-Fanos-Jordan}
Let $X$ be a smooth Fano threefold with $\iota(X)=1$ and
$\g(X)\ge 7$. Then the group $\Aut(X)$ is Jordan with
\begin{enumerate}
\item $\bar{J}\big(\Aut(X)\big)\le 504$
if $\g(X)=7$;
\item $\bar{J}\big(\Aut(X)\big)\le 9922$
if $\g(X)=8$;
\item $\bar{J}\big(\Aut(X)\big)\le 2016$
if $\g(X)=9$;
\item $\bar{J}\big(\Aut(X)\big)\le 5760$
if $\g(X)=10$;
\item $\bar{J}\big(\Aut(X)\big)\le 40$
if $\g(X)=12$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assertion~(i) follows from~\cite[Corollary~4.3.5(i)]{Kuznetsov-Prokhorov-Shramov}
and Remark~\xref{remark:elliptic}.
Assertion~(ii) follows from~\cite[Corollary~4.3.5(ii)]{Kuznetsov-Prokhorov-Shramov}
and Lemma~\xref{lemma:general-type-45}.
Assertion~(iii) follows from~\cite[Corollary~4.3.5(iii)]{Kuznetsov-Prokhorov-Shramov}
and Lemma~\xref{lemma:ruled-surface}.
Assertion~(iv) follows from~\cite[Corollary~4.3.5(iv)]{Kuznetsov-Prokhorov-Shramov}
and Lemma~\xref{lemma:abelian-surface}. Finally,
assertion~(v) follows from~\cite[Corollary~4.3.5(v)]{Kuznetsov-Prokhorov-Shramov}
and Lemma~\xref{lemma:weak-GL3}.
\end{proof}
\begin{lemma}\label{lemma:rho-1-iota-large}
Let $X$ be a smooth Fano threefold.
Suppose that~\mbox{$\rho(X)=1$} and $\iota(X)>1$. Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 960.
\end{equation*}
\end{lemma}
\begin{proof}
It is known that $\iota(X)\le 4$. Moreover, $\iota(X)= 4$
if and only if $X\cong \P^3$, and~\mbox{$\iota(X)=3$}
if and only if $X$ is a quadric in $\P^4$
(see e.\,g. \cite[3.1.15]{Iskovskikh-Prokhorov-1999}).
In the former case one has~\mbox{$\Aut(X)\cong\PGL_4(\Bbbk)$}, so that
the group $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)=960$ by Lemma~\xref{lemma:weak-GL4}.
In the latter case the group $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:hypersurface}.
Thus we may assume that $\iota(X)=2$. Recall that $1\le \dd(X)\le 5$ (see e.\,g. \cite[\S12.2]{Iskovskikh-Prokhorov-1999}).
If $\dd(X)=5$, then $X$ is isomorphic
to a linear section
of the Grassmannian~\mbox{$\operatorname{Gr}(2,5)\subset\P^9$} by a
subspace~\mbox{$\P^6\subset\P^9$}, see \cite[\S12.2]{Iskovskikh-Prokhorov-1999}.
In this case one has
\begin{equation*}
\Aut(X)\cong \PGL_2(\Bbbk),
\end{equation*}
see~\mbox{\cite[Proposition~4.4]{Mukai-1988}} or~\cite[Proposition~7.1.10]{CheltsovShramov2016}.
So, the group $\Aut(X)$ is Jordan with~\mbox{$\bar{J}\big(\Aut(X)\big)=12$} by Corollary~\xref{corollary:GL2}.
If $\dd(X)=4$, then
$X$ is a complete intersection of two quadrics in $\P^5$
(see \cite[\S12.2]{Iskovskikh-Prokhorov-1999}).
Thus $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 720$ by Lemma~\xref{lemma:int-2-quadrics}.
If $\dd(X)=3$, then $X\cong X_3\subset \P^4$ is a cubic threefold
(see \cite[\S12.2]{Iskovskikh-Prokhorov-1999}).
Thus $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:hypersurface}.
Finally, if $\dd(X)=2$ or $\dd(X)=1$, then $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 960$ by Lemma~\xref{lemma:double-cover}.
\end{proof}
\subsection{Large Picard rank}
Finally, we deal with smooth $G$-Fano threefolds with Picard rank
greater than~$1$. We denote by $W_6$ a smooth divisor of bidegree $(1,1)$ in~\mbox{$\P^2\times \P^2$}.
Clearly, $W_6$ is a Fano threefold with $\iota(W_6)=2$ and $\rho(W_6)=2$.
\begin{lemma}\label{lemma:rho-large}
Let $G$ be a finite group, and let $X$ be a smooth $G$-Fano threefold.
Suppose that~\mbox{$\rho(X)>1$}. Then $\Aut(X)$ is Jordan with
$\bar{J}\big(\Aut(X)\big)\le 10368$.
\end{lemma}
\begin{proof}
By \cite{Prokhorov-GFano-2} we have the following possibilities.
\begin{enumerate}
\item $\rho(X)=2$, $\iota(X)=2$, and $X\cong W_6$;
\item $\rho(X)=3$, $\iota(X)=2$, and $X\cong\P^1\times\P^1\times\P^1$;
\item $\rho(X)=2$, $\iota(X)=1$, and $X\subset \P^2\times \P^2$ is a divisor of
bidegree $(2,2)$;
\item $\rho(X)=2$, $\iota(X)=1$, and $X$ is a double cover of $W_6$
whose branch divisor $S\subset W_6$ is a member of the linear system~\mbox{$|-K_{W_6}|$};
\item $\rho(X)=2$, $\iota(X)=1$, and $X$ is the blow up of $\P^3$ along a curve $C\subset \P^3$
of degree~$6$ and genus $3$;
\item $\rho(X)=2$, $\iota(X)=1$, and $X$ is the blow up of a smooth quadric $Q\subset \P^4$
along a rational twisted quartic curve
$C\subset Q$;
\item $\rho(X)=3$, $\iota(X)=1$, and $X$ is a double cover of $\P^1\times \P^1\times \P^1$
whose branch divisor~$S$ is a member of the linear system~\mbox{$|-K_{\P^1\times \P^1\times \P^1}|$};
\item $\rho(X)=3$, $\iota(X)=1$, and $X$ is the blow up of $W_6$ along a rational
curve $C\subset W_6$ of bidegree $(2,2)$;
\item $\rho(X)=4$, $\iota(X)=1$, and $X\subset \P^1\times \P^1\times \P^1\times \P^1$ is a divisor of
multi\-degree~\mbox{$(1,1,1,1)$}; in this case each of the four projections
$\pi_i\colon X\to \P^1\times \P^1\times \P^1$ is the blow up along
an elliptic curve $C$ which is an intersection of two
members of the linear system~\mbox{$|-\frac{1}{2}K_{\P^1\times \P^1\times \P^1}|$}.
\end{enumerate}
In case (i) one has
\begin{equation*}
\Aut(X)\cong\PGL_3(\Bbbk)\rtimes\mumu_2,
\end{equation*}
so that $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le |\mumu_2|\cdot\bar{J}\big(\PGL_3(\Bbbk)\big)=2\cdot 40=80
\end{equation*}
by Lemma~\xref{lemma:weak-GL3}.
In case (ii) one has
\begin{equation*}
\Aut(X)\cong\big(\PGL_2(\Bbbk)\times\PGL_2(\Bbbk)\times\PGL_2(\Bbbk)\big)
\rtimes\SS_3,
\end{equation*}
so that $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le |\SS_3|\cdot \bar{J}\big(\PGL_2(\Bbbk)\big)^3=6\cdot 12^3=10368
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In case (iii) one has $\rho(X)=2$, so that the projections
\begin{equation*}
p_i\colon X \hookrightarrow \P^2\times\P^2\to \P^2,\quad i=1,2,
\end{equation*}
are all possible Mori contractions from $X$.
Hence
the action of $\Aut(X)$ on $X$ lifts to the action on $\P^2\times\P^2$
and the embedding
\begin{equation*}
p_1\times p_2\colon X \hookrightarrow \P^2\times\P^2
\end{equation*}
is $\Aut(X)$-equivariant. Thus
\begin{equation*}
\Aut(X)\subset\Aut(\P^2\times\P^2)\cong\big(\PGL_3(\Bbbk)\times\PGL_3(\Bbbk)\big)\rtimes\mumu_2,
\end{equation*}
so that $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 2\cdot\bar J\big(\PGL_3(\Bbbk)\big)^2=2\cdot 40^2=3200
\end{equation*}
by Lemma~\xref{lemma:weak-GL3}.
In case (iv) one has $\rho(X)=2$, so that
two conic bundles
\begin{equation*}
\pi_i\colon X\to W_6\to\P^2,\quad i=1,2,
\end{equation*}
are
all possible Mori contractions from $X$. Thus there is a
subgroup~\mbox{$\Aut'(X)\subset\Aut(X)$} of index at most $2$ such that
the conic bundle $\pi_1\colon X\to\P^2$ is $\Aut'(X)$-equivariant.
Let~\mbox{$G\subset\Aut'(X)$} be a finite subgroup. Then one has
\begin{equation*}
\rho(X/\P^2)^{G}=\rho(X/\P^2)=1,
\end{equation*}
so that
$\pi_1\colon X\to\P^2$ is a $G$-equivariant conic bundle. Thus $\Aut(X)$ is Jordan
with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot \bar{J}\big(\Aut'(X)\big)\le 2\cdot 3456=
6912
\end{equation*}
by Lemma~\xref{lemma:conic-bundle}.
In case (v)
one has $\rho(X)=2$, so that the contraction~\mbox{$\pi\colon X\to\P^3$}
is one of the two possible Mori contractions from $X$. Hence
there is a subgroup $\Aut'(X)$ of index at most~$2$ such that
$\pi$ is $\Aut'(X)$-equivariant. In particular,
$\Aut'(X)$ acts on $\P^3$ faithfully, and since the curve $C\subset\P^3$
is not contained in any plane, $\Aut'(X)$ acts
faithfully on $C$ as well. Therefore,~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot\bar{J}\big(\Aut'(X)\big)\le
2\cdot\bar{J}\big(\Aut(C)\big)\le 2\cdot 168=336
\end{equation*}
by Remark~\xref{remark:elliptic}.
In case (vi)
one has $\rho(X)=2$, so that the contraction~\mbox{$\pi\colon X\to Q$}
is one of the two possible Mori contractions from $X$. Hence
there is a subgroup $\Aut'(X)$ of index at most~$2$ such that
$\pi$ is $\Aut'(X)$-equivariant. In particular,
$\Aut'(X)$ acts on $Q$ faithfully. Since all automorphisms of $Q$ are linear,
and the curve $C\subset Q\subset\P^4$
is not contained in any hyperplane, $\Aut'(X)$ acts
faithfully on $C$ as well.
Therefore,~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot\bar{J}\big(\Aut'(X)\big)\le
2\cdot\bar{J}\big(\PGL_2(\Bbbk)\big)=24
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In case (vii) one has $\rho(X)=3$, and the map $X\to \P^1\times\P^1\times\P^1\hookrightarrow \P^8$
is given by the anticanonical linear system.
Three projections $\P^1\times\P^1\times\P^1\to\P^1\times\P^1$ give us three conic bundle
structures
\begin{equation*}
\pi_i\colon X\to\P^1\times\P^1\times\P^1\to\P^1\times\P^1,\quad i=1,2,3,
\end{equation*}
on $X$, and these projections are permuted by the automorphism group
$\Aut(X)$, because the morphism $X\to\P^1\times\P^1\times\P^1$ is $\Aut(X)$-equivariant.
Thus there is a subgroup~\mbox{$\Aut'(X)\subset\Aut(X)$} of index at most $3$ such that
the conic bundle~\mbox{$\pi_1\colon X\to\P^1\times\P^1$} is $\Aut'(X)$-equivariant.
Let $G\subset\Aut'(X)$ be a finite subgroup. Then one has
\begin{equation*}
\rho(X/\P^1\times\P^1)^{G}=\rho(X/\P^1\times\P^1)=1,
\end{equation*}
so that
$\pi_1\colon X\to\P^1\times\P^1$ is a $G$-equivariant conic bundle. Thus $\Aut(X)$ is Jordan
with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot \bar{J}\big(\Aut'(X)\big)\le 3\cdot 3456=
10368
\end{equation*}
by Lemma~\xref{lemma:conic-bundle}.
In case (viii)
one has $\rho(X)=3$, and three divisorial contractions
\begin{equation*}
\pi_i\colon X\to W_6,\quad i=1,2,3,
\end{equation*}
are all possible birational
Mori contractions
from $X$ (see~\cite[Table~3, no.~13]{Mori1981-82}).
Thus there is a subgroup
$\Aut'(X)$ of index at most $3$ such that
$\pi_1$ is $\Aut'(X)$-equivariant. In particular,~\mbox{$\Aut'(X)$} acts on $W_6$ faithfully.
The morphism $\pi_1$ is a blow up
of a rational curve~\mbox{$C_1\subset W_6$} of bidegree $(2,2)$.
Since the images of $C_1$ under both projections
\begin{equation*}
C_1\hookrightarrow W_6\to \P^2
\end{equation*}
span~$\P^2$, we see that $\Aut'(X)$ acts on $C_1$ faithfully as well.
Therefore,~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot\bar{J}\big(\Aut'(X)\big)\le
3\cdot\bar{J}\big(\PGL_2(\Bbbk)\big)=36
\end{equation*}
by Corollary~\xref{corollary:GL2}.
In case (ix)
one has $\rho(X)=4$, and four projections
\begin{equation*}
\pi_i\colon X\hookrightarrow\P^1\times\P^1\times\P^1\times\P^1\to\P^1\times\P^1\times\P^1,
\quad i=1,2,3,4,
\end{equation*}
are all possible birational Mori contractions from $X$ (see~\cite[Table~4, no.~1]{Mori1981-82}).
Thus there is a subgroup
$\Aut'(X)\subset\Aut(X)$ of index at most $4$ such that
the divisorial contraction
\begin{equation*}
\pi_1\colon X\to\P^1\times\P^1\times\P^1
\end{equation*}
is $\Aut'(X)$-equivariant.
The morphism $\pi_1$ is a blow up
of an elliptic curve
\begin{equation*}
C_1\subset\P^1\times\P^1\times\P^1
\end{equation*}
of tridegree $(1,1,1)$.
Since all three projections
\begin{equation*}
C_1\hookrightarrow \P^1\times\P^1\times\P^1\to \P^1
\end{equation*}
are dominant, one can see that $\Aut'(X)$ acts on $C_1$ faithfully as well.
Therefore,~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le [\Aut(X):\Aut'(X)]\cdot\bar{J}\big(\Aut'(X)\big)\le
4\cdot\bar{J}\big(\Aut(C)\big)\le 24
\end{equation*}
by Remark~\xref{remark:elliptic}.
\end{proof}
\begin{remark}[{cf. Remark~\xref{remark:int-3-quadrics-nonrational}}]
Let $X$ be a smooth $G$-Fano threefold with~\mbox{$\rho(X)>1$}, and assume the notation of
the proof of Lemma~\xref{lemma:rho-large}. Then one has $\bar{J}\big(\Aut(X)\big)<10368$
with the exception of case~(ii), and with a possible exception
of case~(vii). However, if $X$ is as in case~(vii), then
it is non-rational, see~\cite{Alzati1992}.
Therefore, automorphism groups of varieties of this type cannot
provide examples of subgroups in~$\Cr_3(\Bbbk)$ whose Jordan constants
attain the bounds given by Theorem~\xref{theorem:constant}, cf. Remark~\xref{remark-P1-P1-P1} below.
\end{remark}
\begin{remark}
In general, studying Fano varieties with large automorphism
groups is an interesting problem on its own. In many cases such varieties exhibit
intriguing birational properties, see e.g.~\cite{CheltsovShramov11},
\cite{CheltsovShramov2016}, \cite{PrzyjalkowskiShramov2016}.
\end{remark}
\section{Proof of the main theorem}
\label{section:proof}
In this section we complete the proof of Theorem~\xref{theorem:constant}.
\subsection{Summary for Fano threefolds}
We summarize the results of \S\xref{section:Fano}
and~\S\xref{section:smooth-Fano} as follows.
\begin{proposition}\label{proposition:Gorenstein-Fano-Jordan}
Let $X$ be a Fano threefold with terminal Gorenstein singularities.
Suppose that $\rho(X)=1$.
Then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368.
\end{equation*}
\end{proposition}
\begin{proof}
If $X$ is singular, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 9504
\end{equation*}
by Lemma~\xref{lemma:Gorenstein-G-Fano-3-fold}.
Therefore, we assume that $X$ is smooth.
If $\iota(X)>1$, then the group~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 960
\end{equation*}
by Lemma~\xref{lemma:rho-1-iota-large}.
It remains to consider the case when $X$ is a smooth Fano threefold
with~\mbox{$\Pic(X)=\Z\cdot K_X$}.
According to the classification
(see e.\,g. \cite[\S12.2]{Iskovskikh-Prokhorov-1999}),
one has either $2\le \g(X) \le 10$, or $\g(X)=12$.
If $\g(X)\le 4$, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 1920
\end{equation*}
by Corollary~\xref{corollary:Fano-3-fold-small-g}.
If $\g(X)=5$, then the variety $X$ is an intersection
of three quadrics in~$\P^6$ (see \cite[\S12.2]{Iskovskikh-Prokhorov-1999}),
so that the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368
\end{equation*}
by Lemma~\xref{lemma:intersection-of-three-quadrics}.
If $\g(X)=6$, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 960
\end{equation*}
by Corollary~\xref{corollary:genus-6}.
Finally, if $\g(X)\ge 7$, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 9922
\end{equation*}
by Lemma~\xref{lemma:high-genus-Fanos-Jordan}.
\end{proof}
\begin{corollary}\label{corollary:G-Fano-Jordan}
Let $G$ be a finite group, and
$X$ be a $G\Q$-Fano threefold.
Then the group~\mbox{$\Aut(X)$} is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368.
\end{equation*}
\end{corollary}
\begin{proof}
If $X$ has a non-Gorenstein singular point, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 4608
\end{equation*}
by Lemma~\xref{lemma:non-Gorenstein-Fano-3-fold}. Therefore, we may assume that $X$ is a (Gorenstein) $G$-Fano threefold.
If $X$ is singular, then the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 9504
\end{equation*}
by Lemma~\xref{lemma:Gorenstein-G-Fano-3-fold}.
If $X$ is smooth and $\rho(X)>1$, then
the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368
\end{equation*}
by Lemma~\xref{lemma:rho-large}.
Therefore, we may assume that $\rho(X)=1$, so that
the group $\Aut(X)$ is Jordan with
\begin{equation*}
\bar{J}\big(\Aut(X)\big)\le 10368
\end{equation*}
by Proposition~\xref{proposition:Gorenstein-Fano-Jordan}.
\end{proof}
\begin{remark}[{cf. Remark~\xref{remark:pohuj}}]
In several cases
one can produce better bounds for weak Jordan constants
of certain Fano threefolds by applying a bit more effort.
We did not pursue this goal since the current estimates are already
enough to prove our main results.
\end{remark}
\subsection{Proof and concluding remarks}
Now we are ready to prove Theorem~\xref{theorem:constant}.
\begin{proof}[{Proof of Theorem~\xref{theorem:constant}}]
Let $X$ be a rationally connected threefold over an arbitrary field~$\Bbbk$
of characteristic~$0$,
and let $G\subset\Bir(X)$ be a finite group. It is enough to establish the upper
bounds for $\bar{J}(G)$ and~\mbox{$J(G)$}. Moreover, to prove the bounds we may assume
that~$\Bbbk$ is algebraically closed.
Regularizing the action of $G$ and taking an equivariant
desingularization (see e.\,g.~\mbox{\cite[Lemma-Definition~3.1]{Prokhorov-Shramov-J}}),
we may assume that $X$ is smooth and~\mbox{$G\subset\Aut(X)$}.
Applying the $G$-equivariant Minimal Model Program to $X$
(which is possible due to an equivariant version of~\cite[Corollary 1.3.3]{BCHM} and
\cite[Theorem~1]{MiyaokaMori}, since rational connectedness
implies uniruledness), we may assume that either there is a $G\Q$-conic bundle
structure~\mbox{$\phi\colon X\to S$} for some rational surface $S$, or there is a
$G\Q$-del Pezzo fibration~\mbox{$\phi\colon X\to\P^1$}, or~$X$ is a $G\Q$-Fano threefold.
Therefore, we have
\begin{equation*}
\bar{J}(G)\le 10368
\end{equation*}
by Lemmas~\xref{lemma:conic-bundle} and~\xref{lemma:dP-fibration}
and Corollary~\xref{corollary:G-Fano-Jordan}.
Applying Remark~\xref{remark:Pyber}, we obtain the inequality
\begin{equation*}
J(G)\le 10368^2=107495424.
\end{equation*}
If $\Bbbk$ is algebraically closed, then the group
$\Cr_3(\Bbbk)$ contains a group
\[
\Aut\big(\P^1\times\P^1\times\P^1\big)\supset
\big(\A_5\times\A_5\times\A_5\big)\rtimes\SS_3,
\]
and the largest abelian subgroup of the latter finite group
has order $125$.
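Indeed, the order of the latter group is
\begin{equation*}
\big|\big(\A_5\times\A_5\times\A_5\big)\rtimes\SS_3\big|=60^3\cdot 6=1296000,
\end{equation*}
so that the minimal index of an abelian subgroup in it equals $1296000/125=10368$.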
Therefore, one has
\begin{equation*}
\bar{J}\big(\Cr_3(\Bbbk)\big)=10368.
\end{equation*}
\end{proof}
\begin{remark}\label{remark-P1-P1-P1}
We do not know whether the bound for the
(usual) Jordan constant for the group $\Cr_3(\Bbbk)$ over an algebraically closed field~$\Bbbk$
of characteristic~$0$ provided by
Theorem~\xref{theorem:constant} is sharp or not. The Jordan constant
of the group~\mbox{$\Aut(\P^1\times\P^1\times\P^1)$} is smaller than that,
but there may be other automorphism groups of rational varieties
providing this value, cf. Lemma~\xref{lemma:dP-fibration}.
We also do not know the actual value of $J\big(\Cr_2(\Bbbk)\big)$, but we
believe that it can be found by a thorough (and maybe a little bit boring)
analysis of automorphism groups of del Pezzo surfaces and two-dimensional conic bundles,
since in dimension~$2$ much more precise classification results are available.
\end{remark}
\begin{remark}\label{remark:dim-4-fail}
In dimension $4$ and higher we cannot hope
(at least at our current level of
understanding of the problem) to obtain results
similar to Theorem~\xref{theorem:constant}.
The first reason is that in dimension~$3$
we have a partial classification of Fano varieties, which gives
much more detailed information than the boundedness proved in~\cite{KMMT-2000}
and~\cite{Birkar}; this makes it possible to (more or less)
give an alternative proof of Theorem~\xref{theorem:RC-Jordan}
by repeating the same steps as in~\cite{ProkhorovShramov-RC}
and using this information instead of boundedness.
Another (and actually more serious)
reason is that
we use a classification
of three-dimensional terminal singularities to obtain bounds for Jordan constants
of automorphism groups of terminal Fano varieties and Mori fiber spaces.
The result of~\cite[Theorem~1]{Kollar11} shows that a ``nice'' classification
of higher dimensional terminal singularities is impossible,
at least in the setup we used in Lemma~\xref{lemma:dim-3-terminal} and
Corollary~\xref{corollary:dim-3-terminal}, due to unboundedness
of the dimensions of Zariski tangent spaces of their index one
covers.
\end{remark}
\section*{Acknowledgement}
We appreciate the feedback from anonymous reviewers.
MZ is supported in part by the Office of the Director of National Intelligence (\abr{odni}), Intelligence Advanced Research Projects Activity (\abr{iarpa}), via the \abr{better} Program contract \#2019-19051600005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of \abr{odni}, \abr{iarpa}, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
\bibliographystyle{style/acl_natbib}
\section{Dataset Construction}
\label{sec:data}
To study reply suggestion in multiple languages, we build \name{}, a dataset with message-reply pairs based on Reddit comments.
The dataset is available at \url{https://github.com/zhangmozhi/mrs}.
We download Reddit comments between January 2010 and December 2019 from the Pushshift Reddit dataset~\citep{baumgartner2020pushshift}.\footnote{\url{https://files.pushshift.io/reddit/comments}}
We extract message-reply pairs from each thread by considering the parent comment as an input message and the response to the comment as the reference reply.
We remove comments starting with \emph{[removed]} or \emph{[deleted]}, which are deleted messages.
We also skip comments with a rating of less than one, since they are likely to contain inappropriate content.
After extracting examples, we identify their languages with fastText language detector~\citep{joulin2016fasttext}.
For each example, we run the model on the concatenation of the message and the reply.
We discard low-confidence examples where none of the languages has a score higher than 0.7.
For the remaining examples, we use the highest-scoring label as the language.
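The confidence thresholding above can be sketched in a few lines; the function and its inputs are illustrative (the (label, score) pairs stand in for the output of the fastText detector on the concatenated message and reply), not part of a released pipeline.

```python
def assign_language(predictions, threshold=0.7):
    """Pick a language for one example, or None to discard it.

    predictions: (language_label, score) pairs, standing in for the
    fastText language detector's output on one example.
    """
    label, score = max(predictions, key=lambda p: p[1])
    # discard low-confidence examples where no language exceeds the threshold
    return label if score > threshold else None
```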
We only use English data from 2018 because English data is abundant on Reddit.
Non-English examples are much scarcer, so we use data from the last ten years.
We select the top \langnum{} languages with at least 100K examples.
We create three splits for each language: 80\% examples for training, 10\% for validation, and 10\% for testing.
Table~\ref{tab:lang} shows some dataset statistics.
\name{} is heavily biased towards English.
We have more than 48 million English examples, but fewer than one million examples for half of the languages.
This gap reflects a practical challenge for reply suggestion---we do not have enough data for most languages in the world.
Nevertheless, we can use \name{} to test models in different multilingual settings, including cross-lingual transfer learning, where we build non-English reply suggestion models from English data (Section~\ref{ssec:setting}).
We also build response sets and filter out toxic examples. We describe these steps next.
\subsection{Response Set}
\label{ssec:response}
We build a response set of 30K to 50K most frequent replies for each language, which are used in the retrieval model.
We want the response set to cover generic responses, so we select replies that appear at least twenty times in the dataset.
This simple criterion works well for English, but the set is too small for other languages.
For non-English languages, we augment the response set by translating the English response set to other languages with Microsoft Translator.
The non-English response set is sometimes smaller than the English set, because different English responses may have the same translation.
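The frequency criterion for the English response set amounts to a single counting pass over all replies; a minimal sketch (the function name and the size cap are ours, not the production code):

```python
from collections import Counter

def build_response_set(replies, min_count=20, max_size=50000):
    """Keep replies occurring at least `min_count` times,
    most frequent first, capped at `max_size` entries."""
    counts = Counter(replies)
    return [r for r, c in counts.most_common(max_size) if c >= min_count]
```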
\subsection{Filtering Toxic Examples}
\label{ssec:filter}
Exchanges on Reddit are sometimes uncivil, inappropriate, or even abusive~\citep{massanari2017gamergate,mohan2017impact}.
We try to filter out toxic contents, as they are not desirable for reply suggestion systems.
We use two toxicity detection models.
First, we use an in-house multilingual model.
The model is initialized with multilingual \abr{bert}~\citep[\abr{mbert}]{devlin2019bert} and fine-tuned on a mixture of proprietary and public datasets with toxic and offensive language labels.
The model outputs a score from zero to one, with a higher score corresponding to a higher level of toxicity.
Second, we use Perspective \abr{api}\footnote{\url{https://www.perspectiveapi.com}}, a publicly available model.
Perspective \abr{api} has limited free access (one query per second), so we only use the \abr{api} on the English validation, test, and response set.
For other languages, we rely on our in-house model.
We filter out a message-reply pair if it has a score greater than 0.9 according to the in-house model, or greater than 0.5 according to Perspective \abr{api}~\citep{gehman2020real}.
About one percent of the examples are filtered out.
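The dual-threshold rule can be sketched as follows, where `inhouse_score` and `perspective_score` stand in for the two models' outputs in $[0, 1]$, and `perspective_score` is `None` for languages covered only by the in-house model (an illustrative helper, not production code):

```python
def keep_example(inhouse_score, perspective_score=None):
    """Discard a message-reply pair flagged by either toxicity model."""
    if inhouse_score > 0.9:
        return False
    if perspective_score is not None and perspective_score > 0.5:
        return False
    return True
```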
After filtering the data, we manually validate three hundred random examples and do not find any toxic examples, which suggests that our filtering method has a high recall.
While we hope the filtered dataset leads to better reply suggestion models, existing filtering methods are not perfect and can introduce other biases~\citep{dixon2018measuring,sap2019risk,hutchinson2020social}.
Therefore, models trained on all \name{} data may still have undesirable behavior.
\name{} is intended to be used as a benchmark for testing cross-lingual generalization of generation and retrieval models.
\textbf{The dataset should not be directly used in production systems.}
To use the dataset in practice, additional work is required to address other possible biases and toxic or inappropriate content that may exist in the data.
\section{Results and Discussion}
\label{sec:result}
We experiment with the two baselines from Section~\ref{sec:model} on \name{}.
We first compare the models in English, where we have enough training data and human referees.
We then build models for other languages and compare training settings listed in Section~\ref{ssec:setting}.
\subsection{Results on English}
\label{ssec:en_exp}
Figure~\ref{fig:en} compares the generation and retrieval models in the English monolingual setting.
The generation model not only has a higher relevance (\abr{rouge}) score but also generates more diverse replies (higher \abr{dist} scores).
For English, we also ask three human referees to compare the model outputs on a subset of 500 test examples.
Again, the referees prefer the generation model more often than the retrieval model (Figure~\ref{fig:en}).
We look at some generated responses to understand the models qualitatively.
In the top two examples in Table~\ref{tab:example}, the generation model produces replies highly specific to the input message.
In contrast, the retrieval model fails to find a relevant reply, because the response set does not cover these topics.
This explains why the generation model has much higher \abr{rouge} and distinct $n$-gram scores than the retrieval model.
However, the expressiveness comes at the cost of a lack of control over the generated replies.
The generation model sometimes produces incoherent replies that are repetitive and/or contradictory, as shown in the bottom two examples of Table~\ref{tab:example}.
For the retrieval model, we can easily avoid these problems by curating the fixed response set.
These degenerative behaviors are observed in other text generation tasks and can be mitigated by changing training and decoding objectives~\citep{holtzman2020curious,welleck2019neural}.
We leave these directions for future research.
\subsection{Results on Other Languages}
\label{ssec:xling_exp}
After comparing English models, we experiment on other languages using the settings from Section~\ref{ssec:setting}.
\paragraph{Retrieval Model.}
Table~\ref{tab:ret} shows results for the retrieval model when initialized with \abr{mbert}.
The retrieval model can generalize fairly well across languages, as the \abr{rouge} in the zero-shot setting is often close to the monolingual setting.
This result confirms that initializing with \abr{mbert} is an effective strategy for cross-lingual generalization.
Training on \abr{mt} data is usually worse than training in the zero-shot setting.
A possible reason is that the \abr{mt} system may create artifacts that do not appear in organic data~\citep{artetxe2020translation}.
For the multilingual model, the \abr{rouge} scores on the training languages are lower than in monolingual training (gray cells in Table~\ref{tab:ret}).
However, multilingual training sometimes leads to better \abr{rouge} on unseen languages compared to transferring from only English (zero-shot).
Previous work observes similar results on other tasks, where multilingual training hurts training languages but helps generalization to unseen languages~\citep{johnson2017google,conneau2020unsupervised,wang2020negative}.
Finally, Appendix~\ref{sec:xlmr} shows similar results when initializing with \abr{xlm-r}~\citep{conneau2020unsupervised}.
\paragraph{Generation Model.}
Table~\ref{tab:gen} shows results for the generation model.
In the monolingual setting, the generation model has higher scores than the retrieval model on most languages, consistent with the English result~(Figure~\ref{fig:en}).
However, unlike the retrieval model, the generation model fails to generalize across languages in the zero-shot setting, despite using Unicoder-\abr{xdae} for initialization.
We do not show zero-shot results in Table~\ref{tab:gen}, because the \abr{rouge} scores are close to zero for non-English languages.
After training on English data, the model always produces English replies, regardless of the input language; i.e., the generation model ``forgets'' multilingual knowledge acquired during pre-training~\citep{kirkpatrick2017overcoming}.
This result is surprising because Unicoder-\abr{xdae} works in the zero-shot setting for other generation tasks~\citep{liang2020xglue},
which suggests that reply suggestion poses unique challenges for cross-lingual transfer learning.
Interestingly, the multilingual model can generalize to unseen languages;
perhaps training on multiple languages regularizes the model to produce replies in the input language.
Overall, the best method to generalize the generation model across languages is to use machine-translated data.
\section{Experiment Settings}
\label{sec:setting}
After presenting the dataset, we explain how we use \name{} to compare reply suggestion models.
We describe the two frameworks for reply suggestion, our experiment settings, and evaluation metrics.
\subsection{Task Formulation}
\label{ssec:framework}
In reply suggestion, the input is a message $\vect{x}$, and the output is one or more suggested replies $\vect{y}$.
In practice, reply suggestion systems can choose to not suggest any replies.
This decision is usually made by a separate trigger model~\citep{kannan2016smart}.
In this paper, we focus on producing the suggested replies, so we assume that the models always need to suggest a fixed number of replies.
Reply suggestion can be formulated as either a \emph{retrieval} problem or a \emph{generation} problem.
\paragraph{Retrieval Model.}
A retrieval model selects the reply $\vect{y}$ from a fixed response set~$\mathcal{Y}$ (Section~\ref{ssec:response}).
Given an input message $\vect{x}$, the model computes a relevance score $\vect{\Theta}_{\vect{x}\vect{y}}$ for each candidate reply $\vect{y}\in\mathcal{Y}$.
The model then selects the highest-scoring replies as suggestions; e.g., the top-1 reply is $\argmax_{\vect{y} \in \mathcal{Y}} \Theta_{\vect{x}\vect{y}}$.
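With the response vectors precomputed, suggestion reduces to a matrix-vector product followed by a top-$k$ selection. A minimal NumPy sketch, with the encoder calls replaced by precomputed vectors (the helper is illustrative, not the production scorer):

```python
import numpy as np

def suggest(message_vec, response_vecs, k=3):
    """Return indices of the k highest-scoring candidate replies.

    message_vec:   (d,) encoding of the input message.
    response_vecs: (|Y|, d) encodings of the fixed response set.
    """
    scores = response_vecs @ message_vec  # dot-product relevance scores
    return np.argsort(-scores)[:k]        # indices of the top-k replies
```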
\paragraph{Generation Model.}
A generation model generates the reply $\vect{y}$ from scratch.
Generation models usually follow the sequence-to-sequence framework~\citep[\abr{seq2seq}]{sutskever2014sequence}, which generates $\vect{y}$ token by token.
Given an input message $\vect{x} = (x_1, x_2, \cdots, x_n)$ of $n$ tokens, a \abr{seq2seq} model estimates the probability of a reply $\vect{y} = (y_1, y_2, \cdots, y_m)$ of $m$ tokens as follows:
\begin{equation}
\label{eq:likelihood}
p(\vect{y} \, | \, \vect{x}) = \prod_{i=1}^{m} p(y_i \, | \, \vect{x}, y_{<i}).
\end{equation}
The model computes the probability of the next token $p(y_i \, | \, \vect{x}, y_{<i})$ based on the input $\vect{x}$ and the first~$(i-1)$ tokens of the output $\vect{y}$.
The model is trained to maximize the probability of reference replies in the training set.
At test time, we find the top replies that approximately maximize \eqref{eq:likelihood} with beam search.
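Beam search over \eqref{eq:likelihood} can be sketched as follows; `step_fn` stands in for the decoder's next-token distribution, and practical details such as length normalization are omitted:

```python
import math

def beam_search(step_fn, bos="<s>", eos="</s>", beam_size=3, max_len=20):
    """Approximately maximize sequence log-probability.

    step_fn(prefix) -> list of (token, log_prob) candidates for the
    next position, a stand-in for the decoder's softmax output.
    Returns up to beam_size finished (sequence, score) pairs, best first.
    """
    beams = [([bos], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:          # hypothesis is complete
                finished.append((seq, score))
            else:                       # extend by one token
                for tok, lp in step_fn(seq):
                    candidates.append((seq + [tok], score + lp))
        if not candidates:              # every hypothesis has finished
            beams = []
            break
        # keep the beam_size highest-scoring partial hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    finished += [b for b in beams if b[0][-1] == eos]
    return sorted(finished, key=lambda c: c[1], reverse=True)[:beam_size]
```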
The two models have different strengths.
The generation model is more flexible, but the retrieval model is faster~\citep{henderson2017efficient}, and the output can be controlled by curating the response set~\citep{kannan2016smart}.
We compare a retrieval model and a generation model as baselines for \name{}.
To our knowledge, we are the first to systematically compare the two models in both monolingual and multilingual settings. We explain our training settings and metrics next.
\subsection{Training Settings}
\label{ssec:setting}
For each language in \name{}, we train and compare models in four settings.
Future work can experiment with other settings (discussed in Section~\ref{sec:future}).
\paragraph{Monolingual.}
Here, we simply train and test models in a single language.
This setting simulates the scenario where we have adequate training data for the target language. Previous reply suggestion models were only studied in the English monolingual setting.
\paragraph{Zero-Shot.}
Next, we train models in a zero-shot cross-lingual setting.
We train the model on the English training set and use the model on the test set for another language.
This setting simulates the scenario where we want to build models for a low-resource language using our large English set.
To generalize across languages, we initialize the models with pre-trained multilingual models (details in Section~\ref{sec:model}).
These models work well in other tasks~\citep{wu2019beto,liang2020xglue}.
We test if they also work for reply suggestion, as different tasks often prefer different multilingual representations~\citep{zhang2020overfit}.
\paragraph{Machine Translation (\abr{mt}).}
Another strategy for cross-lingual generalization is to train on machine-translated data~\citep{banea2008multilingual}.
We train models on nineteen million English training examples machine-translated to the target language with Microsoft Translator.
We compare against the zero-shot setting to compare the two cross-lingual generalization strategies.
\paragraph{Multilingual.}
Finally, we build a multilingual model by jointly training on the five languages with the most training data: English, Spanish, German, Portuguese, and French.
We oversample non-English training data so that we have the same number of training examples across all languages~\citep{johnson2017google}.
We make two comparisons:
1) for the five training languages, we compare against the \emph{monolingual} setting to test whether fitting multiple languages in a single model hurts performance;
and 2) for other languages, we compare against the \emph{zero-shot} setting to check if adding more training languages helps cross-lingual generalization.
\subsection{Evaluation Metrics}
\label{ssec:metric}
The goal of reply suggestion is to save user typing time, so the ideal metrics are click-through rate (\abr{ctr}), how often the user chooses a suggested reply, and time reduction, how much time is saved by clicking the suggestion instead of typing.
However, these metrics require deploying the model to test on real users, which is not feasible at full-scale while writing this paper. Instead, we focus on automated offline metrics that can guide research and model development before deploying production systems. Specifically, we evaluate models using a test set of message-reply pairs.
To identify a good metric, we compare several metrics in a pilot study by deploying an English system.
We collect millions of user interactions and measure Pearson's correlation between \abr{ctr} and automated offline metrics.
The next paragraph lists the metrics.
Based on the study, we recommend weighted \abr{rouge} F1 ensemble (\textbf{\abr{rouge}} in tables), which has the highest correlation with \abr{ctr}.
For the retrieval model, we follow previous work and consider mean reciprocal rank \citep[\abr{mrr}]{kannan2016smart} and precision at one~\citep{henderson2017efficient}.
These metrics test if the model can retrieve the reference response from a random set of responses. Alternatively, we compute \abr{mrr} and precision on a subset of examples where the reference reply is in the response set so that we can directly measure the rank of the reference response in the response set. This set also allows us to compute \abr{mrr} for individual responses, so we can compute macro-\abr{mrr}, the average \abr{mrr} over each response in the set. Higher macro-\abr{mrr} can indicate diversity but has a worse correlation than computing \abr{mrr} over the entire test set.
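Mean reciprocal rank and precision at one can be computed from the 1-based rank of the reference reply among the scored candidates for each test example; the helpers below are illustrative, not taken from the cited implementations:

```python
def mean_reciprocal_rank(ranks):
    """ranks: 1-based rank of the reference reply for each test example."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def precision_at_one(ranks):
    """Fraction of examples whose reference reply is ranked first."""
    return sum(1 for r in ranks if r == 1) / len(ranks)
```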
For the generation model, we consider model perplexity~\citep{adiwardana2020humanlike}.
Finally, we consider two word overlap scores, \abr{bleu}~\citep{papineni2002bleu} and \abr{rouge}~\citep{lin2004rouge}, which can be used for both retrieval and generation models.
Our pilot study shows that \abr{rouge} has the best correlation.
However, individual \abr{rouge} F1 scores (\abr{rouge-1/2/3}) are sensitive to small changes in sequence lengths (more so because our responses are generally short). Therefore, we use a weighted average of the three scores:
\begin{equation}
\label{eq:rouge}
\frac{\abr{Rouge-1}}{6} + \frac{\abr{Rouge-2}}{3} + \frac{\abr{Rouge-3}}{2}.
\end{equation}
This weighted score leads to the highest correlation with \abr{ctr}.
Intuitively, the weights balance the differences in the average magnitude of each metric and thus reduce variance on short responses.
Popular reply suggestion systems (such as Gmail and Outlook) suggest three replies for each message, while the user only selects one. To simulate this setting, we predict three replies for each message.
For the retrieval model, we use the three highest-scoring replies from the response set.
For the generation model, we use top-three results from beam search.
Out of the three replies, we only use the reply with the highest \abr{rouge} compared to the reference reply when computing the final metrics; i.e., the model only has to provide one ``correct'' reply to have a full score.
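The weighted ensemble in \eqref{eq:rouge} and the best-of-three scoring can be sketched as follows, where `rouge_f1` stands in for a standard \abr{rouge} implementation returning the three F1 scores (the helpers are illustrative, not our evaluation code):

```python
def weighted_rouge(r1, r2, r3):
    """Weighted ensemble of the ROUGE-1/2/3 F1 scores."""
    return r1 / 6 + r2 / 3 + r3 / 2

def example_score(suggestions, reference, rouge_f1):
    """Only the best of the suggested replies counts toward the metric."""
    return max(
        weighted_rouge(*rouge_f1(reply, reference)) for reply in suggestions
    )
```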
We compare models primarily with \abr{rouge}, since the metric has the best correlation in the pilot study.
Nevertheless, word overlap scores have known limitations~\citep{liu2016evaluate}, as there are different ways to reply to a message.
We encourage future research to investigate other metrics to understand different aspects of the model.
As examples, we also report two diversity scores: the proportion of distinct unigrams (\textbf{Dist-1}) and bigrams (\textbf{Dist-2}) in the generated replies~\citep{li2016diversity}.
While \abr{rouge} measures the relevance of the replies, higher diversity can also increase \abr{ctr}~\citep{deb2019diversifying}.
We can improve the diversity of the three replies with diversity-promoting decoding~\citep{li2016diversity,vijayakumar2018diverse,zhang2018generating} or latent variable models~\citep{deb2019diversifying}, but we leave this direction to future work.
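The distinct $n$-gram proportion is a direct count; the sketch below assumes whitespace tokenization:

```python
def distinct_n(replies, n):
    """Proportion of distinct n-grams among all n-grams in the replies."""
    ngrams = []
    for reply in replies:
        tokens = reply.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0
```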
For our English monolingual experiments, we also complement automatic metrics with human judgments (\textbf{Human} in Figure~\ref{fig:en}).
For each example, we display the input message and sets of three suggested replies from both generation and retrieval models to three human annotators (crowd workers).
We then ask the annotators to select the set with more responses that they prefer to send as a reply.
We leave evaluations for other languages to future work due to resource limitations.
\section{Future Work}
\label{sec:future}
\name{} opens up opportunities for future research.
Our experiments use four training settings (Section~\ref{ssec:setting}), but there are many other settings to explore.
For example, we can use other combinations of training languages, which may work better for some target languages~\citep{ammar2016many,cotterell2017cross,ahmad2019difficulties,lin2019choosing,zhang2020caco}.
We are also interested in training on both organic data and \abr{mt} data; i.e., mixing the zero-shot and \abr{mt} setting.
We can also compare other models on \name{}.
For the English monolingual setting, we can initialize the generation model with state-of-the-art language models~\citep{radford2019language,brown2020language,zhang2020dialogpt}.
For cross-lingual settings, we can initialize the generation model with several recent pre-trained multilingual \abr{seq2seq} models~\citep{chi2020cross,chi2021infoxlm,liu2020multilingual,tran2020cross,lewis2020marge,xue2020mt5}.
For retrieval models, we can experiment with other multilingual encoders that use different pre-training tasks~\citep{artetxe2019massively,chidambaram2019learning,reimers2020making,feng2020language}.
Another idea is to combine the two models.
Given an input message, we first use a generation model to create a set of candidate replies.
We then use a retrieval model to compute relevance scores and rerank these candidates.
Reranking the output of a generation model helps other natural language processing tasks~\citep{shen2004discriminative,collins2005discriminative,ge2006discriminative},
and previous work uses a similar idea for chatbots~\citep{qiu2017alime}.
Our experiment shows that reply suggestion poses unique challenges for cross-lingual generalization, especially for the generation model.
Future work can study methods to improve cross-lingual generalization methods.
Some examples include applying adversarial learning~\citep{chen2018adversarial,chen2019multi,huang2019cross}, using adapters~\citep{pfeiffer2020mad}, adaptive transfer~\citep{xia2021metaxl}, mixing pre-training and fine-tuning~\citep{phang2020english}, and bringing a human in the loop~\citep{yuan2020interactive}.
\section{Baseline Models}
\label{sec:model}
This section introduces the two baseline models: a retrieval model and a generation model.
\subsection{Retrieval Model}
\label{ssec:retrieval}
For the retrieval model, we use the architecture from \citet{henderson2017efficient}, except we replace the feedforward network encoders with Transformers~\citep{vaswani2017attention}.
Given an input message $\vect{x}$ and candidate reply $\vect{y}$,
two Transformer encoders $\Phi_x$ and $\Phi_y$ map the message and the reply to two vectors $\Phi_x(\vect{x})$ and $\Phi_y(\vect{y})$.
The relevance score $\Theta_{\vect{x}\vect{y}}$ between the message $\vect{x}$ and the reply $\vect{y}$ is the dot product of the two vectors:
\begin{equation}
\Theta_{\vect{x}\vect{y}} = \Phi_x(\vect{x})^\top \Phi_y(\vect{y}).
\end{equation}
\citet{henderson2017efficient} also adds a language model score to encourage more frequent replies.
For simplicity, we do not use the language model score.
We train the model with the symmetric loss from~\citet{deb2019diversifying}.
Suppose the batch size is $n$.
For a batch of training messages $\{\vect{x}_i\}_{i=1}^n$ and corresponding replies $\{\vect{y}_i\}_{i=1}^n$, we maximize:
\begin{equation}
\sum_{i=1}^n \frac{e^{\Theta_{\vect{x}_i\vect{y}_i}}}{\sum_{j=1}^n \left (e^{\Theta_{\vect{x}_i\vect{y}_j}} + e^{\Theta_{\vect{x}_j\vect{y}_i}} \right) - e^{\Theta_{\vect{x}_i\vect{y}_i}}}.
\end{equation}
In a regular softmax loss, the denominator only sums over one variable.
The denominator in the symmetric loss sums over both variables to encourage bidirectional compatibility: the message should be predictive of the reply, and the reply should be predictive of the message.
This encourages the model to select responses specific to the message, similar to the Maximum Mutual Information objective from \citet{li2016diversity}.
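The symmetric objective for one batch can be written in a few lines of NumPy; this is a sketch of the objective itself, not the production training code (which would use an automatic-differentiation framework):

```python
import numpy as np

def symmetric_objective(theta):
    """theta: (n, n) score matrix with theta[i, j] = Theta_{x_i y_j};
    the diagonal holds the scores of the matched message-reply pairs."""
    e = np.exp(theta)
    diag = np.diag(e)
    # row sum covers e^{Theta_{x_i y_j}}, column sum covers e^{Theta_{x_j y_i}};
    # the matched term appears in both, so subtract it once
    denom = e.sum(axis=1) + e.sum(axis=0) - diag
    return np.sum(diag / denom)
```

Higher scores on the diagonal (matched pairs) relative to the off-diagonal entries increase the objective, as intended.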
The two encoders $\Phi_x$ and $\Phi_y$ are initialized with \abr{mbert}~\citep{devlin2019bert}, a Transformer with 110 million parameters pre-trained on multilingual corpora.
Initializing with \abr{mbert} allows the model to generalize across languages~\citep{wu2019beto}.
In Appendix~\ref{sec:xlmr}, we experiment with another pre-trained multilingual Transformer, \abr{xlm-r}~\citep{conneau2020unsupervised}.
We use the ``base'' version with 270 million parameters.
\subsection{Generation Model}
For the generation model, we follow the \abr{seq2seq} architecture (Section~\ref{ssec:framework}).
We use a Transformer encoder to read the input $\vect{x}$, and another Transformer decoder to estimate $p(y_i \, | \, \vect{x}, y_{<i})$ in \eqref{eq:likelihood}.
We cannot initialize the generation model with \abr{mbert} or \abr{xlm-r}, because the model also has a decoder.
Instead, we use Unicoder-\abr{xdae}~\citep{liang2020xglue}, a pre-trained multilingual \abr{seq2seq} model, which can generalize across languages in extractive generation tasks such as news title generation and question generation.
We test if Unicoder-\abr{xdae} also generalizes in the more challenging reply suggestion task.
There are other generation models we can use, which we discuss as future work in Section~\ref{sec:future}.
\subsection{Training Details}
\label{sec:hyperparameter}
We train the retrieval model using Adam optimizer~\citep{kingma2015adam} with 1e-6 learning rate, default $\beta$, and 256 batch size.
For monolingual and zero-shot settings, we use twenty epochs for English and fifty epochs for other languages.
We use ten epochs for \abr{mt} and multilingual settings.
The first 1\% training steps are warmup steps.
During training, we freeze the embedding layers and the bottom two Transformer layers of both encoders, which preserves multilingual knowledge from the pre-trained model and improves cross-lingual transfer learning~\citep{wu2019beto}.
All hyperparameters are manually tuned on the English validation set.
We use almost the same hyperparameters as \citet{liang2020xglue} to train generation models.
Specifically, we use Adam optimizer with 1e-5 initial learning rate, default $\beta$, and 1024 batch size.
For the monolingual and zero-shot setting, we use four epochs for English and 5000 steps for other languages (equivalent to two to nine epochs depending on the language).
We use one epoch for the \abr{mt} setting and 40,000 steps for the multilingual setting.
The first 20\% training steps are warmup steps.
We freeze the embedding layer during training for faster training.
All models are trained with eight Tesla V100 \abr{gpu}s.
It takes about an hour to train the generation model for 1000 steps (covering about one million examples).
For the retrieval model, an epoch on the English training set (about 48 million examples) takes about seven hours.
\section{Conclusion}
\label{sec:conclusion}
We present \name{}, a multilingual dataset for reply suggestion.
We compare a generation and a retrieval baseline on \name{}.
The two models have different strengths in the English monolingual setting and require different strategies to transfer across languages.
\name{} provides a benchmark for future research in both reply suggestion and cross-lingual transfer learning.
\section{Multilingual Reply Suggestion}
\label{sec:intro}
Automated reply suggestion is a useful feature for email and chat applications.
Given an input message, the system suggests several replies, and users may click on them to save typing time (Figure~\ref{fig:reply}).
This feature is available in many applications including Gmail, Outlook, LinkedIn, Facebook Messenger, Microsoft Teams, and Uber.
Reply suggestion is related to but different from open-domain dialog systems or chatbots~\citep{adiwardana2020humanlike,huang2020challenges}.
While both are conversational \abr{ai} tasks~\citep{gao2019neural}, the goals are different:
reply suggestion systems help the user quickly reply to a message,
while chatbots aim to \emph{continue} the conversation and focus more on multi-turn dialogues.
Ideally, we want our model to generate replies in any language.
However, reply suggestion models require large training sets, so previous work mostly focuses on English~\citep{kannan2016smart,henderson2017efficient,deb2019diversifying}.
To investigate reply suggestion for other languages with possibly limited data,
we build a multilingual dataset, dubbed \name{} (\textbf{M}ultilingual \textbf{R}eply \textbf{S}uggestion).
From publicly available Reddit threads, we extract message-reply pairs, response sets, and machine-translated examples in \langnum{} languages~(Table~\ref{tab:lang}).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/smartreply.png}
\caption{An example of a reply suggestion system. Users can click on a suggestion for a quick reply.}
\label{fig:reply}
\end{figure}
One interesting aspect of the reply suggestion problem is that there are two modeling approaches.
Some models follow the retrieval framework and select the reply from a predetermined response set~\citep{henderson2017efficient}.
Others follow the generation framework and generate the reply from scratch~\citep{kannan2016smart}.
The two approaches have different advantages.
Generation models are more powerful because they are not constrained by the response set.
In comparison, retrieval models are easier to train and run faster, and a curated response set guarantees the coherence and the safety of the model output.
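To make the retrieval framework concrete, here is a minimal sketch that ranks a fixed response set by a similarity score. The response strings and the bag-of-words cosine scoring are illustrative placeholders only; the actual baselines in this paper are learned neural models (Section~\ref{sec:model}).

```python
import re
from collections import Counter
from math import sqrt

# Toy fixed response set; a production system would use a large curated set.
RESPONSE_SET = [
    "Sounds good, see you then!",
    "Thanks, I will take a look.",
    "Sorry, I can't make it today.",
]

def bow(text):
    # Bag-of-words vector as a word -> count mapping.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def suggest(message, k=3):
    # Score every candidate reply against the message and return the top k.
    msg = bow(message)
    return sorted(RESPONSE_SET, key=lambda r: cosine(msg, bow(r)), reverse=True)[:k]
```

Because the candidates are fixed, inference is a single scoring pass over the response set, which is why retrieval models are fast and always produce coherent replies.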
\input{tables/data}
The two frameworks make reply suggestion an interesting task for studying cross-lingual generalization.
Most cross-lingual generalization benchmarks use classification and sequence labeling tasks~\citep{tjong2002introduction,nivre2016universal,strassel2016lorelei,conneau2018xnli,li2018mldoc,clark2020tydi,hu2020xtreme,lewis2020mlqa}.
In contrast, reply suggestion has two formulations that require different cross-lingual generalization strategies.
While some recent work explores cross-lingual transfer learning in generation tasks,
the tasks are \emph{extractive}; i.e., the output often has significant overlap with the input.
These tasks include news title generation, text summarization, and question generation~\citep{chi2020cross,liang2020xglue,scialom2020mlsum}.
Reply suggestion is more challenging because the reply often does not overlap with the message (Figure~\ref{fig:reply}), so the model needs to address different cross-lingual generalization challenges (Section~\ref{ssec:xling_exp}).
We build two baselines for \name{}: a retrieval model and a generation model.
We first compare the models in English, where we have abundant training data and human referees.
We evaluate the models with both automatic metrics and human judgments.
The two models have different strengths.
The generation model has higher word overlap scores and is favored by humans on average, but inference is slower, and the output is sometimes contradictory or repetitive~\citep{holtzman2020curious}.
In contrast, the retrieval model is faster and always produces coherent replies, but the replies are sometimes too generic or irrelevant due to the fixed response set.
Next, we test models in other languages.
We compare different training settings and investigate two cross-lingual generalization methods: initializing with pre-trained multilingual models~\citep{wu2019beto,conneau2020unsupervised,liang2020xglue} and training on machine-translated data~\citep{banea2008multilingual}.
Interestingly, the two models prefer different methods: multilingual pre-training works better for the retrieval model, while the generation model prefers machine translation.
In summary, we present \name{}, a multilingual reply suggestion dataset.
We use \name{} to provide the first systematic comparison between generation and retrieval models for reply suggestion in both monolingual and multilingual settings.
\name{} is also a useful benchmark for future research in reply suggestion and cross-lingual generalization.
The rest of the paper is organized as follows.
Section~\ref{sec:data} describes the data collection process for \name{}.
Section~\ref{sec:setting} introduces task formulations, experiment settings, and evaluation metrics.
Section~\ref{sec:model} describes the baseline generation and retrieval models.
Section~\ref{sec:result} presents our experiment results.
Section~\ref{sec:future} discusses how \name{} can help future research.
\section*{Ethical Considerations}
\paragraph{Data Collection.}
No human annotators were involved in creating \name{}.
The examples and response sets of \name{} come from publicly available Reddit dumps from Pushshift, which are used in more than a hundred peer-reviewed publications~\citep{baumgartner2020pushshift}.
\paragraph{Privacy.}
Examples in \name{} do not include usernames and come from publicly available data.
Therefore, we do not anticipate any privacy issues.
In the pilot study (Section~\ref{ssec:metric}), we measure the correlation of user \abr{ctr} with different evaluation metrics.
To protect user privacy, we only collect aggregated statistics (\abr{ctr}) and use no other information.
\paragraph{Potentially Biased and Toxic Content.}
Despite our best effort to filter toxic content (Section~\ref{ssec:filter}), the dataset may not be perfectly cleansed and may have other biases that are typical in open forums~\citep{massanari2017gamergate,mohan2017impact}.
Users should be aware of these issues.
We will continue to improve the quality of the dataset.
\paragraph{Intended Use of \name{}.}
Because of the possible biases and inappropriateness in the data, \name{} should \emph{not} be directly used to build production systems (as mentioned in Section~\ref{ssec:filter}).
The main use of \name{} is to test cross-lingual generalization for text retrieval and generation models, and researchers should be aware of possible ethical issues of Reddit data before using \name{}.
\section{Results for \abr{xlm-r}}
\label{sec:xlmr}
\input{tables/xlmr}
\section{Related Work}
\label{sec:related}
\mz{Will move most of these contents somewhere else, and maybe remove this section.}
Automated reply suggestion is available in many applications, including Gmail, Outlook, LinkedIn, Facebook Messenger, Microsoft Teams, and Uber.
The SR task is in the domain of conversational AI and can be compared to open-domain dialog agents \cite{dialog citations}. However, it is also quite different, and perhaps closer to goal-oriented dialog systems \cite{goal directed}: the aim is to help users quickly finish a task (reply to a message), rather than engage in the open-ended chat that is the main objective of dialog systems.
Suggested replies have been modeled both as a generative process, with the sequence-to-sequence architectures \cite{kannan2016smart} commonly used in machine translation, and as a retrieval process \cite{henderson2017efficient, deb2019diversifying}, which tends to be more popular in industrial settings due to the better control it offers over the suggestions. We compare the popular architectures in retrieval (matching architecture) and generation (s2s) in a multilingual setting. In addition, several conditional language generation models \cite{GPT2, GPT3, T5 etc} can be applied here, which we leave for future work.
As SR becomes popular, it is increasingly important to evaluate the task in multiple languages. More broadly, it is hard to imagine any NLP task without multilingual dataset support, given fairness and inclusion objectives in both research and production.
\cite{conversation datasets} (Reddit, OpenSubtitles, and Amazon Q/A) and \cite{Ubuntu dialog corpus} (Ubuntu corpus) provide resources for generating data from publicly available repositories, which may be applicable to the SR task. However, they are mostly in English, and multilingual dialog datasets are rare. In \cite{citation}, the ConvAI2 dataset (Dinan et al., 2019a) is extended to support six languages, using machine translation followed by human refinement; this is a relatively small dataset and applicable to chat. Previous work has looked at dialog acts \cite{cite here} and task-oriented dialog \cite{task oriented} in multilingual settings. Multilingual datasets also exist for standard NLP tasks such as machine translation, language understanding, and question answering, to name a few, but to the best of our knowledge, neither a dataset nor any evaluation exists for the SR task.
Research has looked at both universal multilingual training and cross-lingual transfer learning for extending models to multiple languages. More recently, fine-tuning pretrained multilingual text encoders for specific tasks has become an increasingly important technique for achieving multilinguality. Multilingual extensions of popular pretrained models exist (MBERT etc, please cite). For generation models, the alternatives include (unicoder, T5? etc). Several papers have improved cross-lingual, multilingual, and zero-shot scenarios by leveraging parallel data and different pre-training tasks \cite{devlin2018bert,conneau2019unsupervised,chi2020infoxlm}. We continue along such lines and evaluate the retrieval and generation approaches to SR using two standard encoders (MBERT, Unicoder).
\section{Introduction}
Much work is being done in recent times to find suitable systems for physically implementing qubits, the fundamental building blocks of quantum computation. Some of the leading candidates are superconducting qubits, trapped-ion qubits, and NV$^{-}$-center-based qubits. One of the strongest contenders among them is spin qubits based on planar quantum dot systems~\cite{loss1998quantum}, implemented using the spins of electrons confined in quantum dots. In a single-qubit system, a static magnetic field is applied to a single confined electron. This splits the degenerate spin energy level, giving rise to a two-level system whose two energy levels are used as the two states of the qubit. An ac magnetic field with frequency equal to the transition frequency between the two energy levels is used to implement gate operations.
Like many other qubit systems, these spin qubits suffer from noise due to decoherence. Interaction of the confined electrons with the environment is the major cause of decoherence in these systems, and this in turn decreases the fidelity of gate operations. The two major physical phenomena causing decoherence are hyperfine noise arising from nuclear spin fluctuations and phonon interaction mediated by spin-orbit coupling~\cite{taylor2006hyperfine,amasha2008electrical,khaetskii2001spin,san2006geometrical,marquardt2005spin,borhani2006spin}.
In this work, we investigate the effect of the ac magnetic field value on the fidelity of the single-qubit $NOT$ gate operation in the presence of decoherence. From this study, we show that the range of static magnetic field for which high-fidelity gate operations can be carried out can be increased by manipulating the ac magnetic field. The paper is structured as follows: in Section \MakeUppercase{\romannumeral 2}, we discuss the model used to simulate the system dynamics, including decoherence. In Section \MakeUppercase{\romannumeral 3}, we discuss the results, followed by the conclusion.
\section{Simulation Methodology}
This section provides the methodology followed to simulate the effect of the ac field value on the fidelity of gate operations for a single-quantum-dot-based spin qubit system in the presence of decoherence. We use the Lindblad master equation to model the dynamics of our system, given as follows:
\begin{equation}\label{Master_Eqaution}
\frac{d\rho(t)}{dt}= L_0 + L_D.
\end{equation}
\begin{equation}\label{Master_Equation_Cohrent_Part}
L_0= -\frac{i[H(t), \rho(t)]}{\hbar}.
\end{equation}
The terms $L_0$ and $L_D$ capture the coherent and non-coherent parts of the dynamics of the quantum system, respectively. $L_0$, in general, is given by Eq.~\ref{Master_Equation_Cohrent_Part}, where $\rho(t)$ denotes the state of the quantum system at an arbitrary time instant $t$ and $H(t)$ is the Hamiltonian of the closed system. The Hamiltonian $H(t)$ has the general form $H = -g \vec{\mu} \cdot \vec{B}$. For our system, this expression is given by Eq.~\ref{Hamiltonian}
\begin{equation}\label{Hamiltonian}
H(t) = -g\mu B_{static}\sigma_z - g\mu B_{ac}\left(\cos(\omega t)\sigma_x - \sin(\omega t)\sigma_y\right)
\end{equation}
where $g$ is the gyro-magnetic ratio, $\mu$ is the magnetic moment of electron and $\sigma_i$'s are the Pauli matrices where $i=\{x,y,z\}$. The non-coherent part of the Master Equation $L_D$ has the form given below
\begin{equation}\label{Non_cohrent_master}
L_{D}=\sum_{n}\frac{1}{2}( 2C_{n}\rho(t)C_{n}^\dagger-\rho(t)C_{n}^\dagger C_{n}-C_{n}^\dagger C_{n}\rho(t))
\end{equation}
where the operator $C_n$ corresponds to the $n^{th}$ decoherence operator due to hyperfine noise and phonon interaction.
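To illustrate how Eqs.~\ref{Master_Eqaution}--\ref{Non_cohrent_master} can be integrated numerically, the following sketch evolves a single spin under a resonant drive (written in the rotating frame, where the drive reduces to a constant $\sigma_x$ term, with $\hbar=1$) and a single phenomenological relaxation operator $C=\sqrt{\gamma}\,\sigma_-$. The parameter values are illustrative and are not fitted to the hyperfine or phonon channels discussed above; a dedicated solver (e.g.\ QuTiP's \texttt{mesolve}) would normally be used instead.

```python
import math

# 2x2 complex matrices as nested lists; enough for a single qubit.
# Basis order: (|up>, |down>), so rho[0][0] is the |up> population.
SX = [[0, 1], [1, 0]]
SMINUS = [[0, 0], [1, 0]]  # sigma_-: decay from |up> to |down>

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

def scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def dag(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def lindblad_rhs(rho, H, c_ops):
    # d(rho)/dt = -i[H, rho] + sum_n (C rho C^dag - {C^dag C, rho}/2), hbar = 1.
    out = scale(-1j, add(mul(H, rho), scale(-1, mul(rho, H))))
    for C in c_ops:
        CdC = mul(dag(C), C)
        out = add(out, mul(mul(C, rho), dag(C)),
                  scale(-0.5, mul(CdC, rho)), scale(-0.5, mul(rho, CdC)))
    return out

def evolve(rho, H, c_ops, t_final, steps=2000):
    # Fixed-step 4th-order Runge-Kutta integration of the master equation.
    dt = t_final / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, c_ops)
        k2 = lindblad_rhs(add(rho, scale(dt / 2, k1)), H, c_ops)
        k3 = lindblad_rhs(add(rho, scale(dt / 2, k2)), H, c_ops)
        k4 = lindblad_rhs(add(rho, scale(dt, k3)), H, c_ops)
        rho = add(rho, scale(dt / 6, add(k1, scale(2, k2), scale(2, k3), k4)))
    return rho

omega = 1.0    # illustrative Rabi frequency, set by the ac field amplitude
gamma = 0.02   # illustrative relaxation rate standing in for the C_n channels
H = scale(omega / 2, SX)                        # resonant drive in the rotating frame
C = scale(math.sqrt(gamma), SMINUS)
rho0 = [[0, 0], [0, 1]]                         # start in |down>
rho_pi = evolve(rho0, H, [C], math.pi / omega)  # pi-pulse: t = pi/omega
p_up = rho_pi[0][0].real  # close to 1, reduced slightly by the decoherence
```

With $\gamma=0$ the $\pi$-pulse flips the spin perfectly; a nonzero $\gamma$ reduces $P_\uparrow$ below one, mirroring the fidelity loss studied in the next section.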
\section{Results}
Following the simulation methodology discussed in the previous section, we obtain our results. First, we initialize our system to the $\ket{\downarrow}$ state and then apply a $\pi$-pulse. After that, we measure the probability of finding the qubit in the $\ket{\uparrow}$ state. We do this for different values of the static magnetic field, keeping the ac magnetic field constant.
This whole sequence is repeated for different values of the ac magnetic field.
\vspace{0em}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.06]{Fig1.png}
\vspace{0em}
\caption{(a) Probability $(P_\uparrow)$ of finding the spin in $\ket{\uparrow}$ state after application of the $\pi$-pulse as function of varying static magnetic field $(B_{static})$. (b) Static magnetic field value for which probability $P_\uparrow$ drops to $0.9$ as function of ac magnetic field value. In this regime of $B_{static}$, hyperfine noise is the dominant decoherence mechanism.}
\label{results_for_low_B}
\end{figure}
For the results obtained in Fig.~\ref{results_for_low_B}, we vary the static magnetic field $(B_{static})$ between $5$ mT and $15$ mT, keeping the ac field $(B_{ac})$ constant. This is repeated for various values of $B_{ac}$ in the range of $0.05$ mT to $0.25$ mT. Decoherence due to hyperfine interaction is dominant for such low values of the static magnetic field~\cite{mehl2013noise}. It is observed that, for a given value of the ac field, the lower the value of $B_{static}$, the stronger the decoherence, as can be seen in Fig.~\ref{results_for_low_B}(a). The horizontal line $(T_{U})$ in Fig.~\ref{results_for_low_B}(a) denotes the probability $P_\uparrow = 0.9$. The points labelled $B_F$ mark the value of the static magnetic field for which the probability $P_\uparrow$ drops to $0.9$ for a given value of $B_{ac}$. We can clearly see in Fig.~\ref{results_for_low_B}(a) that for higher values of $B_{ac}$, the values of $B_{F}$ are lower. In Fig.~\ref{results_for_low_B}(b), we plot the values of $B_F$ for different $B_{ac}$ values. It can be clearly seen from this figure that a higher value of $B_{ac}$ results in a lower value of $B_{F}$. This clearly shows that higher values of $B_{ac}$ can suppress the effect of decoherence due to hyperfine interaction to a certain extent.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.06]{Fig2.png}
\vspace{-1em}
\caption{(a) Probability $(P_\uparrow)$ of finding the spin in $\ket{\uparrow}$ state after application of the $\pi$-pulse as function of varying static magnetic field at high values, where phonon interaction is the dominant decoherence mechanism. (b) Static magnetic field value for which probability drops to $0.9$ of finding the spin in state $\ket{\uparrow}$ $(P_\uparrow)$ as function of ac magnetic field value where phonon interaction is the dominant decoherence mechanism.}
\vspace{0em}
\label{results_for_high_B}
\end{figure}
For the results obtained in Fig.~\ref{results_for_high_B}, we vary the static magnetic field between $1$ T and $8$ T, and the ac magnetic field is varied between $20$ mT and $50$ mT. Phonon interaction is the dominant cause of decoherence for such high values of the static magnetic field~\cite{mehl2013noise}. For a given value of the ac field, the higher the value of the static field $(B_{static})$, the stronger the decoherence, as can be seen in Fig.~\ref{results_for_high_B}(a). The horizontal line and the points denoted $B_F$ in Fig.~\ref{results_for_high_B}(a) denote the same quantities as earlier. We can see in Fig.~\ref{results_for_high_B}(a) that for higher values of the ac field $(B_{ac})$, the values of $B_F$ are higher. In Fig.~\ref{results_for_high_B}(b), we plot the values of $B_F$ for varying $B_{ac}$ values. It can be clearly seen from this figure that for higher values of $B_{ac}$ we can work with higher values of $B_{static}$. So, it can be said that higher values of $B_{ac}$ can suppress the effect of decoherence due to phonon interaction to a certain extent.
In summary, we observe that a higher value of the ac field mitigates decoherence at both high and low magnetic fields. We think this might be because raising the value of the ac magnetic field decreases the $\pi$-pulse time (faster gate operation). Therefore, our system has much less time to interact with its environment, which leads to less decoherence and better performance of the system.
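This scaling can be made explicit using the standard resonant Rabi result (a textbook relation, not derived in this paper): with the driving term of Eq.~\ref{Hamiltonian}, a $\pi$-pulse takes

```latex
t_\pi \;=\; \frac{\pi\hbar}{2\,g\mu B_{ac}},
```

so the decoherence accumulated per gate scales roughly as $\Gamma t_\pi \propto 1/B_{ac}$ for an effective decoherence rate $\Gamma$, consistent with the trends in Figs.~\ref{results_for_low_B} and~\ref{results_for_high_B}.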
\section*{Conclusion}
We have shown that we can obtain a high-fidelity $NOT$ gate for a much larger range of static magnetic field by increasing the ac magnetic field. This gives us greater flexibility with both control parameters. Furthermore, we plan to undertake a similar study for multi-spin systems and other gates.
\bibliographystyle{IEEEtran}
\section{Introduction}
In \cite{ArCa}, the first and third author investigated the $k$-\emph{universal transversal
property}, or $k$-ut property for short, of a permutation group $G$ on
$\Omega$. We say that $G$ has this property if, given any $k$-subset $S$ of
$\Omega$ (subset with $k$ elements), and any $k$-partition $\mathcal{P}$ of $\Omega$
(partition with $k$ parts), there is an element $g\in G$ which maps
$S$ to a section (or transversal) for $\mathcal{P}$. The paper comes close to giving
a characterisation of permutation groups with this property, for
$2<k< \lceil n/2\rceil$, together with several applications to semigroup theory.
The aim of this paper is to tackle Problem 5 of \cite{ArCa}: the study of the $k$-\emph{existential transversal property}, or $k$-et property for short. This concept is much weaker than the $k$-ut property, and so its consequences for semigroups are
substantially stronger. We say that the permutation group $G$ has the
$k$-et property if there exists a $k$-subset $S$ of $\Omega$ such that, for any
$k$-partition $\mathcal{P}$ of $\Omega$, there is an element $g\in G$ which maps $S$ to a
transversal (or cross-section) for $\mathcal{P}$. The first part of our goal is to understand groups with
the $k$-et property for $k\le n/2$.
Recall that a permutation group $G$ is \emph{$k$-homogeneous} if it acts
transitively on the set of all $k$-subsets of $\Omega$. We have the
obvious implications
\[\hbox{$k$-homogeneous}\Rightarrow\hbox{$k$-ut}\Rightarrow\hbox{$k$-et}.\]
The first theorem in the paper of Livingstone and Wagner~\cite{lw} asserts that
a $k$-homogeneous group of degree $n$, with $k\le\lceil n/2\rceil$, is
$(k-1)$-homogeneous. A significant result of the earlier paper is a theorem
in the same spirit: for $2<k<\lceil n/2\rceil$, $k$-ut implies $(k-1)$-ut.
A similar result for the $k$-et property does not hold. Indeed,
there are exactly two counterexamples, which are very interesting and constitute one of the
surprising results of the paper.
In the second section of this paper, we introduce the $k$-et property, with
a little background, and prove a number of results about it. The following
theorem summarises the main results of this section.
\begin{theorem}
\begin{enumerate}\itemsep0pt
\item Intransitive groups with $k$-et for $2<k<n$ are known.
\item Transitive groups with $k$-et for $3<k<n-2$ are primitive.
\item Transitive groups with $k$-et, $5\le k\le n/2$ are $2$-homogeneous for sufficiently large $n$.
\end{enumerate}
\end{theorem}
Regarding the last statement, we remark that the restriction to sufficiently large $n$ will be eliminated later in the article.
In the next section, we present several examples of the $k$-et property,
and show that some of the results in the preceding theorem are best possible.
The next two sections tackle the classification problem. In Section~\ref{s:8}
we show:
\begin{theorem}
For $8\le k\le n/2$, a permutation group with the $k$-et
property is symmetric or alternating.
\end{theorem}
The theorem is best possible. The Mathieu group $M_{24}$ has the $k$-et property for $k\le 7$
but not for $k=8$. The following section shows that it is the only $7$-et group
apart from symmetric and alternating groups, and also gives a complete
classification of $6$-et groups, and nearly complete classifications for
$k$-et groups with $k=4,5$.
The techniques developed in these sections allow some improvements to be made
in the results of \cite{ArCa}; we turn to this in Section~\ref{s:ut}, and also
correct a few small mistakes in that paper (a gap in the proof of \cite[Proposition 2.6]{ArCa} and a couple of
missing groups in \cite[Theorem 4.2(4)]{ArCa}).
{\color{black}
After this, we turn to the applications for semigroups, which provided the
motivation for this group theory problem. We are concerned with semigroups of the form
$\langle G,t\rangle$, where $G$ is a permutation group on $\Omega$ and $t$ a
transformation of $\Omega$ which is not a permutation. Our main interest is
in regularity: an element $x$ of a semigroup $S$ is regular if it has a von
Neumann inverse $x'$ (satisfying $xx'x=x$), and a semigroup is regular if all
its elements are regular. The basic result, due to Levi, McAlister and
McFadden~\cite{lmm}, asserts that $t$ is regular in $\langle G,t\rangle$ if and only if
there exists $g\in G$ such that $tgt=t$. Such an element $g$ maps the image
of $t$ to a transversal for the kernel of $t$. Hence we see that
\begin{itemize}\itemsep0pt
\item every map $t$ of rank $k$ is regular in $\langle G,t\rangle$ if and only
if $G$ has the $k$-universal transversal property;
\item every map $t$ with image $B$ satisfying $|B|=k$ is regular in
$\langle G,t\rangle$ if and only if $G$ has the $k$-existential transversal
property with witness $B$.
\end{itemize}
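The criterion of Levi, McAlister and McFadden is easy to test by brute force on small examples. In the illustrative sketch below (not part of the proofs), maps on $\{0,\dots,n-1\}$ are written as tuples and composed left to right, matching the convention in $tgt=t$.

```python
from itertools import permutations

def compose(f, g):
    # (f then g): maps act on the right, so (fg)(x) = g(f(x)).
    return tuple(g[f[x]] for x in range(len(f)))

def is_regular(t, group):
    # t is regular in <G, t> iff t g t = t for some g in G.
    t = tuple(t)
    return any(compose(compose(t, g), t) == t for g in group)

# Example: G = S_3 on {0,1,2}; t = (0,0,1) has image {0,1} and kernel {{0,1},{2}}.
S3 = list(permutations(range(3)))
t = (0, 0, 1)
# Here g = (0,2,1) maps the image {0,1} to the transversal {0,2} of ker t,
# so is_regular(t, S3) holds; over the trivial group it fails.
```

This matches the discussion above: a witnessing $g$ exists exactly when some element of $G$ maps the image of $t$ to a transversal for the kernel of $t$.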
Note that a non-regular semigroup can be generated by its regular elements;
therefore, the fact that every element in $G$ is regular ($g=gg^{-1}g$) and $t$ is regular in $\langle G,t\rangle$ does not imply that $\langle G,t\rangle$ is regular.
However, the key result in \cite{ArCa}
(asserting that for $k\le n/2$, the $k$-ut implies
the $(k-1)$-ut) ensures that if $G$ has the $k$-ut property for $k \le n/2$ and $t$ has rank $k$, then
the semigroup $\langle G,t\rangle$ is in fact regular. Our aim in this paper is to
investigate the much more difficult question: when is it true that $\langle G,t\rangle$ is regular for all maps whose image is a given $k$-set $B$?
It is easy to see that if $G$ has the
$k$-et property with witnessing set $B$, and also has the $(k-1)$-ut property, then this is
true. However, this sufficient condition is not necessary. In fact,
with the exception of one sporadic group, we fully solve the regularity problem, in the sense that our results on groups with unknown $k$-et status are conditional on them having this necessary property.
}
The following theorem compiles the main results on regularity of semigroups.
\begin{theorem}
Let $G \le S_n$ be a group different from $\mathop{\mathrm{Sz}}(32):5$ (degree $n=1025$), and $k \le n/2$. Suppose $G$ possesses the $k$-et property and $B$ witnesses it.
Then the semigroup $\langle G,t\rangle$ is regular for all transformations $t \in T_n$ with image $B$ if and only if one of the following holds:
\begin{enumerate}
\item $k \le 3$ or $k\ge 7$.
\item $k=6$ and $G$ possesses $5$-ut, is intransitive, or is one of the following groups: $\mathop{\mathrm{PGL}}(2,17)$ ($n=18$), $\mathord{\mathrm{M}}_{11}$ ($n=12$), $\mathord{\mathrm{M}}_{23}$ ($n=23$).
\item $k=5$ and $G$ is not $\mathop{\mathrm{PGL}} (2,27)$ or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$ ($n=28$).
\item $k=4$ and $G$ possesses $3$-ut, is intransitive, or is $G=\mathop{\mathrm{AGL}} (1,13)$ ($n=13$).
\end{enumerate}
In particular, if $n\ge27$ and $k\ge 4$, then $\langle G,t\rangle$ is regular if and only if $G$ is intransitive or possesses $(k-1)$-ut.
\end{theorem}
These sets $B$ have many interesting interpretations in terms of finite geometries, for which we refer the reader to Section \ref{app}. In a more speculative register, these sets might be connected with bases (sets of smallest size whose pointwise stabiliser is the identity), but we could not decide the issue.
The general context of this paper is the following. The theory of transformation semigroups, through its connections to theoretical computer science and automata theory, quickly led to several very natural problems, which were totally hopeless with the techniques available three or four decades ago. However, given the enormous progress made in the last decades, permutation
{\color{black}group theory} now has the tools to answer many of those problems. The problems usually translate into beautiful statements in the language of permutation groups and combinatorial structures, as shown in several recent investigations (for a small sample please see \cite{AAC,abc,abc2,abcrs,abdkm,arcameron22,ArCa,acmn,acs,ArMiSc,ArnoldSteinberg,randomsynch,gr,neumann,sv15}). One especially interesting consequence of the results in this paper is that they unearth some finite geometric structure on the image ranges of transformations (apparently acting on unstructured sets).
\section{The $k$-et property}
Throughout, $G$ denotes a permutation group on a finite set $\Omega$, with
$|\Omega|=n$.
A permutation group $G$ on $\Omega$ has the \emph{$k$-existential transversal
property} if there exists a $k$-subset $S$ of $\Omega$ such that, for any
$k$-partition $\mathcal{P}$ of $\Omega$, there exists $g\in G$ such that $Sg$ is a
section (or transversal) for $\mathcal{P}$. We call $S$ a \emph{witnessing $k$-set}.
We write $k$-et for short.
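For small degrees, the definition can be checked exhaustively by computer. The sketch below (illustrative code, not part of the proofs; permutations are written as tuples acting on $\{0,\dots,n-1\}$) closes a generating set under composition, enumerates all $k$-partitions, and searches for a witnessing $k$-set.

```python
from itertools import combinations

def group_elements(generators, n):
    # Close a set of permutations (length-n tuples) under composition.
    identity = tuple(range(n))
    elems, frontier = {identity}, {identity}
    while frontier:
        new = set()
        for g in frontier:
            for h in generators:
                gh = tuple(h[g[i]] for i in range(n))  # apply g first, then h
                if gh not in elems:
                    new.add(gh)
        elems |= new
        frontier = new
    return elems

def partitions_into_k(points, k):
    # Yield every partition of `points` into exactly k nonempty parts.
    points = list(points)
    def rec(i, parts):
        if i == len(points):
            if len(parts) == k:
                yield [frozenset(p) for p in parts]
            return
        x = points[i]
        for part in parts:          # put x into an existing part
            part.append(x)
            yield from rec(i + 1, parts)
            part.pop()
        if len(parts) < k:          # or open a new part with x
            parts.append([x])
            yield from rec(i + 1, parts)
            parts.pop()
    yield from rec(0, [])

def has_k_et(generators, n, k):
    # Return a witnessing k-set if the k-et property holds, else None.
    # An image of size k meeting all k parts meets each part exactly once.
    G = group_elements(generators, n)
    all_parts = list(partitions_into_k(range(n), k))
    for S in combinations(range(n), k):
        if all(any(all({g[s] for s in S} & part for part in P) for g in G)
               for P in all_parts):
            return S
    return None
```

For instance, the cyclic group generated by $(0\,1\,2\,3\,4)$ has the $2$-et property with witness $\{0,1\}$, while the trivial group on four points has no witnessing $2$-set; and for the copy of $S_4$ on five points fixing the point $4$, every witnessing $3$-set contains the fixed point, in line with Proposition~\ref{p:intrans}.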
A useful consequence of $k$-et is the following.
\begin{prop}\label{p:witness}
Suppose that $G$ has the $k$-et property. Then $G$ has at most $k$ orbits on $(k-1)$-sets,
and a witnessing $k$-set contains representatives of every $G$-orbit on
$(k-1)$-sets.
\end{prop}
\begin{proof}
The first statement clearly follows from the second. Let $S$ be the
witnessing $k$-set. If $|A|=k-1$, let $\mathcal{P}$ be the partition with the elements
of $A$ as singleton parts and one part $\Omega\setminus A$. Then, if
$Sg$ is a section for $\mathcal{P}$, we have $Ag^{-1}\subseteq S$.
\hfill$\Box$\end{proof}
We say that $G$ has the \emph{weak $k$-et property} if there exists a
$k$-set containing representatives of every $G$-orbit on $(k-1)$-sets.
One way to show that a permutation group $G$ does not have the $k$-et property
is to show that $G$ is the automorphism group of a structure containing two
$(k-1)$-subsets which cannot be contained in a $k$-set. We will say that two
such subsets \emph{cannot coexist}.
We make two further observations about the weak $k$-et property.
\begin{prop}\label{p:stab}
Suppose that $G$ is transitive, and the stabiliser of a point has the weak
$k$-et property. Then $G$ has the weak $k$-et property.
\end{prop}
\begin{proof}
Let $S$ be a witnessing $k$-set for the point stabiliser $G_x$, and let $A$
be any $(k-1)$-subset of the domain. We can move $A$ by an element of $G$ to
ensure that it does not contain $x$, and move the result into $S$ by an
element of $G_x$.\hfill$\Box$
\end{proof}
\begin{prop}\label{p:order}
If $G$ has the weak $k$-et property, then $|G|\ge{n\choose k-1}/k$.
\end{prop}
\begin{proof}
Each orbit of $G$ on $(k-1)$-sets has size at most $|G|$, and there are at
most $k$ such orbits.
\hfill$\Box$\end{proof}
Note that (although it is not the case that the $k$-et property implies the
$(k-1)$-et property) if $G$ fails the $k$-et property because the above bound
fails, then $G$ fails the $k'$-et property for
all $k'$ with $k\le k'\le n/2$. This is because the ratio of consecutive
values of the right-hand side is $(n-k+1)/(k+1)$, which is greater than $1$
for $k<n/2$.
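The ratio computation behind this remark is direct: writing $f(k)={n\choose k-1}/k$ for the bound,

```latex
\frac{f(k+1)}{f(k)}
  \;=\; \frac{k}{k+1}\cdot\frac{\binom{n}{k}}{\binom{n}{k-1}}
  \;=\; \frac{k}{k+1}\cdot\frac{n-k+1}{k}
  \;=\; \frac{n-k+1}{k+1} \;>\; 1
  \qquad\text{for } k < n/2.
```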
\medskip
We can obtain a slightly better bound if $G$ has the $k$-et property.
\begin{prop}\label{p:order2}
If $G$ has the $k$-et property, then $|G|\ge\frac{2}{k+1}{n\choose k-1}$.
\end{prop}
\begin{proof}
Let $B$ witness the $k$-et property.
Each orbit of $G$ on $(k-1)$-sets has size at most $|G|$, so the bound holds if there are at
most $(k+1)/2$ such orbits.
Assume that there are $k\ge l > (k+1)/2$ orbits. At least $2l-k\ge 2$ orbits have a unique representative in $B$. Let $B_1, B_2$
be two such representatives, and let $b_i$ be the unique element of $B\setminus B_i$. Consider the $k$-partition $\mathcal{P}$ of $\Omega$ consisting of
$\{b_1,b_2\}, \Omega \setminus B$, and the singleton subsets of $B \setminus \{b_1,b_2\}$. If $g \in G$ is such that $Bg $ is a transversal of $\mathcal{P}$, then
clearly one of $B_1,B_2$, say $B_1$, is contained in $Bg$.
However, $B_1$ is the unique representative of its orbit in $B$, and thus $B_1g=B_1$ and $b_2g \in \Omega \setminus B$. It follows that the stabiliser of $B_1$ is
non-trivial, and so its orbit has size at most $|G|/2$. Repeating the above argument, we see that all but one of the orbits with unique representatives in $B$ have size
at most $|G|/2$. Summing over all orbits we obtain that
$${n\choose k-1} \le |G| (l-2l+k) +\left(|G|/2\right)(2l-k-1)+|G|,$$
and the bound follows.
\hfill$\Box$\end{proof}
As with the bound for weak $k$-et, if $G$ fails the order bound for $k$, it does so for all $k'$
with $k \le k'\le n/2$.
Note, however, that the techniques of these theorems do not allow us to
improve Proposition~\ref{p:witness}. The group $\mathop{\mathrm{PGL}}(2,32)$ with degree $33$
turns out to have the $5$-et property, and to have five orbits on $4$-sets.
We will utilise both the stronger bound from Proposition \ref{p:order2}, and the slightly weaker, but simpler, bound from Proposition \ref{p:order}.
By abuse of notation we will refer to both expressions as the \emph{order bound}.
\medskip
We remark that if a permutation group $G$ preserves a geometric structure, it can often be used to show that $G$ is not $k$-et.
A typical arguments runs along the following lines: suppose $G$ preserves at least two tiers of non-trivial
geometric objects. Then for a partition ${\mathcal P}$ with``cascading" sets, as depicted in Figure \ref{fig:FIG2} for an affine-like geometry, we can often conclude that any section cannot be contained in a
geometric object. On the other hand, for a partition with $k-1$ singletons ``crammed" into a small flat (see Figure \ref{fig:FIG1}), any section often will need to lie in a flat, potentially from a higher tier. These conditions cannot simultaneously hold for any $k$-set, and so the group does not satisfy $k$-et.
\begin{figure}
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figura2a.png}
\captionof{figure}{A ``cascading'' partition. Each part extends the union of all smaller parts to a geometric object of the next higher tier. Under suitable conditions, no section of the partition lies in a small-tier flat.}
\label{fig:FIG2}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figura1.png}
\captionof{figure}{A ``crammed'' partition, in which $k-1$ singleton parts are placed in a flat from the smallest tier possible. Under suitable conditions, any section lies in a flat from a small tier.}
\label{fig:FIG1}
\end{minipage}
\end{figure}
Even if $G$ preserves only one tier of geometric objects, we obtain restrictions on the potential witnessing sets for $k$-et. If every flat is uniquely determined by any $k-2$ of its points, then
by the weak $k$-et property, every potential witness for $k$-et contains $k-1$ points within one flat and an additional point (see Figure \ref{fig:FIG5}).
In addition, the stabiliser of a flat must act $(k-1)$-homogeneously on it, as visualised by the partition in Figure \ref{fig:FIG3}. If a flat is already determined by $k-3$ points, then the stabiliser of a flat also acts transitively on its complement, as can be seen from Figure \ref{fig:FIG4}.
\begin{figure}
\centering
\includegraphics[width=.35\linewidth]{figura3a.png}
\captionof{figure}{If a group preserves any type of flat that is determined by $k-1$ of its points, then a witnessing set for $k$-et consists of $k-1$ points in a flat and an additional point.}
\label{fig:FIG5}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figura3.png}
\captionof{figure}{A partition showing that the stabiliser of a flat acts $(k-1)$-homogeneously on it.}
\label{fig:FIG3}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figura4maybe.png}
\captionof{figure}{{A partition showing that the stabiliser of a flat acts transitively on its complement.}}
\label{fig:FIG4}
\end{minipage}
\end{figure}
Now we turn to the classification of intransitive groups with $k$-et.
\begin{prop}\label{p:intrans}
Let $G$ be an intransitive permutation group with the $k$-et property,
where $2<k<n$. Then $G$ fixes a point and acts $(k-1)$-homogeneously on
the remaining points.
\end{prop}
\begin{proof} If $G$ fixes a point $x$, then the witnessing $k$-set $K$
must contain $x$. But there is only one $(k-1)$-subset of $K$ which does
not contain $x$, so $G$ must be $(k-1)$-homogeneous on the points different
from $x$.
Suppose that $G$ has two complementary fixed sets $A$ and $B$, with $|A|=a$ and
$|B|=b$, so that $a+b=n$; suppose that $a,b\ge2$. Then there is a $(k-1)$-set
$L_1$ satisfying $|L_1\cap A|=\min(k-1,a)$, and a $(k-1)$-set $L_2$ satisfying
$|L_2\cap B|=\min(k-1,b)$. These two sets must have images fitting inside a
$k$-set; so
\[\min(k-1,a)+\min(k-1,b)\le k.\]
But the left hand side is
\[\min(2k-2,k-1+a,k-1+b,n).\]
Since $a,b\ge2$ and $3\le k\le n-1$, the minimum is at least $k+1$, a
contradiction. Hence one of the invariant sets is a singleton; that is, $G$ fixes a point, and the first paragraph applies.
\hfill$\Box$\end{proof}
The converse is also true. This gives a complete characterisation of the
intransitive $k$-et groups for $2<k<n$, and shows that $k$-et implies
$(k-1)$-et for $2<k<n/2$ and intransitive groups.
\begin{prop}\label{prop2.4}
Suppose that $G$ fixes a point and is $(k-1)$-homogeneous on the remaining
points. Then $G$ has the $k$-et property.
\end{prop}
\begin{proof}
Let $a$ be the fixed point, and $B$ any $k$-set containing $a$. We claim that
$B$ witnesses the $k$-et property. Let $\mathcal{P}=\{A_1,\ldots,A_k\}$ be a
$k$-partition, where $A_1$ is the part containing $a$, and choose
$a_i\in A_i$ for $i=2,\ldots,k$. Choose an element $g\in G$ mapping
$\{a_2,\ldots,a_k\}$ to $B\setminus\{a\}$; then $B$ is a section for $\mathcal{P}g$.
\hfill$\Box$\end{proof}
However, the class of intransitive $2$-et groups is larger. Any permutation
group $G$ with two orbits $A$ and $B$, such that $G$ acts transitively on
$A\times B$, has the $2$-et property, with a set containing one point from
each orbit as a witnessing set. For suppose that $G$ has two orbits and is
transitive on their product, and let $\mathcal{P}=\{P_1,P_2\}$ be any $2$-partition.
Without loss of generality, $P_1$ contains a point $a\in A$. If $P_2$ contains
a point $b\in B$, then $\{a,b\}$ is the required section; so we can suppose
that $B\subseteq P_1$. Running the argument the other way, we see that also
$A\subseteq P_1$, a contradiction.
\medskip
Now we turn to transitive groups.
We say that a transitive permutation group $G$ is \emph{fully imprimitive} if
the following equivalent conditions hold:
\begin{enumerate}\itemsep0pt
\item any two points of $\Omega$ are contained in a proper block of
imprimitivity;
\item every orbital graph for $G$ is disconnected;
\item for any $\alpha\in\Omega$ and $g\in G$, $\langle G_\alpha,g\rangle\ne G$.
\end{enumerate}
For example, a group $G$ in its regular action is fully imprimitive if and only
if it is not cyclic.
\begin{prop}\label{p:2et}
The transitive group $G$ has the $2$-et property if and only if it is not
fully imprimitive. Moreover, a $2$-set witnesses $2$-et if and only if it is not contained in any proper block of imprimitivity.
\end{prop}
\begin{proof}
The set $S$ is a witnessing $2$-set if and only if the graph $X$ with vertex set
$\Omega$ and edge set $S^G$ has the property that, for every $2$-partition
$\{A,B\}$ of $\Omega$, $X$ has an edge between $A$ and $B$. This simply means
that $X$ is connected. The proposition follows.
\hfill$\Box$\end{proof}
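The connectivity criterion in this proof is easy to test directly. The sketch below (an illustration in our own encoding, not from the text) contrasts the regular action of the cyclic group $C_4$, which is not fully imprimitive, with that of the Klein four-group, which is:

```python
def is_connected(edges, n):
    """Breadth-first check that the graph on {0,...,n-1} with the given edges is connected."""
    adj = {v: set() for v in range(n)}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b); adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w); stack.append(w)
    return len(seen) == n

def witnesses_2et(group_maps, pair, n):
    # the pair witnesses 2-et iff the graph with edge set pair^G is connected
    edges = {frozenset({g[x] for x in pair}) for g in group_maps}
    return is_connected(edges, n)

n = 4
cyclic = [{x: (x + k) % n for x in range(n)} for k in range(n)]  # C4, regular action
klein = [{x: x ^ k for x in range(n)} for k in range(n)]         # C2 x C2, regular action

assert witnesses_2et(cyclic, (0, 1), n)   # C4 is cyclic, so not fully imprimitive
# every orbital graph of the Klein group is a perfect matching, hence disconnected
assert not any(witnesses_2et(klein, (0, k), n) for k in (1, 2, 3))
print("2-et agrees with connectivity of the orbital graph")
```

Note that $\{0,2\}$ fails to witness $2$-et for $C_4$: it lies in the proper block $\{0,2\}$, and its orbital graph is the disconnected matching $\{0,2\},\{1,3\}$.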
\begin{prop}\label{p:2blocks}
A transitive imprimitive permutation group with the $3$-et property has two
blocks of imprimitivity in any block system; a witnessing set contains two
points from one block and one from the other. Moreover, if $n>4$, the block
system is unique.
\end{prop}
\begin{proof}
A witnessing set cannot be contained in a block, and cannot contain three
points from distinct blocks; so it must have two points from one block and
one from another. But if $\mathcal{P}$ is a partition two of whose three parts are
blocks, then no image of $S$ is a transversal for $\mathcal{P}$. So there can be at
most two blocks.
If there are two block systems, then blocks from the two systems intersect in
$n/4$ points; this contradicts the previous paragraph unless $n=4$.
\hfill$\Box$\end{proof}
\begin{example}
The obvious place to look is at a maximal imprimitive group with
two blocks of imprimitivity. So take the wreath product $S_m\wr S_2$, with
$n=2m$. We claim that a $3$-set containing two points
from one block of imprimitivity witnesses $3$-et.
Take any $3$-partition $\{P_1,P_2,P_3\}$. If none of the three parts meets
both bipartite blocks, then two of them are contained in one block and one
in the other, and the assertion is clear. So suppose that $P_3$ meets both
bipartite blocks. Choose arbitrary representatives for $P_1$ and $P_2$. If
they happen to be in the same block, then choose the representative for $P_3$
from the other block; otherwise choose any representative. Note that the example holds if we replace $S_m$ with another $2$-homogeneous group.
\end{example}
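For $m=3$ the claim in this example can be verified exhaustively; the following sketch (our ad hoc construction of $S_3\wr S_2$ of degree $6$, with blocks $\{0,1,2\}$ and $\{3,4,5\}$) checks all $90$ partitions:

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))
group = []
for sigma in S3:
    for tau in S3:
        g = {x: sigma[x] for x in range(3)}
        g.update({x: 3 + tau[x - 3] for x in range(3, 6)})
        group.append(g)                               # preserve both blocks
        h = {x: 3 + sigma[x] for x in range(3)}
        h.update({x: tau[x - 3] for x in range(3, 6)})
        group.append(h)                               # swap the two blocks

S = frozenset({0, 1, 3})        # two points in the block {0,1,2}, one in {3,4,5}
orbit = {frozenset(g[x] for x in S) for g in group}

# all 3-partitions of the 6 points, as frozensets of frozensets
partitions = {frozenset(frozenset(i for i in range(6) if lab[i] == c) for c in range(3))
              for lab in product(range(3), repeat=6) if set(lab) == {0, 1, 2}}
assert all(any(all(len(T & P) == 1 for P in part) for T in orbit)
           for part in partitions)
print("S witnesses 3-et for S3 wr S2 of degree 6")
```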
It is not the case that, with finitely many exceptions, primitive groups
with $3$-et are $2$-homogeneous.
\begin{example}
There is an infinite family of primitive groups which have the $3$-et
property but are not $2$-homogeneous.
The groups we take are $S_n\wr S_2$ in its product action on the square grid
of size $n$, for $n\ge4$. We claim that $S=\{(1,1),(1,2),(2,3)\}$ witnesses the
$3$-et property.
Suppose that we have any $3$-partition $\{P_1,P_2,P_3\}$ of the grid. First we observe that there
is a line of the grid meeting at least two parts. For if every horizontal line
is contained in a single part, then any vertical line meets all three parts.
Suppose first that $L$ is a line meeting only $P_1$ and $P_2$, without loss
$L=\{(1,x):x\in\{1,\ldots,n\}\}$. Let $A_i=\{x:(1,x)\in P_i\}$ for $i=1,2$.
If $a_i\in A_i$ for $i=1,2$, then every point $(j,x)$ with $x\ne a_1,a_2$ and
$j>1$ must lie in $P_1\cup P_2$, since otherwise we would have a transversal
in the orbit of $S$. So for any point $(u,v)\in P_3$, we have $u>1$ and
$v=a_1$ or $v=a_2$. Assuming without loss that $|A_2|>1$, we can repeat the
argument with another point of $A_2$; this leads to the conclusion that
$|A_1|=1$ and $P_3\subseteq\{(j,a_1):j>1\}$. Without loss, take $a_1=1$.
Suppose that $(2,1)\in P_3$. If there is a point $(x,y)\in P_2$ with $x\ne 2$
and $y\ne 1$, then we have a section in the orbit of $S$; so suppose not,
so that every such point lies in $P_1$. Again, this forces $P_3$ to be the
singleton $\{(2,1)\}$, and all points $(x,y)$ with $x>2$ and $y>1$ lie in
$P_1$. If any point $(x,1)$ with $x>2$ belongs to $P_1$, then we have our
section $\{(1,2),(2,1),(x,1)\}$; so we can suppose that all these points belong
to $P_2$. But then $\{(2,1),(3,1),(4,2)\}$ is the required section.
The other case is that no line meets just two parts of the partition. Choose
a line $L$ meeting all three parts, say $L=\{(1,x):x\in\{1,\ldots,n\}\}$.
Let $A_i=\{x:(1,x)\in P_i\}$. Now we find a section of the required kind if,
for example, $\{2,\ldots,n\}\times A_1$ contains a point of $P_1$; so we may
assume that this set is contained in $P_2\cup P_3$, and similarly for the
other two sets of this form. Since $n\ge4$, at least one of the sets $A_i$
has size greater than $1$, say $A_1$. Now choose $a,a'\in A_1$. Then the
line $L=\{(x,a):x\in\{1,\ldots,n\}\}$ meets $P_1$ and one other part, so it
meets all three parts. If $(x_2,a)\in P_2$ and $(x_3,a)\in P_3$, then these
two points together with $(1,a')$ form the required section.
\end{example}
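The case $n=4$ of this example is small enough to test by random sampling (an illustration only; sampled partitions of course cannot replace the proof). Here the product action of $S_4\wr S_2$ on the $4\times4$ grid is encoded by hand:

```python
from itertools import permutations
import random

n = 4
S4 = list(permutations(range(n)))
pts = [(i, j) for i in range(n) for j in range(n)]

def act(g, p):
    """Product action of S4 wr S2: permute rows and columns, optionally transposing."""
    sigma, tau, s = g
    i, j = p
    return (sigma[j], tau[i]) if s else (sigma[i], tau[j])

group = [(sigma, tau, s) for sigma in S4 for tau in S4 for s in (0, 1)]
S = frozenset({(0, 0), (0, 1), (1, 2)})   # the claimed witnessing 3-set
orbit = {frozenset(act(g, p) for p in S) for g in group}

def has_section(parts):
    return any(all(len(T & P) == 1 for P in parts) for T in orbit)

random.seed(0)
for _ in range(200):
    while True:                            # a random 3-partition of the grid
        labels = {p: random.randrange(3) for p in pts}
        if len(set(labels.values())) == 3:
            break
    parts = [{p for p in pts if labels[p] == c} for c in range(3)]
    assert has_section(parts)
print("every sampled 3-partition of the 4x4 grid meets the orbit of S")
```

The orbit of $S$ consists exactly of the $3$-sets with two points on a common line and a third point sharing no line with either.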
Transitive groups with $k$-et for $k>3$ must be primitive:
\begin{prop}\label{p:prim}
Let $G$ be a transitive permutation group of degree $n$ having the $k$-et
property, where $n>9$ and $3<k<n-2$. Then $G$ is primitive.
\end{prop}
\begin{proof}
Suppose that the permutation group $G$ has the $k$-et property, and is
transitive and
imprimitive, with $b$ blocks of imprimitivity of size $a$. Let $l=k-1$.
If $K$ is a witnessing $k$-set, then $K$ contains a representative of every
orbit of $G$ on $l$-sets, and hence contains an $l$-set partitioned into
at most $b$ parts each of size at most $a$ in every possible way by the
blocks of imprimitivity. Hence any two such partitions differ in a single
move (consisting of reducing one part by one and increasing another by one).
So the question becomes:
\begin{quote}
For which $l,a,b$ is it true that any two partitions of $l$ into at most $b$
parts of size at most $a$ differ by at most one move?
\end{quote}
We will call such partitions \emph{admissible}.
When we phrase the problem in this way, we see that it is invariant under
replacement of $l$ by $n-l$, where $n=ab$; so we may assume, without loss
of generality, that $l\le n/2$. Also, replacing a partition by its dual, we
see that it is invariant under interchange of $a$ and $b$; so we may assume
that $b\le a$.
Suppose first that $b=2$. Then $l\le a$, so two admissible partitions are
$(l)$ and $(\lfloor l/2\rfloor, \lceil l/2\rceil)$. These differ by at least
two moves if $l\ge4$. So only $l\le 3$ is possible here, giving $k\le 4$.
If $k=4$, let $A$ be a witnessing set for $4$-et. Then $A$ contains $3$ elements from one block of imprimitivity and $1$ element from the other block. However,
no image of such a set is a section for a
$4$-partition in which each block of imprimitivity is split into two parts, a contradiction.
Now suppose that $b\ge3$. For the first subcase, suppose that $l\le a$. Then
the partition $(l)$ is admissible; and there is an admissible partition with
largest part $\lceil l/b\rceil$, and these are at least two steps apart
unless $l=2$. In the second subcase, there is a partition with largest part
$a=n/b$, and a partition with largest part $\lceil l/b\rceil<n/(2b)+1$. If
these are at most one step apart, then $n/b<n/(2b)+2$, giving $n<4b$, so
that $a\le 3$. Since, by assumption, $b\le a$, we have $n\le 9$.
So the proposition is proved.
\hfill$\Box$\end{proof}
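The combinatorial question displayed in this proof is easy to explore by machine. The sketch below (our formalisation of ``admissible'' partitions and single moves) reproduces the bounds used above:

```python
from itertools import combinations

def admissible(l, a, b):
    """Partitions of l into at most b parts, each of size at most a,
    written as non-increasing b-tuples padded with zeros."""
    out = set()
    def rec(remaining, max_part, parts):
        if len(parts) == b:
            if remaining == 0:
                out.add(tuple(parts))
            return
        for p in range(min(remaining, max_part), -1, -1):
            rec(remaining - p, p, parts + [p])
    rec(l, a, [])
    return out

def one_move_apart(p, q, a):
    """Can q be obtained from p by moving a single element between parts?"""
    if p == q:
        return True
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and p[i] > 0 and p[j] < a:
                r = list(p)
                r[i] -= 1; r[j] += 1
                if tuple(sorted(r, reverse=True)) == q:
                    return True
    return False

def all_within_one_move(l, a, b):
    return all(one_move_apart(p, q, a)
               for p, q in combinations(sorted(admissible(l, a, b)), 2))

assert all_within_one_move(3, 4, 2)        # b = 2 works for l <= 3 ...
assert not all_within_one_move(4, 4, 2)    # ... but fails for l = 4, as in the proof
assert not all_within_one_move(3, 3, 3)    # b >= 3, l <= a: (3) vs (1,1,1) needs two moves
assert all_within_one_move(2, 3, 3)
```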
The condition $k>3$ is necessary here, as we saw earlier.
Higher $k$ implies higher transitivity:
\begin{prop}\label{p:2homog}
Let $k\ge 5$, and $G$ be transitive of degree $n$ with the $k$-et property,
where $n>R(k-1,k-1)$. Then $G$ is $2$-homogeneous.
\end{prop}
\begin{proof}
We know that $G$ is primitive. If it is not $2$-homogeneous, it has more than
one orbit on $2$-sets. Partition these orbits into two parts, called red and
blue, in any manner. Since $n>R(k-1,k-1)$, Ramsey's theorem implies that there is a
monochromatic $(k-1)$-set, say red; the witnessing $k$-set $S$ must contain a red
$(k-1)$-set. So the blue edges within $S$ form a star, and all but
(at most) one of its vertices have blue valency $1$. So the blue graph cannot have adjacent
vertices of degree greater than $1$, since such a configuration would give us a
blue triangle or path of length~$3$ inside the witnessing $k$-set. But this is a
contradiction, since the blue graph is regular and connected.
\hfill$\Box$\end{proof}
We will in fact show that all such transitive groups with $5\le k\le n/2$ are $2$-homogeneous.
By a different argument we can extend this result to the $4$-et property.
\begin{theorem} \label{t:4et2hom} Let $G$ be a permutation group of degree $n\ge 8$ that satisfies $4$-et.
If $G$ is primitive, but not $2$-homogeneous, then $n=100$ and $G$ is either the Higman-Sims group or its automorphism group.
\end{theorem}
\begin{proof}
Suppose that $G$ is such a group, of degree $n\ge8$. Let $\Gamma$ be any
orbital graph for $G$, and let $A$ be a $4$-set witnessing $4$-et. Then all
$3$-vertex induced subgraphs of $\Gamma$ are represented in $A$. Since a
$3$-clique and a $3$-coclique cannot coexist within a $4$-set, we see that
either $\Gamma$ or its complement is triangle-free.
Suppose that $\Gamma$ is triangle-free. Then $\Gamma$ must contain all other
$3$-vertex graphs. (By Ramsey's theorem it must contain a null graph of size
$3$. If it omits the graph with three vertices and two edges then it consists
of isolated edges, and if it omits the graph with three vertices and one edge
then it is complete bipartite. In either case, $G$ is imprimitive.) Then we see
that the induced subgraph on $A$ must be the disjoint union of a path of
length~$2$ and an isolated vertex.
So the witnessing $4$-set contains two or four edges of any orbital graph,
which implies that $G$ has at most three orbits on $2$-sets. First we eliminate
the case when there are three orbits. In this case, let the orbital graphs be
$\Gamma_1$, $\Gamma_2$, $\Gamma_3$. All three are triangle-free, and the
structure of $A$ is as follows, up to choice of numbering: $\{1,2\}$ and
$\{2,3\}$ are edges of $\Gamma_1$; $\{1,4\}$ and $\{2,4\}$ of $\Gamma_2$; and
$\{1,3\}$ and $\{3,4\}$ of $\Gamma_3$. So the end-vertices of a path of
length~$2$ in $\Gamma_i$ are joined in $\Gamma_{i-1}$ (indices mod~$3$).
Let the valencies of the three graphs be $k_1$, $k_2$, and $k_3$, and suppose
without loss of generality that $k_1\ge k_3$. Now there are $k_1(k_1-1)$ paths
of length two in $\Gamma_1$ leaving a vertex $v$; all end among the $k_3$
neighbours of $v$ in $\Gamma_3$. So if $w\in\Gamma_3(v)$, the number of common
neighbours of $v$ and $w$ in $\Gamma_1$ is $k_1(k_1-1)/k_3\ge k_1-1>k_1/2$.
So any two neighbours $w,w'$ of $v$ in $\Gamma_3$ have a common neighbour $u$ in
$\Gamma_1$. Now $w$ and $w'$ are the end-vertices of a path of length~$2$ in $\Gamma_3$
(through $v$), so $\{w,w'\}$ is an edge of $\Gamma_2$; but they are also the end-vertices
of a path of length~$2$ in $\Gamma_1$ (through $u$), so $\{w,w'\}$ is an edge of
$\Gamma_3$, a contradiction.
So we can assume that $G$ has just two orbits on $2$-sets, and is a group of
automorphisms of a triangle-free strongly regular graph. Using CFSG, the only
such graphs are known ones with $10$, $16$, $50$, $56$, $77$ or $100$ vertices;
computation shows that only the last of these has automorphism group with the
$4$-et property (and moreover, both the Higman--Sims group and its automorphism
group have this property). \hfill$\Box$
\end{proof}
\section{Some examples}
\label{s:examples}
In this section, we treat various families of groups. First, the groups
$\mathop{\mathrm{AGL}}(d,2)$.
\begin{theorem}\label{t:ex.affine}
Let $G$ be the affine group $\mathop{\mathrm{AGL}}(d,2)$, where $d\ge3$. Then $G$ has the $k$-et
property for $k\le 4$,
but fails the $k$-et property for all $k$ with $4<k\le n/2$ except when $d=4$.
The group $\mathop{\mathrm{AGL}}(4,2)$ has the $6$-et property but not the $5$-et property.
\end{theorem}
\begin{remark}
This example shows that (unlike the $k$-ut property) the $k$-et property is not
monotone.
\end{remark}
\begin{proof}
The affine group $\mathop{\mathrm{AGL}}(d,2)$ is $3$-transitive, and so certainly has the
$k$-et property for $k\le 3$. We will show in Theorem~\ref{egregious} that it
does not have the $k$-et property for $k\ge8$. We treat the remaining values
individually.
\paragraph{The case $k=4$:}
We claim that an affine independent $4$-set witnesses the $4$-et property.
For let $\{P_1,\ldots,P_4\}$ be any $4$-partition, with (without loss)
$|P_4|>1$. Choose arbitrary representatives of $P_1,P_2,P_3$. There is at
most one point which makes an affine plane with the three chosen
representatives; since $|P_4|>1$ we can avoid this point in our choice of the
representative for $P_4$.
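The uniqueness claim here rests on the standard fact that a $4$-subset of $\mathrm{AG}(d,2)$ is an affine plane exactly when its points sum (XOR) to zero, so that three points $a,b,c$ have the unique completion $a+b+c$. A small Python check of this fact for $d=3$ (our encoding of points as integers read as bit vectors):

```python
from itertools import combinations

d = 3
V = list(range(2 ** d))                 # points of AG(3,2) as bit vectors

def is_plane(q):
    """Four points of AG(d,2) form an affine plane iff they XOR to zero."""
    a, b, c, e = q
    return a ^ b ^ c ^ e == 0

for a, b, c in combinations(V, 3):
    completions = [x for x in V if x not in (a, b, c) and is_plane((a, b, c, x))]
    assert completions == [a ^ b ^ c]   # exactly one completing point: the XOR
print("every 3-set of AG(3,2) has a unique completion to an affine plane")
```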
\paragraph{The case $k=5$:}
There are two orbits on $5$-sets, an affine plane with an extra
point, and an affine independent $5$-tuple. To defeat the first type, take
the partition $\mathcal{P}=(P_1,\ldots,P_5)$, where $P_1$ and $P_2$ are singletons,
$P_3$ is a $2$-set extending $P_1\cup P_2$ to a plane, $P_4$ a $4$-set
extending $P_1\cup P_2\cup P_3$ to a $3$-space, and $P_5$ the remaining set.
To defeat the second type, let $\mathcal{P}$ be the partition where $P_1,\ldots,P_4$
are singletons forming an affine plane and $P_5$ the remaining set.
\paragraph{The case $k=6$:} First we deal with $\mathop{\mathrm{AGL}}(4,2)$.
Let $G=\mathop{\mathrm{AGL}}(4,2)$. Then $G$ is $3$-transitive, has
two orbits on $4$-sets (planes, and affine-independent sets), two orbits on
$5$-sets (plane plus point and affine-independent), and three orbits on
$6$-sets (six points of a $3$-space, an affine-independent $5$-set and one
point making a plane with three points of this set, and a $6$-set all of whose
$5$-subsets are affine independent). Call these orbits on $6$-sets $A$, $B$,
$C$. We are going to show that a set of type $B$ is a witnessing set. Note that
given an affine independent $5$-set, there are ${5\choose3}=10$ points which
enlarge it to a set of type $B$, and only one which enlarges it to a set of
type $C$.
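The counts of $10$ and $1$ in this paragraph can be confirmed directly; in the sketch below (points of $\mathrm{AG}(4,2)$ encoded as the integers $0$--$15$, affine dependence over $\mathrm{GF}(2)$ meaning a nonempty even-size subset with zero XOR) we classify the $11$ extensions of a fixed affine independent $5$-set:

```python
from itertools import combinations

V = range(16)                               # points of AG(4,2) as integers

def affine_independent(S):
    # over GF(2), an affine dependence is an even-size subset with XOR zero
    # (checking sizes 2 and 4 suffices for sets of at most 5 points)
    for r in (2, 4):
        for sub in combinations(S, r):
            x = 0
            for v in sub:
                x ^= v
            if x == 0:
                return False
    return True

base = [0, 1, 2, 4, 8]                      # an affine independent 5-set
assert affine_independent(base)

type_B = type_C = 0
for p in set(V) - set(base):
    if all(affine_independent(f) for f in combinations(base + [p], 5)):
        type_C += 1                         # all 5-subsets affine independent
    else:
        type_B += 1                         # p completes a plane with 3 points of base
assert (type_B, type_C) == (10, 1)
print("10 extensions of type B and 1 of type C, as claimed")
```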
Let $\mathcal{P}$ be a $6$-partition.
\subparagraph{Case 1:} there are four singleton parts forming a plane. Then
given any fifth point, there are just three points which enlarge the resulting
$5$-set to a $6$-set of type $A$. Since the remaining two parts of the
partition, say $U$ and $V$, have twelve points between them, at least one
(say $V$) has size greater than $3$. So take a point of $U$, and then we can
find a point of $V$ which enlarges the resulting $5$-set to a $6$-set of
type $B$, as required.
\subparagraph{Case 2:} not the above.
We claim first that from any four parts of $\mathcal{P}$ we can choose representatives
not forming a plane. If all four parts are singletons, this is true by the case
assumption; otherwise, choose representatives of three of the parts, excluding
one part of size bigger than $1$; then all but one point of the excluded part
will work.
Now the six parts of $\mathcal{P}$ contain $16$ points, so the largest two parts, say
$U$ and $V$, together contain at least six points. Choose representatives
of the other four parts not forming a plane. Then just four points extend
this four-set to an affine dependent set, so some point of $U\cup V$ extends
the given four-set to an affine independent $5$-set.
If $|U|$ and $|V|$ are each at least two, then we can find a point in the
unused part which extends the $5$-set to a $6$-set of type $B$, since only
one point fails to do this.
In the remaining case, the partition has five singleton parts (forming an
affine independent $5$-set) and one part containing everything else. But all
but one point of this part extends the $5$-set to a $6$-set of type $B$. We
are done.
\medskip
For $d>4$, there is an additional type of $6$-set, namely an affine independent
set. Now a $6$-partition constructed as before (consisting of the differences
in an increasing chain of subspaces) has the property that any transversal
must be affine independent. On the other hand, a partition with four singleton
parts forming an affine plane has the property that all its transversals are
affine dependent. So $\mathop{\mathrm{AGL}}(d,2)$ does not have $6$-et for $d>4$.
\paragraph{The case $k=7$:}
For $k=7$, $d\ge5$, there is an affine independent $6$-set, and a $6$-set
contained in an affine $3$-space. These two sets cannot be contained in a
common $7$-set. For $d=4$, we can replace the first set with a set of type $C$.
\hfill$\Box$\end{proof}
Now we turn to the largest Mathieu group, and show:
\begin{theorem}\label{t:m24}
$M_{24}$ has the $k$-et property for $k\le 7$ (but not for larger $k$).
\end{theorem}
\begin{proof}
Let $G$ be the Mathieu group $M_{24}$, with $n=24$.
This group is $5$-transitive, so it has the $k$-et property for $k\le5$.
We show the $6$-et property. Recall that $M_{24}$ is the
automorphism group of a Steiner system $S(5,8,24)$ (blocks of size $8$, any
five points in a unique block). We claim that a $6$-set not contained in a
block is a witnessing set (these form a single orbit of $M_{24}$). For take
any $6$-part partition. By the Pigeonhole Principle, one of its parts (without
loss $P_6$) contains at least four points. Choose arbitrary representatives
of $P_1,\ldots,P_5$. These representatives lie in a unique block, which
has at most three more points; so there is a point of $P_6$ not in this block;
choose this as the representative of $P_6$.
A similar but more intricate argument shows that $M_{24}$ has the $7$-et
property. (It is a property of the Steiner system that, of any seven points,
six of them are contained in a block; a witnessing set is one in which six but
not all seven points lie in a block.) We have confirmed this by computer, and
also showed that it fails to have the $8$-et property. \hfill$\Box$
\end{proof}
We remark in passing that these sets of size $7$ are the minimal bases for
the permutation group $M_{24}$ (sets of smallest size whose pointwise
stabiliser is the identity).
We now turn to a collection of groups which have the $4$-et property.
\begin{theorem}\label{t:ex.Sp}
Let $G$ be the symplectic group $\mathop{\mathrm{Sp}}(2d,2)$ (in one of its $2$-transitive
representations of degree $2^{2d-1}\pm2^{d-1}$), or the affine symplectic group
$2^{2d}:\mathop{\mathrm{Sp}}(2d,2)$ of degree $2^{2d}$ (with $d\ge2$), or the
Conway group $\mathord{\mathrm{Co}}_3$ of degree $276$. Then $G$ has the $4$-et property.
\end{theorem}
\begin{proof}
We treat these groups using a variant of the arguments of
\cite[Proposition 4.7]{ArCa}. In each case the group has just two orbits
on $3$-subsets, each orbit $T$ forming a \emph{regular two-graph}
\cite{taylor}: this means
\begin{enumerate}\itemsep0pt
\item any $4$-set contains an even number of members of $T$;
\item any two points lie in $k$ members of $T$.
\end{enumerate}
The values of $k$ for the groups of interest are:
\begin{itemize}\itemsep0pt
\item For $\mathop{\mathrm{Sp}}(2d,2)$ with $n=2^{2d-1}\pm2^{d-1}$, $k=2^{2d-2}$ or
$k=2^{2d-2}\pm2^{d-1}-2$.
\item For $2^{2d}:\mathop{\mathrm{Sp}}(2d,2)$ with $n=2^{2d}$, $k=2^{2d-1}$ or
$2^{2d-1}-2$.
\item For $\mathord{\mathrm{Co}}_3$, $k=112$ or $162$.
\end{itemize}
In connection with (a), we will call a $4$-set \emph{full}, \emph{mixed},
or \emph{empty} according as it contains $4$, $2$ or $0$ members of $T$. It
is clear from Proposition~\ref{p:witness} that the only possible witnessing
sets for the $4$-et property are the mixed sets.
We need a small amount of theory of regular two-graphs. Suppose that $T$
is a regular two-graph. For any point $x$, form a graph with vertex set
$\Omega\setminus\{x\}$ whose edges are all pairs $\{y,z\}$ for which
$\{x,y,z\}\in T$. We say this graph is obtained by \emph{isolating} $x$.
Now the graph uniquely determines $T$: a triple $\{a,b,c\}$ containing $x$
is in $T$ if and only if the two vertices different from $x$ form an edge;
and a triple not containing $x$ is in $T$ if and only if an odd number of
$\{a,b\}$, $\{b,c\}$ and $\{a,c\}$ are in $T$. Also, the graph is regular
with valency $k$.
\begin{lemma}
Suppose that $G$ is an automorphism group of a regular two-graph $T$. Suppose
that
\begin{enumerate}\itemsep0pt
\item $G$ is transitive on the set of mixed $4$-sets;
\item $(n-4)/3<k<2(n-1)/3$.
\end{enumerate}
Then $G$ has the $4$-et property, with the mixed $4$-sets as witnessing sets.
\end{lemma}
\begin{proof}
Suppose that there is a partition $\mathcal{P}=\{P_1,\ldots,P_4\}$ (with
$|P_1|\le\cdots\le|P_4|$) for which no mixed $4$-set is a section. Thus every
section to $\mathcal{P}$ is a full or empty $4$-set. We first show that either all
sections are full, or all are empty. Suppose that $\{x_1,\ldots,x_4\}$ is a
full $4$-set with $x_i\in P_i$ for $i=1,\ldots,4$. If $x'_4$ is another
point in $P_4$, then $\{x_1,x_2,x_3,x'_4\}$ is a section containing a
$3$-set $\{x_1,x_2,x_3\}\in T$, so it must be a full $4$-set. By connectedness,
every section is full.
By replacing the two-graph by its complement if necessary, we may assume that
the sections are all full.
\subparagraph{Case 1:} $|P_1|=1$. Let $P_1=\{a\}$, and let $\Gamma$ be the
graph obtained by isolating $a$. Then $\Gamma$ contains all the edges of the
complete tripartite graph with tripartition $\{P_2,P_3,P_4\}$; so a vertex in
$P_2$ is joined to everything in $P_3\cup P_4$, and its valency is at least
$2(n-1)/3$ (since $P_2$ is the smallest of these three parts),
contradicting (b).
\subparagraph{Case 2:} $|P_1|>1$. Choose $a,b\in P_1$. Consider the $4$-set
$\{a,b,p_2,p_3\}$, where $p_2\in P_2$ and $p_3\in P_3$. Since
$\{a,p_2,p_3\}$ and $\{b,p_2,p_3\}$ are in $T$, we see that both or neither
of $\{a,b,p_2\}$ and $\{a,b,p_3\}$ are in $T$. Again by connectedness, it
follows that either $\{a,b,q\}\in T$ for all $q\notin P_1$, or this holds
for no $q\notin P_1$. Hence, in the graph obtained by isolating $a$, either
$b$ is joined to all vertices not in $P_1$, or to none of them. Thus either
$k\ge 3n/4$ or $k\le n/4 - 2$, contradicting (b).
This proves that every partition has a mixed $4$-set as a section. By (a),
a mixed $4$-set witnesses the $4$-et property.
\hfill$\Box$\end{proof}
Now we turn to the proof of the theorem. Note that these groups are all
$2$-transitive, and so have the $k$-et property for $k\le 2$; they
were shown to have the $3$-ut property (and hence the $3$-et property) in
\cite{ArCa}. (This also follows, more easily, from Proposition~\ref{p:watkins}
below.)
The group $\mathop{\mathrm{Sp}}(4,2)$ on $6$ points is $S_6$ and clearly has the
$4$-et property. Excluding this case, each of these groups is, as noted, the
automorphism group of a regular two-graph; so we only have to verify the
hypotheses of the Lemma. For (b), this is simple arithmetic; so we need to
prove that $G$ is transitive on mixed $4$-sets. For $\mathord{\mathrm{Co}}_3$, this can
be checked by computation.
For the infinite families, we argue as follows. We show that the groups in
question have $6$ orbits on $4$-tuples of distinct points whose underlying
set is a mixed $4$-set. This will prove the claim, since there are six ways
of selecting two $3$-subsets of a $4$-set.
Our main tool is \emph{Witt's Theorem}, see \cite[Theorem 7.4]{classical}.
We can translate any $4$-set so that it contains $0$, and show that the
triples of points making up a mixed $4$-set with $0$ fall into six orbits.
Witt's theorem says that if $f:U\to V$ is a linear isometry on a subspace $U$
of a formed space $V$ with radical $\mathrm{Rad}(V)$, and $f$ maps
$U\cap\mathrm{Rad}(V)$ to $f(U)\cap\mathrm{Rad}(V)$, then $f$ extends to a
linear isometry from $V$ to $V$. In our case, the radical of $V$ is $\{0\}$,
so the second condition is automatically satisfied.
In the case $G=2^{2d}:\mathop{\mathrm{Sp}}(2d,2)$, the space $V$ will be the
$2d$-dimensional space over $\mathop{\mathrm{GF}}(2)$ with a symplectic form $B$ on it.
In the case $G=\mathop{\mathrm{Sp}}(2d,2)$ in either of its $2$-transitive
actions, $\Omega$ can be identified with the set of zeros of a non-singular
quadratic form of one of the two possible types on the space $V$ (which also
carries a symplectic form $B$, obtained by polarising the quadratic form). We
will apply Witt's theorem to these formed spaces. In each case, the triples
of the two-graph can be taken as those $\{x,y,z\}$ for which
\[B(x,y)+B(y,z)+B(z,x)=0.\]
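For $d=2$ in the affine case ($n=16$) the two-graph axioms and the stated value of $k$ can be verified numerically. The sketch below uses the standard symplectic form on $\mathrm{GF}(2)^4$ in our own coordinates; this choice of $T$ gives $k=2^{2d-1}-2=6$, and the complementary orbit of triples then gives $k=2^{2d-1}=8$:

```python
from itertools import combinations

def B(x, y):
    """Standard symplectic form on GF(2)^4, points encoded as 4-bit integers."""
    a = [(x >> i) & 1 for i in range(4)]
    b = [(y >> i) & 1 for i in range(4)]
    return (a[0] * b[1] + a[1] * b[0] + a[2] * b[3] + a[3] * b[2]) % 2

V = range(16)
T = {frozenset(t) for t in combinations(V, 3)
     if (B(t[0], t[1]) + B(t[1], t[2]) + B(t[2], t[0])) % 2 == 0}

# axiom (a): every 4-set contains an even number of members of T
assert all(sum(frozenset(t) in T for t in combinations(q, 3)) % 2 == 0
           for q in combinations(V, 4))
# axiom (b): every pair lies in a constant number k of members of T
counts = {sum(frozenset({x, y, z}) in T for z in V if z not in (x, y))
          for x, y in combinations(V, 2)}
assert counts == {6}             # k = 2^(2d-1) - 2 for d = 2
print("the symplectic triples form a regular two-graph with k = 6 on 16 points")
```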
First note that a mixed $4$-set cannot be a subspace $W$ of $V$. For if so,
then either the symplectic form restricted to $W$ is identically zero, or it
is the unique such form on a $2$-dimensional space. Thus, either $B(x,y)=0$
for all $x,y\in W$, or $B(x,y)=1$ for all distinct non-zero $x,y\in W$;
calculation shows that $W$ is empty or full in the two cases.
So, if $\{0,a,b,c\}$ is a mixed $4$-set, then $\{a,b,c\}$ is a basis for a
$3$-dimensional subspace $U$ of $V$. Of the three inner products $B(a,b)$,
$B(b,c)$ and $B(c,a)$, one or two are zero, so there are six possibilities.
The values of $B$ on basis vectors determine uniquely its values on the whole
of $U$.
In the case of $\mathop{\mathrm{Sp}}(2d,2)$, we also have a quadratic form $Q$, which is
zero on $\{0,a,b,c\}$, and we see that the values of $Q$ on $U$ are also
determined, by the polarisation rule
\[Q(x+y)=Q(x)+Q(y)+B(x,y).\]
Hence there are just six orbits of the group on such tuples, as claimed.
\hfill$\Box$\end{proof}
\medskip
Another group which is an automorphism group of a regular two-graph is the
Higman--Sims group, with degree $176$. This group was shown in \cite{ArCa} to
have the $3$-ut property. We do not know whether it has $4$-et, but it is
possible to show that it has weak $4$-et.
\medskip
Here is an example to show that $k$-et does not imply $(k-1)$-ut for all
but finitely many groups.
\begin{theorem}\label{t:ex.psl}
Let $q$ be a prime power, and $\mathop{\mathrm{PSL}}(3,q)\le G\le\mathop{\mathrm{P}\Gamma\mathrm{L}}(3,q)$. Then $G$ has
the $4$-et property but not the $3$-ut property.
\end{theorem}
\begin{proof}
$G$ acts $2$-transitively on the point set $\Omega$ of the projective plane
$\mathrm{PG}(2,q)$. The group induced on a line of the plane by its setwise
stabiliser contains $\mathop{\mathrm{PGL}}(2,q)$, and so is $3$-transitive; and the pointwise
stabiliser of the line contains the translation group of the affine plane,
and so is transitive on the complement of the line. Thus, $G$ has just two
orbits on triples (collinear and noncollinear triples), and is transitive
on $4$-tuples $(x_1,x_2,x_3,x_4)$ where $x_1,x_2,x_3$ lie on a line $L$ and
$x_4\notin L$.
We show first that $G$ does not satisfy $3$-ut. (This is a special case of an
argument in \cite{ArCa}.) Let $a$ be a point on a line $L$, and consider the
partition $\{\{a\}, L\setminus\{a\}, \Omega\setminus L\}$. Clearly any
section consists of three noncollinear points.
Now we show that the set $\{x_1,\ldots,x_4\}$ in the first paragraph is a
witnessing set for the $4$-et property. Let $\{P_1,P_2,P_3,P_4\}$ be any
$4$-partition of $\Omega$.
First we show that there is a line $L$ meeting at least three parts of the
partition. Let $L'$ be any line. If $L'$ meets at least three parts, then
take $L=L'$. If not, suppose without loss that $L'\subseteq P_1\cup P_2$.
Choose $y_3\in P_3$ and $y_4\in P_4$, and let $L$ be the line $y_3y_4$.
Then $L$ intersects $L'$, and so contains a point in either $P_1$ or $P_2$.
Now let $L$ be a line meeting at least three parts. If $L$ meets only
three parts, say $P_1,P_2,P_3$, choose $y_i\in P_i\cap L$ for $i=1,2,3$
and $y_4\in P_4$; if $L$ meets all four parts, then choose any point
$y_4\notin L$, and suppose without loss that $y_4\in P_4$, and then
choose $y_i\in P_i\cap L$ for $i=1,2,3$. In either case, $(y_1,\ldots,y_4)$
is a section for the partition and lies in $(x_1,\ldots,x_4)G$. \hfill$\Box$
\end{proof}
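For $q=2$ the witnessing argument can be verified exhaustively on the Fano plane: every $4$-partition of its $7$ points has a transversal consisting of a line together with one further point. A brute-force check (our labelling of $\mathrm{PG}(2,2)$):

```python
from itertools import combinations, product

# the seven lines of the Fano plane PG(2,2), with points labelled 0..6
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
points = set(range(7))
assert all(len(L1 & L2) == 1 for L1, L2 in combinations(lines, 2))

# the claimed witnessing orbit: a line together with one further point
witness = {frozenset(L | {p}) for L in lines for p in points - L}

seen = set()
for lab in product(range(4), repeat=7):
    if set(lab) != {0, 1, 2, 3}:
        continue
    parts = [{i for i in range(7) if lab[i] == c} for c in range(4)]
    key = frozenset(frozenset(P) for P in parts)
    if key in seen:
        continue
    seen.add(key)
    assert any(all(len(W & P) == 1 for P in parts) for W in witness)
assert len(seen) == 350          # the number of 4-partitions of 7 points
print("every 4-partition of the Fano plane has a section of the witnessing type")
```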
\section{The $k$-et property for $k\ge8$}
\label{s:8}
In this section we show that there is an absolute bound on $k$ for which a
transitive $k$-et group other than a symmetric or alternating group can exist.
Our result is as follows.
\begin{theorem}\label{egregious}
For $8\le k\le n/2$, a transitive permutation group of degree $n$ which has the
$k$-et property is the symmetric or alternating group.
\end{theorem}
The theorem is best possible: we saw earlier that $M_{24}$ has the $7$-et
property. We show in the next section that it is the only such example.
\begin{proof}
Let $G$ be a transitive group of degree $n$ with the $k$-et property, where
$n$ and $k$ are as above. By Proposition~\ref{p:prim}, $G$ is primitive.
We begin with an observation that will be used repeatedly in the proof.
The $k$-et property is closed upwards; so we may assume that $G$ is
a maximal subgroup of $S_n$ other than $A_n$, or a maximal subgroup of $A_n$.
We also need a technique which helps deal with groups which are not
$2$-homogeneous (and which can be adapted to other cases as well).
\begin{lemma}
Let the transitive group $G$ be contained in the automorphism group of a graph
$\Gamma$ with clique number $\omega$ and independence number $\alpha$, and
suppose that $G$ has the $k$-et property, with $k\ge4$.
\begin{enumerate}\itemsep0pt
\item If $\omega,\alpha\ge3$ then $k\ge\omega+\alpha-1$.
\item If $\omega\ge3$ and $\alpha=2$, then $k\ge\omega+2$.
\end{enumerate}\label{l:clique_ind}
\end{lemma}
\begin{proof}
Note that $G$ is primitive, by Proposition~\ref{p:prim}.
(a) Suppose that $G$ has the $k$-et property with $4<k\le\omega+\alpha-2$.
Choose $l,m$ with $3\le l,m\le k-1$ and $l+m=k+2$. Now choose two $(k-1)$-subsets
$A$ and $B$ such that $A$ contains an $l$-clique $C$ and $B$ contains an
independent set $D$ of size $m$. We show that (in the terminology introduced
earlier) $A$ and $B$ cannot coexist.
Let $K$ be a $k$-set containing
$G$-images of $A$ and $B$; without loss, $A,B\subseteq K$, and so certainly
$C,D\subseteq K$. Since $|C|+|D|=k+2$, and $|C\cup D|\le k$, we have
$|C\cap D|\ge2$. But this is a contradiction, since two points of $C$ are
joined while two points of $D$ are not.
(b) Suppose that $G$ has the $k$-et property with $4<k<\omega+2$. Let $S$ be
a witnessing $k$-set. Then $S$ contains a $(k-1)$-clique; so the complementary
graph restricted to $S$ is a star. Also, the complementary graph has edges,
and its valency is at least~$2$ (valency~$1$ would give a $G$-invariant
perfect matching, contradicting primitivity); so it contains a $4$-vertex path
or cycle, and hence $S$ must contain such a path or cycle in the complement, a
contradiction.
\hfill$\Box$\end{proof}
We also make frequent use of Proposition~\ref{p:order}, the \emph{order bound},
and the remark following it (asserting that if $G$ is shown to fail $k$-et
because it fails the order bound, then $G$ does not have $l$-et for
$k\le l\le n/2$).
\medskip
Now we begin our analysis of primitive groups. The strategy is almost always
to find a lower bound for $k$ using the Lemma above, by finding a suitable
graph on which our group acts, and showing that for this value of $k$ the
order bound is violated. The calculations for the last step are exceedingly
messy, but in virtually every case we succeed with plenty to spare. (In
outline, a group with $k$-et has order not much less than $n^{k-1}$; but
in all cases we know, or have good upper bounds for, $|G|$.) In the
first case, we outline the calculations.
\paragraph{Case 1:} $G$ is not basic. Then $G\le S_q\wr S_m$, acting on
the set $\{1,\ldots,q\}^m$ of all $m$-tuples over an alphabet of size $q$.
We may assume that $q\ge5$, since if $q\le4$ then $G$ has a regular normal
subgroup and is contained in an affine group (this case is treated later).
This group is the automorphism group of the graph in which two tuples are
joined if they agree in at least one coordinate. This graph has a clique of
size $q^{m-1}$, consisting of all $m$-tuples with a fixed value in the first
coordinate; and an independent set of size $q$, consisting of the ``diagonal''
tuples $(x,x,\ldots,x)$ for $x\in\{1,\ldots,q\}$. Thus, if $G$ is
$k$-et with $k>4$, then $k\ge q^{m-1}+q-1$. But for $k=q^{m-1}+q-1$, we can
see that the order bound fails.
For we can take $k$ a little smaller, say $k=q^{m-1}+1$; it then suffices to
show that
\[(q!)^mm!=|G|<{q^m\choose q^{m-1}}/(q^{m-1}+1).\]
The left-hand side is smaller than $q^{qm}m^m$, whereas the right-hand side is
greater than
\[(q^m-q^{m-1})^{q^{m-1}}/((q^{m-1})^{q^{m-1}})=(q-1)^{q^{m-1}},\]
and the inequality $q^{qm}m^m<(q-1)^{q^{m-1}}$ certainly holds for $m>2$.
For $m=2$, we need to do the argument with a little more care. It is enough
to use the exact value $k=q^{m-1}+q-1=2q-1$ given by our argument.
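As an informal sanity check (not part of the proof), both order-bound estimates in this case can be verified by exact integer arithmetic for small parameters:

```python
from math import comb, factorial

def wreath_order(q, m):
    # |S_q wr S_m| = (q!)^m * m!
    return factorial(q) ** m * factorial(m)

def bound(q, m, k):
    # order bound for k-et on n = q^m points: |G| must be >= C(n, k-1)/k
    return comb(q ** m, k - 1) // k

# m > 2: the bound already fails at k = q^{m-1} + 1
for q in (7, 8, 9):
    for m in (3, 4):
        assert wreath_order(q, m) < bound(q, m, q ** (m - 1) + 1)

# m = 2: use the exact value k = 2q - 1 given by the clique/coclique argument
for q in (7, 8, 9):
    assert wreath_order(q, 2) < bound(q, 2, 2 * q - 1)
```

Integer floor division is used deliberately, so the comparison is exact even for the very large binomial coefficients that arise.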
\medskip
We conclude that $G$ must be basic. By the O'Nan--Scott Theorem
\cite[Theorem 4.1A]{dixon}, $G$ is affine, diagonal, or almost simple.
\paragraph{Case 2:} $G$ is diagonal. Then $G\le T^d(\mathop{\mathrm{Out}}(T)\times S_d)$ for
some finite simple group $T$, with $n=|T|^{d-1}$, where $\mathop{\mathrm{Out}}(T)$ is the outer
automorphism group of $T$; and we may assume that equality holds.
We use the fact that outer automorphism groups of simple groups are small.
Certainly, since every simple group $T$ is generated by two elements, we have
$|\mathop{\mathrm{Aut}}(T)|\le|T|^2$, so $|\mathop{\mathrm{Out}}(T)|\le|T|$.
The domain for $G$ is identified with $T^{d-1}$; and $G$ is generated by
right translations, the map
\[\lambda_t:(t_1,\ldots,t_{d-1})\mapsto
(t^{-1}t_1,\ldots,t^{-1}t_{d-1})\]
for $t\in T$, automorphisms of $T$ acting componentwise (where inner
automorphisms are represented by the composition of $\lambda_t$ and right
multiplication by $t$ in each coordinate), coordinate permutations, and the
map
\[\sigma:(t_1,\ldots,t_{d-1})\mapsto(t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_{d-1}).\]
Consider first the case $d=2$. We have $n=|T|$ and $|G|\le2|T|^3$. The order
bound would give
\[2n^3\ge{n\choose k-1}/k;\]
for $k\ge5$, this implies $n\le 240$. So only $A_5$ and $\mathop{\mathrm{PSL}}(2,7)$ need
further consideration. But in each case the outer automorphism group has
order $2$, so the left-hand side can be improved to $4n^2$, and the bound
becomes $n\le\sqrt{480}$, which is false for both groups. So $k$-et
fails for $k\ge5$. (In fact, both groups fail $4$-et as well, since they have
respectively $13$ and $30$ orbits on $3$-sets.)
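The numerical claim for the two surviving socles is easy to confirm exactly; the following short check (ours, with the degrees $n=|T|$ for $T=A_5$ and $\mathop{\mathrm{PSL}}(2,7)$ hard-coded) verifies that the improved estimate $4n^2$ falls below the order bound for $5$-et:

```python
from math import comb

# degrees n = |T| for the two candidate socles T = A_5 and PSL(2,7)
for n in (60, 168):
    # improved estimate |G| <= 4 n^2 (outer automorphism group of order 2)
    # versus the order bound C(n, 4)/5 for the 5-et property
    assert 4 * n * n < comb(n, 4) // 5
```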
Now consider the general case.
The subgroup $T^{d-1}$ of $G$ acts regularly, so we choose a Cayley graph
for this subgroup which is invariant under $G_1$. Note that the $G_1$-orbit
of a tuple $(t,1,\ldots,1)$, for $t\ne1$, consists of tuples having either a
single non-identity entry, or all entries equal (and non-identity); we use the
set of all such tuples as our connection set. There is a clique of size $|T|-1$
consisting of all elements with a single non-identity entry in the first
coordinate; an independent set of size $3$ is easily constructed. So
$k\ge|T|+1$, which is large enough to violate the order bound if $d$ is not
too large compared to $|T|$ (say $d<|T|$).
In the remaining case we use a similar argument, considering elements which
have at most $d/3$ non-identity coordinates and their images, which have
at least $1+2d/3$ coordinates equal. This time we can produce a clique of size
${\lfloor d/3\rfloor\choose\lfloor d/6\rfloor}(|T|-1)^{\lfloor d/6\rfloor}$,
consisting of elements with $\lfloor d/6\rfloor$ non-identity coordinates
within a fixed $\lfloor d/3\rfloor$-set; again we can build a coclique of
size~$3$, and the order bound is violated.
\paragraph{Case 3:} $G$ is affine. Again we may assume that $G=\mathop{\mathrm{AGL}}(d,p)$ for
some $d,p$.
If $k\le d+1$,
there is an affine independent $(k-1)$-set; if $k\ge6$, there
exist five points contained in an affine space of dimension $2$ or $3$.
These
cannot both be contained in a $k$-set. So $k$-et fails. Thus we may assume
that $k\ge d+2$.
If $k\le p^{d-1}$, there is a $(k-1)$-set contained in a hyperplane, and
another with the property that any hyperplane misses at least two of its points
(take $d+2$ points, any $d+1$ of them independent). So we may assume that $k>p^{d-1}$.
Now calculation shows that ${p^d\choose k-1}/k$ is greater than $|G|$,
with finitely many exceptions (indeed, only $\mathop{\mathrm{AGL}}(4,2)$ and $\mathop{\mathrm{AGL}}(5,2)$
don't satisfy this inequality).
\paragraph{Case 4:} $G$ is almost simple.
The \emph{base size} of a permutation group is the smallest number of points
of the domain whose pointwise stabiliser is the identity.
By results of Tim Burness with various co-authors (see \cite{bls}), an
almost simple primitive group $G$ satisfies one of the following:
\begin{enumerate}\itemsep0pt
\item $G$ is a symmetric or alternating group, acting on subsets of fixed size
or uniform partitions of fixed shape;
\item $G$ is a classical group, acting on an orbit on subspaces or
complementary pairs of subspaces of the natural module;
\item the base size of $G$ is at most $7$, with equality only in the case
$G=M_{24}$, $n=24$.
\end{enumerate}
\subparagraph{Case 4(a):} $G$ is $S_m$ on $r$-sets or uniform $r$-partitions.
First consider the case that $G$ acts on $r$-sets, with $m>2r$. Form a graph by
joining two $r$-sets if their intersection is non-empty. There is a clique of
size $m-1\choose r-1$ consisting of all $r$-sets containing a specified point,
and an independent set of size $\lfloor m/r\rfloor$ consisting of pairwise
disjoint $r$-sets. If $m\ge3r$, the Lemma applies, and shows that
$k\ge{m-1\choose r-1}$, and the order bound is violated.
If $2r<m<3r$, use instead the graph where two $r$-sets are joined if they
intersect in $r-1$ points. There is a clique of size $m-r+1$ consisting of
all $r$-sets containing a fixed $(r-1)$-set, and an independent set of size
$\lceil(m-r+2)/2\rceil$ consisting of $r$-sets intersecting pairwise in a
given $(r-2)$-set. So $k\ge m-r+\lceil(m-r+2)/2\rceil$, and again the order
bound is violated.
\medskip
Now consider the case that $G$ acts on partitions with $r$ parts of size $s$,
with $rs=m$. Let $p(r,s)$ be the number of such partitions; so
\[p(r,s)=(rs)!/(s!)^rr!.\]
If $r>2$, make a graph by joining two partitions which have a part in common.
There is a clique of size $p(r-1,s)$ and a large coclique, so the usual
argument works.
Suppose that $r=2$. Join two partitions if their common refinement has two
parts of size $1$. There is a clique of size $s+1$ containing all partitions
for which one part contains a given $(s-1)$-set. To produce a large independent
set, if $s$ is even, take a partition of $\{1,\ldots,m\}$ into $s$ parts of
size $2$, and consider partitions which are unions of parts in this
subsidiary partition. If $s$ is odd, leave two isolated points, and put one
into each part of the partition made up of parts of the subsidiary partition.
\subparagraph{Case 4(b):} $G$ is a classical group on an orbit of subspaces
or pairs of subspaces of complementary dimension in its natural module; in
the latter case we may assume that either the subspaces are complementary
or one contains the other, and for groups preserving a form we may assume
that subspaces are either totally singular or non-singular.
We defer the cases $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ (with $n=q+1$) and
$\mathrm{P}\Gamma\mathrm{U}(3,q)$ (with $n=q^3+1$) until Case 4(c) below.
Suppose first that $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(m,q)$ on $1$-dimensional subspaces (with
$n=(q^m-1)/(q-1)$, $m\ge3$). A similar argument to that
used for the affine groups applies. If $m+1<k\le(q^{m-1}-1)/(q-1)+1$, then a
$(k-1)$-subset of a hyperplane, and a $(k-1)$-set containing $m+2$ points with
no $m+1$ in a hyperplane, cannot be moved inside the same $k$-set by $G$;
so we may assume that $k>(q^{m-1}-1)/(q-1)+1$, and the order bound is violated.
In the case $\Gamma\mathrm{L}(m,q)$ on $r$-dimensional subspaces, we may
assume that $1<r\le m/2$. We follow the argument for $S_m$ on $r$-sets: if
$r\le m/3$, join two subspaces if they intersect; if $r>m/3$, join them if
their intersection is a hyperplane in each.
For other classical groups on subspaces, an almost identical approach works,
except in the case of split orthogonal groups $O^+(2r,q)$ acting on totally
singular $r$-spaces. In this case the graph given by ``intersection of
codimension $1$'' is bipartite, so we take codimension~$2$ instead.
For groups acting on pairs of subspaces, we can join two pairs if one subspace
in the pair coincides. To find a coclique, use the fact that the incidence
matrix of $r$-spaces and $(m-r)$-spaces is invertible
(Kantor~\cite{kantor:inc}), so there is a bijection
between the two sets of subspaces such that each subspace is incident with
its image (contains it, or is contained in it).
\subparagraph{Case 4(c):} $G$ has base size at most $6$. (We can ignore
$M_{24}$, since computation shows that this group fails the $k$-et property for
$8\le k\le 12$.) In this case we know little about the structure of $G$, so
we proceed differently.
First we make a couple of observations.
A quick check with \textsf{GAP}~\cite{GAP} shows that almost simple primitive groups,
other than those in cases (a) and (b), fail the order bound
for $8$-et (and so for $k$-et for $8\le k\le n/2$) for degrees $n$ satisfying
$24\le n<2500$.
Let us call a primitive group $G$ of degree $n$ \emph{very small} if
$|G|\le n(n-1)(n-2)(n-3)$. Now a very small group satisfies the order bound
for $8$-et if $(n-4)(n-5)(n-6)<8!$, which holds only for $n\le39$. Very
small groups include all the rank~$1$ doubly transitive groups (those with
socle $\mathop{\mathrm{PSL}}(2,q)$, $\mathrm{PSU}(3,q)$, $\mathop{\mathrm{Sz}}(q)$ and
$\mathrm{R}_1(q)$). Of these groups, further examination shows that only
$\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$ and $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$ need further investigation.
A group with base size at most $6$ has order at most $n(n-1)\cdots(n-5)$.
So, if such a group satisfies the order bound for $k=8$, then
\begin{eqnarray*}
n(n-1)\cdots(n-5) &\ge& {n\choose 7}/8,\\
n-6 &\le& 8!=40320.
\end{eqnarray*}
This is beyond reasonable computational bounds, but we can do better. Note
first that, if $G$ satisfies $9$-et, then this result is improved to
$(n-6)(n-7)\le 9!$, or $n\le 609$. By ``upward closure'' of the order bound,
and the computer search, we can assume that $G$ has the $8$-et property
(unless its degree is at most $24$).
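The two degree bounds derived from the base-size estimate can be recomputed exactly; the short search below is an independent check of this arithmetic, not the computation used in the text:

```python
from math import comb

def falling(n, t):
    # n (n-1) ... (n-t+1)
    out = 1
    for i in range(t):
        out *= n - i
    return out

def satisfies_order_bound(n, k):
    # a group of base size <= 6 has order at most n(n-1)...(n-5);
    # the order bound for k-et requires |G| >= C(n, k-1)/k
    return falling(n, 6) * k >= comb(n, k - 1)

# k = 8: the bound reduces to n - 6 <= 8! = 40320
assert max(n for n in range(16, 10 ** 5) if satisfies_order_bound(n, 8)) == 40326

# k = 9: the bound reduces to (n-6)(n-7) <= 9!, giving n <= 609
assert max(n for n in range(18, 10 ** 4) if satisfies_order_bound(n, 9)) <= 609
```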
According to Lemma~\ref{l:clique_ind}, if $G$ has the $8$-et property,
then a non-trivial $G$-invariant graph contains no $7$-clique (and clearly
no $7$-coclique either). It follows from known bounds on Ramsey numbers (see
the survey \cite{rad}) that
$n<540$. Such a group (if almost simple of type (c)) is excluded by the
computer search mentioned earlier (except for groups of degree at most $24$).
Note that the weaker, and elementary, bound $R(7,7)\le{12\choose 6}=924$
would suffice here.
The remaining case consists of $2$-homogeneous groups. Such an almost simple
group is either ``very small'', or covered by case (b) above, or one of
finitely many others. Further inspection shows that the exceptions which
need to be considered are $M_{22}$ and its automorphism group, $M_{23}$,
$M_{24}$, and $\mathord{\mathrm{Co}}_3$. The Conway group can be excluded by \emph{ad hoc}
arguments. It fails the order bound for $k=9$ (and so for larger $k$), so
we may assume that $k=8$. It acts on a regular two-graph (a set of $3$-subsets)
which contains complete sub-hypergraphs of size $6$ and null sub-hypergraphs
of size $23$. Now an argument similar to that in Lemma~\ref{l:clique_ind}
gives a contradiction to the $8$-et property.
We are left to check groups with degrees in the range $\{16,\ldots,24\}$ and
two larger examples with degrees $28$ and $33$. The last two are excluded since
they have too many orbits on $7$-sets ($29$ and $32$ respectively).
Let $G$ be primitive of degree $n$, where $16\le n\le 24$.
Filtering out those groups with more than $8$ orbits on $7$-sets leaves just
nine groups (three of degree~$16$, two of degree $17$, and the Mathieu groups,
including $\mathop{\mathrm{Aut}}(M_{22})$). Since the property is closed upwards, we only
need to consider $\mathop{\mathrm{AGL}}(4,2)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,16)$,
$\mathop{\mathrm{Aut}}(M_{22})$, $M_{23}$ and $M_{24}$. We outline arguments for these.
For $\mathrm{AGL}(4,2)$ we only need to consider $k=8$. There is an $8$-set
which is an affine subspace (containing no more than four independent points),
and a $6$-set with any $5$-subset independent; the second and a $7$-subset
of the first cannot coexist.
For $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,16)$, again we need to consider $k=8$.
A short computation shows that this group fails the weak $8$-et property.
For the Mathieu groups, we use Proposition~\ref{p:stab}, and an obvious
modification, to conclude that, if $M_{24}$ fails the weak $k$-et property,
then so do the other three groups. So consider first the case $G=M_{24}$,
$8\le k\le 12$.
There is an $8$-set which is a block, and a $12$-set meeting no block in more
than $6$ points. The first and a subset of the second cannot coexist. This
excludes $k$ with $9\le k\le 12$.
Unfortunately $M_{24}$ does have the weak $8$-et property: it has just two
orbits on $7$-sets, and by connectedness there must be members of different
orbits meeting in six points. So we have to deal separately with the case $k=8$
for all the Mathieu groups.
For $M_{23}$, there is a $7$-set which is a block of the Steiner system, and
another meeting any block in at most $3$ points; these cannot coexist. A
similar argument applies to $\mathop{\mathrm{Aut}}(M_{22})$. Finally, for $M_{24}$, we resorted
to a computer search, as described earlier.
\hfill$\Box$\end{proof}
\section{The $k$-et property for $4\le k\le7$}
Let $4 \le k\le 7$, and $n\ge 2k$.
In this section, we will give a partial classification of all permutation groups $G$ on $n$ points that are $k$-et. In some cases considered below, our arguments are repetitions of
those used in the case that $k\ge 8$; we chose to give complete results to make this section self-contained.
Essentially, the results of the previous sections reduce the classification problem to the case of $2$-homogeneous groups, potentially up to finitely many exceptions. All $2$-homogeneous groups are classified as a consequence of the CFSG and the work of Kantor \cite{kantor:4homog, kantor:2homog}. The task is then to go through this list. At several points, we used GAP to check primitive groups for $k$-et, either by checking complete lists or dealing with large special cases. We give a general outline of these checks.
To test whether a group has the $k$-et property, we check first whether it is
$k$-transitive (in which case the answer is yes), and then whether it satisfies
the order bound (if not, then the answer is no). If the case is not yet
decided, we make a list of orbit representatives on $k$-sets which are
witnesses for the weak $k$-et property (again, if none exists, then the
answer is no). For each such witness $B$, we build a $k$-partition of a subset
of $\Omega$, beginning with one in which each part is a singleton (these can
be constructed from orbit representatives on $k$-sets). Take a point not in
this subset, and try adding it to each part of the partition, testing whether
the resulting partition has an image of $B$ as a section. If we reach a
partition of $\Omega$ without this condition becoming true, we have found a
partition demonstrating that $B$ is not a witness; otherwise we conclude that $B$
is a witness. The program can thus find orbit representatives of all witnesses,
and certificates showing the failure of other $k$-sets.
We also note that the same program can be used to check the $k$-ut property;
simply check whether every orbit representative on $k$-sets witnesses $k$-et.
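The procedure just described can be sketched in code. The version below is a minimal brute-force illustration (ours, not the GAP implementation): all $k$-partitions are enumerated outright rather than built up incrementally, so it is usable only for very small degrees.

```python
from itertools import combinations

def close_group(gens, n):
    # generate a full permutation group (as tuples) from generators by closure
    group, frontier = {tuple(range(n))}, [tuple(range(n))]
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = tuple(p[g[i]] for i in range(n))
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group

def k_partitions(points, k):
    # enumerate partitions of `points` into exactly k nonempty parts
    def rec(i, parts):
        if i == len(points):
            if len(parts) == k:
                yield [frozenset(p) for p in parts]
            return
        for p in parts:
            p.append(points[i])
            yield from rec(i + 1, parts)
            p.pop()
        if len(parts) < k:
            parts.append([points[i]])
            yield from rec(i + 1, parts)
            parts.pop()
    yield from rec(0, [])

def is_witness(group, B, n, k):
    # B witnesses k-et iff every k-partition of the domain has some
    # G-image of B as a section (exactly one point in each part)
    images = {frozenset(g[b] for b in B) for g in group}
    return all(
        any(all(len(S & part) == 1 for part in P) for S in images)
        for P in k_partitions(list(range(n)), k))

def has_k_et(group, n, k):
    return any(is_witness(group, B, n, k)
               for B in combinations(range(n), k))

# the cyclic group C_5 has the 2-et property: every 2-partition of a
# cycle is crossed by some pair of adjacent points
C5 = close_group([(1, 2, 3, 4, 0)], 5)
assert has_k_et(C5, 5, 2)

# the trivial group on 4 points does not
assert not has_k_et(close_group([], 4), 4, 2)
```

Replacing the final loop of `is_witness` by the incremental point-by-point partition construction described above is what makes the search feasible for the degrees actually considered.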
For our classification results, we will deal with $k=5,6,7$ together. Below, the group $\mathop{\mathrm{PXL}}(2,q)$, for $q$ an
odd square, denotes the extension of $\mathop{\mathrm{PSL}}(2,q)$ by the product of diagonal
and field automorphisms of order~$2$.
\begin{theorem}\label{th567}
A permutation group $G$ of degree $n \ge 14$ satisfies $7$-et if and only if it satisfies one of the following:
\begin{enumerate}
\item $G$ fixes a point and acts $6$-homogeneously on the remaining ones;
\item $G=M_{24}$;
\item $G$ is $7$-homogeneous.
\end{enumerate}
A permutation group $G$ of degree $n \ge 12$ satisfies $6$-et if and only if it satisfies one of the following:
\begin{enumerate}
\item $G$ fixes a point and acts $5$-homogeneously on the remaining ones;
\item $G$ is one of $\mathop{\mathrm{AGL}}(4,2)$, $2^4:A_7$, $\mathop{\mathrm{PGL}}(2,17)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$, $M_{11}(n=12)$, $M_{12}$, $M_{23}$, $M_{24}$;
\item $G$ is $6$-homogeneous.
\end{enumerate}
A permutation group $G$ of degree $n\ge10$ that satisfies one of the following properties has the $5$-et property:
\begin{enumerate}
\item $G$ fixes a point and acts $4$-homogeneously on the remaining ones;
\item $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ for prime powers $9\le q \le 27$, or $q=32$, or $G$ is one of the subgroups $\mathop{\mathrm{PGL}}(2,9)$, $M_{10}$, $\mathop{\mathrm{PSL}}(2,11)$, $\mathop{\mathrm{PSL}}(2,16)$, $\mathop{\mathrm{PSL}}(2,16):2$, $\mathop{\mathrm{PGL}}(2,25)$, $\mathop{\mathrm{PXL}}(2,25)$, $\mathop{\mathrm{PGL}}(2,27)$,
$\mathop{\mathrm{PSL}}(2,32)$;
\item $G$ is one of $\mathop{\mathrm{PSL}}(2,11)(n=11)$, $M_{11}(n=11,12)$, $M_{22}$, $M_{22}:2$, $M_{23}$;
\item $G$ is $5$-homogeneous.
\end{enumerate}
The above list is complete, with the {\color{black}potential exception of $G= \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$.}
\end{theorem}
\begin{proof}
Let $k \in\{5,6,7\}$. It is clear that $k$-homogeneous groups are $k$-et, and the listed intransitive groups are $k$-et by Proposition~\ref{p:intrans}. The remaining sporadic groups listed in the theorem can be checked by computer to satisfy $k$-et as stated, with only the case of $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$ and $k=6$ requiring extensive computation.
Conversely, let $G$ be $k$-et.
If $G$ is intransitive, then by Proposition \ref{p:intrans}, $G$ fixes one point and acts $(k-1)$-homogeneously on the remaining points. If $G$ is transitive, then by Proposition \ref{p:prim}, $G$ is primitive, in which case $G$ is either $2$-homogeneous or $n\le R(k-1,k-1)$, by Proposition \ref{p:2homog}.
Using GAP, we directly check all primitive groups of degree at most $32$, confirming the above results. In addition, we checked the primitive groups with degree up to the known upper limits on $R(k-1,k-1)$ against the order bound.
The only non-$2$-homogeneous groups remaining were of the form $S_m$ acting on $2$-sets, or $S_m \wr S_2$, as well as some of their normal
subgroups. These can be ruled out as follows. Consider $S_m$ on pairs.
The number of orbits of this group on $(k-1)$-sets is equal to the number of
graphs (up to isomorphism) with $m$ vertices and $k-1$ edges. This number is easily seen to exceed $k$.
For $S_m \wr S_2$, we can use the same argument, counting bipartite graphs with $2m$ vertices.
Hence it remains to check the $2$-homogeneous groups of degree larger than $32$. Those groups are either affine or almost simple.
In the affine case all such groups are contained in $\mathop{\mathrm{AGL}}(d,p)$, for some $d \ge 1$, and prime $p$. As in the previous section, if $\mathop{\mathrm{AGL}}(d,p)$ does not satisfy $k$-et, then neither does any subgroup.
Let $d\ge 3$.
Choose $l=\min(k-1, d)$ disjoint sets $P_i$ such that $\cup_{i=1}^j\, P_i$ is an affine subspace of dimension $j-1$, for $j=1,\dots, l$, and extend to a $k$-partition. This partition shows
that any potential witnessing set for $k$-et must span an affine subspace of dimension $l$. In contrast, consider a partition with $k-1$ singletons whose union lies in an affine subspace of dimension $h=\lceil \log_p (k-1)\rceil$. Any section of such a partition lies in a subspace of dimension at most $h+1$. These two requirements are incompatible for most values of $d,p,k$ with $p^d\ge 33$, showing that $\mathop{\mathrm{AGL}}(d,p)$ is not $k$-et.
The remaining cases are as follows.
\begin{enumerate}
\item Several values with $p=2$. Here the result follows from Theorem \ref{t:ex.affine}.
\item $k=7$, $\mathop{\mathrm{AGL}}(3,5)$, which fails the order bound.
\end{enumerate}
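The dimension count underlying this argument is easy to mechanise. The sketch below (ours) implements only the two numerical requirements, deliberately ignoring the field-specific subtleties responsible for the $p=2$ exceptions, and recovers $\mathop{\mathrm{AGL}}(3,5)$ with $k=7$ as the one case the count fails to exclude:

```python
def span_requirement(k, d):
    # a witnessing set must span a flat of dimension l = min(k-1, d)
    return min(k - 1, d)

def section_cap(p, k):
    # k-1 singleton parts inside a flat of dimension h = ceil(log_p(k-1));
    # any section of such a partition spans dimension at most h+1
    h = 0
    while p ** h < k - 1:
        h += 1
    return h + 1

def excluded_by_dimension_count(p, d, k):
    return span_requirement(k, d) > section_cap(p, k)

# odd p, d >= 3, n = p^d >= 33: the count excludes everything except
# the single case AGL(3,5) with k = 7
survivors = [(p, d, k)
             for p in (3, 5, 7, 11, 13)
             for d in (3, 4, 5)
             for k in (5, 6, 7)
             if p ** d >= 33 and not excluded_by_dimension_count(p, d, k)]
assert survivors == [(5, 3, 7)]
```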
Now consider $\mathop{\mathrm{AGL}}(2,p)$. As $p^2\ge 33$, we have $p \ge 7$, and so there exist $k-1$ points lying on an affine line. Moreover there are sets of $4$ points for which every $3$-subset is affine independent. These two sets cannot coexist in a witnessing set of size $k$, and so $\mathop{\mathrm{AGL}}(2,p)$ is not $k$-et for $k\ge 5$.
Finally, let $d=1$. In this case, the order bound gives
$$p(p-1)\ge 2{p\choose k-1}/(k+1),$$
which fails for all relevant values of $p$ and $k$.
We next consider almost simple groups.
If $G$ has alternating socle,
then $G$ is $k$-homogeneous and hence $k$-et.
Suppose next that $G$ has socle $\mathop{\mathrm{PSL}}(2,q)$ for some $q=p^e$, $p$ prime, with its natural action.
Let $k=5$. Orbits of $\mathop{\mathrm{PGL}}(2,q)$ on $4$-tuples of distinct elements are indexed by cross ratios, of which there are $q-2$ values. The corresponding $4$-sets are indexed by sets of at most $6$ cross ratio values, hence $\mathop{\mathrm{PGL}}(2,q)$ has at least $(q-2)/6$ orbits on $4$-sets. In $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$, field automorphisms can reduce this number by at most a factor of $1/e$. Hence for $n =q+1\ge 32$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ has too many orbits on $4$-sets to be $5$-et, unless potentially $q\in\{32, 49,64,81,128\}$. Additional computations exclude $q= 49, 64$, and
confirm $q=32$, as well as the subgroup $\mathop{\mathrm{PSL}}(2,32)$.
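For a prime field the cross-ratio bound can be checked directly by brute force; the following sketch (illustrative only, and feasible only for very small $q$) builds $\mathop{\mathrm{PGL}}(2,p)$ as a permutation group on the projective line and counts its orbits on $4$-sets:

```python
from itertools import combinations

def pgl2(p):
    # PGL(2,p) as permutations of the projective line {0,...,p-1} u {oo},
    # with the integer p itself standing for the point at infinity
    INF = p
    def act(a, b, c, d, x):
        # Moebius action x -> (a x + b)/(c x + d) over GF(p)
        if x == INF:
            return (a * pow(c, -1, p)) % p if c % p else INF
        num, den = (a * x + b) % p, (c * x + d) % p
        return INF if den == 0 else (num * pow(den, -1, p)) % p
    perms = set()
    for a in range(p):
        for b in range(p):
            for c in range(p):
                for d in range(p):
                    if (a * d - b * c) % p:
                        perms.add(tuple(act(a, b, c, d, x)
                                        for x in range(p + 1)))
    return perms

def orbit_count_on_4sets(perms, n):
    todo = {frozenset(s) for s in combinations(range(n), 4)}
    count = 0
    while todo:
        seed = next(iter(todo))
        todo -= {frozenset(g[x] for x in seed) for g in perms}
        count += 1
    return count

p = 7
G = pgl2(p)
assert len(G) == p * (p * p - 1)    # |PGL(2,p)| = p(p^2 - 1)
orbits = orbit_count_on_4sets(G, p + 1)
# each orbit carries at most 6 cross-ratio values,
# so there are at least (q-2)/6 orbits on 4-sets
assert 6 * orbits >= p - 2
```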
We can exclude $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,81)$ by an argument based on circle geometries. This group preserves two types of circles, with $4$ and $10$ elements respectively. Choose circles
$C \subset C'$ of different types, and consider a $5$-partition of the projective line into sets $P_i$ such that $P_1\cup P_2 \cup P_3 =C$, $P_4=C' \setminus C$. Any section of such a partition cannot contain a circle of the smaller type. However, we can create a partition whose sections contain such a circle by using $4$ singleton sets. This leaves the case of $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$ (its socle $\mathop{\mathrm{PSL}}(2,128)$ can be excluded by the orbit counting argument from above).
If $k=6, 7$, the only group of degree at least 33 that does not fail the order bound is $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$, for $k=6$. This group was confirmed to be $6$-et by an extensive computation.
Consider next the case of $G = \mathop{\mathrm{P}\Gamma\mathrm{L}}(3,q)$ with its action on projective points. These groups do not satisfy $k$-et, as we may find a set of $k-1$
projective points that lie within a hyperplane, and a set of $4$ points in which all $3$-subsets span the projective plane. These two sets cannot coexist in a witnessing set. A similar argument excludes $\mathop{\mathrm{P}\Gamma\mathrm{L}}(d,q)$ with $d\ge 4$: in most cases, we may choose a set of $k-1$ points that lies in a flat of minimal possible rank, and a set of size $l=\min(k-1, d)$ which spans a flat of rank $l-1$. For a few cases with $d=4$, we also require that every space spanned by a $(k-1)$-subset of the latter set has maximal possible rank.
For $G$ with socle $\mbox{PSU}(3,q)$, ${}^2B_2(q)$, or ${}^2G_2(q)$, $n\ge 33$, the order bound fails except for $\mathop{\mathrm{P\Gamma U}}(3,4)$, $k=5$. This case can be excluded by having too many orbits on $4$-sets.
Consider $\mathop{\mathrm{Sp}}(2d,2)$ in either $2$-transitive representation. For $k=5$ note that a full and an empty $4$-set (in the notation of Theorem \ref{t:ex.Sp}) cannot coexist in a $5$-set. For $k=6,7$ we may instead replace the full $4$-set with one of size $k-1$ in which each $3$-subset is an element of
the designated orbit $T$. As demonstrated in \cite[Section 2.6]{ArCa}, these sets exist up to size $2^{d-1}$ in the $-$ case and size $2^d$ in the $+$ case, which is sufficient to cover all cases with $n \ge 32$. Hence $\mathop{\mathrm{Sp}}(2d,2)$ is not $k$-et.
The remaining sporadic cases all have $n\le 32$, except for the Conway group $\mathord{\mathrm{Co}}_3$ and the Higman--Sims group. $\mathord{\mathrm{Co}}_3$ is not $k$-et for $5\le k \le 7$ on account of having too many orbits on $(k-1)$-sets.
Finally, $\mathord{\mathrm{HS}}$ fails the order bound for $k\ge 6$, and has too many orbits on $4$-sets to be $5$-et.\hfill$\Box$
\end{proof}
To obtain a classification for $4$-et, we first establish a result about the action of $\mathop{\mathrm{PGU}}(3,q)$. This group acts on a $3$-dimensional vector space $V$
over $\mathop{\mathrm{GF}}(q^2)$, preserving a nondegenerate Hermitian form $H$ (a sesquilinear
form with zero radical satisfying $H(w,v)=H(v,w)^q$). It acts $2$-transitively
on the \emph{unital} $U(q)$, the set of $1$-dimensional subspaces of $V$
on which $H$ vanishes; any two points of the unital lie on a unique line of the
projective space, meeting the unital in $q+1$ points (so these lines are the
blocks of a Steiner system $S(2,q+1,q^3+1)$).
\begin{prop}
The number of orbits of the group $\mathop{\mathrm{PGU}}(3,q)$ on $3$-element subsets of $U(q)$
is $(q+3)/2$ if $q$ is odd, $(q+2)/2$ if $q$ is even. Apart from one orbit
consisting of collinear triples, these orbits are parametrised by inverse
pairs of elements of $\mathop{\mathrm{GF}}(q^2)^\times/\mathop{\mathrm{GF}}(q)^\times$ excluding the coset
$\{x\in\mathop{\mathrm{GF}}(q^2)^\times:x^q=-x\}$.
\label{p:umain}
\end{prop}
\paragraph{Remark} The parametrisation allows us to count orbits of $G$ with
$\mathop{\mathrm{PGU}}(3,q) \le G \le \mathop{\mathrm{P\Gamma U}}(3,q)$ on $3$-sets; these just correspond to orbits of the corresponding subgroup of the Galois
group on the pairs of cosets described.
\medskip
We begin with a preliminary result.
\begin{lemma}
Let $(v_1,v_2,v_3)$ and $(w_1,w_2,w_3)$ be two bases for $V$ consisting of
vectors spanning points of the unital (so $H(v_i,v_i)=H(w_i,w_i)=0$), and let
$P_i=\langle v_i\rangle$ and $Q_i=\langle w_i\rangle$ for $i=1,2,3$. Let
$a_{ij}=H(v_i,v_j)$ and $b_{ij}=H(w_i,w_j)$. Then
\begin{itemize}\itemsep0pt
\item[(a)] The element $c=a_{12}a_{23}a_{31}$ satisfies $c^q+c\ne0$;
\item[(b)] $(P_1,P_2,P_3)$ and $(Q_1,Q_2,Q_3)$ lie in the same orbit of
$\mathop{\mathrm{PGU}}(3,q)$ if and only if $a_{12}a_{23}a_{31}$ and $b_{12}b_{23}b_{31}$ lie
in the same coset of $\mathop{\mathrm{GF}}(q)^\times$ in $\mathop{\mathrm{GF}}(q^2)^\times$.
\end{itemize}
\end{lemma}
\paragraph{Proof} (a) The Gram matrix of $\{v_1,v_2,v_3\}$ relative to the
form is
\[\pmatrix{0&a_{12}&a_{13}\cr a_{21}&0&a_{23}\cr a_{31}&a_{32}&0\cr}.\]
Since $H$ is nondegenerate, this matrix must be nonsingular. But its
determinant is
\[a_{12}a_{23}a_{31}+a_{13}a_{32}a_{21}=c+c^q,\]
since $a_{21}=a_{12}^q$ and so on.
\smallskip
(b) By Witt's theorem \cite[p.57]{taylor}, there is an element of $\mathop{\mathrm{PGU}}(3,q)$
mapping $(v_1,v_2,v_3)$ to $(w_1,w_2,w_3)$ if and only if
$H(v_i,v_j)=H(w_i,w_j)$ for all $i,j$. In order to map the points spanned by
the first three vectors to those spanned by the second, we have to map
$(v_1,v_2,v_3)$ to $(x_1w_1,x_2w_2,x_3w_3)$ for some scalars $(x_1,x_2,x_3)$.
This requires $a_{ij}=x_ix_j^qb_{ij}$, and so
\[a_{12}a_{23}a_{31}=(x_1x_2x_3)^{q+1}b_{12}b_{23}b_{31};\]
so $a_{12}a_{23}a_{31}$ and $b_{12}b_{23}b_{31}$ differ by a $(q+1)$st power
factor (i.e. an element of $\mathop{\mathrm{GF}}(q)^\times$).
Conversely, if this is the case, we can adjust the vectors by scalar factors
to ensure that $a_{12}=a_{31}=b_{12}=b_{31}=1$ and $a_{23}=b_{23}$, so the
two triples lie in the same orbit. The adjustments introduce $(q+1)$st power
factors into the expressions $a_{12}a_{23}a_{31}$ and $b_{12}b_{23}b_{31}$.\hfill$\Box$
\paragraph{Proof of Proposition~\ref{p:umain}} We know that two triples of
points lie in the same orbit if and only if the expressions $a_{12}a_{23}a_{31}$
lie in the same coset of $\mathop{\mathrm{GF}}(q)^\times$, and that one coset is excluded. So
there are $q$ orbits on such (ordered) triples.
It follows from the lemma that each triple is invariant under a subgroup of
$\mathop{\mathrm{PGU}}(3,q)$ which permutes its elements cyclically, so we only have to decide
whether there is an element of this group which induces a transposition on
such a triple. For this to hold, $a_{12}a_{23}a_{31}$ and $a_{13}a_{32}a_{21}$
must lie in the same coset of $\mathop{\mathrm{GF}}(q)^\times$. These elements are $c$ and
$c^q$; so the map $x\mapsto x^q$ must fix this coset. This means that
$c^{q-1}\in\mathop{\mathrm{GF}}(q)^\times$, so $c^{(q-1)^2}=1$. It follows that $c^{2(q-1)}=1$,
so that $c^{q-1}=1$ or $c^{q-1}=-1$. The second possibility was excluded by
part (a) of the Lemma. If $q$ is odd, there remains just one such coset; if
$q$ is even, the two cases are the same. The other cosets ($q-1$ or $q$
depending on the parity of $q$) are permuted in $2$-cycles by this
transformation. So the number of orbits is $1+(q-1)/2=(q+1)/2$ if $q$ is odd,
and $q/2$ if $q$ is even.
Adding one (for the single orbit consisting of collinear triples) gives the
result of the Proposition.\hfill$\Box$
\begin{theorem}\label{th:4-et}
Let $G$ be a permutation group of degree $n\ge 8$. If $G$ satisfies any of the following conditions, then $G$ has the $4$-et property.
\begin{enumerate}
\item $G$ fixes a point and acts $3$-homogeneously on the remaining ones;
\item $G$ is one of $\mathord{\mathrm{HS}}$ or $\mathord{\mathrm{HS}}:2$ with their action on $100$ points\label{l:HS};
\item $G=\mathop{\mathrm{AGL}}(d,2)$, $d \ge 3$;
\item $G$ is a $2$-transitive subgroup of $\mathop{\mathrm{AGL}}(3,2)$ or $\mathop{\mathrm{AGL}}(2,3)$;
\item $G$ is one of $\mathop{\mathrm{AGL}}(1,16):2$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,16)$,
$\mathop{\mathrm{ASL}}(2,4)$, $\mathop{\mathrm{ASL}}(2,4):2$, $\mathop{\mathrm{AGL}}(2,4)$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,4)$, $2^4.A_6$, $2^4.A_7$, $\mathop{\mathrm{ASL}}(2,5)$, $\mathop{\mathrm{ASL}}(2,5):2$, $\mathop{\mathrm{AGL}}(2,$ $5)$,
$\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,8)$, $\mathop{\mathrm{A}\Sigma\mathrm{L}}(2,8)$, $\mathop{\mathrm{AGL}}(2,8)$, $\mathop{\mathrm{AGL}}(1,11)$, $\mathop{\mathrm{AGL}}(1,13)$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1$, $32)$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,32)$, $2^6: G_2(2)$, or $2^6: \mathop{\mathrm{PSU}}(3,3)$;
\item $G=2^d: \mathop{\mathrm{Sp}}(d,2)$, $d \ge4$ and even;
\item $\mathop{\mathrm{PSL}}(2,q)\le G \le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ for prime powers $q$ with $7 \le q \le 49$;
\item $\mathop{\mathrm{PSL}}(3,q) \le G \le \mathop{\mathrm{P}\Gamma\mathrm{L}}(3,q)$, for prime powers $q\ge 3$;
\item $\mathop{\mathrm{PSU}}(3,q) \le G \le \mathop{\mathrm{P\Gamma U}}(3,q)$, for $q\in\{3,4\}$;
\item $G=\mathop{\mathrm{Sp}}(2d,2)$, $d \ge 3$, in either of its $2$-transitive representations;
\item $G$ is one of $\mathop{\mathrm{PSL}}(2,11)(n=11)$, $M_{11}(n=12)$, $M_{22}$, $M_{22}:2$, $\mathop{\mathrm{Sz}}(8).3$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$, $Co_3$;
\item $G$ is $4$-homogeneous.
\end{enumerate}
{\color{black}If any other groups $G$ are $4$-et, then they satisfy one of the following:
\begin{enumerate}
\item $\mathop{\mathrm{PSL}}(2,q) \le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ for some prime power $q \ge 51$, $G \ne \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$;
\item $G \in\{\mathop{\mathrm{PGU}}(3,5)$, $\mathop{\mathrm{P\Gamma U}}(3,5)$, $\mathop{\mathrm{PSU}}(3,8).3$, $\mathop{\mathrm{PSU}}(3,8).6$, $ \mathop{\mathrm{PSU}}(3,8).3^2$, $\mathop{\mathrm{P\Gamma U}}(3,8)$, $\mathop{\mathrm{P\Gamma U}}(3,9)$,
$\mathop{\mathrm{P\Gamma U}}(3,16)$, $\mathop{\mathrm{Sz}}(8)$, $\mathop{\mathrm{Sz}}(32):5$, $\mathord{\mathrm{HS}}$ $(n=176) \}$.
\end{enumerate}
In the last case, note that there are $3$ non-isomorphic groups of the form $\mathop{\mathrm{PSU}}(3,8).3$. Only one of those has fewer than $5$ orbits on $3$-sets and could be $4$-et.}
\end{theorem}
\begin{proof}
If $G$ is intransitive, the result follows from Proposition \ref{p:2homog}. Transitive, but imprimitive groups are excluded by Proposition \ref{p:prim} with possible exceptions for groups with $n \in \{8,9\}$. These cases can be handled exactly as in the proposition, as the premise $n>9$ was only needed in a different subcase.
If $G$ is primitive, but not $2$-homogeneous, then it is $4$-et exactly if listed under (\ref{l:HS}), by Theorem \ref{t:4et2hom}.
Hence it remains to classify the $2$-homogeneous groups satisfying $4$-et. For groups of degree at most $50$, we can do so directly using GAP, confirming the above results.
Assume that $n\ge 51$.
If $G$ is $2$-homogeneous, but not $2$-transitive, then by \cite{kantor:2homog}, $G$ is contained in a one-dimensional affine group, while a $2$-transitive group is either affine or almost simple.
We will address the affine cases first.
Any such group is contained in $\mathop{\mathrm{AGL}}(d,p)$ for some $d\ge 1$ and prime $p$. As above, if $\mathop{\mathrm{AGL}}(d,p)$ does not satisfy $4$-et, then neither does any subgroup. If $p=2$, then
$\mathop{\mathrm{AGL}}(d,2)$ has the $4$-et property by Theorem \ref{t:ex.affine}.
So, let $p\ge 3$, and consider first $d\ge 3$.
Choose three disjoint sets $P_i$ such that $\cup_{i=1}^j\, P_i$ is an affine subspace of dimension $j-1$, for $j=1,2,3$, and extend to a $4$-partition. This partition shows
that any potential witnessing set for $4$-et must span an affine subspace of dimension $3$. Using $3$ singletons contained in an affine line, we can construct another partition whose sections are contained in an affine space of dimension at most $2$, showing that $\mathop{\mathrm{AGL}}(d,p)$ is not $4$-et.
If $d=2$, then $p \ge 11$, as $n\ge 51$. By adapting the partition from the case $d\ge 3$, we see that any potential witnessing set for $4$-et must contain $3$ points on an affine line, and one point not on the line. It follows that $G$ needs to act transitively on $3$-sets of collinear points.
However, for a given collinear triple $(x_1,x_2,x_3)$ of distinct points, $\mathop{\mathrm{AGL}}(2,p)$ preserves the value $\lambda \in F_q\setminus\{0,1\}$ satisfying $x_2-x_1=\lambda (x_3-x_1)$. By permuting the $x_i$, at most $6$ different values of $\lambda$ arise. Hence for $p\ge 11$, there are at least $2$ orbits of collinear $3$-sets.
It follows that $\mathop{\mathrm{AGL}}(2,p)$ is not $4$-et for $p\ge 11$.
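This orbit count on collinear triples can be verified by a direct computation. The sketch below (our own illustration, not part of the original argument; it works over prime fields only) partitions $F_p\setminus\{0,1\}$ into the classes of $\lambda$-values arising from the six orderings of a collinear triple:

```python
def orbit_classes(p):
    """Classes of F_p \\ {0, 1} under the six values of lambda obtained
    by permuting a collinear triple (x1, x2, x3), where
    x2 - x1 = lambda * (x3 - x1)."""
    inv = lambda a: pow(a, p - 2, p)           # inverse in F_p^*
    maps = [
        lambda l: l,                           # identity
        lambda l: inv(l),                      # swap x2, x3
        lambda l: (1 - l) % p,                 # swap x1, x3
        lambda l: inv((1 - l) % p),
        lambda l: (l * inv((l - 1) % p)) % p,  # swap x1, x2
        lambda l: ((l - 1) * inv(l)) % p,
    ]
    elements = set(range(2, p))                # F_p minus {0, 1}
    classes = []
    while elements:
        l = elements.pop()
        cls = {f(l) for f in maps}             # the orbit of lambda
        elements -= cls
        classes.append(sorted(cls))
    return classes
```

For $p=11$ the nine admissible values of $\lambda$ fall into two classes, matching the claim that at least two orbits of collinear $3$-sets exist.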
Finally, let $d=1$. In this case, the order bound fails for $p=n\ge 51$.
It remains to examine the $2$-homogeneous subgroups of $\mathop{\mathrm{AGL}}(d,2)$ for $d\ge 6$.
Consider the groups $\mathop{\mathrm{A}\Gamma\mathrm{L}}(e,2^{d/e})$, for $e$ properly dividing $d$. These can be handled similarly to $\mathop{\mathrm{AGL}}(d,p)$, $p\ge 3$, except that field automorphisms change some of the numerical estimates involved. Concretely, if $e\ge 3$, the same argument shows that $\mathop{\mathrm{A}\Gamma\mathrm{L}}(e,2^{d/e})$ is not $4$-et. If $e=1$, the order bound shows that $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,2^l)$ is not $4$-et for $l \ge 7$. This leaves the case $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,64)$, which can be excluded by a direct computation.
If $e=2$, then each value of $\lambda$ in the argument of the prime case may be mapped to an additional $d/2$ values due to field automorphisms. The argument now carries through to show that $4$-et fails for $2^{d/2} > 32$, leaving the cases $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,8)$, $ \mathop{\mathrm{A}\Gamma\mathrm{L}}(2,16)$, and $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2, 32)$. Computation shows that $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,8)$ and its listed subgroups are $4$-et, while $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,16)$ can be embedded into
$\mathop{\mathrm{A}\Gamma\mathrm{L}}(4,4)$ and thus is not $4$-et.
For $G=\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,32)$, we can use a similar argument as in Theorem \ref{t:ex.psl}. The $4$-sets in which exactly $3$ elements lie on an affine line form an orbit $O$ of $G$, as the stabiliser of a line acts $3$-transitively. Consider a $4$-partition $\mathcal{P}=(P_1,P_2,P_3,P_4)$ with $|P_1|\le |P_2|\le |P_3|\le |P_4|$. We claim that we can choose a line meeting at least three parts of $\mathcal{P}$.
Choose $x \in P_1, y \in P_2, z \in P_3$. If the line $L$ through $x$ and $y$ intersects $P_3 \cup P_4$, we can choose $L$. Otherwise $L \subseteq P_1 \cup P_2$. Now $P_4$
has at least $256$ elements, at most $31$ of which lie on the line through $z$ and parallel to $L$. Hence we may choose $w\in P_4$ not on this line, in which case the line through
$z$ and $w$ intersects $L$ and hence one of $P_1$ or $P_2$. We can now see that $O$ contains a section of $\mathcal{P}$ by the same argument as in Theorem \ref{t:ex.psl}.
Considering the subgroups of $\mathop{\mathrm{A}\Gamma\mathrm{L}}(2,32)$, note that $\mathop{\mathrm{AGL}}(2,32)$ is not $4$-et, as it has more than one orbit on $3$-sets of collinear points. For triples
$\vec x=(x_1,x_2,x_3)\in F_{32}^2$, let $A_{\vec x}$ be the matrix with columns $x_1+x_2, x_1+x_3$. Now $\det A_{\vec x}$ is invariant under
the induced action of $\mathop{\mathrm{ASL}}(2,32)$ as well as under permutation of the arguments. It follows that $\mathop{\mathrm{ASL}}(2,32)$ has at least $32$ orbits on $3$-sets; as the field automorphisms of $F_{32}$ form a group of order $5$, $\mathop{\mathrm{A}\Sigma\mathrm{L}}(2,32)$ has
at least $\lceil 32/5\rceil = 7$ orbits on $3$-sets, and hence is not $4$-et.
It remains to check subgroups of $\mathop{\mathrm{AGL}}(d,2)$ that are not contained in any $\mathop{\mathrm{A}\Gamma\mathrm{L}}(e,2^{d/e})$ for $d\ge 6$, $e\ge 2$. The groups $2^d:\mathop{\mathrm{Sp}}(d,2)$, with $d$ even, were shown to be
$4$-et in Theorem \ref{t:ex.Sp}. Finally, for $d=6$, there are two more sporadic cases ($2^6:G_2(2)$ and its subgroup $2^6: \mbox{PSU}(3,3)$), which can be handled computationally.
We next cover the case that $G$ is a $2$-transitive almost simple group of degree $n$. We may assume that $G$ is not $4$-homogeneous.
Let $G$ have socle $\mathop{\mathrm{PSL}}(d,q)$ for $d\ge 2$, $q=p^e$, $p$ prime. For $d=2$, these cases are currently open above the computational range, although $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$ was confirmed to satisfy
$4$-et by computation.
By Theorem \ref{t:ex.psl}, groups $G$ with $\mathop{\mathrm{PSL}}(3,q) \le G \le \mathop{\mathrm{P}\Gamma\mathrm{L}}(3,q)$ are $4$-et. If $d\ge 4$, we can exclude $\mathop{\mathrm{P}\Gamma\mathrm{L}}(d,q)$ by a now familiar argument: choose a partition containing $3$ points on a projective line, and another that forces every section to span a projective $3$-space.
If $G$ has unitary socle, we can calculate the number of orbits on $3$-sets by Proposition \ref{p:umain} and the remark following it. This count excludes all values of $q$ except $4,5,8,9,16$. Proper subgroups of $\mathop{\mathrm{P\Gamma U}}(2,16)$ can be excluded by this argument as well. We can directly compute the number of orbits for proper subgroups of $\mathop{\mathrm{P\Gamma U}}(3, q)$ for $q=5,8,9$, which excludes all groups not listed in the theorem. Finally
$\mathop{\mathrm{PSU}}(3,4)$ was confirmed to be $4$-et by direct computation.
If $G$ has socle $\mathop{\mathrm{Sz}}(q)$, or ${}^2G_2(q)$,
then eventually the order bound will fail. For $n\ge 51$, this leaves
only
$\mathop{\mathrm{Sz}}(q)$, $q\in\{8,32\}$. Computation confirms that $\mathop{\mathrm{Sz}}(8).3$ has $4$-et, and that $\mathop{\mathrm{Sz}}(32)$ has $6$ orbits on $3$-sets, and hence is not $4$-et.
$\mathop{\mathrm{Sp}}(2d,2)$ (in either $2$-transitive representation) was shown to be $4$-et in Theorem \ref{t:ex.Sp}.
The remaining sporadic socles all have degree less than $51$, except for the Conway group $\mathord{\mathrm{Co}}_3$ and the Higman-Sims group. $\mathord{\mathrm{Co}}_3$ was shown to be $4$-et in Theorem \ref{t:ex.Sp}. For $\mathord{\mathrm{HS}}$ in its action on $176$ points, the $4$-et question remains open.
\hfill$\Box$
\end{proof}
\section{The $k$-ut condition}
\label{s:ut}
In this section we extend the work on the $k$-ut condition in \cite{ArCa} by
classifying some of the previously unresolved cases. We would also like to
record here the correction of a couple of small mistakes in \cite{ArCa}.
In the proof of Proposition 2.6 of that paper, the authors
assert ``a short calculation yields $n\le k+2$'': this is not correct, but it is easy to fix.
The situation is that we have a vertex-primitive graph $\Gamma$ whose valency
$v-1$ is smaller than $k$, such that every $(k+1)$-set contains a closed
vertex-neighbourhood in the graph, and wish to reach a contradiction.
Now by a theorem of Little, Grant and
Holton~\cite{lgh}, $\Gamma$ has a near $1$-factor (a collection of pairwise
disjoint edges covering all or all but one of the vertices). If $n\ge 2(k+1)$,
a $(k+1)$-set containing at most one vertex from each edge of the near
$1$-factor yields a contradiction. In the remaining case $n=2k+1$, let $w$ be
the uncovered vertex. If two vertices $x,y$ in the neighbourhood of $w$ form an
edge of the partial $1$-factor, take $x$ and $y$ and one point from each
remaining edge; if not, take $w$ together with one point from each edge of the
partial $1$-factor (if the first choice contains the closed neighbourhood of
$w$, replace one vertex by the other end of the edge in the partial $1$-factor).
\medskip
In addition, two specific groups were omitted from the list of groups with the
$3$-universal transversal property \cite[Theorem 4.2(4)]{ArCa}, namely
$\mathop{\mathrm{PSL}}(2,11)$ (degree~$11$) and $2^4:A_6$ (degree~$16$). It is easy to verify
these directly; but they are both handled by a general result which also has
applications to the $3$-existential transversal property, which we give here.
(Both of these groups satisfy the conditions of the last sentence
of the following Proposition; this also deals with cases (i), (ii), (iii) and
(v) of \cite[Theorem 4.2(4)]{ArCa}.)
\begin{prop}
Let $G$ be a $2$-primitive permutation group of degree~$n$, and let $\Delta$ be
an orbit of the stabiliser of two points $x,y$ which has cardinality greater
than $n/3-1$. Then the set $\{x,y,z\}$, for $z\in\Delta$, witnesses the $3$-et
property. In particular, if all $G_{xy}$ orbits have size greater than $n/3-1$,
then $G$ has the $3$-ut property.
\label{p:watkins}
\end{prop}
\begin{proof}
The images of $\{y,z\}$ under $G_x$ form an orbital graph for this group,
with valency $k=|\Delta|$ (or possibly twice this number, if $\Delta$ is a
non-self-paired suborbit of $G_x$). This graph is vertex-primitive, so by a
theorem of Watkins~\cite{watkins}, its vertex-connectivity is at least $k$.
(Although Watkins does not state this explicitly, it is a simple consequence
of his results: in his terminology, atomic parts are blocks of imprimitivity;
if the vertex-connectivity is less than the valency then these blocks are
non-trivial.)
Take any $3$-partition $\mathcal{P}$ of $\Omega$, with smallest part $A$ of size $l$,
where $l\le n/3$. Without loss of generality, $x\in A$. Now by hypothesis,
$l-1<k$; so removing $l-1$ points from the graph $\Gamma$ leaves a connected
graph. This graph has an edge $\{u,v\}$ which is a transversal to the
$2$-partition of $\Omega\setminus A$ formed by the other two parts of $\mathcal{P}$; thus
$\{x,u,v\}$ is a transversal to $\mathcal{P}$ and is an image of $\{x,y,z\}$, as
required. \hfill$\Box$
\end{proof}
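The connectivity fact invoked here can be checked on a small example. The sketch below (our illustration; the Petersen graph is vertex-primitive of valency $3$, though it is not one of the orbital graphs arising in this paper) verifies Watkins' conclusion that the vertex-connectivity is at least the valency:

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets of
# {0,...,4}, with edges between disjoint pairs; it is vertex-primitive
# with valency 3.
verts = list(combinations(range(5), 2))
adj = {v: {w for w in verts if not set(v) & set(w)} for v in verts}

def connected_after_removal(removed):
    """Check that the graph minus `removed` is still connected (DFS)."""
    remaining = set(verts) - set(removed)
    start = next(iter(remaining))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v] & remaining:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == remaining

# Valency 3, so the theorem predicts vertex-connectivity >= 3:
# deleting any 2 vertices must leave a connected graph.
```

Here we check the conclusion of the theorem on one graph rather than derive it; the same brute-force test applies to any small orbital graph.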
We now extend the results of \cite{ArCa} by addressing some cases left open. Our first technical result is also relevant to the $4$-et question. Recall that the orbits of $\mathop{\mathrm{PGL}}(2,q)$ on ordered distinct $4$-tuples are indexed by cross ratios from $F_q\setminus\{0,1\}$. The corresponding orbits on $4$-sets are then given by sets of usually six, but occasionally fewer, cross ratio values. If $\mathop{\mathrm{PGL}}(2,q) \le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$, it follows that the $G$-orbits on $4$-sets are also indexed by sets of cross ratios.
\begin{lemma}
Let $\mathop{\mathrm{PGL}}(2,q) \le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$, and $O$ an orbit of $G$ on $4$-sets. If the cross ratios associated with $O$ do not generate the multiplicative group $F_q^*$, then an element of $O$ does not witness the $4$-et property for $G$.
\label{l:crossratio}
\end{lemma}
\begin{proof} Let $M$ be the subgroup of $F_q^*$ generated by the cross ratios associated with $O$. Partition the projective line into $\{ \infty \}, \{0\}, M, F_q^*\setminus M$, and consider any section $(\infty, 0, x,y)$. One of the possible orderings results in a cross ratio of $x/y$. This element cannot lie in $M$, and hence cannot be one of the cross ratios indexing $O$. It follows that the elements of $O$ do not witness $4$-et. \hfill$\Box$
\end{proof}
\begin{cor}
Suppose that $p\ge 13$ is prime, and $p \not\equiv 11 \mbox{ mod } 12$. Then $\mathop{\mathrm{PGL}}(2,p)$ (and its subgroups) do not satisfy $4$-ut.
\end{cor}
\begin{proof} If $p \equiv 1 \mbox{ mod } 3$, then $F_p$ contains a primitive sixth root of unity $\omega$. An orbit with this cross ratio has the property that its other cross ratios lie in the group of sixth roots of unity.
The result for $\mathop{\mathrm{PGL}}(2,p)$ now follows directly from the lemma.
If $c$ is one of the cross ratios of an orbit, the corresponding subgroup $M$ of $F_p^*$ is generated by $c$, $c-1$, and $-1$. If $p \equiv 1 \mbox{ mod } 4$, then, as detailed in the remarks after \cite[Theorem 5.3]{ArCa}, there are values $c, c-1$ that are both squares in $F_p$. In addition, $-1$ is a square, and so these values generate a subgroup of the group of squares of $F_p^*$. The result now follows again from the lemma. \hfill$\Box$
\end{proof}
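Both the lemma and the sixth-roots-of-unity case of the corollary can be verified numerically. In the sketch below (our illustration; $p=13$ is just a convenient prime with $p\equiv 1 \bmod 3$, where $\omega=4$ has multiplicative order $6$), the cross ratios of the orbit of $\omega$ generate only the group of sixth roots of unity, a proper subgroup of $F_{13}^*$:

```python
def cross_ratio_orbit(c, p):
    """The (at most six) cross ratios indexing the PGL(2,p)-orbit of a
    4-set with cross ratio c."""
    inv = lambda a: pow(a, p - 2, p)  # inverse in F_p^*
    return {c % p, inv(c), (1 - c) % p, inv((1 - c) % p),
            (c * inv((c - 1) % p)) % p, ((c - 1) * inv(c)) % p}

def generated_subgroup(gens, p):
    """Multiplicative closure of gens inside F_p^*."""
    sub, frontier = {1}, set(gens)
    while frontier:
        g = frontier.pop()
        new = {(g * h) % p for h in sub} - sub
        sub |= new
        frontier |= new
    return sub

# p = 13: omega = 4 is a primitive sixth root of unity
# (4^2 = 3, 4^3 = 12 = -1 mod 13).
orbit = cross_ratio_orbit(4, 13)
subgroup = generated_subgroup(orbit, 13)
```

The subgroup has order $6 < 12$, so by the lemma no $4$-set in this orbit can witness $4$-et.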
In addition to the results above, we have settled several remaining open cases computationally. The groups $\mathop{\mathrm{PGL}}(2,7)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$, and $\mathop{\mathrm{PSL}}(2,q)\le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ for $q\in\{8,11,23,32,47\}$ satisfy $4$-ut, while $\mathop{\mathrm{PSL}}(2,7)$
does not. Finally, $\mathop{\mathrm{Sz}}(8)$ and $\mathop{\mathrm{Sz}}(8):3$ satisfy $3$-ut.
On the basis of our computations, we venture the conjecture that the converse
of Lemma~\ref{l:crossratio} is also true.
\section{Applications to semigroups}\label{app}
A semigroup $S$ is said to be (von Neumann) regular if for every $x\in S$ there exists $x'\in S$ such that $x=xx'x$. Some of the most important classes of semigroups (such as groups, inverse semigroups, completely regular semigroups, the endomorphism monoid of a vector space or of a set, etc.) are contained in the class of regular semigroups, and the theory is rich enough to allow some of the deepest and most interesting results in semigroup theory.
Regarding the general aim of using the powerful tools in group theory to extract information about semigroups (studying the interplay between the structure of a semigroup and its group of units), the ultimate goal is to classify the pairs $(G,t)$, where $G\le S_n$ is a group of permutations of some $n$-set and $t$ is a transformation of the same set, such that $\langle G,t\rangle$ has a prescribed property $P$. This problem, in its full generality, was solved for a particular instance of $P$ in \cite{AAC}. Given the current state of our knowledge, a full solution of this problem is totally hopeless when $P$ is the property of being a regular semigroup. Nevertheless, in previous investigations, it was possible to solve particular, yet very interesting, instances of this general problem. For example, we have the classification of the groups $G\le S_n$ such that $\langle G,t\rangle$ is regular, for all $t\in T_n$ \cite{ArMiSc}; then, resorting to a much deeper analysis, we found the classification of the groups $G$ that together with any rank $k$ map (for a fixed $k\le n/2$) generate a regular semigroup \cite{ArCa}. Our goal now is to move a step further, classifying the groups $G\le S_n$ such that $\langle G,t\rangle$ is regular, for all maps $t$ with image a given set.
Let $n$ be a natural number and let $X:=\{1,\ldots,n\}$ be a set. Let $k\le n/2$ and let $B\subseteq X$ be a $k$-set. Denote by $T_{n,k}$ the set of rank $k$ maps in $T_n$; denote by $T_{n,B}$ the set of maps in $T_n$ whose image is $B$. Of course $T_{n,B}\subseteq T_{n,k}$.
As said above, we have the classification of the groups $G\le S_n$ such that $\langle G,t\rangle$ is regular, for all $t\in T_{n,k}$; the goal now is to tackle the much more ambitious problem of classifying the groups $G\le S_n$ such that $\langle G,t\rangle$ is regular, for all $t\in T_{n,B}$ with $B$ being a given $k$-set.
The next result provides a necessary condition these latter groups must satisfy.
\begin{theorem}\cite[Theorem 2.3 and Corollary 2.4]{lmm}\label{aux1}
Let $G\le S_n$ and $t\in T_n$.
Then the following are equivalent:
\begin{itemize}
\item $t$ is regular in $\langle G,t\rangle$;
\item there exists $g\in G$ such that $\mathop{\mathrm{rank}}\nolimits(t)=\mathop{\mathrm{rank}}\nolimits(tgt)$;
\item the elements in $\langle G,t\rangle$ having the same rank as $t$ are regular.
\end{itemize}
\end{theorem}
Let $B$ be a finite set contained in $\{1,\ldots,n\}$. It follows from this result that if $B$ witnesses $|B|$-et, then any $t\in T_{n,B}$ is regular in $\langle G, t \rangle$. In fact,
if $t', s\in \langle G,t \rangle$, $\mathop{\mathrm{rank}}\nolimits(t')=\mathop{\mathrm{rank}}\nolimits (t's)$, and the image $I$ of $t's$ witnesses $|I|$-et, then $t'$ is regular in $\langle G, t \rangle$.
Conversely, if $\langle G,t\rangle$ is regular for all $t\in T_{n,B}$, then $G$ has the $|B|$-et property and $B$ witnesses it.
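The regularity criterion of Theorem~\ref{aux1} is easy to test by machine. The following toy sketch (our illustration only; $n=3$ is far below the degrees of interest, but it demonstrates the test itself) generates $\langle G,t\rangle$ for $G=S_3$ and a rank-$2$ map $t$, and checks that every element is regular:

```python
def compose(s, t):
    """Right action: x(st) = (xs)t, with maps stored as image tuples."""
    return tuple(t[x] for x in s)

def generate(gens):
    """All elements of the semigroup generated by gens."""
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in elems}
        new |= {compose(a, b) for a in elems for b in frontier}
        new -= elems
        elems |= new
        frontier = new
    return elems

def is_regular(x, semigroup):
    """x is regular iff x = x*y*x for some y in the semigroup."""
    return any(compose(compose(x, y), x) == x for y in semigroup)

# G = S_3 (two generating permutations) and t a rank-2 map with image {0, 1}.
S = generate({(1, 2, 0), (1, 0, 2), (0, 0, 1)})
```

In this degenerate example $\langle S_3, t\rangle$ is the full transformation monoid $T_3$, which is regular; the point is only to exhibit the mechanical check.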
This observation together with Theorem \ref{egregious} immediately implies the following.
\begin{cor}
Let $X=\{1,\ldots,n\}$, let $8\le k\le n/2$ and let $B\subseteq X$ be a $k$-set.
Let $G\le S_n$ be transitive. If $\langle G,t\rangle$ is regular for all $t\in T_{n,B}$, then $G$ is $A_n$ or $S_n$. Conversely, if $G$ is $A_n$ or $S_n$, then for any $k$-set $B$
and $t\in T_{n,B}$, $\langle G,t\rangle$ is regular.
\end{cor}
In order to handle the intransitive case and the remaining values of $k$, we need some more considerations. Fix $k$ such that $k\le n/2$. Let $G$ be a group possessing the $k$-et property, and suppose $B$ witnesses it. This means that any map $t\in T_{n,B}$ is regular in $\langle G,t\rangle$; in fact, by Theorem \ref{aux1} we know that every map of the same rank as $t$ is regular.
Therefore, in the semigroup $\langle G,t\rangle$ we have:
\begin{enumerate}
\item the elements of $G$, which are all regular;
\item the elements with rank $k$, which are all regular;
\item the elements whose rank is less than $k$.
\end{enumerate}
The conclusion is that the semigroup $\langle G,t\rangle$ will be regular if the lower-rank maps are regular. As constants are idempotents (hence regular), it follows that for $k=2$ the semigroup will be regular.
Regarding larger values of $k$, the easy way of ensuring regularity of the semigroup is to require the group to have the $(k-1)$-ut property. These observations are summarised in the following theorem.
\begin{theorem} \label{semimain}
Let $X=\{1,\ldots,n\}$, let $2\le k\le n/2$ and let $B\subseteq X$ be a $k$-set. Let $G\le S_n$ be a group possessing the $k$-et property (witnessed by $B$)
and, in addition, possessing the $(k-1)$-ut property. Then
$\langle G,t\rangle$ is regular, for all $t\in T_{n,B}$.
\end{theorem}
\begin{proof}
As seen above, the elements in $G$ and the rank $k$ elements are regular. The fact that the group has the $(k-1)$-ut property guarantees that the rank $k-1$ elements are also regular. In addition, by \cite{ArCa}, we know that a group with the $(k-1)$-ut property possesses the $(k-2)$-ut property. The result follows by repeated application of the foregoing argument.
\hfill$\Box$\end{proof}
By essentially the same argument, if $G$ satisfies $k$-et and $l$-ut for some $l <k$, it suffices to show that all elements with rank strictly between $k$ and $l$ are regular to establish regularity of $\langle G, t\rangle$. In fact, except for the intransitive groups and $2$ further examples, all relevant groups satisfy $k$-et and $(k-2)$-ut, reducing the problem to examining elements of rank $k-1$. The following lemma addresses these elements. Its additional assumptions also hold in nearly all cases.
\begin{lemma}\label{l:reg-1} Let $G\le S_n$ be $k$-et with witnessing set $B$, as well as $(k-1)$-et, but not $(k-1)$-ut. Let $\bar B \subset B$ be a subset that does
not witness $(k-1)$-et, such that no other $(k-1)$-subset of $B$ belongs to the orbit of $\bar B$.
\begin{enumerate}
\item \label{e:notreg} Let $t \in T_{n,B}$, and $\mathcal{P}=\{P_1,\dots, P_k\}$ be the kernel of $t$, such that
$P_kt = B \setminus \bar B$. Suppose that every $(k-1)$-subsection of $\mathcal{P}$ containing an element of $P_k$ does not lie in the orbit of $\bar B$, and that the following holds for some $g \in G$:
\begin{enumerate}
\item $Bg$ omits exactly the kernel class $P_k$.
\item The (unique) two elements of $Bg$ lying in the same class of $\mathcal{P}$ are in $\bar Bg$.
\end{enumerate}
Then $tgt$ has rank $k-1$ and is not regular in $\langle G,t\rangle$.
\item \label{e:reg} Assume that the orbit of $\bar B$ is the only one not witnessing $(k-1)$-et. Suppose that every $k$-partition $\mathcal{P}$ of $\Omega$ satisfies the following condition:
If there exists a part $P$, such that every $(k-1)$-subsection of $\mathcal{P}$ intersecting $P$ witnesses $(k-1)$-et, then every $(k-1)$-subsection of $\mathcal{P}$ not intersecting $P$ does not witness $(k-1)$-et.
Then
for every $t \in T_{n,B}$, all rank $k-1$ elements in $\langle G,t \rangle$ are regular.
\end{enumerate}
\end{lemma}
\begin{proof} Assume first that the conditions of (\ref{e:notreg}) hold for $g \in G$, and consider $tgt$.
Note that $tgt$ has image $\bar B$ and rank $k-1$. We claim that it is not regular in $\langle G, t\rangle$. For let $I$ be in the orbit of $\bar B$. Our conditions on $\mathcal{P}$ imply that $It$ either equals $\bar B$, and hence lies in the same orbit, or has rank less than $k-1$. By induction,
the image of $tgts$ satisfies one of these two conditions for all $s\in \langle G,t\rangle$. Now, the kernel of $tgt$ is obtained from $\mathcal{P}$ by merging $P_i$ and $P_j$ with $i,j < k$. It follows that every section of the kernel of $tgt$ contains an element of $P_k$ and hence is not in the orbit of $\bar B$. Thus the image of $tgts$ is not a section of the kernel of $tgt$, and so $tgtstgt$
has rank less than $k-1$, for all $s\in \langle G,t\rangle$. Thus $tgt$ is not regular.
Now assume that the conditions of (\ref{e:reg}) hold. Let $t\in T_{n,B}$ and $t' \in \langle G,t \rangle$ have rank $k-1$. Let $\mathcal{P}$ be the kernel of $t$.
If the image $I$ of $t'$ witnesses $(k-1)$-et, then $t'$ is regular.
So assume that $I$ does not witness $(k-1)$-et. Suppose there is a $(k-1)$-subsection $S$ of $\mathcal{P}$ that does not witness $(k-1)$-et, and that $St \ne \bar B$. As only one orbit of $G$ does not witness $(k-1)$-et, there exists $g \in G$ mapping $I$ to $S$. Then $t'gt$ has image witnessing $(k-1)$-et, and $t'$ is regular.
Otherwise, there is a kernel class $P$, the preimage of $B \setminus \bar B$, such that every $(k-1)$-subsection of $\mathcal{P}$ intersecting $P$ witnesses $(k-1)$-et. Hence every $(k-1)$-subsection of $\mathcal{P}$ not intersecting $P$ does not witness $(k-1)$-et. Write $t'$ as a product of the generators in $G \cup\{t\}$.
In this product, consider the first
occurrence of a subterm $tg't$ such that $tg't$ has rank $k-1$. If the image $I'$ of $tg't$ witnesses $(k-1)$-et, then $I't \ne \bar B$, and hence also witnesses $(k-1)$-et. Clearly,
so does $I'g$ for any $g\in G$, which implies that $I$ witnesses $(k-1)$-et, contrary to assumption. Hence $I'=\bar B$. However, this implies that the image $Bg'$ of $tg'$ intersects all kernel classes other than $P$. Thus $Bg'$ is the union of two sections of $\mathcal{P}\setminus \{P\}$. One of these sections is not $\bar Bg'$, and hence witnesses $(k-1)$-et,
contrary to our assumption. Hence this case cannot occur, and $t'$ is regular. \hfill$\Box$
\end{proof}
We have automated part of the search process required by the second part of the lemma. Starting with a partial partition containing the elements of $B$ as singletons, only the part $B \setminus \bar B$ can play the role of $P$. We extend the partial partition by single elements to obtain all partitions in which every $(k-1)$-subsection intersecting $P$ witnesses $(k-1)$-et, pruning partial partitions that already violate this condition. All such partitions can then be checked to see if they satisfy the additional conditions of Lemma \ref{l:reg-1}.
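The pruning search just described can be organised as a standard backtracking scheme. The following sketch abstracts the group-specific test behind a `violates` predicate (the names and the toy predicate are ours; in the actual search the predicate checks the $(k-1)$-et witnessing condition via orbit computations):

```python
def extend_partitions(parts, remaining, violates):
    """Distribute the elements of `remaining` over the existing parts,
    one at a time, pruning any partial partition for which `violates`
    returns True.  Yields the surviving complete partitions."""
    if not remaining:
        yield [list(p) for p in parts]
        return
    x, rest = remaining[0], remaining[1:]
    for part in parts:
        part.append(x)
        if not violates(parts):   # prune as early as possible
            yield from extend_partitions(parts, rest, violates)
        part.pop()

# Toy predicate standing in for the group-theoretic condition:
# forbid any part of size greater than 2.
too_big = lambda parts: any(len(p) > 2 for p in parts)
found = list(extend_partitions([[1], [2]], [3, 4], too_big))
```

With the toy predicate, only the two partitions that pair the new elements with different singletons survive the pruning.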
We will now address the individual groups satisfying $k$-et, starting with the intransitive case.
\begin{prop} \label{p:regintrans} Let $2 < k\le n/2$, and let $G\le S_n$ be an intransitive group that satisfies $k$-et with witnessing set $B$. Then $\langle G,t\rangle$ is regular, for all $t\in T_{n,B}$.
\end{prop}
\begin{proof} By Proposition \ref{p:intrans}, $G$ has two orbits $O, O'$ of sizes $1$ and $n-1$, and acts $(k-1)$-homogeneously on $O'$. It is easy to see that $B$ witnesses $k$-et
if and only if it contains the unique element $x$ in $O$.
Consider an element $t' \in \langle G,t\rangle$. If $t'$ has rank $k$ or $n$, it is regular, so assume it has rank smaller than $k$. If the image $B'$ of $t'$ contains $x$, then $B'$ witnesses $|B'|$-et, and $t'$ is regular.
If $B'$ does not contain $x$ then $x \notin \{x\}t^{-1}$, and $yt=x$ for some $y \ne x$. By $|B'|$-homogeneity, there exists a $g\in G$ mapping $B'$ to a subsection of the kernel of $t$ containing $y$.
Then $t'gt$ has rank $|B'|$ and its image contains $x$ and hence witnesses $|B'|$-et. Again, $t'$ is regular, and hence $\langle G,t\rangle$ is regular. \hfill$\Box$
\end{proof}
The next theorem fully solves the case of $k=2$.
\begin{theorem} \label{k=2}
Let $n\ge 4$, $X=\{1,\ldots,n\}$, let $G\le S_n$ and $B\subseteq X$ be a $2$-set. The following are equivalent:
\begin{itemize}
\item $\langle G,t\rangle$ is regular, for all $t\in T_{n,B}$;
\item $G$ has the $2$-et property and $B$ witnesses it.
\end{itemize}
The possible sets $B$ are:
\begin{itemize}
\item $G$ fixes one point, say $a$, and is transitive on the remaining points, in which case any $B$ containing $a$ works;
\item $G$ has two orbits on $X$, say $A_1$ and $A_2$, and is transitive on $A_1\times A_2$, in which case any $B$ intersecting both $A_1$ and $A_2$ works;
\item $G$ is transitive on $X$ and has at least one connected orbital graph, in which case $B$ can be any edge in one of the connected orbital graphs.
\end{itemize}
\end{theorem}
\begin{proof}
The first part follows from Theorem \ref{aux1} and the observation that all constants are idempotent (and hence regular). The second part follows from Proposition \ref{prop2.4} and the observations after it.
\hfill$\Box$
\end{proof}
The next theorem fully solves the case of $k=3$.
\begin{theorem} \label{k=3}
Let $n\ge 6$, $X=\{1,\ldots,n\}$, let $G\le S_n$ and $B\subseteq X$ be a $3$-set. Then $\langle G,t\rangle$ is regular for all $t\in T_{n,B}$ if and only if $G$ has the $3$-et property and $B$ witnesses it.
\end{theorem}
\begin{proof}
If $\langle G,t\rangle$ is regular, for all $t\in T_{n,B}$, then by Theorem \ref{aux1}, $G$ must possess the $3$-et property and $B$ must witness it.
Conversely, let $G$ be $3$-et and $B$ be a witnessing set.
The groups with the $3$-et property either are non-transitive, or transitive. If $G$ is non-transitive, the result follows from Proposition \ref{p:regintrans}.
If $G$ is transitive, either it is primitive or imprimitive. In the latter case, by Proposition \ref{p:2blocks}, $G$ has two blocks of imprimitivity and $B$ has two points from one of the blocks and a point from the other. Say that the blocks are $A_1:=\{x,z,\ldots\}$ and $A_2:=\{y,\ldots\}$, and $B:=\{x,y,z\}$. As $n\ge 6$ it follows that $|A_1|=|A_2|\ge 3$.
Note that such a group cannot be totally imprimitive. By Proposition \ref{p:2et} and its proof, it follows that $G$ has the $2$-et property, and that this is witnessed exactly by the sections of $A_1,A_2$, in particular by $\{x,y\}, \{y,z\}$.
Thus $G$ satisfies the conditions of Lemma \ref{l:reg-1}. We want to show that it also satisfies the additional condition of part (\ref{e:reg}) of the lemma.
Consider any $3$-partition $\mathcal{P}$, and single out a part $P$. Suppose that the parts other than $P$ have a section $\{a,b\}$ that is also a section of $A_1,A_2$, and hence witnesses $2$-et. Let $c \in P$; then either $\{a,c\}$ or $\{b,c\}$ lies in a block of imprimitivity and hence does not witness $2$-et.
By Lemma \ref{l:reg-1}(\ref{e:reg}), all elements of rank $2$ in
$\langle G, t\rangle$ are regular.
As permutations, constant maps, and all maps of rank $3$ are regular (the latter by $3$-et), $\langle G, t\rangle$ is regular for all $ t\in T_{n,B}$.
Now, suppose that $G$ is primitive, satisfies the $3$-et property with witness $B$, and that $t \in T_{n,B}$. Then by \cite[Theorem 1.8]{ArCa}, $G$ also satisfies the $2$-ut property. It follows that all maps of rank $3$ or $2$ in $\langle G, t\rangle$ are regular, and hence that $\langle G, t\rangle$ is regular.
\hfill$\Box$
\end{proof}
The possible sets $B$ are:
\begin{itemize}
\item if $G$ is intransitive (so it has a unique fixed point), any $3$-set containing the fixed point;
\item if $G$ is transitive but imprimitive (so it has two blocks of imprimitivity), any set containing two points from one block and one from the other;
\item if $G$ is primitive, then any $3$-set witnessing $3$-et.
\end{itemize}
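To make the witnessing condition concrete, the following Python sketch (our illustration only; the computations reported in this paper were carried out in GAP) brute-forces whether a candidate set $B$ witnesses $k$-et for a small permutation group, by checking that every $k$-partition admits a translate of $B$ as a section.

```python
from itertools import permutations

def k_partitions(omega, k):
    """Yield all partitions of the list `omega` into exactly k non-empty parts."""
    if not omega:
        if k == 0:
            yield []
        return
    first, rest = omega[0], omega[1:]
    for part in k_partitions(rest, k - 1):
        yield [[first]] + part          # `first` opens a new part
    for part in k_partitions(rest, k):
        for i in range(len(part)):      # `first` joins an existing part
            yield part[:i] + [part[i] + [first]] + part[i + 1:]

def is_transversal(s, partition):
    """A k-set is a section (transversal) if it meets every part exactly once."""
    return all(len(set(s) & set(p)) == 1 for p in partition)

def witnesses_k_et(group, omega, b):
    """True iff for every |b|-partition of omega some g in `group` maps b to a section."""
    k = len(b)
    for partition in k_partitions(list(omega), k):
        if not any(is_transversal([g[x] for x in b], partition) for g in group):
            return False
    return True

# Toy example: S_4 acting on {0,1,2,3} is 2-homogeneous, so any 2-set witnesses 2-et.
s4 = [dict(enumerate(p)) for p in permutations(range(4))]
print(witnesses_k_et(s4, range(4), [0, 1]))  # True
```

The trivial group, by contrast, fails already for the partition $\{\{0,1\},\{2,3\}\}$ with $B=\{0,1\}$.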
We next address the case $k=4$. We start with a list of negative results.
\begin{lemma}\label{l:non4reg-line}
Suppose that $G$ is a $4$-et group of degree $n$ of one of the following types:
\begin{enumerate}
\item $\mathop{\mathrm{PSL}}(3,q) \le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(3,q)$, $n=q^2+q+1$, where $q\ge 3$ is a prime power;
\item $G\le \mathop{\mathrm{A}\Gamma\mathrm{L}}(2,q)$, $n=q^2$, where $q\ge 3$ is a prime power;
\item $\mathop{\mathrm{PSU}}(3,q) \le G \le \mathop{\mathrm{P\Gamma U}}(3,q)$, $n=q^3+1$, for $3\le q\le 16$.
\end{enumerate}
Let $B$ be a set witnessing $4$-et. Then there exists a $t \in T_{n,B}$ such that $\langle G,t\rangle$ is not regular.
\end{lemma}
\begin{proof}
All listed groups preserve Steiner systems of type $(2,l,n)$, namely those of projective lines, affine lines, and those induced by projective lines. We will refer to any block of such a system as a line. Our numerical constraints guarantee that each line has at least $3$ points and that there are at least $5$ lines.
If a set $B$ witnessing $4$-et exists, it must witness weak~$4$-et. Such a set consists of $3$ points $x_1,x_2,x_3$ lying on a line $L$, and an additional point $x_4 \notin L$, and thus satisfies the conditions of Lemma \ref{l:reg-1} with $\bar B=\{x_1,x_2,x_3\}$. Pick a point
$y$ that does not lie on any line containing two points from $B$. Let $K$ be the line through $y$ and $x_4$, and $K'$ the line through $x_1$ and $y$. Now define $t$ with image $B$ by $x_1t^{-1}=K\setminus\{y\}$, $x_2t^{-1}=K'\setminus\{y\}$, $x_3t^{-1}=\Omega\setminus(K\cup K')$, $x_4t^{-1}=\{y\}$.
Then $t$ satisfies the conditions of Lemma \ref{l:reg-1}(\ref{e:notreg}) with $g$ the identity. Hence $t^2$ is not regular.
\hfill$\Box$
\end{proof}
\begin{lemma} \label{l:non4reg-HS}
Suppose that $G$ is $HS$ or $HS:2$ with its action on $100$ points.
Let $B$ be a set witnessing $4$-et. Then there exists a $t \in T_{n,B}$ such that $\langle G,t\rangle$ is not regular.
\end{lemma}
\begin{proof}
The elements of $G$ act as automorphisms of the triangle-free, strongly regular Higman-Sims graph $\Gamma$.
As it satisfies weak~$4$-et, $B$ consists of a $2$-path $x_1-x_2-x_3$ and an additional point $x_4$ not adjacent to any other elements of $B$.
In $\Gamma$, non-adjacent vertices have $6$ common neighbours, hence we may pick a vertex $y$ adjacent to $x_2$ and $x_4$, with $y\ne x_1, x_3$.
Now define $t$ with image $B$ by $x_1t^{-1}=\{x_4\}$, $x_2t^{-1}=\{y\}$, $x_3t^{-1}=\{x_2\}$, $x_4t^{-1}=\Omega \setminus \{y,x_2,x_4\}$.
Consider $t^2$. By construction, $t^2$ has image $\{x_1,x_3,x_4\}$ and kernel classes $\{y\}, \{x_2,x_4\}, \Omega \setminus \{y,x_2,x_4\}$.
The vertices in the image of $t^2$ are pairwise non-adjacent, while every vertex in $\{x_2,x_4\}$ is adjacent to $y$.
Hence $t$ satisfies the conditions of Lemma \ref{l:reg-1}(\ref{e:notreg}), with $\bar B=\{x_1,x_3,x_4\}$, and $g$ being the identity. By the lemma, $t^2$ is not regular in $\langle G,t\rangle$. \hfill$\Box$
\end{proof}
\begin{theorem}Suppose that $G\le S_n$, $n\ge 8$, has $4$-et, and that $B$ witnesses it. In addition, assume that $G\ne \mathop{\mathrm{Sz}}(32):5$ ($n=1025$). Then
$\langle G,t\rangle$ is regular for every $t \in T_{n,B}$, if and only if $G$ is intransitive, $G=\mathop{\mathrm{AGL}}(1,13)$ ($n=13$), or $3$-ut.
\end{theorem}
\begin{proof} The list of groups satisfying (or potentially satisfying) $4$-et is given in Theorem \ref{th:4-et}. If $G$ is intransitive, the result follows from Proposition \ref{p:regintrans}, and if $G$ has the $3$-ut property, from Theorem \ref{semimain}. For $G=\mathop{\mathrm{AGL}}(1,13)$, we have checked by computer that
$G$ satisfies the conditions of Lemma \ref{l:reg-1}(\ref{e:reg}) for $B$ in both orbits witnessing $4$-et. By the lemma, all elements of rank $3$ in $\langle G,t\rangle$ are regular. As $\mathop{\mathrm{AGL}}(1,13)$ is also $2$-ut, $\langle G,t\rangle$ is regular.
All remaining groups listed in Theorem \ref{th:4-et} are excluded by Lemmas \ref{l:non4reg-line} and \ref{l:non4reg-HS}. \hfill$\Box$ \end{proof}
Concretely, the pairs $(G,B)$ introducing regularity in this way are those satisfying the following conditions (in the last two cases, only if $G$ has the $4$-et property). If no set $B$ is given, then either not all witnesses are known, or we could not find a suitable geometric description.
\begin{enumerate}
\item $G$ fixes a point $x$ and acts $3$-homogeneously on $\Omega\setminus\{x\}$, $B$ is any $4$-set containing $x$;
\item $G=\mathop{\mathrm{AGL}}(d,2)$, $d\ge 3$, $n=2^d$, $B$ is any affine independent $4$-set;
\item $G$ is one of $\mathop{\mathrm{AGL}}(1,8)$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,8)$ ($n=8$), $2^4.A_7$ ($n=16$), $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,32)$ ($n=32$), $B$ is a $\mathop{\mathrm{GF}}(2)$-affine independent $4$-set;
\item $G$ is one of
$\mathop{\mathrm{AGL}}(1,11)$ ($n=11$), $\mathop{\mathrm{AGL}}(1,13)$ ($n=13)$, $2^6: G_2(2)$, $2^6: \mathop{\mathrm{PSU}}(3,3)$ ($n=64$), or $\mathop{\mathrm{Sz}}(8).3$ ($n=65$);
\item $G=2^d: \mathop{\mathrm{Sp}}(d,2)$, $d \ge4$ and even ($n=2^d$), $B$ is a mixed $4$-set;
\item $\mathop{\mathrm{PSL}}(2,q)\le G \le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$ for prime powers $q$ with $7 \le q \le 49$ ($n=q+1$);
\item $G=\mathop{\mathrm{Sp}}(2d,2)$, $d \ge 3$, in either of its $2$-transitive representations
($n=2^{2d-1}\pm2^{d-1}$), $B$ is a mixed $4$-set;
\item $G$ is one of $\mathop{\mathrm{PSL}}(2,11)$ ($n=11$), $M_{22}$, $M_{22}:2$ ($n=22$), $B$ is not contained in any line/block of its biplane geometry/Steiner system $S(3,6,22)$;
\item $G=Co_3$ ($n=276$), $B$ is a mixed $4$-set;
\item $G$ is $4$-transitive or one of $\mathop{\mathrm{PSL}}(2,8)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,8)$ ($n=9$), $M_{11}$ ($n=12$), $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$ ($n=33$), or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$ ($n=129$), $B$ is any $4$-set;
\item $\mathop{\mathrm{PSL}}(2,q) \le G\le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$, $G \ne \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$, for some prime power $q \ge 51$ ($n=q+1$);
\item $G=\mathop{\mathrm{Sz}}(8)$ ($n=65$) or $G=HS$ ($n=176$).
\end{enumerate}
\begin{lemma}\label{l:pgl217k5} Let $G=\mathop{\mathrm{PGL}}(2,17)$ ($n=18$), and $B$ a set witnessing $5$-et. Then
$\langle G,t\rangle$ is regular for every $t \in T_{18,B}$.
\end{lemma}
\begin{proof}
The group $G=\mathop{\mathrm{PGL}}(2,17)$ is $3$-ut, has one orbit not witnessing $4$-et, and $2$ orbits that witness $5$-et. The witnessing sets from one of these orbits contain $4$ subsets that witness $4$-et. In the other orbit there are $3$ such subsets. A computerised search (similar to the one described after Lemma \ref{l:reg-1}) shows that for every $5$-partition, there exist
at least $3$ different collections of $4$ parts that each contain a section not witnessing $4$-et.
If $t' \in \langle G,t\rangle$ of rank $4$ has an image witnessing $4$-et, then it is regular. So assume otherwise. In the image of $t$, at least $3$ subsets
witness $4$-et. Applying the result of our computer search to the kernel of $t$, we see that at least one of those $4$-et witnesses is the image of a $4$-set $B'$ not witnessing $4$-et. As there is only one orbit of non-witnesses, there exists $g \in G$ mapping the image of $t'$ to $B'$. Thus $t'gt$ has an image that witnesses $4$-et, and $t'$ is regular.
As $G$ possesses the $3$-ut property, the result follows. \hfill$\Box$
\end{proof}
\begin{lemma}\label{l:pgl225} Let $G=\mathop{\mathrm{PGL}}(2,25)$, $\mathop{\mathrm{PXL}}(2,25)$, or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,25)$ ($n=26$) and $B$ a set witnessing $5$-et. Then
$\langle G,t\rangle$ is regular for every $t \in T_{26,B}$.
\end{lemma}
\begin{proof} The group $G$ preserves a circle geometry with circles of size $6$. It is also $3$-ut, hence it suffices to consider $t' \in \langle G,t\rangle$ of rank $4$. If the image of $t'$ witnesses $4$-et, then $t'$ is regular. Otherwise the image belongs to one of two orbits $O, O'$ on $4$-sets. One orbit, say $O$, consists of $4$-subsets of circles. The witnessing set $B$
contains exactly one member of $O, O'$ each, and $3$ additional $4$-subsets that witness $4$-et.
Assume first that the image of $t'$ lies in $O'$, and consider the collection $B^*$ of one-element subsets of $B$.
By a computation similar to the one described after Lemma \ref{l:reg-1}, we have confirmed that it is not possible to extend $B^*$ to a $5$-partition of $\Omega$ in which every $4$-subsection from $O'$ lies in those parts whose intersection with $B$ does not witness $4$-et. Hence any partition has at least two $O'$-subsections and one $O$-subsection that pairwise intersect different parts.
We now apply this result to the kernel of $t$. If one of the $O'$-subsections in the kernel has an image that witnesses $4$-et, then for suitable $g \in G$, $t'gt$ has the same witnessing image, and $t'$ is regular. Otherwise, the kernel has only two such subsections, which map to elements of $O$ and $O'$, respectively.
Hence there exists $g_1 \in G$ such that $t'g_1t$ has an image in $O$.
Moreover in this case, there is an $O$-subsection that is
mapped to an image that witnesses $4$-et. So for suitable $g_2\in G$, $t'g_1tg_2t$ also has this image, and $t'$ is regular.
Now let $t'$ have an image in $O$. A similar search reveals that any $5$-partition will either have at least two $4$-subsections from $O$ that intersect different parts, or consist of $4$ parts that partition a circle and one part containing the remaining elements. If the kernel of $t$ belongs to the first case, there exists $g \in G$ such that the image of $t'gt$
either lies in $O'$ or witnesses $4$-et. In the latter case, $t'$ is regular. In the former case, we can repeat the above argument to show that $t's$, for suitable $s\in\langle G,t\rangle$, has an image witnessing $4$-et, and hence $t'$ is regular as well.
Finally, if the kernel of $t$ consists of the $4$-partition of a circle and an additional part, then in order for $t'$ to have an image in $O$, the $4$-subset of $B$ in $O$ cannot be the image of the classes that partition the circle. However, in this case, there exists $g \in G$ that maps the image of $t'$ to a section of the classes partitioning the circle. Thus $t'gt$ has an image that witnesses $4$-et or lies in $O'$, and the result follows as above. \hfill$\Box$
\end{proof}
\begin{lemma}\label{l:pgl227}
Let $G=\mathop{\mathrm{PGL}}(2,27)$ or $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$ ($n=28$), and $B$ a witness for $5$-et. Then there exists $t \in T_{28,B}$ such that $\langle G, t\rangle$ is not regular.
\end{lemma}
\begin{proof}
In GAP, $G=\mathop{\mathrm{PGL}}(2,27)$ and $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$ are both represented on the set $\{1,2,\dots, 28\}$. With regard to this representation, let $t$ be given by
$$t^{-1}(5)= \{5, 3, 15, 25\}, t^{-1}(13) =\{8, 7, 14, 19\}, t^{-1}(18)=\{23\}, t^{-1}(19)=\{10\},$$
$$t^{-1}(23)=\{22, 1, 2, 4, 6, 9, 11, 12, 13,
16, 17, 18, 20, 21, 24, 26, 27, 28\}.
$$
Then $t$ can be checked (for both groups) to satisfy the conditions of Lemma \ref{l:reg-1}(\ref{e:notreg}) with (in the notation of the lemma) $g$ the identity and $\{10\}$ the kernel class mapped to
$B \setminus \bar B$. Hence $t^2$ is not regular in $\langle G, t\rangle$. As $G$ has only one orbit witnessing $5$-et, the result follows.
\hfill$\Box$
\end{proof}
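As a purely mechanical sanity check, independent of the group-theoretic part of the argument, the preimage data above determine a well-defined transformation of $\{1,\dots,28\}$: a few lines of Python confirm that the listed kernel classes partition the domain and that the image is the $5$-set $\{5,13,18,19,23\}$.

```python
# Kernel classes of t, keyed by image point, exactly as listed in the proof.
preimages = {
    5:  {5, 3, 15, 25},
    13: {8, 7, 14, 19},
    18: {23},
    19: {10},
    23: {22, 1, 2, 4, 6, 9, 11, 12, 13, 16, 17, 18, 20, 21, 24, 26, 27, 28},
}

# The classes must be pairwise disjoint and cover {1, ..., 28}.
classes = list(preimages.values())
assert sum(len(c) for c in classes) == 28
assert set().union(*classes) == set(range(1, 29))

# Build t as a point -> image dictionary; its image is a 5-set.
t = {x: img for img, cls in preimages.items() for x in cls}
print(sorted(set(t.values())))  # [5, 13, 18, 19, 23]
```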
\begin{theorem}Suppose that $G\le S_n$, $n\ge 10$, has $5$-et, and that $B$ witnesses it. Then
$\langle G,t\rangle$ is regular for every $t \in T_{n,B}$ if and only if $G \ne \mathop{\mathrm{PGL}}(2,27), \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$.
\end{theorem}
\begin{proof} The list of groups satisfying (or, in the case of $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$, potentially satisfying) $5$-et is given in Theorem \ref{th567}. If $G$ is intransitive, the result follows from Proposition \ref{p:regintrans}, if $G$ is $4$-ut, from Theorem \ref{semimain}, if $G=\mathop{\mathrm{PGL}}(2,17)$, from Lemma \ref{l:pgl217k5}, if $G=\mathop{\mathrm{PGL}}(2,25)$,
$\mathop{\mathrm{PXL}}(2,25)$, or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,25)$, from Lemma \ref{l:pgl225}, and if $G=\mathop{\mathrm{PGL}}(2,27)$ or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$,
from Lemma \ref{l:pgl227}.
In all remaining cases, we have checked by computer that the groups satisfy the conditions of Lemma \ref{l:reg-1}(\ref{e:reg}). As these groups are also $3$-ut, the result follows. \hfill$\Box$ \end{proof}
That is, the groups $G$ introducing regularity in this way are those satisfying the following conditions:
\begin{enumerate}
\item $G$ fixes one point and acts $4$-homogeneously on the remaining ones, and
$B$ contains the fixed point;
\item $G$ is one of $\mathop{\mathrm{PSL}}(2,11)$, $M_{11}$, $\mathop{\mathrm{PGL}}(2,11)$ ($n=12$), $\mathop{\mathrm{PGL}}(2,13)$ ($n=14$), $\mathop{\mathrm{PGL}}(2,17)$ ($n=18$), $\mathop{\mathrm{PGL}}(2,19)$ ($n=20$), $\mathop{\mathrm{PGL}}(2,23)$ ($n=24$),
$\mathop{\mathrm{PGL}}(2,25)$,
$\mathop{\mathrm{PXL}}(2,25)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,25)$ ($n=26$), $\mathop{\mathrm{PSL}}(2,32)$ ($n=33$);
\item $G$ is one of $M_{10}$, $\mathop{\mathrm{PGL}}(2,9)$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,9)$ ($n=10$), $\mathop{\mathrm{PSL}}(2,11)$ ($n=11$), $\mathop{\mathrm{PSL}}(2,16)$, $\mathop{\mathrm{PSL}}(2,16):2$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,16)$ ($n=17$), $M_{22}$, $M_{22}:2$ ($n=22$), and $B$ contains exactly $4$ points from a circle/line/block of its circle geometry/biplane geometry/Steiner system $S(3,l,n)$;
\item $G$ is one of $M_{11}$ ($n=11$), or $M_{23}$ ($n=23$), and $B$ is not contained in or equal to a block of the Steiner system $S(4,l,n)$;
\item $G$ possesses $5$-ut, and hence is alternating, symmetric or one of
$M_{12}$ ($n=12$), $M_{24}$ ($n=24$), or $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2, 32)$ ($n=33$), and $B$ is arbitrary;
\item $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$ ($n=129$), provided that it satisfies $5$-et.
\end{enumerate}
\begin{lemma} \label{l:regAGL42}Let $G=\mathop{\mathrm{AGL}}(4,2)$ or $G=2^4:A_7$ ($n=16$), and $B$ a set witnessing $6$-et. Then there exists a $t\in T_{16,B}$ such that $M=\langle G,t\rangle$ is not regular.
\end{lemma}
\begin{proof}
For either group, $B$ consists of $5$ affine independent points plus a point forming a plane with $3$ of the other elements. Say $p_1,p_2,p_3,p_4 \in B$ form a plane, and
$q,q'\in B$ are the additional points. Moreover, $G$ acts transitively on those $5$-sets that contain an affine plane.
Consider $t \in T_{16,B}$ that is the identity on $\{p_1,\dots,p_4,q\}$, maps $q'$ to $q$, and all additional elements to $q'$.
Let $q''$ be the fourth element of the plane containing $p_3,p_4,q$, and $g \in G$ map $\{p_1,\dots, p_4\}$ to $\{p_3,p_4,q,q''\}$ and $q$ to $p_2$.
Then $t'=t^2gt$ has image $B\setminus\{p_1\}$, which is affine independent. Its kernel consists of $4$ singletons forming an affine plane, and another kernel class containing the remaining elements. This kernel will not admit an affine independent section. Now if $I$ is any $5$-set of affine independent points, so is $Ig$, for any $g \in G$. Moreover,
$It$ will either be affine independent or have rank at most $4$. Hence $Is$ satisfies one of these conditions, for any $s \in \langle G,t \rangle$. It follows that $t'st'$
has rank at most $4$, and so $t'$ is not regular.
\hfill$\Box$
\end{proof}
\begin{lemma}\label{l:pgl217k6} Let $G=\mathop{\mathrm{PGL}}(2,17)$ ($n=18$) and $B$ a set witnessing $6$-et. Then $\langle G, t\rangle$ is regular, for each $t\in T_{18,B}$.
\end{lemma}
\begin{proof} The group $G$ is $3$-ut, has one orbit on $4$-sets that fails to witness $4$-et, two orbits $O, O'$ on $5$-sets that fail to witness $5$-et, and one orbit that witnesses $6$-et.
Three $5$-subsets of $B$ do not witness $5$-et, with two of those belonging to the same orbit, say $O'$. In addition, three $4$-subsets of $B$ do not witness $4$-et.
It suffices to show regularity for those $t' \in \langle G, t\rangle$ of rank $4$ or $5$ whose images do not witness $4$-et or $5$-et. For $t'$ of rank $5$ we use a series of computations similar to the one in Lemma \ref{l:pgl225}. Consider first the case that the image of $t'$ lies in the orbit $O$ that contains only one subset of $B$ (in GAP this orbit is represented by
$[4, 6, 10, 13, 17]$). Computation shows that every $6$-partition of $\Omega$ contains at least two $O$- and two $O'$-subsections, which pairwise intersect different parts of the partition. Applying this to the kernel of
$t$, we see as in Lemma \ref{l:pgl225} that for suitable $g, g_1,g_2 \in G$, the image of either $t'gt$ or $t'g_1tg_2t$ witnesses $5$-et, implying the regularity of $t'$.
If the image of $t'$ belongs to the orbit $O'$, we can similarly confirm that for any $6$-partition, there are at least three $O'$-subsections that pairwise intersect different parts.
Hence $t'gt$, for suitable $g \in G$, has an image that either witnesses $5$-et or belongs to $O$, which implies that $t'$ is regular.
Finally, for $t'$ of rank $4$, we similarly checked that the subsections of the kernel of $t$ that do not witness $4$-et include one whose image under $t$ does witness $4$-et. The result follows. \hfill$\Box$
\end{proof}
\begin{lemma}\label{l:pgaml227k6} Let $G =\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$ ($n=28$), and $B$ witness $6$-et. Then there exists $t\in T_{28,B}$ such that $\langle G, t\rangle$ is not regular.
\end{lemma}
\begin{proof} The group $G$ preserves a circle geometry with circles of size $4$. From this, it follows easily that $B$ contains exactly one circle $C=\{c_1,c_2,c_3,c_4\}$.
Let $e,d$ be the other elements of $B$. Moreover, as can be checked computationally, the $5$-sets containing exactly one circle form an orbit of $G$.
Let $f$ be the additional element in the circle containing $\{c_3,c_4,d\}$. Let $t\in T_{28,B}$ map $C$ identically, map $f$ to $e$, and every other element to $d$. As they lie in the same orbit on $5$-sets, there exists a $g\in G$ that maps $C\cup\{d\}$ to $\{c_2, c_3,c_4,d,f\}$. Consider $t'=t^2gt$. We claim that $t'$ is not regular in $\langle G, t\rangle$.
Note first that if a $5$-set $S$ does not contain a circle, then neither do $St$ and $Sg$, for any $g\in G$. The image $\{c_2, c_3,c_4,d,e\}$ of $t'$ is such a set, and hence
the image of $t's$ is without circle as well, for any $s\in \langle G, t\rangle$. However, the kernel of $t'$ has $4$ singleton sets corresponding to the elements of $C$.
It follows that $t'st'$ has rank at most $4$, for any $s \in \langle G, t\rangle$, and so is not regular.\hfill$\Box$
\end{proof}
\begin{theorem}Suppose that $G\le S_n$, $n\ge 12$, has $6$-et, and that $B$ witnesses it. Then
$\langle G,t\rangle$ is regular for every $t \in T_{n,B}$, if and only if $G$ is intransitive, $\mathop{\mathrm{PGL}}(2,17)$ ($n=18$), $M_{11}$ ($n=12$), $M_{23}$ ($n=23$), or $5$-ut.
\end{theorem}
\begin{proof} The list of groups satisfying $6$-et is given in Theorem \ref{th567}. If $G$ is intransitive, the result follows from Proposition \ref{p:regintrans}, and if $G$ is $5$-ut, from Theorem \ref{semimain}. If $G=\mathop{\mathrm{AGL}}(4,2)$ or $2^4:A_7$, the result follows from Lemma \ref{l:regAGL42}, if $G=\mathop{\mathrm{PGL}}(2,17)$,
from Lemma \ref{l:pgl217k6}, and if $G=\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,27)$, from Lemma \ref{l:pgaml227k6}. For $G=M_{11}$ ($n=12$), or $G=M_{23}$, we have checked by computer that
$G$ satisfies the conditions of Lemma \ref{l:reg-1}(\ref{e:reg}). As these groups are also $4$-ut, the result follows. \hfill$\Box$ \end{proof}
That is, the groups $G$ introducing regularity in this way are those satisfying the following conditions:
\begin{enumerate}
\item $G$ fixes one point and acts $5$-homogeneously on the remaining ones, and
$B$ contains the fixed point;
\item $G $ is one of $\mathop{\mathrm{PGL}}(2,17)$ ($n=18$), $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$ ($n=33$);
\item $G$ is one of $M_{11}$, $M_{12}$ ($n=12$), $M_{24}$ ($n=24$), and $B$ is not contained in or equal to a block of the Steiner system $S(5,l,n)$ preserved by $G$;
\item $G=M_{23}$ ($n=23$), and $B$ contains exactly $5$ points from one block of the Steiner system $S(4,7,23)$;
\item $G$ is $6$-homogeneous and hence alternating or symmetric, and $B$ is
arbitrary.
\end{enumerate}
\begin{theorem}Suppose that $G\le S_n$, $n\ge 14$, has $7$-et, and that $B$ witnesses it.
Then
$\langle G,t\rangle$ is regular for every $t \in T_{n,B}$.
\end{theorem}
\begin{proof} By Theorem \ref{th567}, the groups $G\ne M_{24}$ satisfying $7$-et are either intransitive or $7$-homogeneous. If $G$ is intransitive, the result follows from Proposition \ref{p:regintrans}, and if $G$ is $7$-homogeneous, and hence $6$-ut, from Theorem \ref{semimain}.
So assume that $G=M_{24}$. Recall that $G$ preserves a Steiner system with parameters $(5,8,24)$, and that $B$ witnesses $7$-et if there is a block of the system containing exactly $6$ points of $B$. Moreover, $G$ has two orbits on $6$-sets, consisting of those sets that are contained in a block and those that are not, with the latter witnessing $6$-et. Hence
$G$ satisfies the condition of Lemma \ref{l:reg-1}.
The Steiner system has the property that any $7$-set contains $6$ points from a block. From this it follows easily that $G$ satisfies the conditions of Lemma \ref{l:reg-1}(\ref{e:reg}),
and hence all rank $6$ elements in $\langle G,t\rangle$ are regular.
As $G$ possesses the $5$-ut property, $\langle G,t\rangle$ is regular, for all $t \in T_{24,B}$.
\hfill$\Box$ \end{proof}
That is, the groups $G$ introducing regularity in this way are those satisfying the following conditions:
\begin{enumerate}
\item $G$ fixes one point and acts $6$-homogeneously on the remaining ones,
and $B$ contains the fixed point;
\item $G=M_{24}$ ($n=24$), and $B$ consists of seven points not contained in a block;
\item $G$ is $7$-homogeneous and hence alternating or symmetric, and $B$ is
arbitrary.
\end{enumerate}
\section{Problems}
We give here some problems to encourage further research on this topic.
\begin{problem}
Settle the remaining cases in Theorems~\ref{th567} and~\ref{th:4-et}. That is,
\begin{enumerate}
\item decide the 4-et property for the following groups:
\begin{itemize}
\item $n=q+1$: $\mathop{\mathrm{PSL}}(2,q) \le G \le \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,q)$, $G\ne \mathop{\mathrm{P}\Gamma\mathrm{L}}(2,128)$ for $q\ge 51$ a prime power;
\item $n = 65$: $\mathop{\mathrm{Sz}}(8)$;
\item $n = 126$: $\mathop{\mathrm{PGU}}(3, 5)$, $\mathop{\mathrm{P\Gamma U}}(3, 5)$;
\item $n = 176$: $HS$;
\item $n = 513$: $\mathop{\mathrm{PSU}}(3, 8).3$, $\mathop{\mathrm{PSU}}(3, 8).6$, $\mathop{\mathrm{PSU}}(3, 8).2^3$, $\mathop{\mathrm{P}\Gamma\mathrm{L}}(3,8)$;
\item $n = 730$: $\mathop{\mathrm{P\Gamma U}}(3, 9)$;
\item $n = 1025$: $\mathop{\mathrm{Sz}}(32) : 5$;
\item $n = 4097$: $\mathop{\mathrm{P\Gamma U}}(3, 16)$.
\end{itemize}
\item decide the $5$-et property for the following group: $n = 129$: $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2, 128)$.
\end{enumerate}
\end{problem}
\begin{problem}
There is a dual concept to the et property. We say that the permutation group
$G$ has the \emph{dual $k$-et property} with \emph{witnessing $k$-partition
$\mathcal{P}$} if, for every $k$-set $A$, there exists $g\in G$ such that
$Ag$ is a section for $\mathcal{P}$. Which groups have this property?
\end{problem}
\begin{problem}
Which groups $G$ have $k$-et for $k>n/2$? When is it the case that
$\langle G,t\rangle$ is regular for all $t$ whose image is a witnessing set?
\end{problem}
Let $\Omega$ be a finite set. We say that a set $\Sigma$ of $k$-subsets of $\Omega$ dominates a
set $\Pi$ of $k$-partitions of $\Omega$ if for every $\mathcal{P}\in\Pi$ there exists $S\in\Sigma$ such that $S$
is a transversal of $\mathcal{P}$. Similarly, we say that $\Pi$ dominates $\Sigma$ if given any set
$S\in\Sigma$ there exists $\mathcal{P}\in\Pi$ such that $S$ is a transversal for $\mathcal{P}$. Many arguments
in the classification of $k$-et groups would certainly be greatly simplified if the
answers to the following purely combinatorial questions were known.
\begin{problem}
Let $\Omega$ be a finite set and let $k\le |\Omega|/2$. Let $K$ be the set of all
$k$-subsets of $\Omega$ and let $P$ be the set of all $k$-partitions of $\Omega$.
\begin{enumerate}
\item Find the minimum of the set
\[\{|\Sigma| \mid \Sigma\subseteq K \hbox{ and } \Sigma \hbox{ dominates }P\}.\]
\item Find the minimum of the set
\[\{|\Pi| \mid \Pi\subseteq P \hbox{ and }\Pi \hbox{ dominates }K\}.\]
\end{enumerate}
\end{problem}
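For very small parameters, both minima can be found by exhaustive search. The sketch below (an illustration only; it does not scale) computes the minimum in part (1) for $|\Omega|=4$ and $k=2$.

```python
from itertools import combinations

def k_partitions(omega, k):
    """All partitions of the list `omega` into exactly k non-empty parts."""
    if not omega:
        return [[]] if k == 0 else []
    first, rest = omega[0], omega[1:]
    result = [[[first]] + p for p in k_partitions(rest, k - 1)]
    for p in k_partitions(rest, k):
        for i in range(len(p)):
            result.append(p[:i] + [p[i] + [first]] + p[i + 1:])
    return result

def is_transversal(s, partition):
    return all(len(set(s) & set(part)) == 1 for part in partition)

def min_dominating_sets(n, k):
    """Smallest |Sigma|, where Sigma is a family of k-subsets of {0..n-1}
    such that every k-partition has a transversal in Sigma (problem (1))."""
    omega = list(range(n))
    parts = k_partitions(omega, k)
    subsets = list(combinations(omega, k))
    for size in range(1, len(subsets) + 1):
        for sigma in combinations(subsets, size):
            if all(any(is_transversal(s, p) for s in sigma) for p in parts):
                return size

print(min_dominating_sets(4, 2))  # 3
```

For instance, for $n=4$, $k=2$, no pair of $2$-sets dominates all seven $2$-partitions, but the three sets $\{0,1\},\{1,2\},\{2,3\}$ do.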
For non-trivial bounds on (1), see \cite{BT}. Problem (2) also appears to be very difficult; even non-trivial bounds would be of interest.
The paper \cite{ArMiSc} immediately prompts the following problem.
\begin{problem}
Classify the permutation groups on a finite set $\Omega$ that satisfy the following property: there exists $B\subseteq\Omega$ such that for all
transformations $t$ on $\Omega$ with image $B$, the semigroup
$\langle G, t\rangle \setminus G$ is idempotent generated.
\end{problem}
There are linear versions of these problems, which we briefly recall here (for more details and extensions to independence algebras, see \cite{ArCa}).
\begin{problem}
Let $V$ be a finite-dimensional vector space over a finite field. Classify the linear groups $G\le \mathop{\mathrm{Aut}}(V)$ such that for all linear transformations $t\in \mathop{\mathrm{End}}(V)$ the semigroup $\langle G,t\rangle$ is regular. A solution to this problem would yield the linear analogue of the main result in \cite{ArMiSc}.
Find linear analogues of the main results in this paper and in \cite{ArCa}.
\end{problem}
\section*{Acknowledgements}
The authors would like to thank Jo\~ao Pedro Ara\'ujo (University of Lisbon) for his help automating some computations, and Markus Pfeiffer (University of St Andrews) for his help with the computation confirming the $6$-et property for $\mathop{\mathrm{P}\Gamma\mathrm{L}}(2,32)$.
The first author was partially supported by the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia
(Portuguese Foundation for Science and Technology)
through the project
CEMAT-CI\^ENCIAS UID/Multi/04621/2013, and through project ``Hilbert's 24th problem'' (PTDC/MHC-FIL/2583/2014).
The second author was supported by travel grants from the University of Hull's Faculty of Science and Engineering and the Center for Computational and Stochastic Mathematics.
After introducing the general program synthesis paradigm in the previous section, we are now in a position to define the DSL-based program synthesis problem formally. Given a DSL $L$ and a set of textual descriptions with their corresponding code snippets $\{(i_1,o_1), \dots, (i_n,o_n)\}$, we aim to learn a synthesis algorithm $A$ that produces a program $P\in L$ satisfying all the input-output test cases $e_j$ of every description ($NL$) and code snippet ($i_j$,$o_j$) pair, i.e.,
\begin{equation}
\begin{aligned}
\label{eq:problem}
\forall{j,k}: P(e_{j(k,in)}) = e_{j(k,out)}: \\
\;1\le j \le n \;\&\; 1\le k\le l
\end{aligned}
\end{equation}
where $e_{j(k,in)}$ and $e_{j(k,out)}$ represent the input and output of the $k^{th}$ test case of the description and code snippet pair ($i_j$,$o_j$), respectively, and $l$ denotes the number of test cases corresponding to each pair. Note that, in Eq.~\ref{eq:problem}, we match the test cases and not the actual generated code; a given textual description can generate structurally dissimilar variants of the ground-truth code that preserve its logical functionality.
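Operationally, the specification in Eq.~\ref{eq:problem} is an acceptance check: a synthesized program passes only if it reproduces the expected output on every test case. The Python sketch below is our own illustration; \texttt{run\_program} is a placeholder for the DSL interpreter, and the toy example uses plain Python callables in place of DSL programs.

```python
def satisfies_spec(program, examples, run_program):
    """Return True iff `program` maps every test input to its expected output,
    as required by Eq. (1). `run_program` stands in for the DSL interpreter."""
    return all(run_program(program, x) == y for x, y in examples)

# Toy illustration: Python callables in place of DSL programs.
examples = [([1, 2, 3], 6), ([4, 5], 9)]        # e.g. "sum of the array"
interp = lambda prog, x: prog(x)
print(satisfies_spec(sum, examples, interp))    # True
print(satisfies_spec(len, examples, interp))    # False
```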
Formally, an adversarial text description $(NL')$ for a program synthesis model is one that causes the model to generate a program ($P_{adv}$) such that:
\begin{equation}
\begin{aligned}
\label{eq:problem1}
\forall{j,k}: P_{adv}(e_{j(k,in)}) \ne e_{j(k,out)}: \\
\;1\le j \le n \;\&\; 1\le k\le l
\end{aligned}
\end{equation}
under the constraint that-
\[||NL'-NL|| \le \delta\]
\noindent where $\delta$ denotes the amount of perturbation. Let $P_{orig}$ denote the program corresponding to $NL$, and let $P_{adv\_sol}$ denote a program that correctly solves $NL'$. Depending on whether $P_{adv\_sol}$ is the same as $P_{orig}$, attacks can be classified into the following two categories:
\textbf{Program Invariance Attacks:} In these types of attacks, we perturb $NL$ such that the original program is also a solution of $NL'$, i.e., $P_{orig}=P_{adv\_sol}$.
\textbf{Program Directional Attacks:} In these types of attacks, we perturb $NL$ such that the original program is not a solution of $NL'$, i.e., $P_{orig} \ne P_{adv\_sol}$.
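Given a successful perturbation, deciding its category reduces to re-running the original program on the test cases of $NL'$. A minimal sketch (our own illustration; the DSL interpreter is mocked with plain Python callables):

```python
def passes(program, tests, run):
    """True iff `program` maps every test input to its expected output."""
    return all(run(program, x) == y for x, y in tests)

def classify_attack(p_orig, tests_adv, run):
    """Decide the attack category from the definitions above, given the
    original program and the test cases of the perturbed description NL'."""
    if passes(p_orig, tests_adv, run):
        return "program invariance attack"   # P_orig still solves NL'
    return "program directional attack"      # NL' now requires a different program

run = lambda prog, x: prog(x)
# NL' still means "sum of the array": the original program remains a solution.
print(classify_attack(sum, [([1, 2], 3)], run))   # program invariance attack
# NL' now asks for something else (here, the maximum): P_orig no longer solves it.
print(classify_attack(sum, [([1, 2], 2)], run))   # program directional attack
```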
\section{Dataset}
\label{sec:datasets}
In this paper, we use a synthetically constructed code generation dataset \textsc{AlgoLisp}~\cite{neuralprogramsearch2018}. \textsc{AlgoLisp} is constructed over a domain-specific language (DSL), inspired by Lisp. Instead of existing programming languages (like Python, C, or Java), DSLs provide flexibility in converting to other target languages and adding constraints to simplify its automated generation~\cite{neuralprogramsearch2018}. The dataset comprises the problem description and the corresponding implementations of the problem in a Lisp-inspired DSL. Each problem description is accompanied by a code snippet and 10 test cases. Each test case is an input/output pair, where input is to be fed into the synthesized program, and output represents the expected output the program should produce. Figure~\ref{fig:example_problem} illustrates an example problem showing a textual description, its corresponding Lisp DSL program tree, and few I/O test pairs. Overall, the dataset contains 100,000 problems with average textual description length and average code snippet length of 37.97 and 45.13 characters. The \textsc{AlgoLisp} dataset comprises train, validation, and test split of 79214, 9352, and 10940 examples, respectively. The average depth of the program tree is 10.28. Table~\ref{table:dataset} lists the detailed statistics of original \textsc{AlgoLisp} dataset.
\begin{table}[]
\centering
\begin{tabular}{l|c|c}
\toprule
& \textbf{Original} & \textbf{Filtered}\\\hline
No. of instances & 100,000 & 90,153 \\
Avg. text length & 37.97 & 37.75\\
Avg. code depth & 10.35& 10.28\\
Avg. code length & 45.13& 44.86\\
Vocabulary size & 288 & 287\\
\bottomrule
\end{tabular}
\caption{Statistics of the \textsc{AlgoLisp} dataset.}\label{table:dataset}
\end{table}
In 2018, Bednarek~\textit{et al.}~\cite{bednarek2018ain} reported multiple instances of compilation errors in the original \textsc{AlgoLisp} dataset; specifically, the DSL compiler fails to pass the I/O pairs with the ground-truth code. They therefore constructed a filtered subset of the \textsc{AlgoLisp} dataset containing only those problem instances that pass all the input-output test cases\footnote{We nevertheless found a few instances that pass only some of the test cases.}. Overall, the filtered dataset contains 90,153 instances. Table~\ref{table:dataset} also details the statistics of the filtered \textsc{AlgoLisp} dataset.
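This filtering step can be replicated mechanically: keep a problem only if its ground-truth code passes all of its I/O pairs. A sketch of the procedure (our own; the \texttt{execute} callable stands in for the AlgoLisp DSL compiler, and the field names are illustrative):

```python
def filter_dataset(problems, execute):
    """Keep only problems whose ground-truth code passes *all* of its tests,
    mirroring the filtered AlgoLisp subset of Bednarek et al. (2018)."""
    kept = []
    for prob in problems:
        code, tests = prob["code"], prob["tests"]
        if all(execute(code, t["input"]) == t["output"] for t in tests):
            kept.append(prob)
    return kept

# Toy data: one consistent problem and one with a bad ground truth.
execute = lambda code, x: eval(code)(x)      # stand-in "interpreter"
problems = [
    {"code": "sum", "tests": [{"input": [1, 2], "output": 3}]},
    {"code": "sum", "tests": [{"input": [1, 2], "output": 5}]},  # inconsistent
]
print(len(filter_dataset(problems, execute)))  # 1
```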
To the best of our knowledge, except for NAPS~\cite{zavershynskyi2018naps} and Karel~\cite{karel}, no similar code synthesis dataset exists that contains problem descriptions along with test cases and other meta-information. Popular datasets like JAVA~\cite{java} and WikiSQL~\cite{sql} only contain problem descriptions and the corresponding code, which limits the evaluation of structurally different but logically equivalent synthesized programs. Although NAPS and Karel contain all the required meta-information, Karel does not deal with any natural language; it is a robotic programming language. On the other hand, in our internal data analysis, NAPS shows several data inconsistencies\footnote{The NAPS dataset is very noisy due to crowd-sourcing.}, such as the presence of long sequences of characters like {\ttfamily{abcdabcd}} that convey no meaning, and inconsistent tokenization of sentences.
\section{The SOTA Code Generation Models}
\label{sec:sota_works}
In this paper, we thoroughly experiment with the state-of-the-art DSL-based code generation model \textbf{SketchAdapt}~\cite{nye2019learning}. SketchAdapt (hereafter \textsc{SA}) synthesizes programs from textual descriptions as well as input-output test examples. It combines neural networks and symbolic synthesis by learning an intermediate `sketch' representation. It has been demonstrated empirically that \textsc{SA} recognizes descriptive patterns as effectively as pure RNN approaches while matching or exceeding the generalization of symbolic synthesis methods. The \textsc{SA} system consists of two main modules: 1) a sketch generator and 2) a program synthesizer. Given an encoded input description, the sketch generator produces a distribution over program sketches. The generator is trained as a sequence-to-sequence recurrent neural network with attention, assigning high probability to sketches that are likely to yield programs satisfying the specification. The program synthesizer takes a sketch as a starting point and performs an explicit symbolic search to ``fill in the holes'' until it finds a program that satisfies the specification.
The pre-trained model, along with the relevant codebase, is available at \url{https://github.com/mtensor/neural_sketch}.
We identified two additional relevant baselines: Structure-Aware Program Synthesis~\cite{bednarek2018ain} and the \textsc{Seq2Tree} model~\cite{Ma2017Seq2TreeAT}. Structure-Aware Program Synthesis (hereafter \textsc{SAPS}) casts program synthesis as neural machine translation, employing a bi-directional multi-layer LSTM network to generate the code sequence corresponding to a textual description. The \textsc{Seq2Tree} model consists of a sequence encoder and a tree decoder: the encoder reads the problem description, and the tree decoder, augmented with attention, computes the probability of each symbol in a syntax tree one node at a time. However, neither baseline could be run locally because no code repository or pre-trained model is available\footnote{The results cannot be reproduced due to missing experimental details.}. We therefore experiment with \textsc{SA} as the only baseline system.
\noindent \textbf{Evaluating Generation Performance:}
We evaluate the above state-of-the-art code synthesis systems on the filtered \textsc{AlgoLisp} dataset, following the experimental settings of \textsc{SA} and \textsc{SAPS} verbatim. Note that the filtered dataset contains fewer test cases than the original dataset.
\begin{table}[!t]
\centering
\begin{tabular}{lc}
\toprule
\bfseries Model&\bfseries Accuracy Scores\\\hline
\textsc{SA} &0.958\\
\textsc{SAPS}* & 0.929\\
\textsc{Seq2Tree}* & $0.858^\dagger$\\
\hline
\textsc{VAC} &\textbf{0.968} \\
\textsc{GAC} &0.963 \\
\bottomrule
\end{tabular}
\caption{Comparing state-of-the-art code generation models. * represents accuracy scores taken, verbatim, from the corresponding papers due to unavailability of code or pretrained model. $^\dagger$ represents accuracy scores on the original test set.}\label{tab:accuracy}
\end{table}
At the same time, the training data remains the same as in the original dataset. We evaluate performance on the holdout test set using the accuracy score $A = \frac{n}{N}$, where $n$ is the number of problems for which the generated code passes all 10 test cases and $N$ is the total number of problems in the holdout test set. Table~\ref{tab:accuracy} shows the accuracy scores for the three state-of-the-art code generation systems. As expected, \textsc{SA} outperforms the other two baselines by a significant margin.
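The accuracy metric can be sketched as follows. Here {\ttfamily{run\_program}} is a hypothetical stand-in for the \textsc{AlgoLisp} DSL interpreter; to keep the sketch self-contained it simply calls a Python callable.

```python
def run_program(program, test_input):
    # Stand-in for the DSL interpreter: we evaluate a Python callable,
    # purely to make the sketch self-contained and runnable.
    return program(test_input)

def passes_all_tests(program, io_pairs):
    # A problem counts as solved only if every I/O test case passes.
    return all(run_program(program, i) == o for i, o in io_pairs)

def accuracy(problems):
    """problems: list of (synthesized_program, io_pairs) tuples; A = n / N."""
    n = sum(1 for prog, tests in problems if passes_all_tests(prog, tests))
    return n / len(problems)

problems = [
    (lambda x: x * 2, [(1, 2), (3, 6)]),   # passes both tests
    (lambda x: x + 1, [(1, 2), (3, 6)]),   # fails the second test
]
score = accuracy(problems)
```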
\section{\textsc{AutoCoder}}
In this section, we describe our neural model \textsc{AutoCoder} for the automatic code generation problem. Transformers~\cite{DBLP:journals/corr/VaswaniSPUJGKP17} have recently shown state-of-the-art performance on several Natural Language Processing tasks~\cite{wang2019learning,li2019neural,abzianidze2019first}, including machine translation and classification. Inspired by this success, we propose a transformer-based model that automatically generates code from natural language descriptions. Specifically, the model encodes the textual description using multiple encoder layers and then decodes a program token-by-token while attending to the description. The basic pipeline is analogous to the sequence-to-sequence models employed for similar generation tasks, which usually follow an encoder-decoder architecture~\cite{cho2014learning,britz2017massive}: the encoder maps an input sequence of tokens $x_1,x_2,\cdots,x_n$ to an intermediate representation, from which the decoder generates an output sequence $y_1,y_2,\cdots,y_m$ one element at a time. At each step, the model is auto-regressive, consuming the previously generated symbols as additional input when generating the next output symbol. As depicted in Figure~\ref{fig:model}, we build on the core Transformer~\cite{DBLP:journals/corr/VaswaniSPUJGKP17} implementation and propose structural alterations in the attention module to develop \textsc{AutoCoder}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figures/model_gated.pdf}
\caption{(a) The Transformer model. (b) The self-attention mechanism. (c) The gated-attention mechanism.
}
\label{fig:model}
\end{figure*}
\noindent\textbf{The encoding and decoding layers:} We keep the number of encoder layers (six) the same as in the core Transformer model. As in the core implementation, we use an output dimension of 512 in all encoding sub-layers as well as in the embedding layers. The decoder side is likewise a stack of six identical layers. In Figure~\ref{fig:model}a, the sub-layers of the encoder and decoder are described using standard notation.
\noindent\textbf{The different attention mechanisms:} Attention mechanisms have become an integral part of sequence modeling, allowing dependencies to be modeled without regard to their distance in the input or output sequences. We experiment with two mechanisms: (i) vanilla self-cross attention and (ii) gated cross attention. Vanilla self-cross attention is the basic mechanism of the original Transformer~\cite{DBLP:journals/corr/VaswaniSPUJGKP17}. The self-attention module, present in both encoder and decoder layers, relates different positions of a sequence to compute a representation of that sequence. The cross-attention module, which connects the encoder and decoder, relates positions of the input sequence to positions of the output sequence. We term the \textsc{AutoCoder} variant that uses the standard self-cross attention module \textbf{\textsc{Vanilla-AutoCoder} (VAC)}.
\begin{equation}
f_{SA}=f_{dot}(Q_e,K_e,V_e)=softmax\left( \frac{Q_eK_e^T}{\sqrt{d}} \right) V_e
\end{equation}
\begin{equation}
f_{CA}=f_{dot}(Q_d,K_e,V_e)=softmax\left( \frac{Q_dK_e^T}{\sqrt{d}} \right) V_e
\end{equation}
where $f_{dot}$ denotes scaled dot-product attention~\cite{DBLP:journals/corr/VaswaniSPUJGKP17}, ($Q_e,K_e,V_e$) denote the encoder sequence representation in terms of query, key, and value, respectively, and ($Q_d,K_d,V_d$) denote the decoder sequence representation in terms of query, key, and value, respectively.
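As a concrete illustration, both attention variants can be sketched in NumPy. The sequence lengths (five encoder positions, three decoder positions) and dimension are arbitrary choices for the example; in cross attention the decoder-side queries attend over encoder-side keys and values, as in the standard Transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(Q, K, V):
    """f_dot(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(0)
n_enc, n_dec, d = 5, 3, 8
Q_e = K_e = V_e = rng.normal(size=(n_enc, d))   # encoder representations
Q_d = rng.normal(size=(n_dec, d))               # decoder queries

f_SA = scaled_dot_attention(Q_e, K_e, V_e)  # self-attention (encoder side)
f_CA = scaled_dot_attention(Q_d, K_e, V_e)  # cross-attention: decoder
                                            # queries over encoder keys/values
```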
The gated cross-attention mechanism filters out the unnecessary parts of the sequence by attending over the generated cross-attention scores $f_{CA}$ and determining the relevant attention. The gated cross-attention module ($f_{GA}$) uses a sigmoidal gate to suppress irrelevant cross attention while decoding the output. It generates an \textit{information vector} ($i$), which carries the relevant representation of the input, and an \textit{attention gate} ($g$), which filters the attention scores. The gate is then applied to the information vector to obtain the \textit{attended information}, i.e., the relevant information.
\begin{equation}
\begin{aligned}
f_{GA}=\sigma(W_q^gQ_d + W_v^gf_{CA} + b^g) \\
\odot (W_q^iQ_d + W_v^if_{CA} + b^i)
\end{aligned}
\end{equation}
where $\sigma$ denotes the sigmoid activation and $\odot$ the element-wise product; $W_q^i$ and $W_v^i$ are the weight matrices applied to the query and the cross attention in the \textit{information vector}, respectively, and $W_q^g$ and $W_v^g$ are the corresponding weight matrices in the \textit{attention gate}. Note that $\{W_q^i,W_v^i,W_q^g,W_v^g\} \in \mathbb{R}^{d \times d}$ and $\{b^i,b^g\} \in \mathbb{R}^{d}$. Figure~\ref{fig:model}c shows the gated cross-attention architecture. We term the \textsc{AutoCoder} variant that uses gated cross attention \textbf{\textsc{Gated-AutoCoder} (GAC)}.
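A minimal NumPy sketch of the gated cross-attention computation, assuming row-vector sequence representations, decoder-side queries, and randomly initialized weights (the actual model learns these jointly with the rest of the network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_attention(Q, f_CA, Wq_g, Wv_g, b_g, Wq_i, Wv_i, b_i):
    gate = sigmoid(Q @ Wq_g + f_CA @ Wv_g + b_g)  # attention gate g in (0, 1)
    info = Q @ Wq_i + f_CA @ Wv_i + b_i           # information vector i
    return gate * info                            # attended information f_GA

rng = np.random.default_rng(1)
n, d = 3, 8                                 # decoder length, model dimension
Q = rng.normal(size=(n, d))                 # decoder queries
f_CA = rng.normal(size=(n, d))              # cross-attention output
Wq_g, Wv_g, Wq_i, Wv_i = (rng.normal(size=(d, d)) for _ in range(4))
b_g, b_i = np.zeros(d), np.zeros(d)

f_GA = gated_cross_attention(Q, f_CA, Wq_g, Wv_g, b_g, Wq_i, Wv_i, b_i)
```

The sigmoidal gate and the information vector have the same shape, so the element-wise product selectively scales each feature of the attended representation.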
\noindent \textbf{Comparing \textsc{AutoCoder} against baselines:} Table~\ref{tab:accuracy} also compares the \textsc{AutoCoder} variants against the baseline systems. Both variants outperform all three baselines; among the two, VAC performs marginally better than GAC. These results show that even simple Transformer variants can yield substantial gains in code synthesis.
\section{The Adversarial Experiments}
\label{sec:adversarial}
\subsection{Adversarial Attack Types}
We define five classes of adversarial examples. All our proposed attacks are black-box, un-targeted attacks: they have no knowledge of the target model, its gradients, or its parameters. Table~\ref{tab:adversarial_examples} shows representative original descriptions and the corresponding adversarial descriptions. The classes are:
\begin{enumerate}[nosep,noitemsep]
\item \textbf{Variable Change (VC):} Changing single and multi-character variables and argument names in the original problem description, input arguments, and program trees to examine if the model correctly generates the corresponding output code.
\item \textbf{Redundancy Removal (RR):} Removing filler or redundant words without affecting the overall meaning of the input description.
\item \textbf{Synonym Replacement (SR):} Replacing words with their corresponding synonyms.
\item \textbf{Voice Conversion (VoC):} Converting a problem description in the active voice to its corresponding passive voice.
\item \textbf{Variable Interchange (VI):} Interchanging variable names in problem descriptions comprising multiple variables.
\end{enumerate}
The classes \textbf{VI} and \textbf{VC} belong to the program-directional attack category, whereas \textbf{RR}, \textbf{SR}, and \textbf{VoC} belong to the program-invariance category. For example, in the representative \textbf{VC} example in Table~\ref{tab:adversarial_examples}, changing the variable name from {\ttfamily{a}} to {\ttfamily{b}} changes the ground-truth program from {\ttfamily{(strlen a)}} to {\ttfamily{(strlen b)}}; a model that predicts any token other than the variable {\ttfamily{b}} is fooled by the adversary. In the case of \textbf{RR}, removing a redundant token is a program-invariance perturbation, so the ground-truth program remains unchanged.
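A program-directional \textbf{VC} perturbation can be sketched as a consistent rename of a variable token in both the description and the ground-truth program. The example problem below is illustrative; note that single-letter variables such as {\ttfamily{a}} collide with the English article, which the real pipeline must handle, so the sketch uses {\ttfamily{n}} to sidestep this.

```python
import re

def variable_change(description, program, old, new):
    """VC attack: rename a variable consistently in the description and the
    ground-truth program. A model that still emits the old name is fooled."""
    # Word boundaries ensure we rename only standalone variable tokens
    # (e.g. 'n' but not the 'n' inside 'number').
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return pattern.sub(new, description), pattern.sub(new, program)

adv_desc, adv_prog = variable_change(
    "given a number n , compute the square of n", "( * n n )",
    old="n", new="m")
```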
\begin{table}[!t]
\centering
\small{
\begin{tabular}{c|p{11.2 cm}}
\toprule
\textbf{Class}&\textbf{Representative Example}\\\hline
\multirow{4}{*}{VC} & \textbf{OD:} \textcolor{blue}{Given a string a, what is the length of a.} \\
&\textbf{OO:}
\textcolor{blue}{\small{\ttfamily{(strlen a)}}}\\
& \textbf{AD:} \textcolor{mauve}{Given a string b, what is the length of b.} \\
& \textbf{AO:} \textcolor{mauve}{\small{\ttfamily{(strlen a)}}}\\\hline
\multirow{7}{*}{RR} & \textbf{OD:} \textcolor{blue}{Given a number a, compute the product of \textbf{all} the numbers from 1 to a.} \\
&\textbf{OO:}
\textcolor{blue}{\small{\ttfamily{(invoke1 (lambda1 (if ( $\leq$ arg1 1 )1(*( self( -arg1 1 )) arg1 ))) a)}}}\\
& \textbf{AD:} \textcolor{mauve}{Given a number a, compute the product of the numbers from 1 to a.} \\
& \textbf{AO:} \textcolor{mauve}{\small{\ttfamily{( * a 1 )}}}\\ \hline
\multirow{8}{*}{SR} & \textbf{OD:} \textcolor{blue}{consider an array of numbers, what is reverse of elements in the given array that are odd} \\
&\textbf{OO}:
\textcolor{blue}{\small{\ttfamily{(reverse ( filter a ( lambda1 ( == ( \% arg1 2 )1))))}}}\\
& \textbf{AD:} \textcolor{mauve}{consider an array of numbers, what equals reverse of elements in the given array that are odd} \\
& \textbf{AO:} \textcolor{mauve}{\small{\ttfamily{(reduce ( filter a ( lambda1 ( == ( \% arg1 2 )1))))}}}\\\hline
\multirow{7}{*}{VoC} & \textbf{OD:} \textcolor{blue}{Given a number a, your task is to compute a factorial} \\
&\textbf{OO}:
\textcolor{blue}{\small{\ttfamily{(invoke1 (lambda1 (if (<= arg1 1) 1 (* (self (- arg1 1)) arg1))) a)}}}\\
& \textbf{AD:} \textcolor{mauve}{Your task is to compute a factorial, given a number a} \\
& \textbf{AO:} \textcolor{mauve}{\small{\ttfamily{(filter a ( partial1 b >))}}}\\ \hline
\multirow{10}{*}{VI} & \textbf{OD:} \textcolor{blue}{you are given an array of numbers a and numbers b, c and d, define e as elements in a starting at position b ending at the product of c and d ( 0 based ), what is e} \\
&\textbf{OO}:
\textcolor{blue}{\small{\ttfamily{( slice a d ( * c b ) )}}}\\
& \textbf{AD:} \textcolor{mauve}{you are given an array of numbers a and numbers b , c and e , define d as elements in a starting at position b ending at the product of c and e ( 0 based ) , what is d} \\
& \textbf{AO:} \textcolor{mauve}{\small{\ttfamily{( slice a d ( * c b ) )}}}\\
\bottomrule
\end{tabular}}
\caption{Representative examples from each adversarial class. Here, OD, OO, AD, and AO represent the original description, original output, adversarial description, and adversarial output, respectively.}
\label{tab:adversarial_examples}
\end{table}
\subsection{Adversarial Performance}
\label{sec:adv_old}
In this section, we describe the adversarial instance construction process. We construct adversarial examples from the holdout test instances, following class-wise constraints, in a semi-automated fashion. For example, an adversarial instance of the \textbf{VI} class can only be generated if the problem description contains two or more variables. We use several NLP libraries for the basic linguistic tasks: the NLTK library to stochastically remove stopwords from problem descriptions for the \textbf{RR} class, POS tagging to identify active/passive voice for the \textbf{VoC} class, and POS tagging together with WordNet hierarchies for the \textbf{SR} class. Overall, we use about 1,000 adversarial instances, divided equally across the adversary classes, to evaluate the program synthesis systems.
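The \textbf{RR} construction can be sketched as stochastic stopword removal. The small inline stopword list below is a stand-in for NLTK's full English list, and variable names and semantically loaded tokens are deliberately excluded from it so that the ground-truth program is unaffected (program invariance):

```python
import random

# Stand-in for NLTK's English stopword list; variable names and tokens
# with programming semantics are excluded so the program stays valid.
STOPWORDS = {"the", "of", "is", "your", "all", "to"}

def redundancy_removal(description, drop_prob=0.5, seed=42):
    """RR attack: stochastically drop filler words from the description."""
    rng = random.Random(seed)
    kept = [tok for tok in description.split()
            if tok.lower() not in STOPWORDS or rng.random() > drop_prob]
    return " ".join(kept)

adv = redundancy_removal(
    "Given a number n , compute the product of all the numbers from 1 to n")
```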
Table~\ref{tab:ad_test} presents the generation performance of \textsc{SA} under adversarial settings using the error percentage, i.e., (100 $-$ accuracy \%); the lower the error percentage, the better the adversarial robustness. Surprisingly, \textsc{SA} fails to generalize and produces significantly poorer results under the adversarial setting. In particular, it performs very poorly on the \textbf{VoC} and \textbf{VI} classes.
\begin{table}[!t]
\centering
\small{
\begin{tabular}{c|ccc||ccc}
\toprule
\multirow{2}{*}{\textbf{Adv. Class}} & \multicolumn{3}{c||}{\textbf{Error (\%)}} & \multicolumn{3}{c}{\textbf{Distance}}\\
&\textbf{SA}& \textbf{VAC} &\textbf{GAC}& \textbf{Lev} &\textbf{LevR} & \textbf{BERT}\\\hline
VC & 48.0 & \textbf{42.5} & \textbf{42.5} & 2.24 & .05 & .005\\
RR & 4.70 & 3.70 & \textbf{3.20} &4.55 & .13 & .044\\
SR & \textbf{5.70} & 8.10 & 8.10 & 1 & .03 & .013\\
VoC & 70.2 & 24.9& \textbf{24.4}& 16.54 & .54 & .015 \\
VI & 70.0 & 67.7 &\textbf{67.2} & 4.2& .08 & .043\\
\bottomrule
\end{tabular}
}
\caption{Error percentages (columns 2--4) of SA, VAC, and GAC for the different adversarial classes, and distances (columns 5--7) between adversarial descriptions and the corresponding original descriptions.}
\label{tab:ad_test}
\end{table}
This is because the model does not predict the correct code when sentences that are typically active in the dataset are converted to passive voice. Further, our analysis shows that if the variables {\ttfamily{b}} and {\ttfamily{d}} are interchanged, the model fails to recognize the change and outputs code as if the input were unmodified. Table~\ref{tab:ad_test} also presents the generation performance of \textsc{AutoCoder} under adversarial settings. The \textsc{AutoCoder} variants are more robust than \textsc{SA} in four of the five classes. One possible reason for the remaining failures of the \textsc{AutoCoder} variants is incorrect cross-attention: for example, the variable {\ttfamily{a}} in the output does not attend to the corresponding variable {\ttfamily{a}} in the problem description.
Even though \textsc{AutoCoder} is more robust than \textsc{SA} under adversarial settings, both systems show a significant drop in overall performance. We attribute this drop to biases in the synthetic dataset generation process, including: (1) a small set of chosen variable names, (2) a limited number of operations, (3) limited vocabulary usage, and (4) variables occurring in a sequential, alphabetical order.
\subsection{Measuring Extent and Quality of Perturbations}
\subsubsection{Extent of Perturbations}
To measure the extent of perturbation in our proposed adversarial attacks, we experiment with the following two distance metrics:
\textbf{Edit Distance:} We use the popular Levenshtein distance (hereafter \textit{`Lev'}) between an adversarial description and the corresponding original description. It is defined as the minimum number of edit operations (deletions, insertions, and substitutions) required to convert one string into the other. We also report the ratio of the Levenshtein distance to the sentence length (hereafter \textit{`LevR'}), which measures the perturbation per unit of sentence length.
Table~\ref{tab:ad_test} (columns 5 and 6) shows the distance values for the five adversarial classes. Except for \textbf{VoC}, where the entire sentence structure changes, the classes comprise examples constructed with very small perturbations. Note that we limit the perturbation rate in \textbf{SR} to 1, as higher rates led to out-of-vocabulary problems and other grammatical inconsistencies.
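For reference, Lev and LevR can be computed with the classic dynamic-programming formulation; whether the paper applies it at the character or token level is not stated, so this sketch operates on characters:

```python
def levenshtein(s, t):
    """Minimum number of deletions, insertions and substitutions
    turning s into t (two-row dynamic-programming formulation)."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def lev_ratio(original, adversarial):
    """'LevR': edit distance normalised by the original description length."""
    return levenshtein(original, adversarial) / len(original)
```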
\textbf{Embedding Similarity:} We also measure the cosine similarity between an adversarial description and the corresponding original description using sentence embeddings from a pretrained BERT model~\cite{devlin2019bert}; the embeddings are derived from a Siamese network trained with a triplet loss~\cite{sent-bert} and have length 768. We convert the similarity value into a distance by subtracting it from 1 (hereafter, \textit{`BERT'}).
Table~\ref{tab:ad_test} (column 7) reiterates the distance-based observations. Note that, because contextual embeddings successfully capture voice-related changes, the adversarial class \textbf{VoC} also shows a low perturbation distance.
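The BERT distance reduces to one minus the cosine similarity of the two sentence embeddings. The toy vectors below stand in for the 768-dimensional sentence-BERT embeddings used in the paper:

```python
import numpy as np

def embedding_distance(u, v):
    """BERT distance: 1 - cosine similarity of two sentence embeddings."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - cos

# Toy stand-ins for sentence-BERT embeddings of an original and an
# adversarial description; the real pipeline encodes both sentences
# with the pretrained Siamese sentence-BERT model.
u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 1.0])
d_identical = embedding_distance(u, v)   # near 0: identical meaning
```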
\subsubsection{Human Evaluation}
We employ two undergraduate students with programming expertise to evaluate the quality of the constructed adversarial attacks. For this experiment, we randomly select ten instances from each adversary class along with the corresponding original instances (a sample of 100 instances in total). We first familiarize the evaluators with the task by presenting them a set of program descriptions from the original \textsc{AlgoLisp} dataset. Next, we instruct them to evaluate each instance in the sampled set based on the following two criteria:
\textbf{Grammatical Correctness:} We ask the evaluators to rate the grammatical correctness of the sentences on a scale of 1--5, where 1 denotes a `completely grammatically incorrect description' and 5 a `grammatically sound and correct' one.
\textbf{Naturalness:} We also ask the evaluators to judge the \textit{naturalness} of the sentences, i.e., how likely the test samples are to have been drawn from the original data distribution, on a scale of 1--5, where 1 denotes `completely outside the data distribution/unfamiliar example' and 5 `definitely from the original data distribution'.
\begin{table}[!t]
\centering
\small{
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}{*}{\textbf{Adv. Class}} & \multicolumn{3}{c|}{\textbf{Grammatical Score}} & \multicolumn{3}{c}{\textbf{Naturalness Score}}\\
&Original&Adversarial&\%confusion&Original&Adversarial&\%confusion\\\hline
VC & 4.2 &\textbf{4.25} & 99\%& \textbf{3.95} &3.85& 98\%\\\hline
RR & \textbf{4.20} &3.60& 88\% & \textbf{4.15} &3.60& 89\%\\ \hline
SR & \textbf{4.40} & 3.85 & 90\% & \textbf{4.25}& 3.90& 92\%\\ \hline
VoC & \textbf{4.00}& 3.45 &89\%& \textbf{3.90} &3.65& 98\%\\ \hline
VI & \textbf{3.70} &3.50& 96\% &3.45 &\textbf{3.60}& 95\%\\\hline\hline
\textbf{Average} &\textbf{4.10} & 3.73 & 92.4\% & \textbf{3.94} & 3.71 & 94.4\%\\
\bottomrule
\end{tabular}
}
\caption{Class-wise comparison of human evaluation results.}
\label{tab:human_result}
\end{table}
We summarize the human evaluation in Table \ref{tab:human_result}. As evident from the table, the grammatical and naturalness scores of the original sentences are higher than those of the adversarial sentences. The evaluators correctly identified the minor grammatical mistakes present in the \textbf{RR} class. Since merely changing variable names adds little human-noticeable noise, the evaluators found it difficult to distinguish original from adversarial sentences for the \textbf{VC} and \textbf{VI} classes, as Table \ref{tab:human_result} shows. We also report the \%confusion score, which reflects how difficult the evaluators found it to distinguish adversarial from original sentences. It is defined as $\%confusion = \Big( 1- \frac{|\textit{original value} - \textit{adversarial value}|}{5} \Big)\times 100$. The high \%confusion scores in Table~\ref{tab:human_result} attest to the quality of the constructed adversarial examples.
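The \%confusion score is straightforward to compute; plugging in the VC grammatical scores from Table~\ref{tab:human_result} (4.20 original, 4.25 adversarial) reproduces the reported 99\%:

```python
def percent_confusion(original_score, adversarial_score, scale=5):
    """%confusion = (1 - |original - adversarial| / scale) * 100."""
    return (1 - abs(original_score - adversarial_score) / scale) * 100

# VC grammatical scores from the human evaluation: 4.20 vs 4.25.
vc_confusion = percent_confusion(4.20, 4.25)
```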
\begin{table}[!b]
\centering
\small{%
\begin{tabular}{c|c|p{11.2cm}}
\toprule
\multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{\centering RD}}}&\textbf{OS} & Consider an array of numbers a, your task is to find if a reads \textbf{the} same from both ends. \\
&\textbf{FS} & Consider an array of numbers a, your task to find if a reads same from both ends.\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\textbf{\centering RI}}}&\textbf{OS} &Consider an array of numbers a, your task to find if a reads same from both ends.\\
&\textbf{FS} & Consider \textbf{on} an array of \textbf{regular} numbers a, your task is to find if a reads the same from both ends\\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{\centering \textbf{RS}}}&\textbf{OS} & Consider an array of \textbf{numbers} a, your task is to find if a reads the \textbf{same} from both ends
\\
&\textbf{FS} & Consider an array of \textbf{integers} a, your task is to find if a reads the \textbf{integers} from both ends\\
\hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{ \centering BT}}}&\textbf{OS} & Given arrays of numbers a and b, what is the difference of elements of a and median in b. \\
&\textbf{IS} & Was ist der Unterschied zwischen den Elementen von a und dem Median in b, wenn Arrays von Zahlen a und b gegeben sind? \\
&\textbf{FS} & What is the difference between the elements of a and the median in b given arrays of numbers a and b?\\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{\centering \textbf{AR}}}&\textbf{OS} & you are given \textbf{an} array of numbers a, find not prime values in a
\\
&\textbf{FS} & you are given \textbf{at} array of numbers a, find not prime values in a\\
\bottomrule
\end{tabular}}
\caption{Illustrative examples of different operations to modify \textsc{AlgoLisp} dataset. Here OS, IS and FS represents original, intermediate and final sentence, respectively.}
\label{tab:example_BT}
\end{table}
\section{\textsc{AlgoLisp++}: The Debiased Dataset}
To mitigate the poor performance of \textsc{SA} and the \textsc{AutoCoder} variants under adversarial settings, we extend the original \textsc{AlgoLisp} dataset with a highly diversified collection of examples. However, as the previous sections show, automatic synthesis of such instances is challenging. We therefore present an automatic instance generation algorithm inspired by basic string editing~\cite{wei2019eda}, back translation~\cite{sennrich2016improving}, and neural editing~\cite{hsieh-etal-2019-robustness}. We propose the following three classes of operations:
\begin{enumerate}[noitemsep,nosep]
\item \textbf{Basic editing operations (BE)}
We randomly edit tokens in the descriptions, except for a few tokens that carry high semantic importance with respect to the programming language. For example, the token {\ttfamily{concatenation}} conveys special meaning and should not be edited. We reserve $\sim10\%$ of the vocabulary tokens as non-editable, including tokens such as {\ttfamily{times}}, {\ttfamily{sum}}, {\ttfamily{digits}}, {\ttfamily{maximum}}, {\ttfamily{prime}}, and {\ttfamily{last}}. We define a parameter $\alpha$ to regulate the number of editable tokens in a sentence: the number of edited tokens is $\lfloor \alpha L \rfloor$, where $L$ is the sentence length. In our experiments, we set $\alpha = 0.1$. We define three basic token-level edit operations:
\noindent \textbf{Random Deletion (RD):} Randomly removing one or more words from the sentences.\\
\noindent \textbf{Random Insertion (RI): } Randomly inserting one or more words in the sentences.\\
\noindent \textbf{Random Substitution (RS):} Randomly substituting one or more words in the sentences.
For RI and RS, we use the uncased BERT~\cite{devlin2019bert} language model trained on monolingual English text. In RI, we randomly insert $\lfloor \alpha L \rfloor$ masked tokens into the input sentence and predict the tokens at these masked positions. In RS, we randomly select $\lfloor \alpha L \rfloor$ editable tokens in a problem description, mask them, and feed the masked sentence to the pre-trained BERT model to predict the masked tokens. RD is straightforward: we randomly pick $\lfloor \alpha L \rfloor$ tokens from a sentence and delete them.
\item \textbf{Back-Translation (BT)}
In \textit{back translation (BT)}, a sentence is first translated to an intermediate language and then translated back to the original language, which paraphrases the original sentence~\cite{sennrich2016improving}. In our case, the original language is English and the intermediate language is German\footnote{We use German as a representative intermediate language due to the availability of good-quality translations.}. Table~\ref{tab:example_BT} presents an illustrative BT example. We leverage the Google Translate API for English-to-German translation and vice versa.
\item \textbf{Attention-based replace operation (AR)}
Inspired by the quality of augmented sentences in~\cite{conaug}, we propose an attention-based augmentation operation that extracts the attention vector from the first encoder layer of the Transformer and replaces the maximally attended word with a random vocabulary word, excluding the non-editable words in order to preserve the meaning of the sentence~\cite{hsieh-etal-2019-robustness}.
\end{enumerate}
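The RD operation, including the non-editable vocabulary and the $\lfloor \alpha L \rfloor$ edit budget, can be sketched as follows; the non-editable set below is a small excerpt of the reserved $\sim10\%$ of the vocabulary, shown purely for illustration:

```python
import math
import random

# Excerpt of the reserved non-editable tokens (the paper reserves ~10%
# of the vocabulary; this is an illustrative subset).
NON_EDITABLE = {"times", "sum", "digits", "maximum", "prime", "last",
                "concatenation"}

def random_deletion(tokens, alpha=0.1, seed=0):
    """RD: delete floor(alpha * L) randomly chosen editable tokens."""
    rng = random.Random(seed)
    n_edits = math.floor(alpha * len(tokens))
    editable = [i for i, t in enumerate(tokens) if t not in NON_EDITABLE]
    drop = set(rng.sample(editable, min(n_edits, len(editable))))
    return [t for i, t in enumerate(tokens) if i not in drop]

sentence = ("given a number a compute the sum of digits of a and the "
            "maximum value in b").split()
edited = random_deletion(sentence)   # one token removed (floor(0.1 * 17))
```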
\begin{table}[!t]
\centering
\small{
\begin{tabular}{l|ccc|ccc}
\toprule
\multirow{2}{*}{\bfseries Name} & \multicolumn{3}{c|}{\bfseries Dataset Statistics}&\multicolumn{3}{c}{\bfseries Accuracy Scores}\\
&\bfseries Instances&\bfseries Vocab. size &\bfseries Avg. length &\bfseries SA &\bfseries VAC & \bfseries GAC\\\hline
\textsc{AlgoLisp} & 79214 & 292 & 38.17 &0.958&\textbf{0.968}&0.963\\
\textsc{AlgoLisp++}& 142644 & 3152 &37.97&0.944&0.943&\textbf{0.947}\\
\bottomrule
\end{tabular}}
\caption{Statistics (columns 2--4) of the \textsc{AlgoLisp} and \textsc{AlgoLisp++} training datasets, and accuracy scores (columns 5--7) of SA, VAC, and GAC on the two \textsc{AlgoLisp} variants.}\label{tab:algolispspecs}
\end{table}
\begin{table}[!t]
\centering
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\textbf{Adv. Class}} & \multicolumn{3}{c}{\textbf{Error (\%)}}\\\cline{2-4}
& \textbf{SA}& \textbf{\textsc{VAC}} & \textbf{\textsc{GAC}}\\\hline
VC & {41.5}(48)& 36.00(42)& \textbf{34.40}(42)\\
RR & 3.70(5) &\textbf{3.20}(4) & 4.20(5)\\
SR & \textbf{4.40}(6) & 4.70(9) & 8.20(9)\\
VoC & \textbf{19.60}(71) & 24.40(25) & 23.60(24)\\
VI & 67.90(70) & \textbf{62.50}(68) & 67.70(69)\\\hline
\textbf{Average} & 27.4 & \textbf{26.1} & 27.1\\
\bottomrule
\end{tabular}
\caption{Comparing the adversarial robustness of the \textsc{AutoCoder} variants against \textsc{SA} on \textsc{AlgoLisp++}. The values in brackets are the corresponding \textsc{AlgoLisp} error percentages.}
\label{tab:final_result}
\end{table}
\noindent \textbf{The Generation Algorithm}
Algorithm~\ref{algo} details the data augmentation pipeline. Each sentence in the \textsc{AlgoLisp} dataset undergoes a series of edit operations parameterized by six free parameters: $\rho_1$, $\rho_2$, and $\rho_3$ are the probabilities of token-level edit operations, back translation, and attention-based replace, respectively, while $\sigma_1$, $\sigma_2$, and $\sigma_3$ are the probabilities of deletion, insertion, and substitution, with $\sigma_1 + \sigma_2 + \sigma_3 = 1$. In our experiments, we set $\rho_1=0.5$, $\rho_2=0.2$, and $\rho_3=0.1$. If a sentence is longer than the average length, we assign $\sigma_1 =0.5$, $\sigma_2=0.25$, and $\sigma_3=0.25$; otherwise we assign $\sigma_1 =0.2$, $\sigma_2=0.4$, and $\sigma_3=0.4$. Overall, the augmentation approach produced 89,214 new instances.
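The coin-toss control flow of the pipeline can be sketched as below. The {\ttfamily{BE}}, {\ttfamily{BT}}, and {\ttfamily{AR}} arguments are stand-ins for the actual edit, back-translation, and attention-based replace operations, and the attention-based pass is simplified to iterate over all samples rather than only those not yet augmented:

```python
import random

def augment(D, BE, BT, AR, rho=(0.5, 0.2, 0.1), seed=7):
    """Coin-toss augmentation pipeline sketch. BE/BT/AR implement the
    basic-edit, back-translation and attention-based-replace operations."""
    rho1, rho2, rho3 = rho
    rng = random.Random(seed)
    avg_len = sum(len(x.split()) for x in D) / len(D)
    D_new = []
    for x in D:
        if rng.random() < rho1:                   # basic edits
            if len(x.split()) > avg_len:
                sigma = (0.5, 0.25, 0.25)         # favour deletion
            else:
                sigma = (0.2, 0.4, 0.4)           # favour insert/substitute
            op = rng.choices(["RD", "RI", "RS"], weights=sigma)[0]
            D_new.append(BE(x, op))
        if rng.random() < rho2:                   # back translation
            D_new.append(BT(x))
    for x in D:                                   # attention-based replace
        if rng.random() < rho3:
            D_new.append(AR(x))
    return D + D_new                              # originals are retained

toy = ["given a number n , compute the square of n",
       "consider an array of numbers a , what is the reverse of a"]
augmented = augment(toy,
                    BE=lambda x, op: f"{x} <{op}>",
                    BT=lambda x: f"{x} <BT>",
                    AR=lambda x: f"{x} <AR>")
```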
\begin{algorithm}[!t]
\caption{Generating \textsc{AlgoLisp++}.}\label{algo}
\begin{algorithmic}[1]
\small{
\REQUIRE $D \gets$ \textsc{AlgoLisp} dataset\\
\hspace{2em} $D' \gets$ \textsc{AlgoLisp++} dataset (initially empty) \\
\iffalse{}
$\psi \gets$ randomly chooses one of the available options with the given probability values;
$\rho_1 \gets$ probability of token-level edit operations; $\rho_2\gets$ probability of back translation; $\rho_3\gets$ probability of attention-based replace; $\sigma_1\gets$ probability of deletion; $\sigma_2 \gets$ probability of insertion; $\sigma_3 \gets$ probability of substitution;
\fi
\hspace{2em}$BE()$ $\gets$ performs basic edits \\
\hspace{2em}$BT()$ $\gets$ performs back translation \\
\hspace{2em} $AR()$ $\gets$ performs attention-based replace\\
\STATE $\Sigma=(\sigma_1,\sigma_2,\sigma_3)$
\FOR{each sample $\chi$ in $D$}
\STATE toss coin with head prob. $\rho_1$
\IF{head}
\IF{LEN($ \chi $) $>$ AVG\_LEN(D)}
\STATE Assign $\sigma_1 \ge \sigma_2$ and $\sigma_1 \ge \sigma_3$
\ELSE
\STATE Assign $\sigma_1 \le \sigma_2$ and $\sigma_1 \le \sigma_3$
\ENDIF
\STATE op $\gets$ sample an operation according to the multinomial distribution $\Sigma$
\STATE Add BE($\chi$,op) to $D'$
\ENDIF
\STATE toss coin with head prob. $\rho_2$
\IF{head}
\STATE Add BT($\chi$) to $D'$
\ENDIF
\ENDFOR
\FOR{each sample $\chi'$ in $\{D - D'\}$}
\STATE toss coin with head prob. $\rho_3$
\IF{head}
\STATE Add AR($\chi'$) to $D'$
\ENDIF
\ENDFOR
\STATE Add all the examples of $D$ to $D'$
}
\end{algorithmic}
\end{algorithm}
Table~\ref{tab:algolispspecs} compares statistics of the newly constructed \textsc{AlgoLisp++} dataset against the original \textsc{AlgoLisp} dataset.
\noindent \textbf{System evaluations on \textsc{AlgoLisp++}:}
Table~\ref{tab:algolispspecs} also compares the code generation performance of \textsc{AutoCoder} variants against the state-of-the-art system \textsc{SA} on the \textsc{AlgoLisp++} dataset.
We observe an overall marginal decrease in the generative performance of all systems under adversarial conditions.
However, training on \textsc{AlgoLisp++} yields substantial gains under the adversarial setting (see Table~\ref{tab:final_result}). Specifically, \textsc{SA} gains more than the \textsc{AutoCoder} variants under adversarial settings, especially in the VoC class, where the error percentage drops by more than 50 points.
\section{Conclusion}
\label{sec:conc}
In this paper, we propose a series of adversarial attacks to expose limitations in the robustness of SOTA code synthesis models. We experiment with Transformer-based model variants that achieve performance gains over previous SOTA systems and improved robustness under adversarial settings. Finally, we propose a data augmentation pipeline to increase the adversarial robustness of code generation models.
In the future, we plan to extend our methodology and develop a general framework to study the adversarial robustness of code generation systems trained on synthetic and natural programming datasets.
Molecular spectroscopy is a well-established discipline, and the increasing precision of measurements has provided the capacity to test fundamental physics. Recently, several ``forbidden'' $\Delta k\!=\!\pm 3$ transitions between the rotation-inversion energy levels of $^{14}$NH$_3$ in the $\nu_2$ vibrational state were proposed as a promising tool to probe a possible space-time variation of the proton-to-electron mass ratio $\mu=m_p/m_e$~\citep{Jansen:2014,Spirko:2014}. The anomalous mass dependency of these transitions arises from accidental near-degeneracies of the involved energy levels. The sensitivity coefficient $T_{u,l}$, defined as
\begin{equation}
T_{u,l}=\frac{\mu}{E_u-E_l}\Bigl(\frac{{\rm d}E_u}{{\rm d}\mu}-\frac{{\rm d}E_l}{{\rm d}\mu}\Bigr),
\label{eq.T}
\end{equation}
where $E_u$ and $E_l$ refer to the energies of the upper and lower states, respectively, allows one to quantify the effect that a possible variation of $\mu$ would have for a given transition. The larger the magnitude of $T_{u,l}$, the more favourable a transition is to test for a drifting $\mu$.
The so-called ammonia method~\citep{Flambaum:2007}, which was adapted from \citet{Veldhoven:2004}, relies on the inversion transitions in the vibrational ground state of $^{14}$NH$_3$. Constraints on a temporal variation of $\mu$ have been determined using this method from measurements of the object B0218$+$357 at redshift $z\sim 0.685$~\citep{Flambaum:2007,Murphy:2008,Kanekar:2011}, and of the system PKS1830$-$211 at $z\sim 0.886$~\citep{Henkel:2009}. A major source of systematic error when using the ammonia method is the comparison with rotational lines from other molecular species, particularly molecules that are non-nitrogen-bearing (see \citet{Murphy:2008}, \citet{Henkel:2009}, and \citet{Kanekar:2011} for a more complete discussion). The most stringent limit using ammonia~\citep{Kanekar:2011} has since been improved upon with methanol absorption spectra observed in the lensing galaxy PKS1830$-$211~\citep{Bagdonaite:2013b}. Three different radio telescopes were used to measure ten absorption lines with sensitivity coefficients ranging from $T=-1.0$ to $-32.8$.
Here we present a comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of $^{14}$NH$_3$, $^{15}$NH$_3$, $^{14}$ND$_3$, and $^{15}$ND$_3$. A joint comparison of all relevant isotopic transitions could open the door to an all-ammonia detection, and potentially eliminate certain systematic errors that arise from using alternative reference molecules. We also note that the transitions of the $^{15}$N isotopes are optically thin and free of the nuclear quadrupole structures, providing a simpler radiative and line-shape analysis. A rigorous evaluation of the sensitivity coefficients will hopefully offer new scope for the ammonia method, and guide future measurements that could be carried out for example at the Atacama Large Millimeter/submillimeter Array (ALMA).
\section{Methods}
\subsection{Background}
The induced frequency shift of a probed transition is given as
\begin{equation}
\frac{\Delta\nu}{\nu_0}=T_{u,l}\frac{\Delta\mu}{\mu_0},
\label{eq.shift}
\end{equation}
where $\Delta\nu=\nu_{obs}-\nu_0$ is the change in the frequency and $\Delta\mu=\mu_{obs}-\mu_0$ is the change in $\mu$, both with respect to their accepted values $\nu_0$ and $\mu_0$. Using this relation one can easily show that the rotation-inversion transitions associated with the $\nu_2$ vibrational state of ammonia may exhibit induced frequency shifts more than one order of magnitude larger than the pure inversion transitions in the vibrational ground state, which are currently used in the probing of $\mu$ both temporally~\citep{Flambaum:2007,Murphy:2008,Menten:2008,Henkel:2009,Kanekar:2011} and spatially~\citep{Molaro:2009,Levshakov:2010b,Levshakov:2010a,Levshakov:2013}. Various $^{14}$NH$_3$ ro-inversional transitions have already been observed extraterrestrially~\citep{Mauersberger:1988,Schilke:1990,Schilke:1992}, whilst others with notable sensitivities possess Einstein coefficients comparable to those of the observed transitions. It is legitimate then to expect their eventual extragalactic detection, and when combined with their enhanced sensitivity, there is scope for a major improvement of the current ammonia analyses.
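To make the scale of the effect concrete, the relation above can be evaluated numerically. The sketch below is purely illustrative, not part of the published analysis: it compares the $(J,K)=(1,1)$ ground-state inversion line of $^{14}$NH$_3$ ($T\approx-4.31$ at 23694.3{\,}MHz, from the tables in this work) with the near-degenerate $\nu_2$ combination difference ($T\approx-790.6$ at 2883.7{\,}MHz), assuming a hypothetical fractional variation $\Delta\mu/\mu_0=10^{-7}$.

```python
def induced_shift(nu0_hz, T, dmu_over_mu):
    """Induced frequency shift from the relation dnu/nu0 = T * dmu/mu0."""
    return nu0_hz * T * dmu_over_mu

dmu_over_mu = 1e-7  # hypothetical fractional change in mu

# Ground-state (1,1) inversion line versus the near-degenerate nu2
# combination difference (frequencies and T values from this work).
shift_inversion = induced_shift(23694.3e6, -4.310, dmu_over_mu)   # Hz
shift_resonance = induced_shift(2883.7e6, -790.6, dmu_over_mu)    # Hz
```

Although the resonant transition lies at a far lower frequency, its enhanced sensitivity makes the induced shift roughly twenty times larger than that of the inversion line.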
Another promising anomaly exhibited by the spectra of ammonia is caused by the
so-called ``giant'' $l$-type doubling, which leads to a ``reversal'' of the
inversion doublets in the $K=1$ levels in the $+l$ component of the $\nu_4$
states of $^{14}$NH$_3$ and $^{15}$NH$_3$. The inversion doublets are reversed
because for $K=1$, only one of the $A_1$ or $A_2$ sublevels is shifted by the
Coriolis interactions, and only the $A_2$ states have non-zero spin statistical
weights (see Fig. 1 and \citet{Spirko:1976}). So far these transitions have not
been detected extraterrestrially. This is to be expected since the physical
temperatures prevailing in the interstellar medium are too low to provide
significant population of the aforementioned states. However they could be
effectively populated by exoergic chemical formation processes, resulting in
the detection of highly excited states~\citep{Mills:2013,Lis:2014}.
Interestingly, the `highest energy' $(J,K)=(18,18)$ line of $^{14}$NH$_3$
observed towards the galactic centre star forming region Sgr B2, corresponds to
the state lying $3130{\,}$K above the ground vibrational
state~\citep*{Wilson:2006}.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{ltype.pdf}
\caption{``Reversal'' of the inversion doublets in the +$l$ component of the $\nu_4$ level by the ``giant'' $l$-type doubling effect. Values in parentheses are the spin statistical weights.}\label{fig:theoref1}
\end{figure}
The most common approach to computing sensitivity coefficients for a molecular system makes use of an effective Hamiltonian model, and determining how the parameters of this model depend on $\mu$~\citep{Jansen:2011a,Jansen:2011,Kozlov:2011,Levshakov:2011,Ilyushin:2012,Ilyushin:2014,Viatkina:2014}. For ammonia, the semiclassical Wentzel-Kramers-Brillouin (WKB) approximation has been used to obtain a general relationship to estimate the sensitivity of pure inversion frequencies in the ground vibrational state for $^{14}$NH$_3$~\citep{Flambaum:2007}, $^{15}$NH$_3$~\citep{Sartakov:2008}, $^{14}$ND$_3$~\citep{Flambaum:2007}, and $^{15}$ND$_3$~\citep{Veldhoven:2004}, whilst rotation-inversion transitions have been considered for the partly deuterated species $^{14}$NH$_2$D and $^{14}$ND$_2$H by \citet*{Kozlov:2010}.
The vibration-rotation-inversion transitions of $^{14}$NH$_3$ were investigated by \citet{Spirko:2014}, but theoretical calculations of the sensitivities using perturbation theory may not be entirely robust since the numerator and denominator in Eq.~(\ref{eq.T}) contain differences of large numbers. We thus find it worthwhile not only to check the literature data for $^{14}$NH$_3$ by means of highly accurate variational calculations, but also to extend the treatment to $^{15}$NH$_3$, $^{14}$ND$_3$ and $^{15}$ND$_3$, which are equally valid probes of $\mu$. It is also straightforward to incorporate the so far unprobed $\nu_4$ states into the present study.
The advantage of our variational approach is that along with sensitivity coefficients, reliable theoretical transition frequencies can be generated if no experimental data is available, and for all selected transitions, Einstein $A$ coefficients can be calculated to guide future observations.
\subsection{Variational Calculations}
The variational nuclear motion program TROVE~\citep*{TROVE:2007} has provided highly accurate theoretical frequency, intensity, and thermodynamic data for both $^{14}$NH$_3$ and $^{15}$NH$_3$~\citep{09YuBaYa.NH3,10YaYuPa.NH3,11YuBaTe.method,Yurchenko:2011,14SoHYu.PH3,Yurchenko:2015}. We use the potential energy surface and computational setup as described in \citet{Yurchenko:2011} and \citet{Yurchenko:2015}, which can naturally be extended to treat $^{14}$ND$_3$ and $^{15}$ND$_3$. Here we only discuss the calculation of sensitivity coefficients; for the method used to compute transition frequencies and Einstein $A$ coefficients we refer the reader to \citet{09YuBaYa.NH3}.
We rely on the assumption that the baryonic matter may be treated equally~\citep{Dent:2007}, i.e. $\mu$ is assumed to be proportional to the molecular mass. It is then sufficient to perform a series of calculations employing suitably scaled values for the mass of ammonia. We choose the scaling coefficient $f_m=\lbrace0.9996,0.9998,1.0000,1.0002,1.0004\rbrace$ such that the scaled mass, $m_{\mathrm{NH_3}}^{\prime}=f_m \times m_{\mathrm{NH_3}}$. The mass dependency of any energy level can be found by using finite differences for (a) the $f_m=\lbrace0.9998,1.0002\rbrace$, and (b) the $f_m=\lbrace0.9996,1.0004\rbrace$ calculated energies. Both (a) and (b) should yield identical results, with the latter values used to verify the former. Numerical values for the derivatives ${\rm d}E/{\rm d}\mu$ are easily determined and then used in Eq.~(\ref{eq.T}), along with accurate experimental values for the transition frequencies, to calculate sensitivity coefficients. Calculations with $f_m=1.0000$ provide theoretical frequency data and Einstein $A$ coefficients.
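The finite-difference recipe amounts to the following sketch (illustrative only; in practice the energies come from TROVE runs with scaled masses). Since $\mu$ is taken proportional to the molecular mass, scaling the mass by $f_m = 1 \pm \delta$ scales $\mu$ equally, and Eq.~(\ref{eq.T}) reduces to a central difference in $\ln\mu$.

```python
def dE_dlnmu(e_minus, e_plus, delta):
    """Central difference for dE/d(ln mu), using energies computed with
    the molecular mass scaled by f_m = 1 - delta and f_m = 1 + delta."""
    return (e_plus - e_minus) / (2.0 * delta)

def sensitivity(eu, el, eu_minus, eu_plus, el_minus, el_plus, delta=2e-4):
    """Sensitivity coefficient T of Eq. (1).  (eu, el) are the unscaled
    upper/lower energies; in practice eu - el is replaced by an accurate
    experimental transition frequency."""
    num = dE_dlnmu(eu_minus, eu_plus, delta) - dE_dlnmu(el_minus, el_plus, delta)
    return num / (eu - el)
```

As a consistency check, a purely rotational level scales as $E\propto\mu^{-1}$, so ${\rm d}E/{\rm d}\ln\mu=-E$ and any pure rotational transition recovers the textbook value $T=-1$.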
The variational approach is powerful in that it allows a comprehensive treatment of a molecule to be undertaken. All possible transitions and their mass dependence can be calculated. This permits a simple exploration of the sensitivities for any molecule, provided the necessary steps have been taken to perform accurate variational calculations in the first place. As a cross-check, we also employ the nonrigid inverter theory~\citep{Spirko:1976,Spirko:1983} to compute sensitivity coefficients as was done by \citet{Spirko:2014}. In the following we evaluate both approaches. Note that the standard Herzberg convention~\citep{Herzberg:1945} is used to label the vibrational states of ammonia with the normal mode quantum numbers $v_1$, $v_2$, $v_3$, $v_4$, $l_3$ and $l_4$. The $\nu_2$ state corresponds to the singly excited inversion mode $v_2=1$, whilst $\nu_4$ is the singly excited asymmetric bending mode $v_4=|l_4|=1$ (see \citet{Down:2013} for more details).
\section{Results and Discussion}
The variationally calculated sensitivities for $^{14}$NH$_3$ and $^{15}$NH$_3$ are listed in Tables \ref{tab:v2_14nh3_15nh3_1} to \ref{tab:roinv_ground_14nh3_15nh3}. The results are consistent with previous perturbative values~\citep{Spirko:2014} obtained using the nonrigid inverter theory approach~\citep{Spirko:1976,Spirko:1983}, and with `Born-Oppenheimer' estimates from \citet{Jansen:2014} (subsequently referred to as JBU). For transitions involving the $\nu_2$ vibrational states shown in Tables \ref{tab:v2_14nh3_15nh3_1}, \ref{tab:v2_14nh3_15nh3_2} and \ref{tab:v2_14nh3_15nh3_3}, the agreement is near quantitative with the exception of the ``forbidden'' combination difference $|a,J\!=\!3,K\!=\!3,v_{2}\!=\!1\rangle$ - $|s,J\!=\!3,K\!=\!0,v_{2}\!=\!1\rangle$. The profoundly different sensitivities for these transitions when going from $^{14}$NH$_3$ to $^{15}$NH$_3$ are of particular interest. A possible variation of $\mu$ requires the measurement of at least two transitions with differing sensitivities. In the case of $|a,J\!=\!3,K\!=\!3,v_{2}\!=\!1\rangle$ - $|s,J\!=\!3,K\!=\!0,v_{2}\!=\!1\rangle$, both isotopologues possess a large value of $T$. Importantly, though, the values are of opposite sign, thus enabling a conclusive detection with regard to these particular transitions. An all-ammonia observation of a drifting $\mu$ would circumvent some of the intrinsic difficulties that arise when using other reference molecules~\citep{Murphy:2008,Henkel:2009,Kanekar:2011}, which may not necessarily reside at the same location in space.
\begin{table*}
\vspace*{-0.0cm}
\caption{The rotation-inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$NH$_3$ and their $^{15}$NH$_3$ counterparts in the $\nu_2$ vibrational state.}
\label{tab:v2_14nh3_15nh3_1}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.1000in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/MHz}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$} \\[1mm]
\hline \\[-1mm]
& & & & & & & & $^{14}$NH$_3$ & & & \\[1mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 1 & 140142$^a$ & 0.1474$\times 10^{-4}$ & 17.24(16.92$^b$) \\[0.3mm]
$A_2^{\prime\prime}$ & a & 0 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 1 & 466244$^c$ & 0.1824$\times 10^{-2}$ & -6.587(-6.409) \\[1mm]
& & & & & & & & $^{15}$NH$_3$ & & & \\[1mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 1 & 175053 & 0.2939$\times 10^{-4}$ & 13.33(13.28) \\[0.3mm]
$A_2^{\prime\prime}$ & a & 0 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 1 & 430038 & 0.1425$\times 10^{-2}$ & -6.894(-6.908) \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize $^{14}$NH$_3$: Einstein coefficients from \citet{Yurchenko:2011}; $^a$Astronomical observation from \citet*{Mauersberger:1988} and \citet{Schilke:1990}; $^b$JBU sensitivity coefficient reaches a value of 18.8 (see \citet{Jansen:2014}); $^c$Astronomical observation from \citet{Schilke:1992}; values in parentheses from \citet{Spirko:2014}, obtained using the nonrigid inverter theory. \\
$^{15}$NH$_3$: Frequencies and Einstein coefficients from \citet{Urban:1985} and \citet{Yurchenko:2015}, respectively; values in parentheses obtained using the nonrigid inverter theory with the frequencies from \citet{Urban:1985}. \\
\end{table*}
\begin{table*}
\vspace*{-0.0cm}
\caption{The wavenumbers ($\nu$), wavelengths ($\lambda$), Einstein coefficients ($A$), and sensitivities ($T$) for transitions between the ground and $\nu_2$ vibrational state of $^{14}$NH$_3$ and their $^{15}$NH$_3$ counterparts.}
\label{tab:v2_14nh3_15nh3_2}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.10in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.2200in}}
c @{\extracolsep{0.2200in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/cm$^{-1}$}&\multicolumn{1}{c}{$\lambda$/$\mu$m}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}\\[1mm]
\hline \\[-1mm]
& & & & & & & & & $^{14}$NH$_3$ & & & \\[1mm]
$A_2^\prime$ & s & 6 & 6 & 1 & $A_2^{\prime\prime}$ & a & 6 & 6 & 0 & 927.3230 & 10.7837 & 0.1316$\times 10^{+2}$ & -0.367(-0.356) \\[0.3mm]
$E^\prime$ & s & 2 & 2 & 1 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 931.3333 & 10.7373 & 0.1030$\times 10^{+2}$ & -0.371(-0.366) \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 971.8821 & 10.2893 & 0.5238$\times 10^{+1}$ & -0.399(-0.394) \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 891.8820 & 11.2122 & 0.6795$\times 10^{+1}$ & -0.344(-0.339) \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 892.1567 & 11.2088 & 0.9054$\times 10^{+1}$ & -0.344(-0.339) \\[0.3mm]
$A_2^{\prime\prime}$ & s & 3 & 3 & 1 & $A_2^\prime$ & a & 3 & 3 & 0 & 930.7571 & 10.7439 & 0.1158$\times 10^{+2}$ & -0.370(-0.366) \\[1mm]
& & & & & & & & & $^{15}$NH$_3$ & & & \\[1mm]
$A_2^\prime$ & s & 6 & 6 & 1 & $A_2^{\prime\prime}$ & a & 6 & 6 & 0 & 923.4541 & 10.8289 & 0.1290$\times 10^{+2}$ & -0.365(-0.365) \\[0.3mm]
$E^\prime$ & s & 2 & 2 & 1 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 927.4034 & 10.7828 & 0.1010$\times 10^{+2}$ & -0.373(-0.373) \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 967.8597 & 10.3321 & 0.5133$\times 10^{+1}$ & -0.400(-0.400) \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 888.0413 & 11.2607 & 0.6664$\times 10^{+1}$ & -0.345(-0.345) \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 888.3174 & 11.2572 & 0.8878$\times 10^{+1}$ & -0.346(-0.346) \\[0.3mm]
$A_2^{\prime\prime}$ & s & 3 & 3 & 1 & $A_2^\prime$ & a & 3 & 3 & 0 & 926.8378 & 10.7894 & 0.1135$\times 10^{+2}$ & -0.372(-0.372) \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize $^{14}$NH$_3$: Wavenumbers and Einstein coefficients from \citet{Urban:1984} and \citet{Yurchenko:2011}, respectively; Astronomical observations reported in \citet{Betz:1979} and \citet{Evans:1991}; values in parentheses from \citet{Spirko:2014}, obtained using the nonrigid inverter theory. \\
$^{15}$NH$_3$: Wavenumbers provided by Fusina, Di Lonardo \& Predoi-Cross (in preparation), Einstein coefficients from \citet{Yurchenko:2015}; values in parentheses obtained using the nonrigid inverter theory with the frequencies from Fusina, Di Lonardo \& Predoi-Cross (in preparation). \\
\end{table*}
\begin{table}
\vspace*{-1.5cm}
\caption{The vibration-rotation-inversion transitions associated with the $|a,J,K\!=\!3,v_{2}\!=\!1\rangle$ - $|s,J,K\!=\!0,v_{2}\!=\!1\rangle$ resonances.}
\label{tab:v2_14nh3_15nh3_3}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.3000in}}
c @{\extracolsep{0.1200in}}
c @{\extracolsep{0.1200in}}
c @{\extracolsep{0.0800in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/MHz}&\multicolumn{1}{c}{$A$/s$^{-1}$}&
\multicolumn{1}{c}{$T$}&\multicolumn{1}{c}{Obs. Ref.} \\[2mm]
\hline \\[-1mm]
& & & & & & & & $^{14}$NH$_3$ & & & \\[1mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 0 & 29000313.7 & 0.1176$\times 10^{+2}$ & -0.484(-0.484) & $^b$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 0 & 28997430.0 & 0.2025$\times 10^{0}$ & -0.405(-0.405) & $^b$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 2883.7 & & -790.6(-1001$^a$)& $^b$ \\[1.4mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 1 & 772594.9 & 0.6018$\times 10^{-4}$ & -0.868(-0.868) & $^c$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 1 & 769710.2 & 0.3471$\times 10^{-2}$ & 2.090(2.089) & $^c$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 2884.7 & & -790.3(-1001)& $^c$ \\[1.4mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 1 & 1073050.7 & 0.1634$\times 10^{-1}$ & -3.350(-3.353) & $^c$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 1 & 1070166.6 & 0.2765$\times 10^{-3}$ & -1.228(-1.229) & $^c$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 2884.1 & & -790.5(-1001)& $^c$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 0 & 28971340.5 & 0.4692$\times 10^{+1}$ & -0.484(-0.484) & $^d$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 0 & 29050552.5 & 0.2147$\times 10^{-2}$ & -0.408(-0.408) & $^d$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 79212.0 & & 27.38(27.35) & $^d$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 1 & 979649.1 & 0.5141$\times 10^{-2}$ & -3.425(-3.427) & $^d$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 1 & 1058861.1 & 0.3714$\times 10^{-5}$ & -1.120(-1.120) & $^d$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 79212.0 & & 27.38(27.35) & $^d$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & a & 4 & 0 & 1 & 1956241.1 & 0.4129$\times 10^{-4}$ & -0.988(-0.988) & $^d$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & a & 4 & 0 & 1 & 2035453.1 & 0.7023$\times 10^{-1}$ & 0.116(0.116) & $^d$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 79212.0 & & 27.38(27.35) & $^d$ \\[1.4mm]
$A_2^\prime$ & a & 7 & 3 & 1 & $A_2^{\prime\prime}$ & s & 7 & 3 & 0 & 28934099.5 & 0.2399$\times 10^{+1}$ & -0.480(-0.480) & $^d$ \\[0.3mm]
$A_2^\prime$ & s & 7 & 0 & 1 & $A_2^{\prime\prime}$ & s & 7 & 3 & 0 & 29118808.5 & 0.1095$\times 10^{-3}$ & -0.416(-0.416) & $^d$ \\[0.3mm]
& a & 7 & 3 & 1 & & s & 7 & 0 & 1 & 184709.0 & & 9.561(9.582) & $^d$ \\[1.4mm]
$A_2^\prime$ & a & 9 & 3 & 1 & $A_2^{\prime\prime}$ & s & 9 & 3 & 0 & 28892089.9 & 0.1444$\times 10^{+1}$ & -0.475(-0.475) & $^d$ \\[0.3mm]
$A_2^\prime$ & s & 9 & 0 & 1 & $A_2^{\prime\prime}$ & s & 9 & 3 & 0 & 29194454.6 & 0.1029$\times 10^{-3}$ & -0.425(-0.425) & $^d$ \\[0.3mm]
& a & 9 & 3 & 1 & & s & 9 & 0 & 1 & 302364.7 & & 4.350(4.363) & $^d$ \\[1mm]
& & & & & & & & $^{15}$NH$_3$ & & & & \\[1mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 0 & 28843885.0 & 0.1171$\times 10^{+2}$ & -0.486(-0.486) & $^e$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 0 & 28872669.9 & 0.2187$\times 10^{-2}$ & -0.403(-0.403) & $^e$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 28784.9 & & 82.96(81.69) & $^e$ \\[1.4mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 1 & 774222.8 & 0.7160$\times 10^{-6}$ & -0.999(-0.999 ) & $^f$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 1 & 802986.7 & 0.4035$\times 10^{-2}$ & 2.011(2.010) & $^f$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 28763.9 & & 83.02(81.69) & $^f$ \\[1.4mm]
$A_2^\prime$ & a & 3 & 3 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 1 & 1035207.4 & 0.1491$\times 10^{-1}$ & -3.473(-3.476) & $^f$ \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 1 & 1063971.3 & 0.3245$\times 10^{-5}$ & -1.228(-1.135) & $^f$ \\[0.3mm]
& a & 3 & 3 & 1 & & s & 3 & 0 & 1 & 28763.9 & & 83.02(81.69) & $^f$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 0 & 28817906.5 & 0.4598$\times 10^{+1}$ & -0.483(-0.483) & $^e$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 0 & 28927141.3 & 0.7768$\times 10^{-3}$ & -0.409(-0.409) & $^e$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 109234.8 & & 19.02(19.02) & $^e$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 1 & 943226.9 & 0.4588$\times 10^{-2}$ & -3.453(-3.455) & $^f$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & s & 5 & 3 & 1 & 1052459.7 & 0.1548$\times 10^{-5}$ & -1.120(-1.121) & $^f$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 109232.8 & & 19.04(19.02) & $^f$ \\[1.4mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & a & 4 & 0 & 1 & 1955711.7 & 0.1882$\times 10^{-4}$ & -0.988(-0.988) & $^f$ \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & a & 4 & 0 & 1 & 2064944.5 & 0.7369$\times 10^{-1}$ & 0.071(0.071) & $^f$ \\[0.3mm]
& a & 5 & 3 & 1 & & s & 5 & 0 & 1 & 109232.8 & & 19.05(19.02) & $^f$ \\[1.4mm]
$A_2^\prime$ & a & 7 & 3 & 1 & $A_2^{\prime\prime}$ & s & 7 & 3 & 0 & 28784706.6 & 0.2399$\times 10^{+1}$ & -0.479(-0.479) & $^e$ \\[0.3mm]
$A_2^\prime$ & s & 7 & 0 & 1 & $A_2^{\prime\prime}$ & s & 7 & 3 & 0 & 28997286.1 & 0.1095$\times 10^{-3}$ & -0.418(-0.418) & $^e$ \\[0.3mm]
& a & 7 & 3 & 1 & & s & 7 & 0 & 1 & 212579.5 & & 7.898(7.073) & $^e$ \\[1.4mm]
$A_2^\prime$ & a & 9 & 3 & 1 & $A_2^{\prime\prime}$ & s & 9 & 3 & 0 & 28747714.9 & 0.1444$\times 10^{+1}$ & -0.479(-0.475) & $^e$ \\[0.3mm]
$A_2^\prime$ & s & 9 & 0 & 1 & $A_2^{\prime\prime}$ & s & 9 & 3 & 0 & 29075088.5 & 0.1029$\times 10^{-3}$ & -0.418(-0.427) & $^e$ \\[0.3mm]
& a & 9 & 3 & 1 & & s & 9 & 0 & 1 & 327373.6 & & 3.782(3.782) & $^e$ \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize $^{14}$NH$_3$: Einstein coefficients from \citet{Yurchenko:2011}; $^a$JBU sensitivity coefficient reaches a value of -938 (see \citet{Jansen:2014}); values in parentheses obtained using the nonrigid inverter theory with the calculated TROVE frequencies; $^b$\citet{Fichoux}; $^c$\citet{Belov}; $^d$\citet{Urban:1984}. \\
$^{15}$NH$_3$: Einstein coefficients from \citet{Yurchenko:2015}; \hspace{8mm} values in parentheses obtained using the nonrigid inverter theory with the calculated TROVE frequencies; $^e$Fusina, Di Lonardo \& Predoi-Cross (in preparation); $^f$\citet{Urban:1985}.\\
\end{table}
\begin{table*}
\vspace*{-0.0cm}
\caption{Inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$NH$_3$ and their $^{15}$NH$_3$ counterparts in the ground vibrational state.}
\label{tab:inv_ground_14nh3_15nh3}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.10in}}
c @{\extracolsep{0.0800in}}
c @{\extracolsep{0.0800in}}
c @{\extracolsep{0.2500in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0400in}}
c @{\extracolsep{0.0800in}}
c @{\extracolsep{0.0800in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}&
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$} \\[1mm]
\hline \\[-1mm]
& & & & & \hspace*{-12mm}$^{14}$NH$_3$ & & & & \\[1mm]
1 & 1 & 23694.3 & 0.1657$\times 10^{-6}$ & -4.310(-4.365) & 4 & 3 & 22688.3 & 0.1311$\times 10^{-6}$ & -4.289(-4.514) \\
2 & 1 & 23098.8 & 0.5123$\times 10^{-7}$ & -4.297(-4.413) & 4 & 4 & 24139.4 & 0.2797$\times 10^{-6}$ & -4.317(-4.471) \\
2 & 2 & 23722.5 & 0.2216$\times 10^{-6}$ & -4.311(-4.385) & 5 & 1 & 19838.3 & 0.6540$\times 10^{-8}$ & -4.220(-4.700) \\
3 & 2 & 22834.2 & 0.9902$\times 10^{-7}$ & -4.288(-4.464) & 5 & 2 & 20371.5 & 0.2828$\times 10^{-7}$ & -4.231(-4.546) \\
3 & 3 & 23870.1 & 0.2538$\times 10^{-6}$ & -4.312(-4.419) & 5 & 3 & 21285.3 & 0.7239$\times 10^{-7}$ & -4.257(-4.634) \\
4 & 1 & 21134.3 & 0.1182$\times 10^{-7}$ & -4.249(-4.568) & 5 & 4 & 22653.0 & 0.1546$\times 10^{-6}$ & -4.282(-4.592) \\
4 & 2 & 21703.4 & 0.5114$\times 10^{-7}$ & -4.262(-4.545) & 5 & 5 & 24533.0 & 0.3053$\times 10^{-6}$ & -4.327(-4.509) \\[1mm]
& & & & & \hspace*{-12mm}$^{15}$NH$_3$ & & & & \\[1mm]
1 & 1 & 22624.9 & 0.1464$\times 10^{-6}$ & -4.352(-4.333) & 4 & 3 & 21637.9 & 0.1149$\times 10^{-6}$ & -4.330(-4.309) \\
2 & 1 & 22044.2 & 0.4521$\times 10^{-7}$ & -4.341(-4.321) & 4 & 4 & 23046.0 & 0.2469$\times 10^{-6}$ & -4.360(-4.341) \\
2 & 2 & 22649.8 & 0.1958$\times 10^{-6}$ & -4.349(-4.330) & 5 & 1 & 18871.5 & 0.5729$\times 10^{-8}$ & -4.264(-4.239) \\
3 & 2 & 21783.9 & 0.8730$\times 10^{-7}$ & -4.333(-4.312) & 5 & 2 & 19387.4 & 0.2480$\times 10^{-7}$ & -4.278(-4.254) \\
3 & 3 & 22789.4 & 0.2241$\times 10^{-6}$ & -4.356(-4.337) & 5 & 3 & 20272.1 & 0.6358$\times 10^{-7}$ & -4.299(-4.276) \\
4 & 1 & 20131.4 & 0.1039$\times 10^{-7}$ & -4.293(-4.270) & 5 & 4 & 21597.9 & 0.1360$\times 10^{-6}$ & -4.330(-4.309) \\
4 & 2 & 20682.8 & 0.4498$\times 10^{-7}$ & -4.306(-4.284) & 5 & 5 & 23422.0 & 0.2695$\times 10^{-6}$ & -4.366(-4.347) \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize $^{14}$NH$_3$: Frequencies and Einstein coefficients from \citet{Lovas:2009} and \citet{Yurchenko:2011}, respectively; values in parentheses from \citet{Spirko:2014}, obtained using the nonrigid inverter theory. \\
$^{15}$NH$_3$: Frequencies and Einstein coefficients from \citet{Urban:1985} and \citet{Yurchenko:2015}, respectively; values in parentheses obtained using the nonrigid inverter theory with the frequencies from \citet{Urban:1985}. \\
\end{table*}
\begin{table*}
\vspace*{-0.0cm}
\caption{The rotation-inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$NH$_3$ and their $^{15}$NH$_3$ counterparts in the ground vibrational state.}
\label{tab:roinv_ground_14nh3_15nh3}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.1in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/MHz}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}& \\[1mm]
\hline \\[-1mm]
& & & & & & & &$^{14}$NH$_3$ & & & & \\[1mm]
$A_2^\prime$ & s & 1 & 0 & 0 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 572498 & 0.1561$\times 10^{-2}$ & -0.860(-0.862) \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 0 & $A_2^\prime$ & s & 1 & 0 & 0 & 1214859 & 0.1791$\times 10^{-1}$ & -1.060(-1.063) \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 0 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 1215245 & 0.1344$\times 10^{-1}$ & -1.061(-1.064) \\[1mm]
& & & & & & & &$^{15}$NH$_3$ & & & & \\[1mm]
$A_2^\prime$ & s & 1 & 0 & 0 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 572112 & 0.1557$\times 10^{-2}$ & -0.865(-0.866) \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 0 & $A_2^\prime$ & s & 1 & 0 & 0 & 1210889 & 0.1774$\times 10^{-1}$ & -1.058(-1.058) \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 0 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 1211277 & 0.1331$\times 10^{-1}$ & -1.059(-1.059) \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize $^{14}$NH$_3$: Frequencies and Einstein coefficients from \citet{Persson:2010} and \citet{Yurchenko:2011}, respectively; values given in parentheses from \citet{Spirko:2014}, obtained using the nonrigid inverter theory. \\
$^{15}$NH$_3$: Frequencies and Einstein coefficients from \citet{Urban:1985} and \citet{Yurchenko:2015}, respectively; values in parentheses obtained using the nonrigid inverter theory with the frequencies from \citet{Urban:1985}. \\
\end{table*}
The inversion frequencies in the ground vibrational state (Table \ref{tab:inv_ground_14nh3_15nh3}) have comparable sensitivities for $^{14}$NH$_3$ and $^{15}$NH$_3$, and the same holds for the ro-inversional transitions shown in Table \ref{tab:roinv_ground_14nh3_15nh3}, demonstrating the validity of $^{15}$NH$_3$ as a probe of $\mu$. The sensitivity coefficients of the $\nu_4$ transitions shown in Table \ref{tab:v4_14nh3_15nh3}, although promising, do not reach the impressive magnitudes of their $\nu_2$ counterparts. However, the appearance of both positive and negative values could be of real use in constraining $\mu$.
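The practical meaning of the tabulated sensitivities can be illustrated with the standard relation of the ammonia method; the notation below is generic and is not quoted from a specific equation of this work:

```latex
% A fractional change in mu shifts each line according to its sensitivity:
%   \Delta\nu_i / \nu_i = T_i \, \Delta\mu / \mu .
% Comparing two lines i and j of the same absorber cancels the common
% redshift and leaves an apparent velocity offset
\[
  \frac{\Delta V}{c} \;=\; \bigl(T_j - T_i\bigr)\,\frac{\Delta\mu}{\mu},
\]
% so line pairs with sensitivities of opposite sign, as occur among the
% \nu_4 transitions above, maximise |T_j - T_i| and hence the leverage
% on \Delta\mu/\mu.
```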
\begin{table*}
\vspace*{-0.0cm}
\caption{Inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$NH$_3$ and $^{15}$NH$_3$ in the $\nu_4$ vibrational state.}
\label{tab:v4_14nh3_15nh3}
\begin{tabular}{c @{\extracolsep{0.001in}}
c @{\extracolsep{0.001in}}
c @{\extracolsep{0.0051in}}
c @{\extracolsep{0.0200in}}
c @{\extracolsep{0.1200in}}
c @{\extracolsep{0.20000in}}
c @{\extracolsep{0.001in}}
c @{\extracolsep{0.001in}}
c @{\extracolsep{0.00510in}}
c @{\extracolsep{0.0200in}}
c @{\extracolsep{0.1200in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{l}{$l$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}&
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{l}{$l$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$} \\[1mm]
\hline \\[-1mm]
& & & & & & \hspace*{-12mm}$^{14}$NH$_3$ & & & & & \\[1mm]
1 & 1 & -1 & 32400 & 0.4243$\times 10^{-6}$ & -4.268 & 4 & 3 & 1 & 57132 & 0.1968$\times 10^{-5}$ & 1.561 \\
1 & 1 & 1 & 57843 & 0.2411$\times 10^{-5}$ & -2.234 & 4 & 2 & -1 & 47526 & 0.5467$\times 10^{-6}$ & -1.550 \\
2 & 2 & -1 & 32111 & 0.5514$\times 10^{-6}$ & -4.250 & 4 & 2 & 1 & 46515 & 0.4020$\times 10^{-6}$ & -0.247 \\
2 & 2 & 1 & 40189 & 0.1056$\times 10^{-5}$ & -2.381 & 4 & 1 & -1 & 57681 & 0.2548$\times 10^{-6}$ & -0.220 \\
2 & 1 & -1 & 36797 & 0.2085$\times 10^{-6}$ & -3.133 & 4 & 1 & 1 &145888$^a$& 0.3787$\times 10^{-5}$ & -0.962 \\
2 & 1 & 1 & 20655 & 0.3743$\times 10^{-7}$ & 2.720 & 5 & 5 & -1 & 32037 & 0.6848$\times 10^{-6}$ & -4.264 \\
3 & 3 & -1 & 31893 & 0.6081$\times 10^{-6}$ & -4.259 & 5 & 5 & 1 & 68699 & 0.6198$\times 10^{-5}$ & 4.672 \\
3 & 3 & 1 & 46679 & 0.1863$\times 10^{-5}$ & -0.667 & 5 & 4 & -1 & 39071 & 0.8020$\times 10^{-6}$ & -2.832 \\
3 & 2 & -1 & 37500 & 0.4424$\times 10^{-6}$ & -2.961 & 5 & 4 & 1 & 73534 & 0.4807$\times 10^{-5}$ & 4.480 \\
3 & 2 & 1 & 44963 & 0.6906$\times 10^{-6}$ & -1.023 & 5 & 3 & -1 & 48346 & 0.8610$\times 10^{-6}$ & -1.506 \\
3 & 1 & -1 & 44755 & 0.1908$\times 10^{-6}$ & -1.687 & 5 & 3 & 1 & 64906 & 0.1799$\times 10^{-5}$ & 3.044 \\
3 & 1 & 1 &177783$^a$&0.1087$\times 10^{-4}$& -0.482 & 5 & 2 & -1 & 58699 & 0.6967$\times 10^{-6}$ & -0.181 \\
4 & 4 & -1 & 31884 & 0.6482$\times 10^{-6}$ & -4.258 & 5 & 2 & 1 & 44876 & 0.2025$\times 10^{-6}$ & 0.239 \\
4 & 4 & 1 & 55765 & 0.3325$\times 10^{-5}$ & 1.668 & 5 & 1 & 1 & 78141$^a$& 0.4324$\times 10^{-6}$ & 0.990 \\
4 & 3 & -1 & 38460 & 0.6451$\times 10^{-6}$ & -2.855 & 5 & 1 & -1 &380542$^a$& 0.4015$\times 10^{-4}$ & -0.178 \\[1mm]
& & & & & & \hspace*{-12mm}$^{15}$NH$_3$ & & & & & \\[1mm]
1 & 1 & -1 & 31108 & 0.3758$\times 10^{-6}$ & -4.291 & 4 & 3 & 1 & 51989 & 0.1501$\times 10^{-5}$ & 0.684 \\
1 & 1 & 1 & 55582 & 0.2142$\times 10^{-5}$ & -2.410 & 4 & 2 & -1 & 44599 & 0.4524$\times 10^{-6}$ & -1.765 \\
2 & 2 & -1 & 30825 & 0.4880$\times 10^{-6}$ & -4.271 & 4 & 2 & 1 & 43225 & 0.3278$\times 10^{-6}$ & -0.728 \\
2 & 2 & 1 & 37900 & 0.8883$\times 10^{-6}$ & -2.722 & 4 & 1 & -1 & 53406 & 0.2029$\times 10^{-6}$ & -0.558 \\
2 & 1 & -1 & 34950 & 0.1788$\times 10^{-6}$ & -3.273 & 4 & 1 & 1 &146961$^a$ & 0.3870$\times 10^{-5}$ & -0.983 \\
2 & 1 & 1 & 21904 & 0.4450$\times 10^{-7}$ & 2.351 & 5 & 5 & -1 & 30732 & 0.6050$\times 10^{-6}$ & -4.280 \\
3 & 3 & -1 & 30606 & 0.5377$\times 10^{-6}$ & -4.281 & 5 & 5 & 1 & 61128$^a$ & 0.4341$\times 10^{-5}$ & 2.771 \\
3 & 3 & 1 & 43275 & 0.1492$\times 10^{-5}$ & -1.358 & 5 & 4 & -1 & 37071 & 0.6856$\times 10^{-6}$ & -2.937 \\
3 & 2 & -1 & 35551 & 0.3772$\times 10^{-6}$ & -3.082 & 5 & 4 & 1 & 65945$^a$ & 0.3431$\times 10^{-5}$ & 3.150 \\
3 & 2 & 1 & 41928 & 0.5649$\times 10^{-6}$ & -1.494 & 5 & 3 & -1 & 45373 & 0.7129$\times 10^{-6}$ & -1.689 \\
3 & 1 & -1 & 41947 & 0.1574$\times 10^{-6}$ & -1.941 & 5 & 3 & 1 & 59236$^a$ & 0.1351$\times 10^{-5}$ & 2.151 \\
3 & 1 & 1&171460$^a$&0.9842$\times 10^{-5}$& -0.731 & 5 & 2 & -1 & 54322 & 0.5536$\times 10^{-6}$ & -0.440 \\
4 & 4 & -1 & 30591 & 0.5729$\times 10^{-6}$ & -4.281 & 5 & 2 & 1 & 42037$^a$ & 0.1659$\times 10^{-6}$ & -0.242 \\
4 & 4 & 1 & 50530 & 0.2502$\times 10^{-5}$ & 0.452 & 5 & 1 & -1 & 71752$^a$ & 0.3362$\times 10^{-6}$ & 0.639 \\
4 & 3 & -1 & 36472 & 0.5506$\times 10^{-6}$ & -2.978 & 5 & 1 & 1 &369287$^a$ & 0.3728$\times 10^{-4}$ & -0.379 \\[1mm]
\hline \\[-3mm]
\end{tabular}
\vspace*{1mm}
\\
\footnotesize Frequencies from \citet{Cohen:1974} and \citet{Cohen:1980}; $^a$TROVE calculated value \\
\end{table*}
Because of the substantial differences in the size of the inversion splittings, the mass sensitivities of the $^{14}$ND$_3$ and $^{15}$ND$_3$ transitions exhibit a centrifugal-distortion and Coriolis-interaction dependence significantly different from that exhibited by $^{14}$NH$_3$ and $^{15}$NH$_3$ (see Tables \ref{tab:inv_ground_14nd3_15nd3}, \ref{tab:roinv_ground_14nd3_15nd3}, \ref{tab:v2_14nd3_15nd3_1}, and \ref{tab:v2_14nd3_15nd3_2} and Fig.~2). The effects of these interactions are non-negligible and must be included in any critical analysis. As only a small fraction of the ammonia present in the interstellar medium is heavy ammonia, a detection of `higher energy' transitions is rather improbable. However, all the ammonia isotopomers appear to be suitable targets for terrestrial studies, such as those reported by \citet{Veldhoven:2004}, \citet{Sartakov:2008}, and \citet{Quintero:2014}.
\begin{figure}
\hspace*{-7mm}\includegraphics[width=0.56\columnwidth,angle=0 ]{gsallND3.pdf}
\hspace*{-15mm}\includegraphics[width=0.56\columnwidth,angle=0]{gsall_ES_ND3.pdf}
\caption{The sensitivities, $T$, of the inversion transitions of the $(J,K=\pm3)$ rotational states of $^{14}$ND$_3$ and $^{15}$ND$_3$ in the ground (left panel) and $\nu_2$ (right panel) vibrational states.}
\label{fig:theoref2}
\end{figure}
\begin{table*}
\vspace*{-0.0cm}
\caption{Inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$ND$_3$ and $^{15}$ND$_3$ in the ground vibrational state.}
\label{tab:inv_ground_14nd3_15nd3}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.10in}}
c @{\extracolsep{0.100in}}
c @{\extracolsep{0.100in}}
c @{\extracolsep{0.2500in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0400in}}
c @{\extracolsep{0.1000in}}
c @{\extracolsep{0.1000in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}&
\multicolumn{1}{c}{$J$}&\multicolumn{1}{c}{$K$}&\multicolumn{1}{c}{$\nu$/MHz}&
\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$} \\[1mm]
\hline \\[-1mm]
& & & & & \hspace*{-12mm}$^{14}$ND$_3$ & & & & \\[1mm]
1 & 1 & 1589.006 & 0.5764$\times 10^{-10}$& -5.541(-5.528)& 4 & 3 & 1558.600 & 0.4897$\times 10^{-10}$& -5.533(-5.520) \\
2 & 1 & 1568.357 & 0.1849$\times 10^{-10}$& -5.556(-5.542)& 4 & -3 & 1558.178 & 0.4893$\times 10^{-10}$& -5.534(-5.521) \\
2 & 2 & 1591.695 & 0.7721$\times 10^{-10}$& -5.543(-5.530)& 4 & 4 & 1612.997 & 0.9623$\times 10^{-10}$& -5.536(-5.525) \\
3 & 1 & 1537.915 & 0.8725$\times 10^{-11}$& -5.526(-5.511)& 5 & 1 & 1450.435$^a$ & 0.2937$\times 10^{-11}$& -5.511(-5.493) \\
3 & 2 & 1560.774 & 0.3644$\times 10^{-10}$& -5.537(-5.523)& 5 & 2 & 1471.785 & 0.1226$\times 10^{-10}$& -5.504(-5.487) \\
3 & 3 & 1599.645 & 0.8810$\times 10^{-10}$& -5.571(-5.559)& 5 & 3 & 1507.525 & 0.2960$\times 10^{-10}$& -5.553(-5.537) \\
3 & -3 & 1599.704 & 0.8811$\times 10^{-10}$& -5.571(-5.559)& 5 & -3 & 1509.218 & 0.2969$\times 10^{-10}$& -5.499(-5.484) \\
4 & 1 & 1498.270 & 0.4848$\times 10^{-11}$& -5.503(-5.487)& 5 & 4 & 1561.146 & 0.5827$\times 10^{-10}$& -5.524(-5.511) \\
4 & 2 & 1520.537 & 0.2025$\times 10^{-10}$& -5.493(-5.478)& 5 & 5 & 1631.784 & 0.1036$\times 10^{-9}$& -5.561(-5.551) \\[1mm]
& & & & & \hspace*{-12mm}$^{15}$ND$_3$ & & & & \\[1mm]
1 & 1 & 1430.340 & 0.4227$\times 10^{-10}$& -5.600(-5.577) & 4 & 3 & 1401.312 & 0.3578$\times 10^{-10}$& -5.600(-5.577) \\
2 & 1 & 1410.980 & 0.1354$\times 10^{-10}$& -5.613(-5.589) & 4 & -3 & 1400.878 & 0.3575$\times 10^{-10}$& -5.602(-5.578) \\
2 & 2 & 1432.641 & 0.5661$\times 10^{-10}$& -5.604(-5.581) & 4 & 2 & 1366.027 & 0.1476$\times 10^{-10}$& -5.586(-5.561) \\
3 & 1 & 1382.510 & 0.5374$\times 10^{-11}$& -5.585(-5.560) & 5 & 1 & 1300.841$^a$ & 0.2130$\times 10^{-11}$& -5.562(-5.534) \\
3 & 2 & 1403.684 & 0.2665$\times 10^{-10}$& -5.566(-5.542) & 5 & 2 & 1320.460$^a$ & 0.8907$\times 10^{-11}$& -5.575(-5.547) \\
3 & 3 & 1439.719 & 0.5458$\times 10^{-10}$& -5.601(-5.579) & 5 & 3 & 1353.451 & 0.2153$\times 10^{-10}$& -5.585(-5.559) \\
3 & -3 & 1439.783 & 0.6459$\times 10^{-10}$& -5.601(-5.579) & 5 & -3 & 1355.161 & 0.2162$\times 10^{-10}$& -5.551(-5.526) \\
4 & 1 & 1345.533$^a$ & 0.3530$\times 10^{-11}$& -5.564(-5.538) & 5 & 4 & 1403.179 & 0.4254$\times 10^{-10}$& -5.606(-5.583) \\
4 & 2 & 1366.027 & 0.1476$\times 10^{-10}$& -5.586(-5.561) & 5 & 5 & 1468.666 & 0.7595$\times 10^{-10}$& -5.639(-5.619) \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize Unless stated otherwise, $^{14}$ND$_3$ and $^{15}$ND$_3$ frequencies from \citet{Murzin} and \citet{Carlotti}, respectively; values in parentheses obtained using the nonrigid inverter theory with the calculated TROVE frequencies; the $K=-3$ values refer to transitions between levels with spin statistical weight $=10$ ($A_{1}^\prime$, $A_{1}^{\prime\prime}$ species), the $K=3$ values refer to transitions between levels with spin statistical weight $=1$ ($A_{2}^\prime$, $A_{2}^{\prime\prime}$ species); $^a$\citet{Bester}.\\
\end{table*}
\begin{table*}
\vspace*{-0.0cm}
\caption{The rotation-inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$ND$_3$ and $^{15}$ND$_3$ in the ground vibrational state.}
\label{tab:roinv_ground_14nd3_15nd3}
\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.100in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.0100in}}
c @{\extracolsep{0.200in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/MHz}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}& \\[1mm]
\hline \\[-1mm]
& & & & & & & &$^{14}$ND$_3$ & & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 0 & $A_1^\prime$ & s & 0 & 0 & 0 & 309909$^a$ & 0.2530$\times 10^{-3}$ & -1.022 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 0 & $A_2^\prime$ & s & 1 & 0 & 0 & 618075$^a$ & 0.2409$\times 10^{-2}$ & -1.009 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 0 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 618124$^a$ & 0.1807$\times 10^{-2}$ & -1.009 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 0 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 306737$^a$ & 0.2450$\times 10^{-3}$ & -0.973 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 0 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 614933$^a$ & 0.2371$\times 10^{-2}$ & -0.985 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 0 & $E^\prime$ & a & 1 & 1 & 0 & 614968$^a$ & 0.1778$\times 10^{-2}$ & -0.985 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 3 & 0 & 0 & $A_1^\prime$ & s & 2 & 0 & 0 & 925947 & 0.8681$\times 10^{-2}$ & -1.005 \\[0.3mm]
$E^\prime$ & a & 3 & 1 & 0 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 926018 & 0.7717$\times 10^{-2}$ & -1.005 \\[0.3mm]
$E^{\prime\prime}$ & a & 3 & 2 & 0 & $E^\prime$ & s & 2 & 2 & 0 & 926228 & 0.4824$\times 10^{-2}$ & -1.005 \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 0 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 922857 & 0.8591$\times 10^{-2}$ & -0.989 \\[0.3mm]
$E^{\prime\prime}$ & s & 3 & 1 & 0 & $E^\prime$ & a & 2 & 1 & 0 & 922911 & 0.7637$\times 10^{-2}$ & -0.989 \\[0.3mm]
$E^\prime$ & s & 3 & 2 & 0 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 923076 & 0.4773$\times 10^{-2}$ & -0.999 \\[1mm]
& & & & & & & &$^{15}$ND$_3$ & & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 0 & $A_1^\prime$ & s & 0 & 0 & 0 & 308606$^a$ & 0.2499$\times 10^{-3}$ & -1.020 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 0 & $A_2^\prime$ & s & 1 & 0 & 0 & 615628$^a$ & 0.2381$\times 10^{-2}$ & -1.008 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 0 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 615677$^a$ & 0.1785$\times 10^{-2}$ & -1.009 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 0 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 305750$^a$ & 0.2427$\times 10^{-3}$ & -0.975 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 0 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 612801$^a$ & 0.2346$\times 10^{-2}$ & -0.987 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 0 & $E^\prime$ & a & 1 & 1 & 0 & 612836$^a$ & 0.1760$\times 10^{-2}$ & -0.987 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 3 & 0 & 0 & $A_1^\prime$ & s & 2 & 0 & 0 & 922356& 0.8582$\times 10^{-2}$& -1.004 \\[0.3mm]
$E^\prime$ & a & 3 & 1 & 0 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 922426& 0.7628$\times 10^{-2}$& -1.004 \\[0.3mm]
$E^{\prime\prime}$ & a & 3 & 2 & 0 & $E^\prime$ & s & 2 & 2 & 0 & 922636& 0.4768$\times 10^{-2}$& -1.004 \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 0 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 919577& 0.8501$\times 10^{-2}$& -0.990 \\[0.3mm]
$E^{\prime\prime}$ & s & 3 & 1 & 0 & $E^\prime$ & a & 2 & 1 & 0 & 919632& 0.7556$\times 10^{-2}$& -0.990 \\[0.3mm]
$E^\prime$ & s & 3 & 2 & 0 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 919800& 0.4723$\times 10^{-2}$& -0.990 \\[1mm]
\hline \\[-3mm]
\end{tabular}
\vspace*{1mm}
\\
\footnotesize Unless stated otherwise, frequencies from \cite{Bester}; $^a$\citet{Helminger1} and \citet{Helminger2}.\\
\end{table*}
\begin{table*}
\vspace*{-0.0cm}
\caption{The rotation-inversion frequencies ($\nu$), Einstein coefficients ($A$), and sensitivities ($T$) of $^{14}$ND$_3$ and $^{15}$ND$_3$ in the $\nu_2$ vibrational state.}
\label{tab:v2_14nd3_15nd3_1}
\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.1in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/MHz}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}& \\[1mm]
\hline \\[-1mm]
& & & & & & & &$^{14}$ND$_3$ & & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 0 & 0 & 1 & 412847 & 0.4983$\times 10^{-3}$ & -2.030 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 1 & 718585 & 0.3131$\times 10^{-2}$ & -1.585 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 1 & 719092 & 0.2352$\times 10^{-2}$ & -1.588 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 0 & 0 & 1 & 200763 & 0.5423$\times 10^{-4}$ & 1.119 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 1 & 508364 & 0.1082$\times 10^{-2}$ & -0.170 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 1 & 507940 & 0.8088$\times 10^{-3}$ & -0.166 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 3 & 0 & 1 & $A_1^\prime$ & s & 2 & 0 & 1 & 1023449 & 0.9673$\times 10^{-2}$ & -1.404 \\[0.3mm]
$E^\prime$ & a & 3 & 1 & 1 & $E^{\prime\prime}$ & s & 2 & 1 & 1 & 1023971 & 0.8608$\times 10^{-2}$ & -1.405 \\[0.3mm]
$E^{\prime\prime}$ & a & 3 & 2 & 1 & $E^\prime$ & s & 2 & 2 & 1 & 1025546 & 0.5399$\times 10^{-2}$ & -1.411 \\[0.3mm]
$A_2^\prime$ & s & 3 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 1 & 816294 & 0.4830$\times 10^{-2}$ & -0.491 \\[0.3mm]
$E^{\prime\prime}$ & s & 3 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 1 & 815898 & 0.4286$\times 10^{-2}$ & -0.488 \\[0.3mm]
$E^\prime$ & s & 3 & 2 & 1 & $E^{\prime\prime}$ & a & 2 & 2 & 1 & 814696 & 0.2663$\times 10^{-2}$ & -0.480 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 4 & 0 & 1 & $A_2^\prime$ & s & 3 & 0 & 1 & 1327334 & 0.2188$\times 10^{-1}$ & -1.304 \\[0.3mm]
$E^\prime$ & a & 4 & 1 & 1 & $E^{\prime\prime}$ & s & 3 & 1 & 1 & 1327865 & 0.2053$\times 10^{-1}$ & -1.305 \\[0.3mm]
$E^{\prime\prime}$ & a & 4 & 2 & 1 & $E^\prime$ & s & 3 & 2 & 1 & 1329473 & 0.1647$\times 10^{-1}$ & -1.309 \\[0.3mm]
$A_2^\prime$ & a & 4 & 3 & 1 & $A_2^{\prime\prime}$ & s & 3 & 3 & 1 & 1332194 & 0.9646$\times 10^{-2}$ & -1.317 \\[0.3mm]
$A_1^\prime$ & a & 4 &-3 & 1 & $A_1^{\prime\prime}$ & s & 3 &-3 & 1 & 1332194 & 0.9646$\times 10^{-2}$ & -1.317 \\[0.3mm]
$A_1^\prime$ & s & 4 & 0 & 1 & $A_1^{\prime\prime}$ & a & 3 & 0 & 1 & 1124392 & 0.1315$\times 10^{-1}$ & -0.637 \\[0.3mm]
$E^{\prime\prime}$ & s & 4 & 1 & 1 & $E^\prime$ & a & 3 & 1 & 1 & 1124025 & 0.1231$\times 10^{-1}$ & -0.636 \\[0.3mm]
$E^\prime$ & s & 4 & 2 & 1 & $E^{\prime\prime}$ & a & 3 & 2 & 1 & 1122914 & 0.9805$\times 10^{-2}$ & -0.630 \\[0.3mm]
$A_2^{\prime\prime}$ & s & 4 & 3 & 1 & $A_2^\prime$ & a & 3 & 3 & 1 & 1121023 & 0.5679$\times 10^{-2}$ & -0.621 \\[0.3mm]
$A_1^{\prime\prime}$ & s & 4 &-3 & 1 & $A_1^\prime$ & a & 3 &-3 & 1 & 1121023 & 0.5679$\times 10^{-2}$ & -0.621 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 5 & 0 & 1 & $A_1^\prime$ & s & 4 & 0 & 1 & 1630141 & 0.4149$\times 10^{-1}$ & -1.239 \\[0.3mm]
$E^\prime$ & a & 5 & 1 & 1 & $E^{\prime\prime}$ & s & 4 & 1 & 1 & 1630681 & 0.3986$\times 10^{-1}$ & -1.240 \\[0.3mm]
$E^{\prime\prime}$ & a & 5 & 2 & 1 & $E^\prime$ & s & 4 & 2 & 1 & 1632314 & 0.3494$\times 10^{-1}$ & -1.243 \\[0.3mm]
$A_2^\prime$ & a & 5 & 3 & 1 & $A_2^{\prime\prime}$ & s & 4 & 3 & 1 & 1635074 & 0.2671$\times 10^{-1}$ & -1.249 \\[0.3mm]
$A_1^\prime$ & a & 5 &-3 & 1 & $A_1^{\prime\prime}$ & s & 4 &-3 & 1 & 1635075 & 0.2671$\times 10^{-1}$ & -1.249 \\[0.3mm]
$E^{\prime\prime}$ & a & 5 & 4 & 1 & $E^\prime$ & s & 4 & 4 & 1 & 1639027 & 0.1509$\times 10^{-1}$ & -1.258 \\[0.3mm]
$A_2^\prime$ & s & 5 & 0 & 1 & $A_2^{\prime\prime}$ & a & 4 & 0 & 1 & 1432485 & 0.2790$\times 10^{-1}$ & -0.722 \\[0.3mm]
$E^{\prime\prime}$ & s & 5 & 1 & 1 & $E^\prime$ & a & 4 & 1 & 1 & 1432151 & 0.2676$\times 10^{-1}$ & -0.721 \\[0.3mm]
$E^\prime$ & s & 5 & 2 & 1 & $E^{\prime\prime}$ & a & 4 & 2 & 1 & 1431137 & 0.2333$\times 10^{-1}$ & -0.717 \\[0.3mm]
$A_2^{\prime\prime}$ & s & 5 & 3 & 1 & $A_2^\prime$ & a & 4 & 3 & 1 & 1429410 & 0.1768$\times 10^{-1}$ & -0.710 \\[0.3mm]
$A_1^{\prime\prime}$ & s & 5 &-3 & 1 & $A_1^\prime$ & a & 4 &-3 & 1 & 1429409 & 0.1768$\times 10^{-1}$ & -0.710 \\[0.3mm]
$E^\prime$ & s & 5 & 4 & 1 & $E^{\prime\prime}$ & a & 4 & 4 & 1 & 1426908 & 0.9864$\times 10^{-2}$ & -0.700 \\[1mm]
& & & & & & & &$^{15}$ND$_3$ & & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 0 & 0 & 1 & 402779 & 0.4636$\times 10^{-3}$ & -1.979 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 1 & 707552 & 0.2995$\times 10^{-2}$ & -1.551 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 1 & 708033 & 0.2250$\times 10^{-2}$ & -1.554 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 0 & 0 & 1 & 208813 & 0.6139$\times 10^{-4}$ & 0.891 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 1 & 515358 & 0.1131$\times 10^{-2}$ & -0.241 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 1 & 514961 & 0.8458$\times 10^{-3}$ & -0.237 \\[1mm]
\hline \\[-3mm]
\end{tabular}
\vspace*{1mm}
\\
\footnotesize Frequencies from \citet{Bester}. \\
\end{table*}
\begin{table*}
\vspace*{-0.0cm}
\caption{The wavenumbers ($\nu$), wavelengths ($\lambda$), Einstein coefficients ($A$), and sensitivities ($T$) for transitions between the ground and $\nu_2$ vibrational state of $^{14}$ND$_3$ and $^{15}$ND$_3$.}
\label{tab:v2_14nd3_15nd3_2}
\resizebox{\linewidth}{!}{\begin{tabular}{c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.1in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.01in}}
c @{\extracolsep{0.2000in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.1500in}}
c @{\extracolsep{0.0500in}}
c}
\hline \\[-2mm]
\multicolumn{1}{c}{$\Gamma^\prime$}&\multicolumn{1}{c}{$p^\prime$}&\multicolumn{1}{c}{$J^\prime$}&\multicolumn{1}{c}{$K^\prime$}&\multicolumn{1}{c}{$v_{2}^\prime$}&\multicolumn{1}{c}{$\Gamma^{\prime\prime}$}&
\multicolumn{1}{c}{$p^{\prime\prime}$}&\multicolumn{1}{c}{$J^{\prime\prime}$}&\multicolumn{1}{c}{$K^{\prime\prime}$}&\multicolumn{1}{c}{$v_{2}^{\prime\prime}$}&
\multicolumn{1}{c}{$\nu$/cm$^{-1}$}&\multicolumn{1}{c}{$\lambda$/$\mu$m}&\multicolumn{1}{c}{$A$/s$^{-1}$}&\multicolumn{1}{c}{$T$}\\[1mm]
\hline \\[-1mm]
& & & & & & & & & $^{14}$ND$_3$ & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 0 & 0 & 0 & 759.3704 & 13.1688 & 0.1955$\times 10^{+1}$ & -0.475 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 0 & 769.5283 & 12.9950 & 0.2444$\times 10^{+1}$ & -0.482 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 769.5306 & 12.9949 & 0.1834$\times 10^{+1}$ & -0.482 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 755.7906 & 13.2312 & 0.1948$\times 10^{+1}$ & -0.454 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 765.9901 & 13.0550 & 0.2434$\times 10^{+1}$ & -0.461 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 765.9767 & 13.0552 & 0.1827$\times 10^{+1}$ & -0.461 \\[0.3mm]
$E^\prime$ & a & 1 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 749.0866 & 13.3496 & 0.2810$\times 10^{+1}$ & -0.468 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 748.9645 & 13.3518 & 0.9344$\times 10^{0}$ & -0.468 \\[0.3mm]
$E^{\prime\prime}$ & a & 2 & 2 & 1 & $E^\prime$ & s & 2 & 2 & 0 & 748.9671 & 13.3517 & 0.3744$\times 10^{+1}$ & -0.468 \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 745.4912 & 13.4140 & 0.2798$\times 10^{+1}$ & -0.446 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 745.4112 & 13.4154 & 0.9305$\times 10^{0}$ & -0.446 \\[0.3mm]
$E^\prime$ & s & 2 & 2 & 1 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 745.3664 & 13.4162 & 0.3729$\times 10^{+1}$ & -0.446 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 0 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 0 & 738.8622 & 13.5343 & 0.5381$\times 10^{+1}$ & -0.461 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 2 & 0 & 0 & 728.5209 & 13.7264 & 0.3427$\times 10^{+1}$ & -0.453 \\[0.3mm]
$E^\prime$ & a & 1 & 1 & 1 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 728.5205 & 13.7264 & 0.2572$\times 10^{+1}$ & -0.453 \\[0.3mm]
$A_1^\prime$ & s & 0 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 735.2618 & 13.6006 & 0.5358$\times 10^{+1}$ & -0.439 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 724.9421 & 13.7942 & 0.3412$\times 10^{+1}$ & -0.431 \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 724.9258 & 13.7945 & 0.2560$\times 10^{+1}$ & -0.431 \\[1mm]
& & & & & & & & & $^{15}$ND$_3$ & & & \\[1mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 0 & 0 & 0 & 752.9702 & 13.2807 & 0.1888$\times 10^{+1}$ & -0.475 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 2 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 0 & 763.1000 & 13.1044 & 0.2359$\times 10^{+1}$ & -0.482 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 763.0998 & 13.1044 & 0.1770$\times 10^{+1}$ & -0.482 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 0 & 0 & 0 & 749.6973 & 13.3387 & 0.1881$\times 10^{+1}$ & -0.455 \\[0.3mm]
$A_1^\prime$ & s & 2 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 759.8667 & 13.1602 & 0.2351$\times 10^{+1}$ & -0.463 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 759.8517 & 13.1605 & 0.1764$\times 10^{+1}$ & -0.462 \\[0.3mm]
$E^\prime$ & a & 1 & 1 & 1 & $E^{\prime\prime}$ & s & 1 & 1 & 0 & 742.7222 & 13.4640 & 0.2713$\times 10^{+1}$ & -0.468 \\[0.3mm]
$E^\prime$ & a & 2 & 1 & 1 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 742.6101 & 13.4660 & 0.9023$\times 10^{0}$ & -0.468 \\[0.3mm]
$E^{\prime\prime}$ & a & 2 & 2 & 1 & $E^\prime$ & s & 2 & 2 & 0 & 742.6053 & 13.4661 & 0.3616$\times 10^{+1}$ & -0.468 \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 1 & 1 & 0 & 739.4346 & 13.5238 & 0.2702$\times 10^{+1}$ & -0.448 \\[0.3mm]
$E^{\prime\prime}$ & s & 2 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 739.3626 & 13.5252 & 0.8988$\times 10^{0}$ & -0.448 \\[0.3mm]
$E^\prime$ & s & 2 & 2 & 1 & $E^{\prime\prime}$ & a & 2 & 2 & 0 & 739.3131 & 13.5261 & 0.3602$\times 10^{+1}$ & -0.447 \\[0.3mm]
$A_2^{\prime\prime}$ & a & 0 & 0 & 1 & $A_2^\prime$ & s & 1 & 0 & 0 & 732.5333 & 13.6513 & 0.5197$\times 10^{+1}$ & -0.461 \\[0.3mm]
$A_1^{\prime\prime}$ & a & 1 & 0 & 1 & $A_1^\prime$ & s & 2 & 0 & 0 & 722.2354 & 13.8459 & 0.3311$\times 10^{+1}$ & -0.452 \\[0.3mm]
$E^\prime$ & a & 1 & 1 & 1 & $E^{\prime\prime}$ & s & 2 & 1 & 0 & 722.2324 & 13.8460 & 0.2484$\times 10^{+1}$ & -0.453 \\[0.3mm]
$A_1^\prime$ & s & 0 & 0 & 1 & $A_1^{\prime\prime}$ & a & 1 & 0 & 0 & 729.2409 & 13.7129 & 0.5176$\times 10^{+1}$ & -0.440 \\[0.3mm]
$A_2^\prime$ & s & 1 & 0 & 1 & $A_2^{\prime\prime}$ & a & 2 & 0 & 0 & 718.9634 & 13.9089 & 0.3296$\times 10^{+1}$ & -0.432 \\[0.3mm]
$E^{\prime\prime}$ & s & 1 & 1 & 1 & $E^\prime$ & a & 2 & 1 & 0 & 718.9456 & 13.9093 & 0.2474$\times 10^{+1}$ & -0.432 \\[1mm]
\hline \\[-3mm]
\end{tabular}}
\vspace*{1mm}
\\
\footnotesize Wavenumbers from \citet{Bester}. \\
\end{table*}
It is expected that any variation in the fundamental constants will be confirmed, or refuted, through a series of independent measurements on a variety of molecular absorbers. As a relevant astrophysical molecule, with certain inversion transitions already detected extraterrestrially~\citep{Mauersberger:1986,Johnston:1989,Schilke:1991}, $^{15}$NH$_3$ has the potential to aid this search alongside the already established probes of $^{14}$NH$_3$. For the deuterated species $^{14}$ND$_3$ and $^{15}$ND$_3$, it is perhaps more likely that their use will be restricted to precision measurements in the laboratory, despite their larger sensitivity coefficients for the pure inversion frequencies in the ground vibrational state.
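In an astrophysical measurement the comparison is usually phrased in terms of apparent redshifts; a commonly used form, sketched here for illustration rather than quoted from this work, is:

```latex
% If lines i and j of one absorber yield apparent redshifts z_i and z_j,
% a nonzero \Delta\mu/\mu manifests itself as
\[
  \frac{\Delta\mu}{\mu} \;=\; \frac{z_i - z_j}{(1+\bar z)\,\bigl(T_j - T_i\bigr)},
\]
% with \bar z the mean redshift of the absorber. Using two ammonia lines
% of differing sensitivity makes the comparison internal to one molecule,
% avoiding reference-species systematics.
```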
\section{Conclusion}
A comprehensive study of the vibration-rotation-inversion transitions of all stable, symmetric-top isotopomers of ammonia has been performed. The variational method offers a new and robust approach to computing sensitivity coefficients. The calculated mass sensitivities open up perspectives for the further development of the ammonia method, used to probe the cosmological variability of the proton-to-electron mass ratio. Most notably, the reliance on other reference molecular species, which is the main source of systematic error, can be avoided. Although ammonia is not a primordial molecule and cannot be studied at extreme redshifts, as can H$_2^+$, D$_2^+$ and He$_2^+$ for instance~\citep{Lucie:2014}, it can be detected in a wide variety of regions~\citep{Ho:1983}, and at redshifts which dramatically enhance spectral shifts (see \citet{Riechers:2013} and also Eq.~(1) of \citet{Spirko:2014}). The accuracy of the predicted sensitivities appears to fulfil the requirements for a reliable analysis of spectral data obtained at `rotational' resolution. To go beyond this limit, one should account for the hyperfine interactions; this requires a correct description of the `hyperfine' effects, which in turn should respect both the centrifugal distortion and the Coriolis interaction~\citep{Patrick:1990}. A study along these lines is in progress in our laboratory. We also note that the ammonia rovibrational dynamics show the same characteristics as those of other inverting molecules, notably the hydronium cation (see \citet{Kozlov:2011}), thus calling for a rigorous investigation of such systems.
\section*{Acknowledgements}
The work was a part of the research project RVO:61388963 (IOCB) and was supported by the Czech Science Foundation (grant P209/15-10267S). S.Y. thanks ERC Advanced Investigator Project 267219 and A.O. thanks the UCL Impact scheme. The authors are grateful to Luciano Fusina, Gianfranco Di Lonardo and Adriana Predoi-Cross for providing their $^{15}$NH$_3$ data prior to publishing.
\singlespacing
\bibliographystyle{apsrev}
It is by now clear that
$p$--branes are fated to play a central role in the duality relations
occurring in string theories and $M$--theory. Among these dualities an
interesting one is the heterotic string/heterotic five--brane
strong--weak--coupling duality \cite{DUAL}.
Unfortunately, no consistent five--brane theory, based on a classical
action that also governs its quantum dynamics, exists as yet.
The principal problems are the following.
\noindent
1) Whereas there exists a $\kappa$--invariant classical action for the
gravitational sector of the five--brane, for its heterotic sector
no $\kappa$--invariant classical action is known.
\noindent
2) The heterotic five--brane sigma model appears to be power--counting
non--renormalizable.
\noindent
3) Is there a ten--dimensional space--time interpretation for the
physical modes of the gravitational sector of the heterotic five--brane?
These physical modes are four fermionic plus four bosonic modes which
do not span a representation of $SO(8)$, the little ten--dimensional
Lorentz group.
\noindent
4) What is the quantum heterotic five--brane? A classification of the
six--dimensional topologies is not available and, moreover,
a term like $\int{\sqrt{g}}
R\varphi$, which in the case of the NSR--string furnishes
the quantum expansion parameter, seems not to be available in the case of
Green--Schwarz (GS)--extended objects.
\noindent
5) How many fermions are there in the heterotic sector?
\noindent
6) Do the anomalies in the heterotic five--brane cancel?
\noindent
7) Can the resulting heterotic five--brane be dual to the heterotic
string?
The problems 1) -- 4) will not be addressed in this talk.
As far as 1) is concerned, if one chooses
fermions as basic fields for the heterotic sector and constructs a simple--minded
action -- for example, introducing a minimal coupling with the external
gauge fields -- $\kappa$--invariance is destroyed.
As far as renormalizability, point 2), is concerned, from a dimensional point of
view the theory, living in six dimensions, does not seem renormalizable;
but if a $\kappa$--invariant formulation is eventually found,
it is possible that
$\kappa$--invariance prevents the appearance of non--renormalizable
divergences in the effective action.
A similar conjecture has, in fact, been made for the eleven--dimensional
membrane by Paccanoni {\it et al.} \cite{PPT}.
Actually, the analysis of that paper is not complete,
since an exhaustive classification of all the possible divergences
is very difficult to achieve and has not been made. On the other hand,
the conformal invariance of GS--strings, which is fundamental
for their quantum consistency, is entailed by $\kappa$--invariance,
and it may be that for five--branes, and other GS--extended objects,
$\kappa$--invariance is just as fundamental for their quantum consistency
as conformal invariance is for strings.
The points 5) -- 7) will be addressed in this talk and we concentrate on the
possible quantum consistency of the heterotic super--five--brane sigma model
embedded in an $N=1$, $D=10$ target superspace, i.e. on the derivation and
cancellation of its anomalies. This analysis will give us concrete information
on the field content of the heterotic sector and shed some new, unexpected
light on string/five--brane duality. The results presented in this talk
have been obtained in refs. \cite{LT1,LT2} to which we refer the reader
for the details of their derivation and for more detailed references.
Since $p$--brane sigma models are defined by GS--type actions,
like the GS--heterotic string, a natural attempt
in the five--brane anomaly analysis consists in trying to extend,
as much as possible,
the techniques we use in GS--string theory
to the five--brane sigma model. We will start with the world--volume
anomalies which are conformal anomalies for the string and $SO(1,5)$
local Lorentz anomalies for the five--brane. Actually, for the GS heterotic
string
the conformal anomaly is cohomologically tied to the $SO(1,1)$ local
Lorentz anomaly and the $\kappa$--anomaly, while for the five--brane the
$SO(1,5)$ anomaly is cohomologically tied to the $\kappa$--anomaly, via the
Wess--Zumino consistency conditions. This means that it is sufficient to
worry about $SO(1,1)$ ($SO(1,5)$) anomalies only: once they cancel
all other worldsheet (worldvolume) anomalies will cancel automatically.
So first of all we have to have a good understanding of the $SO(1,1)$
anomaly cancellation in the string.
This leads us to face the ``conformal anomaly puzzle'': a naive counting of
the chiral fermionic degrees of freedom in the GS heterotic string
leaves a non--vanishing anomaly there. The left--handed $\vartheta$--fields
are 16, which by $\kappa$--symmetry are reduced to 8, while the right--handed
heterotic fermions are 32, so the $\vartheta$'s fall short by a factor of
4 of cancelling the $SO(1,1)$ anomaly. As for the conformal
anomaly in the non--supersymmetric sector,
the $X^m$ plus $(b,c)$--fields count $10-26=-16$ while the
$\vartheta$'s count ${1\over 2}\cdot 8=4$, and again their contribution should
in some way be multiplied by 4 to lead to a cancellation.
The conformal anomaly cancellation mechanism for the GS--string
has been discovered in the flat case, in a $D=10$ Lorentz
covariant background gauge, by Wiegmann \cite{Wieg}. Our procedure for
the $SO(1,1)$ anomaly cancellation
mechanism in the sigma--model is based on this paper.
The cancellation of the target space anomalies (via the GS--mechanism)
is necessary for the quantum consistency of the string/five--brane
sigma--models since they are cohomologically tied to genuine sigma--model
worldvolume $\kappa$--anomalies \cite{LT1,LT2,CLT}
i.e. which vanish only in the flat limit.\footnote{Apart from these,
one expects additional genuine sigma--model $\kappa$--anomalies which are
$SO(1,9)$ and $SO(1,1)/SO(1,5)$ invariant and can be cancelled by modifying
the target superspace constraints \cite{CLT,T}, which we do not consider
here.}
In section 2 we will show that the above--mentioned quadruplication is
intimately related
to the target space $SO(1,9)$ local Lorentz anomaly in the GS--string,
and obtain the expected complete
four--form anomaly polynomial for the heterotic string. Encouraged by this
result we will in section 3 extend this
method to the five--brane sigma model and compute its complete
eight--form anomaly polynomial, under the assumption that the heterotic
sector is made out of a certain number of fermions. Our principal results
are the following. The number of fermions needed to cancel the worldvolume
anomaly is sixteen rather than the expected thirty--two.
On the other hand
the coefficient of the $D=10$ target space Lorentz anomaly carries a
factor of $1/2$ with respect to what is expected on the basis of duality.
Section 4 is devoted to a brief discussion of our results.
\section{Heterotic string anomalies}
The sigma--model action for the heterotic Green--Schwarz string
with gauge group $SO(32)$ in ten target space--time dimensions
is given by
\begin{equation}
S_2 = - {1 \over 2\pi \alpha'} \int d^2 \sigma \left({1\over2}
\sqrt{g}\ g^{i j} V_i^a V_{j a}
+\widetilde{B_2}
- {1\over2}\sqrt{g}\ e_-^j \psi
(\partial_j - A_j) \psi \right).
\label{az2}
\end{equation}
Here the string fields are the supercoordinates $Z^M = (X^m,
\vartheta^\mu )$, the 32 heterotic fermions $\psi $ and the
worldsheet zweibeins $e_{\pm}^i, g^{ij} = e_+^{(i} e_-^{j)}$.
The induced zehnbeins are
given by $V_i^A = \partial_i Z^M E_M{}^A (Z)$ and $\widetilde{B_2}$ is
the pullback on the string worldsheet of the supergravity two--superform
$B_2$.
This action is invariant under $d=2$ diffeomorphisms, local
$SO(1,1)$ Lorentz transformations, conformal and $\kappa$--transformations.
Diffeomorphism anomalies can always be eliminated at the expense of
conformal/$SO(1,1)$ anomalies, so we will not dwell upon them. Since the
coefficients of the conformal and $\kappa$--anomalies are tied for
cohomological reasons to the $SO(1,1)$ anomaly, we will now concentrate
on the latter. Since this is an ABBJ--anomaly, only fermions will
contribute, in our case the $\vartheta$'s and the $\psi$. The contribution
of the latter is standard, so we will now consider the former
in detail. It is most convenient to use the background field method
together with a normal coordinate expansion; calling the quantum
$\vartheta$'s $y^{\alpha}$ where $\alpha= 1,\cdots,16$ the relevant part
of the expanded action becomes
\begin{equation}
I (V, \Omega, y) = {1 \over 2} \int d^2 \sigma \sqrt{g}\ g^{ij} V^a_i \
y\ \Gamma_a
{1 - \Gamma \over 2} D_j {1 - \Gamma \over 2} \ y
\label{ae}
\end{equation}
where $D_j \equiv \partial_j - {1 \over 4} \Gamma{}_{cd}
\Omega_j{}^{cd}$, $\Omega_j{}^{cd}$ is the $SO(1,9)$ target space
Lorentz connection, the $\Gamma^a$ are ten dimensional Dirac matrices
and we defined the matrix
$
\Gamma^\alpha{}_\beta = {1\over V_+^a V_{a-}} \cdot{\varepsilon^{i j} \over
\sqrt{g}} V^a_i V^b_j (\Gamma_{ab})^\alpha{}_\beta.
$
An $SO(1,9)$--covariant background gauge fixing can now be achieved
by imposing ${1 + \Gamma \over 2} \ y= 0$,
which reduces the physical $y$'s from 16 to 8,
but the problematic feature of (\ref{ae}) is that the kinetic term for the
$y$'s is not canonical in that it is multiplied by the external (classical)
fields $V_i^a$ and one cannot define a propagator. Eq. (\ref{ae}) can be
transformed to an action with a canonical kinetic term, taking advantage
of its manifest classical $SO(1,9)$ invariance, by applying
a convenient $SO(1,9)$ Lorentz rotation with group element
$\Lambda_{a}{}^b$. But, since
the integration measure $\int \{{\cal D} y\}$ is not invariant under local
$SO(1,9)$ transformations \cite{CLT}, this rotation
gives rise, in general, to a Wess--Zumino term. The $SO(1,9)$ Lorentz
anomaly, contrary to the $SO(1,1)$ anomaly, can be computed with standard
techniques and the corresponding polynomial turns out to be \cite{CLT}
\begin{equation}
X_L^{(2)}={1\over 8\pi}tr{\rm R}^2\equiv {1\over 8\pi} \ d\omega_3(\Omega),
\label{L2}
\end{equation}
where $R_a{}^b$ is the $D=10$ Lorentz curvature two--form and
$\omega_3(\cdot)$ is the standard Chern-Simons three--form. Therefore,
for a generic rotation, $\Lambda$, the measure $\int \{{\cal D} y\}$, and
hence the effective action, change by a Wess--Zumino term given by
\begin{equation}
\Gamma_{WZ}={1\over 8\pi}\int_{D_3}\left(\omega_3(\Omega)-
\omega_3(\Omega^\Lambda)\right),
\label{WZ}
\end{equation}
where the boundary of $D_3$ is the worldsheet.
The crucial point is that for
the particular $\Lambda_a{}^b$ which renders the kinetic term of
the $y$'s canonical \cite{LT1} one has
$\omega_3(\Omega^\Lambda)=\omega_3(\omega^{(2)}) + Y_3+dY_2,$
where $\omega^{(2)}$ is the {\it two}--dimensional Lorentz connection,
$Y_2$ is a local form and can therefore be disregarded and $Y_3$ is
an $SO(1,1)$ {\it and} $SO(1,9)$--invariant form. The Wess--Zumino term
(\ref{WZ}) contributes therefore to the $SO(1,1)$ anomaly with a
polynomial which is given by
\begin{equation}
X_{WZ}^{(2)}=-{1\over 8\pi}tr{\cal R}^2=-{1\over 192\pi}\cdot
24 \ tr{\cal R}^2,
\label{WZA2}
\end{equation}
where ${\cal R}$ is the two--dimensional Lorentz curvature
two--form (all traces are in the fundamental representations of the
orthogonal groups). The
functional integral over the (transformed) $y$'s is now canonical and
corresponds to eight Weyl--Majorana fermions with
effective action given by $8\, \ln {\det}^{1/2}(\sqrt{g}\ \partial_+)$;
this entails a contribution to the anomaly given by \cite{AGG}
\begin{equation}
X^{(2)}_{naif}=-{1\over 192\pi}\cdot 8 \ tr{\cal R}^2.
\label{A02}
\end{equation}
The total contribution of the quantum $\vartheta$'s to $SO(1,1)$ and
$SO(1,9)$ anomalies is thus obtained by
summing up (\ref{L2}),(\ref{WZA2}) and (\ref{A02}):
\begin{equation}
X^{(2)}_\vartheta={1\over 2\pi}\left(-{8+24\over 96}tr{\cal R}^2
+{1\over 4}tr{\rm R}^2\right).
\label{vartheta2}
\end{equation}
We see that the Wess--Zumino term leads to a quadruplication of the
``naif'' $SO(1,1)$ anomaly.
The contribution of $N_\psi$ right--handed heterotic Majorana--Weyl
fermions, which contribute only to $SO(1,1)$ and Yang--Mills anomalies,
can be read directly from the index theorem \cite{AGG},
$
X^{(2)}_\psi={1\over 2\pi}\left({N_\psi\over 96}tr {\cal R}^2
-{1\over 4}tr {\rm F}^2\right).
$
Summing up this and (\ref{vartheta2}) we obtain the total worldsheet and
target space anomaly polynomial for the heterotic string as
\begin{equation}
X^{(2)}={1\over 2\pi}\left({N_\psi-(8+24)\over 96} \ tr{\cal R}^2
+{1\over 4}\left(tr{\rm R}^2-tr{\rm F}^2\right)\right).
\label{A2}
\end{equation}
The worldsheet anomaly cancels for 32 heterotic fermions; the gauge group
can therefore be taken to be $SO(32)$, and the remaining target space anomaly
can be cancelled by modifying the $B_2$ Bianchi identity to
\begin{equation}
dH_3=-2\pi\alpha'\cdot{1\over 8 \pi}\left(tr{\rm R}^2-tr{\rm F}^2\right)
\equiv -2\pi\alpha'\cdot I_4,
\label{I4}
\end{equation}
in agreement with the GS mechanism.
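As a quick check on the coefficient bookkeeping leading to Eq.~(\ref{A2}), the tally can be reproduced in a few lines of Python (a schematic illustration added here, not part of the original derivation; all contributions are quoted in units of $tr{\cal R}^2/(192\pi)$):

```python
from fractions import Fraction

# SO(1,1) anomaly coefficients in units of tr(calR^2)/(192*pi):
X_naif = Fraction(-8)          # eight physical Majorana-Weyl theta's
X_WZ = Fraction(-24)           # induced Wess-Zumino term
X_theta = X_naif + X_WZ        # total theta contribution

# The Wess-Zumino term quadruples the "naif" theta anomaly:
assert X_theta == 4 * X_naif

# N_psi right-handed heterotic fermions contribute +N_psi in the same
# units, so the worldsheet anomaly cancels for:
N_psi = -X_theta
assert N_psi == 32             # consistent with the gauge group SO(32)
```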
\section{Heterotic five--brane anomalies}
The action for the
super--fivebrane sigma--model \cite{BTS} embedded in an $N=1$, $D=10$
target space supergravity background is given by
\begin{equation}
S_6=-{1\over(2\pi)^3\beta^\prime} \int d^6\sigma
\left( {1\over 2} e^{-{2\over 3}\varphi}\sqrt{g} g^{ij} V_i^a V_{ja}
-\widetilde {B_6} -2 \sqrt{g}\right),
\label{az6}
\end{equation}
where $\widetilde{B_6}$ is the pullback on the six--dimensional
worldvolume of the dual supergravity six--superform $B_6$. $S_6$
is invariant under $\kappa$--transformations, $d=6$ diffeomorphisms
and $SO(1,5)$ local Lorentz transformations if one replaces the
metric $g_{ij}$ with sechsbeins. As in the case of the string it is
sufficient to worry about $SO(1,5)$ and $SO(1,9)$ anomalies only.
As we will see, the action in Eq.~(\ref{az6}) gives rise to a
non--vanishing $SO(1,5)$ anomaly; therefore one {\it must} add a
heterotic sector to cancel this anomaly.
Despite the difficulties mentioned in the introduction
we will assume that this sector is made out of a certain number $N_\psi$
of $d=6$ complex Weyl fermions, minimally coupled to Yang--Mills fields
of a gauge group $G$. Apart from this, the derivation of the anomalies
follows mainly the strategy we adopted in section 2 for the string,
so we will only report the results, referring to \cite{LT2} for the
details of their derivation.
The total $SO(1,5)$ and $SO(1,9)$ anomaly
due to the $\vartheta$'s is
again a sum of three terms, like (\ref{L2}),(\ref{WZA2}) and (\ref{A02}),
$X^{(6)}_\vartheta= X_L^{(6)}+X_{WZ}^{(6)}+X_{naif}^{(6)}$,
and the formula analogous to (\ref{vartheta2}) is
\begin{eqnarray}
X^{(6)}_\vartheta &=&
{1\over 192(2\pi)^3}
\left( ( -1-15)\left({1\over 30}tr {\cal R}^4 + {1\over 24}
\left(tr {\cal R}^2\right)^2\right)\right.\nonumber\\
& &\left.+ \ tr {\cal R}^2 tr {\rm R}^2 - {3\over 8}
\left(tr {\rm R}^2\right)^2 +
{1\over 2}tr {\rm R}^4\right).
\label{vartheta6}
\end{eqnarray}
In this case the Wess--Zumino term (counting for 15 complex Weyl fermions)
amounts to multiplying the ``naif'' $SO(1,5)$
anomaly (corresponding to 1 fermion, i.e. the 8 physical real $\vartheta$'s)
by a factor of 16. The index theorem gives for the heterotic fermions,
with chirality opposite to that of the $\vartheta$'s,
\begin{equation}
X_\psi^{(6)}=
{1\over 192(2\pi)^3}
\left( N_\psi\left({1\over 30}tr {\cal R}^4 + {1\over 24}
\left(tr {\cal R}^2\right)^2\right)
-2\ tr {\rm F}^2 tr{\cal R}^2 +8\ tr {\rm F}^4
\right).
\label{Het6}
\end{equation}
The total heterotic five--brane anomaly, which is obtained by summing up
(\ref{vartheta6}) and (\ref{Het6}), becomes:
\begin{eqnarray}
X^{(6)}&=&
{1\over 192 (2\pi)^3} \left((N_\psi-1-15) \left( {1\over 30}tr
{\cal R}^4 +{1\over 24} \left(tr {\cal R}^2\right)^2\right)\right.
\nonumber\\
& &+ \left(2 tr {\cal R}^2 -
tr {\rm R}^2\right) \left({1\over 2}tr {\rm R}^2 - tr{\rm F}^2\right)
\nonumber\\
& & \left. + {1\over 2}\left(tr {\rm R}^4 + {1\over 4}
\left(tr {\rm R}^2\right)^2\right) - tr {\rm F}^2 tr {\rm R}^2 + 8 tr
{\rm F}^4\right).
\label{A6}
\end{eqnarray}
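The algebra behind Eq.~(\ref{A6}) can be verified symbolically. The sympy snippet below (an illustrative check, not part of the original derivation) strips the common prefactor $1/\left(192(2\pi)^3\right)$ and treats the independent invariants as commuting symbols:

```python
import sympy as sp

# Worldvolume invariants: r2 = tr(calR^2), r4 = tr(calR^4);
# target space: R2 = tr(R^2), R4 = tr(R^4); gauge: F2 = tr(F^2), F4 = tr(F^4).
r2, r4, R2, R4, F2, F4, N = sp.symbols('r2 r4 R2 R4 F2 F4 N')

inv = r4/30 + r2**2/24                    # pure worldvolume combination

# Brackets of the vartheta and heterotic-fermion contributions:
X_theta = (-1 - 15)*inv + r2*R2 - sp.Rational(3, 8)*R2**2 + R4/2
X_psi = N*inv - 2*F2*r2 + 8*F4

# Bracket of the quoted total anomaly polynomial:
X_total = (N - 1 - 15)*inv \
          + (2*r2 - R2)*(R2/2 - F2) \
          + (R4 + R2**2/4)/2 - F2*R2 + 8*F4

# The sum of the two contributions must reproduce the total:
assert sp.expand(X_theta + X_psi - X_total) == 0
```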
\section{Discussion}
One aspect of the string/five--brane duality conjecture emerges from the
factorization of the $N=1$, $D=10$ supergravity anomaly polynomial,
$I_{12}={1\over 2\pi}I_4\cdot I_8$, where for the gauge group $SO(32)$
$I_4$ is given in eq. (\ref{I4}) and
\begin{equation}
I_8
= {1\over 192 (2\pi)^3} \left(tr{\rm R}^4 + {1\over 4}
(tr {\rm R}^2)^2 - tr {\rm R}^2 tr{\cal F}^2 + 8\ tr {\cal F}^4\right),
\label{I8}
\end{equation}
where the Yang--Mills curvature ${\cal F}$ belongs to the fundamental
representation of $SO(32)$. According to the conjecture, once the worldvolume
anomaly in (\ref{A6}) cancels, the remaining
target space anomaly polynomial should coincide with (\ref{I8}). To
cancel the worldvolume anomaly
one needs $N_\psi=16$, i.e. sixteen heterotic fermions, and therefore the
gauge group cannot be $SO(32)$ (nor even $E_8\otimes E_8$) and one
cannot identify ${\rm F}$ with ${\cal F}$. Moreover, there are mixed terms in
(\ref{A6}), $2 tr {\cal R}^2 \cdot\left({1\over 2}tr {\rm R}^2 - tr{\rm F}^2
\right)$, which cannot be cancelled in any way, and the weights of the
leading target space Lorentz anomaly, $tr{\rm R}^4$, in $X^{(6)}$ and
$I_8$ differ by a factor of $1/2$.
To quantify these discrepancies let us assume that the $\vartheta$'s count
for {\it two}, instead of one, complex Weyl fermions. In this case the
total anomaly polynomial would be given by
$\widetilde{X^{(6)}}=2\cdot X^{(6)}_\vartheta+X^{(6)}_\psi$ which can be written
as
\begin{equation}
\widetilde{X^{(6)}}=I_8+{1\over 48(2\pi)^2}\left(2 tr {\cal R}^2 -
tr {\rm R}^2\right)\cdot I_4
+{N_\psi-32\over 192(2\pi)^3}\cdot
\left({1\over 30}tr {\cal R}^4 + {1\over 24}
\left(tr {\cal R}^2\right)^2\right).
\end{equation}
In this case one would need 32 heterotic fermions in the fundamental
representation of $SO(32)$, the term proportional to $I_4$ would correspond
to a trivial anomaly thanks to (\ref{I4}), and $\widetilde{X^{(6)}}$ would
reduce to $I_8$ -- in complete agreement with duality -- which could be
eliminated by modifying the Bianchi identity of $B_6$ to
$dH_7=(2\pi)^3\beta^\prime I_8$. Since, according to duality, $H_7$ has
to be the Hodge--dual of $H_3$ this Bianchi identity, together with
(\ref{I4}), would imply a relation between the charges of strings
and five--branes involving the ten--dimensional Newton's constant
$\kappa$, i.e. $2\kappa^2=(2\pi)^5\alpha^\prime\beta^\prime$, which
corresponds to a Dirac--like quantization condition \cite{NT} with $n=1$.
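The decomposition of $\widetilde{X^{(6)}}$ can also be verified symbolically (an illustrative sympy check in the same stripped units; $I_4$ is written without its $1/(8\pi)$ normalization, which is consistent because $1/\left(48(2\pi)^2\right)\cdot 1/(8\pi)=1/\left(192(2\pi)^3\right)$, the common prefactor):

```python
import sympy as sp

r2, r4, R2, R4, F2, F4, N = sp.symbols('r2 r4 R2 R4 F2 F4 N')
inv = r4/30 + r2**2/24

X_theta = (-1 - 15)*inv + r2*R2 - sp.Rational(3, 8)*R2**2 + R4/2
X_psi = N*inv - 2*F2*r2 + 8*F4

# Doubled vartheta contribution: tilde X = 2 X_theta + X_psi
X_tilde = 2*X_theta + X_psi

I8 = R4 + R2**2/4 - R2*F2 + 8*F4    # identifying F with calF
I4 = R2 - F2                        # stripped of its 1/(8 pi) factor
decomposition = I8 + (2*r2 - R2)*I4 + (N - 32)*inv

assert sp.expand(X_tilde - decomposition) == 0
```

Setting $N=32$ then removes the last term, and the middle term is proportional to $I_4$, i.e. trivial once the Bianchi identity (\ref{I4}) is used.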
So our principal conclusion is that the five--brane $\vartheta$--anomaly
is only half
of what is expected on the basis of string/five--brane duality, adding a new
problem to the ones already mentioned in the introduction. We can
nevertheless mention that if we set the gauge fields to zero in $X^{(6)}$
and $I_8$, ${\cal F}={\rm F}=0$, then the worldvolume anomaly cancels for
sixteen heterotic fermions and, by subtracting a suitable trivial anomaly
as above, $X^{(6)}$ reduces to ${1\over 2}\cdot I_8$. This would imply
the quantization condition $2\kappa^2={1\over 2}\cdot
(2\pi)^5\alpha^\prime\beta^\prime$ i.e. $n={1\over 2}$ which signals
the presence of half--charged five--branes.
Half--charged five--branes actually arose in Ref.~\cite{Pol},
where, however, they always appear in pairs, such that their total
charge is always an integer. Half--integral magnetic charges have also arisen
on fixed points of $Z_2$--orbifold compactifications of $N=1$, $D=11$
Supergravity in ref. \cite{HW}.
\section{Introduction}
Experiments using radioactive beams have driven the development of the physics of unstable nuclei.
The neutron halo is one of the new structural phenomena appearing in drip-line nuclei, such as $^6$He, $^{11}$Li, and $^{11}$Be \cite{tanihata85,tanihata13}.
One of the characteristic features of unstable nuclei is the weak binding of the last few nucleons, and this property causes many states to be observed above the particle thresholds.
This means that spectroscopy of resonances of unstable nuclei provides important information for understanding the nuclear structure.
Unstable nuclei with a large isospin come in neutron-rich and proton-rich varieties, and the comparison of the structures of these mirror systems is also interesting
for understanding the isospin symmetry in systems with a large isospin.
So far, many experiments have been performed for neutron-rich $^8$He \cite{korsheninnikov93,iwata00,meister02,chulkov05,skaza07,mueller07,golovkov09} and proton-rich $^8$C \cite{charity10,charity11},
which are in the mirror relation with the isospin $T=2$ system.
The ground state of $^8$He has a neutron skin structure of four neutrons around $^4$He with a small separation energy of about 3 MeV.
The excited states of $^8$He are not yet settled and are considered to exist above the $^4$He+$4n$ threshold energy.
This means that the observed resonances of $^8$He can decay into the various channels of $^7$He+$n$, $^6$He+2$n$, $^5$He+3$n$, and $^4$He+4$n$.
This property of multiparticle decays makes it difficult to determine the energy positions of the $^8$He resonances experimentally.
Theoretically, a di-neutron cluster correlation has been suggested in the excited state of $^8$He \cite{enyo07}.
The ground state of the unbound nucleus $^8$C is experimentally located at 3.4 MeV above the $^4$He+$4p$ threshold energy \cite{charity11}, and the excited states of $^8$C have not yet been confirmed.
Similar to $^8$He, the $^8$C states can decay into the channels of $^7$B+$p$, $^6$Be+2$p$, $^5$Li+3$p$, and $^4$He+4$p$.
The comparison of $^8$He and $^8$C is interesting for understanding the effects of the Coulomb interaction in proton-rich nuclei and the nuclear isospin symmetry.
In the picture consisting of $^4$He and four valence nucleons, we analyze the He isotopes and their mirror nuclei with the $^4$He+$N+N+N+N$ five-body model \cite{myo10,myo12,myo14b}.
We solve for the motion of the multiple valence nucleons around $^4$He in the cluster orbital shell model (COSM) \cite{suzuki88,masui06,myo07,myo11}.
The advantage of the COSM is that we can reproduce the threshold energies of the subsystems in the $A$=$8$ systems.
This aspect is important for describing the open channels for nucleon emission, and thus we can treat the many-body decay phenomena.
We describe many-body resonances by applying the complex scaling method (CSM) \cite{ho83,moiseyev98,aoyama06,myo14a,myo20},
which imposes the correct boundary conditions on the decay channels.
In the CSM, the wave function of a resonance is obtained by solving the eigenvalue problem of the complex-scaled Hamiltonian using the $L^2$ basis functions.
The CSM has been successfully applied to nuclear resonances, not only for energies and decay widths, but also for spectroscopic factors and transition strength functions, by using the Green's function \cite{myo14a,myo20,myo98,myo01,suzuki05}.
In our previous works on the neutron-rich He isotopes and their mirror proton-rich nuclei \cite{myo10,myo12,myo14b,myo11},
we discussed the isospin symmetry in $^7$He and $^7$B with the $^4$He+$N+N+N$ model \cite{myo11}.
Isospin-symmetry breaking occurs in their ground states through the mixing of the $2^+$ states of the $A=6$ subsystems.
This is because the relative energy distances between the $A$=$7$ states and the ``$A$=$6$''+$N$ thresholds can differ
between $^7$He and $^7$B due to the Coulomb interaction in $^7$B.
For the $A$=$8$ systems $^8$He and $^8$C, we previously calculated only the $0^+$ states, owing to the limited numerical resources for treating the large Hamiltonian matrix \cite{myo10,myo12,myo14b}.
We compared the spatial structures of the two nuclei in terms of their radii and found that the Coulomb barrier prevents the valence nucleons from extending spatially, which results in a smaller radius for $^8$C than for $^8$He in their corresponding excited $0^+_2$ resonances.
The same relation can also be seen between $^6$He and $^6$Be \cite{myo14b}.
In this paper, we extend our study of many-body resonances of $^8$He and $^8$C with the $^4$He+$N+N+N+N$ five-body model.
This paper is an extension of the previous ones, in which only the $0^+$ states were investigated \cite{myo10,myo12}.
We fully calculate the other possible spin states in addition to the $0^+$ for a complete understanding of the resonance spectroscopy of the two nuclei.
We predict resonances of the two nuclei and examine the dominant configurations of the four nucleons in each state.
This information is useful for future experiments on the two nuclei.
We also compare the configuration structures of $^8$He and $^8$C from the viewpoint of isospin symmetry.
In Sec.~\ref{sec:method}, we explain the COSM and the CSM.
In Sec.~\ref{sec:result}, we show the results of five-body bound and resonant states obtained in $^8$He and $^8$C.
A summary is given in Sec.~\ref{sec:summary}.
\section{Method}\label{sec:method}
\subsection{Cluster orbital shell model}
We explain the COSM for the $^4$He+$N$+$N$+$N$+$N$ five-body systems $^8$He and $^8$C.
The motion of the four nucleons around $^4$He is solved in the COSM.
The relative coordinates of the four nucleons are $\{\vc{r}_i\}$ with $i=1,\ldots,4$, as shown in Fig.~\ref{fig:COSM}.
We employ the common Hamiltonian used in the previous studies~\cite{myo10,myo12,myo14b};
\begin{eqnarray}
H
&=& t_0+ \sum_{i=1}^4 t_i - T_G + \sum_{i=1}^4 v^{\alpha N}_i + \sum_{i<j}^4 v^{NN}_{ij}
\\
&=& \sum_{i=1}^4 \left( \frac{\vc{p}^2_i}{2\mu} + v^{\alpha N}_i \right) + \sum_{i<j}^4 \left( \frac{\vc{p}_i\cdot \vc{p}_j}{4m} + v^{NN}_{ij} \right) .
\label{eq:Ham}
\end{eqnarray}
The kinetic energy operators $t_0$, $t_i$, and $T_G$ are those of $^4$He, a valence nucleon, and the center of mass, respectively.
The operator $\vc{p}_i$ is the relative momentum between $^4$He and a valence nucleon,
and $\mu$=$4m/5$ is the corresponding reduced mass with $m$ the nucleon mass.
The $^4$He--nucleon interaction $v^{\alpha N}$ has the nuclear and Coulomb parts.
The nuclear part is the microscopic Kanada--Kaneko--Nagata--Nomoto potential \cite{aoyama06,kanada79},
which reproduces the $^4$He--nucleon scattering data.
The Coulomb part is obtained by folding the density of $^4$He with the $(0s)^4$ configuration.
For the nucleon--nucleon interaction $v^{NN}$, we use the Minnesota nuclear potential \cite{tang78}
and the point Coulomb interaction between protons.
\begin{figure}[t]
\centering
\includegraphics[width=4.5cm,clip]{COSM5.eps}
\caption{Spatial coordinates of the $^4$He+$N$+$N$+$N$+$N$ system in the COSM.}
\label{fig:COSM}
\end{figure}
We explain the COSM wave function.
We assume the $^4$He wave function $\Phi(^4{\rm He})$ to have the $(0s)^4$ configuration of harmonic oscillator basis states.
The range parameter of the $0s$ state is 1.4 fm, which reproduces the charge radius of $^4$He.
We expand the wave functions of the $^4$He+$N$+$N$+$N$+$N$ system using the COSM configurations \cite{masui06,myo10,myo12}.
The total wave function $\Psi^J$ of a nucleus with total spin $J$ is given as a linear combination of the COSM configurations $\Psi^J_c$ as
\begin{eqnarray}
\Psi^J
&=& \sum_c C^J_c \Psi^J_c,
\label{WF0}
\\
\Psi^J_c
&=& {\cal A}' \left\{ \Phi(^4{\rm He}), \Phi^J_c \right\},
\\
\Phi^J_c
&=& {\cal A} \left[ \left[ \phi_{\alpha_1}, \phi_{\alpha_2} \right]_{j_{12}},\left[ \phi_{\alpha_3},\phi_{\alpha_4} \right]_{j_{34}} \right]_J .
\label{WF1}
\end{eqnarray}
The single-particle wave function is $\phi_{\alpha}(\vc{r})$ with the quantum number $\alpha$ denoting the set $\{n,\ell,j\}$ in the $jj$ coupling scheme.
The index $n$ distinguishes the different radial components, and $\ell$ is the orbital angular momentum with $j=[\ell,1/2]$.
The spins $j_{12}$ and $j_{34}$ are those of the coupled nucleon pairs.
The operators ${\cal A'}$ and ${\cal A}$ are for the antisymmetrization between $^4$He and a valence nucleon
and between valence nucleons, respectively.
The former effect is considered by using the orthogonality condition model \cite{aoyama06},
in which the relative $0s$ component is removed from the $\phi_{\alpha}$.
The index $c$ in Eq.~(\ref{WF0}) indicates the set of $\alpha_i$, $j_{12}$, and $j_{34}$ as $c=\{\alpha_1,\ldots,\alpha_4,j_{12},j_{34}\}$.
We take a summation over the available COSM configurations for a total spin $J$ and superpose them with the amplitudes $C^J_c$ in Eq.~(\ref{WF0}).
We calculate the Hamiltonian matrix in the COSM and solve the following eigenvalue problem
\begin{eqnarray}
\sum_{c'}\langle \Psi^J_c |H| \Psi^J_{c'} \rangle\, C^J_{c'} &=& E^J C_{c}^J .
\end{eqnarray}
We obtain all the amplitudes $\{C_c^J\}$ in Eq.~(\ref{WF0}), which determine the total wave function, with the energy eigenvalue $E^J$
measured from the threshold energy of $^4$He+$N$+$N$+$N$+$N$.
The single-particle wave function $\phi_\alpha(\vc{r})$ is a function of the relative coordinate $\vc{r}$
from the center of mass of $^4$He to a valence nucleon, as shown in Fig.~\ref{fig:COSM}.
We prepare a sufficient number of single-particle basis functions with various spatial distributions.
We expand $\phi_\alpha(\vc{r})$ using Gaussian functions for each single-particle orbit:
\begin{eqnarray}
\phi_\alpha(\vc{r})
&=& \sum_{k=1}^{N_{\ell j}} d^k_{\alpha}\ u_{\ell j}(\vc{r},b_{\ell j}^k)\, ,
\label{spo}
\\
u_{\ell j}(\vc{r},b_{\ell j}^k)
&=& N_k \, r^{\ell} e^{-(r/b_{\ell j}^k)^2/2}\, [Y_{\ell}(\hat{\vc{r}}),\chi^\sigma_{1/2}]_{j}\, ,
\label{Gauss}
\\
\langle \phi_\alpha | \phi_{\alpha'} \rangle
&=& \delta_{\alpha,\alpha'}
~=~ \delta_{n,n'}\, \delta_{\ell,\ell'}\, \delta_{j,j'}.
\label{Gauss2}
\end{eqnarray}
The index $k$ specifies the Gaussian functions with range parameters $b_{\ell j}^k$,
with $k=1,\ldots, N_{\ell j}$, to describe the radial correlation. The normalization factor is $N_k$.
The coefficients $\{d^k_{\alpha}\}$ in Eq.~(\ref{spo}) are determined from the orthonormality of the basis states $\phi_\alpha$ in Eq.~(\ref{Gauss2}).
The length parameters $b_{\ell j}^k$ are chosen in geometric progression.
The number of basis functions $N_{\ell j}$ for $\phi_\alpha$ is determined so that the numerical results converge, and we use at most 14 Gaussian functions with $b_{\ell j}^k$ ranging from 0.2 fm to around 50 fm.
We expand each of the COSM configurations $\Phi^J_c$ in Eq.~(\ref{WF1}) using a finite number of single-particle basis states $\phi_\alpha$ for each nucleon.
After solving the eigenvalue problem of the Hamiltonian, we obtain the energy eigenvalues, which are discretized for bound, resonant, and continuum states.
For the single-particle orbits $\phi_\alpha$, we consider the basis states with orbital angular momenta $\ell\le 2$; this condition gives the two-neutron energy of $^6$He($0^+$) within 0.3 MeV of the converged calculation with large $\ell$.
In this paper, we adopt a repulsive strength of 173.7 MeV for the Minnesota potential, instead of the original 200 MeV, to fit the two-neutron separation energy of $^6$He to the experimental value of 0.975 MeV.
This treatment works nicely to reproduce the energy levels of the He isotopes and their mirror nuclei systematically \cite{myo14a}.
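The geometric set of Gaussian range parameters described above can be sketched numerically as follows (a minimal illustration in Python; the function names are ours, not from the COSM codes, and the parameter values are the illustrative ones quoted in the text):

```python
import numpy as np

def gaussian_ranges(b_min=0.2, b_max=50.0, n=14):
    """Range parameters b_k in geometric progression (values in fm)."""
    ratio = (b_max / b_min) ** (1.0 / (n - 1))
    return b_min * ratio ** np.arange(n)

def overlap_matrix(b):
    """Overlap <u_k|u_l> of normalized 0s Gaussians exp(-(r/b_k)^2/2).
    Analytic result: S_kl = (2 b_k b_l / (b_k^2 + b_l^2))**1.5."""
    bk, bl = np.meshgrid(b, b, indexing='ij')
    return (2.0 * bk * bl / (bk**2 + bl**2)) ** 1.5

b = gaussian_ranges()
assert np.allclose(b[1:] / b[:-1], b[1] / b[0])   # common ratio

S = overlap_matrix(b)
assert np.allclose(np.diag(S), 1.0)          # each basis function normalized
assert np.all(np.linalg.eigvalsh(S) > 0.0)   # nonorthogonal but independent
```

The positive-definite overlap matrix is what allows the nonorthogonal Gaussian set to be recombined into the orthonormal orbits $\phi_\alpha$ of Eq.~(\ref{Gauss2}).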
\subsection{Complex scaling method}
We explain the CSM to treat resonances and continuum states in the many-body system \cite{ho83,moiseyev98,aoyama06,myo14a,myo20}.
Resonances are defined as the eigenstates having complex eigenenergies, namely the Gamow states with the outgoing boundary condition, and the continuum states are orthogonal to the resonances.
In the CSM, all the relative coordinates $\{\vc{r}_i\}$ of the $^4$He+$N$+$N$+$N$+$N$ system, as shown in Fig.~\ref{fig:COSM}, are transformed using a common scaling angle $\theta$ as
\begin{eqnarray}
\vc{r}_i \to \vc{r}_i\, e^{i\theta}.
\end{eqnarray}
The Hamiltonian in Eq.~(\ref{eq:Ham}) is transformed into the complex-scaled Hamiltonian $H_\theta$, and the complex-scaled Schr\"odinger equation is written as
\begin{eqnarray}
H_\theta\Psi^J_\theta
&=& E^J \Psi^J_\theta .
\label{eq:eigen}
\\
\Psi^J_\theta
&=& \sum_c C^J_{c,\theta} \Psi^J_c.
\label{WF_CSM}
\end{eqnarray}
The eigenstates $\Psi^J_\theta$ are determined by solving the eigenvalue problem in Eq.~(\ref{eq:eigen}).
In the total wave function, the $\theta$ dependence is included in the expansion coefficients $C_{c,\theta}^J$ in Eq.~(\ref{WF_CSM}), which can be complex numbers in general.
We obtain the energy eigenvalues $E^J$ of bound and unbound states on a complex energy plane, which are governed by the ABC theorem \cite{ABC}.
From the ABC theorem, the asymptotic outgoing behavior of resonances is transformed into a damping behavior.
This proof is mathematically general and holds in many-body systems.
The damping boundary condition of resonances in the CSM makes it possible to apply the numerical methods developed for bound states to the calculation of resonances.
In the CSM, the Riemann branch cuts are commonly rotated down by $2\theta$ in the complex energy plane,
in which each of the branch cuts starts from the corresponding threshold energy.
On the other hand, the energy eigenvalues of bound and resonant states are independent of $\theta$ from the ABC theorem.
We identify the resonances with complex energy eigenvalues as $E=E_r-i\Gamma/2$,
where $E_r$ and $\Gamma$ are the resonance energies and the decay widths, respectively.
The scaling angle $\theta$ is determined in each resonance to give the stationary point of the energy eigenvalue on the complex energy plane.
In the CSM, resonance wave functions can be expanded in terms of the $L^2$ basis functions because of the damping boundary condition,
and the amplitudes of resonances are normalized with the condition of $\sum_{c} \big(C^J_{c,\theta}\big)^2=1$.
It is noted that the Hermitian product is not adopted due to the bi-orthogonal property of the adjoint states \cite{ho83,moiseyev98,berggren68}.
Hence the components of the COSM configurations $\big(C^J_{c,\theta}\big)^2$ can be a complex number
and are independent of the scaling angle $\theta$ when we obtain the converging solutions of resonances \cite{myo20}.
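The $\theta$-independence of resonance and bound-state poles and the $2\theta$ rotation of the continuum can be illustrated with a minimal one-dimensional toy model, which is not the five-body calculation of this paper: a complex-scaled s-wave Hamiltonian with an assumed Gaussian well $V(r)=-5\,e^{-r^2}$ (with $\hbar=m=1$), discretized by finite differences.

```python
import numpy as np

def scaled_eigs(theta, v0=5.0, r_max=20.0, n=800):
    """Eigenvalues of the complex-scaled radial Hamiltonian (hbar = m = 1)
    H_theta = -(1/2) exp(-2i*theta) d^2/dr^2 + V(r exp(i*theta))
    for the toy well V(r) = -v0 exp(-r^2), discretized by finite
    differences with u(0) = u(r_max) = 0."""
    r = np.linspace(r_max / n, r_max, n)
    h = r[1] - r[0]
    lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
    rs = r * np.exp(1j * theta)          # complex-scaled coordinate
    H = -0.5 * np.exp(-2j * theta) * lap + np.diag(-v0 * np.exp(-rs**2))
    return np.linalg.eigvals(H)

e1, e2 = scaled_eigs(0.15), scaled_eigs(0.25)
# bound state: the eigenvalue with the most negative real part; it should
# not move when theta changes, while the discretized continuum rotates
# onto the half-line of argument -2*theta (ABC theorem)
b1 = e1[np.argmin(e1.real)]
b2 = e2[np.argmin(e2.real)]
```

Diagonalizing at two scaling angles, the bound-state eigenvalue stays put while the discretized continuum rotates by $-2\theta$, mirroring the ABC-theorem picture described above.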
\section{Results}\label{sec:result}
\subsection{Energy spectra of He isotopes and mirror nuclei}
We discuss the resonances in $^8$He and $^8$C in the COSM.
The energy eigenvalues obtained for the two nuclei are listed in Tables \ref{ene_8He} and \ref{ene_8C}, measured from the thresholds of $^4$He+$N+N+N+N$.
We obtain five states in each nucleus, and only the ground state of $^8$He is a bound state and the others are resonances.
For the ground state of $^8$He, the energy relative to $^4$He is $-3.22$ MeV, close to the experimental value of $-3.11$ MeV.
The matter and charge radii of $^8$He are 2.52 and 1.92 fm, respectively, which also reproduce the experimental values \cite{mueller07}.
The detailed analysis of this state was reported in the previous analysis \cite{myo10,myo14b}.
For the $2^+_1$ resonance of $^8$He, we obtain a relative energy from $^4$He of 0.32 MeV and a decay width $\Gamma$ of 0.66 MeV, consistent with the older experimental values of
0.46$\pm$0.12 MeV and $\Gamma$=0.5$\pm$0.35 MeV \cite{korsheninnikov93}.
From Tables \ref{ene_8He} and \ref{ene_8C}, it is found that the energies of the $^8$C states all lie higher than those of $^8$He, and the decay widths of the resonances become larger
in $^8$C than in $^8$He. This indicates a dynamical isospin-symmetry breaking induced by the Coulomb interaction acting on the valence protons in $^8$C.
The detailed configurations of each state will be discussed later.
\begin{table}[t]
\caption{Energy eigenvalues of $^8$He measured from the $^4$He+$n$+$n$+$n$+$n$ threshold energy in units of MeV.
Data of $0^+_{1,2}$ are taken from Ref. \cite{myo10}. The values in the square brackets are the experimental ones of $0^+_1$
and $2^+_1$ \cite{korsheninnikov93}. }
\label{ene_8He}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
& Energy~(MeV) & Decay width~(MeV) \\ \hline
$0^+_1$ & $-$3.22 [$-$3.11] & --- \\
$0^+_2$ & 3.07 & 3.19 \\
$1^+ $ & 1.65 & 3.57 \\
$2^+_1$ &~~0.32~[0.46$\pm$0.12] & ~~0.66~[0.5$\pm$0.35] \\
$2^+_2$ & 4.52 & 4.39 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[t]
\caption{Energy eigenvalues of $^8$C measured from the $^4$He+$p$+$p$+$p$+$p$ threshold energy in units of MeV.
Data of $0^+_{1,2}$ are taken from Ref. \cite{myo12}.
The values in the square brackets are the experimental ones for the ground state \cite{charity11}. }
\label{ene_8C}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
& Energy~(MeV) & Decay width~(MeV) \\ \hline
$0^+_1$ & 3.32~[3.449(30)] & 0.072~[0.130(50)] \\
$0^+_2$ & 8.88 & 6.64 \\
$1^+ $ & 7.89 & 7.28 \\
$2^+_1$ & 6.38 & 4.29 \\
$2^+_2$ & 9.70 & 9.10 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm,clip]{8He_lev5.eps}
\caption{Energy levels of $^{4-8}$He measured from the $^4$He energy. Units are in MeV.
Black and gray lines are the values of theory and experiments, respectively. Small numbers indicate the decay widths $\Gamma$ of resonances.
For $^7$He($1/2^-$), the experimental data are taken from Ref. \cite{skaza06}.
For $^8$He, the experimental data are taken from Ref. \cite{golovkov09}.}
\label{fig:He}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=8.5cm,clip]{C8_lev5.eps}
\caption{Energy levels of $^5$Li, $^6$Be, $^7$B, and $^8$C. Units are in MeV.
Notations are the same as shown in Fig.~\ref{fig:He}.}
\label{fig:mirror}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=8.5cm,clip]{ene_2+_18_01d.eps}
\caption{
Energy eigenvalue distribution of $^8$He ($2^+$) measured from the $^4$He+$n$+$n$+$n$+$n$ threshold energy in the complex energy plane,
where scaling angle is 22$^\circ$. Units are in MeV.
The energies with double circles indicate the $2^+_1$ and $2^+_2$ resonances.
Several groups of the continuum states are shown with their configurations.}
\label{fig:CSM}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=5.0cm,clip]{C8_mirror1.eps}
\caption{Comparison of the excitation energy spectra of $^8$He and $^8$C in the units of MeV.}
\label{fig:excite}
\end{figure}
We show the systematic behavior of energy levels of $^{4-8}$He in Fig. \ref{fig:He}
and their mirror nuclei of $^5$Li, $^6$Be, $^7$B, and $^8$C in Fig. \ref{fig:mirror}.
In these levels, new results in the present analysis are the $1^+$ and $2^+$ states of $^8$He and $^8$C.
In Fig. \ref{fig:CSM}, we show an example of the calculated energy eigenvalues of $^8$He($2^+$) in the CSM using $\theta=22^\circ$,
which gives the stationary condition for the energy of the $2^+_1$ state.
In addition to the clearly identified $2^+_{1,2}$ resonances,
many threshold energy positions and the corresponding continuum states are obtained as discretized spectra.
In particular, it is found that the $2^+_1$ resonance is located near the threshold energies of various continuum states as shown in Fig. \ref{fig:CSM}.
This indicates that one should carefully distinguish the components of resonance and continuum states in the observables.
It is interesting to investigate the effects of resonances and continuum states on the cross section in the future,
which can be performed by using the Green's function with complex scaling \cite{myo98,myo01,myo14a}.
It is meaningful to discuss the isospin symmetry between $^8$He and $^8$C for four valence neutrons and protons above $^4$He with $T=2$.
We compare the excitation energy spectra of two nuclei measured from the ground states using their resonance energies in Fig.~\ref{fig:excite}.
The level ordering is the same in the two nuclei, but the level spacing is smaller in $^8$C than in $^8$He,
indicating a dynamical isospin-symmetry breaking induced by the Coulomb interaction for protons.
This result is also related to the fact that the resonances in $^8$C have larger decay widths than those of $^8$He as shown in Tables \ref{ene_8He} and \ref{ene_8C}.
When we compare the distance in the complex energy plane between the eigenvalues $E_r-i\Gamma/2$ of, for example, the $0^+_1$ and $2^+_2$ states,
$^8$He and $^8$C give 8.05 and 7.84 MeV, respectively, and these values are close to each other.
In this sense, the energy spectra of the two nuclei are similar when regarded as complex energy eigenvalues in the complex energy plane.
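The quoted distances can be checked directly from the rounded entries of Tables \ref{ene_8He} and \ref{ene_8C}; the small residual differences, of order 0.03 MeV, presumably reflect rounding of the tabulated eigenvalues.

```python
# Complex energies E = E_r - i*Gamma/2 in MeV, read off the tables above
he_0p1 = -3.22 + 0.0j           # 8He 0+_1 (bound, Gamma = 0)
he_2p2 = 4.52 - 0.5j * 4.39     # 8He 2+_2
c_0p1 = 3.32 - 0.5j * 0.072     # 8C 0+_1
c_2p2 = 9.70 - 0.5j * 9.10      # 8C 2+_2

d_he = abs(he_2p2 - he_0p1)     # ~8.05 MeV
d_c = abs(c_2p2 - c_0p1)        # ~7.82 MeV from the rounded entries
```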
\subsection{Configurations of $^8$He and $^8$C}
\begin{table}[tb]
\caption{Dominant parts of the squared amplitudes $(C_{c,\theta}^J)^2$ of the $0^+_1$ states of $^8$He and $^8$C.
The values are taken from Ref.\cite{myo14b}.
}
\label{comp8_1}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Configuration & $^8$He($0^+_1$) & $^8$C($0^+_1$) \\ \hline
$(p_{3/2})^4$ & 0.860 & $0.878-i0.005$ \\
$(p_{3/2})^2(p_{1/2})^2$ & 0.069 & $0.057+i0.001$ \\
$(p_{3/2})^2(1s_{1/2})^2$ & 0.006 & $0.010+i0.003$ \\
$(p_{3/2})^2(d_{3/2})^2$ & 0.008 & $0.007+i0.000$ \\
$(p_{3/2})^2(d_{5/2})^2$ & 0.042 & $0.037+i0.000$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[tb]
\caption{Dominant parts of the squared amplitudes $(C_{c,\theta}^J)^2$ of the $0^+_2$ states of $^8$He and $^8$C.
The values are taken from Ref.\cite{myo14b}.
}
\label{comp8_2}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Configuration & $^8$He($0^+_2$) & $^8$C($0^+_2$) \\ \hline
$(p_{3/2})^4$ & $0.020-i0.009$ & $0.044+i0.007$ \\
$(p_{3/2})^2(p_{1/2})^2$ & $0.969-i0.011$ & $0.934-i0.012$ \\
$(p_{3/2})^2(1s_{1/2})^2$ & $-0.010-i0.001$ & $-0.001+i0.000$ \\
$(p_{3/2})^2(d_{3/2})^2$ & $0.018+i0.022$ & $0.020+i0.003$ \\
$(p_{3/2})^2(d_{5/2})^2$ & $0.002+i0.000$ & $0.002+i0.001$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[th]
\caption{Dominant parts of the squared amplitudes $(C_{c,\theta}^J)^2$ of the $1^+$ states of $^8$He and $^8$C.}
\label{comp8_3}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Configuration & $^8$He($1^+$) & $^8$C($1^+$) \\ \hline
$(p_{3/2})^3(p_{1/2})$ & $0.949-i0.027$ & $0.962+i0.008$ \\
$(p_{3/2})(p_{1/2})^2(\tilde{p}_{1/2})$ & $0.017+i0.020$ & $0.011-i0.011$ \\
$(p_{3/2})(p_{1/2})(d_{5/2})^2$ & $0.013+i0.006$ & $0.009+i0.002$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[th]
\caption{Dominant parts of the squared amplitudes $(C_{c,\theta}^J)^2$ of the $2^+_1$ states of $^8$He and $^8$C.}
\label{comp8_4}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Configuration & $^8$He($2^+_1$) & $^8$C($2^+_1$) \\ \hline
$(p_{3/2})^3(p_{1/2})$ & $0.922-i0.000$ & $ 0.922+i0.017$ \\
$(p_{3/2})^2(p_{1/2})^2$ & $0.021-i0.009$ & $ 0.035-i0.028$ \\
$(p_{3/2})(p_{1/2})^2(\tilde{p}_{1/2})$ & $0.015+i0.009$ & $-0.009+i0.014$ \\
$(p_{3/2})(p_{1/2})(d_{5/2})^2$ & $0.010+i0.003$ & $ 0.008+i0.003$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[th]
\caption{Dominant parts of the squared amplitudes $(C_{c,\theta}^J)^2$ of the $2^+_2$ states of $^8$He and $^8$C.}
\label{comp8_5}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Configuration & $^8$He($2^+_2$) & $^8$C($2^+_2$) \\ \hline
$(p_{3/2})^2(p_{1/2})^2$ & $0.908+i0.015$ & $ 0.955+i0.052$ \\
$(p_{3/2})^3(p_{1/2})$ & $0.026-i0.011$ & $ 0.035-i0.031$ \\
$(p_{3/2})(p_{1/2})^2(\tilde{p}_{1/2})$ & $0.032-i0.006$ & $-0.017+i0.006$ \\
$(p_{3/2})^2(d_{3/2})^2$ & $0.032+i0.004$ & $ 0.010-i0.024$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
We discuss the configurations of four valence nucleons in the five states of $^8$He and $^8$C.
For $0^+_{1,2}$, we already discussed the results in Refs. \cite{myo10,myo12,myo14b}, hence
we only show the values in Tables \ref{comp8_1} and \ref{comp8_2} for reference.
New results are $1^+$ and $2^+_{1,2}$ as shown in Tables \ref{comp8_3}, \ref{comp8_4}, and \ref{comp8_5}, respectively.
We show the dominant parts of the squared amplitude $(C_{c,\theta}^J)^2$ of the COSM configurations in Eq.~(\ref{WF_CSM}) for each state.
In $(C_{c,\theta}^J)^2$, the magnitudes of the imaginary parts are very small for all states.
In this case, we can use the real parts of $(C_{c,\theta}^J)^2$ to give a physical interpretation of the states.
For the $1^+$ states in two nuclei, they are dominated by the single configuration of $(p_{3/2})^3(p_{1/2})$ for four valence nucleons.
It is noted that in the configuration of $(p_{3/2})(p_{1/2})^2(\tilde{p}_{1/2})$, the $\tilde{p}_{1/2}$ state is orthogonal to the $p_{1/2}$ state.
For the $2^+_1$ states in two nuclei, they are dominated by the single configuration of $(p_{3/2})^3(p_{1/2})$ for four valence nucleons, which are the same as the $1^+$ results.
For the $2^+_2$ states, they are dominated by the single configuration of $(p_{3/2})^2(p_{1/2})^2$ for four valence nucleons; this single-particle configuration is the same as in the $0^+_2$ case.
From these configuration results, when we see the five states of $^8$He and $^8$C,
the valence nucleons above $^4$He are dominantly in the $p$-shell configurations and the mixing of $sd$-shell configurations is small.
When we compare the amount of configuration mixing among the five states of $^8$He and $^8$C,
only the ground $0^+_1$ states show stronger mixing than the other four states in each nucleus, as shown in Table \ref{comp8_1}.
Namely, the ground states are the most correlated ones.
This is because the ground state of $^8$He is a bound state and the ground state of $^8$C is a resonance with a small decay width of 0.07 MeV.
Hence, the spatial distributions of the four valence nucleons in the ground states are considered to be more compact than in the other four resonances of each nucleus.
This property works to enhance the couplings between different configurations in the interaction region of four valence nucleons.
This point was discussed in Ref. \cite{myo14b} by comparing the radius of valence nucleons in the $0^+_{1,2}$ states of two nuclei.
If we adopt the bound-state approximation with $\theta=0$, namely, without the CSM,
we obtain the $2^+_1$ state of $^8$He at a positive energy of 0.36 MeV from the $^4$He+$4n$ threshold,
with mixings of the $(p_{3/2})^3(p_{1/2})$ and $(p_{3/2})^2(p_{1/2})^2$ configurations of 0.89 and 0.04, respectively.
This shows an enhancement of the configuration mixing relative to the converged values obtained with the CSM in Table \ref{comp8_4},
and indicates the importance of a correct treatment of the boundary condition in the analysis of resonances.
We remark on the experimental situation of the resonances of $^8$He and $^8$C.
For $^8$He, the $2^+_1$ state has been reported by several experiments in a consistent energy region \cite{korsheninnikov93,golovkov09}, but
the $0^+_2$, $1^+$, and $2^+_2$ states are not settled yet, although possible signatures have been reported \cite{golovkov09}.
For $^8$C, only the ground $0^+_1$ state has been reported \cite{charity11}.
It is interesting to compare the present theoretical predictions with the experimental observations
to get knowledge of the isospin-symmetry breaking in drip-line nuclei.
\newpage
\section{Summary}\label{sec:summary}
We investigated the resonances of $^8$He and $^8$C using the $^4$He+$N+N+N+N$ five-body model for these neutron-rich and proton-rich nuclei.
We used the cluster orbital shell model to describe the motion of the valence nucleons around $^4$He in weakly bound and unbound states.
We also used the complex scaling method to treat many-body resonances under the correct boundary condition.
We found five states in both $^8$He and $^8$C, which are resonances except for the ground state of $^8$He.
We obtained the resonance energies and decay widths from the complex energy eigenvalues of the resonance poles.
The dynamical isospin-symmetry breaking is confirmed in the energy spectra of $^8$He and $^8$C including their decay widths,
induced by the Coulomb interaction for protons in $^8$C.
The obtained states are dominantly explained in the $p$-shell configurations, and
the resonances with larger decay widths tend to be dominated by the single configuration in the $jj$-coupling picture.
The excited resonances of $^8$He and $^8$C obtained in the present paper are predictions for future experiments.
This paper is an extension of our previous systematic study of neutron-rich He isotopes and their proton-rich mirror nuclei.
In the future, a detailed analysis of each resonance will be performed for $^8$He and $^8$C.
For $^8$He, there are many open channels in the low-energy region, and
the transition strength from the ground state is interesting for investigating the effects not only of resonances but also of the non-resonant continuum states related to these open channels.
In the calculation of the strength functions, the Green's function with complex scaling is useful \cite{myo98,odsuren14,odsuren15,odsuren17}.
The isospin-symmetry analysis is also an interesting aspect in two nuclei near the drip lines.
\section*{Acknowledgments}
This work was supported by JSPS KAKENHI Grants No. JP18K03660 and No. JP20K03962.
Numerical calculations were partly achieved through the use of OCTOPUS at the Cybermedia Center, Osaka University.
\clearpage
\section*{References}
\def\JL#1#2#3#4{ {{\rm #1}} \textbf{#2}, #4 (#3)}
\nc{\PR}[3] {\JL{Phys. Rev.}{#1}{#2}{#3}}
\nc{\PRC}[3] {\JL{Phys. Rev.~C}{#1}{#2}{#3}}
\nc{\PRA}[3] {\JL{Phys. Rev.~A}{#1}{#2}{#3}}
\nc{\PRL}[3] {\JL{Phys. Rev. Lett.}{#1}{#2}{#3}}
\nc{\NP}[3] {\JL{Nucl. Phys.}{#1}{#2}{#3}}
\nc{\NPA}[3] {\JL{Nucl. Phys.}{A#1}{#2}{#3}}
\nc{\PL}[3] {\JL{Phys. Lett.}{#1}{#2}{#3}}
\nc{\PLB}[3] {\JL{Phys. Lett.~B}{#1}{#2}{#3}}
\nc{\PTP}[3] {\JL{Prog. Theor. Phys.}{#1}{#2}{#3}}
\nc{\PTPS}[3] {\JL{Prog. Theor. Phys. Suppl.}{#1}{#2}{#3}}
\nc{\PRep}[3] {\JL{Phys. Rep.}{#1}{#2}{#3}}
\nc{\JP}[3] {\JL{J. of Phys.}{#1}{#2}{#3}}
\nc{\PPNP}[3] {\JL{Prog. Part. Nucl. Phys.}{#1}{#2}{#3}}
\nc{\PTEP}[3] {\JL{Prog. Theor. Exp. Phys.}{#1}{#2}{#3}}
\nc{\andvol}[3] {{\it ibid.}\JL{}{#1}{#2}{#3}}
\section{Conclusion}
\label{sec:conclusion}
This paper studied the time required for a random walk to reach one or more nodes in a target set. We demonstrated that the problem of selecting a set of nodes in order to minimize random walk times including commute, cover, and hitting times has an inherent submodular structure that enables development of efficient approximation algorithms, as well as optimal solutions for some special cases as stated below.
We considered two cases, namely, walks with fixed distribution, as well as walks in which the distribution is jointly optimized with the target set by selecting a control policy in order to minimize the walk times. In the first case, we developed a unifying framework for proving submodularity of the walk times, based on proving submodularity of selecting a subset of stopping times, and derived submodularity of commute, cover, and hitting times as special cases. As a consequence, we showed that a set of nodes that minimizes the cover time can be selected using only a polynomial number of evaluations of the cover time, and further derived solution algorithms for bounds on the cover time.
In the case where the distribution and target set are jointly optimized, we investigated the problems of maximizing the probability of reaching the target set, minimizing the average cost per cycle of the walk, as well as joint optimization of these metrics. We proved that the former problem admits a relaxation that can be solved in polynomial time, while the latter problem can be approximated via submodular optimization methods. In particular, the average cost per cycle can be minimized by minimizing the volume of an associated linear polytope, which we proved to be a supermodular function of the input set.
\section{Submodularity of Walk Times with Fixed Distribution}
\label{sec:fixed}
This section demonstrates submodularity of random walk times for walks with a probability distribution that does not depend on the set of nodes $S$. We first state a general result on submodularity of stopping times, and then prove submodularity of hitting, commute, cover, and coupling times as special cases.
\subsection{General Result}
\label{subsec:fixed-general}
Consider a set of stopping times $Z_{1},\ldots,Z_{N}$ for a random process $X_{k}$. Let $W = \{1,\ldots,N\}$, and define two functions $f,g: 2^{W} \rightarrow \mathbb{R}$ by
\begin{eqnarray*}
f(S) &=& \mathbf{E}\left\{\max{\{Z_{i} : i \in S\}}\right\} \\
g(S) &=& \mathbf{E}\left\{\min{\{Z_{i} : i \in S\}}\right\}
\end{eqnarray*}
We have the following general result.
\begin{proposition}
\label{prop:stopping-time-fixed}
The functions $f(S)$ and $g(S)$ are nondecreasing submodular and nonincreasing supermodular, respectively.
\end{proposition}
\begin{IEEEproof}
For any $S$ and $T$ with $S \subseteq T$, we have that
\begin{eqnarray*}
f(T) &=& \mathbf{E}\left\{\max{\left\{\max{\{Z_{i} : i \in S\}},\right.}\right. \\
&& \quad \left. \left. \max{\{Z_{i} : i \in T \setminus S\}}\right\}\right\} \\
&\geq& \mathbf{E}\left\{\max{\{Z_{i} : i \in S\}}\right\} = f(S),
\end{eqnarray*}
implying that $f(S)$ is nondecreasing. The proof of monotonicity for $g(S)$ is similar.
To show submodularity of $f(S)$, consider any sample path $\omega$ of the random process $X_{k}$. For any sample path, $Z_{i}$ is a deterministic nonnegative integer. Consider any sets $S,T \subseteq \{1,\ldots,N\}$, and suppose without loss of generality that $\max{\{Z_{i}(\omega) : i \in S\}} \geq \max{\{Z_{i}(\omega) : i \in T\}}$. Then
\begin{eqnarray*}
\max_{i \in S}{Z_{i}(\omega)} + \max_{i \in T}{Z_{i}(\omega)} &=& \max_{i \in S \cup T}{Z_{i}(\omega)} + \max_{i \in T}{Z_{i}(\omega)} \\
&\geq& \max_{i \in S \cup T}{Z_{i}(\omega)} + \max_{i \in S \cap T}{Z_{i}(\omega)}.
\end{eqnarray*}
where the equality follows from the assumption that $\max_{i \in S}{Z_{i}(\omega)} = \max_{i \in S \cup T}{Z_{i}(\omega)}$, and the inequality from $S \cap T \subseteq T$. Hence $\max_{i \in S}{Z_{i}(\omega)}$ is submodular as a function of $S$ on each sample path.
Now, let $Q = (i_{1},\ldots,i_{N})$, where each $i_{j} \in \{1,\ldots,N\}$, be a random variable defined by $Z_{i_{1}} \leq Z_{i_{2}} \leq \cdots \leq Z_{i_{N}}$ for each sample path, so that $Q$ is the order in which the stopping times $Z_{i}$ are satisfied. Define $\alpha_{jQ} = \mathbf{E}(Z_{i_{j}}|Q)$. For any ordering $(i_{1},\ldots,i_{N})$, we have
\begin{IEEEeqnarray*}{rCl}
\IEEEeqnarraymulticol{3}{l}{
\mathbf{E}\left\{\max_{i \in S}{\{Z_{i}\}} | Q=(i_{1},\ldots,i_{N})\right\}} \\
&=& \mathbf{E}\left\{\max_{j : i_{j} \in S}{\{Z_{i_{j}}\}} | Q = (i_{1},\ldots,i_{N})\right\} \\
&=& \max_{j : i_{j} \in S}{\mathbf{E}(Z_{i_{j}}|Q)} = \max_{j: i_{j} \in S}{\alpha_{jQ}}.
\end{IEEEeqnarray*}
This is submodular in $S$ by the same analysis as above, since conditioning on $Q$ fixes the values $\alpha_{jQ}$. Taking the expectation over all realizations of $Q$, we have
\begin{eqnarray*}
f(S) &=& \sum_{(i_{1},\ldots,i_{N})}{\mathbf{E}\left\{\max_{i \in S}{\{Z_{i}\}}|Q=(i_{1},\ldots,i_{N})\right\}} \\
&& \cdot \, Pr(Q=(i_{1},\ldots,i_{N})) \\
&=& \sum_{(i_{1},\ldots,i_{N})}{\left[\left(\max_{j : i_{j} \in S}{\alpha_{jQ}}\right)Pr(Q=(i_{1},\ldots,i_{N}))\right]}
\end{eqnarray*}
which is a finite nonnegative weighted sum of submodular functions and hence is submodular. The proof of supermodularity of $g(S)$ is similar.
\end{IEEEproof}
Proposition \ref{prop:stopping-time-fixed} is a general result that holds for any random process, including random processes that are non-stationary.
In the following sections, we apply these results and derive tighter conditions for hitting, commute, and cover times.
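The sample-path argument behind Proposition \ref{prop:stopping-time-fixed} can also be checked exhaustively on a toy realization: for any fixed values $Z_{i}(\omega)$, the max inequality, and the min inequality over intersecting sets, hold deterministically. The random values below are purely illustrative.

```python
import itertools
import random

# one realization z[i] of the stopping times Z_1,...,Z_N on a sample path
random.seed(0)
N = 5
z = [random.randint(1, 100) for _ in range(N)]

subsets = [set(c) for k in range(1, N + 1)
           for c in itertools.combinations(range(N), k)]
max_violations = min_violations = pairs = 0
for S, T in itertools.product(subsets, repeat=2):
    U, I = S | T, S & T
    pairs += 1
    # max is submodular on every sample path (max over the empty set taken as 0)
    if max(z[i] for i in S) + max(z[i] for i in T) < \
            max(z[i] for i in U) + (max(z[i] for i in I) if I else 0):
        max_violations += 1
    # min is supermodular whenever the intersection is nonempty
    if I and min(z[i] for i in S) + min(z[i] for i in T) > \
            min(z[i] for i in U) + min(z[i] for i in I):
        min_violations += 1
```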
\subsection{Submodularity of Hitting and Commute Times}
\label{subsec:hitting}
In this section, we consider the problem of selecting a subset of nodes $S$ in order to minimize the hitting time to $S$, $H(\pi,S)$, as well as selecting a subset of nodes $S$ in order to minimize the commute time $K(\pi,S)$. The following result is a corollary of Proposition \ref{prop:stopping-time-fixed}.
\begin{lemma}
\label{lemma:hitting-time-submodular}
$H(\pi,S)$ is supermodular as a function of $S$.
\end{lemma}
\begin{IEEEproof}
Let $Z_{i}$ denote the stopping time corresponding to the event $\{X_{k} = i\}$, where $X_{k}$ is a random walk on the graph. Then $H(\pi,S) = \mathbf{E}\left\{\min{\{Z_{i} : i \in S\}}\right\}$, which is supermodular by Proposition \ref{prop:stopping-time-fixed}.
\end{IEEEproof}
The supermodularity of $H(\pi,S)$ implies that the following greedy algorithm can be used to approximate the solution to $\min{\{H(\pi,S) : |S| \leq k\}}$. At each iteration, the node $v$ that minimizes $H(\pi, S \cup \{v\})$ is added to $S$. Let $S^{\ast}$ denote the minimizer of $\{H(\pi, S) : |S| \leq k\}$. Then the set $\hat{S}$ obtained by the greedy algorithm satisfies $$H(\pi, \hat{S}) \leq \left(1-\frac{1}{e}\right)H(\pi,S^{\ast}) + \frac{1}{e}\max_{v}{H(\pi,\{v\})}.$$ An analogous lemma for the commute time is as follows.
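A minimal sketch of this greedy rule on a hypothetical 8-node ring graph, with hitting times estimated by Monte Carlo simulation rather than computed in closed form; the graph, sample sizes, and the exclusion of the start node from the candidate set are illustrative assumptions.

```python
import random

def sample_paths(adj, start, n_paths=2000, max_steps=500, seed=1):
    """Sample trajectories of a simple (uniform) random walk on a graph."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = start, [start]
        for _ in range(max_steps):
            x = rng.choice(adj[x])
            path.append(x)
        paths.append(path)
    return paths

def hitting_time(paths, S):
    """Monte Carlo estimate of the expected time to reach any node of S
    (truncated at the path length if S is never reached)."""
    return sum(next((k for k, v in enumerate(p) if v in S), len(p))
               for p in paths) / len(paths)

def greedy_min_hitting(adj, start, k):
    """Greedy rule: repeatedly add the node giving the smallest estimated
    hitting time, exploiting supermodularity of H(pi, S)."""
    paths = sample_paths(adj, start)
    S = set()
    for _ in range(k):
        v = min((u for u in adj if u not in S and u != start),
                key=lambda u: hitting_time(paths, S | {u}))
        S.add(v)
    return S, hitting_time(paths, S), paths

# toy example: an 8-node ring
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
S, h, paths = greedy_min_hitting(adj, start=0, k=2)
```

Since the same sample paths are reused for every candidate set, the estimated hitting time of the greedy set is never worse than the best single node.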
\begin{lemma}
\label{lemma:commute-time-supermodular}
For any distribution $u$, the function $K(u,S)$ is supermodular as a function of $S$.
\end{lemma}
\begin{IEEEproof}
Let $Z_{i}$ denote the stopping time corresponding to the event $\{X_{k} = u, X_{l} = i \mbox{ for some } l < k\}$. Then $K(u,S) = \mathbf{E}(\min_{i \in S}{Z_{i}})$, and hence $K(u,S)$ is supermodular as a function of $S$.
\end{IEEEproof}
Lemma \ref{lemma:commute-time-supermodular} can be extended to distributions $\pi$ over the initial state $u$ as $K(\pi,S) = \sum_{u}{K(u,S)\pi(u)}$. The function $K(\pi,S)$ is then a nonnegative weighted sum of supermodular functions, and hence is supermodular.
\subsection{Submodularity of Cover Time}
\label{subsec:cover}
The submodularity of the cover time is shown as follows.
\begin{figure*}
\centering
$\begin{array}{ccc}
\includegraphics[width=2.25in]{Figures/ACPC-dual.eps} &
\includegraphics[width=2.25in]{Figures/ACPC-comparison.eps} &
\includegraphics[width=2.25in]{Figures/comparison_results.eps} \\
(a) & (b) & (c)
\end{array}$
\caption{Numerical evaluation of submodular optimization of random walk times. (a) Minimum number of input nodes for minimizing average cost per cycle of a uniformly random MDP. (b) Performance of Algorithm \ref{algo:ACPC} for selecting a given number of input nodes to minimize the average cost per cycle. (c) Comparison of minimum cover time using random, greedy, and submodular optimization algorithms.}
\label{fig:fixed}
\end{figure*}
\begin{proposition}
\label{prop:cover-time}
The cover time $C(S)$ is nondecreasing and submodular as a function of $S$.
\end{proposition}
\begin{IEEEproof}
The result can be proved by using Proposition \ref{prop:stopping-time-fixed} with the stopping times $\{Z_{i} : i \in S\}$, where $Z_{i}$ is the first time that the event $\{X_{k} = i\}$ occurs, so that $C(S) = \mathbf{E}\left\{\max{\{Z_{i} : i \in S\}}\right\}$.
An alternative proof is as follows. Let $S \subseteq T \subseteq V$, and let $v \in V \setminus T$. The goal is to show that $$C(S \cup \{v\}) - C(S) \geq C(T \cup \{v\}) - C(T).$$ Let $\tau_{v}(S)$ denote the event that the walk reaches $v$ before reaching all nodes in the set $S$, noting that $\tau_{v}(T) \subseteq \tau_{v}(S)$. Let $Z_{S}$, $Z_{T}$, and $Z_{v}$ denote the times when $S$, $T$, and $v$ are reached by the walk, respectively. We prove that the cover time is submodular for each sample path of the walk by considering different cases.
In the first case, the walk reaches node $v$ before reaching all nodes in $S$. Then $C(S) = C(S \cup \{v\})$ and $C(T) = C(T \cup \{v\})$, implying that submodularity holds trivially. In the second case, the walk reaches node $v$ after reaching all nodes in $S$, but before reaching all nodes in $T$. In this case, $C(S \cup \{v\}) - C(S) = Z_{v} - Z_{S}$, while $C(T \cup \{v\})-C(T) = 0$. In the last case, the walk reaches $v$ after reaching all nodes in $T$. In this case, $$C(S \cup \{v\}) - C(S) = Z_{v} - Z_{S} \geq Z_{v} - Z_{T} = C(T \cup \{v\}) - C(T),$$ implying submodularity. Taking the expectation over all sample paths yields the desired result.
\end{IEEEproof}
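The per-sample-path inequality used in this proof can be checked directly on simulated walks: for every path, $C(S\cup\{v\})-C(S) \geq C(T\cup\{v\})-C(T)$ whenever $S \subseteq T$ and $v \notin T$, with cover times truncated at the path length. The 6-node ring and the particular sets below are illustrative.

```python
import random

def cover_time(path, S):
    """First time along the path at which every node of S has been visited
    (truncated at the path length if some node is never reached)."""
    need = set(S)
    for k, v in enumerate(path):
        need.discard(v)
        if not need:
            return k
    return len(path)

random.seed(2)
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
S, T, v = {1, 2}, {1, 2, 3}, 4          # S subset of T, v outside T
violations = 0
for _ in range(200):
    x, path = 0, [0]
    for _ in range(400):
        x = random.choice(adj[x])
        path.append(x)
    lhs = cover_time(path, S | {v}) - cover_time(path, S)
    rhs = cover_time(path, T | {v}) - cover_time(path, T)
    if lhs < rhs:                        # would contradict submodularity
        violations += 1
```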
The submodularity of the cover time implies that the problem of maximizing the cover time can be approximated up to a provable optimality bound. Similarly, we can select a set of nodes that trades off the cover time against the set size by solving $$\min{\{C(S) - \psi |S| : S \subseteq V\}}.$$ The cover time, however, is itself computationally difficult to approximate. Instead, bounds on the cover time can be used. We have the following preliminary result.
\begin{proposition}[\cite{levin2009markov}, Prop. 11.4]
\label{prop:Matthews}
For any set of nodes $A$, define $t_{min}^{A} = \min_{a,b \in A, a \neq b}{H(a,b)}$. Then the cover time $C(S)$ is bounded by $$C(S) \geq \max_{A \subseteq S}{\left\{t_{min}^{A}\left(1+ \frac{1}{2} + \cdots + \frac{1}{|A|-1}\right)\right\}}.$$
\end{proposition}
Define $c(k) = 1 + \frac{1}{2} + \cdots + \frac{1}{k-1}$, and define $\hat{f}(S)$ by $$\hat{f}(S) = \max_{A \subseteq S}{\left\{c(|A|)\min_{\stackrel{b \in A}{a \in V}}{H(a,b)}\right\}}.$$ The approximation $\hat{f}(S)$ can be minimized as follows. We first have the following preliminary lemma.
\begin{lemma}
\label{lemma:Matthews-approx}
The function $\hat{f}(S)$ satisfies $$\hat{f}(S) = \max_{k=1,\ldots,|S|}{\alpha_{k}c(k)},$$ where $\alpha_{k}$ is the $k$-th largest value of $\min_{a \in V}{H(a,b)}$ over $b \in S$.
\end{lemma}
\begin{IEEEproof}
Any set $A$ with $|A| = k$ has the same value of $c(|A|)$. Hence it suffices to find, for each $k$, the set $A$ with $|A| = k$ that maximizes $t_{min}^{A}$. That maximizer is given by the $k$ elements of $S$ with the largest values of $\min_{a \in V}{H(a,b)}$, and the corresponding value is $\alpha_{k}$.
\end{IEEEproof}
By Lemma \ref{lemma:Matthews-approx}, in order to select the minimizer of $\hat{f}(S) - \lambda|S|$, the following procedure is sufficient. For each $k$, select the $k$ elements of $S$ with the smallest value of $\min{\{H(a,b) : a \in V\}}$, and compute $\beta_{k}c(k) - \lambda k$, where $\beta_{k}$ is the $k$-th smallest value of $\min_{a \in V}{H(a,b)}$ over all $b \in S$.
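This selection procedure can be sketched as follows under hypothetical values of $\min_{a}H(a,b)$; here the bound $\hat{f}$ is evaluated in full via Lemma \ref{lemma:Matthews-approx} on each candidate prefix set.

```python
def matthews_bound(h_min, S):
    """hat_f(S) = max_k alpha_k * c(k), where alpha_k is the k-th largest
    value of h_min[b] = min_a H(a, b) over b in S, and
    c(k) = 1 + 1/2 + ... + 1/(k-1), with c(1) = 0."""
    alphas = sorted((h_min[b] for b in S), reverse=True)
    c = lambda k: sum(1.0 / j for j in range(1, k))
    return max(alphas[k - 1] * c(k) for k in range(1, len(S) + 1))

def select_nodes(h_min, lam):
    """Minimize hat_f(S) - lam*|S| over the candidate prefix sets:
    for each k, take the k nodes with the smallest h_min values."""
    order = sorted(h_min, key=h_min.get)          # ascending h_min
    best = (float("inf"), set())
    for k in range(1, len(order) + 1):
        S = set(order[:k])
        val = matthews_bound(h_min, S) - lam * k
        if val < best[0]:
            best = (val, S)
    return best

# hypothetical values of min_a H(a, b) for nodes b = 0..4
h_min = {0: 3.0, 1: 5.0, 2: 2.0, 3: 8.0, 4: 4.0}
val, S = select_nodes(h_min, lam=1.0)
```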
In addition, we can formulate the problem of minimizing the probability that the cover time exceeds a given threshold. The value of $Pr(C(S) > L)$ can be approximated by taking a set of sample paths $\omega_{1},\ldots,\omega_{N}$ of the walk and checking whether $C(S ; \omega_{i}) > L$ in each sample path, giving $$Pr(C(S) > L) \approx \frac{1}{N}\sum_{i=1}^{N}{\mathbf{1}\{C(S ; \omega_{i}) > L\}}.$$ The function $\mathbf{1}\{C(S ; \omega_{i}) > L\}$ is increasing and submodular in $S$, since it is equal to $1$ if there is a node in $S$ that is not reached during the first $L$ steps of the walk and $0$ otherwise. Hence the problem of minimizing the probability that the cover time exceeds a given threshold can be formulated as $$\min{\left\{Pr(C(S) > L) - \psi |S| : S \subseteq V\right\}}$$ and solved in polynomial time.
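The sample-average estimator above can be sketched as follows on a hypothetical ring graph; since the per-path indicator is monotone in $S$, the estimate computed on a common set of sample paths can only grow when nodes are added.

```python
import random

def cover_time(path, S):
    """First time along the path at which all of S has been visited,
    truncated at the path length."""
    need = set(S)
    for k, v in enumerate(path):
        need.discard(v)
        if not need:
            return k
    return len(path)

def prob_cover_exceeds(adj, S, L, n_paths=2000, steps=300, seed=3):
    """Sample-average estimate of Pr(C(S) > L) over simulated walks."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_paths):
        x, path = 0, [0]
        for _ in range(steps):
            x = rng.choice(adj[x])
            path.append(x)
        count += cover_time(path, S) > L
    return count / n_paths

# hypothetical 8-node ring; a larger target set gives a larger probability
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
p_small = prob_cover_exceeds(adj, {1, 7}, L=50)
p_large = prob_cover_exceeds(adj, {1, 4, 7}, L=50)
```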
\section{Introduction}
\label{sec:intro}
A random walk is a stochastic process over a graph, in which each node transitions to one of its neighbors at each time step according to a (possibly non-stationary) probability distribution \cite{levin2009markov,lovasz1993random}. Random walks on graphs are used to model and design a variety of stochastic systems. Opinion propagation in social networks, as well as gossip and consensus algorithms in communications and networked control systems, are modeled and analyzed via random walks~\cite{ghaderi2013opinion,boyd2006randomized,jadbabaie2003coordination,sarkar2011random}. Random walks also serve as distance metrics in clustering, image segmentation, and other machine learning applications \cite{khoa2011large,yen2007graph,fouss2007random}. The behavior of physical processes, such as heat diffusion and electrical networks, can also be characterized through equivalent random walks~\cite{lawler2010random, doyle1984random}.
One aspect of random walks that has achieved significant research attention is the expected time for the walk to reach a given node or set of nodes~\cite{brightwell1990maximum,feige1995tight}. Relevant metrics include the hitting time, defined as the expected time for a random walk to reach any node in a given set, the commute time, defined as the expected time for the walk to reach any node in a given set and then return to the origin of the walk, and the cover time, which is the time for the walk to reach all nodes in a desired set. These times give rise to bounds on the rate at which the walk converges to a stationary distribution~\cite{levin2009markov}, and also provide metrics for quantifying the centrality or distance between nodes~\cite{fouss2007random,yen2008family}.
The times of a random walk are related to system performance in a diverse set of network applications. The convergence rate of gossip and consensus algorithms is determined by the hitting time to a desired set of leader or anchor nodes~\cite{hunt2016algorithm,clark2014supermodular}. The effective resistance of an electrical network is captured by its commute time to the set of grounded nodes~\cite{chandra1996electrical}. The performance of random-walk query processing algorithms is captured by the cover time of the set of nodes needed to answer the query~\cite{avin2004efficient}. Optimal control problems such as motion planning and traffic analysis are described by the probability of reaching a target set or the resource cost per cycle of reaching a target set infinitely often~\cite{ding2011mdp}.
In each of these application domains, a set of nodes is selected in order to optimize one or more random walk metrics. The optimization problem depends on whether the distribution of the random walk is affected by the choice of input nodes. Some systems have a fixed walk distribution, determined by physical laws (such as heat diffusion or social influence propagation), for any set of input nodes. In other applications, the distribution can be selected by choosing a control policy, and hence the distribution of the walk and the set to be reached can be jointly optimized, as in Markov decision processes. In both cases, however, the number of possible sets of nodes is exponential in the network size, making the optimization problem computationally intractable in the worst case unless additional structure of the random walk times can be exploited. At present, such computational structure of random walks has received little attention from the research community.
In this paper, we investigate the hitting, commute, cover, and cycle times, as well as the reachability probability, of a random walk as functions of the set of network nodes that are reached by the walk. We show that these metrics exhibit a submodular structure, and give a unifying proof of submodularity for the different metrics with respect to the chosen set of nodes. We consider both fixed random walk transition distributions and the case where the set of nodes and control policy are jointly selected. We make the following specific contributions:
\begin{itemize}
\item We formulate the problem of jointly selecting a set of nodes and a control policy in order to maximize the probability of reaching the set from any initial state, as well as the average (per cycle) cost of reaching the set infinitely often.
\item We prove that each problem can be approximated in polynomial time with provable optimality bounds using submodular optimization techniques. Our approach is to relate the existence of a probability distribution that satisfies a given cycle cost to the volume of a linear polytope, which is a submodular function of the desired set. We extend our approach to joint optimization of reachability and cycle cost, which we prove is equivalent to a matroid optimization problem.
\item In the case where the probability distribution is fixed, we develop a unifying framework, based on the submodularity of selecting subsets of stopping times, which includes proofs of the supermodularity of hitting and commute times and the submodularity of the cover time as special cases. Since the cover time is itself NP-hard to compute, we study and prove submodularity of standard upper bounds for the cover time.
\item We evaluate our results through numerical study, and show that the submodular structure of the system enables improvement over other heuristics such as greedy and centrality-based algorithms.
\end{itemize}
This paper is organized as follows. In Section \ref{sec:related}, we review the related work. In Section \ref{sec:preliminaries}, we present our system model and background on submodularity. In Section \ref{sec:optimal}, we demonstrate submodularity of random walk times when the walk distribution is chosen to optimize the times. In Section \ref{sec:fixed}, we study submodularity of random walk times with fixed distribution. In Section \ref{sec:simulation}, we present numerical results. In Section \ref{sec:conclusion}, we conclude the paper.
\section{Background and Preliminaries}
\label{sec:preliminaries}
This section gives background on random walk times, Markov decision processes, and submodularity, and introduces the notation used throughout the paper.
\subsection{Random Walk Times}
\label{subsec:RW}
Throughout the paper, we let $\mathbf{E}(\cdot)$ denote the expectation of a random variable. Let $G = (V,E)$ denote a graph with vertex set $V$ and edge set $E$. A random walk is a discrete-time random process $X_{k}$ with state space $V$. The distribution of the walk is defined by a set of functions $P_{k} : V^{k} \rightarrow \Pi^{V}$, where $$V^{k} = \underbrace{V \times \cdots \times V}_{k \mbox{ times}}$$ and $\Pi^{V}$ is the simplex of probability distributions over $V$. The function $P_{k}$ is defined by $$P_{k}(v_{1},\ldots,v_{k}) = Pr(X_{k} = v_{k} | X_{1} = v_{1}, \cdots, X_{k-1} = v_{k-1}).$$ The random walk is Markovian if, for each $k$, there exists a stochastic matrix $P_{k}$ such that $Pr(X_{k} = v_{k} | X_{1} = v_{1},\ldots,X_{k-1} = v_{k-1}) = P_{k}(v_{k-1},v_{k})$. Matrix $P_{k}$ is denoted as the transition matrix at time $k$. The random walk is Markovian and stationary if $P_{k} \equiv P$ for all $k$ and some stochastic matrix $P$. A Markovian and stationary walk is \emph{ergodic}~\cite{aldous2002reversible} if there exists a probability distribution $\pi$ such that, for any initial distribution $\phi$, $\lim_{k \rightarrow \infty}{\phi P^{k}} = \pi$, implying that the distribution of the walk will eventually converge to $\pi$ regardless of the initial distribution.
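For a stationary, ergodic walk, the convergence $\phi P^{k} \rightarrow \pi$ can be checked numerically. The sketch below uses a lazy walk on a three-node path (a hypothetical example; its stationary distribution is proportional to node degree):

```python
import numpy as np

# Lazy simple random walk on the path 0-1-2: stay put w.p. 1/2, else move to a neighbor.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

phi = np.array([1.0, 0.0, 0.0])  # initial distribution concentrated at node 0
dist = phi.copy()
for _ in range(200):
    dist = dist @ P  # dist is phi P^k after k iterations

pi = np.array([0.25, 0.5, 0.25])  # degree-proportional stationary distribution
```

The second-largest eigenvalue of this lazy chain is $1/2$, so 200 iterations drive $\phi P^{k}$ to $\pi$ far below any reasonable tolerance.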
Let $S \subseteq V$ be a subset of nodes in the graph. The hitting, commute, and cover time of $S$ are defined as follows.
\begin{definition}
\label{def:times}
Let random variables $\nu(S)$, $\kappa(S)$, and $\phi(S)$ be defined as
\begin{eqnarray*}
\nu(S) &=& \min{\{k : X_{k} \in S\}}\\
\kappa(S) &=& \min{\{k : X_{k} = X_{1}, X_{j} \in S \mbox{ for some } j < k\}} \\
\phi(S) &=& \min{\{k : \forall s \in S, \ \exists j \leq k \mbox{ s.t. } X_{j} = s\}}
\end{eqnarray*}
The hitting time of $S$ from a given node $v$ is equal to $H(v,S) \triangleq \mathbf{E}(\nu(S) | X_{1} = v)$. The commute time of $S$ from node $v$ is equal to $K(v,S) = \mathbf{E}(\kappa(S) | X_{1} = v)$. The cover time of $S$ from node $v$ is equal to $C(v,S) = \mathbf{E}(\phi(S) | X_{1} = v)$.
\end{definition}
Intuitively, the hitting time is the expected time for the random walk to reach the set $S$, the commute time is the expected time for a walk starting at a node $v$ to reach any node in $S$ and return to $v$, and the cover time is the expected time for the walk to reach all nodes in the set $S$. If $\pi$ is a probability distribution over $V$, we can further generalize the above definitions to $H(\pi,S)$, $K(\pi,S)$, and $C(\pi,S)$, which are the expected hitting, commute, and cover times when the initial state is chosen from distribution $\pi$.
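These expectations can be estimated by simulation. The sketch below estimates $H(v,S)$ on a 4-cycle, where the hitting time of a node at distance $d$ on an $n$-cycle is $d(n-d)$, a standard fact; the graph and sample count are illustrative choices:

```python
import random

def hitting_time_estimate(adj, start, S, n_samples=20000, seed=1):
    """Monte Carlo estimate of H(start, S): average number of steps until the walk,
    started at `start`, first enters the set S."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        v, k = start, 0
        while v not in S:
            v = rng.choice(adj[v])
            k += 1
        total += k
    return total / n_samples

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # the 4-cycle
h = hitting_time_estimate(adj, 0, {2})  # exact value: d*(n-d) = 2*2 = 4
```

The commute and cover times admit the same treatment, replacing the stopping condition in the inner loop with the events in Definition 1.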
The times described above are all special cases of stopping times of a stochastic process.
\begin{definition}[\cite{levin2009markov}, Ch. 1]
\label{def:stopping-time}
A stopping time $Z$ of a random walk $X_{k}$ is a $\{1,2,\ldots,\infty\}$-valued random variable such that, for all $k$, the event $\{Z = k\}$ is determined by $X_{1},\ldots,X_{k}$.
\end{definition}
The hitting, cover, and commute times are stopping times.
\subsection{Markov Decision Processes}
\label{subsec:MDP}
A Markov Decision Process (MDP) is a generalization of a Markov chain, defined as follows.
\begin{definition}
\label{def:MDP}
An MDP $\mathcal{M}$ is a discrete-time random process $X_{k}$ defined by a tuple $\mathcal{M} = (V, \{A_{i} : i \in V\}, P)$, where $V$ is a set of states, $A_{i}$ is a set of actions at state $i$, and $P$ is a transition probability function defined by $$P(i,a,j) = Pr(X_{k+1} = j | X_{k} = i, \mbox{action $a$ chosen at time $k$}).$$
\end{definition}
Define $\mathcal{A} = \bigcup_{i=1}^{n}{A_{i}}$ and $A = |\mathcal{A}|$. When the action set of a state is empty, the transition probability is a function of the current state only.
The set of actions at each state corresponds to the possible control inputs that can be supplied to the MDP at that state. A \emph{control policy} is a function that takes as input a sequence of states $X_{1},\ldots,X_{k}$ and gives as output an action $a_{k} \in A_{X_{k}}$. A stationary control policy is a control policy $\mu$ that depends only on the current state, and hence can be characterized by a function $\mu: V \rightarrow A$. We let $\mathcal{P}$ denote the set of valid policies, i.e., the policies $\mu$ satisfying $\mu(i) \in A_{i}$ for all $i$. The random walk induced by a stationary policy $\mu$ is a stationary random walk with transition matrix $P_{\mu}$ defined by $P_{\mu}(i,j) = P(i,\mu(i),j)$.
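The transition matrix $P_{\mu}$ induced by a stationary policy can be assembled directly; a sketch follows, where the dictionary layout of $P$ is an assumed encoding, not something fixed by the text:

```python
import numpy as np

def induced_chain(P, mu):
    """Build P_mu(i, j) = P(i, mu(i), j): the stationary random walk induced by
    stationary policy mu. P maps (state, action) pairs to length-n probability
    vectors (an assumed encoding of the MDP transition function)."""
    n = len(mu)
    P_mu = np.zeros((n, n))
    for i, a in enumerate(mu):
        P_mu[i] = P[(i, a)]
    return P_mu

# Hypothetical 2-state MDP with actions 'stay' and 'go' at each state.
P = {(0, 'stay'): np.array([1.0, 0.0]), (0, 'go'): np.array([0.2, 0.8]),
     (1, 'stay'): np.array([0.0, 1.0]), (1, 'go'): np.array([0.7, 0.3])}
P_go = induced_chain(P, ['go', 'go'])
```

Each row of $P_{\mu}$ is a probability vector, so the result is a stochastic matrix as required.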
The control policy $\mu$ is selected in order to achieve a design goal of the system. Such goals are quantified via specifications on the random process $X_{k}$. Two relevant specifications are \emph{safety} and \emph{liveness} constraints. A safety constraint specifies that a set of states $R$ should never be reached by the walk. A liveness constraint specifies that a given set of states $S$ must be reached infinitely often. In an MDP, two optimization problems arise in order to satisfy such constraints, namely, the reachability and average-cost-per-cycle problems.
The reachability problem consists of selecting a policy in order to maximize the probability that a desired set of states $S$ is reached by the Markov process while the unsafe set $R$ is not reached. The average cost per cycle (ACPC) problem is defined via the following metric.
\begin{definition}
\label{def:ACPC}
The average cost per cycle metric from initial state $s \in V$ under policy $\mu$ is defined by
\begin{equation}
\label{eq:ACPC}
J_{\mu}(s) = \limsup_{N \rightarrow \infty}{\mathbf{E}\left\{\frac{\sum_{k=0}^{N}{g(X_{k},\mu_{k}(X_{k}))}}{C(\mu,N)}\right\}},
\end{equation}
where $g(X_{k},\mu_{k}(X_{k}))$ is the cost of taking action $\mu_{k}(X_{k})$ at state $X_{k}$ and $C(\mu,N)$ is the number of cycles (i.e., the number of times the set $S$ is reached) up to time $N$ under policy $\mu$.
\end{definition}
The average cost per cycle can be viewed as the average number of steps in between times when the set $S$ is reached. The ACPC problem consists of choosing the set $S$ and policy $\mu$ in order to minimize $J(s)$. Applications of this problem include motion planning, in which the goal is to reach a desired state infinitely often while minimizing energy consumption.
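For unit action costs, the ACPC reduces to the long-run average number of steps between visits to $S$. A Monte Carlo sketch for a fixed policy follows (the induced chain is a simple walk on a 4-cycle, an illustrative choice):

```python
import random

def acpc_estimate(adj, S, start=0, n_steps=200_000, seed=2):
    """Estimate N / C(mu, N) for unit costs: total steps taken divided by the
    number of completed cycles, i.e., visits to the set S."""
    rng = random.Random(seed)
    v, visits = start, 0
    for _ in range(n_steps):
        v = rng.choice(adj[v])
        if v in S:
            visits += 1
    return n_steps / visits

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # the 4-cycle
J = acpc_estimate(adj, S={2})  # expected return time to one node of a 4-cycle is 4
```

On the 4-cycle the stationary distribution is uniform, so the expected return time to a single node is $1/\pi(v) = 4$, which the estimate recovers.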
We define $J_{\mu} \in \mathbb{R}^{n}$ as the vector of ACPC values for different initial states, so that $J_{\mu}(s)$ is the ACPC value for state $s$. In the special case where all actions have cost $1$, Eq. (\ref{eq:ACPC}) is equivalent to
\begin{equation}
\label{eq:ACPC-same-cost}
J_{\mu}(s) = \limsup_{N \rightarrow \infty}{\left\{\frac{N}{C(\mu,N)}\right\}}.
\end{equation}
We focus on this case in what follows. It has been shown in \cite{ding2014optimal} that the optimal policy $\mu^{\ast}$ that minimizes the ACPC is independent of the initial state. The following theorem characterizes the minimum ACPC policy $\mu^{\ast}$.
\begin{theorem}[\cite{ding2014optimal}]
\label{theorem:min-ACPC}
The optimal ACPC is given by $J_{\mu^{\ast}} = \lambda\mathbf{1}$, where $\lambda \in \mathbb{R}$ and there exist vectors $h$ and $\nu$ satisfying
\begin{eqnarray}
J_{\mu^{\ast}} + h &=& \mathbf{1} + P_{\mu^{\ast}}h + \overline{P}_{\mu^{\ast}}J_{\mu^{\ast}} \\
P_{\mu^{\ast}} &=& (I-\overline{P}_{\mu^{\ast}})h + \nu \\
\nonumber
\lambda + h(i) &=& \min_{a \in A_{i}}{\left[1 + \sum_{j=1}^{n}{P(i,a,j)h(j)}\right.} \\
\label{eq:ACPC-optimal}
&& \left. \qquad + \lambda \sum_{j \notin S}{P(i,a,j)}\right].
\end{eqnarray}
$P_{\mu^{\ast}}$ is the transition matrix induced by $\mu^{\ast}$ and
\begin{equation}
\overline{P}_{\mu^{\ast}}(i,j) = \left\{
\begin{array}{ll}
P_{\mu^{\ast}}(i,j), & j \notin S \\
0, & \mbox{else}
\end{array}
\right.
\end{equation}
\end{theorem}
\subsection{Submodularity}
\label{subsec:submod}
Submodularity is a property of set functions $f: 2^{W} \rightarrow \mathbb{R}$, where $W$ is a finite set and $2^{W}$ is the set of all subsets of $W$. A function is submodular if, for any sets $S$ and $T$ with $S \subseteq T \subseteq W$ and any $v \notin T$, $$f(S \cup \{v\}) - f(S) \geq f(T \cup \{v\}) - f(T).$$ A function is supermodular if $-f$ is submodular, while a function is modular if it is both submodular and supermodular. For any modular function $f(S)$, a set of coefficients $\{c_{i} : i \in W\}$ can be defined such that $$f(S) = \sum_{i \in S}{c_{i}}.$$ Furthermore, for any set of coefficients $\{c_{i} : i \in W\}$, the function $f(S) = \max{\{c_{i} : i \in S\}}$ is increasing and submodular, while the function $\min{\{c_{i} : i \in S\}}$ is decreasing and supermodular. Any nonnegative weighted sum of submodular (resp. supermodular) functions is submodular (resp. supermodular).
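The diminishing-returns inequality can be verified by brute force on small ground sets. The sketch below checks it for $f(S)=\max{\{c_{i}: i \in S\}}$, taking $f(\emptyset)=0$ for nonnegative weights (an added convention, not from the text):

```python
from itertools import combinations

def check_submodular(f, ground):
    """Brute-force check: f(S + v) - f(S) >= f(T + v) - f(T) for all S <= T, v not in T."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for S in subsets:
        for T in subsets:
            if S <= T:
                for v in ground - T:
                    if f(S | {v}) - f(S) < f(T | {v}) - f(T):
                        return False
    return True

c = {0: 3.0, 1: 1.0, 2: 2.0, 3: 5.0}
f_max = lambda S: max((c[i] for i in S), default=0.0)
ok = check_submodular(f_max, set(c))
```

The marginal gain of adding $v$ is $\max{\{c_{v}-f(S),\,0\}}$, which only shrinks as $f(S)$ grows, so the check passes for any nonnegative weights.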
A matroid is defined as follows.
\begin{definition}
\label{def:matroid}
A matroid $\mathcal{M}=(V,\mathcal{I})$ is defined by a set $V$ and a collection of subsets $\mathcal{I}$. The set $\mathcal{I}$ satisfies the following conditions: (i) $\emptyset \in \mathcal{I}$, (ii) $B \in \mathcal{I}$ and $A \subseteq B$ implies that $A \in \mathcal{I}$, and (iii) If $|A| < |B|$ and $A, B \in \mathcal{I}$, then there exists $v \in B \setminus A$ such that $(A \cup \{v\}) \in \mathcal{I}$.
\end{definition}
The collection of sets $\mathcal{I}$ is denoted as the independent sets of the matroid. A \emph{basis} is a maximal independent set. The uniform matroid $\mathcal{M}_{k}$ is defined by $A \in \mathcal{I}$ iff $|A| \leq k$. A \emph{partition matroid} is defined as follows.
\begin{definition}
\label{def:partition}
Let $V = V_{1} \cup \cdots \cup V_{m}$ with $V_{i} \cap V_{j} = \emptyset$ for $i \neq j$ be a partition of a set $V$. The partition matroid $\mathcal{M} = (V, \mathcal{I})$ is defined by $A \in \mathcal{I}$ if $|A \cap V_{i}| \leq 1$ for all $i=1,\ldots,m$.
\end{definition}
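An independence oracle for the partition matroid is a one-liner; a sketch with a hypothetical partition:

```python
def is_independent(A, partition):
    """Partition-matroid independence: A contains at most one element of each block V_i."""
    return all(len(A & block) <= 1 for block in partition)

blocks = [{0, 1}, {2, 3}, {4}]            # a partition of V = {0,...,4}
ok_one_per_block = is_independent({0, 2, 4}, blocks)
ok_two_in_block = is_independent({0, 1}, blocks)
```

Such an oracle is all that greedy algorithms over matroids require.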
Finally, given two matroids $\mathcal{M}_{1} = (V, \mathcal{I}_{1})$ and $\mathcal{M}_{2} = (V, \mathcal{I}_{2})$, the union $\mathcal{M} = \mathcal{M}_{1} \vee \mathcal{M}_{2}$ is a matroid in which $A \in \mathcal{I}$ iff $A = A_{1} \cup A_{2}$ for some $A_{1} \in \mathcal{I}_{1}$ and $A_{2} \in \mathcal{I}_{2}$.
\section{Related Work}
\label{sec:related}
Commute, cover, and hitting times have been studied extensively, dating to classical bounds on the mixing time \cite{aldous2002reversible,levin2009markov}. Generalizations to these times have been proposed in \cite{coppersmith1993collisions,banderier2000generalized}. Connections between random walk times and electrical resistor networks were described in \cite{chandra1996electrical}. These classical works, however, do not consider the submodularity of the random walk times.
Submodularity of random walk times has been investigated based on connections between the hitting and commute times of random walks, and the performance of linear networked systems \cite{clark2014supermodular,clark2014minimizing}. The supermodularity of the commute time was shown in \cite{clark2014supermodular}. The supermodularity of the hitting time was shown in \cite{clark2014minimizing} and further studied in \cite{hunt2016algorithm}. These works assumed a fixed, stationary transition probability distribution, and also did not consider submodularity of the cover time. Our framework derives these existing results as special cases of a more general result, while also considering non-stationary transition probability distributions.
Random walk times have been used for image segmentation and clustering applications. In these settings, the distance between two locations in an image is quantified via the commute time of a random walk with a probability distribution determined by a heat kernel \cite{yen2007graph}. Clustering algorithms were then proposed based on minimizing the commute time between two sets \cite{qiu2007clustering}.
This work is related to the problem of selecting an optimal control policy for a Markov decision process~\cite{fu2016optimal}. Prior work has investigated selecting a control policy in order to maximize the probability of reaching a desired target set \cite{papadimitriou1987complexity,maiga2016comprehensive}, or to minimize the cost per cycle of reaching the target set \cite{belta2016formal}. These works assume that the target set is given. In this paper, we consider the dual problem of selecting a target set in order to optimize these metrics.
\section{Numerical Results}
\label{sec:simulation}
We evaluated our approach through numerical study using Matlab. We simulated both the fixed and optimal distribution cases. In the case of fixed distribution, our goal was to determine how the minimum cover time varied as a function of the number of input nodes and the network size. We generated an Erd\H{o}s--R\'enyi random graph $G(N,p)$, in which there is an edge $(i,j)$ from node $i$ to node $j$ with probability $p$, independent of all other edges, and the total number of nodes is $N$. The value of $N$ varied from $N=10$ to $N=50$.
In the MDP case, we simulated the average cost per cycle (ACPC) problem. We first considered a randomly generated MDP, in which each state $i$ had four actions and the probability distribution $P(i,a,\cdot)$ was generated uniformly at random. We considered the problem of selecting a minimum-size set of states $S$ in order to satisfy a given bound on ACPC. The results are shown in Figure \ref{fig:fixed}(a). We found that the submodular approach outperformed a random selection heuristic even for the relatively homogeneous randomly generated MDPs. We also found that the number of states required to achieve a given bound on ACPC satisfied a diminishing returns property consistent with the submodular structure of ACPC.
We then considered a lattice graph. The set of actions corresponded to moving left, right, up, or down. For each action, the walker was assumed to move in the desired direction with probability $p_{c}$, and to move in a uniformly randomly chosen direction otherwise. If an ``invalid'' action was chosen, such as moving up from the top-most position in the lattice, then a feasible step was chosen uniformly at random.
Figure \ref{fig:fixed}(b) shows a comparison between three algorithms. The first algorithm selects a random set of $k$ nodes as inputs. The second algorithm selects input nodes via a centrality-based heuristic, in which the most centrally located nodes are chosen as inputs. The third algorithm is the proposed submodular approach (Algorithm \ref{algo:ACPC}). We found that the submodular approach slightly outperformed the centrality-based method while significantly improving on random selection.
Figure \ref{fig:fixed}(c) compares the optimal selection algorithm with a greedy heuristic and random selection of inputs. We found that the greedy algorithm closely approximates the optimum at a lower computation cost, while both outperformed the random input selection.
\section{Random Walks on Markov Decision Processes}
\label{sec:optimal}
In this section, we consider the problem of selecting a set $S$ of states for an MDP to reach in order to optimize a performance metric. We consider two problems, namely, the problem of selecting a set of states $S$ in order to maximize the reachability probability to $S$ while minimizing the probability of reaching an unsafe set $R$, and the problem of selecting a set of states $S$ in order to minimize the ACPC. A motivating scenario is a setting where an unmanned vehicle must reach a refueling station or transit depot infinitely often, and the goal is to place the set of stations in order to maximize the probability of reaching one or to minimize the cost of doing so.
\subsection{Reachability Problem Formulation}
\label{subsec:reachability-formulation}
We consider the problem of selecting a set $S$ with at most $k$ nodes in order to maximize the probability of reaching $S$ under the optimal policy $\mu$. Let $\sigma(S)$ denote the event that the walk reaches $S$ at some finite time and does not reach the unsafe set $R$ at any time. The problem formulation is given by
\begin{equation}
\label{eq:reachability-formulation}
\begin{array}{ll}
\mbox{maximize} & \max_{\mu \in \mathcal{P}}{Pr(\sigma(S)|\mu)} \\
S \subseteq V & \\
\mbox{s.t.} & |S| \leq k
\end{array}
\end{equation}
Here $Pr(\sigma(S) | \mu)$ denotes the probability that $\sigma(S)$ occurs when the policy is $\mu$. This formulation is equivalent to
\begin{equation}
\label{eq:equiv-reachability-formulation}
\begin{array}{lll}
\mbox{maximize} & \mbox{max} & Pr(\sigma(S)|\mu) \\
\mu & S: |S| \leq k &
\end{array}
\end{equation}
The following known result gives a linear programming approach to maximizing the probability of reachability for a fixed set $S$.
\begin{lemma}[\cite{baier2008principles}, Theorem 10.105]
\label{lemma:fixed-S-reachability}
The optimal value of $\max{\{Pr(\sigma(S)|\mu) : \mu \in \mathcal{P}\}}$ is equal to
\begin{equation}
\label{eq:fixed-S-reachability}
\begin{array}{ll}
\mbox{min} & \mathbf{1}^{T}\mathbf{x} \\
\mathbf{x} \in \mathbb{R}^{n} & \\
\mbox{s.t.} & x_{i} \in [0,1] \ \forall i \\
& x_{i} = 1 \ \forall i \in S, x_{i} = 0 \ \forall i \in R \\
& x_{i} \geq \sum_{j=1}^{n}{P(i,a,j)x_{j}} \ \forall i \in V \setminus R, a \in A_{i}
\end{array}
\end{equation}
\end{lemma}
The optimal solution $\mathbf{x}$ to the linear program (\ref{eq:fixed-S-reachability}) is a vector in $\mathbb{R}^{n}$, where $x_{i}$ is the probability that $\sigma(S)$ occurs under the optimal policy when the initial state is $i$.
In addition to giving the optimal value of the maximal reachability problem, Eq. (\ref{eq:fixed-S-reachability}) can also be used to compute the optimal policy. In order for $\mathbf{x}$ to be the solution to (\ref{eq:fixed-S-reachability}), for each $i \in V \setminus (R \cup S)$, there must be an action $a_{i}^{\ast}$ such that $$x_{i} = \sum_{j=1}^{n}{P(i,a_{i}^{\ast},j)x_{j}}.$$ Otherwise, it would be possible to decrease $x_{i}$ and hence the objective function of (\ref{eq:fixed-S-reachability}). Hence the optimal policy $\mu$ is given by $\mu(i) = a_{i}^{\ast}$.
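The solution of the linear program can equivalently be computed by iterating $x_{i} \leftarrow \max_{a}\sum_{j}P(i,a,j)x_{j}$ with $x_{i}$ pinned to $1$ on $S$ and $0$ on $R$ (value iteration). A sketch on a hypothetical three-state MDP, not an example from the text:

```python
def max_reach(P, n, S, R, iters=200):
    """Fixed-point computation of the reachability LP: x_i = max_a sum_j P(i,a,j) x_j,
    with x_i = 1 on S and x_i = 0 on R. Each x_i converges to the maximal probability
    of reaching S while avoiding R from state i."""
    x = [1.0 if i in S else 0.0 for i in range(n)]
    rows = {}
    for (i, a), row in P.items():
        rows.setdefault(i, []).append(row)
    for _ in range(iters):
        for i in rows:
            if i in S or i in R:
                continue
            x[i] = max(sum(p * x[j] for j, p in enumerate(row)) for row in rows[i])
    return x

# State 1 is the target (S), state 2 is unsafe (R); both are absorbing.
P = {(0, 'a'): [0.0, 0.8, 0.2],   # reach the target w.p. 0.8, the unsafe set w.p. 0.2
     (0, 'b'): [0.5, 0.5, 0.0]}   # reach the target w.p. 0.5, otherwise stay at 0
x = max_reach(P, n=3, S={1}, R={2})
```

Under action `'b'` the walk reaches the target almost surely while never touching the unsafe state, so the maximal reachability probability from state 0 is $1$, which the iteration recovers; the maximizing action at each state yields the policy, as described above.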
We first define a relaxation of (\ref{eq:fixed-S-reachability}). Let $\rho > 0$, and define the relaxation by
\begin{equation}
\label{eq:relaxed-fixed-opt}
\begin{array}{ll}
\mbox{minimize} & \mathbf{1}^{T}\mathbf{x} + \rho\left(\sum_{i \in S}{(1-x_{i})}\right) \\
\mathbf{x} & \\
\mbox{s.t.} & \mathbf{x} \in [0,1]^{n} \\
& x_{i} = 0 \ \forall i \in R \\
& x_{i} \geq \sum_{j=1}^{n}{P(i,a,j)x_{j}} \ \forall i \in V \setminus R, a \in A_{i}
\end{array}
\end{equation}
The following lemma shows that (\ref{eq:relaxed-fixed-opt}) is equivalent to (\ref{eq:fixed-S-reachability}).
\begin{lemma}
\label{lemma:relaxed-reachability-equivalent}
When $\rho > n$, the optimal solutions and optimal values of (\ref{eq:fixed-S-reachability}) and (\ref{eq:relaxed-fixed-opt}) are equal.
\end{lemma}
\begin{IEEEproof}
We first show that the solution to (\ref{eq:relaxed-fixed-opt}) satisfies $x_{i}=1$ for all $i \in S$. Suppose that this is not the case. Let $\mathbf{x}^{\ast}$ denote the solution to (\ref{eq:relaxed-fixed-opt}), and suppose that $x_{r}^{\ast} = 1-\epsilon$ for some $\epsilon > 0$ and $r \in S$. Now, construct a new vector $\mathbf{x}^{\prime} \in \mathbb{R}^{n}$ as
\begin{displaymath}
x_{i}^{\prime} = \left\{
\begin{array}{ll}
0, & i \in R \\
\min{\{x_{i}^{\ast} + \epsilon, 1\}}, & i \notin R
\end{array}
\right.
\end{displaymath}
Note that for all $i$, $0 \leq (x_{i}^{\prime} - x_{i}^{\ast}) \leq \epsilon$. We will first show that $\mathbf{x}^{\prime}$ is feasible under the constraints of (\ref{eq:relaxed-fixed-opt}), and then show that the resulting objective function value is less than the value produced by $\mathbf{x}^{\ast}$, contradicting optimality of $\mathbf{x}^{\ast}$.
By construction, $\mathbf{x}^{\prime} \in [0,1]^{n}$ and $x_{i}^{\prime} = 0$ for all $i \in R$. For each $i \notin R$, suppose first that $x_{i}^{\prime} = 1$. Then $$x_{i}^{\prime} = 1 = \sum_{j=1}^{n}{P(i,a,j)1} \geq \sum_{j=1}^{n}{P(i,a,j)x_{j}^{\prime}}.$$ Suppose next that $x_{i}^{\prime} = x_{i}^{\ast} + \epsilon$. Then for all $a \in A_{i}$,
\begin{IEEEeqnarray*}{rCl}
\sum_{j=1}^{n}{P(i,a,j)x_{j}^{\prime}} &=& \sum_{j=1}^{n}{P(i,a,j)(x_{j}^{\ast} + (x_{j}^{\prime}-x_{j}^{\ast}))} \\
&=& \sum_{j=1}^{n}{P(i,a,j)x_{j}^{\ast}} + \sum_{j=1}^{n}{P(i,a,j)(x_{j}^{\prime}-x_{j}^{\ast})} \\
&\leq& \sum_{j=1}^{n}{P(i,a,j)x_{j}^{\ast}} + \epsilon \sum_{j=1}^{n}{P(i,a,j)} \\
&\leq& x_{i}^{\ast} + \epsilon = x_{i}^{\prime}
\end{IEEEeqnarray*}
implying that $\mathbf{x}^{\prime}$ is feasible. The objective function value of $\mathbf{x}^{\prime}$ is given by
\begin{IEEEeqnarray*}{rCl}
\mathbf{1}^{T}\mathbf{x}^{\prime} + \rho \left(\sum_{i \in S}{(1-x_{i}^{\prime})}\right) &=& \mathbf{1}^{T}\mathbf{x}^{\ast} + \mathbf{1}^{T}(\mathbf{x}^{\prime}-\mathbf{x}^{\ast}) \\
&& + \rho\left(\sum_{i \in S \setminus \{r\}}{(1-x_{i}^{\prime})}\right) \\
&\leq& \mathbf{1}^{T}\mathbf{x}^{\ast} + \epsilon n + \rho\left(\sum_{i \in S \setminus \{r\}}{(1-x_{i}^{\ast})}\right) \\
&<& \mathbf{1}^{T}\mathbf{x}^{\ast} + \epsilon\rho + \rho\sum_{i \in S \setminus \{r\}}{(1-x_{i}^{\ast})} \\
&=& \mathbf{1}^{T}\mathbf{x}^{\ast} + \rho\sum_{i \in S}{(1-x_{i}^{\ast})}
\end{IEEEeqnarray*}
contradicting the assumption that $\mathbf{x}^{\ast}$ is the optimal value of (\ref{eq:relaxed-fixed-opt}). Hence the optimal value of (\ref{eq:relaxed-fixed-opt}) minimizes $\mathbf{1}^{T}\mathbf{x}$ while satisfying $x_{i} = 1$ for all $i \in S$, which is equivalent to the solution of (\ref{eq:fixed-S-reachability}).
\end{IEEEproof}
Letting $\Pi$ denote the feasible region of (\ref{eq:relaxed-fixed-opt}), the problem of maximizing reachability is therefore equivalent to
\begin{equation}
\label{eq:relaxed-opt}
\begin{array}{lll}
\mbox{maximize} & \mbox{min} & \mathbf{1}^{T}\mathbf{x} + \rho \left(\sum_{i \in S}{(1-x_{i})}\right) \\
S: |S| \leq k & \mathbf{x} \in \Pi &
\end{array}
\end{equation}
The min-max inequality implies that (\ref{eq:relaxed-opt}) can be bounded above by
\begin{equation}
\label{eq:reachability-tractable}
\begin{array}{lll}
\mbox{min} & \mbox{max} & \mathbf{1}^{T}\mathbf{x} + \rho \left(\sum_{i \in S}{(1-x_{i})}\right) \\
\mathbf{x} \in \Pi & S: |S| \leq k &
\end{array}
\end{equation}
The objective function of (\ref{eq:reachability-tractable}) is a pointwise maximum of convex functions and is therefore convex. A subgradient of the objective function at any point $\mathbf{x}_{0}$, denoted $v(\mathbf{x}_{0})$, is given by
\begin{displaymath}
v(\mathbf{x}_{0})_{i} = \left\{
\begin{array}{ll}
1-\rho, & i \in S_{max}(\mathbf{x}_{0}) \\
1, & \mbox{else}
\end{array}
\right.
\end{displaymath}
where $$S_{max}(\mathbf{x}_{0}) = \arg\min{\left\{\sum_{i \in S}{(x_{0})_{i}} : |S| \leq k\right\}}.$$ This subgradient can be computed efficiently by selecting the $k$ smallest elements of $\mathbf{x}_{0}$. A polynomial-time algorithm for solving (\ref{eq:reachability-tractable}) can be obtained using interior-point methods, as shown in Algorithm \ref{algo:reachability}.
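A sketch of the subgradient computation follows. The maximizing set collects the $k$ entries of $\mathbf{x}_{0}$ with the smallest values, since these maximize $\rho\sum_{i \in S}(1-x_{i})$; the numbers are illustrative:

```python
import numpy as np

def subgradient(x0, k, rho):
    """Subgradient of x -> max_{|S| <= k} [1^T x + rho * sum_{i in S}(1 - x_i)] at x0.
    S_max holds the k smallest entries of x0; the subgradient is 1 - rho on S_max
    and 1 elsewhere."""
    S_max = set(np.argsort(x0)[:k].tolist())
    v = np.ones_like(x0)
    for i in S_max:
        v[i] = 1.0 - rho
    return v, S_max

v, S_max = subgradient(np.array([0.9, 0.1, 0.5, 0.3]), k=2, rho=5.0)
```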
\begin{center}
\begin{algorithm}[!htp]
\caption{Algorithm for selecting a set of states $S$ to maximize probability of reachability.}
\label{algo:reachability}
\begin{algorithmic}[1]
\Procedure{Max\_Reach}{$G=(V,E)$, $A$, $P$, $R$, $k$, $\epsilon$, $\delta$}
\State \textbf{Input}: Graph $G=(V,E)$, set of actions $A_{i}$ at each state $i$, probability distribution $P$, unsafe states $R$, number of states $k$, convergence parameters $\epsilon$ and $\delta$.
\State \textbf{Output}: Set of nodes $S$
\State $\Phi \leftarrow$ barrier function for polytope $\Pi$
\State $\mathbf{x} \leftarrow 0$
\State $\mathbf{x}^{\prime} \leftarrow \mathbf{1}$
\While{$||\mathbf{x}-\mathbf{x}^{\prime}||_{2} > \epsilon$}
\State $S \leftarrow \arg\min{\{\sum_{i \in S}{x_{i}}: |S| \leq k\}}$
\State $v \leftarrow 1$
\State $v_{i} \leftarrow (1-\rho) \ \forall i \in S$
\State $w \leftarrow \nabla_{\mathbf{x}}{\Phi(\mathbf{x})}$
\State $\mathbf{x}^{\prime} \leftarrow \mathbf{x}$
\State $\mathbf{x} \leftarrow \mathbf{x} - \delta (v + w)$
\EndWhile
\State $S \leftarrow \arg\min{\{\sum_{i \in S}{x_{i}}: |S| \leq k\}}$
\State \Return{$S$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{center}
The interior-point approach of Algorithm \ref{algo:reachability} gives an efficient algorithm for maximizing the probability of reaching $S$. We further observe that more general constraints than $|S| \leq k$ can be constructed. One possible constraint is to ensure that, for some partition $V_{1}, \ldots, V_{m}$, we have $|S \cap V_{i}| \geq 1$ for each $i=1,\ldots,m$. Intuitively, this constraint implies that there must be at least one state to be reached in each partition set $V_{i}$. This constraint is equivalent to the constraint $S \in \mathcal{M}_{k}$, where $\mathcal{M}_{k}$ is the union of the partition matroid and the uniform matroid of rank $k-m$. The calculation of $S_{max}(\mathbf{x}_{0})$ then becomes $$S_{max}(\mathbf{x}_{0}) = \arg\min{\left\{\sum_{i \in S}{(x_{0})_{i}} : S \in \mathcal{M}_{k}\right\}}.$$ This problem can also be solved in polynomial time due to the matroid structure of $\mathcal{M}_{k}$ using a greedy algorithm.
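For the matroid-constrained case, $S_{max}(\mathbf{x}_{0})$ maximizes the modular weight $\sum_{i \in S}(1-x_{i})$, so the standard matroid greedy algorithm applies. The sketch below uses a uniform matroid as the independence oracle, an illustrative stand-in for $\mathcal{M}_{k}$ (an oracle for the full matroid union is omitted):

```python
def greedy_max_weight(weights, indep):
    """Matroid greedy: scan elements by decreasing weight, keeping each element that
    preserves independence. Optimal for maximizing a modular weight over a matroid."""
    S = set()
    for i in sorted(range(len(weights)), key=lambda i: -weights[i]):
        if weights[i] > 0 and indep(S | {i}):
            S.add(i)
    return S

x0 = [0.9, 0.1, 0.5, 0.3, 0.7]
weights = [1.0 - xi for xi in x0]   # the modular weights 1 - x_i
uniform = lambda A: len(A) <= 2     # uniform matroid of rank 2 as the oracle
S_max = greedy_max_weight(weights, uniform)
```

With the uniform oracle the greedy output is simply the two smallest entries of $\mathbf{x}_{0}$; substituting the combined partition/uniform oracle enforces the per-block coverage constraint instead.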
\subsection{Minimizing the Average Cost Per Cycle}
\label{subsec:ACPC}
This section considers the problem of selecting a set $S$ in order to minimize the ACPC.
Based on Theorem \ref{theorem:min-ACPC}, in order to ensure that the minimum ACPC is no more than $\lambda$, it suffices to show that there is no $h$ satisfying (\ref{eq:ACPC-optimal}) for the chosen set $S$. Note that this condition is sufficient but not necessary. Hence the following optimization problem gives a lower bound on the ACPC:
\begin{equation}
\label{eq:ACPC-opt}
\begin{array}{lll}
\mbox{minimize} & \mbox{max} & \lambda \\
S & \lambda, h & \\
&
\mbox{s.t.} & \lambda + h(i) \leq 1 + \sum_{j=1}^{n}{P(i,a,j)h(j)} \\
& & +\lambda\sum_{j \notin S}{P(i,a,j)} \ \forall i, a \in A_{i} \\
\end{array}
\end{equation}
The following theorem gives a sufficient condition for the minimum ACPC.
\begin{theorem}
\label{theorem:ACPC-sufficient}
Suppose that, for any $h \in \mathbb{R}^{n}$, there exist $a$ and $i$ such that
\begin{equation}
\label{eq:ACPC-sufficient}
\lambda\mathbf{1}\{i \in S\} + h(i) - \sum_{j=1}^{n}{P(i,a,j)h(j)} > 1.
\end{equation}
Then the ACPC is bounded above by $\lambda$.
\end{theorem}
In order to prove Theorem \ref{theorem:ACPC-sufficient}, we introduce an extended state space that will have the same ACPC. The state space is defined as $\hat{V} = V \times \{0,1\}$. The sets of actions satisfy $\hat{A}_{(i,0)} = \hat{A}_{(i,1)} = A_{i}$. The transition probabilities are given by $$\hat{P}((i,1),a,(i,0)) = 1, \quad \hat{P}((i,0),a,(j,1)) = P(i,a,j).$$ Finally, the set $\hat{S}$ is constructed from the set $S$ via $$\hat{S} = \{(i,0) : i \in S\}.$$ The following result establishes the equivalence between the two formulations.
\begin{proposition}
\label{prop:ACPC-equivalent}
The minimum ACPC of $\mathcal{M}$ with set $S$ is equal to the minimum ACPC of $\hat{\mathcal{M}} = (\hat{V}, \hat{A}, \hat{P})$ with set $\hat{S}$.
\end{proposition}
\begin{IEEEproof}
There is a one-to-one correspondence between policies on $\mathcal{M}$ and policies on $\hat{\mathcal{M}}$. Indeed, any policy $\mu$ on $\mathcal{M}$ can be extended to a policy $\hat{\mu}$ on $\hat{\mathcal{M}}$ by setting $\hat{\mu}(i,0) = \mu(i)$ and $\hat{\mu}(i,1) = 1$ for all $i$. Furthermore, by construction, the ACPC for $\mathcal{M}$ with policy $\mu$ will be equal to the ACPC with policy $\hat{\mu}$. In particular, the cost per cycle of the minimum-cost policies will be equal.
\end{IEEEproof}
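The extended state space used in the proposition above can be constructed mechanically; a sketch, where the dictionary encoding is an assumption:

```python
def extend_mdp(P, n, S):
    """Build the extended MDP: each state i splits into (i,0) and (i,1); (i,1) moves
    deterministically to (i,0), and (i,0) follows P(i,a,.) into the (.,1) copies."""
    P_hat = {}
    for (i, a), row in P.items():
        P_hat[((i, 1), a)] = {(i, 0): 1.0}
        P_hat[((i, 0), a)] = {(j, 1): p for j, p in enumerate(row) if p > 0}
    S_hat = {(i, 0) for i in S}
    return P_hat, S_hat

P = {(0, 'a'): [0.0, 1.0], (1, 'a'): [0.5, 0.5]}  # hypothetical 2-state MDP
P_hat, S_hat = extend_mdp(P, n=2, S={1})
```

Every original transition takes exactly two steps in the extended MDP, so cycle structure, and hence the ACPC, is preserved up to the correspondence in the proof.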
We are now ready to prove Theorem \ref{theorem:ACPC-sufficient}.
\begin{IEEEproof}[Proof of Theorem \ref{theorem:ACPC-sufficient}]
For the MDP $\hat{\mathcal{M}}$, the constraint of Eq. (\ref{eq:ACPC-opt}) is equivalent to
\begin{eqnarray}
\label{eq:ACPC-1}
\lambda \mathbf{1}\{i \in S\} + \hat{h}(i,1) &\leq& \hat{h}(i,0) \\
\label{eq:ACPC-2}
\hat{h}(i,0) - \sum_{j=1}^{n}{P(i,a,j)\hat{h}(j,1)} &\leq& 1
\end{eqnarray}
for all $i$ and $a \in A_{i}$. Hence, in order for the minimum cost per cycle to be less than $\lambda$, at least one of (\ref{eq:ACPC-1}) or (\ref{eq:ACPC-2}) must fail for each $\hat{h} \in \mathbb{R}^{2N}$. For each $\hat{h}$, let $S_{\hat{h}} = \{i: \lambda + \hat{h}(i,1) > \hat{h}(i,0)\}$, so that the condition that the ACPC is less than $\lambda$ is equivalent to $$S_{\hat{h}} \cap S \neq \emptyset \quad \forall \hat{h} \in \mathbb{R}^{2N}.$$ Furthermore, we can combine Eq. (\ref{eq:ACPC-1}) and (\ref{eq:ACPC-2}) to obtain $$\lambda\mathbf{1}\{i \in S\} + \hat{h}(i,1) \leq 1 + \sum_{j=1}^{n}{P(i,a,j)\hat{h}(j,1)}.$$ For the MDP $\mathcal{M}$, define $h \in \mathbb{R}^{N}$ by $h(i) = \hat{h}(i,1)$ for all $i \in V$, so that
\begin{equation}
\label{eq:another-ACPC}
\lambda\mathbf{1}\{i \in S\} + h(i) \leq 1 + \sum_{j=1}^{n}{P(i,a,j)h(j)}.
\end{equation}
Since $\mathcal{M}$ and $\hat{\mathcal{M}}$ have the same ACPC, the existence of an $h$ satisfying (\ref{eq:another-ACPC}) is a necessary condition for the ACPC of $\mathcal{M}$ to be at least $\lambda$. The negation of this condition is exactly (\ref{eq:ACPC-sufficient}).
\end{IEEEproof}
We will now map the minimum ACPC problem to submodular optimization. As a preliminary, define the matrix $\overline{A} \in \mathbb{R}^{nA \times n}$, with rows indexed $\{(i,a) : i = 1,\ldots,n, a \in \mathcal{A}\}$, as
\begin{displaymath}
\overline{A}((i,a),j) = \left\{
\begin{array}{ll}
-P(i,a,j), & i \neq j \\
1-P(i,a,j), & i = j
\end{array}
\right.
\end{displaymath}
and the vector $b(S) \in \mathbb{R}^{nA}$, with entries indexed $\{(i,a): i =1,\ldots,n, a \in \mathcal{A}\}$, as
\begin{displaymath}
(b(S))_{i,a} = \left\{
\begin{array}{ll}
1-\lambda, & i \in S \\
1, & i \notin S
\end{array}
\right.
\end{displaymath}
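As a concrete illustration, $\overline{A}$ and $b(S)$ can be assembled row by row from the transition probabilities. The sketch below uses plain nested lists and a dictionary transition kernel; both encodings are our assumptions. Since each row of $P(i,a,\cdot)$ sums to one, each row of $\overline{A}$ sums to zero.

```python
def build_constraints(P, actions, n, S, lam):
    """Assemble A_bar (rows indexed by (state, action) pairs) and b(S)."""
    A_bar, b = [], []
    for i in range(n):
        for a in actions[i]:
            # Row (i, a): identity minus the transition probabilities.
            row = [(1.0 if i == j else 0.0) - P[i][a].get(j, 0.0)
                   for j in range(n)]
            A_bar.append(row)
            b.append(1.0 - lam if i in S else 1.0)
    return A_bar, b

# Two-state toy chain with a single action per state.
P = {0: {"a": {0: 0.5, 1: 0.5}}, 1: {"a": {0: 1.0}}}
actions = {0: ["a"], 1: ["a"]}
A_bar, b = build_constraints(P, actions, n=2, S={1}, lam=0.25)
```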
\begin{proposition}
\label{prop:ACPC-submod}
For any $\zeta > 0$, let $\mathcal{P}(\lambda,S)$ denote the polytope $$\mathcal{P}(\lambda,S) = \{h: \overline{A}h \leq b(S)\} \cap \{||h||_{\infty} \leq \zeta\}.$$ Then the function $r_{\lambda}(S) = \mbox{vol}(\mathcal{P}(\lambda,S))$ is decreasing and supermodular as a function of $S$. Furthermore, if $r_{\lambda}(S) = 0$, then the ACPC is bounded above by $\lambda$.
\end{proposition}
\begin{IEEEproof}
Define a sequence of functions $r^{N}_{\lambda}(S)$ as follows. For each $N$, partition the set $\mathcal{P}(0, \emptyset) = \{h : \overline{A}h \leq \mathbf{1}\} \cap \{||h||_{\infty} \leq \zeta\}$ into $N$ equally-sized regions with centers $x_{1},\ldots,x_{N}$ and volume $\delta_{N}$. Define $$r^{N}_{\lambda}(S) = \sum_{l=1}^{N}{\delta_{N}\mathbf{1}\{x_{l} \in \mathcal{P}(\lambda, S)\}}.$$
Since $\mathcal{P}(\lambda,S) \subseteq \mathcal{P}(0,\emptyset)$ for all $S$ and $\lambda$, we have that $$\mbox{vol}(\mathcal{P}(\lambda,S)) \approx r^{N}_{\lambda}(S)$$ and $$\lim_{N \rightarrow \infty}{r^{N}_{\lambda}(S)} = r_{\lambda}(S).$$
The term $\mathbf{1}\{x_{l} \in \mathcal{P}(\lambda,S)\}$ is equal to the decreasing supermodular function $$\mathbf{1}\{S_{x_{l}} \cap S = \emptyset \}.$$ Hence $r^{N}_{\lambda}(S)$ is a decreasing supermodular function, and $r_{\lambda}(S)$, as a limit of decreasing supermodular functions, is itself decreasing and supermodular. Finally, if $r_{\lambda}(S) = 0$, then there is no $h$ satisfying $\overline{A}h \leq b(S)$, and hence the ACPC is bounded above by $\lambda$ by Theorem \ref{theorem:ACPC-sufficient}.
\end{IEEEproof}
In Proposition \ref{prop:ACPC-submod}, the constraint $||h||_{\infty} \leq \zeta$ is added to ensure that the polytope is compact.
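Although the complexity analysis below relies on randomized volume computation, a simple Monte-Carlo sketch already illustrates the behavior of $r_{\lambda}(S)$. Because $b(S')$ is entrywise no larger than $b(S)$ when $S \subseteq S'$, the polytopes are nested, so estimates drawn from one common sample set are exactly monotonically decreasing in $S$, mirroring Proposition \ref{prop:ACPC-submod}. The two-state instance below is a toy of our own choosing.

```python
import random

def estimate_r(A_bar, b, samples):
    """Fraction of the sampled box lying in {h : A_bar h <= b}."""
    inside = sum(
        1 for h in samples
        if all(sum(row[j] * h[j] for j in range(len(h))) <= bi
               for row, bi in zip(A_bar, b))
    )
    return inside / len(samples)

random.seed(0)
zeta, lam = 2.0, 0.5
samples = [[random.uniform(-zeta, zeta) for _ in range(2)]
           for _ in range(2000)]
# Rows of A_bar for a two-state chain; b(S) shrinks as S grows.
A_bar = [[0.5, -0.5], [-1.0, 1.0]]
b_of_S = lambda S: [1.0 - lam if i in S else 1.0 for i in range(2)]
r0 = estimate_r(A_bar, b_of_S(set()), samples)
r1 = estimate_r(A_bar, b_of_S({0}), samples)
r2 = estimate_r(A_bar, b_of_S({0, 1}), samples)
```

With a shared sample set, $r_{2} \leq r_{1} \leq r_{0}$ holds deterministically, not just in expectation.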
Proposition \ref{prop:ACPC-submod} implies that ensuring that the ACPC is bounded above by $\lambda$ is equivalent to the submodular constraint $r_{\lambda}(S) = 0$. Hence, $r_{\lambda}(S)$ is a submodular metric that can be used to ensure a given bound $\lambda$ on ACPC. This motivates a bisection-based algorithm for solving the minimum ACPC problem (Algorithm \ref{algo:ACPC}).
\begin{center}
\begin{algorithm}[!htp]
\caption{Algorithm for selecting a set of states $S$ to minimize average cost per cycle.}
\label{algo:ACPC}
\begin{algorithmic}[1]
\Procedure{Min\_ACPC}{$V$, $A$, $P$, $R$, $k$}
\State \textbf{Input}: Set of states $V$, set of actions $A_{i}$ at each state $i$, probability distribution $P$, unsafe states $R$, number of states to be chosen $k$.
\State \textbf{Output}: Set of nodes $S$
\State $\lambda_{max} \leftarrow $ max ACPC for any $v \in \{1,\ldots,n\}$
\State $\lambda_{min} \leftarrow 0$
\While{$|\lambda_{max}-\lambda_{min}| > \delta$}
\State $S \leftarrow \emptyset$
\State $\lambda \leftarrow \frac{\lambda_{max}+\lambda_{min}}{2}$
\While{$r_{\lambda}(S) > 0$}
\State $v^{\ast} \leftarrow \arg\min{\{r_{\lambda}(S \cup \{v\}) : v \in \{1,\ldots,n\}\}}$
\State $S \leftarrow S \cup \{v^{\ast}\}$
\EndWhile
\If{$|S| \leq k$}
\State $\lambda_{max} \leftarrow \lambda$
\Else
\State $\lambda_{min} \leftarrow \lambda$
\EndIf
\EndWhile
\State \Return{$S$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{center}
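The bisection-plus-greedy structure of Algorithm \ref{algo:ACPC} can be sketched generically against any oracle for $r_{\lambda}(S)$. In the sketch below the oracle is a toy stand-in of our own devising (it reaches zero once $|S| \geq c - \lambda$), chosen only so the run is easy to verify; a real implementation would plug in a volume estimate for $\mathcal{P}(\lambda,S)$.

```python
def min_acpc(r, n, k, lam_max, delta=1e-3):
    """Bisection on lambda with greedy set selection in the inner loop."""
    lam_min, best = 0.0, None
    while lam_max - lam_min > delta:
        lam = 0.5 * (lam_max + lam_min)
        S = set()
        while r(lam, S) > 0:
            # Greedy step: add the state giving the smallest residual.
            v = min((u for u in range(n) if u not in S),
                    key=lambda u: r(lam, S | {u}))
            S.add(v)
        if len(S) <= k:          # feasible: tighten the upper bound
            lam_max, best = lam, set(S)
        else:                    # infeasible: raise the lower bound
            lam_min = lam
    return lam_max, best

# Toy oracle: r = 0 once |S| >= c - lam, so the optimum is lam = c - k.
c = 5.0
lam, S = min_acpc(lambda lam, S: max(0.0, c - lam - len(S)),
                  n=10, k=2, lam_max=c)
```

On this toy instance the bisection converges to $\lambda \approx c - k = 3$ with a feasible set of size $k$.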
The following theorem describes the optimality bounds and complexity of Algorithm \ref{algo:ACPC}.
\begin{theorem}
\label{theorem:ACPC-complexity}
Algorithm \ref{algo:ACPC} terminates in $O\left(kn^{6}\log{\lambda_{max}}\right)$ time. For any $\lambda$ such that there exists a set $S$ of size $k$ satisfying $r_{\lambda}(S) = 0$, Algorithm \ref{algo:ACPC} returns a set $S^{\prime}$ with $$\frac{|S^{\prime}|}{|S|} \leq 1 + \log{\left\{\frac{r_{\lambda}(\emptyset)}{\min_{v}{\{r_{\lambda}(\hat{S} \setminus \{v\})\}}}\right\}}.$$
\end{theorem}
\begin{IEEEproof}
The number of rounds in the outer loop is bounded by $\log{\lambda_{max}}$. For each iteration of the inner loop, the objective function $r_{\lambda}(S)$ is evaluated $kn$ times. Computing $r_{\lambda}(S)$ is equivalent to computing the volume of a linear polytope, which can be approximated in $O(n^{5})$ time~\cite{lovasz1993volume}, for a total runtime of $O(kn^{6}\log{\lambda_{max}})$.
From \cite{wolsey1982analysis}, for any monotone submodular function $f(S)$ and the optimization problem $\min{\{|S| : f(S) \leq \alpha\}}$, the set $\hat{S}$ returned by the algorithm satisfies $$\frac{|\hat{S}|}{|S^{\ast}|} \leq 1 + \log{\left\{\frac{f(V) - f(\emptyset)}{f(V) - f(\hat{S}_{T-1})}\right\}},$$ where $\hat{S}_{T-1}$ is the set obtained at the second-to-last iteration of the algorithm. Applied to this setting, we have
\begin{eqnarray*}
\frac{|\hat{S}|}{|S^{\ast}|} &\leq& 1 + \log{\left\{\frac{r_{\lambda}(V) - r_{\lambda}(\emptyset)}{r_{\lambda}(V) - r_{\lambda}(\hat{S}_{T-1})}\right\}} \\
&=& 1 + \log{\left\{\frac{r_{\lambda}(\emptyset)}{r_{\lambda}(\hat{S}_{T-1})}\right\}} \\
&\leq& 1 + \log{\left\{\frac{r_{\lambda}(\emptyset)}{\min_{v}{\{r_{\lambda}(\hat{S} \setminus \{v\})\}}}\right\}}.
\end{eqnarray*}
\end{IEEEproof}
We note that the complexity of Algorithm \ref{algo:ACPC} is mainly determined by the complexity of computing the volume of the polytope $\mathcal{P}(\lambda,S)$. This complexity can be reduced to $O(n^{3})$ by computing the volume of the minimum enclosing ellipsoid of $\mathcal{P}(\lambda,S)$ instead.
The approach for minimizing ACPC presented in this section is applicable to problems such as motion planning for mobile robots. The set $S$ represents locations that must be reached infinitely often, as in a surveillance problem, while minimizing ACPC can be viewed as minimizing the total resource consumption (e.g., fuel costs) while reaching the desired states infinitely often.
\subsection{Joint Optimization of Reachability and ACPC}
\label{subsec:joint-reach-ACPC}
In this section, we consider the problem of selecting a set $S$ to satisfy safety and liveness constraints with maximum probability and minimum cost per cycle. This problem can be viewed as combining the maximum reachability and minimum ACPC problems formulated in the previous sections. As a preliminary, we define an end component of an MDP.
\begin{definition}
\label{def:EC}
An end component (EC) of an MDP $\mathcal{M} = (V, \mathcal{A}, P)$ is an MDP $\mathcal{M}^{\prime} = (V^{\prime}, \mathcal{A}^{\prime}, P^{\prime})$ where (i) $V^{\prime} \subseteq V$, (ii) $A_{i}^{\prime} \subseteq A_{i}$ for all $i \in V^{\prime}$, (iii) $P^{\prime}(i,a,j) = P(i,a,j)$ for all $i,j \in V^{\prime}$ and $a \in A_{i}^{\prime}$, and (iv) if $i \in V^{\prime}$ and $P(i,a,j) > 0$ for some $a \in A_{i}^{\prime}$, then $j \in V^{\prime}$.
\end{definition}
Intuitively, an end component $(V^{\prime}, \mathcal{A}^{\prime}, P^{\prime})$ is a set of states and actions such that if only the actions in $\mathcal{A}^{\prime}$ are selected, the MDP will remain in $V^{\prime}$ for all time. A maximal end component (MEC) is an EC $(V^{\prime}, \mathcal{A}^{\prime},P^{\prime})$ that is not strictly contained in any other EC; that is, for any $V^{\prime\prime} \supseteq V^{\prime}$ and $\mathcal{A}^{\prime\prime} \supseteq \mathcal{A}^{\prime}$ with $(V^{\prime\prime},\mathcal{A}^{\prime\prime}) \neq (V^{\prime},\mathcal{A}^{\prime})$, there is no EC with vertex set $V^{\prime\prime}$ and set of actions $\mathcal{A}^{\prime\prime}$.
\begin{lemma}[\cite{ding2014optimal}]
\label{lemma:EC-safety-liveness}
The probability that an MDP satisfies a safety and liveness specification defined by $R$ and $S$ is equal to the probability that the MDP reaches an MEC $(V^{\prime}, \mathcal{A}^{\prime}, P^{\prime})$ with $V^{\prime} \cap S \neq \emptyset$ and $V^{\prime} \cap R = \emptyset$.
\end{lemma}
We define an MEC satisfying $V^{\prime} \cap R = \emptyset$ to be an accepting maximal end component (AMEC). By Lemma \ref{lemma:EC-safety-liveness}, the problem of maximizing the probability of satisfying the safety and liveness constraints is equivalent to the problem of maximizing the probability of reaching an AMEC with $V^{\prime} \cap S \neq \emptyset$. Let $\mathcal{M}_{1} = (V_{1}^{\prime}, A_{1}^{\prime}, P_{1}^{\prime}), \ldots, \mathcal{M}_{N}=(V_{N}^{\prime},A_{N}^{\prime},P_{N}^{\prime})$ denote the set of AMECs of $\mathcal{M}$ satisfying $V_{i}^{\prime} \cap S \neq \emptyset$.
We now formulate two problems of joint reachability and ACPC. The first problem is to minimize the ACPC, subject to the constraint that the probability of satisfying the constraints is maximized. The second problem is to maximize the probability of satisfying safety and liveness properties, subject to a constraint on the average cost per cycle. In order to address the first problem, we characterize the sets $S$ that maximize the reachability probability.
\begin{lemma}
\label{lemma:max_reach}
Suppose that for each AMEC $(V^{\prime}, \mathcal{A}^{\prime}, P^{\prime})$, $S \cap V^{\prime} \neq \emptyset$. Then $S$ maximizes the probability of satisfying the safety and liveness constraints of the MDP.
\end{lemma}
\begin{IEEEproof}
By Lemma \ref{lemma:EC-safety-liveness}, the safety and liveness constraints are satisfied if the MDP reaches an AMEC satisfying $S \cap V^{\prime} \neq \emptyset$. Hence, for any policy $\mu$, the probability of satisfying the constraints is maximized when $S \cap V^{\prime} \neq \emptyset$ for all AMECs.
\end{IEEEproof}
Note that the converse of the lemma may not be true. There may exist policies that maximize the probability of satisfaction and yet do not reach an AMEC $(V^{\prime},\mathcal{A}^{\prime}, P^{\prime})$ with positive probability.
Lemma \ref{lemma:max_reach} implies that in order to formulate the problem of minimizing the ACPC such that the probability of achieving the specifications is maximized, it suffices to ensure that there is at least one state in each AMEC that belongs to $S$. We will show that this is equivalent to a matroid constraint on $S$. Define a partition matroid by $\mathcal{N}_{1} = (V, \mathcal{I})$ where $$\mathcal{I} = \{S : |S \cap V^{\prime}| \leq 1 \ \forall \mbox{ AMEC } \mathcal{M}_{m}^{\prime}, m=1,\ldots,N\}.$$ Let $\mathcal{N}_{k-N}$ denote the uniform matroid with cardinality $(k-N)$. Finally, let $\mathcal{N} = \mathcal{N}_{1} \vee \mathcal{N}_{k-N}$. The following theorem gives the equivalent formulation.
\begin{theorem}
\label{theorem:joint-matroid}
Let $q(S)$ denote the ACPC for set $S$. Then the problem of selecting a set of up to $k$ nodes in order to minimize the ACPC while maximizing reachability probability is equivalent to
\begin{equation}
\label{eq:joint-equivalent}
\begin{array}{ll}
\mbox{minimize} & q(S) \\
\mbox{s.t.} & S \in \mathcal{N}
\end{array}
\end{equation}
\end{theorem}
\begin{IEEEproof}
Since $q(S)$ is strictly decreasing in $S$, the minimum value is achieved when $|S| = k$. In order to maximize the probability of satisfying the safety and liveness constraints, $S$ must also contain at least one state in each AMEC, implying that $S$ contains a basis of $\mathcal{N}_{1}$. Hence the optimal set $S^{\ast}$ consists of the union of one state from each AMEC (a basis of $\mathcal{N}_{1}$) and $(k-N)$ other nodes (a basis of $\mathcal{N}_{k-N}$), and hence is a basis of $\mathcal{N}$. Conversely, we have that the optimal solution to (\ref{eq:joint-equivalent}) satisfies the constraint $|S| \leq k$ and contains at least one node in each AMEC, and thus is also a feasible solution to the joint reachability and ACPC problem.
\end{IEEEproof}
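In practice, the constraint of Theorem \ref{theorem:joint-matroid} amounts to a simple feasibility test: a candidate set is acceptable when it has at most $k$ states and intersects every AMEC. A minimal sketch, with made-up AMEC vertex sets:

```python
def feasible(S, amec_vertex_sets, k):
    """Check |S| <= k and that S intersects every AMEC's vertex set."""
    return len(S) <= k and all(S & Vm for Vm in amec_vertex_sets)

amecs = [{0, 1}, {4, 5}]            # hypothetical AMEC vertex sets
ok = feasible({0, 4, 7}, amecs, k=3)
bad = feasible({0, 7}, amecs, k=3)  # misses the second AMEC
```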
Hence by Theorem \ref{theorem:ACPC-sufficient} and Theorem \ref{theorem:joint-matroid}, we can formulate the problem of selecting $S$ to minimize the ACPC as
\begin{equation}
\label{eq:joint-opt-1}
\begin{array}{ll}
\mbox{minimize} & \max{\lambda} \\
S \in \mathcal{N} & \\
\mbox{s.t.} & \lambda + h(i) \leq 1 + \sum_{j=1}^{n}P(i,a,j)h(j) \\
& + \lambda \sum_{j \notin S}{P(i,a,j)} \ \forall i \in V, a \in A_{i}
\end{array}
\end{equation}
If there are multiple AMECs, then each AMEC $(V_{m}^{\prime}, \mathcal{A}_{m}^{\prime}, P_{m}^{\prime})$ will have a distinct value of average cost per cycle $\lambda_{m}$, which will be determined by $S \cap V_{m}^{\prime}$.
The problem of minimizing the worst-case ACPC is then given by
\begin{equation}
\label{eq:joint-opt-2}
\begin{array}{ll}
\mbox{minimize} & \max{\{\lambda_{m}(S \cap V_{m}^{\prime}): m=1,\ldots,N\}} \\
S \in \mathcal{N} & \\
\mbox{s.t.} & \lambda_{m} + h(i) \leq 1 + \sum_{j=1}^{n}P(i,a,j)h(j) \\
& + \lambda_{m} \sum_{j \notin S}{P(i,a,j)} \ \forall i \in V_{m}^{\prime}, a \in A_{i}^{\prime}
\end{array}
\end{equation}
This problem is equivalent to
\begin{equation}
\label{eq:joint-opt-3}
\begin{array}{ll}
\mbox{minimize} & \lambda \\
S,\lambda & \\
\mbox{s.t.} & \lambda + h(i) \leq 1 + \sum_{j=1}^{n}{P(i,a,j)h(j)} \\
& + \lambda \sum_{j \notin S}{P(i,a,j)} \ \forall i, a \in A_{i}^{\prime} \\
& |S| \leq k
\end{array}
\end{equation}
By Proposition \ref{prop:ACPC-submod}, Eq. (\ref{eq:joint-opt-3}) can be rewritten as
\begin{equation}
\label{eq:joint-opt-4}
\begin{array}{ll}
\mbox{minimize} & \lambda \\
S, \lambda & \\
\mbox{s.t.} & \sum_{m=1}^{N}{r_{\lambda,m}(S \cap V_{m}^{\prime})} = 0 \\
& |S| \leq k
\end{array}
\end{equation}
where $r_{\lambda,m}(S)$ is the volume of the polytope $\mathcal{P}(\lambda,S \cap V_{m}^{\prime})$ defined as in Section \ref{subsec:ACPC} and restricted to the MDP $\mathcal{M}_{m}$. A bisection-based approach, analogous to Algorithm \ref{algo:ACPC}, suffices to approximately solve (\ref{eq:joint-opt-4}). This approach is given as Algorithm \ref{algo:joint-ACPC}.
\begin{center}
\begin{algorithm}[!htp]
\caption{Algorithm for selecting a set of states $S$ to jointly optimize ACPC and reachability.}
\label{algo:joint-ACPC}
\begin{algorithmic}[1]
\Procedure{Min\_ACPC\_Max\_Reach}{$V$, $A$, $P$, $R$, $k$}
\State \textbf{Input}: Set of states $V$, set of actions $A_{i}$ at each state $i$, probability distribution $P$, set of unsafe states $R$, number of states $k$.
\State \textbf{Output}: Set of nodes $S$
\State $\lambda_{max} \leftarrow $ max ACPC for any $v \in \{1,\ldots,n\}$
\State $\lambda_{min} \leftarrow 0$
\While{$|\lambda_{max}-\lambda_{min}| > \delta$}
\State $\lambda \leftarrow \frac{\lambda_{max}+\lambda_{min}}{2}$
\State $S \leftarrow \emptyset$
\For{$m=1,\ldots,N$}
\While{$r_{\lambda,m}(S) > 0$}
\State $v^{\ast} \leftarrow \arg\min{\left\{r_{\lambda,m}(S \cup \{v\}): \right. }$ \\
$\left. v \in \{1,\ldots,n\}\right\}$
\State $S \leftarrow S \cup \{v^{\ast}\}$
\EndWhile
\EndFor
\If{$|S| \leq k$}
\State $\lambda_{max} \leftarrow \lambda$
\Else
\State $\lambda_{min} \leftarrow \lambda$
\EndIf
\EndWhile
\State \Return{$S$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{center}
The following proposition describes the optimality bounds of Algorithm \ref{algo:joint-ACPC}.
\begin{proposition}
\label{prop:joint-ACPC-optimality}
Algorithm \ref{algo:joint-ACPC} returns a value of $\lambda$, denoted $\hat{\lambda}$, that satisfies $\hat{\lambda} \leq \lambda^{\ast}$, where $\lambda^{\ast}$ is the minimum ACPC that can be achieved by any set $S$ satisfying
\begin{equation}
\label{eq:joint-opt-bound}
|S| \leq k \left(1 + \log{\frac{\sum_{m=1}^{N}{r_{\lambda,m}(\emptyset)}}{\min_{v}{\sum_{m=1}^{N}{r_{\lambda,m}(\hat{S} \setminus \{v\})}}}}\right).
\end{equation}
\end{proposition}
\begin{IEEEproof}
The proof is analogous to the proof of Theorem \ref{theorem:ACPC-complexity}. The submodularity of $r_{\lambda,m}(S)$ implies that the set $|\hat{S}|$ is within the bound (\ref{eq:joint-opt-bound}) of the minimum-size set $|S^{\ast}|$ with $\sum_{m=1}^{N}{r_{\lambda,m}(S)} = 0$.
\end{IEEEproof}
We now turn to the second joint optimization problem, namely, maximizing the reachability probability subject to a constraint $\lambda$ on the average cost per cycle. We develop a two-stage approach. In the first stage, we select a set of input nodes for each AMEC $V_{1},\ldots,V_{N}$ in order to guarantee that the ACPC is less than $\lambda$. In the second stage, we select the set of AMECs to include in order to satisfy the ACPC constraint while minimizing the number of inputs needed.
In the first stage, for each AMEC we approximate the problem
\begin{equation}
\label{eq:joint-first-stage}
\begin{array}{ll}
\mbox{minimize} & |S_{m}| \\
\mbox{s.t.} & r_{\lambda,m}(S_{m}) = 0
\end{array}
\end{equation}
We let $c_{m} = |S_{m}|$ denote the number of states required for each AMEC $\mathcal{M}_{m}$.
The second stage problem can be mapped to a maximum reachability problem by defining an MDP $\tilde{\mathcal{M}} = (\tilde{V}, \tilde{A}, \tilde{P})$ as follows. Let $\mathcal{M}_{1} = (V_{1}^{\prime}, A_{1}^{\prime}, P_{1}^{\prime}), \ldots, \mathcal{M}_{N}=(V_{N}^{\prime},A_{N}^{\prime},P_{N}^{\prime})$ denote the set of AMECs of $\mathcal{M}$ satisfying $V_{i}^{\prime} \cap S \neq \emptyset$. The node set of the augmented MDP is equal to $\left(V \setminus \bigcup_{m=1}^{N}{V_{m}^{\prime}}\right) \cup \{l_{1},\ldots,l_{N}\}$. Here, each $l_{m}$ represents the AMEC $\mathcal{M}_{m}$, so that reaching $l_{m}$ is equivalent to reaching the AMEC $\mathcal{M}_{m}$. The actions for states in $V$ are unchanged, while the states $l_{1},\ldots,l_{N}$ have empty action sets. The transition probabilities are given by
\begin{displaymath}
\tilde{P}(i,a,j) = \left\{
\begin{array}{ll}
P(i,a,j), & i,j \notin \bigcup_{m=1}^{N}{V_{m}^{\prime}}, a \in A_{i} \\
\sum_{j \in V_{m}^{\prime}}{P(i,a,j)}, & i \notin \bigcup_{s=1}^{N}{V_{s}^{\prime}}, \\
& j = l_{m}, a \in A_{i} \\
1, & i=j=l_{m} \\
0, & i = l_{m}, i \neq j
\end{array}
\right.
\end{displaymath}
In this MDP, the probability of reaching the set $\{l_{1},\ldots,l_{N}\}$ is equal to the probability of satisfying the safety and liveness constraints, and hence maximizing the probability of satisfying the constraints is equivalent to a reachability problem.
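The construction of $\tilde{\mathcal{M}}$ can be sketched as a small graph transformation: states inside an AMEC are deleted, transition mass into an AMEC $\mathcal{M}_{m}$ is redirected to the new state $l_{m}$, and each $l_{m}$ absorbs with probability one. The dictionary encoding and the dummy action name \texttt{stay} used for the self-loop are illustrative assumptions.

```python
def collapse_amecs(P, amecs):
    """Quotient MDP: each AMEC V'_m becomes one absorbing state ('l', m)."""
    in_amec = {v: m for m, Vm in enumerate(amecs) for v in Vm}
    P_t = {}
    for i, acts in P.items():
        if i in in_amec:
            continue                       # interior states are removed
        P_t[i] = {}
        for a, dist in acts.items():
            new = {}
            for j, p in dist.items():      # redirect mass into l_m
                key = ('l', in_amec[j]) if j in in_amec else j
                new[key] = new.get(key, 0.0) + p
            P_t[i][a] = new
    for m in range(len(amecs)):            # absorbing self-loops
        P_t[('l', m)] = {"stay": {('l', m): 1.0}}
    return P_t

P = {0: {"a": {1: 0.5, 2: 0.25, 3: 0.25}},
     1: {"a": {0: 1.0}},
     2: {"a": {3: 1.0}, "b": {2: 1.0}},
     3: {"a": {2: 1.0}}}
P_t = collapse_amecs(P, amecs=[{2, 3}])
```

Note how the two transitions from state $0$ into the AMEC $\{2,3\}$ are merged into a single transition of probability $0.5$ into $l_{1}$.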
The problem of selecting a subset of states to maximize reachability while satisfying this constraint on $\lambda$ can then be formulated as
\begin{equation}
\label{eq:joint-reachability-max}
\begin{array}{lll}
\mbox{minimize} & \max & \left(\sum_{i=1}^{|\tilde{V}|}{x_{i}} - \lambda\sum_{m \in S}{x_{l_m}}\right)\\
S \subseteq \{1,\ldots,N\}, & \mathbf{x} \in \Pi & \\
\sum_{m \in S}{c_{m}} \leq k & &
\end{array}
\end{equation}
by analogy to (\ref{eq:reachability-tractable}), where $\Pi$ is defined for the MDP $\tilde{\mathcal{M}}$. The inner optimization problem of (\ref{eq:joint-reachability-max}) is a knapsack problem, and hence is NP-hard and must be approximated at each iteration. In order to reduce the complexity of the problem, we introduce the following alternative formulation. We let $\mathcal{P}_{\lambda}$ denote the polytope satisfying the inequalities
\begin{eqnarray*}
\mathbf{1}^{T}\mathbf{z} &=& \beta \\
z_{i}(1-\lambda\mathbf{1}\{i \in S\}) &\geq& \sum_{j=1}^{n}{z_{j}P(i,a,j)}
\end{eqnarray*}
By Proposition \ref{prop:ACPC-submod}, the condition that the ACPC is at most $\lambda$ is equivalent to $\mbox{vol}(\mathcal{P}(\lambda,S_{m})) = 0$. Letting $r_{\lambda}(S)$ denote the volume of $\mathcal{P}_{\lambda}$ when the set of desired states is $S$, the problem is formulated as
\begin{equation}
\label{eq:alternative-joint-formulation}
\begin{array}{ll}
\mbox{minimize} & \sum_{m \in T}{c_{m}} \\
\mbox{s.t.} & r_{\lambda}(T) = 0
\end{array}
\end{equation}
Problem (\ref{eq:alternative-joint-formulation}) is a submodular knapsack problem with coverage constraints. An algorithm for solving it is as follows. The set $T$ is initialized to $\emptyset$. At each iteration, find the element $m$ that minimizes $$\frac{c_{m}}{r_{\lambda}(T) - r_{\lambda}(T \cup \{m\})},$$ terminating when the condition $r_{\lambda}(T) = 0$ is satisfied.
Hence, the overall approach is to select a collection of subsets $\{S_{m} : m=1,\ldots,M\}$, representing the minimum-size subsets to satisfy the ACPC constraint on each AMEC $\mathcal{M}_{m}$. We then select a set of AMECs to include in order to satisfy a desired constraint on reachability while minimizing the total number of inputs. The set $S$ is then given by $$S = \bigcup_{m \in T}{S_{m}}.$$ The optimality gap of this approach is described as follows.
\begin{proposition}
\label{prop:joint-optimality-gap}
The set $T$ chosen by the greedy algorithm satisfies $$\frac{\sum_{m \in T}{c_{m}}}{\sum_{m \in T^{\ast}}{c_{m}}} \leq 1 + \log{\left\{\frac{1}{r_{\lambda}(\hat{T})}\right\}},$$ where $T^{\ast}$ is the optimal solution to (\ref{eq:alternative-joint-formulation}) and $\hat{T}$ is the set obtained by the greedy algorithm prior to convergence.
\end{proposition}
\begin{IEEEproof}
We have that $r_{\lambda}$ is monotone decreasing and supermodular. Hence, by Theorem 1 of \cite{wolsey1982analysis}, the optimality bound holds.
\end{IEEEproof}
Combining the optimality bounds yields
\begin{eqnarray*}
\frac{|S|}{|S^{\ast}|} &=& \frac{\sum_{m \in T}{c_{m}}}{\sum_{m \in T^{\ast}}{c_{m}^{\ast}}} \\
&=& \frac{\sum_{m \in T}{c_{m}}}{\sum_{m \in T^{\ast}}{c_{m}}} \frac{\sum_{m \in T^{\ast}}{c_{m}}}{\sum_{m \in T^{\ast}}{c_{m}^{\ast}}} \\
&\leq& \frac{\sum_{m \in T}{c_{m}}}{\sum_{m \in T^{\ast}}{c_{m}}} \max_{m}{{\left\{\frac{c_{m}}{c_{m}^{\ast}}\right\}}} \\
&\leq& \left(1 + \max_{m}{\log{\left\{\frac{1}{r_{\lambda,m}(\hat{S})}\right\}}}\right)\left(1 + \log{\left\{\frac{1}{r_{\lambda}(\hat{T})}\right\}}\right)
\end{eqnarray*}
\subsection{Optimal Hitting Time}
\label{subsec:optimal-hitting}
In this section, we consider the problem of choosing a set $S$ and policy $\mu$ in order to minimize the hitting time to $S$ in an MDP.
The hitting time of node $i$ under policy $\mu$, denoted $H(i,\mu,S)$, satisfies $$H(i,\mu,S) = 1+ \sum_{j=1}^{n}{P(i,\mu(i),j)H(j,\mu,S)}$$ for $i \notin S$ and $H(i,\mu,S) = 0$ for $i \in S$.
We define $\overline{H}(i,S)$ as the minimum hitting time over all policies $\mu$. Minimizing the hitting time for fixed $S$ is equivalent to solving a stochastic shortest path problem. The solution to this problem is described by the following lemma.
\begin{lemma}[\cite{bertsekas1995dynamic}, Prop 5.2.1]
The minimum hitting time $\overline{H}(i,S)$ for set $S$ satisfies
$$\overline{H}(i,S) = 1 + \min_{a \in A_{i}}{\left\{\sum_{j=1}^{n}{P(i,a,j)\overline{H}(j,S)}\right\}}$$
and is equivalent to the linear program
\begin{equation}
\label{eq:hitting-time-program}
\begin{array}{ll}
\mbox{maximize} & \mathbf{1}^{T}\mathbf{v} \\
\mbox{s.t.} & v_{i} \leq 1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}} \ \forall i, a \in A_{i} \\
& v_{i} = 0, \ i \in S \\
\end{array}
\end{equation}
\end{lemma}
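The fixed-point equation in the lemma can be solved by standard value iteration; the sketch below (toy three-state MDP, dictionary encoding assumed) converges to the minimum hitting times.

```python
def min_hitting_times(P, S, n, iters=100):
    """Value iteration for H(i) = 1 + min_a sum_j P(i,a,j) H(j), H|_S = 0."""
    H = [0.0] * n
    for _ in range(iters):
        H = [0.0 if i in S else
             1.0 + min(sum(p * H[j] for j, p in dist.items())
                       for dist in P[i].values())
             for i in range(n)]
    return H

P = {0: {"a": {1: 1.0}, "b": {0: 1.0}},   # action "b" wastes a step
     1: {"a": {2: 1.0}},
     2: {"a": {2: 1.0}}}
H = min_hitting_times(P, S={2}, n=3)
```

The minimization over actions correctly prefers \texttt{a} over the self-loop \texttt{b} at state $0$, giving hitting times $(2, 1, 0)$.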
The following lemma leads to an equivalent formulation to (\ref{eq:hitting-time-program}).
\begin{lemma}
\label{lemma:hitting-time-condition}
For a given MDP $\mathcal{M}$, there exists $\theta > 0$ such that the conditions
\begin{eqnarray}
\label{eq:hitting-time-optimal1}
v_{i} &\leq& 1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}} \ \forall i, a \in A_{i} \\
\label{eq:hitting-time-optimal2}
v_{i} &=& 0 \ \forall i \in S
\end{eqnarray}
are equivalent to
\begin{equation}
\label{eq:hitting-time-equivalent}
v_{i} + \theta \mathbf{1}\{i \in S\}v_{i} \leq 1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}}.
\end{equation}
\end{lemma}
\begin{IEEEproof}
If (\ref{eq:hitting-time-equivalent}) holds, then for $\theta$ sufficiently large we must have $v_{i}=0$ for all $i \in S$. The condition then reduces to $$v_{i} \leq 1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}}$$ for all $i \notin S$ and $a \in A_{i}$, which is equivalent to (\ref{eq:hitting-time-optimal1}).
\end{IEEEproof}
From Lemma \ref{lemma:hitting-time-condition}, it follows that in order to ensure that the optimal hitting time for each node is no more than $\zeta$, it suffices to ensure that for each $\mathbf{v}$ satisfying $\mathbf{1}^{T}\mathbf{v} \geq \zeta$, there exist $i$ and $a \in A_{i}$ such that $$1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}} < (1 + \theta \mathbf{1}\{i \in S\})v_{i}.$$ We define the function $\chi_{v}(S)$ as
\begin{equation}
\label{eq:chi_v}
\chi_{v}(S) = \left\{
\begin{array}{ll}
1, & \mbox{(\ref{eq:hitting-time-optimal1}) and (\ref{eq:hitting-time-optimal2}) hold } \\
0, & \mbox{else}
\end{array}
\right.
\end{equation}
The following lemma relates the function $\chi_{v}(S)$ to the optimality conditions of Lemma \ref{lemma:hitting-time-condition}.
\begin{lemma}
\label{lemma:hitting-time-sufficient}
The optimal hitting time corresponding to set $S$ is bounded above by $\zeta$ if and only if
\begin{equation}
\label{eq:chi_zeta}
\chi(S,\zeta) \triangleq \int_{\{\mathbf{v}: \mathbf{1}^{T}\mathbf{v} \geq \zeta\}}{\chi_{v}(S) \ dv} = 0.
\end{equation}
\end{lemma}
\begin{IEEEproof}
Suppose that $\chi(S,\zeta) = 0$. Then for each $\mathbf{v}$ satisfying $\mathbf{1}^{T}\mathbf{v} \geq \zeta$, we have that $\chi_{v}(S) = 0$, and hence $$1 + \sum_{j=1}^{n}{P(i,a,j)v_{j}} < (1 + \theta \mathbf{1}\{i \in S\})v_{i}$$ holds for some $i$ and $a \in A_{i}$. Hence there is no $\mathbf{v}$ satisfying the conditions of (\ref{eq:hitting-time-program}) with $\mathbf{1}^{T}\mathbf{v} \geq \zeta$ for the given set $S$.
\end{IEEEproof}
The following result then leads to a submodular optimization approach to computing the set $S$ that minimizes the hitting time.
\begin{proposition}
\label{chi_v_supermodular}
The function $\chi(S,\zeta)$ is supermodular as a function of $S$ for any $\zeta \geq 0$.
\end{proposition}
\begin{IEEEproof}
We first show that $\chi_{v}(S)$ (Eq. (\ref{eq:chi_v})) is supermodular. Condition (\ref{eq:hitting-time-optimal1}) does not depend on $S$, while condition (\ref{eq:hitting-time-optimal2}) holds if and only if $\mbox{supp}(v) \cap S = \emptyset$. Hence $$\chi_{v}(S) = \mathbf{1}\{\mbox{(\ref{eq:hitting-time-optimal1}) holds}\} \cdot \mathbf{1}\{\mbox{supp}(v) \cap S = \emptyset\},$$ which is a decreasing supermodular function of $S$. Since $\chi(S,\zeta)$ is an integral (\ref{eq:chi_zeta}) of supermodular functions, it is supermodular as well.
\end{IEEEproof}
Furthermore, $\chi(S,\zeta)$ can be approximated in polynomial time via a sampling-based algorithm \cite{lovasz1993random}. Hence the problem of selecting a set of states $S$ in order to minimize the optimal hitting time can be stated as
\begin{equation}
\label{eq:opt-hitting-time-submodular}
\begin{array}{ll}
\mbox{minimize} & \zeta \\
\mbox{s.t.} & \min{\{|S| : \chi(S,\zeta) = 0\}} \leq k
\end{array}
\end{equation}
Problem (\ref{eq:opt-hitting-time-submodular}) can be approximately solved using an algorithm analogous to Algorithm \ref{algo:ACPC}, with $r_{\lambda}(S)$ replaced by $\chi(S,\zeta)$ in Lines 9 and 10, and $\lambda$ replaced by $\zeta$ throughout. The following proposition gives optimality bounds for the revised algorithm.
\begin{proposition}
\label{prop:hitting-time-optimality}
The modified Algorithm \ref{algo:ACPC} guarantees that the hitting time satisfies $\overline{H}(i,S) \leq \overline{H}(i,\hat{S})$, where $\hat{S}$ is the solution to $\min{\{\overline{H}(i,S) : |S| \leq \hat{k}\}}$ and $\hat{k}$ satisfies
\begin{equation}
\label{eq:hitting-time-bound}
k = \hat{k}\left(1 + \log{\left\{\frac{\chi(\emptyset,\zeta)}{\min_{v}{\chi(S \setminus \{v\},\zeta)}}\right\}}\right).
\end{equation}
\end{proposition}
\begin{IEEEproof}
Since the function $\chi(S,\zeta)$ is supermodular, the number of states $S$ selected by the greedy algorithm to satisfy $\chi(S,\zeta) = 0$ satisfies (\ref{eq:hitting-time-bound}). Hence, for the set $\hat{S}$, since $|S| \leq \hat{k}$, we have that $\overline{H}(i,S) \leq \overline{H}(i,\hat{S})$.
\end{IEEEproof}
\section{Introduction}
Generative Adversarial Networks (GANs) have shown great promise for the generation of photo-realistic synthetic images~\citep{goodfellow2014generative, radford2015unsupervised, denton2015deep, salimans2016improved}, and the highly-compelling nature of images generated by GANs has driven research into conditional image-generation and multimodal learning. In this paper, we focus on the task of text-to-image generation, which has emerged as an area of active research in recent years. Although much progress has been made in this area, the synthesis of high-quality, realistic images from text descriptions remains a challenging task.
Current state-of-the-art methods \citep{xu2018attngan, li2019controllable, zhu2019dm} employ multiple stages of image generation: typically, an initial image is first generated from a global sentence-level vector, and subsequent stages incorporate fine-grained information extracted from word-level vectors to refine image details. However, these methods suffer from three important limitations. The first is that by attempting to synthesize image features directly from a sentence-level vector, the initial generation stage fails to cleanly separate image attributes at a word level. If potentially distinct objects such as `cat' and `tree', for example, are entangled in the sentence-level representation, then the presence of either word in a sentence could prompt the initial stage to generate the same hybrid image attributes. This is important because the subsequent refinement stage relies upon the initial image features to provide a meaningful basis for word-level refinement. By feeding it ambiguous and poorly formed initial features, we limit the scope of refinement. Secondly, current methods do not construct region-specific representations of text at refinement stages. This prevents us from interpreting words in fundamentally different ways based on the content of image regions. In complex real-world scenes, however, the requirement for a region-contextualized interpretation of words is commonplace: based on the region under consideration, the same word often dictates fundamentally different types of refinement within a single image. The word `raining', for example, dictates a requirement in the sky that is fundamentally different from the requirement that it dictates in the region of the ground: while the sky becomes more cloudy, the ground must become wet.
To generate realistic images from natural text descriptions, it is important that we construct a refinement architecture that allows different image regions to assimilate region-contextualized information from text descriptions. Finally, we note that current methods generate refinement features (that modify previous image features) only once at each refinement stage and attempt to address all image aspects in a single shot. This single-shot refinement limits the precision with which each refinement stage can learn to improve the prior image.
\\\\
In this paper, we propose a Multi-Headed and Spatial Dynamic Memory image refinement mechanism with a Multi-Tailed Word-level Initial Generation stage (MSMT-GAN) to address these three issues. Our contributions are summarized as follows:
\begin{itemize}
\item We introduce a novel "Multi-Tailed" Word-level Initial Generation stage (MTWIG), that generates a separate set of image features for each word n-gram, and iteratively fuses these sets together to obtain initial image features. We demonstrate that it is possible to improve the performance of previous methods by replacing their initial generation stage with ours.
\item We introduce a novel Spatial Dynamic Memory module (SDM) that fuses word-information in a custom way with each prior image region, to obtain region-contextualized text-representations. At each refinement stage we retrieve features for image improvement from this SDM module.
\item We introduce a novel Iterative Multi-Headed Mechanism (IMHM) of image refinement - wherein we explicitly allow each stage of refinement to make multiple distinct modifications to the prior image, under common discriminator feedback.
\end{itemize}
We evaluate our MSMT-GAN model on the Caltech-UCSD Birds 200 (CUB) dataset \citep{wah2011caltech} and the Microsoft Common Objects in Context (COCO) dataset \citep{lin2014microsoft}. Experiment results demonstrate that MSMT-GAN is competitive with current methods on the COCO dataset and significantly outperforms the previous state-of-the art on the CUB dataset, decreasing the lowest reported Fréchet Inception Distance (FID) \citep{heusel2017gans} by $21.58\%$ for CUB.
\section{Related Work}
\textbf{Text-to-Image Generators:} \cite{reed2016generative} first demonstrated that a translation model from natural language to image pixels could be learnt by conditioning both generator and discriminator networks of a GAN on input text-descriptions. There has since been a surge of interest in training multi-stage attention based GAN architectures for this task. While the conventional setting \citep{zhang2017stackgan, xu2018attngan, li2019controllable, zhu2019dm} assumes only the availability of (text, image) pairs at training time, recently a second setting has emerged that assumes availability of bounding-box/shape-mask information of object attributes during training \citep{li2019object, hinz2019semantic, cho2020x, liang2020cpgan}. We highlight that this represents a significantly easier problem setting and that such methods are not feasible where bounding-box/shape information is unavailable (as with the CUB dataset). Our method does not assume the availability of bounding-box/shape information, and we make comparisons against prior work of the same setting.\\\\
\textbf{Memory Networks:} Memory Networks \citep{weston2014memory} combine inference components with a long-term memory module that can be dynamically written to and read from. Current methods \citep{miller2016key} query ``key encodings'' of memory slots to retrieve a set of weights. These weights are used to combine separate ``value encodings'' of the slots into a single response. A Dynamic Memory Generative Adversarial Network (DM-GAN) \citep{zhu2019dm} that retrieves information for image refinement from a memory module was recently proposed for text-to-image synthesis. In our SDM module, we too employ the \textit{memory-writing, key-addressing, value-reading} paradigm introduced by \cite{miller2016key}, but our method differs from \cite{zhu2019dm} in all three memory operations (Section \ref{subsection:SDM}). Fundamentally, DM-GAN does not create region-contextualized representations of text.\\\\
\textbf{Multi-Headed Attention:} Transformers \citep{vaswani2017attention} utilize a key-value mechanism similar to memory networks and introduce the idea of multi-headed attention. They linearly project queries, keys and values to $h$ separate encodings, called ``attention heads'', and each head is separately used to extract an output vector. These vectors are concatenated together and linearly projected to a single response. Inspired by the success of Transformers, we introduce the IMHM method for image refinement. However, our method differs in a few respects. We maintain separate SDM modules for each head and we obtain queries and fuse outputs in an iterative fashion. We also adopt a ``redundancy loss'' (Section \ref{subsection:OBJfunc}) to encourage each head to focus on separate image aspects.
\section{MSMT-GAN}
\label{section:MSDM-GAN-label}
Our MSMT-GAN architecture (Figure \ref{fig:msdmgan}) comprises three stages: a Multi-Tailed Word-level Initial Generation (MTWIG) stage, and two refinement stages. Each refinement stage is Multi-Headed, and each refinement head has a separate Spatial Dynamic Memory (SDM) module. Section \ref{subsection:DIG} presents our MTWIG stage, Section \ref{subsection:SDM} presents our SDM module for a single refinement head, and the details of our Iterative Multi-Headed Mechanism (IMHM) are presented in Section \ref{subsection:MIR}.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{MSDM-V4-P1.png}\\
\includegraphics[width=\linewidth]{MSDM-V4-P2.png}\\
\includegraphics[width=\linewidth]{MSDM-V4-P3.png}
\caption{Our MSMT-GAN architecture for text-to-image synthesis, showing a Multi-Tailed Word-level Initial Generation stage, a Multi-Headed Spatial Dynamic Memory based refinement stage with three refinement heads, and image prediction.}
\label{fig:msdmgan}
\end{figure*}
\subsection{Multi-Tailed Word-level Initial Generation (MTWIG)}
\label{subsection:DIG}
We highlight that previous multi-stage methods \citep{zhang2017stackgan, Zhang2018stackgan++, li2019controllable, zhu2019dm} all rely on the same type of initial generation stage and focus only on improving the refinement stages, making the conventional assumption that the performance of multi-stage generators is primarily determined by the refinement stages, and that the quality of the ``rough initial image'' is of little importance. In our paper, we break from this tradition and demonstrate for the first time that gains can be achieved in the final stage of image refinement by making an improvement to the initial images. \\\\
The conventional approach synthesizes initial images directly from a sentence-level vector without attempting to separate image attributes at a word-level. As a result, words that are entangled at the sentence-level representation generate initial image attributes that are inherently ambiguous in nature. In our novel Multi-Tailed Word-level Initial Generation (MTWIG) stage, we overcome this shortcoming by explicitly creating separate sets of image attributes for each word n-gram.\\\\
First, we sample a vector of random noise $z$ from a normal distribution and use a pretrained text-encoder to extract a sentence-level vector and word-level vectors: $s$ and $W$ from the input text.
\begin{gather}
W = \{w_1, w_2, ..., w_L\}; \hspace*{0.1cm}w_l \in \mathbb{R}^{N_w} \;\hspace*{0.5cm};\hspace*{0.5cm}\; s \in \mathbb{R}^{N_s} \;\hspace*{0.5cm};\hspace*{0.5cm}\; z_n \sim \mathcal{N}(0,1); \hspace*{0.1cm}z \in \mathbb{R}^{N_z}
\end{gather}
Where $L$ is the number of words in the text-description, and $N_z$, $N_s$ and $N_w$ are the dimensions of the noise vector, sentence vector and word vectors respectively. To mitigate over-fitting, the Conditioning Augmentation technique \citep{zhang2017stackgan} is used to resample the sentence-vector from an independent Gaussian distribution. This resampled sentence vector $s'$ and the noise vector $z$ are then concatenated with each word-level vector $w_l$ from the input text sequence, and the sequence of concatenated vectors are passed through a 1D convolutional operation $V$ of stride 1 (see Figure \ref{fig:msdmgan}).
\begin{gather}
F = V(\{concat(s', \;z,\; w_l) \;|\; \;\forall\; w_l \in W \})
\end{gather}
The length $T$ of the output sequence $F$ depends on the kernel size used by $V$ and the vectors of the output sequence $f_t \in F$ are each separately passed through a series of upsampling blocks to generate corresponding sets of image features $S_t$. These sets of image features or ``tails'' each correspond to a different word n-gram from the input text sequence. If we use a kernel size of 1 for $V$, then each tail $S_t$ corresponds to a single word. If we use a kernel size of 2, then each tail $S_t$ corresponds to a word bi-gram, and so on. We combine our sequence of tails $\{S_t\}$ together in an iterative fashion using the adaptive gating fusion mechanism introduced by \cite{zhu2019dm} (discussed in Section \ref{subsection:fuse}).
\begin{gather}
S_{1:t} = fuse(S_{1:t-1},\;S_t,\; P^\text{MTWIG}, \rho^\text{MTWIG})\hspace*{0.5cm}; \hspace*{0.5cm}R_1 = S_{1:T}
\end{gather}
Where $P^\text{MTWIG}$ and $\rho^\text{MTWIG}$ denote parameter matrix and bias terms, $S_{1:t}$ denotes a combination of the first $t$ tails, and $S_{1:1}$ denotes the first tail $S_1$. The combination of all $T$ tails gives us the final image features $R_1$ for our initial stage. Notice that by concatenating each word vector $w_l$ with $s'$ and $z$ before the 1D convolution, each tail is created with some common information, so they may learn to fuse together coherently. Each upsampling block consists of a nearest neighbor upsampling layer and a $3\times3$ convolution operation. An initial image is predicted from $R_1$ using a $3\times3$ convolution.
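To make the MTWIG pipeline concrete, the following NumPy sketch walks through tail construction and fusion with toy dimensions. It is an illustrative simplification rather than the paper's implementation: the upsampling blocks are collapsed into a single linear map, the learned gating fusion is reduced to an elementwise gate, and all names and sizes ($L$, $N_w$, the kernel size, etc.) are small placeholder values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy dimensions (much smaller than the paper's).
L, N_w, N_s, N_z = 5, 4, 6, 3   # words, word/sentence/noise dims
C = N_s + N_z + N_w             # channels after concatenation
ks = 3                          # kernel size -> one tail per word tri-gram

s_prime = rng.normal(size=N_s)          # resampled sentence vector s'
z = rng.normal(size=N_z)                # noise vector
W = rng.normal(size=(L, N_w))           # word vectors

# Concatenate (s', z, w_l) for every word: shape (L, C).
X = np.stack([np.concatenate([s_prime, z, w]) for w in W])

# 1D convolution V with stride 1, no padding: T = L - ks + 1 outputs.
F_dim = 8
V = rng.normal(size=(ks * C, F_dim)) * 0.1
F = np.stack([np.concatenate(X[t:t + ks]) @ V for t in range(L - ks + 1)])

# Each f_t is upsampled into a "tail" of image features; a single linear
# map to a tiny 2x2 feature map stands in for the upsampling blocks here.
H = Wd = 2; N_r = 5
U = rng.normal(size=(F_dim, H * Wd * N_r)) * 0.1
tails = [f @ U for f in F]              # each tail: flattened (H*Wd*N_r,)

# Adaptive gating fusion (DM-GAN style, simplified): a gate mixes the
# running combination with the next tail, elementwise.
P = rng.normal(size=tails[0].shape) * 0.1
S = tails[0]
for t in range(1, len(tails)):
    g = sigmoid(P * (S + tails[t]))
    S = g * S + (1.0 - g) * tails[t]

R1 = S.reshape(H, Wd, N_r)              # initial image features
print(R1.shape)
```

With a kernel size of 3 and $L=5$ words, the convolution yields $T = L - ks + 1 = 3$ tails, one per word tri-gram, which are then fused left to right.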
\subsection{Spatial Dynamic Memory (SDM)}
\label{subsection:SDM}
In this section, we describe the operation of a single refinement head. Unlike previous methods, our novel Spatial Dynamic Memory (SDM) module creates a separate region-contextualized text representation for each image region. This allows us to interpret the same text in fundamentally different ways at different parts of an image and assimilate region-contextualized information from text at each part. To begin with, we have the set of word-level vectors $W$ and image features $R_{k-1}$ from the previous stage of generation.
\begin{gather}
{R_{k-1} = \{r_{1,1}\;,\; r_{1,2}\;, ...,\; r_{s,s}\}\;;\;\hspace*{0.1cm} r_{u,v} \in \mathbb{R}^{N_r}}
\end{gather}
Where $|s \times s|$ is the number of image pixels and $N_r$ is the dimension of pixel features. We obtain refinement features in three steps: \textit{Memory Writing}, \textit{Key Addressing} and \textit{Value Reading}. \\
\hspace*{0.3cm}\textbf{Memory Writing}:
First, we divide the fine-grained $s \times s$ initial image into a coarse $h \times h$ sized grid-map and average the pixel features within each grid-cell to get grid-level image features $C$.
\begin{gather}
C_{i,j} = \frac{1}{|p\times p|}\hspace*{0.2cm}\sum_{u=(i-1)*p +1}^{i*p}\hspace*{0.2cm}\sum_{v=(j-1)*p + 1}^{j*p}r_{u,v}
\end{gather}
Where $p = s/h$, so that $|p\times p|$ are the number of pixels represented by each grid cell. Then, we create $L \times h \times h$ memory slots $\{m_{l,i,j}\}$ - one corresponding to each word $l$ for each grid-cell $(i,j)$. These slots are our region-contextualized representations of each word, and each slot uses a separate memory writing gate $g_{l,i,j}^w$ to fuse information from each grid-cell $(i,j)$ with each word feature $w_l$.
\begin{gather}
g_{l,i,j}^w(R_{k-1}, w_l) = \sigma\left(A*w_l + B_{i,j}*C_{i,j}\right)\\
m_{l,i,j} = M_w(w_l) \odot g_{l,i,j}^w + M_c(C)_{i,j} \odot (1 - g_{l,i,j}^w)
\end{gather}
The grid-level features $C$ are encoded using a 2d convolution operation $M_c$ (with stride 1 and $N_m$ output filters) and we use a common 1x1 convolution operation $M_w$ to encode all word vectors to a $N_m$ dimensional space. $A$ and $B_{i,j}$ are $1\times N_w$ and $1\times N_r$ matrices respectively.\\
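The memory-writing step above can be sketched in NumPy as follows. This is a toy illustration under simplifying assumptions: the stride-1 convolutions $M_c$ and $M_w$ are replaced by plain $1\times1$-style linear maps, and all dimensions are small placeholder values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

s_, h_, N_r, N_w, N_m, L = 8, 4, 5, 6, 7, 3   # toy sizes; p = s/h = 2
p = s_ // h_

R = rng.normal(size=(s_, s_, N_r))      # prior image features R_{k-1}
W = rng.normal(size=(L, N_w))           # word vectors

# Grid-level features C: average the p x p pixels inside each cell.
C = R.reshape(h_, p, h_, p, N_r).mean(axis=(1, 3))   # (h, h, N_r)

# Encoders M_w and M_c (1x1 linear sketches) into the N_m-dim memory space,
# and the gate parameters A (1 x N_w) and B_{i,j} (1 x N_r).
M_w = rng.normal(size=(N_w, N_m)) * 0.1
M_c = rng.normal(size=(N_r, N_m)) * 0.1
A = rng.normal(size=N_w) * 0.1
B = rng.normal(size=(h_, h_, N_r)) * 0.1

# One memory slot per (word, grid-cell): a gated blend of word and region.
m = np.zeros((L, h_, h_, N_m))
for l in range(L):
    for i in range(h_):
        for j in range(h_):
            g = sigmoid(A @ W[l] + B[i, j] @ C[i, j])
            m[l, i, j] = g * (W[l] @ M_w) + (1 - g) * (C[i, j] @ M_c)

print(m.shape)   # L x h x h slots, each N_m-dimensional
```

Each slot $m_{l,i,j}$ is thus a region-contextualized representation of word $l$ at grid-cell $(i,j)$, ready to be queried in the key-addressing step.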
\hspace*{0.3cm}\textbf{Key Addressing:} In this step, we compute attention weights $\{\alpha_{l,i,j,a,b}\}$ over our region-contextualized text-representations $\{m_{l,i,j}\}$. The dimensions $(a,b)$ index pixels within grid-cell $(i,j)$, so that each slot $m_{l,i,j}$ gets a matrix $\alpha_{l,i,j}: p \times p$ of attention weights. Each weight is computed as a similarity probability between a key-encoding of the slot: $\phi_K(m_l)_{ij}$ and a query vector: $q_{i,j,a,b}$, where $\phi_K(.)$ is a 2d convolution operation with stride 1 and ($N_{r} + p^2)$ output filters.
\begin{gather}
\alpha_{l,i,j,a,b} = \frac{\exp(\phi_K(m_l)_{i,j} *q_{i,j,a,b})}{\sum_{l=1}^{L} \exp(\phi_K(m_l)_{i,j}*q_{i,j,a,b})}
\end{gather}
In the case of single-headed image refinement, we use the previous image features $R_{k-1}$ to obtain the query vectors. A query vector $q_{i,j,a,b}$ is made up of three components: (1) a global-level query $q^{global}$, (2) a grid-level query $q_{i,j}^{grid}$, and (3) a pixel-level query $q_{i,j,a,b}^{pixel}$. To obtain these three components, we encode $R_{k-1}$ using three separate 2d convolution operations: $\phi_{Q^{global}}(.)$, $\phi_{Q^{grid}}(.)$ and $\phi_{Q^{pixel}}(.)$, each with a stride of 1 and $N_r$ output filters.
\begin{gather}
\label{eq:query_formation}
Q^{global} = \phi_{Q^{global}}(R_{k-1}) \;\hspace*{0.2cm};\hspace*{0.2cm}\; Q^{grid} = \phi_{Q^{grid}}(R_{k-1}) \;\hspace*{0.2cm};\hspace*{0.2cm}\; Q^{pixel} = \phi_{Q^{pixel}}(R_{k-1})
\end{gather}
Then, the average of all pixel features of $Q^{global}$ becomes the global-level query component $q^{global}$. The average of pixel features within the grid cell $(i,j)$ of $Q^{grid}$ becomes the grid-level query $q_{i,j}^{grid}$, and the pixel feature at location $(a,b)$ within grid cell $(i,j)$ is extracted from $Q^{pixel}$ to give us the pixel-level query component $q_{i,j,a,b}^{pixel}$.
\begin{gather}
q^{global} = \frac{1}{|s\times s|}\sum_{u=1}^{s}\sum_{v=1}^{s}Q^{global}_{u,v}\hspace*{0.5cm};\hspace*{0.5cm} q^{grid}_{i,j} = \frac{1}{|p\times p|}\hspace*{0.2cm}\sum_{u=(i-1)*p +1}^{i*p}\hspace*{0.2cm}\sum_{v=(j-1)*p + 1}^{j*p}Q^{grid}_{u,v}
\\
\label{eqn:pixelquery}
q_{i,j,a,b}^{pixel} = Q^{pixel}_{h(i,a), \hspace*{0.1cm}h(j,b)} \;\hspace*{0.5cm};\hspace*{0.5cm}\;h(i,a) =(i-1)*p + a
\end{gather}
Where $(h(i,a),\;h(j,b))$ indexes the pixel at location $(a,b)$ within grid-cell $(i,j)$. To obtain the final query $q_{i,j,a,b}$, we concatenate these three components together.\\
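Query formation can be sketched as below (a simplified NumPy illustration, with the three stride-1 convolutions replaced by $1\times1$-style linear maps and 0-indexed coordinates in place of the paper's 1-indexed $h(i,a)$):

```python
import numpy as np

rng = np.random.default_rng(2)
s_, h_, N_r = 8, 4, 5
p = s_ // h_

R = rng.normal(size=(s_, s_, N_r))   # prior image features R_{k-1}

# Three separate encoders (1x1 linear sketches of the stride-1 convs).
Wg = rng.normal(size=(N_r, N_r)) * 0.1
Wr = rng.normal(size=(N_r, N_r)) * 0.1
Wp = rng.normal(size=(N_r, N_r)) * 0.1
Q_global, Q_grid, Q_pixel = R @ Wg, R @ Wr, R @ Wp

# Global query: average over all pixels of Q_global.
q_global = Q_global.mean(axis=(0, 1))                          # (N_r,)
# Grid queries: average within each p x p grid-cell of Q_grid.
q_grid = Q_grid.reshape(h_, p, h_, p, N_r).mean(axis=(1, 3))   # (h, h, N_r)

def query(i, j, a, b):
    """Concatenated query for pixel (a,b) inside grid-cell (i,j), 0-indexed."""
    u, v = i * p + a, j * p + b      # 0-indexed analogue of h(i,a), h(j,b)
    return np.concatenate([q_global, q_grid[i, j], Q_pixel[u, v]])

q = query(1, 2, 0, 1)
print(q.shape)   # three N_r-dim components concatenated
```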
\hspace*{0.3cm}\textbf{Value Reading:} In the value reading step, for each pixel $(a,b)$ within a grid-cell $(i,j)$, we compute a weighted sum of value-encoded memory slots: $\phi_V(m_l)_{ij}$ along the word dimension $l$.
\begin{gather}
e_{i,j,a,b} = \sum_{l=1}^{L}\alpha_{l,i,j,a,b}\cdot\phi_V(m_l)_{i,j}
\end{gather}
$\phi_V(.)$ is a 2d convolution operation with stride 1 and $N_r$ output filters. We now have $e_{i,j}: p \times p \times N_r$ dimensional matrices - each of which corresponds to a single grid cell of our coarse $h \times h$ grid map. To obtain $s \times s$ fine-grained refinement features, we apply the mapping:
\begin{gather}
o_{h(i,a)\;, \;h(j,b)} = e_{i,j,a,b}
\end{gather}
Where $h(.,\;.)$ is the function defined in Eq.\ref{eqn:pixelquery}. That is, we populate each grid cell with $|p \times p|$ vectors of $N_r$ dimensionality. Since $p=s/h$, we are left with a set of refinement features $O =\{o_{u,v}\}$ that are made up of $|s \times s|$ vectors of $N_r$ dimensionality, each of which corresponds to a single pixel.
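The key-addressing and value-reading steps can be sketched together as below. This is a toy NumPy illustration under simplifying assumptions: the key/value encoders $\phi_K$ and $\phi_V$ are $1\times1$-style linear maps, queries and keys share a single $N_r$-dimensional space so that their dot product is defined, and coordinates are 0-indexed.

```python
import numpy as np

rng = np.random.default_rng(3)
s_, h_, N_r, N_m, L = 4, 2, 3, 6, 5
p = s_ // h_

m = rng.normal(size=(L, h_, h_, N_m))        # memory slots m_{l,i,j}
q = rng.normal(size=(h_, h_, p, p, N_r))     # toy queries q_{i,j,a,b}

# Key and value encodings of the slots (linear sketches of phi_K, phi_V).
phi_K = rng.normal(size=(N_m, N_r)) * 0.1
phi_V = rng.normal(size=(N_m, N_r)) * 0.1
keys = m @ phi_K        # (L, h, h, N_r)
vals = m @ phi_V        # (L, h, h, N_r)

O = np.zeros((s_, s_, N_r))
for i in range(h_):
    for j in range(h_):
        for a in range(p):
            for b in range(p):
                # Key addressing: softmax over the word dimension l.
                logits = keys[:, i, j] @ q[i, j, a, b]
                alpha = np.exp(logits - logits.max())
                alpha /= alpha.sum()
                # Value reading: attention-weighted sum of value slots.
                e = alpha @ vals[:, i, j]
                # Map e_{i,j,a,b} back to the fine-grained pixel grid.
                O[i * p + a, j * p + b] = e

print(O.shape)   # one N_r-dim refinement vector per pixel
```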
\subsection{Iterative Multi-Headed Mechanism (IMHM)}
\label{subsection:MIR}
Current methods generate refinement features only once at each refinement stage and attempt to address all image aspects in a single-shot. This limits the precision with which each refinement stage can learn to improve the prior image. In order to make it easier for each refinement stage to precisely address multiple image aspects, we introduce a novel iterative multi-headed mechanism that makes multiple distinct modifications to the prior image features under common discriminator feedback. Each head of our mechanism has a separate spatial dynamic memory module formed from $R_{k-1}$ and $W$. For the first refinement head, we use the previous image features $R_{k-1}$ to obtain a query matrix and extract a set of refinement features $O_1$ exactly as described in Section \ref{subsection:SDM}. Then, we fuse $O_1$ and $R_{k-1}$ using the fusion mechanism introduced by \cite{zhu2019dm} (described in Section \ref{subsection:fuse}) to obtain an updated set of image features $U_1$. If we use only a single refinement head, then this becomes our response for the refinement stage $k$. However, if we use more than one refinement head, then for the next head, we use $U_1$ to obtain a query matrix. That is, we follow the same mechanism outlined in Section \ref{subsection:SDM}, but replace $R_{k-1}$ with $U_1$ in Eq.\ref{eq:query_formation}. Doing so, we extract a second set of refinement features $O_2$, and we fuse $O_2$ and $U_1$ to obtain updated image features $U_2$. We proceed in this iterative fashion until we have used all of our refinement heads. The final updated image features are fused with the original image features $R_{k-1}$ in a skip-connection to obtain the final response of the refinement stage $k$. That is, if we have $T$ refinement heads:
\begin{gather}
U_t = fuse(U_{t-1},\; O_{t},\; P_t,\; \rho_t) \;\hspace*{0.5cm};\hspace*{0.5cm}\;response_k = fuse(U_{T},\; R_{k-1},\; P^{skip},\; \rho^{skip})
\end{gather}
Notice, we use separate parameter-matrix and bias terms $P$ and $\rho$ for each fusion operation, so that we combine refinement features and image features in a custom way for each head. Then, $response_k$ is passed through several residual blocks \citep{he2016deep} and an upsampling block to obtain higher resolution image features $R_k$. Each upsampling block consists of a nearest neighbor upsampling layer and a $3\times3$ convolution operation. Finally, a refined image $x_k$ is predicted from $R_k$ using a $3\times3$ convolution operation.
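The iterative multi-headed loop can be sketched as follows. In this toy NumPy illustration, `head_refine` is a stand-in for one SDM head (a real head would build its memory from $R_{k-1}$ and $W$ and use the passed-in features only for query formation), and the learned gating fusion is reduced to an elementwise gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(a, b, P):
    """Simplified elementwise sketch of DM-GAN's adaptive gating fusion."""
    g = sigmoid(P * (a + b))
    return g * a + (1.0 - g) * b

rng = np.random.default_rng(4)
s_, N_r, T = 4, 3, 6                      # T refinement heads

R_prev = rng.normal(size=(s_, s_, N_r))   # R_{k-1}

def head_refine(query_feats, head):
    """Stand-in for one SDM head: returns refinement features O_t
    computed from the query features of the previous head's output."""
    rng_h = np.random.default_rng(100 + head)
    M = rng_h.normal(size=(N_r, N_r)) * 0.1
    return query_feats @ M

P = [rng.normal(size=(s_, s_, N_r)) * 0.1 for _ in range(T + 1)]
U = R_prev
for t in range(T):
    O_t = head_refine(U, t)       # queries come from U_{t-1}
    U = fuse(U, O_t, P[t])        # updated image features U_t

# Skip-connection back to the original image features R_{k-1}.
response = fuse(U, R_prev, P[T])
print(response.shape)
```

Each head thus sees the image as already modified by the previous heads, which is what allows the stage to make multiple distinct refinements under a single round of discriminator feedback.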
\subsection{Objective Function}
\label{subsection:OBJfunc}
The objective function for our generator network is defined as:
\begin{gather}
L = \lambda_1 L_{CA} + \sum_{k=2}^K \lambda_2 L_{RED_k} + \sum_{k=1}^K \left(L_{G_k} + \lambda_3 L_{DAMSM_k}\right)
\end{gather}
Where $L_{CA}$ denotes the conditioning augmentation loss \citep{zhang2017stackgan}, $G_{k}$ denotes the generator of the $k^{th}$ stage so that $L_{G_k}$ denotes the adversarial loss for $G_k$, $L_{RED_k}$ denotes our redundancy loss for the $k^{th}$ stage, and $L_{DAMSM_k}$ denotes the DAMSM text-image matching loss \citep{xu2018attngan} for the $k^{th}$ stage. $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyperparameters that combine the various losses.\\
\hspace*{0.3cm}\textbf{Redundancy Loss:}
To encourage each head of a refinement stage to focus on separate image aspects, we average region-wise information of each head's output refinement features and penalize similarity between different refinement heads. That is, for $T$ refinement heads:
\begin{gather}
f(t) = \frac{1}{|s\times s|}\sum_{u=1}^{s}\sum_{v=1}^{s}o_{u,v} \hspace*{0.1cm} \;\hspace*{0.5cm};\hspace*{0.5cm}\;
L_{RED_k} = \sum_{i=1}^T\;\sum_{j=i+1}^T sim(f(i), f(j))
\end{gather}
Where $o_{u,v} \in O_t \text{ in stage } k$ (see Section \ref{subsection:MIR}) and $sim$ is the cosine similarity between vectors. We call this sum of pairwise similarity $L_{RED_k}$ the ``redundancy loss'' of the $k^{th}$ refinement stage.\\
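The redundancy loss is straightforward to compute; a minimal NumPy sketch (with toy shapes) is:

```python
import numpy as np

def redundancy_loss(O_heads):
    """Sum of pairwise cosine similarities between the spatially averaged
    refinement features of each head. O_heads has shape (T, s, s, N_r)."""
    f = O_heads.mean(axis=(1, 2))                      # f(t), shape (T, N_r)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    T = f.shape[0]
    return sum(float(f[i] @ f[j]) for i in range(T) for j in range(i + 1, T))

rng = np.random.default_rng(5)
O = rng.normal(size=(6, 8, 8, 5))     # T=6 heads on an 8x8 feature map
print(redundancy_loss(O))             # near zero for uncorrelated heads

# Identical heads are maximally penalized: T*(T-1)/2 = 15 pairs,
# each with cosine similarity 1.
O_same = np.repeat(O[:1], 6, axis=0)
print(redundancy_loss(O_same))
```

Minimizing this term pushes the averaged responses of different heads apart, encouraging each head to specialize on separate image aspects.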
\hspace*{0.3cm}\textbf{Adversarial Loss:} The adversarial loss for $G_k$ is defined as:
\begin{gather}
\footnotesize
L_{G_k} = -\frac{1}{2}[\mathbb{E}_{x\sim p_{G_k}} \log D_k(x) + \mathbb{E}_{x\sim p_{G_k}} \log D_k(x,s)]
\end{gather}
$D_{k}$ is the discriminator network for the $k^{th}$ stage of generation. The first term provides feedback on image realism independent of the input text, and the second term provides feedback on the realism of the image in light of the input text. In alternation with the adversarial training of $G_k$, each discriminator $D_k$ is trained to classify images as real or fake
by minimizing the discriminator loss $L_{D_k}$.
\begin{equation}
\tiny{
\begin{split}
L_{D_k} = & \underbrace{-\frac{1}{2}[\mathbb{E}_{x\sim p_{data}} \log D_k(x) + \mathbb{E}_{x\sim p_{G_k}} \log (1 - D_k(x))]}_\text{unconditional loss} + \underbrace{-\frac{1}{2}[\mathbb{E}_{x\sim p_{data}} \log D_k(x,s) + \mathbb{E}_{x\sim p_{G_k}} \log (1 - D_k(x,s))]}_\text{conditional loss}
\end{split}}
\end{equation}
Where the unconditional component distinguishes synthetic and real images independent of the input text, and the conditional component distinguishes them in light of the input text.
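The two adversarial objectives above can be sketched as scalar functions of the discriminator's sigmoid outputs. In this simplified illustration, the batch expectations are reduced to single representative probabilities, so the snippet only demonstrates the qualitative behaviour of the losses, not a training loop.

```python
import numpy as np

def d_loss(d_real_u, d_fake_u, d_real_c, d_fake_c, eps=1e-8):
    """Discriminator loss: unconditional + conditional binary cross-entropy.
    Inputs are D's sigmoid outputs in (0, 1) on real and generated images."""
    uncond = -0.5 * (np.log(d_real_u + eps) + np.log(1 - d_fake_u + eps))
    cond = -0.5 * (np.log(d_real_c + eps) + np.log(1 - d_fake_c + eps))
    return float(uncond + cond)

def g_loss(d_fake_u, d_fake_c, eps=1e-8):
    """Generator adversarial loss for one stage: G is rewarded when D
    assigns high realism scores to its generated images."""
    return float(-0.5 * (np.log(d_fake_u + eps) + np.log(d_fake_c + eps)))

# A confident, correct discriminator drives L_D toward 0 and L_G up;
# a fooled discriminator does the opposite.
print(d_loss(0.99, 0.01, 0.99, 0.01))   # small: D is winning
print(g_loss(0.99, 0.99))               # small: G is winning
```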
\section{Experiments}
\label{section:Experiments-label}
\textbf{Datasets:} We evaluate our method on the Caltech-UCSD Birds 200 (CUB) dataset \citep{wah2011caltech} and the Microsoft Common Objects in Context (COCO) dataset \citep{lin2014microsoft}. The CUB dataset contains 8,855 training images and 2,933 test images, with 10 corresponding text
descriptions for each image. The COCO dataset contains 82,783 training images and 40,504 test images, with 5 corresponding text descriptions for each image. We preprocess the datasets according to the methods introduced by \cite{zhang2017stackgan}.\\\\
\textbf{Evaluation metrics:} To evaluate the realism of images, we rely on the Fréchet Inception Distance (FID) \citep{heusel2017gans}. FID computes the distance between synthetic and real-world image distributions based on features extracted from a pre-trained Inception v3 network. A lower FID indicates greater image-realism. To evaluate the relevance of a synthetic image to its generating text-description, we rely on the R-precision introduced by \cite{xu2018attngan}. R-precision is computed as the mean accuracy of using each synthetic image to retrieve one ground truth text-description from among 100 candidates. To evaluate a model on a dataset, we generate 30,000 synthetic images conditioned on text descriptions from the unseen test set.\\\\
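The R-precision check for a single image can be sketched as below. In the AttnGAN protocol the embeddings come from the pretrained DAMSM image and text encoders; here random vectors stand in for those embeddings, so the snippet only illustrates the retrieval rule (rank the ground-truth caption among 100 candidates by cosine similarity).

```python
import numpy as np

def r_precision(img_emb, txt_emb, gt_index):
    """1 if the ground-truth caption (row gt_index of txt_emb) is the
    nearest candidate to the image embedding by cosine similarity."""
    sims = txt_emb @ img_emb / (
        np.linalg.norm(txt_emb, axis=1) * np.linalg.norm(img_emb))
    return int(np.argmax(sims) == gt_index)

rng = np.random.default_rng(6)
d = 64
gt = rng.normal(size=d)                 # ground-truth caption embedding
img = gt + 0.05 * rng.normal(size=d)    # image embedding close to its caption
distractors = rng.normal(size=(99, d))  # 99 mismatched candidate captions
candidates = np.vstack([distractors[:50], gt, distractors[50:]])

print(r_precision(img, candidates, gt_index=50))   # 1 = correct retrieval
```

The reported R-precision is this hit indicator averaged over all 30,000 generated test images.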
\textbf{Implementation Details: }To obtain a sentence-level vector and word-level vectors for a given text description, we use the pretrained bidirectional LSTM text encoder employed by AttnGAN \citep{xu2018attngan}. Our MTWIG stage synthesizes image features at $64\times64$ resolution. Two refinement stages refine these features to $128\times128$ and $256\times256$ resolution respectively. At refinement stages, we use $T=6$ refinement heads and we use $h = 8$ to divide each input image into a coarse $8\times8$ grid map. We use the same discriminator network architecture employed by \cite{zhu2019dm}. Further implementation details are provided in Section \ref{subsection:AID}.
\subsection{Ablative Experiments}
\textbf{Effectiveness of Multi-Tailed Word-level Initial Generation:} In our experiments (Appendix A.1), we find that our MTWIG stage is most effective when used with a kernel size of 3, so that we generate a separate tail for each word tri-gram. To evaluate the effectiveness of our MTWIG(ks=3) stage in multi-stage models, we train our MTWIG(ks=3) method with DM-GAN \citep{zhu2019dm} style refinement stages for 700 epochs on the CUB dataset, and observe that it decreases the FID score achieved by the original DM-GAN model by 7.72\% and increases R-precision by 2.76\% (Table \ref{table:t2}). Figure \ref{fig:ablation} shows the improved visual quality of the refined images. We again point out that previous multi-stage methods \citep{zhang2017stackgan, Zhang2018stackgan++, xu2018attngan, li2019controllable, zhu2019dm} all rely on the same type of initial generation stage, and we expect a similar boost in performance if we replace their initial stage with our MTWIG stage. \\\\
\textbf{Effectiveness of Spatial Dynamic Memory:} In order to evaluate the effectiveness of our SDM-based refinement stages, we compare a multi-stage model that uses our MTWIG(ks=3) stage and DM-GAN's refinement stages against a multi-stage model that uses our MTWIG(ks=3) stage and our single-headed SDM-based refinement stages. Both models are trained on the CUB dataset for 700 epochs. We observe that our SDM-based refinement out-performs DM-GAN's refinement, decreasing FID score by 4.64\%, and boosting R-precision by an additional 0.83\% (Table \ref{table:t2}). Figure \ref{fig:ablation} shows that SDM-based refinement generates images of better visual quality than those formed by DM-GAN's refinement stages for the same initial generation architecture. \\\\
\textbf{Effectiveness of the Iterative Multi-Headed Mechanism:} To evaluate the effectiveness of our IMHM refinement, we compare the model that uses MTWIG(ks=3) and single-headed SDM-based refinement stages against our full MSMT-GAN model - that uses MTWIG(ks=3) and six SDM-based refinement heads for each stage. As before, both models are trained for 700 epochs on the CUB dataset. We find that refinement stages that use our multi-headed mechanism out-perform single-headed refinement stages, decreasing FID score by 2.38\%, and boosting R-precision by another 1.03\% (Table \ref{table:t2}). Visually, besides overall better image quality, we find that images generated by multi-headed refinement stages possess text-irrelevant content that is far more detailed than that observed in images generated by single-headed refinement stages (Figure \ref{fig:ablation}).
\begin{table}[!h]
\caption{FID and R-precision of DM-GAN and ablative versions of our model. (With all of our variants trained for 700 epochs on CUB)}
\label{table:t2}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{\hspace*{6.5cm} CUB} \\
\cmidrule(r){2-3}
Method & FID $\downarrow$ & R-prcn (\%) $\uparrow$ \\
\midrule
DM-GAN & 11.91 & 76.58 ± 0.53\\
\midrule
MTWIG w/ DM-GAN's refinement & 10.99 & 79.37 ± 0.73 \\
MTWIG w/ SDM refinement & 10.48 & 80.20 ± 0.67 \\
MTWIG w/ SDM and IMHM refinement (MSMT-GAN) & \textbf{10.23} & \textbf{81.23 ± 0.68} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=\linewidth]{ablation_eg.png}
\caption{Comparison of DM-GAN with ablative versions of our model trained for 700 epochs on the CUB dataset.}
\label{fig:ablation}
\end{figure*}
\subsection{Comparison with State of the Art}
\label{subsection:Sota-label}
To compare our architecture against the previous state-of-the-art for the task of text-to-image generation, we train MSMT-GAN models for 1000 epochs on the CUB dataset and for 210 epochs on the COCO dataset. As shown in Table \ref{table:t3}, our MSMT-GAN decreases the previous lowest reported FID score by 21.58\% on the CUB dataset, marking a significant improvement in image realism, and also boosts the previous best reported CUB R-precision by 4.24\%, marking a large improvement in the similarity of synthetic images to their generating text. As shown in Table \ref{table:t4} and Table \ref{table:t3}, our model is comparable in size to previous methods, and outperforms the next closest contender of similar size for COCO (DM-GAN) by 4.21\% on FID score, making it highly competitive with the current state-of-the-art. We also observe a slight improvement of 0.23\% on COCO R-precision. Qualitatively, we observe that synthetic images generated by our model are typically sharper and more realistic than those generated by prior methods (Figure \ref{fig:sota}). In particular, we observe that our method generates synthetic images that possess greater detail and that are better matches for the generating text.
\begin{table}[!h]
\caption{Number of parameters required at test-time (including text-encoders) for the previous state-of-the-art in comparison to our MSMT-GAN (approximate values reported in millions).}
\label{table:t4}
\centering
\begin{tabular}{lll}
\toprule
Method & \# Parameters for CUB & \# Parameters for COCO\\
\midrule
AttnGAN & 9.16M & 22.43M\\
ControlGAN & 30.72M & 45.36M\\
DM-GAN & 23.52M & 30.96M\\
DF-GAN & 14.32M & 20.87M\\
XMC-GAN & - &>111M\\
\midrule
Our MSMT-GAN & 48.11M & 55.16M\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!h]
\caption{FID and R-precision of the previous state-of-the-art and our MSMT-GAN trained for 1000 epochs on CUB and 210 epochs on COCO.}
\label{table:t3}
\centering
\begin{tabular}{lllll}
\toprule
\multicolumn{2}{c}{\hspace*{4cm} CUB} & \multicolumn{2}{c}{\hspace*{4cm} COCO} \\
\cmidrule(r){2-3} \cmidrule(r){4-5}
Method & FID $\downarrow$ & R-prcn (\%) $\uparrow$ & FID $\downarrow$ & R-prcn (\%) $\uparrow$\\
\midrule
AttnGAN\textsuperscript{\ref{note1}} & 14.01 & 67.82 ± 4.43 & 29.53 & 85.47 ± 3.69 \\
ControlGAN & - & 69.33 ± 3.23 & - & 82.43 ± 2.43 \\
DM-GAN\textsuperscript{\ref{note1}} & 11.91 & 76.58 ± 0.53 & 24.24 & 92.23 ± 0.37 \\
DF-GAN\textsuperscript{\ref{note1}} & 13.48 & - & 33.29 & - \\
\midrule
Our MSMT-GAN& \textbf{9.34} & \textbf{80.82 ± 0.54} & \textbf{23.22} & \textbf{92.46 ± 0.28} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=\linewidth]{sota.png}
\caption{Comparison of MSMT-GAN with state-of-the art models on the CUB and COCO datasets.}
\label{fig:sota}
\end{figure*}
\section{Conclusion}
In this work, we have proposed the MSMT-GAN architecture for the task of text-to-image synthesis. First, we introduced a novel Multi-Tailed Word-level Initial Generation stage (MTWIG) that explicitly generates separate sets of image features for each word n-gram. Second, we proposed a novel Spatial Dynamic Memory (SDM) module to contextualize text representations by image-region. Third, we introduced a novel Iterative Multi-Headed Mechanism (IMHM) of image refinement to make it easier for each refinement stage to precisely address multiple image aspects. Our ablative experiments clearly demonstrate the effectiveness of each of these three components, and we have shown that we are able to boost the performance of prior methods by replacing their initial stage with our MTWIG stage. Experiment results further demonstrate that our MSMT-GAN model significantly out-performs the previous state of the art on the CUB dataset, decreasing the lowest reported FID score by 21.58\% and boosting the CUB R-precision by 4.24\%. On the COCO dataset, we have demonstrated that MSMT-GAN is highly competitive with current methods based on image realism and model size. In future work, we aim to design a discriminator model that provides more region-specific feedback than existing methods, to use in conjunction with our MSMT-GAN generator architecture.
\footnotetext[1]{\label{note1}We make our comparisons against the pretrained models released by the authors, and we report results using the official implementations of FID score.}
\section{Acknowledgements}
This work was performed while Amrit Diggavi Seshadri was a Post Baccalaureate Fellow at the Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), at the Indian Institute of Technology Madras (IIT Madras). We would like to thank the center for sponsoring this fellowship and for providing us with sufficient resources.
\begin{abstract}
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state-of-the-art. Finally, we present an analysis of the sparse event-driven computations to demonstrate the reduced hardware overhead of operating in the spiking domain.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} Spiking Neural Networks, Event-Driven Neural Networks, Sparsity, Neuromorphic Computing, Visual Recognition}
\end{abstract}
\section{Introduction}
Spiking Neural Networks (SNNs) are a significant shift from the standard way of operation of Artificial Neural Networks (\cite {farabet2012comparison}). Most of the success of deep learning models of neural networks in complex pattern recognition tasks is based on neural units that receive, process and transmit analog information. Such Analog Neural Networks (ANNs), however, disregard the fact that the biological neurons in the brain (the computing framework by which they are inspired) process binary spike-based information. Driven by this observation, the past few years have witnessed significant progress in the modeling and formulation of training schemes for SNNs as a new computing paradigm that can potentially replace ANNs as the next generation of Neural Networks.
In addition to the fact that SNNs are inherently more biologically plausible, they offer the prospect of event-driven hardware operation. Spiking Neurons process input information only on the receipt of incoming binary spike signals.
Given a sparsely-distributed input spike train, the hardware overhead (power consumption) of such spike- or event-based hardware would be significantly reduced, since large sections of the network that are not driven by incoming spikes can be power-gated (\cite{chen1998estimation}). However, the vast majority of research on SNNs has been limited to very simple and shallow network architectures on relatively simple digit recognition datasets like MNIST (\cite{lecun1998gradient}), while only a few works report their performance on more complex standard vision datasets like CIFAR-10 (\cite{krizhevsky2009learning}) and ImageNet (\cite{russakovsky2015imagenet}).
The main reason behind their limited performance stems from the fact that SNNs are a significant shift from the operation of ANNs due to their temporal information processing capability. This has necessitated a rethinking of training mechanisms for SNNs.
\section{Related Work}
Broadly, there are two main categories for training SNNs - supervised and unsupervised. Although unsupervised learning mechanisms like Spike-Timing Dependent Plasticity (STDP) are attractive for the implementation of low-power on-chip local learning, they are still outperformed by supervised networks on even simple digit recognition benchmarks like the MNIST dataset (\cite{diehl2015unsupervised}). Driven by this fact, a particular category of supervised SNN learning algorithms attempts to train ANNs using standard training schemes like backpropagation (to leverage the superior performance of standard training techniques for ANNs) and subsequently convert them to event-driven SNNs for network operation (\cite{diehl2015fast,cao2015spiking,zhao2015feedforward,perez2013mapping}). This can be particularly appealing for NN implementations in low-power neuromorphic hardware specialized for SNNs (\cite{merolla2014million,akopyan2015truenorth}) or interfacing with silicon cochleas or event-driven sensors (\cite{posch2014retinomorphic,posch2011qvga}). Our work falls in this category and is based on the ANN-SNN conversion scheme proposed by authors in Ref. (\cite{diehl2015fast}). However, while prior work considers the ANN operation only during the conversion process, we show that considering the actual SNN operation during the conversion step is crucial for achieving minimal loss in classification accuracy. To that effect, we propose a novel weight-normalization technique that ensures that the actual SNN operation is in the loop during the conversion phase.
Note that this work tries to exploit neural activation sparsity by converting networks to the spiking domain for power-efficient hardware implementation and are complementary to efforts aimed at exploring sparsity in synaptic connections (\cite{han2015deep}).
\section{Main Contributions}
The specific contributions of our work are as follows:
(i) As will be explained in later sections, there are various architectural constraints involved for training ANNs that can be converted to SNNs in a near-lossless manner. Hence, it is unclear whether the proposed techniques would scale to larger and deeper architectures for more complicated tasks. \textbf{We provide proof of concept experiments that deep SNNs (extending from 16 to 34 layers) can provide competitive accuracies over complex datasets like CIFAR-10 and ImageNet.}
(ii) \textbf{We propose a new ANN-SNN conversion technique that statistically outperforms state-of-the-art techniques.} We report a classification error of \textbf{8.45\%} on the CIFAR-10 dataset, which is the best-performing result reported for any SNN network to date. For the first time, we report an SNN performance on the entire ImageNet 2012 validation set. We achieve a \textbf{30.04\%} top-1 error rate and a \textbf{10.99\%} top-5 error rate for VGG-16 architectures.
(iii) We explore Residual Network (ResNet) architectures as a potential pathway to enable deeper SNNs. We present insights and design constraints that are required to ensure ANN-SNN conversion for ResNets. We report a classification error of \textbf{12.54\%} on the CIFAR-10 dataset and a \textbf{34.53\%} top-1 error rate and \textbf{13.67\%} top-5 error rate on the ImageNet validation set. \textbf{This is the first work that attempts to explore SNNs with residual network architectures.}
(iv) \textbf{We demonstrate that SNN network sparsity significantly increases as the network depth increases.} This further motivates the exploration of converting ANNs to SNNs for event-driven operation to reduce compute overhead.
\section{Preliminaries}
\label{Preliminaries}
\subsection{Input and Output Representation}
\begin{figure}[!t]
\centering
\includegraphics[width=4.5in]{fig1}
\caption{The extreme left panel depicts a particular input image from the CIFAR-10 dataset with per-pixel mean (over the training set) subtracted that is provided as input to the original ANN. The middle panel represents a particular instance of the Poisson spike train generated from the analog input image. The accumulated events provided to the SNN over $1000$ timesteps are depicted in the extreme right panel. This justifies the fact that the input image is being rate-encoded over time for SNN operation.}
\label{fig1_a}
\end{figure}
The main difference between ANN and SNN operation is the notion of time. While ANN inputs are static, SNNs operate based on dynamic binary spiking inputs as a function of time.
The neural nodes also receive and transmit binary spike input signals in SNNs, unlike in ANNs, where the inputs and outputs of the neural nodes are analog values.
In this work, we consider a rate-encoded network operation where the average number of spikes transmitted as input to the network over a large enough time window is approximately proportional to the magnitude of the original ANN inputs (pixel intensity in this case). The duration of the time window is dictated by the desired network performance (for instance, classification accuracy) at the output layer of the network. A Poisson event-generation process is used to produce the input spike train to the network. Every time-step of SNN operation is associated with the generation of a random number whose value is compared against the magnitude of the corresponding input. A spike event is triggered if the generated random number is less than the value of the corresponding pixel intensity. This process ensures that the average number of input spikes in the SNN is proportional to the magnitude of the corresponding ANN inputs and is typically used to simulate an SNN for recognition tasks based on datasets for static images (\cite{diehl2015fast}). Fig. \ref{fig1_a} depicts a particular timed-snapshot of the input spikes transmitted to the SNN for a particular image from the CIFAR-10 dataset. Note that since we are considering per-pixel mean-subtracted images, the input layer receives spikes whose rate is proportional to the input magnitude with sign equal to the input sign. However, for subsequent layers all spikes are positive in sign since they are generated by spiking neurons in the network. SNN operation of such networks is ``pseudo-simultaneous'', i.e. a particular layer operates immediately on the incoming spikes from the previous layer and does not have to wait for multiple time-steps for information from the previous layer neurons to get accumulated. Given a Poisson-generated spike train being fed to the network, spikes will be produced at the network outputs.
Inference is based on the cumulative spike count of neurons at the output layer of the network over a given time-window.
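The rate-encoding scheme described above can be made concrete with a minimal NumPy sketch (illustrative code, not the actual simulation framework used in this work; the function name and array shapes are our own):

```python
import numpy as np

def poisson_spike_train(image, timesteps, seed=0):
    """Rate-encode an analog input into a binary spike train.

    At every time-step, a uniform random number is drawn per pixel and a
    spike is emitted if it is smaller than the pixel magnitude, so the
    average spike rate is proportional to |pixel|.  The spike carries the
    pixel's sign, which matters for mean-subtracted inputs at the first layer.
    """
    rng = np.random.default_rng(seed)
    spikes = np.empty((timesteps,) + image.shape)
    for t in range(timesteps):
        fired = rng.random(image.shape) < np.abs(image)
        spikes[t] = fired * np.sign(image)
    return spikes

# A pixel of magnitude 0.5 spikes on roughly half of the time-steps.
train = poisson_spike_train(np.array([0.5, -0.25]), timesteps=10000)
rates = np.abs(train).mean(axis=0)   # approximately [0.5, 0.25]
```

Averaging the absolute spike counts over the time window recovers the original input magnitudes, which is precisely the sense in which the input is rate-encoded.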
\subsection{ANN and SNN Neural Operation}
ANN to SNN conversion schemes usually consider Rectified Linear Unit (ReLU) as the ANN activation function. For a neuron receiving inputs $x_i$ through synaptic weights $w_i$, the ReLU neuron output $y$ is given by,
\begin{equation}
y = max \left( 0,\sum_{i} w_i.x_i \right)
\end{equation}
Although ReLU neurons are typically used in a large number of machine learning tasks at present, the main reason behind their usage for ANN-SNN conversion schemes is that they bear functional equivalence to an Integrate-Fire (IF) Spiking Neuron without any leak and refractory period (\cite{cao2015spiking,diehl2015fast}).
Note that this is a particular type of Spiking Neuron model (\cite{izhikevich2003simple}). Let us consider the ANN inputs $x_i$ encoded in time as a spike train $\mathbb{X}_i(t)$, where the average value of $\mathbb{X}_i(t)$, $\mathbb{E}[\mathbb{X}_i(t)] \propto x_i$ (for the rate-encoded network being considered in this work). The IF Spiking Neuron keeps track of its membrane potential, $v_{mem}$, which integrates incoming spikes and generates an output spike whenever the membrane potential crosses a particular threshold $v_{th}$. The membrane potential is reset to zero at the generation of an output spike.
All neurons are reset whenever a spike train corresponding to a new image/pattern is presented.
The IF Spiking Neuron dynamics as a function of time-step, $t$, can be described by the following equation,
\begin{equation}
v_{mem}(t+1) = v_{mem}(t) + \sum_{i} w_i.\mathbb{X}_i(t)
\label{lif_a}
\end{equation}
Note that the neuron dynamics is independent of the actual magnitude of the time-step. Let us first consider the simple case of a neuron being driven by a single input $\mathbb{X}(t)$ and a positive synaptic weight $w$. Due to the absence of any leak term in the neural dynamics, it is straightforward to show that the corresponding output spiking rate of the neuron is given by $\mathbb{E}[\mathbb{Y}(t)] \propto \mathbb{E}[\mathbb{X}(t)]$, with the proportionality factor being dependent on the ratio of $w$ and $v_{th}$.
In the case when the synaptic weight is negative, the output spiking activity of the IF neuron is zero since the neuron is never able to cross the firing potential $v_{th}$, mirroring the functionality of a ReLU.
The higher the ratio of the threshold with respect to the weight, the more time is required for the neuron to spike, thereby reducing the neuron spiking rate, $\mathbb{E}[\mathbb{Y}(t)]$, or equivalently increasing the time-delay for the neuron to generate a spike. A relatively high firing threshold can cause a huge delay for neurons to generate output spikes.
For deep architectures, such a delay can quickly accumulate and cause the network to not produce any spiking outputs for relatively long periods of time. On the other hand, a relatively low threshold causes the SNN to lose any ability to distinguish between different magnitudes of the spike inputs being accumulated to the membrane potential (the term $\sum_{i} w_i.\mathbb{X}_i(t)$ in Eq. \ref{lif_a}) of the Spiking Neuron, causing it to lose evidence during the membrane potential integration process. This, in turn, results in accuracy degradation of the converted network. Hence, an appropriate choice of the ratio of the neuron threshold to the synaptic weights is essential to ensure minimal loss in classification accuracy during the ANN-SNN conversion process (\cite{diehl2015fast}). Consequently, most of the research work in this field has been concentrated on outlining appropriate algorithms for threshold-balancing, or equivalently, weight normalizing different layers of a network to achieve near-lossless ANN-SNN conversion.
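The neural dynamics of Eq. \ref{lif_a}, together with the threshold trade-off discussed above, can be sketched in a few lines (illustrative Python, assuming the reset-to-zero mechanism described earlier):

```python
import numpy as np

def if_neuron(weighted_input, v_th):
    """IF neuron without leak or refractory period.

    `weighted_input[t]` is the summed weighted spike input at step t
    (the term sum_i w_i * X_i(t)); the membrane potential is reset to
    zero on every output spike.
    """
    v_mem, out = 0.0, []
    for z in weighted_input:
        v_mem += z
        if v_mem >= v_th:
            out.append(1.0)
            v_mem = 0.0
        else:
            out.append(0.0)
    return np.array(out)

# Constant drive w = 0.5 against v_th = 1.0: the neuron fires on every
# other step, so the output rate tracks the ratio w / v_th = 0.5.
rate = if_neuron([0.5] * 1000, v_th=1.0).mean()
# Raising the threshold lowers the rate and increases the spiking delay.
slow = if_neuron([0.5] * 1000, v_th=4.0).mean()
```

Here the output rate drops from $0.5$ to $0.125$ as the threshold is raised from $1$ to $4$, illustrating how the ratio of weight to threshold controls both the spiking rate and the temporal delay.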
\subsection{Architectural Constraints}
\subsubsection{Bias in Neural Units}
Typically, neural units used for ANN-SNN conversion schemes are trained without any bias term (\cite{diehl2015fast}). This is due to the fact that optimizing the bias term in addition to the spiking neuron threshold expands the parameter space to be explored, thereby making the ANN-SNN conversion process more difficult.
The requirement of bias-less neural units also entails that the Batch Normalization technique (\cite{ioffe2015batch}) cannot be used as a regularizer during the training process, since it shifts the inputs to each layer of the network to ensure that each layer is provided with inputs having zero mean. Instead, we use dropout (\cite{srivastava2014dropout}) as the regularization technique. This technique simply masks portions of the input to each layer by utilizing samples from a Bernoulli distribution, where each input to the layer has a specified probability of being dropped.
\subsubsection{Pooling Operation}
Deep convolutional neural network architectures typically consist of intermediate pooling layers to reduce the size of the convolution output maps. While various choices exist for performing the pooling mechanism, the two popular choices are either max-pooling (maximum neuron output over the pooling window) or spatial-averaging (two-dimensional average pooling operation over the pooling window). Since the neuron activations are binary in SNNs instead of analog values, performing max-pooling would result in significant information loss for the next layer. Consequently, we consider spatial-averaging as the pooling mechanism in this work (\cite{diehl2015fast}).
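The argument above can be illustrated with a small example (illustrative Python; a $2\times 2$ pooling window is assumed): on a binary spike map, max-pooling saturates to $1$ whenever any unit in the window spikes, whereas spatial averaging preserves the graded spike count.

```python
import numpy as np

def avg_pool_2x2(spike_map):
    """2x2 spatial-average pooling over a binary (H, W) spike map."""
    h, w = spike_map.shape
    return spike_map.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def max_pool_2x2(spike_map):
    """2x2 max pooling over the same map, for comparison."""
    h, w = spike_map.shape
    return spike_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

m = np.array([[1., 0., 1., 1.],
              [0., 0., 1., 1.]])
avg = avg_pool_2x2(m)   # distinguishes a sparse window from a dense one
mx  = max_pool_2x2(m)   # both windows collapse to the same value 1
```

The averaged map still separates a window with a single spike from a fully active one, which is the information that max-pooling discards for the next layer.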
\section{Deep Convolutional SNN Architectures: VGG}
As mentioned previously, our work is based on the proposal outlined by authors in Ref. (\cite{diehl2015fast}) wherein the neuron threshold of a particular layer is set equal to the maximum activation of all ReLUs in the corresponding layer (by passing the entire training set through the trained ANN once after training is completed). Such a ``Data-Based Normalization'' technique was evaluated for three-layered fully connected and convolutional architectures on the MNIST dataset (\cite{diehl2015fast}). Note that this process is referred to as ``weight-normalization'' and ``threshold-balancing'' interchangeably in this text. As mentioned before, the goal of this work is to optimize the ratio of the synaptic weights with respect to the neuron firing threshold, $v_{th}$.
Hence, either all the synaptic weights preceding a neural layer are scaled by a normalization factor $w_{norm}$ equal to the maximum neural activation and the threshold is set equal to $1$ (``weight-normalization''), or the threshold $v_{th}$ is set equal to the maximum neuron activation for the corresponding layer with the synaptic weights remaining unchanged (``threshold-balancing''). Both operations are exactly equivalent mathematically.
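The equivalence of the two operations can be checked directly with a toy IF simulation (illustrative Python; the weights and the normalization factor are arbitrary, and the factor is chosen as a power of two so that the two runs also agree bit-exactly in floating point):

```python
import numpy as np

def run_if(weights, spike_in, v_th):
    """One IF neuron driven by a (T, n_in) binary spike train."""
    v_mem, out = 0.0, []
    for x in spike_in:
        v_mem += float(np.dot(weights, x))
        if v_mem >= v_th:
            out.append(1)
            v_mem = 0.0
        else:
            out.append(0)
    return out

rng = np.random.default_rng(1)
w = np.array([0.8, 0.3])
spike_in = (rng.random((100, 2)) < 0.6).astype(float)
w_norm = 2.0   # stand-in for the maximum activation of this layer

# ``Weight-normalization'': scale the weights, keep v_th = 1.
a = run_if(w / w_norm, spike_in, v_th=1.0)
# ``Threshold-balancing'': keep the weights, set v_th = w_norm.
b = run_if(w, spike_in, v_th=w_norm)
assert a == b   # identical output spike trains
```

Dividing every weight by $w_{norm}$ while dividing the threshold by the same factor leaves every threshold comparison unchanged, so the output spike trains coincide.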
\subsection{Proposed Algorithm: \textsc{Spike}-\textsc{Norm}}
However, the above algorithm leads us to the question: \textbf{Are ANN activations representative of SNN activations?} Let us consider a particular example for the case of maximum activation for a single ReLU. The neuron receives two inputs, namely $0.5$ and $1$. Let us consider unity synaptic weights in this scenario. Since the maximum ReLU activation is $1.5$, the neuron threshold would be set equal to $1.5$. However, when this network is converted to the SNN mode, both the inputs would be propagating binary spike signals. The ANN input, equal to $1$, would be converted to spikes transmitting at every time-step while the other input would transmit spikes approximately $50\%$ of the duration of a large enough time-window. Hence, the actual summation of spike inputs received by the neuron per time-step would be $2$ for a large number of samples, which is higher than the spiking threshold ($1.5$). Clearly, some information loss would take place due to the lack of this evidence integration.
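The toy example above can also be checked numerically (illustrative Python): with the ANN-derived threshold of $1.5$, the converted IF neuron fires at a rate well below the saturated rate of $1.0$ that the ANN activation predicts, because the reset discards the surplus evidence whenever both inputs spike together.

```python
import numpy as np

rng = np.random.default_rng(0)
T, v_th = 20000, 1.5           # threshold = max ANN activation (1.0 + 0.5)
x1 = np.ones(T)                           # ANN input 1.0: spikes every step
x2 = (rng.random(T) < 0.5).astype(float)  # ANN input 0.5: spikes ~half the steps

v_mem, n_out = 0.0, 0
for t in range(T):
    v_mem += x1[t] + x2[t]     # unity synaptic weights
    if v_mem >= v_th:
        n_out += 1
        v_mem = 0.0            # reset discards the surplus above v_th

snn_rate = n_out / T           # well below the ANN-predicted rate of 1.0
```

In this simulation the measured output rate settles near $2/3$ rather than the predicted $1.0$, quantifying the information loss caused by the mismatch between ANN activations and the actual SNN spike statistics.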
Driven by this observation, we propose a weight-normalization technique that balances the threshold of each layer by considering the actual operation of the SNN in the loop during the ANN-SNN conversion process. The algorithm normalizes the weights of the network sequentially for each layer. Given a particular trained ANN, the first step is to generate the input Poisson spike train for the network over the training set for a large enough time-window. The Poisson spike train allows us to record the maximum summation of weighted spike-input (the term $\sum_{i} w_i.\mathbb{X}_i(t)$ in Eq. \ref{lif_a}, hereafter referred to as the maximum SNN activation in this text) that would be received by the first neural layer of the network. In order to minimize the temporal delay of the neuron and simultaneously ensure that the neuron firing threshold is not too low, we weight-normalize the first layer depending on the maximum spike-based input received by the first layer. After the threshold of the first layer is set, we are provided with a representative spike train at the output of the first layer which enables us to generate the input spike-stream for the next layer. The process is continued sequentially for all the layers of the network. The main difference between our proposal and prior work (\cite{diehl2015fast}) is the fact that the proposed weight-normalization scheme accounts for the actual SNN operation during the conversion process. As we will show in the Results section, this scheme is crucial to ensure near-lossless ANN-SNN conversion for significantly deep architectures and for complex recognition problems. We evaluate our proposal for the VGG-16 network (\cite{simonyan2014very}), a standard deep convolutional network architecture which consists of a 16 layer deep network composed of $3\times 3$ convolution filters (with intermediate pooling layers to reduce the output map dimensionality with increasing number of maps). The pseudo-code of the algorithm is given below.
\begin{algorithm}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Input Poisson Spike Train $spikes$, Number of Time-Steps $\#timesteps$}
\Output{Weight-normalization / Threshold-balancing factors $v_{th,norm}[i]$ for each neural layer ($net.layer[i]$) of the network $net$}
initialization $v_{th,norm}[i]=0$ $\forall$ $i = 1,..., \#net.layer$\;
\tcp{Set input of 1st layer equal to spike train}
$net.layer[1].input$ = $spikes$\;
\For{$i\leftarrow 1$ \KwTo $\#net.layer$}{
\For{$t\leftarrow 1$ \KwTo $\#timesteps$}{
\tcp{Forward pass spike-train for neuron layer-i characterized by membrane potential $net.layer[i].v_{mem}$ and threshold $net.layer[i].v_{th}$}
$net.layer[i]:forward(net.layer[i].input[t])$ \;
\tcp{Determine threshold-balancing factor according to maximum SNN activation, max($net.layer[i].weight*net.layer[i].input[t]$), where '*' represents the dot-product operation}
$v_{th,norm}[i]$ = max($v_{th,norm}[i]$,max($net.layer[i].weight*net.layer[i].input[t]$))\;
}
\tcp{Threshold-balance layer-i}
$net.layer[i].v_{th}$ = $v_{th,norm}[i]$\;
\tcp{Record input spike-train for next layer}
$net.layer[i+1].input$ = $net.layer[i]:forward(net.layer[i].input)$\;
}
\caption{\textsc{Spike}-\textsc{Norm}}
\end{algorithm}
\section{Extension to Residual Architectures}
\begin{figure}[!t]
\centering
\includegraphics[width=5.3in]{fig3}
\caption{(a) The basic ResNet functional unit, (b) Design constraints introduced in the functional unit to ensure near-lossless ANN-SNN conversion, (c) Typical maximum SNN activations for a ResNet having junction ReLU layers but the non-identity and identity input paths not having the same spiking threshold. While this is not representative of the case with equal thresholds in the two paths, it does justify the claim that after a few initial layers, the maximum SNN activations decay to values close to unity due to the identity mapping.}
\label{fig3_a}
\end{figure}
Residual network architectures were proposed as an attempt to scale convolutional neural networks to very deep layered stacks (\cite{he2016deep}). Although different variants of the basic functional unit have been explored, we will only consider identity shortcut connections in this text (shortcut type-A according to the paper (\cite{he2016deep})). Each unit consists of two parallel paths. The non-identity path consists of two spatial convolution layers with an intermediate ReLU layer. While the original ResNet formulation considers ReLUs at the junction of the parallel non-identity and identity paths (\cite{he2016deep}), recent formulations do not consider junction ReLUs in the network architecture (\cite{he2016identity}). Absence of ReLUs at the junction point of the non-identity and identity paths was observed to produce a slight improvement in classification accuracy on the CIFAR-10 dataset\footnote{http://torch.ch/blog/2016/02/04/resnets.html}. Due to the presence of the shortcut connections, important design considerations need to be accounted for to ensure near-lossless ANN-SNN conversion. We start with the basic unit, as shown in Fig. \ref{fig3_a}(a), and point-wise impose various architectural constraints with justifications. Note that the discussion in this section is based on threshold-balancing (with synaptic weights remaining unscaled), i.e. the thresholds of the neurons are adjusted to minimize the ANN-SNN conversion loss.
\subsection{ReLUs at each junction point}
As we will show in the Results section, application of our proposed \textsc{Spike}-\textsc{Norm} algorithm on such a residual architecture resulted in a converted SNN that exhibited accuracy degradation in comparison to the original trained ANN. We hypothesize that this degradation can be attributed mainly to the absence of any ReLUs at the junction points. Each ReLU, when converted to an IF Spiking Neuron, imposes a particular amount of characteristic temporal delay (the time interval between an incoming spike and the outgoing spike due to evidence integration). Due to the shortcut connections, spike information from the initial layers gets instantaneously propagated to later layers. The unbalanced temporal delay in the two parallel paths of the network can result in distortion of the spike information being propagated through the network. Consequently, as shown in Fig. \ref{fig3_a}(b), we include ReLUs at each junction point to provide a temporal balancing effect to the parallel paths (when converted to IF Spiking Neurons). An ideal solution would be to include a ReLU in the parallel path, but that would destroy the advantage of the identity mapping.
\subsection{Same threshold of all fan-in layers}
As shown in the next section, direct application of our proposed threshold-balancing scheme still resulted in some amount of accuracy loss in comparison to the baseline ANN accuracy.
However, note that the junction neuron layer receives inputs from the previous junction neuron layer as well as the non-identity neuron path. Since the output spiking activity of a particular neuron is also dependent on the threshold-balancing factor, all the fan-in neuron layers should be threshold-balanced by the same amount to ensure that input spike information to the next layer is rate-encoded appropriately. However, the spiking threshold of the neuron layer in the non-identity path is dependent on the activity of the neuron layer at the previous junction. An observation of the typical threshold-balancing factors for the network without using this constraint (shown in Fig. \ref{fig3_a}(c)) reveals that the threshold-balancing factors mostly lie around unity after a few initial layers. This occurs mainly due to the identity mapping. The maximum summation of spike inputs received by the neurons in the junction layers is dominated by the identity mapping (close to unity). From this observation, we heuristically choose both the thresholds of the non-identity ReLU layer and the identity-ReLU layer equal to $1$. However, the accuracy is still unable to approach the baseline ANN accuracy, which leads us to the third design constraint.
\subsection{Initial Non-Residual Pre-Processing Layers}
An observation of Fig. \ref{fig3_a}(c) reveals that the threshold-balancing factors of the initial junction neuron layers are significantly higher than unity. This can be a primary reason for the degradation in classification accuracy of the converted SNN. We note that the residual architectures used by authors in Ref. (\cite{he2016deep}) use an initial convolution layer with a very wide receptive field ($7 \times 7$ with a stride of $2$) on the ImageNet dataset. The main motive behind such an architecture was to show the impact of increasing depth in their residual architectures on the classification accuracy. Inspired by the VGG-architecture, we replace the first $7\times 7$ convolutional layer by a series of three $3 \times 3$ convolutions where the first two layers do not exhibit any shortcut connections. Addition of such initial non-residual pre-processing layers allows us to apply our proposed threshold-balancing scheme in the initial layers while using a unity threshold-balancing factor for the later residual layers. As shown in the Results section, this scheme significantly assists in achieving classification accuracies close to the baseline ANN accuracy since after the initial layers, the maximum neuron activations decay to values close to unity because of the identity mapping.
\section{Experiments}
\subsection{Datasets and Implementation}
We evaluate our proposals on standard visual object recognition benchmarks, namely the CIFAR-10 and ImageNet datasets. Experiments performed on networks for the CIFAR-10 dataset are trained on the training set images with per-pixel mean subtracted and evaluated on the testing set. We also present results on the much more complex ImageNet 2012 dataset that contains 1.28 million training images and report evaluation (top-1 and top-5 error rates) on the $50,000$ validation set. $224 \times 224$ crops from the input images are used for this experiment.
We use the VGG-16 architecture (\cite{simonyan2014very}) for both datasets. The ResNet-20 configuration outlined in Ref. (\cite{he2016deep}) is used for the CIFAR-10 dataset, while ResNet-34 is used for experiments on the ImageNet dataset. As mentioned previously, we do not utilize any batch-normalization layers. For VGG networks, a dropout layer is used after every ReLU layer except for those layers which are followed by a pooling layer. For Residual networks, we use dropout only for the ReLUs in the non-identity parallel paths but not at the junction layers. We found this crucial for achieving training convergence. Note that we have tested our framework only for the above-mentioned architectures and datasets. There is no selection bias in the reported results.
Our implementation is derived from the Facebook ResNet implementation code for CIFAR and ImageNet datasets available publicly\footnotemark[\value{footnote}]. We use the same image pre-processing steps and scale and aspect-ratio augmentation techniques as used in\footnotemark[\value{footnote}] (for instance, random crop, horizontal flip and color normalization transforms for the CIFAR-10 dataset). We report single-crop testing results, while the error rates can be further reduced with 10-crop testing (\cite{krizhevsky2012imagenet}). Networks used for the CIFAR-10 dataset are trained on $2$ GPUs with a batchsize of $256$ for $200$ epochs, while ImageNet training is performed on $8$ GPUs for $100$ epochs with a similar batchsize. The initial learning rate is $0.05$. The learning rate is divided by $10$ twice, at $81$ and $122$ epochs for the CIFAR-10 dataset and at $30$ and $60$ epochs for the ImageNet dataset. A weight decay of $0.0001$ and a momentum of $0.9$ are used for all the experiments. Proper weight initialization is crucial to achieve convergence in such deep networks without batch-normalization. For a non-residual convolutional layer (for both VGG and ResNet architectures) having kernel size $k \times k$ with $n$ output channels, the weights are initialized from a normal distribution with standard deviation $\sqrt{\frac{2}{k^2n}}$. However, for residual convolutional layers, the standard deviation used for the normal distribution was $\frac{\sqrt{2}}{k^2n}$. We observed this to be important for achieving training convergence, and a similar observation was also outlined in Ref. (\cite{hardt2016identity}), although their networks were trained without both dropout and batch-normalization.
\footnotetext{https://github.com/facebook/fb.resnet.torch}
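The initialization described above can be sketched as follows (illustrative Python; a zero mean is assumed for the normal distribution, and the function name and input-channel count are our own, not part of the original implementation):

```python
import numpy as np

def conv_weight_init(k, n_in, n_out, residual, rng):
    """He-style initialization for conv layers trained without batch-norm.

    Non-residual layers use std = sqrt(2 / (k^2 * n_out)); residual
    layers use the smaller std = sqrt(2) / (k^2 * n_out), which was
    found necessary for training convergence.
    """
    fan = k * k * n_out
    std = np.sqrt(2.0) / fan if residual else np.sqrt(2.0 / fan)
    return rng.normal(0.0, std, size=(n_out, n_in, k, k))

rng = np.random.default_rng(0)
w_plain = conv_weight_init(3, 64, 128, residual=False, rng=rng)
w_res   = conv_weight_init(3, 64, 128, residual=True, rng=rng)
```

Note that the residual-layer standard deviation is smaller by a factor of $\sqrt{k^2 n / 2} \cdot \sqrt{2} = k\sqrt{n}$, i.e. residual layers start with much smaller weights than the plain He initialization would give.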
\subsection{Experiments for VGG Architectures}
Our VGG-16 model architecture follows the implementation outlined in \footnote{https://github.com/szagoruyko/cifar.torch} except that we do not utilize the batch-normalization layers. We used a randomly chosen mini-batch of size 256 from the training set for the weight-normalization process on the CIFAR-10 dataset. While the entire training set can be used for the weight-normalization process, using a representative subset did not impact the results. We confirmed this by running multiple independent runs for both the CIFAR and ImageNet datasets. The standard deviation of the final classification error rate after $2500$ time-steps was $\sim 0.01\%$.
All results reported in this section represent the average of 5 independent runs of the spiking network (since the input to the network is a random process). No notable difference in the classification error rate was observed at the end of $2500$ time-steps and the network outputs converged to deterministic values despite being driven by stochastic inputs.
For the SNN model based weight-normalization scheme (\textsc{Spike}-\textsc{Norm} algorithm) we used $2500$ time-steps for each layer sequentially to normalize the weights.
Table \ref{table_1_a} summarizes our results for the CIFAR-10 dataset. The baseline ANN error rate on the testing set was $8.3\%$. Since the main contribution of this work is to minimize the loss in accuracy during conversion from ANN to SNN for deep-layered networks and not in pushing state-of-the-art results in ANN training, we did not perform any hyper-parameter optimization.
However, note that despite several architectural constraints being present in our ANN architecture, we are able to train deep networks that provide competitive classification accuracies using the training mechanisms described in the previous subsection. Further reduction in the baseline ANN error rate is possible by appropriately tuning the learning parameters. For the VGG-16 architecture, our implementation of the ANN-model based weight-normalization technique, proposed by Ref. (\cite{diehl2015fast}), yielded an average SNN error rate of $8.54\%$, leading to an error increment of $0.24\%$. The error increment was minimized to $0.15\%$ on applying our proposed \textsc{Spike}-\textsc{Norm} algorithm. Note that we consider a strict model-based weight-normalization scheme to isolate the impact of considering the effect of an ANN versus our SNN model for threshold-balancing. Further optimization, such as considering the maximum synaptic weight during the weight-normalization process (\cite{diehl2015fast}), is still possible.
Previous works have mainly focused on much shallower convolutional neural network architectures. Although Ref. (\cite{hunsberger2016training}) reports results with an accuracy loss of $0.18\%$, their baseline ANN suffers from some amount of accuracy degradation since their networks are trained with noise (in addition to architectural constraints mentioned before) to account for neuronal response variability due to incoming spike trains (\cite{hunsberger2016training}). It is also unclear whether the training mechanism with noise would scale up to deeper layered networks. \textbf{Our work reports the best performance of a Spiking Neural Network on the CIFAR-10 dataset till date.}
The impact of our proposed algorithm is much more apparent on the more complex ImageNet dataset. The top-1 (top-5) error rates on the ImageNet validation set are summarized in Table \ref{table_2_a}. Note that these are single-crop results. The accuracy loss during the ANN-SNN conversion process is minimized by a margin of $0.57\%$ by considering the SNN-model based weight-normalization scheme. It is therefore expected that our proposed \textsc{Spike}-\textsc{Norm} algorithm would perform significantly better than an ANN-model based conversion scheme as the pattern recognition problem becomes more complex, since it accounts for the actual SNN operation during the conversion process. Note that Ref. (\cite{hunsberger2016training}) reports a performance of $48.2\% (23.8\%)$ on the first 3072-image test batch of the ImageNet $2012$ dataset.
At the time we developed this work, we were unaware of a parallel effort to scale up the performance of SNNs to deeper networks and large-scale machine learning tasks. The work was recently published in Ref. (\cite{rueckauer2017conversion}). However, their work differs from our approach in the following aspects:
\newline (i) Their work improves on the prior approach outlined in Ref. (\cite{diehl2015fast}) by proposing conversion methods that remove the constraints involved in ANN training (discussed in Section 4.3). We improve on prior art by scaling up the methodology outlined in Ref. (\cite{diehl2015fast}) for ANN-SNN conversion while retaining the constraints.
\newline (ii) We demonstrate that considering SNN operation in the conversion process helps to minimize the conversion loss. Ref. (\cite{rueckauer2017conversion}) uses the ANN-based normalization scheme of Ref. (\cite{diehl2015fast}).
\newline While removing the constraints in ANN training allows the authors of Ref. (\cite{rueckauer2017conversion}) to train ANNs with better accuracy, they suffer significant accuracy loss in the conversion process. This occurs due to a non-optimal ratio of biases/batch-normalization factors and weights (\cite{rueckauer2017conversion}), and is the primary reason for our exploration of ANN-SNN conversion without bias and batch-normalization. For instance, their best-performing network on the CIFAR-10 dataset incurs a conversion loss of $1.06\%$, in contrast to the $0.15\%$ reported by our proposal for a much deeper network. The accuracy loss is much larger for their VGG-16 network on the ImageNet dataset - $14.28\%$ in contrast to $0.56\%$ for our proposal. Although Ref. (\cite{rueckauer2017conversion}) reports a top-1 SNN error rate of $25.40\%$ for an Inception-V3 network, their ANN is trained with an error rate of $23.88\%$. The resulting conversion loss of $1.52\%$ is much higher than ours. The Inception-V3 network conversion was also optimized by a voltage clamping method that was found to be specific to the Inception network and did not apply to the VGG network (\cite{rueckauer2017conversion}). Note that the results reported on ImageNet in Ref. (\cite{rueckauer2017conversion}) are on a subset of $1382$ image samples for the Inception-V3 network and $2570$ samples for the VGG-16 network; the performance on the entire dataset is therefore unclear. Our contribution lies in demonstrating that ANNs can be trained with the above-mentioned constraints with competitive accuracies on large-scale tasks and converted to SNNs in a near-lossless manner.
\textbf{This is the first work that reports competitive performance of a Spiking Neural Network on the entire $50,000$ ImageNet 2012 validation set.}
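The near-lossless conversion rests on the correspondence between a ReLU activation and the firing rate of an integrate-and-fire (IF) neuron with reset-by-subtraction. A self-contained toy illustration (ours, for intuition only; the constant per-timestep input is an idealization of a rate-coded spike train):

```python
def relu(x):
    return max(x, 0.0)

def if_firing_rate(a, T=1000, v_th=1.0):
    """Average firing rate of an IF neuron receiving a constant synaptic
    input 'a' per timestep, with reset-by-subtraction at threshold."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a                  # integrate the input
        if v >= v_th:
            spikes += 1         # emit a spike ...
            v -= v_th           # ... and subtract the threshold (soft reset)
    return spikes / T

for a in (-0.5, 0.0, 0.25, 0.75):
    print(f"input {a:+.2f}: ReLU = {relu(a):.2f}, IF rate = {if_firing_rate(a):.2f}")
```

For sub-threshold positive inputs the rate matches the ReLU output exactly over a long enough window, while negative inputs produce no spikes, mirroring the rectification.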
\begin{table}[t]
\renewcommand{\arraystretch}{1}
\small
\caption{Results for CIFAR-10 Dataset}
\label{table_1_a}
\centering
\begin{tabular}{ p{6cm} p{2.1cm} p{2.1cm} p{3.4cm} }
\hline
\hline
\bfseries {Network Architecture} & \bfseries {ANN \newline Error} & \bfseries {SNN \newline Error} & \bfseries {Error Increment}\\
\hline
{4-layered networks (\cite{cao2015spiking}) \newline (Input cropped to $24 \times 24$)} & {$20.88\%$} & {$22.57\%$} & {$1.69\%$}\\ \\
{3-layered networks (\cite{esser2016convolutional})} & {$-$} & {$10.68\%$} & {$-$}\\ \\
{8-layered networks (\cite{hunsberger2016training}) \newline (Input cropped to $24 \times 24$)} & {$16.28\%$} & {$16.46\%$} & {$0.18\%$}\\ \\
{6-layered networks (\cite{rueckauer2017conversion}) \newline } & {$8.09\%$} & {$9.15\%$} & {$1.06\%$}\\ \\
{VGG-16 \newline(ANN model based conversion)} & {$8.3\%$} & {$8.54\%$} & {$0.24\%$}\\ \\
{\textbf{VGG-16 \newline(\textsc{SPIKE}-\textsc{NORM})}} & {\textbf{8.3\%}} & {\textbf{8.45\%}} & {\textbf{0.15\%}}\\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Experiments for Residual Architectures}
Our residual networks for the CIFAR-10 and ImageNet datasets follow the implementation in Ref. (\cite{he2016deep}). We first explain our design choices for ResNets by sequentially imposing each constraint on the network and showing the corresponding impact on network performance in Fig. \ref{fig4_a}. The ``Basic Architecture" involves a residual network without any junction ReLUs. ``Constraint 1" involves junction ReLUs without equal spiking thresholds for all fan-in neural layers. ``Constraint 2" imposes an equal threshold of unity for all the layers, while ``Constraint 3" adds two pre-processing plain convolutional layers ($3 \times 3$) at the beginning of the network, which yields the best performance. The baseline ANN ResNet-20 was trained with an error of $10.9\%$ on the CIFAR-10 dataset. Note that although we are using terminology consistent with Ref. (\cite{he2016deep}) for the network architectures, our ResNets contain two extra plain pre-processing layers. The converted SNN according to our proposal yielded a classification error rate of $12.54\%$. Weight-normalizing the initial two layers using the ANN-model based weight-normalization scheme produced an average error of $12.87\%$, further validating the efficiency of our SNN-based weight-normalization technique.
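The structural difference between the ``Basic Architecture'' and ``Constraint 1'' can be sketched as follows (a toy fully-connected stand-in of ours for the actual $3 \times 3$ convolutional residual units):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def basic_residual_unit(x, W1, W2):
    """'Basic Architecture': the shortcut rejoins the residual path with
    no ReLU at the junction, so the unit's output can be negative."""
    return relu(x @ W1) @ W2 + x

def junction_relu_unit(x, W1, W2):
    """'Constraint 1': a ReLU at the junction keeps the unit's output
    non-negative, so it can be mapped onto IF spiking neurons."""
    return relu(relu(x @ W1) @ W2 + x)

x = np.array([1.0, -2.0, 0.5])
W1, W2 = np.eye(3), -np.eye(3)
print(basic_residual_unit(x, W1, W2))    # may contain negative entries
print(junction_relu_unit(x, W1, W2))     # always >= 0
```

The junction ReLU is what allows every neuron in the converted network, including those at the shortcut junctions, to be realized as a spiking IF unit.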
On the ImageNet dataset, we use the deeper ResNet-34 model outlined in Ref. (\cite{he2016deep}). The initial $7 \times 7$ convolutional layer is replaced by three $3 \times 3$ convolutional layers where the initial two layers are non-residual plain units. The baseline ANN is trained with an error of $29.31\%$ while the converted SNN error is $34.53\%$ at the end of $2500$ timesteps. The results are summarized in Table \ref{table_3_a} and convergence plots for all our networks are provided in Fig. \ref{fig5_a}.
\begin{table}[t]
\small
\renewcommand{\arraystretch}{1.3}
\caption{Results for ImageNet Dataset}
\label{table_2_a}
\centering
\begin{tabular}{ p{6.4cm} p{2.1cm} p{2.1cm} p{3.4cm} }
\hline
\hline
\bfseries {Network Architecture} & \bfseries {ANN \newline Error} & \bfseries {SNN \newline Error} & \bfseries {Error Increment}\\
\hline
{8-layered networks (\cite{hunsberger2016training}) \newline(Tested on subset of 3072 images)} & {$-$} & {$48.20\% \newline (23.80\%)$} & {$-$}\\ \\
{VGG-16 (\cite{rueckauer2017conversion}) \newline(Tested on subset of 2570 images)} & {$36.11\% \newline (15.14\%)$} & {$50.39\% \newline (18.37\%)$} & {$14.28\% \newline (3.23\%)$}\\ \\
{VGG-16 \newline(ANN model based conversion)} & {$29.48\% \newline (10.61\%)$} & {$30.61\% \newline (11.21\%)$} & {$1.13\% \newline (0.6\%)$}\\ \\
{\textbf{VGG-16 \newline(\textsc{SPIKE}-\textsc{NORM})}} & {\textbf{29.48\% \newline (10.61\%)}} & {\textbf{30.04\% \newline (10.99\%)}} & {\textbf{0.56\% \newline (0.38\%)}}\\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=2.3in]{fig4}
\caption{Impact of the architectural constraints for Residual Networks. ``Basic Architecture" does not involve any junction ReLU layers. ``Constraint 1" involves junction ReLUs while ``Constraint 2" imposes equal unity threshold for all residual units. Network accuracy is significantly improved with the inclusion of ``Constraint 3" that involves pre-processing weight-normalized plain convolutional layers at the network input stage. }
\label{fig4_a}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.8in]{fig5}
\caption{Convergence plots for the VGG and ResNet SNN architectures for the CIFAR-10 and ImageNet datasets are shown above. The classification error reduces as more evidence is integrated in the Spiking Neurons with increasing time-steps. Note that although the network depths are similar for the CIFAR-10 dataset, the ResNet-20 converges much faster than the VGG architecture. The inference delay is higher for ResNet-34 on the ImageNet dataset since it has twice as many layers as the VGG network.}
\label{fig5_a}
\end{figure}
It is worth noting here that the main motivation for exploring Residual Networks is to go deeper in Spiking Neural Networks. We explore relatively simple ResNet architectures, such as the ones used in Ref. (\cite{he2016deep}), which have an order of magnitude fewer parameters than standard VGG architectures. Further hyper-parameter optimizations or more complex architectures are still possible. While the accuracy loss in the ANN-SNN conversion process is larger for ResNets than for plain convolutional architectures, further optimizations such as including more pre-processing initial layers or better threshold-balancing schemes for the residual units can still be explored. \textbf{This work serves as the first work to explore ANN-SNN conversion schemes for Residual Networks and attempts to highlight important design constraints required for minimal loss in the conversion process.}
\subsection{Computation Reduction Due to Sparse Neural Events}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig6}
\caption{Average cumulative spike count generated by neurons in the VGG and ResNet architectures on the ImageNet dataset as a function of the layer number. $500$ time-steps were used for accumulating the spike-counts for the VGG networks while $2000$ time-steps were used for the ResNet architectures. The neural spiking sparsity increases significantly as network depth increases.}
\label{fig6_a}
\end{figure}
ANN operation for predicting the output class of a particular input requires a single feed-forward pass per image. For SNN operation, the network has to be evaluated over a number of time-steps. However, specialized hardware that accounts for the event-driven neural operation and ``computes only when required" can potentially exploit such alternative mechanisms of network operation. For instance, Fig. \ref{fig6_a} represents the average total number of output spikes produced by neurons in the VGG and ResNet architectures as a function of the layer for the ImageNet dataset. A randomly chosen minibatch was used for the averaging process. We used $500$ time-steps for accumulating the spike-counts for the VGG networks and $2000$ time-steps for the ResNet architectures, in accordance with the convergence plots shown in Fig. \ref{fig5_a}. An important insight obtained from Fig. \ref{fig6_a} is that neuron spiking activity becomes sparser as the network depth increases. Hence, benefits from event-driven hardware are expected to increase with network depth. While an estimate of the actual energy consumption reduction for the SNN mode of operation is outside the scope of this work, we provide an intuitive insight by comparing the number of computations per synaptic operation performed in the ANN versus the SNN.
The number of synaptic operations per layer of the network can be easily estimated for an ANN from the architecture of the convolutional and linear layers. For the ANN, a multiply-accumulate (MAC) computation takes place per synaptic operation. On the other hand, a specialized SNN hardware would perform an accumulate (AC) computation per synaptic operation only upon the receipt of an incoming spike. Hence, the total number of AC operations occurring in the SNN is given by the dot-product of the average cumulative neural spike count for a particular layer and the corresponding number of synaptic operations. Calculation of this metric reveals that for the VGG network the ratio of SNN AC operations to ANN MAC operations is $1.975$, while the ratio is $2.4$ for the ResNet (the metric includes only ReLU/IF spiking neuron activations in the network). However, note that a MAC operation involves an order of magnitude more energy consumption than an AC operation (\cite{han2015learning}). Hence, the energy consumption of our SNN implementation is expected to be significantly lower than that of the original ANN implementation. It is worth noting that the real metric governing the energy requirement of the SNN versus the ANN is the number of spikes per neuron. Energy benefits are obtained only when the average number of spikes per neuron over the inference timing window is less than 1 (since in the SNN the synaptic operation is conditional on spike generation by neurons in the previous layer). Hence, to obtain energy reductions in SNNs, one should target deeper networks due to their neuron spiking sparsity.
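As an illustrative back-of-the-envelope estimate (ours; the per-operation energy figures are assumptions, roughly 4.6 pJ per 32-bit floating-point MAC versus roughly 0.9 pJ per add, numbers often quoted for 45 nm CMOS):

```python
E_MAC, E_AC = 4.6, 0.9    # assumed energies in pJ (45 nm CMOS ballpark)

for net, ops_ratio in [("VGG-16", 1.975), ("ResNet-34", 2.4)]:
    # ops_ratio = (# SNN AC operations) / (# ANN MAC operations), from the text
    energy_ratio = ops_ratio * E_AC / E_MAC
    print(f"{net}: SNN/ANN synaptic energy ratio = {energy_ratio:.2f} "
          f"(~{1/energy_ratio:.1f}x reduction)")
```

Under these assumed figures, even a roughly 2x excess of AC over MAC operations still yields a net energy reduction of about 2-2.5x at the synaptic-operation level.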
While SNN operation requires a number of time-steps in contrast to a single feed-forward pass for an ANN, the actual time required to implement a single time-step of the SNN in a neuromorphic architecture might be significantly lower than a feed-forward pass for an ANN implementation (due to event-driven hardware operation). An exact estimate of the delay comparison is outside the scope of this article. Nevertheless, despite the delay overhead, the power benefits of event-driven SNNs can significantly increase the energy (power $\times$ delay) efficiency of deep SNNs in contrast to ANNs.
\begin{table}[t]
\renewcommand{\arraystretch}{1.3}
\small
\caption{Results for Residual Networks}
\label{table_3_a}
\centering
\begin{tabular}{ p{2.4cm} p{2.8cm} p{2.4cm} p{2.4cm} }
\hline
\hline
\bfseries {Dataset} & \bfseries {Network \newline Architecture} & \bfseries {ANN \newline Error} & \bfseries {SNN \newline Error}\\
\hline
{CIFAR-10} & {ResNet-20} & {$10.9\%$} & {$12.54\%$}\\
{ImageNet} & {ResNet-34} & {$29.31\% \newline (10.31\%)$} & {$34.53\% \newline (13.67\%)$}\\
\hline
\hline
\end{tabular}
\end{table}
\section{Conclusions and Future Work}
This work provides evidence that SNNs exhibit similar computing power as their ANN counterparts. This can potentially pave the way for the usage of SNNs in large-scale visual recognition tasks, enabled by low-power neuromorphic hardware. However, there are still open areas of exploration for improving SNN performance. A significant contribution to the present success of deep networks is attributed to Batch-Normalization (\cite{ioffe2015batch}). While using bias-less neural units constrains us to train networks without Batch-Normalization, algorithmic techniques to implement Spiking Neurons with a bias term should be explored. Further, it is desirable to train ANNs and convert them to SNNs without any accuracy loss. Although the proposed conversion technique minimizes the conversion loss to a large extent, other variants of neural functionalities apart from ReLU-IF Spiking Neurons could be explored to further reduce this gap. Additionally, further optimizations to minimize the accuracy loss in ANN-SNN conversion for ResNet architectures should be explored to scale SNN performance to even deeper architectures.
\bibliographystyle{frontiersinSCNS_ENG_HUMS}
Kaluza-Klein black holes, which have compact extra dimensions, are
interesting spacetimes in a class of higher-dimensional black holes.
In particular, there exist a variety of exact solutions of
five-dimensional squashed Kaluza-Klein black holes, which have
squashed S$^3$ horizons
\cite{Dobiasch:1981vh, Gibbons:1985ac, Gauntlett:2002nw,
Gaiotto:2005gf, Ishihara:2005dp,
Wang:2006nw,
Yazadjiev:2006iv, Nakagawa:2008rm, Tomizawa:2008hw, Tomizawa:2008rh, Stelea:2008tt,
Tomizawa:2008qr, Bena:2009ev, Tomizawa:2010xq, Mizoguchi:2011zj,
Chen:2010ih, Nedkova:2011hx, Nedkova:2011aa, Tatsuoka:2011tx}.
The solutions behave as fully five-dimensional black holes in the vicinity of
the horizon,
while they behave as four-dimensional black holes in the region far away
from the horizon.
The squashed Kaluza-Klein black holes asymptote to four-dimensional flat spacetimes
with a twisted S$^1$ as a compactified extra dimension,
and we can regard a series of these solutions as models of realistic
higher-dimensional black holes.
In the Einstein-Maxwell theory in four dimensions,
it is well known that extremal charged black holes
can make multi-configurations \cite{Majumdar, Papapetrou}.
In five-dimensions, multi-black hole solutions are obtained both
in asymptotically flat spacetimes
\cite{Myers:1986rx, Breckenridge:1996is}
and in asymptotically Kaluza-Klein spacetimes
\cite{Myers:1986rx, Maeda:2006hd, Ishihara:2006iv, Matsuno:2008fn}.
These solutions are achieved by the balance of gravitational attraction and
electrical repulsion.
Until now, no regular asymptotically flat vacuum multi-black hole solutions with spherical horizons
in four or five dimensions have been found \cite{Bunting:1987, Chrusciel:2005pa}.\footnote{
For non-spherical horizons in five dimensions,
black Saturns and black di-rings are
found as vacuum exact solutions \cite{Elvang:2007rd, Iguchi:2007is, Izumi:2007qx}.}
In contrast to the asymptotically flat case, we show in this paper that,
in asymptotically Kaluza-Klein spacetimes,
the metrics found by Clement \cite{IPUC-85-1} describe
five-dimensional regular maximally rotating vacuum multi-black holes
with a twisted S$^1$ as an extra dimension.
Each black hole has a horizon with the topology of lens space
$L(n; 1) =$ S$^3/\mathbb{Z}_n$.
If the size of the extra dimension $L$ is fixed, the regularity condition requires the
quantization of the black hole mass by $n L$.
This paper is organized as follows.
In Sec.\ref{solution}, we present explicit forms of the solutions and constraints among their parameters.
Section \ref{Properties}
is devoted to an investigation of conserved charges, asymptotic structures
of the solutions, and the regularity at the horizons.
We conclude our studies with discussions in Sec.\ref{Discussions}.
\section{solutions}\label{solution}
We consider rotating multi-black hole solutions satisfying
the five-dimensional vacuum Einstein equation, $R_{\mu \nu } = 0$.
The metric is written as
\begin{align}
\label{mET1}
ds^2 = - H^{-2} dt^2 + H^2 (dx^2+dy^2+dz^2)
+ 2 \left[ \left(H^{-1} -1 \right) dt + \frac{L}{2 \sqrt 2} d\psi
+ \bm \omega \right]^2 ,
\end{align}
where
\begin{align}
H &= 1 + \sum_i \frac{m_i}{|\bm R - \bm R _i|}
\label{FUNCH}
\end{align}
is the harmonic function on the three-dimensional Euclidean space
with point sources located at $\bm R = \bm R_i := (x_i, y_i, z_i)$.
The 1-form $\bm \omega $, which is determined by
\begin{align}
\nabla \times \bm \omega = \nabla H ,
\end{align}
has the explicit form
\begin{align}
\bm \omega =
\sum_i m_i \frac{z-z_i}{|\bm R - \bm R _i|}
\frac{(x-x_i)dy-(y-y_i)dx}{(x-x_i)^2+(y-y_i)^2} .
\label{one-form}
\end{align}
In the expressions \eqref{mET1}-\eqref{one-form}, $m_i$ and $L$ are constants.
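As a quick numerical consistency check (our addition, not part of the original text), one can verify $\nabla \times \bm \omega = \nabla H$ for a single source at the origin, using the Cartesian components of \eqref{one-form}:

```python
import math

m = 1.3                                  # arbitrary mass parameter

def H(p):                                # harmonic function, single source
    x, y, z = p
    return 1.0 + m / math.sqrt(x*x + y*y + z*z)

def omega(p):                            # Cartesian components of the 1-form
    x, y, z = p
    r = math.sqrt(x*x + y*y + z*z)
    rho2 = x*x + y*y
    return (-m*z*y/(r*rho2), m*z*x/(r*rho2), 0.0)

def d(f, p, i, h=1e-6):                  # central finite difference in x_i
    q = list(p); q[i] += h
    s = list(p); s[i] -= h
    fq, fs = f(q), f(s)
    if isinstance(fq, tuple):
        return tuple((a - b) / (2*h) for a, b in zip(fq, fs))
    return (fq - fs) / (2*h)

p = (0.7, -0.4, 0.9)                     # generic point away from the source
curl = (d(omega, p, 1)[2] - d(omega, p, 2)[1],
        d(omega, p, 2)[0] - d(omega, p, 0)[2],
        d(omega, p, 0)[1] - d(omega, p, 1)[0])
grad = tuple(d(H, p, i) for i in range(3))
assert all(abs(c - g) < 1e-6 for c, g in zip(curl, grad))
print("curl(omega) = grad(H) holds at", p)
```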
The solutions \eqref{mET1} with \eqref{FUNCH} and \eqref{one-form} can be obtained
by uplifting the four-dimensional equally charged dyonic Majumdar-Papapetrou
solutions with a constant dilaton field
to the five-dimensional spacetimes \cite{IPUC-85-1}.
(See Appendix \ref{KKRED} for a detailed discussion.)\footnote{
In the single black hole case, $m_1 = m$ and $m_i = 0 ~(i \geq 2)$,
the solution \eqref{mET1} coincides with
an extremally rotating vacuum squashed Kaluza-Klein black hole solution
with a degenerate horizon \cite{Gibbons:1985ac}.
The solution \eqref{mET1} was also obtained
in the context of the ten-dimensional $N = 1$ supergravity \cite{Khuri:1995xq}.
However, to the best of our knowledge,
properties of the solution \eqref{mET1}
like asymptotic structures and a smoothness of horizons have not been discussed.}
As will be shown later, $\bm R = \bm R_i$ are black hole horizons.
From the requirements for the absence of naked singularity
on and outside the black hole horizons,
the parameters are restricted to the range
\begin{align}\label{PARAREGS}
m _i > 0 .
\end{align}
We will see later that the regularity of horizons requires
the parameters $m_i$ to be quantized by
the size of the compactified dimension $L$ at infinity in the form
\begin{align}\label{QUANTIZE}
m _i = \frac{n _i L}{2 \sqrt 2} ,
\end{align}
where the $n_i$ are natural numbers.
\section{Properties}
\label{Properties}
\subsection{Basic properties }
It is clear that the metric \eqref{mET1} admits
two Killing vector fields,
\begin{align}
\xi_{(t)} = \partial / \partial t \quad
\mbox{and} \quad
\xi_{(\psi)} = \partial / \partial \psi.
\end{align}
Killing vectors that are timelike at infinity are not unique
because the Killing vector along the compact extra dimension,
$\xi_{(\psi)}$, has a finite norm at infinity.
This fact is quite different from the asymptotically flat case, where only $\xi_{(t)}$
is timelike at infinity.
Fortunately, however, among them we can select the timelike Killing vector
$\xi_{(t)}$, which is hypersurface orthogonal at infinity.
We define the Komar mass $M$ associated with
$\xi_{(t)}$, and obtain
\begin{align}
M &= \frac{-3}{32\pi}\int_\infty dS_{\mu\nu}\nabla^\mu \xi_{(t)}^\nu
= \frac{3 L \sum_i m_i}{4 \pi} \mathcal A_{\rm S ^3} ,
\label{mASS}
\end{align}
where $\mathcal A_{\rm S ^3}$ denotes the area of a unit S$^3$.
We also obtain the angular momentum $J^{\psi}$
associated with the spacelike Killing vector
$\xi_{(\psi)}$
as
\begin{align}
J^{\psi} &= \frac{1}{16\pi}\int_\infty dS_{\mu\nu}
\nabla^\mu \xi_{(\psi)}^\nu
= \frac{L ^2 \sum_i m_i}{4 \sqrt 2 \pi} \mathcal A_{\rm S ^3} .
\label{ANGmOm}
\end{align}
We see that the spacetime \eqref{mET1} is rotating along
the extra dimension.
Because the present solutions are vacuum solutions, the mass and angular
momentum can be assigned to each black hole
\begin{align}
M_i
= \frac{3 L m_i}{4 \pi} \mathcal A_{\rm S ^3} ,
\label{mASS_each}
\\
J^{\psi}_i
= \frac{L ^2 m_i}{4 \sqrt 2 \pi} \mathcal A_{\rm S ^3} ,
\label{ANGmOm_each}
\end{align}
by taking the integral over a closed
surface surrounding each black hole.
Substituting \eqref{QUANTIZE} into \eqref{mASS_each} and \eqref{ANGmOm_each},
we obtain a relation between the mass and the angular momentum as
\begin{align}
(J_i^{\psi})^{2} = \frac{2 \sqrt 2}{27 \pi n_i} M_i^3 .
\end{align}
This relation means that each black hole is maximally rotating.
We can also obtain the relation
\begin{align}
\frac{J_i^\psi}{M_i} = \frac{L}{3 \sqrt 2} .
\end{align}
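These relations follow directly from \eqref{mASS_each} and \eqref{ANGmOm_each} together with the quantization condition \eqref{QUANTIZE}; a numerical spot check (ours, using $\mathcal A_{\rm S ^3} = 2\pi^2$ and arbitrary values of $L$ and $n_i$):

```python
import math

L, n_i = 1.0, 3                          # fiber size and quantization integer
A_S3 = 2 * math.pi**2                    # area of the unit S^3
m_i = n_i * L / (2 * math.sqrt(2))       # quantized mass parameter

M_i = 3 * L * m_i * A_S3 / (4 * math.pi)                 # Komar mass M_i
J_i = L**2 * m_i * A_S3 / (4 * math.sqrt(2) * math.pi)   # angular momentum J_i

# extremality relation and the fixed angular-momentum-to-mass ratio
assert math.isclose(J_i**2, 2 * math.sqrt(2) * M_i**3 / (27 * math.pi * n_i))
assert math.isclose(J_i / M_i, L / (3 * math.sqrt(2)))
print("extremality relations verified")
```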
Total mass and total angular momentum are given by the summations,
\begin{align}
M=\sum_i M_i,
\quad
J^{\psi}=\sum_i J^{\psi}_i,
\end{align}
which satisfy the condition
\begin{align}
(J^{\psi})^{2} = \frac{2 \sqrt 2}{27 \pi n} M^3,
\end{align}
where $n = \sum_i n_i$.
With respect to the timelike Killing vector $\xi_{(t)}$, we define
the ergosurfaces where the Killing vector becomes null, i.e.,
\begin{align}\label{ERGOEQ}
g_{tt} = \left( H^{-1} -2 \right)^2 -2 = 0 .
\end{align}
In the single black hole case, $m_1 = m$ and $m_i = 0 ~(i \geq 2)$,
the equation \eqref{ERGOEQ} reduces to
\begin{align}
g_{tt}= \frac{2 m^2 - R^2}{(R + m)^2} = 0 .
\end{align}
Then, the ergosurface exists at $R=\sqrt{2}m$.
In general, since $g_{tt}(\bm R = \bm R_i) = 2 > 0$ and $g_{tt}(\infty) = -1 < 0$
for the range of parameters \eqref{PARAREGS},
there always exist ergoregions
around the black hole horizons.
It depends on the configuration of point sources whether the ergoregions are
connected or not \cite{Matsuno:2008fn}.
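A numerical spot check (ours) of the single-black-hole case: $g_{tt} = (H^{-1}-2)^2 - 2$ with $H = 1 + m/R$ reduces to $(2m^2 - R^2)/(R+m)^2$, which vanishes at $R = \sqrt 2\, m$ and is positive only inside:

```python
import math

def g_tt(R, m=1.0):                       # (H^{-1} - 2)^2 - 2 with H = 1 + m/R
    H = 1.0 + m / R
    return (1.0 / H - 2.0)**2 - 2.0

m = 1.0
for R in (0.3, 1.0, 2.5, 7.0):            # compare against the closed form
    assert math.isclose(g_tt(R, m), (2*m*m - R*R) / (R + m)**2)

assert abs(g_tt(math.sqrt(2.0) * m, m)) < 1e-12   # ergosurface at R = sqrt(2) m
assert g_tt(0.5 * m, m) > 0 > g_tt(3.0 * m, m)    # ergoregion only inside
print("ergosurface check passed")
```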
\subsection{Asymptotic structure }
We assume that the point sources lie in a bounded domain.
In the region far from the domain,
the harmonic function $H$ and the 1-form $\bm \omega$ behave as
\begin{align}
H &\simeq 1 + \frac{\sum _i m_i}{R} + O\left( R ^{-2} \right) ,
\\
\bm \omega &\simeq \left( \sum _i m_i \right)
\cos \theta d \phi + O\left( R ^{-1} \right) .
\end{align}
Then, using \eqref{QUANTIZE}, we see that the metric behaves as
\begin{align}
ds^2 \simeq - \left(1+\frac{m}{R}\right)^{-2} dt ^2
+ \left(1+\frac{m}{R}\right)^2 \left(dR^2 + R^2 d\Omega_{\rm S ^2}^2\right)
+ \frac{n ^2 L ^2}{4}
\left(-\frac{dt}{R} +\frac{d\psi}{n}+\cos \theta d \phi \right) ^2 ,
\end{align}
where $d\Omega _{\rm S ^2} ^2 = d\theta ^2 + \sin^2\theta d\phi^2$
denotes the metric of the unit two-sphere,
and $m = \sum_i m_i$.
The metric behaves as that of a single extremally rotating Kaluza-Klein
black hole \cite{Dobiasch:1981vh, Gibbons:1985ac}.
In the limit $R\to \infty$,
we see that the metric approaches
\begin{align}
ds^2 \to - dt ^2 + dR^2 + R^2 d\Omega _{\rm S ^2} ^2
+ \frac{n ^2 L ^2}{4} \left( \frac{d\psi}{n} + \cos \theta d \phi \right) ^2 .
\label{asympt_metric}
\end{align}
The asymptotic structure of the spacetime \eqref{mET1} is asymptotically
locally flat, i.e.,
the metric asymptotes to a twisted constant S$^1$ fiber bundle over
the four-dimensional Minkowski spacetime,
and the spatial infinity has the structure of an S$^1$ bundle over an S$^2$
such that it is the lens space
$L(n; 1) =$ S$^3/\mathbb{Z}_n$ \cite{Ishihara:2006iv,Matsuno:2008fn}.
We see that the size of a twisted S$^1$ fiber as an extra dimension takes
the constant value $L$ everywhere.
\subsection{Near horizon}
For simplicity, we restrict ourselves to the cases of two-black holes,
i.e., $m _i = 0 ~(i \geq 3)$.
Without loss of generality, we can put the locations of two point sources as
$\bm R_1 = (0, 0, 0)$ and $\bm R_2 = (0, 0, a)$,
where the constant $a$ denotes the separation between two black holes.
In this case, the metric is
\begin{align}\label{mET2}
ds^2 = - H^{-2} dt^2 + H^2
\left( dR^2 + R^2 d\Omega _{\rm S ^2} ^2 \right)
+ 2 \left[ \left(H^{-1} -1 \right) dt + \frac{L}{2 \sqrt 2} d\psi
+ \bm \omega \right]^2 ,
\end{align}
where $H$ and $\bm \omega$ are given by
\begin{align}
H &= 1+ \frac{m_1}{R}
+ \frac{m_2}{\sqrt{R^2 + a^2 - 2 a R \cos \theta}} ,
\label{harmonics_2}
\\
\bm \omega &= \left(
m_1 \cos \theta + m_2 \frac{R \cos \theta - a }
{\sqrt{R^2 + a^2 - 2 a R \cos \theta}} \right) d\phi,
\label{form_2}
\end{align}
respectively.
The coordinates run over the ranges
$-\infty < t < \infty ,~ 0 < R < \infty ,~ 0 \leq \theta \leq \pi ,~
0 \leq \phi \leq 2\pi $, and $0 \leq \psi \leq 4\pi $.
In the coordinate system $(t, R, \theta , \phi , \psi )$,
the metric \eqref{mET2} diverges at the locations of two point sources, i.e.,
$\bm R = \bm R_1 ~(R=0)$ and $\bm R = \bm R_2 ~(R=a, \theta=0)$.
In order to remove apparent divergences
at $R = 0$,
we introduce new coordinates $(v, \psi' )$ such that
\begin{align}
dv &= dt + H^2 dR + W d\theta ,
\\
d\psi'
&= d\psi - \frac{2 \sqrt 2}{L}\left(dt + H dR + V d\theta
\right) ,
\end{align}
where the functions $W$ and $V$ are given by
\begin{align}
W \left(R, \theta\right)
&= \int dR \frac{\partial}{\partial \theta}\left( H^2 \right) ,
\label{FUNCW}
\\
V \left(R, \theta\right)&= \int dR \frac{\partial}{\partial \theta} H ,
\label{FUNCV}
\end{align}
respectively.
Then, the metric \eqref{mET2} takes the form of
\begin{align}
ds^2 = & - H^{-2} \left(dv - W d\theta \right)^2
+ 2 dR \left(dv - W d\theta \right)
+ H^2 R^2 d\Omega _{\rm S ^2} ^2
\notag \\
&\hspace{7mm} + 2 \left[
H^{-1} dv + \bm \omega
+ \left(V - H^{-1}W \right) d\theta + \frac{L}{2 \sqrt 2} d\psi'
\right]^2 .
\label{mET3}
\end{align}
In the neighborhood of $R = 0$, the functions $H,~W, ~V$, and the 1-form
$\bm \omega$
can be expanded by power series in $R$, and leading orders are
\begin{align}
H &\simeq \frac{m_1}{R}
+ {\cal O}(1) ,
\label{EXPANDH}
\\
W &\simeq - \frac{2 m_1 m_2 \sin \theta}{a^2} R
+{\mathcal O}(R^2),
\label{EXPANDW}
\\
V &\simeq
- \frac{m_2 \sin \theta}{2a^2} R^2 +{\cal O}(R^3),
\label{EXPANDV}
\\
\bm \omega &\simeq \left( m_1 \cos \theta - m_2
+{\cal O}(R^2)
\right)d\phi ,
\label{EXPANDOmEGA}
\end{align}
where we have suitably chosen the integration constants of $W$ and $V$
given by \eqref{FUNCW} and \eqref{FUNCV}.
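As a numerical sanity check (ours) on these leading-order expansions: the integrands $\partial_\theta (H^2)$ and $\partial_\theta H$ entering \eqref{FUNCW} and \eqref{FUNCV} approach $-2 m_1 m_2 \sin \theta / a^2$ and $-(m_2 \sin \theta / a^2)\, R$ as $R \to 0$, consistent with \eqref{EXPANDW} and \eqref{EXPANDV}:

```python
import math

m1, m2, a = 1.0, 0.8, 2.0

def H(R, th):                            # two-centre harmonic function
    return 1.0 + m1/R + m2/math.sqrt(R*R + a*a - 2.0*a*R*math.cos(th))

def dth(f, R, th, h=1e-3):               # central finite difference in theta
    return (f(R, th + h) - f(R, th - h)) / (2.0*h)

R, th = 1e-4, 0.7                        # small R, generic angle
lead_W = -2.0*m1*m2*math.sin(th)/a**2    # leading integrand of W
lead_V = -(m2*math.sin(th)/a**2) * R     # leading integrand of V

assert abs(dth(lambda r, t: H(r, t)**2, R, th) - lead_W) < 5e-3*abs(lead_W)
assert abs(dth(H, R, th) - lead_V) < 5e-3*abs(lead_V)
print("leading-order expansions of W and V confirmed")
```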
Then, near $R = 0$, the metric \eqref{mET3} behaves as
\begin{align}
ds^2 \simeq & \frac{R^2}{m_1 ^2} dv^2 + 2 dv dR
+ m_1 ^2 \left[ d\Omega _{\rm S ^2} ^2
+ 2 \left(\frac{L}{2\sqrt{2}m_1}d\psi''
+ \cos \theta d\phi \right)^2 \right]
\cr
& + 4 R \left[
\frac{m_1 m_2 \sin \theta }{a^2} dR d\theta
+ \left( dv + \frac{2 m_1 m_2 \sin \theta }{a^2} R d\theta \right)
\left( \frac{L}{2\sqrt{2}m_1}d\psi'' + \cos \theta d\phi \right)
\right] \cr
& + {\mathcal O}(R^3) ,
\label{near_horison_metric}
\end{align}
where we have used
\begin{align}
d\psi'' = d\psi' - \frac{2\sqrt 2}{L} m _2 d\phi .
\end{align}
If the factor $2\sqrt{2}m_1/L$ is a natural number, say $n_1$,
the induced metric on the three-dimensional spatial cross section of
$R = 0$ with a time slice is
\begin{align}\label{INDmET}
ds^2 |_{R=0}
= \frac{n_1^2 L^2}{8} \left[ d\Omega _{\rm S ^2} ^2
+ 2 \left( \frac{d\psi''}{n_1} + \cos \theta d\phi \right)^2 \right] .
\end{align}
That is, if the mass quantization condition \eqref{QUANTIZE} holds, the $R=0$
surface admits the smooth metric of the squashed lens space
$L(n_1;1)=$ S$^3/\mathbb{Z}_{n_1}$.
The area of the surface is
\begin{align}
\mathcal A|_{R=0}
= \frac{n_1^2 L^3}{2} \mathcal A_{\rm S ^3} .
\end{align}
Under the condition \eqref{QUANTIZE},
we see that $R=0$ is a null surface where
the metric \eqref{mET3} is regular and each component is
an analytic function of $R$.
Therefore the metric \eqref{mET3} gives analytic extension
across the surface $R = 0$.
By the same discussion, we see that
the metric \eqref{mET2} also admits analytic extension across
the surface $\bm R = \bm R _2$.
We also see that $\eta=\partial_v$ is a Killing vector field
that becomes null at $R=0$.
Furthermore, $\eta$ is hypersurface orthogonal to the surface $R=0$, i.e.,
$\eta_{ \mu} dx^\mu = g_{vR} dR = dR $ there.
These mean that the null hypersurface $R=0$ is a Killing horizon.
Similarly, $\bm R = \bm R _2$ is also a Killing horizon.
Hence, we can see that
the solutions \eqref{mET2} with \eqref{harmonics_2} and \eqref{form_2} describe
Kaluza-Klein multi-black holes,
which have smooth Killing horizons without singularity on and outside
the black hole horizons.
The topology of each black hole horizon is
the lens spaces $L(n_i;1)$.
Since the $\phi$-$\psi$ part of the metric is positive definite,
it is clear that no closed timelike curve exists.
In the near-horizon limit of each black hole, the metric \eqref{mET2}
approaches an $L(n_i;1)$ bundle
over the AdS$_2$ space \cite{Reall:2002bh,Kunduri:2007vf}.
\section{summary and discussions}
\label{Discussions}
We have investigated extremally rotating Kaluza-Klein multi-black hole solutions
in the five-dimensional pure Einstein theory given by the metric \eqref{mET1}
with \eqref{FUNCH} and \eqref{one-form}.
The metric asymptotes to the effectively four-dimensional
spacetime and the size of the compactified extra dimension takes the constant
value everywhere.
We have shown that
each black hole has a smooth horizon and its topology is the lens space.
Furthermore,
the mass and the angular momentum of the black hole satisfy the extremality
condition and
the horizon size of each black hole is quantized by the size of the compactified
dimension.
To sum up,
the exact solutions describe five-dimensional regular vacuum rotating Kaluza-Klein
multi-black holes.
In the solutions, for each black hole,
the ratio of the mass to the angular momentum is rigidly determined by
a value of order unity.
We can interpret that this comes from the force balance between
gravitational force and spin-spin repulsive force between black holes.
This corresponds to the balance of
gravitational force and Coulomb repulsive force after Kaluza-Klein reduction.
Furthermore, the black hole mass should be quantized by the size of
extra dimension $L$ from the regularity of the horizon.
There exists a minimum black hole size, comparable to $L$.
Then, we cannot obtain asymptotically flat solutions from the present solutions
by taking the limit $L \to \infty$ while keeping the black hole mass constant.
This is consistent with the fact that no vacuum multi-black hole solution has been found
in five-dimensional asymptotically flat spacetime.
The asymptotic structure of the solutions, with a compact dimension, affects
the existence of multi-black holes.
Whether more general solutions exist or not is an open question.
\section*{Acknowledgments}
This work is supported by the Grant-in-Aid for Scientific Research No.19540305.
MK is supported by the JSPS Grant-in-Aid for Scientific Research No.11J02182.
We presented a deep learning technique for estimating 3D human body shape and pose from a single color image. To that end, we propose an iterative training approach that alternates between deformable surface registration and training of deep ConvNets, which gradually improves prediction accuracy by extracting and aggregating 3D information from dense correspondences provided on 2D images. This approach allows us to learn 3D body shape and pose from 2D datasets only, without having to use 3D annotations, which are in general very expensive to obtain. More importantly, as our approach does not rely on statistical body models or 3D annotations captured in a controlled environment, our method is not restricted to the space of a pre-built statistical model and can capture body shape and pose details contained in in-the-wild images better than previous approaches.
In future work, we would like to address the modeling of clothing and fine details. We are also interested in designing a single unified network that handles everything from 2D detection to 3D body shape/pose prediction at once, which would be more efficient and faster. It would likewise be beneficial to find a representation of 3D human body and pose that is more amenable to learning with ConvNets than the current parametric shape/pose representation.
\section{Introduction}
Estimating 3D human pose and body shape from a single image is a challenging yet important problem, with a wide variety of applications such as computer animation and virtual try-on in fashion.
Capturing and modeling of 3D body shape and pose has mostly been done in controlled settings using specialized equipment such as whole-body laser range scanners and motion capture (MoCap) systems. With the progress of deep convolutional neural networks (ConvNets), 3D body shape can now be estimated from a single image by regressing the parameters of statistical human body models. However, most current methods rely on 3D databases for both body shape and pose, and constructing or extending the datasets used for training still requires expensive 3D scanning systems.
Nonetheless, the capability of current methods to express body shape and pose is rather limited, for two main reasons. First, regression of body shape parameters is inherently difficult for deep ConvNets: the mapping between the input image and the parameters of a statistical body model is highly nonlinear and hard to learn. Second, large-scale 3D datasets are lacking. Most 3D body shape datasets cover only the age range from young adults to the middle-aged. Likewise, MoCap datasets for 3D human pose estimation contain only a small variety of subjects, since they require a complicated experimental setup in which MoCap and RGB video cameras are synchronized. Because such motion data are acquired in a controlled environment, they differ somewhat from the natural poses found in in-the-wild images.
{\it ``Can we learn 3D human body shape and pose directly from 2D images?''} In this paper, we tackle this challenging problem, bypassing the 3D data scarcity problem by extracting and aggregating 3D information from dense correspondences annotated on 2D images. We propose a strategy called ``deform-and-learn'' that alternates between deformable surface registration, which fits a 3D model to 2D images, and training of a deep neural network that predicts 3D body shape and pose from a single image. Given dense image-to-surface correspondences, the registration step fits a template model to images. The result is then used as supervisory signals of 3D body shape and pose for training deep ConvNets in the learning step. These two processes are iterated to improve accuracy. Unlike previous approaches, our method requires neither statistical body shape models, 3D pose annotations from MoCap datasets, nor human intervention to validate 3D poses.
The contributions of this paper are summarized as follows:
\begin{itemize}
\item
We propose a deform-and-learn training strategy that alternates between deformable registration and training of deep ConvNets for estimating human body shape and pose. It uses dense correspondences annotated on 2D images, without requiring any 3D dataset of body shape or pose.
\item
To design a pose prior from 2D pose datasets, we propose a conditional generative adversarial network (cGAN) for predicting 3D human joint positions from 2D keypoints. We incorporate geometric constraints into the cGAN to further constrain 3D human pose predictions. The results are used as soft constraints to guide the training of the deep ConvNets for body shape and pose estimation.
\item
We propose a skeleton-based deformable registration technique using back propagation, which can be implemented in a deep learning framework and parallelized on the GPU. With automatic differentiation, adding and minimizing various types of losses becomes simple, which frees our method from relying on 3D datasets and pre-built 3D statistical models.
\item
We propose deep ConvNets that predict body shape and pose using scalings of body segments as the body shape representation. With a final refinement step based on deformable registration using dense correspondence predictions, we can further align the body mesh model to an image.
\end{itemize}
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth]{images/concept.jpg}
\caption{Overview of our deform-and-learn training strategy, which iteratively performs deformable registration and deep learning. Let $\mathbf{\theta}$ be the parameters of the body model, such as body shape and pose. At the very beginning, the initial pose for registration is the T-pose, ${\bf \theta}_0$. Given dense image-to-surface correspondences, the registration step fits a template model to images. After registration, we obtain a collection of ${\bf \theta}_{\rm fit}$, which is then used as supervisory signals ${\bf \theta}_{\rm anno}$ to train deep ConvNets that predict body parameters ${\bf \theta}_{\rm conv}$. The ConvNet results are used as initial poses for deformable registration in the next round. These two steps are iterated to obtain better results.}
\label{fig:overview}
\end{figure*}
\section{Conditional generative adversarial networks for 3D human pose with geometric constraints}
\label{sec:cgan}
We propose a conditional generative adversarial network (cGAN) to predict the depths of joints from 2D keypoints in an unsupervised manner. The results of the generator are used as soft constraints to guide image-surface registration in the next section.
We take an approach similar to Kudo et al. \cite{kudo2018} and Chen et al. \cite{Chen2019}, where the 3D joint positions produced by a generator network ($G$) are projected onto the image plane to obtain 2D joint positions and a discriminator ($D$) judges real or fake in 2D image space. The key difference of our model from previous approaches \cite{kudo2018} is that it incorporates geometric constraints, such as bone symmetry, to further constrain the solution space. The network architecture is depicted in Fig. \ref{fig:GAN}. The input to the generator is the 2D keypoints of $N$ joints, and the output is the depths of those joints. The predicted depth values $z_i$ are concatenated with the $x_i$ and $y_i$ coordinates, rotated around the vertical axis and projected into image space. The discriminator takes the projected joint positions as $fake$ and the 2D keypoint data as $real$. For both networks, we use a multi-layer perceptron (MLP) with eight linear layers to map 2D coordinates to depths and to a binary class, respectively.
Let ${\bf u}$ be the 2D joint positions of a skeleton. Also let us denote an angle around the vertical axis as $\phi$. Our 3D human pose cGANs uses the following standard adversarial loss functions for $G$ and $D$:
\begin{align}
{\cal L}_{\rm adv}^G &= E_{{\bf u}, \phi} [\log (1- D (f({\bf u}, G({\bf u}); \phi)))] \nonumber \\ {\cal L}_{\rm adv}^D &= E[\log D({\bf u})] \nonumber
\end{align}
where $f$ denotes the rotation and projection function. Note that we validate each pose from multiple views, using the angles $\phi = \{45, 60, 90, 135, 180, 235, 270 \}$ [deg].
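As a sketch of the rotate-and-project step $f({\bf u}, G({\bf u}); \phi)$, the following NumPy snippet rotates predicted 3D joints about the vertical axis and keeps the $x$, $y$ coordinates; the $y$-up convention, orthographic projection and toy joint array are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def rotate_and_project(joints_3d, phi_deg):
    """Rotate 3D joints about the vertical (y) axis by phi degrees and
    orthographically project onto the image plane (keep x, y)."""
    phi = np.deg2rad(phi_deg)
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])       # rotation about the y axis
    return (joints_3d @ R.T)[:, :2]    # drop depth -> 2D "fake" keypoints

# Depths from the generator are concatenated with the 2D input and
# validated from several viewpoints.
joints = np.array([[0.0, 1.0, 0.5],
                   [0.3, 0.2, -0.1]])
fakes = [rotate_and_project(joints, phi) for phi in (45, 90, 180)]
```

Each element of `fakes` would be fed to the discriminator as a $fake$ sample.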
In addition to the adversarial loss, geometric losses are applied. Specifically, we use the bone symmetry loss ${\cal L}_{\rm sym}$, which constrains the left and right limbs to have similar lengths, and the bone ratio loss ${\cal L}_{\rm ratio}$, which minimizes the difference between the normalized bone length predictions and those of the dataset. The bone ratio loss ${\cal L}_{\rm ratio}$ is defined as:
\begin{equation}
{\cal L}_{\rm ratio} = \sum_{e \in{\cal B}} \|\frac{l_e}{l_{\rm trunk}} - \frac{\bar{l}_e}{\bar{l}_{\rm trunk}} \|^2 \nonumber
\end{equation}
where $\frac{l_e}{l_{\rm trunk}}$ is the ratio of the length of bone $e$, in the set of bones ${\cal B}$ of a skeleton, to the trunk length, and $\frac{\bar{l}_e}{\bar{l}_{\rm trunk}}$ is the corresponding ratio of the average skeleton. Let ${\cal B}_s$ be the set of symmetric bone pairs, containing indices of bones such as the left and right forearms. Then the bone symmetry loss ${\cal L}_{\rm sym}$ is defined as:
\begin{equation}
{\cal L}_{\rm sym} = \sum_{i,j \in{\cal B}_s } \|l_i- l_j \|^2 \nonumber\\
\end{equation}
where $l_i$ and $l_j$ are the lengths of the bones in a symmetric pair. We mix the above losses to train the generator, so that the loss is:
\begin{equation}
{\cal L}^G = \epsilon{\cal L}_{\rm adv}^G + ({\cal L}_{\rm ratio} + {\cal L}_{\rm sym}) \nonumber\\
\end{equation}
where $\epsilon$ is the weight for controlling the strength of the adversarial term, which we set to 0.1 in this paper.
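The two geometric losses can be sketched as below; the toy four-joint skeleton, the bone list and the helper names are hypothetical stand-ins for illustration only.

```python
import numpy as np

def bone_lengths(joints, bones):
    """Length of each bone, given joints (N, 3) and (parent, child) pairs."""
    return np.array([np.linalg.norm(joints[c] - joints[p]) for p, c in bones])

def ratio_loss(lengths, trunk, mean_lengths, mean_trunk):
    # sum_e || l_e / l_trunk - lbar_e / lbar_trunk ||^2
    return float(np.sum((lengths / trunk - mean_lengths / mean_trunk) ** 2))

def symmetry_loss(lengths, sym_pairs):
    # sum over left/right bone pairs of || l_i - l_j ||^2
    return float(sum((lengths[i] - lengths[j]) ** 2 for i, j in sym_pairs))

# Toy skeleton: root, trunk top, left arm tip, right arm tip.
joints = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0],
                   [1.0, 2.0, 0.0], [-1.0, 2.0, 0.0]])
bones = [(0, 1), (1, 2), (1, 3)]   # trunk, left arm, right arm
l = bone_lengths(joints, bones)
```

For this symmetric toy skeleton, `symmetry_loss(l, [(1, 2)])` is zero, and `ratio_loss` vanishes when the skeleton matches the dataset average.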
\section{Image-surface deformable registration}
\label{sec:regist}
We propose a deformable surface registration technique that fits a template mesh model to images to obtain 3D body shape and pose annotations for training deep ConvNets. Here, deformable registration is formulated as a gradient-based method based on back propagation, which can be implemented in a deep learning framework and parallelized on the GPU. With the automatic differentiation provided by such a framework, adding and minimizing various kinds of losses is easy and straightforward. The proposed deformable registration technique thus incorporates kinematic, geometric and correspondence losses.
Given image-surface dense correspondences annotated on images, the template mesh is fitted to images by optimizing body parameters ${\bf \theta} = [{\bf a}, {\bf S},{\bf R}, s, {\bf t}]$ subject to kinematic and geometric constraints. In total, the overall loss function for our registration is of the form:
\begin{align}
{\cal L}_{\rm regist} &= \omega_{\rm dense} {\cal L}_{\rm dense} + \omega_{\rm KP} {\cal L}_{\rm KP} + \omega_{\rm scale} {\cal L}_{\rm scale} \nonumber \\
&+ \omega_{\rm joint} {\cal L}_{\rm joint} + \omega_{\rm det} {\cal L}_{\rm det} \nonumber
\end{align}
where ${\cal L}_{\rm dense}$ and ${\cal L}_{\rm KP}$ are the dense correspondence and keypoint losses, which penalize misalignment between the body model and the images in terms of dense correspondences and keypoints. The losses ${\cal L}_{\rm scale}$ and ${\cal L}_{\rm joint}$ are the segment scaling smoothness and kinematic regularization terms. The transformation determinant loss ${\cal L}_{\rm det}$ keeps the determinant of the global transformation positive. In addition, $\omega_{\rm dense}$, $\omega_{\rm KP}$, $\omega_{\rm scale}$, $\omega_{\rm joint}$ and $\omega_{\rm det}$ are the respective weights of these losses. The body parameters are initialized from the predictions of the deep ConvNets. For the very first iteration, where ConvNet predictions are not yet available, the segment scales ${\bf S}$ are set to 1 for all segments and the pose ${\bf a}$ is set to 0 for all joints, so registration starts from the T-pose.
\subsection{Correspondence fit loss}
\label{sec:corresp}
The correspondence loss comprises two terms: the dense correspondence loss ${\cal L}_{\rm dense}$ and the keypoint loss ${\cal L}_{\rm KP}$.
\noindent {\bf Dense correspondence loss } Let us define a set of image-surface correspondences $\mathcal{C}=\{(\mathbf{p}_{1}, \mathbf{v}_{\mathrm{idx}(1)}), \ldots, (\mathbf{p}_{N}, \mathbf{v}_{\mathrm{idx}(N)}) \}$, where the $\mathbf{p}_i$ are image points and $\mathrm{idx}(i)$ is the index of the mesh vertex matched with image point $i$. We can then define the dense correspondence loss as:
\begin{eqnarray}
\label{eq:Ec}
{\cal L}_{\rm dense} = \sum_{i\in \mathcal{C}} \|{\bf p}_{i} - \mathbf{x}_{\mathrm{idx}(i)} \|^2 \nonumber
\end{eqnarray}
where a mean squared error (MSE) is used to calculate the loss.
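A minimal NumPy sketch of ${\cal L}_{\rm dense}$, assuming the mesh vertices have already been projected to 2D; the sample values and function name are illustrative.

```python
import numpy as np

def dense_correspondence_loss(image_pts, projected_verts, vert_idx):
    """MSE between annotated image points p_i and the projected mesh
    vertices x_{idx(i)} they correspond to (all in image coordinates)."""
    diff = image_pts - projected_verts[vert_idx]
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# Two annotated points matched to vertices 0 and 2 of a projected mesh.
p = np.array([[10.0, 20.0], [30.0, 40.0]])
x = np.array([[10.0, 20.0], [0.0, 0.0], [33.0, 44.0]])
loss = dense_correspondence_loss(p, x, np.array([0, 2]))
```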
\noindent {\bf Key point loss } To produce 3D poses with statistically valid depths, the results of the cGAN are used to guide deformable registration. Instead of attaching a discriminator to the registration framework, the depth values from the cGAN and the ground-truth 2D joint coordinates are provided as soft constraints on the positions of the 3D joints via an MSE loss:
\begin{eqnarray}
{\cal L}_{\rm KP} = \sum_{i\in \mathcal{J}} \|x_i - \bar{x}_i \|^2
+ \sum_{i\in \mathcal{J}} \|y_i - \bar{y}_i \|^2
+ \sum_{i\in \mathcal{J}} \|z_i - z_i^{\rm GAN} \|^2 \nonumber
\end{eqnarray}
where $\bar{x}_i$ and $\bar{y}_i$ are the ground-truth 2D keypoints, and $z_i^{\rm GAN}$ is the depth of joint $i$ predicted by the cGAN.
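A minimal NumPy sketch of this keypoint loss; the joint arrays are illustrative and the function name is our own.

```python
import numpy as np

def keypoint_loss(joints_3d, kp2d_gt, z_gan):
    """Tie predicted joints to ground-truth 2D keypoints in x, y and
    to the cGAN depth predictions in z (squared-error penalties)."""
    loss_xy = np.sum((joints_3d[:, :2] - kp2d_gt) ** 2)
    loss_z = np.sum((joints_3d[:, 2] - z_gan) ** 2)
    return float(loss_xy + loss_z)

joints = np.array([[1.0, 2.0, 0.5], [3.0, 4.0, -0.5]])
kp_gt = np.array([[1.0, 2.0], [3.0, 4.0]])
z_gan = np.array([0.5, 0.0])
loss = keypoint_loss(joints, kp_gt, z_gan)
```

Here only the second joint's depth deviates from the cGAN prediction, so the loss reduces to that single squared depth error.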
\subsection{Geometric and kinematic loss}
Since we attract the template mesh to 2D image coordinates, the problem is ill-posed and the deformations are underconstrained. We therefore introduce regularization terms that avoid extreme deformations.
\noindent {\bf Segment scaling smoothness } To avoid extreme segment scalings, we introduce a scaling smoothness loss, which minimizes the difference between the scalings of adjacent segments:
\begin{equation}
{\cal L}_{\rm scale} = \sum_{e \in \mathcal{B}} \|{\bf S}_e - {\bf S}_{{\rm adj}(e)}\|^2 \nonumber
\end{equation}
\noindent {\bf Joint angle smoothness and limit loss } To prevent extreme poses, we introduce a joint angle smoothness loss and a joint limit loss. The smoothness loss is enforced at every joint in the skeleton, $\mathcal{J}$, and helps avoid extreme bending. To avoid hyper-extensions that bend certain joints, such as the elbows and knees (denoted $\mathcal{J'}$), in the negative direction, we introduce the joint limit loss. The regularizations acting on the joints are thus:
\begin{equation}
{\cal L}_{\rm joint} = \sum_{i \in \mathcal{J}} \| {\bf a}_i\|^2 + \sum_{i \in \mathcal{J'}} \|{\rm exp}({\bf a}_i)\|^2 \nonumber
\end{equation}
where the first term minimizes the joint angles, while the second term penalizes rotations that violate natural limits by exponentiating the angles and minimizing the result.
\noindent {\bf Transformation determinant loss } Since we use a rotation matrix to represent the global rotation at the root, it is necessary to constrain the matrix so that its determinant stays positive. We therefore define the transformation determinant loss as:
\begin{eqnarray}
{\cal L}_{\rm det} = \exp (-{\rm det}({\bf R})) \nonumber
\end{eqnarray}
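The three regularizers might be sketched as follows; variable names and the scalar treatment of segment scales are simplifications, and the elementwise exponential mirrors the $\|\exp({\bf a}_i)\|^2$ term as written.

```python
import numpy as np

def scale_smoothness(scales, adjacency):
    """sum || S_e - S_adj(e) ||^2 over adjacent segment pairs."""
    return float(sum((scales[i] - scales[j]) ** 2 for i, j in adjacency))

def joint_loss(angles, limit_joints):
    """Angle-magnitude penalty on all joints plus an exponential term on
    limited joints (e.g. elbows, knees) that grows for forbidden bending."""
    smooth = float(np.sum(angles ** 2))
    limit = float(np.sum(np.exp(angles[limit_joints]) ** 2))
    return smooth + limit

def det_loss(R):
    """exp(-det(R)) grows rapidly once the determinant turns negative."""
    return float(np.exp(-np.linalg.det(R)))
```

Note that with axis-angle vectors of zero, the exponential term still contributes a constant offset, which does not affect the gradients driving the fit.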
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{images/refinement.jpg}
\caption{Refinement based on DensePose predictions. The result of the body parameter regression network is refined with the deformable surface registration technique. To define the dense correspondence term, the DensePose predictions of uv maps and part indices are converted into image-surface correspondences.}
\label{fig:refine_method}
\end{figure*}
\section{Estimating 3D human body shape and pose from a single image}
\label{sec:conv}
\subsection{Deep ConvNets for body shape and pose regression}
Using the results obtained by deformable registration as annotations for training deep ConvNets, we regress body shape and pose parameters from an image. We also add the dense correspondence and keypoint losses of Section \ref{sec:corresp} for additional supervision. In total, we minimize a loss function of the form:
\begin{equation}
{\cal L}_{\rm conv} = \alpha {\cal L}_{\rm regress} + \beta {\cal L}_{\rm dense} + \gamma {\cal L}_{\rm KP} \nonumber
\end{equation}
where ${\cal L}_{\rm regress}$ is the regression loss for the body parameters, and $\alpha$, $\beta$ and $\gamma$ are the respective weights. Letting $\theta_i$ be the parameters of the $i$-th sample, the regression loss is defined as:
\begin{equation}
{\cal L}_{\rm regress} = \sum_{i} \; {\rm smooth}_{L1} (\theta_i - {\theta}_i^{\rm anno} ) \nonumber
\end{equation}
where ${\theta}_i^{\rm anno}$ is the annotation provided by the registration step. We use the smooth L1 loss because of its robustness to outliers; in the presence of potential outliers and noisy annotations, it was more effective than the L2 loss at decreasing the error during the iterative training strategy.
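A sketch of the smooth L1 loss used here, following the common Huber-style definition; the `beta` threshold is an assumed hyperparameter.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: quadratic for residuals below beta, linear above,
    so large outliers contribute less than under an L2 loss."""
    d = np.abs(pred - target)
    return float(np.sum(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)))
```

The quadratic and linear branches meet smoothly at `|d| = beta`, which keeps gradients bounded for noisy registration annotations.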
The body model is similar to the one used for registration, except for the pose representation: we found that quaternions improved the stability and convergence of training compared to axis angles, probably because quaternion components lie between -1 and 1 and are easier for ConvNets to learn. The other parameters are the same as in Section \ref{sec:regist}, giving 132 parameters in total. Note that the global rotation is regressed with 9 parameters, and Gram-Schmidt orthogonalization is used to turn the prediction into a rotation matrix. We use ResNet50 \cite{DBLP:journals/corr/HeZRS15} pretrained on the ImageNet dataset as the base network.
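A possible sketch of the 9-parameter-to-rotation step; this variant completes the third column with a cross product so that the determinant is $+1$, which is one common way to realize the Gram-Schmidt step described in the text (the paper does not spell out this detail).

```python
import numpy as np

def gram_schmidt_rotation(m9):
    """Turn a raw 9-dim prediction into a valid rotation matrix by
    Gram-Schmidt orthonormalization of its first two columns; the third
    column is completed by a cross product so that det(R) = +1."""
    M = np.asarray(m9, dtype=float).reshape(3, 3)
    c1 = M[:, 0] / np.linalg.norm(M[:, 0])
    c2 = M[:, 1] - M[:, 1].dot(c1) * c1   # remove the c1 component
    c2 = c2 / np.linalg.norm(c2)
    c3 = np.cross(c1, c2)                 # right-handed frame
    return np.stack([c1, c2, c3], axis=1)

R = gram_schmidt_rotation([1.0, 1.0, 0.0,
                           0.0, 1.0, 0.0,
                           0.0, 0.0, 2.0])
```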
\subsection{Inference and final refinement based on registration}
The inference phase consists of two steps: 3D body parameter prediction and skeleton-based deformation. Since body shape/pose parameters are highly nonlinear and difficult to regress accurately with deep ConvNets, we optionally refine the ConvNet predictions using the deformable registration technique proposed in Section \ref{sec:regist}. To define the dense correspondence term, we use DensePose \cite{Guler2018DensePose} to obtain dense uv maps and part indices (Fig. \ref{fig:refine_method}), which are then converted to image-surface correspondences. In addition, the simple baseline 2D human pose detector \cite{xiao2018simple} is used to obtain 2D human joint positions from the image and to define the keypoint loss. We use the pre-trained models from \cite{Guler2018DensePose} and \cite{xiao2018simple}.
\section{Problem formulation}
The goal of our work is to learn a model that predicts 3D body shape and pose from a single image using deep ConvNets, without using any 3D dataset. To the best of our knowledge, this paper is the first to achieve this. To that end, we use dense correspondence annotations (Fig. \ref{fig:dennse_annotation}) between image points and the body surface, which can be annotated on in-the-wild 2D images and provide rich information about body shape and pose. Compared to silhouettes and part segmentation, dense correspondence annotations are less noisy around boundaries and can be obtained with modest additional human effort; their annotation time is almost the same as that of part segmentation \cite{Guler2018DensePose}.
Although dense correspondences between the body surface and image points contain rich information, they are not by themselves sufficient for recovering 3D body shape and pose, especially depth. The strategy we take in this paper is to incorporate geometric and kinematic losses on the body parameters, as well as an adversarial loss defined from 2D keypoints, to constrain the space of body shapes and poses, since we have no direct access to 3D data for training. Consequently, the total loss we wish to minimize is as follows:
\begin{align}
{\cal L} = {a}{\cal L}_{\rm dense} + {b}{\cal L}_{\rm geo} + {c}{\cal L}_{\rm adv}
\label{eq:loss}
\end{align}
where ${\cal L}_{\rm dense}$ is the dense correspondence loss, which penalizes inconsistency between the fitted body model and the images in terms of dense correspondences; ${\cal L}_{\rm geo}$ is the geometric and kinematic regularization loss; and ${\cal L}_{\rm adv}$ is the adversarial loss that keeps the distribution of predicted poses close to that of the 2D keypoint annotations. The weights $a$, $b$ and $c$ control the relative strengths of the terms.
\noindent {\bf Deform-and-learn iterative training strategy } Directly minimizing all of the losses in Eq. (\ref{eq:loss}) at once is difficult; in practice we found that the error stayed high. Instead, we decouple the problem into three components: A) training of a conditional generative adversarial network that predicts 3D joints from 2D keypoints; B) optimization of latent body parameters by image-surface registration; C) learning of body parameters using the latent supervision obtained in step B. The first component is trained once and provides soft constraints on the 3D joints in the second and third steps. The second and third components are iterated several times to improve accuracy, which we refer to as the ``deform-and-learn'' training strategy.
We found that decoupling the training into three steps and providing supervision on latent variables is effective. Indeed, recent work has shown that supervising latent body parameters stabilizes and improves training \cite{NBF:3DV:2018}. Here we reconstruct the latent variables from dense annotations by deformable surface registration. Note that deformable registration is a local optimizer and is sensitive to the initial solution. This is why we propose the iterative ``deform-and-learn'' training strategy, which alternates between deformable registration and learning: it gradually improves performance by updating the initial solution in the registration phase and then the latent supervision in the learning phase.
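The alternation between steps B and C can be sketched as a loop; here `register` and `train_convnet` are toy scalar stand-ins (they do not implement the real losses) that only illustrate how step B's fits supervise step C and how ConvNet predictions re-initialize step B.

```python
def t_pose():
    # round 0 initialization: a scalar stand-in for the T-pose parameters
    return 0.0

def register(image, annotation, theta0):
    # toy step B: the fit moves halfway from the initialization
    # toward the annotation-implied target
    return 0.5 * (theta0 + annotation)

def train_convnet(images, theta_fit):
    # toy step C: "learn" the mean fitted parameter
    mean = sum(theta_fit) / len(theta_fit)
    return lambda image: mean

def deform_and_learn(images, annotations, n_rounds=3):
    """Alternation skeleton: registration (B) supplies latent supervision
    for learning (C); predictions initialize the next registration round."""
    theta_init = [t_pose() for _ in images]
    convnet = None
    for _ in range(n_rounds):
        theta_fit = [register(im, an, th)
                     for im, an, th in zip(images, annotations, theta_init)]
        convnet = train_convnet(images, theta_fit)
        theta_init = [convnet(im) for im in images]
    return convnet

model = deform_and_learn([0, 1], [2.0, 2.0], n_rounds=5)
```

With each round, the toy "fits" move closer to the annotation target, mirroring how the real loop refines the initial solution and the latent supervision together.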
\noindent {\bf Body shape and pose model } To fit a template mesh model to an image, we use a skeleton-based parametric deformable model, a modified version of SMPL \cite{bogo2016keep}. The template mesh consists of $n$ vertices, with $n = 6980$ in this paper. The vertex positions of the template, $\mathbf{v}_1 \ldots \mathbf{v}_n$, are denoted by an $n \times 3$ matrix, $\mathbf{v}= [\mathbf{v}_1 \ldots \mathbf{v}_n]^\mathrm{ T}$. The pose of the body is defined by a skeleton rig with 23 joints, where the pose parameters ${\bf a} \in \mathbb{R}^{24 \times 3}$ are the axis-angle representations of the relative rotations between segments, and the body model is posed by the joint parameters ${\bf a}$ via forward kinematics. Instead of a low-dimensional shape space as in \cite{bogo2016keep}, which must be learned from thousands of registered 3D scans, we model body shape by segment scales ${\bf S} \in \mathbb{R}^{24}$. In this way, body shape can be modeled more flexibly without 3D body scans---it is not confined to the space of a statistical model. Using linear blend skinning, the body deformation model is defined as a function ${\bf v} = X({\bf S}, {\bf a})$.
We use the weak-perspective camera model and solve for the global rotation ${\bf R} \in \mathbb{R}^{3 \times 3}$, translation ${\bf t} \in \mathbb{R}^{2}$ and global scale $s \in \mathbb{R}$. Rather than using another rotational representation such as axis angles, we directly optimize a rotation matrix with 9 parameters, since it represents orientations in 3D space uniquely. Because optimization can make this matrix deviate from a valid rotation, we apply Gram-Schmidt orthonormalization to it. The total number of parameters representing the human body is thus 108, ${\bf \theta} = [{\bf a}, {\bf S},{\bf R}, s, {\bf t}]$. Given the body parameters ${\bf \theta}$, deformation and projection of the vertices ${\bf v} = X({\bf S}, {\bf a})$ into an image is performed as:
\begin{align}
{\bf x} = s \Pi ({\bf R} X({\bf S}, {\bf a})) + {\bf t} \nonumber
\end{align}
where $\Pi$ is an orthogonal projection.
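The camera part of this pipeline can be sketched as below, with $X({\bf S},{\bf a})$ assumed to have already produced the posed vertices; the sample values are illustrative.

```python
import numpy as np

def weak_perspective_project(verts, R, s, t):
    """x = s * Pi(R v) + t, where Pi is an orthographic projection
    that keeps the first two coordinates."""
    rotated = verts @ R.T          # apply the global rotation R
    return s * rotated[:, :2] + t  # scale s and 2D translation t

# One vertex, identity rotation, scale 2, translation (1, 1).
x = weak_perspective_project(np.array([[1.0, 2.0, 3.0]]),
                             np.eye(3), 2.0, np.array([1.0, 1.0]))
```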
\section{Overview}
The overview of our approach is depicted in Fig. \ref{fig:overview}. We train a conditional generative adversarial network (cGAN) that predicts 3D joint positions from 2D joint positions; it guides both the registration and the training of the deep ConvNets for body shape and pose (Section \ref{sec:cgan}). The deform-and-learn training strategy alternates between deformable surface registration, which fits a 3D model to 2D images, and training of a deep neural network that predicts 3D body shape and pose from a single image (Section \ref{sec:regist}). At the very beginning, the initial pose for registration is the T-pose, ${\bf \theta}_0$. Given image-surface dense correspondences, the registration step fits a template model to images. After registration, we obtain a collection of body parameters ${\bf \theta}_{\rm fit}$, which is then used as supervisory signals ${\bf \theta}_{\rm anno}$ to train deep ConvNets that predict body parameters ${\bf \theta}_{\rm conv}$ (Section \ref{sec:conv}). These results are used as initial poses for surface registration in the next round. The training process is iterated several times to obtain better results.
In the inference phase, we optionally perform refinement based on deformable surface registration: we first use the trained deep ConvNets to predict the body shape and pose parameters, then refine the result with the registration technique, starting from the ConvNet prediction as the initial solution. The components used here are thus the same as in the training phase, except that their order is flipped.
\begin{comment}
\begin{table}[t]
{\footnotesize
\begin{center}
\caption[Comparison of mappings]{Comparison of mappings. $\mathbf{T}$ is a $3 \times 3$ linear transformation matrix. $\mathbf{C}$ is a stretch matrix. $\mathbf{I_d}$ is an identity matrix. $s$ is a scale. $\mathbf{R}$ is a rotation matrix. }
\label{tab:Comp_mapping}
\begin{tabular}{cccc}
\hline
& Property & $\mathbf{C} = \mathbf{T}^\mathrm{T}\mathbf{T}$ & $\mathbf{T}$ \\
\hline
Isometric & length-preserving & $\mathbf{I_d}$ & $\mathbf{R}$ \\
Conformal & angle-preserving & $s^2 \mathbf{I_d}$ & $s \mathbf{R}$ \\
Equiareal & scale-preserving & & $\mathrm{det}(\mathbf{T}) = 1$ \\
Harmonic & smooth deformation & & $ \mathrm{min} \| \mathbf{T} \|^2_F$\\
\hline
\end{tabular}
\end{center}
}
\vspace{10pt}
\end{table}
\section{Background}
\label{sec:related}
\subsection{Classes of mappings}
Here, we briefly review the classes of mappings \cite{FH05} as a guide to choose an appropriate mapping for nonrigid surface registration. Suppose that $S$ is a surface and $f$ is a mapping from $S$ to a second surface $\tilde{S}$. We consider a 3D-to-3D mapping case, where a point at position $\mathbf{S}$ is deformed to $\tilde{\mathbf{S}}$ by a nonlinear function $f$, $\tilde{\mathbf{S}} = f(\mathbf{S})$. We define an orthogonal local frame $\mathrm{d}\mathbf{S} = [\mathrm{d}\mathbf{S}_1 , \mathrm{d}\mathbf{S}_2 ,\mathrm{d}\mathbf{S}_3 ]$ at $\mathbf{S}$, which is deformed to $\mathrm{d}\tilde{\mathbf{S}} = [ \mathrm{d}\tilde{\mathbf{S}}_1 , \mathrm{d}\tilde{\mathbf{S}}_2 ,\mathrm{d}\tilde{\mathbf{S}}_3 ]$. A $3 \times 3$ local linear transformation $\mathbf{T}$ called the deformation gradient is calculated as $\mathbf{T} = \mathrm{d}\tilde{\mathbf{S}} \cdot (\mathrm{d}\mathbf{S})^{-1}$. The rotation-invariant measure of deformation, the Cauchy-Green stretch tensor, is defined as $\mathbf{C} = \mathbf{T}
^\mathrm{T}
\mathbf{T}$, which is an analogue of the first fundamental form. The properties of mappings are described as follows (Table \ref{tab:Comp_mapping}).
\textbf{ Isometric mappings }
A mapping from $S$ to $\tilde{S}$ is isometric or length-preserving if the length of any arc on $\tilde{S}$ is the same as that of the corresponding arc on $S$. When a mapping is isometric, $\mathbf{C}$ is an identity matrix, $\mathbf{C} = \mathbf{I_d}$. In other words, deformation is locally rigid (rotation), $\mathbf{T}= \mathbf{R}$.
\textbf{Conformal mappings }
A mapping from $S$ to $\tilde{S}$ is conformal or angle-preserving if the angle of intersection of every pair of the intersecting arcs on $\tilde{S}$ is the same as that of the corresponding arcs on $S$. When a mapping is conformal, the axes of the local frame must be orthogonal and have the same norm. In terms of stretch, it must satisfy $\mathbf{C} = s^2 \mathbf{I_d}$, where $s$ is a scale. In other words, a local transformation is scale and rotation $\mathbf{T} = s \mathbf{R}$; i.e., it is a similarity transformation. Thus, a circle and a sphere are transformed to a circle and a sphere, but their radii are allowed to change from their original values.
\textbf{Equiareal mappings }
A mapping from $S$ to $\tilde{S}$ is equiareal if every part of $S$ is mapped onto a part of $\tilde{S}$ with the same area. It is scale preserving.
\textbf{ Harmonic mappings }
A mapping from $S$ to $\tilde{S}$ is harmonic if the deformation minimizes the Dirichlet energy:
{
\abovedisplayskip=5pt
\belowdisplayskip=5pt
\begin{eqnarray*}
E_D(f) = \frac{1}{2} \|\mathrm{grad}_\mathcal{S} \; f\|^2 \nonumber
\end{eqnarray*}
}
where $\mathrm{grad}_\mathcal{S}$ is the gradient of the surface. Let $\mathbf{f}$ be a vector of a function defined on a surface. The solution for the minimization problem is obtained by solving the Laplace equation with some boundary constraints:
{
\abovedisplayskip=5pt
\belowdisplayskip=5pt
\begin{eqnarray*}
\Delta_\mathcal{S} \mathbf{f} = \mathbf{0} \;
\end{eqnarray*}
}
where $\Delta_\mathcal{S}$ is the Laplace-Beltrami operator. The consequence of this minimization is that the boundary conditions are smoothly interpolated. When the mappings are harmonic, because the gradient of a mapping is equivalent to the deformation gradient, we are minimizing a local transformation, $ \mathrm{min} \| \mathbf{T} \|^2_F$.
There are two important implications that describe relationships between the above mappings. First, every isometric mapping is conformal and equiareal (scale-preserving):
{
\abovedisplayskip=5pt
\belowdisplayskip=5pt
\begin{eqnarray*}
\mathrm{isometric = conformal \cap equiareal} \nonumber
\end{eqnarray*}
}
Second, a conformal mapping is the subspace of harmonic mapping:
{
\abovedisplayskip=5pt
\belowdisplayskip=5pt
\begin{eqnarray*}
\mathrm{conformal \subset harmonic} \nonumber
\end{eqnarray*}
}
Thus, conformal mappings are always harmonic but the inverse is not true. Not all harmonic mappings are angle preserving.
\section{Overview}
\end{comment}
\section{Related Work}
\noindent {\bf Human body shape modeling and surface registration } Previously, modeling of 3D body shape was done with 3D scanners. The first approach in this line of work was by Allen et al. \cite{ACP03}, who fit a template 3D body model to the CAESAR dataset, which contains a couple of thousand subjects, and used principal component analysis (PCA) to model the space of human body shapes. Later, several techniques were proposed to extend the method of Allen et al. to handle both body shape and pose variations (such as SCAPE \cite{ASK05} and SMPL \cite{bogo2016keep}) and even dynamic deformations (e.g., Dyna \cite{Dyna:SIGGRAPH:2015}). Nonrigid surface registration techniques have been used in body shape modeling to fit a template mesh model to 3D scans \cite{HAW08,ACP03, SP04, ARV07, YLSL10}.
\noindent {\bf Estimating 3D joint positions from an image } Early approaches predict 3D joint positions from keypoints, assuming that nearly perfect 2D keypoints have already been extracted from an image \cite{ramakrishna2012reconstructing}. The first ConvNet-based method directly regresses 3D joint positions from an image \cite{li20143d}. Recent techniques achieve higher accuracy with an end-to-end framework that predicts 2D joints with heatmaps and then regresses 3D joint positions or depths from them \cite{zhou2017weakly, mehta2016monocular}. Martinez et al. \cite{martinez_2017_3dbaseline}, on the other hand, proposed a very simple network architecture that maps 2D joint coordinates to 3D joint positions, resulting in two separate networks that can also achieve high accuracy. Pavlakos et al. \cite{PavlakosZDD16} used a volumetric heatmap representation, which is ConvNet-friendly and avoids regressing real values in a highly nonlinear manner. Some methods regress kinematic parameters \cite{zhou2016deep, Skeleton_Yoshiyasu2018} to preserve human skeletal structure.
\noindent {\bf Body shape from an image } A common way to predict 3D human body shape and pose from an image is to employ pre-built statistical human models. Guan et al. \cite{Guan:ICCV:2009} first manually extracted 2D keypoints and silhouettes of the human body. The first automatic method was proposed in SMPLify \cite{bogo2016keep}, where the statistical human model called SMPL was fitted by an optimization technique to the 2D keypoints estimated from an image using ConvNets. Tan et al. \cite{TanBC17} proposed an indirect approach that learns body shape and pose by minimizing the discrepancy between estimated and real silhouettes. Tung et al. \cite{tung2017self} proposed a self-supervised motion capture technique that optimizes SMPL body parameters, and Kanazawa et al. \cite{hmrKanazawa17} proposed an end-to-end learning system for human body shape and pose based on generative adversarial networks (GANs). More recently, silhouettes \cite{pavlakos2018humanshape,varol18_bodynet} and part segmentations \cite{NBF:3DV:2018} have been incorporated to improve prediction accuracy. In DensePose \cite{Guler2018DensePose}, on the other hand, UV coordinates and part indices are directly annotated on images to establish image-to-surface dense correspondences, but this is still not a complete 3D representation. The approach most similar to ours is that of Lassner et al. \cite{Lassner:UP:2017}, who proposed a method to construct a 3D human body shape and pose dataset by fitting a SMPL model to images. Compared to them, our approach does not require statistical 3D pose/shape priors or human intervention to validate pose fits.
\noindent {\bf Unsupervised and weakly-supervised approaches } Given 2D keypoints of human joints, Kudo et al. \cite{kudo2018} and Chen et al. \cite{Chen2019} used conditional generative adversarial networks (GANs) to estimate 3D human pose from a 2D pose dataset only. Rhodin et al. \cite{rhodin2018unsupervised} used an auto-encoder to compress multi-view images into latent variables and reconstructed 3D human pose from them, which does not require a large amount of 3D pose supervision.
\section{Experimental results}
\subsection{Implementation and training detail}
Our method is implemented in PyTorch. Training takes 2-3 days using an NVIDIA Quadro P6000 graphics card with 24 GB of memory. We use the Adam optimizer for all the steps in our approach. The multi-view cGAN is trained for 60 epochs with a batch size of 1024 and a learning rate of 0.0002. At each iteration, the body regressor is trained for 50 epochs with a batch size of 30 and a learning rate of 0.0001. From the 1st to the 4th training iteration, we use both the Human3.6M and MS COCO datasets. At the last (5th) iteration, we fine-tune the network on the Human3.6M dataset only. We set the parameters in the loss function to $\alpha = \gamma =1$ and $\beta = 10$. For deformable surface registration, we use a learning rate of 0.1 and a batch size of 10. We empirically set the parameters to $\omega_{\rm dense} = 1000$, $\omega_{\rm KP} = 1$, $\omega_{\rm scale} = 10$, $\omega_{\rm joint}= 0.001$ and $\omega_{\rm det} = 1$. For the first training iteration, in order to recover a global rotation, we set $\omega_{\rm scale} = 100$ and $\omega_{\rm joint}= 1$ to make the body model stiff, which is a common strategy in deformable registration \cite{ARV07}. We perform 300 forward-backward passes during the registration step of the 1st iteration. From the second iteration on, 100 forward-backward passes were sufficient, since we start from the ConvNet prediction.
\subsection{Dataset}
\noindent {\bf DensePose } The DensePose dataset \cite{Guler2018DensePose} contains images with dense annotations of part-specific UV coordinates (Fig. \ref{fig:dennse_annotation}), which are provided on the MS COCO images. To obtain part-specific UV coordinates, the body surface of a SMPL human body model is partitioned into 24 regions, each of which is unwrapped so that its vertices have UV coordinates. Thus, every vertex on the model has a unique parameterization. Based on this, images are manually annotated with part indices and UV coordinates to establish dense image-to-surface correspondences. To use these dense correspondences in 3D model fitting, we find the closest points from image pixels to surface vertices in the UV coordinates of every part. The nearest-neighbor search is done in this direction because image pixels are usually coarser than surface vertices. We were able to obtain approximately 15k annotated training images with the whole body contained in the image region.
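The per-part nearest-neighbor lookup described above can be sketched as follows (a simplified illustration with hypothetical names; a real implementation would use a spatial index rather than a linear scan):

```python
import numpy as np

def densepose_correspondences(pixels, vert_uv, vert_part):
    """Match annotated pixels to template vertices in UV space, per body part.

    pixels:    iterable of (part_index, u, v) pixel annotations
    vert_uv:   (V, 2) array of per-vertex UV coordinates
    vert_part: (V,) array of per-vertex part indices (1..24 in DensePose)
    """
    matches = []
    for part, u, v in pixels:
        idx = np.where(vert_part == part)[0]                       # vertices of this part
        d = np.linalg.norm(vert_uv[idx] - np.array([u, v]), axis=1)
        matches.append(idx[np.argmin(d)])                          # nearest vertex in UV space
    return matches
```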
\begin{figure*}[tb]
\centering
\includegraphics[width=0.9\linewidth]{images/annotations.jpg}
\caption{Dense image-surface correspondences between the template body surface and image points are found from DensePose annotations by searching for the nearest points in the UV space of each body part. }
\label{fig:dennse_annotation}
\end{figure*}
\noindent {\bf Human3.6M } The Human3.6M dataset \cite{h36m_pami} is a large-scale dataset for 3D human pose estimation. It contains 3.6 million images of 15 everyday activities, such as walking, sitting and making a phone call, performed by 7 professional actors and captured from four different views. 3D joint locations captured by MoCap systems are also available in the dataset, as are their 2D projections into the images. To obtain dense annotations for this dataset, we use MoSh \cite{Loper:SIGASIA:2014} to obtain SMPL body and pose parameters from the raw 3D MoCap markers and then project mesh vertices onto the images to get dense correspondences between images and a template mesh. Some of the results are not well aligned with the markers and camera coordinates; after discarding these, the training dataset contains around $17k$ images with dense correspondence annotations.
\noindent {\bf MPII 2D human pose } The images from the MPII 2D human pose dataset \cite{andriluka14cvpr} are used for testing only and were not used in training. However, the 2D keypoint labels in this dataset were used to train the cGANs.
\subsection{Protocol and metric}
We follow the same evaluation protocol as previous approaches \cite{PavlakosZDD16,zhou2017weakly} for evaluation on the Human3.6M dataset, where we use 5 subjects (S1, S5, S6, S7, S8) for training and the remaining 2 subjects (S9, S11) for testing. The error metric for evaluating 3D joint positions is the mean per-joint position error (MPJPE) in $mm$. Following \cite{zhou2017weakly}, the output joint positions from the ConvNets are scaled so that the sum of all 3D bone lengths is equal to that of a canonical average skeleton.
We also evaluate the fit of the body model to images based on the mean per-pixel error and the mean per-vertex error, which measure distances from the ground truth to the predicted vertices in 2D image space and in 3D space, respectively. Prior to calculating the per-vertex error, we obtain a similarity transformation by Procrustes analysis and align the predicted vertices to the ground truth; this is similar to the reconstruction error in 3D joint estimation.
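The alignment step amounts to a similarity Procrustes fit followed by an error computation; a minimal NumPy sketch with hypothetical names:

```python
import numpy as np

def procrustes_align(X, Y):
    """Best similarity transform (scale, rotation, translation) mapping X onto Y.

    X, Y: (N, 3) arrays of corresponding points (predicted and ground truth).
    """
    muX, muY = X.mean(0), Y.mean(0)
    Xc, Yc = X - muX, Y - muY
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ D @ U.T                                  # optimal rotation
    s = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()      # optimal scale
    t = muY - s * (R @ muX)                             # optimal translation
    return s * X @ R.T + t

def mpjpe(pred, gt):
    """Mean per-joint (or per-vertex) position error, in the input units."""
    return np.linalg.norm(pred - gt, axis=-1).mean()
```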
\subsection{Qualitative results}
In Figs. \ref{fig:Qualitative}, \ref{fig:refinment} and \ref{fig:refinment2}, we show our results on body shape and pose estimation before and after refinement. As the figures show, our technique can predict 3D body shape and pose from in-the-wild images. Before refinement, the predicted poses are close to the images, but there are still misalignments, especially at the hands and feet. After refinement, the mesh is attracted toward image points based on the dense correspondence predictions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{images/qualitative_results.jpg}
\caption{Qualitative results before refinement. Our technique is able to recover body shape and pose even from in-the-wild images. }
\label{fig:Qualitative}
\end{figure*}
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{images/comp_hmr.jpg}
\caption{Comparisons to HMR \cite{hmrKanazawa17}. }
\label{fig:comp_hmr}
\end{figure}
\subsection{Comparison with state-of-the-art}
\noindent {\bf 3D joint position and rotation } We compared our method with state-of-the-art techniques (Table \ref{tab:comparison2}). Here we only consider unsupervised or weakly-supervised techniques that do not use full 3D supervision. Kudo et al. \cite{kudo2018} use conditional GANs to predict depths from 2D joint coordinates, which is the only technique apart from ours that learns a model from 2D information only. Rhodin et al. \cite{rhodin2018unsupervised} use an auto-encoder to compress visual features and reconstruct 3D pose from them, which does not require a large amount of 3D human poses. Our technique outperforms both in terms of MPJPE and is able to predict not only joints but also the orientations of limb segments, as well as body shape represented in the form of segment scales. Note that our 3D human pose cGAN even outperforms \cite{kudo2018} by incorporating geometric constraints. Our method is on par with HMR (unpaired), which uses 3D pose and body shape datasets for training GANs to provide 3D constraints in an unsupervised manner. In contrast, we need dense annotations on 2D images but do not use any 3D annotations such as body shape and pose. Note that HMR (paired) additionally provides images paired with 3D poses for supervised learning, which makes their method slightly better than ours, but this requires an experimental setup with a MoCap system and synchronized video cameras to construct the dataset.
\begin{table*}[hbt]
\begin{center}
\caption{Comparisons with state of the art. MPJPE [mm] is used for error metric. }
\label{tab:comparison2}
\begin{tabular}{c c c c c c c}
\hline
Kudo et al. \cite{kudo2018} & Rhodin et al. \cite{rhodin2018unsupervised} & HMR \cite{hmrKanazawa17} & HMR (paired) & Ours & Our cGANs & Ours (refine) \\
\hline
173.2 & 131.7 & 106.84 & 87.97 & 106.25 & 139.9 & 108.46 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\noindent {\bf Per-pixel and per-vertex error } In order to evaluate the alignment of the body model to images, we measured the mean per-vertex error and the mean per-pixel error and compared with HMR (paired), as shown in Table \ref{tab:pixelerr}. HMR (paired) obtained better results than ours on the Human3.6M dataset in both vertex and pixel alignment, as it uses a large amount of 3D pose data paired with images, whereas ours uses only 2D annotations. For this dataset, our refinement was not very effective, probably because there are no large variations in subjects, actions or background. For the MS COCO dataset, our refinement was effective, because this dataset is challenging for deep ConvNets due to large variations in background, body shape/pose and clothing. Thanks to the refinement step, we achieve better fits in terms of the mean per-pixel error than HMR. Figure \ref{fig:comp_hmr} shows comparisons between our method and HMR (paired). Our method with refinement produces better alignment than HMR, especially around the feet and hands. Our method captures more natural appearances of body shape and pose, as the priors and constraints used do not come from a 3D dataset limited to some age range or captured in a controlled environment.
\begin{table*}[hbt]
\begin{center}
\caption{Comparisons to HMR (paired) in terms of per-pixel error and per-vertex error}
\label{tab:pixelerr}
\begin{tabular}{c c c c }
\hline
&HMR (paired) & Ours & Ours (refine) \\
\hline
COCO per-pixel err. [pixel] & 13.9 & 18.6 & {\bf 12.02} \\
H3.6M per-pixel err. [pixel] & {\bf 7.3} & 9.9 & 9.2\\
H3.6M per-vertex recon. err. [mm] & {\bf 75.0} & 102.7 & 97.2 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.9\linewidth]{images/regist.jpg}
\caption{Ablation studies on image-surface registration. Without $\cal{L}_{\rm dense}$, the alignment to the image is poor, and even the pose fit is unsatisfactory. The losses $\cal{L}_{\rm geo}$ and $\cal{L}_{\rm det}$ play an important role in mitigating distortions. Using the cGANs to obtain depths at joints and provide soft constraints through $\cal{L}_{\rm KP}$, we can keep the depths at the joints statistically valid, which can, for example, prevent the body model from inclining or reclining. }
\label{fig:registration_results}
\end{figure*}
\begin{table}[hbt]
\begin{center}
\caption{Comparison of MPJPE [mm] between training datasets. Note that the first-iteration results are compared. }
\label{tab:dataset}
\begin{tabular}{c c c}
\hline
MS COCO & Human3.6M & Both \\
\hline
181.7 & 147.4 & 137.5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Ablation studies}
\noindent {\bf Loss } We compared the registration results obtained with different losses (Fig. \ref{fig:registration_results}). Without $\cal{L}_{\rm dense}$, the alignment between the body model and the image is poor, and even the pose fit is unsatisfactory. The losses $\cal{L}_{\rm geo}$ and $\cal{L}_{\rm det}$ play an important role in mitigating distortions. With our 3D human pose cGANs, the depths of a skeleton can be constrained by making the resulting distribution close to that of the data, which can, for example, prevent the body from inclining or reclining.
\noindent {\bf Dataset } We also compared the results of the ConvNets trained on different datasets, i.e., dense COCO only, Human3.6M only, and both. The MPJPE results after the 1st iteration are shown in Table \ref{tab:dataset}. Combining MS COCO, which contains in-the-wild images, with the Human3.6M dataset, which provides domain knowledge, yields better results than using a single dataset.
\begin{comment}
\begin{figure}[t]
\subfigure[Ablation studies.]{
\centering
\includegraphics[width=1\linewidth]{images/ablation.jpg}
\label{fig:ablation}
}
\centering
\includegraphics[width=1\linewidth]{images/ablation.jpg}
\caption{Ablation studies. }
\label{fig:ablation}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=70mm]{images/ablation.jpg}
\end{center}
\caption{Ablation studies. }
\label{fig:one}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=70mm]{images/MPJPE_graph.jpg}
\end{center}
\caption{History of MPJPE.}
\label{fig:two}
\end{minipage}
\end{figure}
\end{comment}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=80mm]{images/MPJPE_Graph.jpg}
\end{center}
\caption{History of MPJPE with respect to the number of iterations. Blue: MPJPE of ConvNet predictions on test images; Orange: MPJPE of ConvNet predictions on training images; Gray: MPJPE of registration results on the training dataset. From the first to the fourth iteration, the network is trained using both MS COCO and Human3.6M images. The fifth iteration fine-tunes the network on the Human3.6M dataset only. }
\label{fig:history}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width = 0.85\linewidth]{images/iterative_learning.jpg}
\end{center}
\caption{Results after each iteration. As the number of training iterations increases, the body model fits to images better, which visualizes the effectiveness of iterative training. }
\label{fig:vis_iterative}
\end{figure*}
\subsection{Is the iterative training strategy effective?}
To show the effectiveness of our iterative training strategy, we plot the history of MPJPE errors in Fig. \ref{fig:history}. Here, MPJPE values after deformable registration are computed on the training dataset. Our deform-and-learn strategy starts from image-surface registration using the T-pose as the initial pose. After the first registration phase, the train-set MPJPE of the registration results is approx. 100 mm. Then, the ConvNets are trained using these registration results as supervision. After 1 iteration, the test-set MPJPE of the ConvNet predictions is 140 mm, which is somewhat high. Next, deformable surface registration is performed again using the ConvNet results as its initialization. These two steps are iterated several times. This strategy proved effective in gradually decreasing the error, which is visually noticeable in Fig. \ref{fig:vis_iterative}. In fact, MPJPE decreased by approximately 30 mm, from around 140 mm to 106 mm.
We also compared different training strategies in Table \ref{tab:strategy}. The single-step learning strategy, which incorporates all the losses from Eq. (\ref{eq:loss}) for training ConvNets (including the training of a discriminator), poses a difficult optimization problem and the error stayed high. By omitting the discriminator and the joint angle regularization loss ${\cal L}_{\rm joint}$, we were able to train the model, but the MPJPE was not satisfactory. Also, instead of iterating registration and learning, we tried performing one iteration of deform-and-learn and training longer (200 epochs). This improved MPJPE slightly, but not as much as five iterations of deform-and-learn. Note also that a longer deformable registration (600 iterations) in a single step only improves MPJPE by 10 mm (the train-set MPJPE from 100 mm to 90 mm), which is inferior to our deform-and-learn strategy, which achieves a train-set MPJPE of 60 mm after registration.
\begin{table*}[hbt]
\begin{center}
\caption{Comparisons between training strategies. MPJPE [mm] is used for error metric. }
\label{tab:strategy}
\begin{tabular}{c c c c}
\hline
Single step (Eq. (\ref{eq:loss})) & Single step (w/o $\cal{L}_{\rm adv}$ \& $\cal{L}_{\rm joint}$) & def-learn (1 iter 200 epoch) & def-learn (5 iter) \\
\hline
n/a & 148.1 & 134.8 & 106.25 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Inference time}
We measured the time required for the inference phase, which can be divided into five major steps: 3D body parameter prediction, skeleton-based deformation, 2D keypoint prediction (optional), DensePose prediction (optional) and refinement (optional). The 3D body parameter prediction step itself takes only approx. 0.035 sec. The skeleton deformation step also takes approx. 0.035 sec, which means that inference can be performed in approximately 0.07 sec given a cropped image of a human. The other steps required for refinement take 0.03 sec and 0.08 sec for 2D joint prediction and DensePose prediction, respectively. The refinement step takes 5-6 seconds for 50 iterations. Adding up all the steps, our technique including refinement takes around 5 seconds to process one image. Compared to SMPLify \cite{bogo2016keep} and its variants \cite{Lassner:UP:2017}, which take over 30 sec, our technique is faster, as we start from a better initial pose and shape.
\subsection{Failure cases and limitations}
As our refinement step relies on DensePose predictions, an erroneous prediction makes the final result worse; for example, DensePose occasionally confuses left and right for hands and feet, which results in distortions. While we represent body shape by segment scales, it is difficult to estimate a child's shape (Fig. \ref{fig:refinment}), as the body proportions are very different from the template mesh. A mechanism to select template meshes for different body styles would be useful in these cases. As with most other approaches, our method cannot recover the absolute scale of body shape. Our network is currently designed to estimate body shape and pose for a single person, and we would like to extend it to multi-person settings.
In a recent paper~\cite{Hetyei-EKP} the present author introduced a
class of progressively finite games played on ranked posets,
where each move of the winning strategy is unique and the positions
satisfy the following uniformity criterion: each
position of a given rank may be reached from the same number of
positions of a given higher rank in a single move. As a consequence, the
kernel positions
of a given rank may be counted by subtracting from the number of all
positions the appropriate multiples of the kernel positions of lower
ranks. The main example in~\cite{Hetyei-EKP} is the {\em original
Bernoulli game}, a truncation game played on pairs of words of the
same length, for which the number of kernel positions of rank $n$ is
a signed factorial multiple of the Bernoulli number of the
second kind $b_n$. Similarly to this game, most examples mentioned
in~\cite{Hetyei-EKP} are also truncation games played on words, where
the partial order is defined by taking initial segments and the rank is
determined by the length of the words involved.
In this paper we consider a class of {\em strongly Bernoulli type
truncation games} played on words, for which
we do not require the uniformity condition on the rank
to be satisfied. We show that for such games, the winning
strategy may be found by decomposing each kernel position as a
concatenation of {\em elementary kernel factors}. This decomposition is
unique. All truncation games considered in~\cite{Hetyei-EKP} (including
the ones played on pairs or triplets of words) are isomorphic to a
strongly Bernoulli type truncation game. For most of these examples,
the elementary kernel factors of a given type are also easy to
enumerate. Thus we may obtain explicit summation formulas and
non-alternating recurrences for numbers which were expressed
in~\cite{Hetyei-EKP} as coefficients in a generating function or by
alternating recurrences. The explicit
summation formulas are obtained by considering the entire unique
decomposition of each kernel position, the non-alternating recurrence
is obtained by considering the removal of the last elementary kernel
factor only. Thus we find some new identities for the Bernoulli
polynomials and numbers of the second kind, and shed new light on
King's~\cite{King} decomposition of ``indecomposable'' permutations.
The paper is structured as follows. After the Preliminaries, the main
unique decomposition theorem is stated in Section~\ref{s_Btg}.
In the subsequent sections we consider games to which this result is
applicable: we show they are isomorphic to strongly Bernoulli type
truncation games, we find formulas expressing their elementary kernel
factors of a given type, and use these formulas to express the number of
kernel positions as an explicit sum and by a non-alternating
recurrence. Most detail is given for the original Bernoulli game in
Section~\ref{s_ob2}, omitted details in other sections are replaced by
references to the appropriate part of this section. As a consequence of our
analysis of the original Bernoulli game, we obtain an explicit summation
formula of the Bernoulli numbers of the second kind, expressing them as a sum
of entries of the same sign. We also obtain a non-alternating recurrence
for their absolute values.
In Section~\ref{s_MR} we consider a
restriction of the original Bernoulli game to a set of positions, where
the kernel positions are identifiable with the {\em connected} or {\em
indecomposable} permutations forming an algebra basis of the
Malvenuto-Reutenauer Hopf algebra~\cite{Malvenuto-Reutenauer}. For these
the recurrence obtained by the removal of the last elementary kernel
factor is numerically identical to the recurrence that may be found in
King's~\cite{King} recursive construction of a transposition Gray code
for the connected permutations. We show that this is not a coincidence:
there is a bijection on the set of permutations, modulo
which King's recursive step corresponds to the removal of the last
elementary kernel factor in the associated {\em place-based
non-inversion tables} (a variant of the usual inversion tables). Our
result inspires another systematic algorithm to list all connected
permutations of a given order, and a new combinatorial model for the
numbers of connected permutations of order $n$, in which this number
arises as the total weight of all permutations
of order $n-2$, such that the highest weight is associated to the
permutations having the most {\em strong fixed points} (being thus
the ``least connected'').
Section~\ref{s_pb2} contains the consequences of our main result to
Bernoulli polynomials of the second kind. Here we observe that we obtain
the coefficients of these polynomials when we expand them in the basis
$\{\binom{x+1}{n}\::\: n\geq 0\}$, and obtain a new formula for the
Bernoulli numbers of the second kind.
Finally, in Section~\ref{s_fB} we
consider the {\em flat Bernoulli game}, whose kernel positions have the
generating function $t/((1-t)(1-\ln(1-t)))$ and conclude the section with
an intriguing conjecture that for a long random initial word a novice player
could not decrease the chance of winning below $50\%$ by simply removing
the last letter in the first move.
\section{Preliminaries}
\subsection{Progressively finite games}
A progressively finite two-player game is a game whose positions may be
represented by the vertices of a directed graph that contains no directed
cycle or infinite path; the edges represent valid moves. Thus
the game always ends after a finite number of moves.
The players take alternate turns to move along a directed edge to
a next position, until one of them reaches a {\em winning position} with
no edge going out: the player who moves into this position is declared the
winner, as the next player is unable to move.
The winning strategy for a progressively finite game may be found by
calculating the {\em Grundy number} (or Sprague-Grundy number) of each
position; the method is well known, and a sample
reference is~\cite[Chapter 11]{Tucker}. The positions with Grundy number
zero are called {\em kernel positions}. A player has a winning strategy
exactly when he or she is allowed to start from a non-kernel position.
All games considered in this paper are progressively finite.
\subsection{The original Bernoulli game and its generalizations}
\label{s_b2}
In~\cite{Hetyei-EKP} the present author introduced the {\em original
Bernoulli game} as the following progressively finite two-player game.
The positions of rank $n>0$ in the game are all pairs of words
$(u_1\cdots u_n,v_1\cdots v_n)$ such that
\begin{itemize}
\item[(i)] the letters $u_1,\ldots,u_n$ and $v_1,\ldots,v_n$ are
positive integers;
\item[(ii)] for each $i\geq 1$ we have $1\leq u_i, v_i\leq i$.
\end{itemize}
A valid move consists of replacing the pair $(u_1\cdots
u_n,v_1\cdots v_n)$ with $(u_1\cdots u_m,v_1\cdots v_m)$ for
some $m\geq 1$ satisfying $u_{m+1}\leq v_j$ for $j=m+1,\ldots, n$.
The name of the game refers to the following fact~\cite[Theorem
2.2]{Hetyei-EKP}.
\begin{theorem}
\label{T_b2}
For $n\geq 1$, the number $\kappa_n$ of kernel positions of rank $n$ in the
original Bernoulli game is given by
$$
\kappa_n=(-1)^{n-1} (n+1)! b_n,
$$
where $b_n$ is the $n$-th Bernoulli number of the second kind.
\end{theorem}
Here the Bernoulli number of the second kind $b_n$ is obtained by
substituting zero into the Bernoulli polynomial of the second kind
$b_n(x)$, given by the generating function
\begin{equation}
\label{E_b2}
\sum_{n=0}^{\infty} \frac{b_n(x)}{n!} t^n=\frac{t(1+t)^x}{\ln(1+t)},
\end{equation}
see Roman~\cite[p.\ 116]{Roman}. Note that (cf.~\cite[p.\ 114]{Roman})
Jordan's~\cite[p.\ 279]{Jordan} earlier definition of the Bernoulli
polynomial of the second kind $\phi_n(x)$ is obtained by dividing $b_n(x)$
by $n!$.
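As a quick computational check (our own addition, not part of the formal development; all names below are ad hoc), the numbers $b_n$ may be obtained with exact rational arithmetic by inverting the power series of $\ln(1+t)/t$, since the generating function above at $x=0$ is $t/\ln(1+t)$:

```python
from fractions import Fraction
from math import factorial

def bernoulli_second_kind(n_max):
    """Return [b_0, ..., b_{n_max}] using sum_n b_n/n! t^n = t/ln(1+t)."""
    # Power series of ln(1+t)/t: coefficient of t^k is (-1)^k/(k+1).
    f = [Fraction((-1) ** k, k + 1) for k in range(n_max + 1)]
    # Multiplicative inverse of the series f (note f[0] = 1).
    g = [Fraction(1)]
    for n in range(1, n_max + 1):
        g.append(-sum(f[k] * g[n - k] for k in range(1, n + 1)))
    return [factorial(n) * g[n] for n in range(n_max + 1)]

# bernoulli_second_kind(4) -> [1, 1/2, -1/6, 1/4, -19/30]
```

The first few values $b_1=1/2$, $b_2=-1/6$, $b_3=1/4$, $b_4=-19/30$ agree with Jordan's tables after division by $n!$.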
The proof of Theorem~\ref{T_b2} depends on a few simple observations
which were generalized in~\cite{Hetyei-EKP} to a class of {\em Bernoulli
type games on posets} (see \cite[Definition 3.1]{Hetyei-EKP}).
The set of positions $P$ in these games is a partially ordered
set with a unique minimum element $\widehat{0}$ and a rank
function $\rho: P\rightarrow {\mathbb N}$ such that for each $n\geq 0$
the set $P_n$ of positions of rank $n$ has finitely many elements.
The valid moves satisfy the following criteria:
\begin{itemize}
\item[(i)] Each valid move is from a position of higher rank to a
position of lower rank. The set of positions reachable from a single
position is a chain.
\item[(ii)] If $y_1$ and $y_2$ are both reachable from $x$ in a single
move and $y_1<y_2$ then $y_1$ is reachable from $y_2$ in a single move.
\item[(iii)] For all $m<n$ there is a number $\gamma_{m,n}$ such that
each $y$ of rank $m$ may be reached from exactly $\gamma_{m,n}$
elements of rank $n$ in a single move.
\end{itemize}
For such games it was shown in~\cite[Proposition
3.3]{Hetyei-EKP} that the numbers $\kappa_n$
of kernel positions of rank $n$ satisfy the recursion formula
\begin{equation}
\label{E_gkrec}
|P_n|=\kappa_n+\sum_{m=0}^{n-1} \kappa_{m}\cdot \gamma_{m,n}.
\end{equation}
\section{Winning a strongly Bernoulli type truncation game}
\label{s_Btg}
Let $\Lambda$ be an alphabet and let us denote by $\Lambda^*$
the free monoid generated by $\Lambda$, i.e., set
$$\Lambda^*:=\{v_1\cdots v_n \::\: n\geq 0, \forall i (v_i\in
\Lambda)\}.$$
Note that $\Lambda^*$ contains the empty word $\varepsilon$.
\begin{definition}
Given a subset $M\subseteq \Lambda^*\setminus \{\varepsilon\}$,
we define the {\em truncation game induced by $M$} as the game whose
positions are the elements of $\Lambda^*$, and whose valid moves consist of
all truncations $v_1\cdots v_n\rightarrow v_1\cdots v_i$ such that
$v_{i+1}\cdots v_n\in M$.
\end{definition}
Note that $\varepsilon\not\in M$ guarantees that the truncation game
induced by $M$ is progressively finite; we may define the {\em rank} of
each position as the length of the word. This rank decreases after each
valid move.
\begin{definition}
Given $M\subset \Lambda^*\setminus \{\varepsilon\}$, and $P\subseteq
\Lambda^*$, we say that $P$ is {\em $M$-closed} if for all
$v_1\cdots v_n\in\Lambda^*\setminus\{\varepsilon\}$, $v_1\cdots v_n\in
P$ and $v_{i+1}\cdots v_n\in M$ imply $v_1\cdots v_i\in P$. For an
$M$-closed $P$, the {\em restriction of the truncation
game induced by $M$ to $P$} is the game whose positions are the
elements of $P$ and whose valid moves consist of
all truncations $v_1\cdots v_n\rightarrow v_1\cdots v_i$ such that
$v_{i+1}\cdots v_n\in M$ and $v_1\cdots v_n\in P$.
We denote this game by $(P,M)$, and call it the {\em truncation game
induced by $M$ on $P$}.
\end{definition}
Clearly the definition of being $M$-closed is equivalent to saying that
the set $P$ is closed under making valid moves.
\begin{definition}
\label{D_Btm}
We say that $M\subset \Lambda^*\setminus\{\varepsilon\}$ {\em induces a
Bernoulli type truncation game} if for all pairs of words $\underline{u},
\underline{v}\in \Lambda^*\setminus\{\varepsilon\}$,
$\underline{u}\underline{v}\in M$ and $\underline{v}\in M$ imply
$\underline{u}\in M$.
If $M$ is also closed under taking nonempty initial segments, i.e.,
$v_1\cdots v_n\in M$ implies $v_1\cdots v_m\in M$ for all
$m\in\{1,\ldots,n\}$ then we say that $M$ induces a {\em strongly
Bernoulli type truncation game}. If $M$ induces a
(strongly) Bernoulli type truncation game, we call also $(P,M)$ a
(strongly) Bernoulli type truncation game for each $M$-closed
$P\subseteq \Lambda^*$.
\end{definition}
Every strongly Bernoulli type truncation game is also a Bernoulli type
truncation game. The converse is not true: consider for example the set
$M$ of all words of positive even length. It is easy to see that the
truncation game induced by $M$ is Bernoulli type, but it is not
strongly Bernoulli type since $M$ is not closed under taking initial segments
of odd length.
\begin{remark}
The definition of a Bernoulli type truncation game is {\em almost} a
special case of the Bernoulli type games on posets defined in~\cite[Definition
3.1]{Hetyei-EKP}. Each $M$-closed $P\subseteq \Lambda^*$ is partially
ordered by the relation $v_1\cdots v_m<v_1\cdots v_n$ for all $m<n$,
the unique minimum element of this poset is $\varepsilon$, and
the length function is a rank function for this partial order.
For this poset and rank function, the set of valid
moves satisfies conditions (i) and (ii) listed in Subsection~\ref{s_b2}.
Only the ``uniformity'' condition (iii) and the finiteness of $|P_n|$ do
not need to be satisfied. These conditions were used
in~\cite{Hetyei-EKP} to prove equation (\ref{E_gkrec}) and count the
kernel positions of rank $n$ ``externally''.
In this section we will show that the kernel positions
of a strongly Bernoulli type truncation game on words may be described
``internally'' in a manner that will allow their enumeration when each
$|P_n|$ is finite. The question whether the results presented in this
section may be generalized to all Bernoulli type truncation games remains
open. All examples of Bernoulli games played on words
in~\cite{Hetyei-EKP} are isomorphic to strongly Bernoulli type
truncation games; we will prove this for most of them in this paper,
leaving the remaining examples to the reader. Together with the results
in~\cite{Hetyei-EKP}, we thus obtain two independent ways to count the
same kernel positions in these games. Comparing the results in~\cite{Hetyei-EKP}
with the results in the present paper yields explicit formulas
for the coefficients in the Taylor expansion of certain functions.
\end{remark}
In the rest of the section we set $P=\Lambda^*$
and just find the winning strategy for the truncation game
induced by $M$. Only the formulas counting the kernel positions will change
when we change the set $P$ in the subsequent sections; the
decomposition of the kernel positions will not.
First we define some {\em elementary kernel
positions} in which the second player may win after at most one move
by the first player.
\begin{definition}
\label{D_ekp}
The word $v_1\cdots v_n \in \Lambda^*\setminus\{\varepsilon\}$ is {\em
an elementary kernel position} if it satisfies $v_1\cdots v_n\not \in
M$, but for all $m<n$ we have $v_1\cdots v_m\in M$.
\end{definition}
In particular, for $n=1$, $v_1$ is an elementary kernel position
if and only if $v_1\not\in M$. Our terminology is justified by the
following remark and lemma.
\begin{remark}
\label{R_ekp1}
A position $v_1$ is a winning position if and only if it is an
elementary kernel position. Otherwise it is not a kernel position at
all.
\end{remark}
\begin{lemma}
\label{L_ekp2}
For $n>1$, starting from an elementary kernel position $v_1\cdots v_n$,
the first player is either unable to move, or is able to move only to a
position where the second player may win in a single move.
\end{lemma}
\begin{proof}
There is nothing to prove if the first player is unable to
move. Otherwise, by $v_1\cdots v_n\not\in M$, the first player is unable
to move to the empty word. Thus, after his or her move, we
arrive in a $v_1\cdots v_m$ where $1\leq m\leq n-1$.
Thus $v_1\cdots v_m\in M$ holds, the second player may now move to
the empty word right away.
\end{proof}
Next we show that the set of kernel positions in a strongly Bernoulli type
truncation game on $\Lambda^*$ is closed under the {\em concatenation}
operation.
\begin{proposition}
\label{P_conc}
Let $\underline{u}:=u_1\cdots u_m$ be a kernel position of length $m\geq 1$ in a strongly Bernoulli type truncation game induced by $M$. Then an arbitrary position
$\underline{v}:=v_1\cdots v_n$ of length $n\geq 1$ is a kernel position
if and only if the concatenation $\underline{u}\underline{v}$ is also a
kernel position.
\end{proposition}
\begin{proof}
Assume first that $\underline{u}\underline{v}$ is a kernel
position. We instruct the second player to play the winning strategy
that exists for $\underline{v}$ as long as the
length of the word truncated from $\underline{u}\underline{v}$
at the beginning of his or her move is greater than $m$.
For words longer than $m$, the validity of a move is
determined without regard to the letters in the first $m$ positions. By
playing the winning strategy for $\underline{v}$ as
long as possible, the second player is able to force the first player
into a position where the first player is either unable to move, or will
be the first to move to a word of length less than $m$, say
$u_1\cdots u_k$. The validity of this move implies $u_{k+1}\cdots u_m
v_1\cdots v_i\in M$ for some $i\geq 0$. By the strong Bernoulli
property we obtain $u_{k+1}\cdots u_m\in M$ and moving from
$u_1\cdots u_m$ to $u_1\cdots u_k$ is also a valid
move. We may thus pretend that the first player just made the first move
from $u_1\cdots u_m$ and the second player may win by
following the winning strategy that exists for $\underline{u}$.
For the converse, assume that $\underline{v}$ is
not a kernel position. In this case we may instruct the first player to
play the strategy associated to $\underline{v}$ as long as possible, forcing
the second player into a position where he or she is either unable to
move, or ends up making a move equivalent to a first move starting from
$\underline{u}$. Now the original first player becomes
the second player in this subsequent game, and is able to
win. Therefore, in this case the concatenation
$\underline{u}\underline{v}$ is not a kernel position either.
\end{proof}
Using all results in this section we obtain the following structure
theorem.
\begin{theorem}
\label{T_sBt}
A word $\underline{v}\in \Lambda^*\setminus\{\varepsilon\}$ is a kernel
position in a strongly Bernoulli type truncation game, if and only if it may
be obtained by the concatenation of one or several elementary kernel
positions. Such a decomposition, if it exists, is unique.
\end{theorem}
\begin{proof}
The elementary kernel positions are kernel positions by
Remark~\ref{R_ekp1} and Lemma~\ref{L_ekp2}.
Repeated use of Proposition~\ref{P_conc}
yields that a pair of words obtained by concatenating several elementary
kernel positions is also a kernel position.
For the converse assume that $\underline{v}:=v_1\cdots v_n$ is a kernel
position. We prove by induction on $n$ that this
position is either an elementary kernel position or may be obtained by
concatenating several elementary kernel positions.
Let $m$ be the least index for which $v_1\cdots v_m\not\in M$ holds; such an
$m$ exists, since otherwise the first player is able to move to $\varepsilon$
and win in the first move. It follows from the definition that
$v_1\cdots v_m$ is an elementary kernel position. If $m=n$ then we
are done, otherwise applying Proposition~\ref{P_conc} to
$v_1\cdots v_n=(v_1\cdots v_m) \cdot (v_{m+1}\cdots v_n)$
yields that $v_{m+1}\cdots v_n$ must be a kernel
position. We may apply the induction hypothesis to $v_{m+1}\cdots v_n$.
The uniqueness of the decomposition may also be shown by induction on
$n$. Assume that $v_1\cdots v_n$ is a kernel position
and thus arises as a concatenation of one or several elementary kernel
positions. Let $v_1\cdots v_m$ be the leftmost factor in
this concatenation. By Definition~\ref{D_ekp}, $m$ is the least index
such that $v_1\cdots v_m\not\in M$ is satisfied. This determines the
leftmost factor uniquely. Now we may apply our induction hypothesis to
$v_{m+1}\cdots v_n$.
\end{proof}
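The uniqueness argument above suggests a greedy procedure for testing kernel positions: repeatedly split off the shortest nonempty prefix that does not lie in $M$. The following Python sketch (our own illustration, not part of the paper) does this for the strongly Bernoulli type set $M$ consisting of all nonempty words over the positive integers whose letters are all at least $2$; one easily checks that this $M$ is closed under nonempty initial segments and satisfies the cancellation condition of Definition~\ref{D_Btm}.

```python
def kernel_factorization(word, in_M):
    """Factor word into elementary kernel factors (Theorem T_sBt),
    or return None if the word is not a kernel position."""
    factors, rest = [], list(word)
    while rest:
        # least m such that rest[:m] is not in M (Definition D_ekp)
        m = next((m for m in range(1, len(rest) + 1)
                  if not in_M(rest[:m])), None)
        if m is None:
            # every nonempty prefix lies in M, so the first player can
            # truncate to the empty word: not a kernel position
            return None
        factors.append(rest[:m])
        rest = rest[m:]
    return factors

# Illustrative strongly Bernoulli type set M: all nonempty words
# whose letters are all at least 2.
in_M = lambda w: len(w) > 0 and all(a >= 2 for a in w)
# kernel_factorization([3, 2, 1, 4, 1], in_M) -> [[3, 2, 1], [4, 1]]
# kernel_factorization([3, 2], in_M)          -> None
```

For this $M$ the elementary kernel positions are exactly the words whose last letter is $1$ and whose earlier letters are all at least $2$, so the kernel positions are the words ending in $1$.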
\section{The original Bernoulli game}
\label{s_ob2}
When we want to apply Theorem~\ref{T_sBt} to the original
Bernoulli game, we encounter two minor obstacles. The first obstacle is
that the rule defining a valid move from $(u_1\cdots u_n, v_1\cdots v_n)$ makes
an exception for the letters $u_1=v_1=1$, and does not allow their
removal. The second obstacle is that the game is defined on pairs of
words. Both problems may be easily remedied by changing the alphabet to
$\Lambda={\mathbb P}\times {\mathbb P}\times {\mathbb P}={\mathbb P}^3$ where
${\mathbb P}$ is the set of positive integers.
\begin{lemma}
\label{L_oBiso}
The original Bernoulli game is isomorphic to the
strongly Bernoulli type truncation game induced by
$$
M=\{(p_1,u_1,v_1)\cdots (p_n,u_n,v_n)\::\: p_1\neq 1, u_1\leq v_1,
\ldots, v_n\},
$$
on the set of positions
$$
P=\{(1,u_1,v_1)\cdots (n,u_n,v_n)\::\: 1\leq u_i, v_i\leq i\}\subset
({\mathbb P}^3)^*.
$$
The isomorphism is given by sending each pair of words
$(u_1\cdots u_n,v_1\cdots v_n)\in ({\mathbb P}^2)^*$
into the word $(1,u_1,v_1)(2,u_2,v_2)\cdots (n,u_n,v_n)\in ({\mathbb P}^3)^*$.
\end{lemma}
Theorem~\ref{T_sBt} provides a new way of counting the kernel positions
of rank $n$ in the game $(P,M)$ defined in Lemma~\ref{L_oBiso}.
Each kernel position $(1,u_1,v_1)\cdots (n,u_n,v_n)$ may be
uniquely written as a concatenation of elementary kernel positions.
Note that these elementary kernel positions do not need to belong to the
set of valid positions $P$. However, we are able to
independently describe and count all elementary kernel positions
that may appear in a concatenation factorization of a valid kernel position
$(1,u_1,v_1)\cdots (n,u_n,v_n)$ and contribute the segment $(i,u_i,v_i)\cdots
(j,u_j,v_j)$ to it.
We call such a word an {\em elementary kernel factor of type $(i,j)$}
and denote the number of such factors by $\kappa(i,j)$. Note that for
$i=1$ we must have $j=1$ and $(1,1,1)$ is the only elementary kernel
factor of type $(1,1)$. Thus we have $\kappa(1,1)=1$.
\begin{lemma}
\label{L_ekf}
For $2\leq i\leq j$, a word $(i,u_i,v_i)\cdots (j,u_j,v_j)\in ({\mathbb
P}^3)^*$ is an elementary kernel factor of type $(i,j)$ if and only if
it satisfies the following criteria:
\begin{itemize}
\item[(i)] for each $k\in \{i,i+1,\ldots,j\}$ we have $1\leq u_k, v_k\leq k$;
\item[(ii)] we have $u_i>v_j$;
\item[(iii)] for all $k\in \{i,i+1,\ldots,j-1\}$ we have $u_i\leq v_k$.
\end{itemize}
\end{lemma}
In fact, condition (i) states the requirement for a valid position for
the letters at the positions $i,\ldots, j$, whereas conditions (ii) and
(iii) reiterate the appropriately shifted variant of the definition of an
elementary kernel position. A word $(1,u_1, v_1)\cdots
(n,u_n,v_n)$ that arises by concatenating $(1,u_1,v_1)\cdots (i_1,u_{i_1},
v_{i_1})$, $(i_1+1,u_{i_1+1},v_{i_1+1})\cdots (i_2,u_{i_2},v_{i_2})$,
and so on, $(i_k+1,u_{i_k+1},v_{i_k+1})\cdots (n,u_n,v_n)$ belongs to $P$
if and only if each factor $(i_s+1,u_{i_s+1},v_{i_s+1})\cdots
(i_{s+1},u_{i_{s+1}},v_{i_{s+1}})$
(where $0\leq s\leq k$, $i_0=0$ and $i_{k+1}=n$) satisfies conditions
(i) and (ii) in Lemma~\ref{L_ekf} with $i=i_s+1$ and $j=i_{s+1}$. We
obtain the unique factorization as a concatenation of elementary kernel
positions if and only if each factor $(u_{i_s+1},v_{i_s+1})\cdots
(u_{i_{s+1}},v_{i_{s+1}})$ also satisfies condition
(iii) in Lemma~\ref{L_ekf} with $i=i_s+1$ and $j=i_{s+1}$. Using the
description given in Lemma~\ref{L_ekf} it is easy to calculate the
numbers $\kappa(i,j)$.
\begin{lemma}
\label{L_ekfc}
For $2\leq i\leq j$, the number of elementary kernel factors of type $(i,j)$ is
$$
\kappa(i,j)=(j-i)!^2\binom{j}{i}\binom{j}{i-2}.
$$
\end{lemma}
\begin{proof}
There is no other restriction on $u_{i+1},\ldots,u_{j}$ than the
inequality given in condition (i) of Lemma~\ref{L_ekf}. These numbers
may be chosen in $(i+1)(i+2)\cdots j=j!/i!$ ways. Let us denote
the value of $u_i$ by $u$; this must satisfy $1\leq u\leq i$. However,
$v_j<u_i$ may only be satisfied if $u$ is at least $2$. In that case
$v_j$ may be selected in $(u-1)$ ways, and each $v_k$ (where $i\leq
k\leq j-1$) may be selected in $(k+1-u)$ ways (since $u_i\leq v_k\leq
k$). Thus the values of $v_i,\ldots, v_j$ may be selected in
$(u-1)\cdot (i+1-u)(i+2-u)\cdots (j-u)=(u-1)\cdot (j-u)!/(i-u)!$
ways. We obtain the formula
$$
\kappa(i,j)=\sum_{u=2}^{i} (u-1)\cdot
\frac{j!(j-u)!}{i!(i-u)!}
=
(j-i)!^2\binom{j}{i}
\sum_{u=2}^{i} \binom{u-1}{u-2}\cdot \binom{j-u}{i-u}.
$$
Replacing the binomial coefficients with symbols
$$
\left(\binom{n}{k}\right):=\binom{n+k-1}{k},
$$
counting the $k$-element multisets on an
$n$-element set, we may rewrite the last sum
as
$$
\sum_{u=2}^{i} \left(\binom{2}{u-2}\right)\cdot
\left(\binom{j-i+1}{i-u}\right)=
\left(
\binom{j-i+3}{i-2}
\right).
$$
Thus we obtain
$$
\kappa(i,j)=(j-i)!^2\binom{j}{i}
\left(
\binom{j-i+3}{i-2}
\right),
$$
which is obviously equivalent to the stated equation.
\end{proof}
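The count in Lemma~\ref{L_ekfc} is easy to confirm for small parameters by brute force over the conditions of Lemma~\ref{L_ekf}; the following sketch (our own verification code) does so:

```python
from itertools import product
from math import comb, factorial

def kappa_brute(i, j):
    """Count elementary kernel factors of type (i, j), 2 <= i <= j,
    directly from conditions (i)-(iii) of Lemma L_ekf."""
    letter_ranges = [range(1, k + 1) for k in range(i, j + 1)]
    count = 0
    for us in product(*letter_ranges):       # u_i, ..., u_j
        for vs in product(*letter_ranges):   # v_i, ..., v_j
            if us[0] > vs[-1] and all(us[0] <= v for v in vs[:-1]):
                count += 1
    return count

def kappa_formula(i, j):
    """The closed form (j-i)!^2 C(j,i) C(j,i-2) of Lemma L_ekfc."""
    return factorial(j - i) ** 2 * comb(j, i) * comb(j, i - 2)

# e.g. kappa_brute(2, 4) == kappa_formula(2, 4) == 24
```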
Once we have selected the length of the elementary kernel factors in the
unique decomposition of a kernel position, we may select each kernel
factor of a given type independently. Thus we obtain the following
result.
\begin{theorem}
\label{T_b2k}
For $n\geq 2$, the number $\kappa_n$ of kernel positions of rank $n$ in
the original Bernoulli game is given by
$$
\kappa_n=\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}}{i_j-1}.
$$
\end{theorem}
\begin{proof}
Consider the isomorphic game $(P,M)$ given in Lemma~\ref{L_oBiso}.
Assuming that the elementary kernel factors cover the positions $1$
through $1$, $2=i_0+1$ through $i_1$, $i_1+1$ through $i_2$, and so on, $i_k+1$
through $i_{k+1}=n$, we obtain the formula
$$
\kappa_n=\kappa(1,1)\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n}
\prod_{j=0}^k \kappa(i_j+1,i_{j+1}),
$$
from which the statement follows by $\kappa(1,1)=1$ and Lemma~\ref{L_ekfc}.
\end{proof}
Comparing Theorem~\ref{T_b2k} with Theorem~\ref{T_b2} we obtain the
following formula for the Bernoulli numbers of the second kind.
\begin{corollary}
\label{C_b2}
For $n\geq 2$ the Bernoulli numbers of the second kind are
given by
\begin{equation}
\label{E_b2e}
b_n=(-1)^{n-1} \frac{1}{(n+1)!}
\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}}{i_j-1}.
\end{equation}
\end{corollary}
\begin{example}
For $n=4$, Equation (\ref{E_b2e}) yields
\begin{align*}
b_4=\frac{-1}{5!}&\left((3-1)!^2\binom{4}{2}\binom{4}{0}
+(1-1)!^2\binom{2}{2}\binom{2}{0}(3-2)!^2\binom{4}{3}\binom{4}{1}\right.\\
&+(2-1)!^2\binom{3}{2}\binom{3}{0}(3-3)!^2\binom{4}{4}\binom{4}{2}\\
&\left.+(1-1)!^2\binom{2}{2}\binom{2}{0}
(2-2)!^2\binom{3}{3}\binom{3}{1}
(3-3)!^2\binom{4}{4}\binom{4}{2}\right)=-\frac{19}{30}.
\end{align*}
Thus $b_4/4!=-19/720$, which agrees with the number tabulated by
Jordan~\cite[p.\ 266]{Jordan}.
\end{example}
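Formula (\ref{E_b2e}) can be checked for small $n$ against the tabulated values; the sketch below (our own code) enumerates the index sequences $1=i_0<i_1<\cdots<i_{k+1}=n$ directly with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial

def b2_from_kernel_sum(n):
    """b_n, n >= 2, evaluated via the sum over index sequences
    1 = i_0 < i_1 < ... < i_{k+1} = n in (E_b2e)."""
    total = 0
    for k in range(0, n - 1):                     # k = 0, ..., n-2
        for mid in combinations(range(2, n), k):  # i_1 < ... < i_k
            idx = (1,) + mid + (n,)
            term = 1
            for a, b in zip(idx, idx[1:]):
                term *= (factorial(b - a - 1) ** 2
                         * comb(b, a + 1) * comb(b, a - 1))
            total += term
    return Fraction((-1) ** (n - 1) * total, factorial(n + 1))

# b2_from_kernel_sum(4) -> Fraction(-19, 30), matching the example
```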
As $n$ increases, the number of terms in (\ref{E_b2e}) increases
exponentially. However, we are unaware of any other explicit formula
expressing the Bernoulli numbers of the second kind as a sum of terms of
the same sign.
Lemma~\ref{L_ekfc} may also be used to obtain a recursion formula for
the number of kernel positions of rank $n$ in the original Bernoulli
game.
\begin{proposition}
\label{P_b2krec}
For $n\geq 2$, the number $\kappa_n$ of kernel positions of rank $n$ in
the original Bernoulli game satisfies the recursion formula
$$
\kappa_n=\sum_{i=1}^{n-1} \kappa_i (n-i-1)!^2 \binom{n}{i+1}\binom{n}{i-1}.
$$
\end{proposition}
\begin{proof}
Consider again the isomorphic game $(P,M)$ given in Lemma~\ref{L_oBiso}.
Assume the last elementary kernel factor
is $(i+1,u_{i+1}, v_{i+1})\cdots (n,u_n,v_n)$ where $i\geq 1$. Removing it we obtain a kernel
position of rank $i$. Conversely, concatenating an elementary kernel
factor $(i+1,u_{i+1}, v_{i+1})\cdots (n,u_n,v_n)$ to a kernel position of
rank $i$ yields a kernel position of rank $n$. Thus we have
\begin{equation}
\kappa_n=\sum_{i=1}^{n-1} \kappa_i \cdot \kappa(i+1,n),
\end{equation}
and the statement follows by Lemma~\ref{L_ekfc}.
\end{proof}
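The recursion in Proposition~\ref{P_b2krec} is easily turned into a computation of the numbers $\kappa_n$; a short sketch (ours, with ad hoc names):

```python
from math import comb, factorial

def kernel_counts(n_max):
    """kappa_n, n = 1..n_max, for the original Bernoulli game,
    computed via the recursion of Proposition P_b2krec."""
    kappa = [0, 1]  # kappa_1 = 1
    for n in range(2, n_max + 1):
        kappa.append(sum(kappa[i] * factorial(n - i - 1) ** 2
                         * comb(n, i + 1) * comb(n, i - 1)
                         for i in range(1, n)))
    return kappa[1:]

# kernel_counts(5) -> [1, 1, 6, 76, 1620]; note 76 = 5! * 19/30
```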
Comparing Proposition~\ref{P_b2krec} with Theorem~\ref{T_b2} we obtain the
following recursion formula for absolute values of the Bernoulli
numbers of the second kind.
\begin{equation}
\label{E_b2rec}
|b_n|=\frac{1}{n+1}\sum_{i=1}^{n-1} |b_i| (n-i-1)!
\binom{n}{i-1}\quad\mbox{holds for $n\geq 2$}.
\end{equation}
Equivalently, Jordan's~\cite{Jordan} Bernoulli numbers of the second
kind $b_n/n!$ satisfy
\begin{equation}
\label{E_b2Jrec}
\left|\frac{b_n}{n!}\right|=\sum_{i=1}^{n-1} \left|\frac{b_i}{i!}\right|
\frac{i}{(n+1)(n-i+1)(n-i)} \quad\mbox{for $n\geq 2$}.
\end{equation}
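The recursion (\ref{E_b2rec}) can also be verified for small $n$ with exact arithmetic; in the sketch below (our own check) the values $|b_1|,\ldots,|b_4|$ are those appearing above, while $b_5=9/4$ is our computed value from the generating function $t/\ln(1+t)$:

```python
from fractions import Fraction as Fr
from math import comb, factorial

# |b_n| for n = 1..5; the first four values appear in the text above,
# and b_5 = 9/4 follows from the generating function t/ln(1+t).
B_ABS = {1: Fr(1, 2), 2: Fr(1, 6), 3: Fr(1, 4), 4: Fr(19, 30), 5: Fr(9, 4)}

def rhs_b2rec(n):
    """Right hand side of (E_b2rec): (1/(n+1)) sum |b_i|(n-i-1)! C(n,i-1)."""
    return sum(B_ABS[i] * factorial(n - i - 1) * comb(n, i - 1)
               for i in range(1, n)) / (n + 1)

# rhs_b2rec(n) == |b_n| for n = 2, ..., 5
```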
\begin{remark}
Since the sign of $b_n$ for $n\geq 1$ is $(-1)^{n-1}$, and
substituting $x=0$ in (\ref{E_b2}) gives
$$
\sum_{n\geq 0} \frac{b_n}{n!} t^n=\frac{t}{\ln(1-t)},
$$
it is easy to verify that (\ref{E_b2Jrec}) could also be derived from
the following equation, satisfied by the generating function of the
numbers $b_n$:
$$
\frac{d}{dt}\left(t\cdot \frac{t}{\ln(1-t)}\right)+1-t
=
\frac{d}{dt}\left(\frac{t}{\ln(1-t)}\right)\cdot ((1-t)\ln(1-t)+t).
$$
However, it seems hard to guess that this equation will yield a nice
recursion formula.
\end{remark}
\section{Decomposing the indecomposable permutations}
\label{s_MR}
\begin{definition}
The {\em instant Bernoulli game} is the restriction of the original
Bernoulli game to the set of positions $\{ (12\cdots n,v_1\cdots
v_n)\::\: n\geq 1\}$.
\end{definition}
\begin{lemma}
\label{L_MRsimple}
Equivalently, we may define the set of positions of the
instant Bernoulli game as the set of words $v_1\cdots v_n$
satisfying $n\geq 1$ and $1\leq v_i\leq i$ for all $i$. A valid move
consists of replacing $v_1\cdots v_n$ with $v_1\cdots v_m$ for some
$m\geq 1$ such that $m+1\leq v_{m+1}, v_{m+2},\ldots, v_n$ holds.
\end{lemma}
Lemma~\ref{L_MRsimple} offers the simplest possible way to visualize the
instant Bernoulli game, even if this is not a form in which
the applicability of Theorem~\ref{T_sBt} could be directly seen. For
that purpose we need to note that the isomorphism of games stated
in Lemma~\ref{L_oBiso} may be restricted to the set of positions of the
instant Bernoulli game, and we obtain the following representation.
\begin{lemma}
\label{L_iBiso}
The instant Bernoulli game is isomorphic to the
strongly Bernoulli type truncation game induced by
$$
M=\{(p_1,u_1,v_1)\cdots (p_n,u_n,v_n)\::\: p_1\neq 1, u_1\leq v_1,
\ldots, v_n\},
$$
on the set of positions
$$
P=\{(1,1,v_1)\cdots (n,n,v_n)\::\: 1\leq v_i\leq i\}\subset
({\mathbb P}^3)^*.
$$
\end{lemma}
Unless otherwise noted, we will use the simplified representation
stated in Lemma~\ref{L_MRsimple}.
The kernel
positions of the instant Bernoulli game are identifiable with the
primitive elements of the Malvenuto-Reutenauer Hopf algebra, as was
mentioned in the concluding remarks of~\cite{Hetyei-EKP}.
We call this game the instant Bernoulli game because this is a game
in which one of the players wins instantly: either there is no valid
move and the second player wins instantly, or the first player may
select the least $m\geq 1$ satisfying $m+1\leq v_{m+1}, v_{m+2},\ldots,
v_n$ and move to $v_1\cdots v_m$, thus winning instantly. The kernel
positions are identical to the winning positions in
this game. The recursion formula (\ref{E_gkrec}) may be rewritten as
$$
n!=\kappa_n+\sum_{m=1}^{n-1} \kappa_m (n-m)!,
$$
(we start the summation with $\kappa_1$ since the first letter cannot
be removed), and the generating function of the numbers $\kappa_n$ is
easily seen to be
\begin{equation}
\label{E_MRgf}
\sum_{n=1}^{\infty} \kappa_n t^n=1-\frac{1}{\sum_{n=0}^{\infty}n!t^n}.
\end{equation}
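The rewritten recursion immediately yields the numbers $\kappa_n$; a sketch (our own code) reproducing the initial terms of A003319:

```python
from math import factorial

def connected_counts(n_max):
    """kappa_n, n = 1..n_max, from n! = kappa_n + sum_{m<n} kappa_m (n-m)!."""
    kappa = [0]
    for n in range(1, n_max + 1):
        kappa.append(factorial(n) - sum(kappa[m] * factorial(n - m)
                                        for m in range(1, n)))
    return kappa[1:]

# connected_counts(6) -> [1, 1, 3, 13, 71, 461]  (sequence A003319)
```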
The numbers $\{\kappa_n\}_{n\geq 0}$ are listed
as sequence A003319 in the On-Line Encyclopedia of Integer
Sequences~\cite{OEIS}, and count the number of {\em connected} or {\em
indecomposable} permutations of $\{1,2,\ldots,n\}$. A permutation
$\pi\in S_n$ is {\em connected} if there is no $m<n$ such that $\pi$
takes the set $\{1,\ldots,m\}$ into itself. The kernel positions of the
instant Bernoulli game are directly identifiable with the connected
permutations in more than one way. One way is mentioned at the end
of~\cite{Hetyei-EKP}; we may formalize that bijection using two variants of the
well-known {\em inversion tables} (see, for example ~\cite[Section
5.1.1]{Knuth} or \cite[Section 1.3]{Stanley-EC1}).
\begin{definition}
Given a permutation $\pi\in S_n$ we define its {\em letter-based
non-inversion table} as the word $v_1\cdots v_n$ where
$v_j=1+|\{i<j\::\: \pi^{-1}(i)<\pi^{-1}(j)\}|$.
\end{definition}
For example, for $\pi=693714825$ the letter-based non-inversion table
is $121351362$. This is obtained by adding $1$ to all entries in the
usual definition of an inversion table~\cite[Section 1.3]{Stanley-EC1}
of the permutation $\widetilde{\pi}=417396285$, defined by
$\widetilde{\pi}(i)=n+1-\pi(i)$ and taking the reverse of the resulting
word. In particular, for $\widetilde{\pi}=417396285$ we find the
inversion table $(1,5,2,0,4,2,0,1,0)$ in~\cite[Section
1.3]{Stanley-EC1}. Our term {\em letter-based} refers to the fact that
here we associate the letter $j$ to $v_j$ and not the place $j$.
A variant of the notion of letter-based non-inversion table is the
place-based non-inversion table.
\begin{definition}
Given a permutation $\pi\in S_n$ we define its {\em place-based
non-inversion table (PNT)} as the word $v_1\cdots v_n$ where
$v_j=1+|\{i<j\::\: \pi(i)<\pi(j)\}|$.
\end{definition}
Obviously the PNT of a permutation $\pi$ equals the letter-based
non-inversion table of $\pi^{-1}$. For example, for $\pi=583691472$ the PNT is
$121351362$. We have $v_7=3=1+2$ because $\pi(7)=4$ is preceded by two
letters $\pi(i)$ such that $(\pi(i),\pi(7))$ is not an inversion. Any
PNT $v_1\cdots v_n$ is a word satisfying $1\leq v_i\leq i$.
\begin{lemma}
\label{L_MRc}
A position $v_1\cdots v_n$ in the instant Bernoulli game is a kernel
position if and only if it is the place-based (letter-based)
non-inversion table of a connected permutation.
\end{lemma}
\begin{proof}
We prove the place-based variant of the lemma, the letter-based version
follows immediately since the set of connected permutations is closed
under taking inverses. It is easy to verify that the place-based
non-inversion table $v_1\cdots v_n$ of a permutation $\pi$ satisfies
$m+1\leq v_{m+1},\ldots, v_n$ if and only if $\pi$ takes the set
$\{1,\ldots,m\}$ into itself. Thus the first player has no valid move if
and only if $\pi$ is connected.
\end{proof}
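The definitions above are straightforward to implement; the following sketch (our own code) computes the PNT and tests the kernel condition of Lemma~\ref{L_MRsimple}, recovering the example $\pi=583691472$ and the count of connected permutations of small order:

```python
from itertools import permutations

def pnt(pi):
    """Place-based non-inversion table of a permutation in one-line
    notation: v_j = 1 + #{i < j : pi(i) < pi(j)}."""
    return [1 + sum(pi[i] < pi[j] for i in range(j))
            for j in range(len(pi))]

def is_kernel(v):
    """True if v admits no valid move in the instant Bernoulli game,
    i.e. no m >= 1 with m+1 <= v_{m+1}, ..., v_n (Lemma L_MRsimple)."""
    n = len(v)
    return not any(all(v[j] >= m + 1 for j in range(m, n))
                   for m in range(1, n))

# pnt([5, 8, 3, 6, 9, 1, 4, 7, 2]) -> [1, 2, 1, 3, 5, 1, 3, 6, 2]
# By Lemma L_MRc, exactly 13 of the 24 permutations of order 4
# give kernel positions: the connected permutations.
```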
The study of connected permutations goes back to the work of
Comtet~\cite{Comtet-n!,Comtet-AC}, for a reasonably complete
list of references we refer to the entry A003319 in the On-Line
Encyclopedia of Integer Sequences~\cite{OEIS}. It was shown by Poirier
and Reutenauer~\cite{Poirier-Reutenauer} that the connected permutations
form a free algebra basis of the Malvenuto-Reutenauer Hopf-algebra,
introduced by Malvenuto and Reutenauer~\cite{Malvenuto-Reutenauer}.
The same statement appears in dual form in the work of Aguiar and
Sottile~\cite{Aguiar-Sottile}.
Although the instant Bernoulli game is very simple,
Theorem~\ref{T_sBt} offers a nontrivial analysis of its kernel
positions, allowing us to identify a unique structure on each connected
permutation. We begin by stating the following analogue of
Theorem~\ref{T_b2k}.
\begin{theorem}
\label{T_MRk}
For $n\geq 2$, the number $\kappa_n$ of connected permutations of order $n$
is given by
$$
\kappa_n=\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!\cdot i_j.
$$
\end{theorem}
\begin{proof}
By Lemma~\ref{L_MRc}, $\kappa_n$ is the number of kernel positions of
rank $n$ in the instant Bernoulli game. The fact that this number is
equal to the expression on the right hand side may be shown similarly to
the proof of Theorem~\ref{T_b2k}. Consider the equivalent representation of the
instant Bernoulli game given in Lemma~\ref{L_MRsimple}. Note that this
is obtained from the representation given in Lemma~\ref{L_iBiso} by
deleting the ``redundant coordinates'' $i,i$ from each letter
$(i,i,v_i)$. Given an
arbitrary kernel position $v_1\cdots v_n$, the first letter $v_1=1$
corresponds to an elementary kernel factor of type $(1,1)$
and we have $\kappa(1,1)=1$.
For $2\leq i\leq j$, by abuse of terminology,
let us call $v_i\cdots v_j$ an elementary kernel factor of type
$(i,j)$ if it corresponds to an elementary kernel factor in the
equivalent representation in Lemma~\ref{L_iBiso}.
The elementary kernel factors of type $(i,j)$ are then exactly
those words $v_i\cdots v_j$ for which $i\leq v_i,\ldots, v_{j-1}$ and $v_j<i$
hold. Thus their number is
\begin{equation}
\label{E_MRij}
\kappa(i,j)=(j-i)!\cdot (i-1).
\end{equation}
The statement now follows from the obvious formula
$$
\kappa_n=\kappa(1,1)\cdot \sum_{k=0}^{n-2}
\sum_{1=i_0<i_1<\cdots<i_{k+1}=n}
\prod_{j=0}^k \kappa(i_j+1,i_{j+1}).
$$
\end{proof}
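The sum over elementary kernel factor types can be evaluated directly; a brute-force sketch (our own code, enumerating the index sequences $1=i_0<i_1<\cdots<i_{k+1}=n$), which reproduces the counts $1, 3, 13, 71$ of connected permutations of orders $2,\ldots,5$:

```python
from itertools import combinations
from math import factorial

def connected_via_factors(n):
    """kappa_n, n >= 2, summed over the types of elementary kernel
    factors in the unique decomposition of a kernel position."""
    total = 0
    for k in range(0, n - 1):                     # k = 0, ..., n-2
        for mid in combinations(range(2, n), k):  # i_1 < ... < i_k
            idx = (1,) + mid + (n,)               # i_0 = 1, i_{k+1} = n
            term = 1
            for a, b in zip(idx, idx[1:]):
                term *= factorial(b - a - 1) * a  # kappa(a + 1, b)
            total += term
    return total

# [connected_via_factors(n) for n in (2, 3, 4, 5)] -> [1, 3, 13, 71]
```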
In analogy to Proposition~\ref{P_b2krec}, we may also use (\ref{E_MRij})
to obtain a recursion formula for the number of connected permutations.
We end up with a formula that was first discovered by King~\cite[Theorem
4]{King}.
\begin{proposition}[King]
\label{P_MRrec}
For $n\geq 2$, the number $\kappa_n$ of connected permutations of order
$n$ satisfies the recursion formula
$$
\kappa_n=\sum_{i=1}^{n-1} \kappa_i (n-i-1)!i.
$$
\end{proposition}
The proof may be presented the same way as for
Proposition~\ref{P_b2krec}, by removing the last elementary kernel
factor of type $(i,n)$, using the informal notion of an elementary kernel
factor as in the proof of Theorem~\ref{T_MRk}. King's proof is worded
differently, but may be shown to yield a bijectively equivalent decomposition.
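King's recursion is equally easy to iterate; a sketch (ours) confirming that it produces the same sequence:

```python
from math import factorial

def king_counts(n_max):
    """kappa_n, n = 1..n_max, via King's recursion (Proposition P_MRrec)."""
    kappa = [0, 1]  # kappa_1 = 1
    for n in range(2, n_max + 1):
        kappa.append(sum(kappa[i] * factorial(n - i - 1) * i
                         for i in range(1, n)))
    return kappa[1:]

# king_counts(6) -> [1, 1, 3, 13, 71, 461], the start of A003319
```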
\begin{lemma}
The induction step presented in King's proof of Proposition~\ref{P_MRrec} is
equivalent to the removal of the last
elementary kernel factor in the place-based non-inversion table of
$\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots\widetilde{\sigma}(n)$.
Here $\widetilde{\sigma}(i)=n+1-\sigma(n+1-i)$.
\end{lemma}
\begin{proof}
Let $\sigma(1)\cdots \sigma(n)$ be the connected permutation considered
in King's proof, and let $v_1\cdots v_n$ be the PNT of
$\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots
\widetilde{\sigma}(n)$. King's proof first identifies $\sigma(1)=r$.
This is equivalent to setting $v_n=n+1-r$. King
then defines $\pi(1)\cdots \pi(n-1)$ as the permutation obtained by
deleting $\sigma(1)$ and subtracting $1$ from all letters greater than
$r$. Introducing $\widetilde{\pi}(i)=n-\pi(n-i)$, the permutation
$\widetilde{\pi}(1)\cdots\widetilde{\pi}(n-1)$ is obtained from
$\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots \widetilde{\sigma}(n)$
by deleting the last letter $n+1-r$ and by decreasing all letters
greater than $n+1-r$ by one. The PNT of
$\widetilde{\pi}(1)\cdots\widetilde{\pi}(n-1)$ is
thus $v_1\cdots v_{n-1}$. King then defines $j$ as the largest index
such that $\pi(\{1,\ldots,j\})=\{1,\ldots,j\}$. This is equivalent to
finding the least $n-j$ such that
$\widetilde{\pi}(\{n-j,n-j+1,\ldots,n-1\})=\{n-j,n-j+1,\ldots,n-1\}$.
Using the proof of Lemma~\ref{L_MRc}, this is easily seen to be
equivalent to finding the smallest $n-j$ such that $v_{n-j}=n-j$ and for
all $n-j\leq k\leq n-1$ we have $v_{k}\geq n-j$. King defines
$\beta(\pi)$ as the permutation obtained from $\pi$ by removing
$\pi(1)\cdots \pi(j)$ and then subtracting $j$ from each
element. Correspondingly, we may define
$\widetilde{\beta}(\widetilde{\pi})$ as the permutation obtained from
$\widetilde{\pi}$ by removing $\widetilde{\pi}(n-j)\cdots
\widetilde{\pi}(n-1)$. The PNT of
$\widetilde{\beta}(\widetilde{\pi})$ is then $v_1\cdots v_{n-j-1}$,
representing a kernel position in the instant Bernoulli game. This is
the kernel position of the least rank that is reachable from $v_1\cdots
v_{n-1}$. In terms of elementary kernel factors, the removal of $v_{n}$
enables the first player to remove the rest of the last elementary kernel
factor in a single valid move; we only need to show that the first player
cannot move to a position $v_1\cdots v_k$ where $r\leq k\leq s$ for some
elementary kernel factor $v_r\cdots v_s$. Assume by way of contradiction
that such a move is possible. By definition of a valid move,
we then have $k\leq v_s$, implying $r\leq v_s$, in contradiction with
the definition of the elementary kernel factor $v_r\cdots
v_s$. Therefore $v_1\cdots v_{n-j-1}$ is obtained from $v_1\cdots v_n$
by removing exactly the last elementary kernel factor.
\end{proof}
King~\cite{King} uses the removal of the last elementary kernel factor
to recursively define a {\em transposition Gray code} of all connected
permutations of a given rank. A transposition Gray code is a list of
permutations such that subsequent elements differ by a transposition.
Using place-based non-inversion tables, not only is the last elementary
kernel factor easily identifiable, but the entire unique
decomposition into elementary kernel factors is transparent. This gives
rise to a new way to systematically list all connected permutations.
The resulting list is not a transposition Gray code, but it is fairly
easy to generate.
To explain the construction, consider the connected permutation
$\pi=251376948$. Its place-based non-inversion table is $v_1\cdots
v_9=121355748$, whose decomposition into elementary kernel factors is
$1\cdot 21\cdot 3\cdot 5574\cdot 8$. For $i<j$, each elementary kernel
factor of type $(i,j)$ begins with $i$; all entries in the factor are at
least $i$, except for the last letter, which is less than $i$.
For $i=1$, the single letter $1$ is a special elementary kernel factor; for
$i>1$, a kernel factor of type $(i,i)$ is a single positive integer less than $i$.
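The example can be verified mechanically. In the Python sketch below (an illustration of ours; the formula $v_i=1+\#\{k<i:\pi(k)<\pi(i)\}$ for the PNT entries reproduces the table above), the greedy scan just described recovers the factor decomposition:

```python
def pnt(p):
    """Place-based non-inversion table: v_i = 1 + #{k < i : p(k) < p(i)}."""
    return [1 + sum(p[k] < p[i] for k in range(i)) for i in range(len(p))]

def kernel_factors(v):
    """Greedy decomposition of the PNT of a connected permutation into
    elementary kernel factors: place 1 and every place i with v_i < i is a
    one-letter factor; a place i > 1 with v_i = i opens a factor that runs
    up to the first later letter below i."""
    factors, i = [], 1
    while i <= len(v):
        j = i
        if i > 1 and v[i - 1] == i:
            while v[j - 1] >= i:       # terminates: the factor ends below i
                j += 1
        factors.append(v[i - 1:j])
        i = j + 1
    return factors

v = pnt([2, 5, 1, 3, 7, 6, 9, 4, 8])   # pi = 251376948
assert v == [1, 2, 1, 3, 5, 5, 7, 4, 8]
assert kernel_factors(v) == [[1], [2, 1], [3], [5, 5, 7, 4], [8]]
```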
\begin{definition}
Given a connected permutation $\pi$, we define its {\em elevation $E(\pi)$} as
the permutation whose PNT is obtained from
the PNT of $\pi$ as follows: for
each elementary kernel factor of type $(i,j)$, increase the last letter in
the factor to $j$.
\end{definition}
For example, the PNT of the elevation of $251376948$ is
$1\cdot 23\cdot 4\cdot 5578\cdot 9$, thus $E(\pi)$ is
$123465789$. The PNT of $E(\pi)$ is written as a product of factors,
such that each factor $u_i\cdots u_j$ ends with $j$,
and all letters after $u_j$ are more than $j$. We may use this
observation to prove that each factor $u_i\cdots u_j$ ends at a position $j$
that is a {\em strong fixed point} of $E(\pi)$.
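The elevation is also easy to compute, since the PNT map is invertible: reading the places from right to left, $\pi(i)$ is the $v_i$-th smallest value not yet used. The following Python sketch (ours) recovers $E(251376948)=123465789$:

```python
def pnt(p):
    return [1 + sum(p[k] < p[i] for k in range(i)) for i in range(len(p))]

def perm_from_pnt(v):
    """Invert the PNT map: right to left, pi(i) is the v_i-th smallest
    value that is still unused."""
    avail, p = list(range(1, len(v) + 1)), [0] * len(v)
    for i in range(len(v) - 1, -1, -1):
        p[i] = avail.pop(v[i] - 1)
    return p

def elevate(p):
    """E(pi) for a connected permutation pi: raise the last letter of each
    elementary kernel factor of the PNT of pi to the place where it sits."""
    v = pnt(p)
    w, i = v[:], 1
    while i <= len(v):
        j = i
        if i > 1 and v[i - 1] == i:
            while v[j - 1] >= i:       # terminates since pi is connected
                j += 1
        w[j - 1] = j
        i = j + 1
    return perm_from_pnt(w)

pi = [2, 5, 1, 3, 7, 6, 9, 4, 8]
assert elevate(pi) == [1, 2, 3, 4, 6, 5, 7, 8, 9]       # E(pi) = 123465789
assert perm_from_pnt(pnt(pi)) == pi                     # PNT is invertible
```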
\begin{definition}
A number $i\in\{1,\ldots,n\}$ is a strong fixed point of a permutation
$\sigma$ of $\{1,\ldots,n\}$ if $\sigma(i)=i$ and
$\sigma(\{1,\ldots,i\})=\{1,\ldots,i\}$. We denote the set of strong
fixed points of $\sigma$ by $\mbox{SF}(\sigma)$.
\end{definition}
\begin{remark}
The definition of a strong fixed point may be found in Stanley's
book~\cite[Ch.\ 1, Exercise 32b]{Stanley-EC1}, where it is stated that
the number $g(n)$ of permutations of rank $n$ with no strong fixed points
has the generating function
$$
\sum_{n\geq 0} g(n) t^n=\frac{\sum_{n\geq 0} n! t^n}{1+t \sum_{n\geq 0} n! t^n}.
$$
\end{remark}
\begin{lemma}
\label{L_ifp}
Let $v_1\cdots v_n$ be the PNT of a permutation $\sigma$. Then $j$ is a
strong fixed point of $\sigma$ if and only if $v_j=j$ and for all $k>j$
we have $v_k>j$.
\end{lemma}
In fact, the condition that $v_k>j$ holds for all $k>j$ is easily seen to
be equivalent to $\sigma(\{1,\ldots,j\})=\{1,\ldots,j\}$. Assuming this
is satisfied, $j$ is a fixed point of $\sigma$ if and only if $v_j=j$.
As a consequence of Lemma~\ref{L_ifp} the last letters of the elementary
kernel factors of the PNT of $\pi$ mark strong fixed points of
$E(\pi)$. The converse is not necessarily true: in our example $7$ is a
strong fixed point of $E(\pi)$; however, no elementary kernel factor of
the PNT of $\pi$ ends with $v_7$. On the other hand, $v_1$ is always a special
elementary kernel factor by itself and the last elementary
kernel factor must end at $v_n$, thus $1$ and $n$ must always be
strong fixed points of $E(\pi)$. The numbers $1$ and $n$ are also special
in the sense that $i\in\{1,n\}$ is a strong fixed point if and only if
it is a fixed point.
\begin{theorem}
\label{T_epi}
Let $\sigma\in S_n$ be a permutation satisfying $\sigma(1)=1$ and
$\sigma(n)=n$ and let the strong fixed points of $\sigma$ be
$1=i_0<i_1<\cdots<i_{k+1}=n$. Then there are
exactly $(i_1+1)\cdots (i_k+1)$ connected permutations $\pi$ whose elevation is
$\sigma$.
\end{theorem}
\begin{proof}
Assume $E(\pi)=\sigma$ and the PNT of $\pi$ is the product of
elementary factors of type $(1,1)$, $(j_0+1,j_1)$, $(j_1+1,j_2)$, \ldots,
$(j_l+1,j_{l+1})$, where $1=j_0<j_1<\cdots<j_{l+1}=n$. As we have seen
above, $\{j_1,\ldots,j_l\}$ must be a subset of $\{i_1,\ldots,i_k\}$.
This condition is also sufficient since we may decompose the PNT of
$\sigma$ as $u_1\cdot (u_{j_0+1}\cdots u_{j_1})\cdots (u_{j_l+1}\cdots u_{j_{l+1}})$,
and decrease the value of each $u_{j_t}=j_t$ (where $t=1,2,\ldots,l+1$)
independently to any number that is at most $j_{t-1}$. Note that
each $u_{j_t+1}=j_t+1$, and the required inequalities for all other $u_j$s are
automatically satisfied as a consequence of having selected the $j_t$s
from among the strong fixed points. Thus we obtain the PNT
of a connected permutation, whose kernel factors are of type
$(1,1)$, $(j_0+1,j_1)$, $(j_1+1,j_2)$, \ldots,
$(j_l+1,j_{l+1})$. Therefore the number of permutations $\pi$ satisfying
$E(\pi)=\sigma$ is
$$
\sum_{l=0}^k \sum_{\{j_1,\ldots,j_l\}\subseteq \{i_1,\ldots,i_k\}}
j_1\cdots j_l=(i_1+1)\cdots (i_k+1).
$$
\end{proof}
The proof of Theorem~\ref{T_epi} suggests a straightforward way to list
the PNTs of all connected permutations of rank $n$:
\begin{itemize}
\item[(1)] List all words $u_1\cdots u_n$ satisfying $u_1=1$, $u_n=n$
and $2\leq u_i\leq i$ for $1<i<n$. These are the PNTs of all
permutations of rank $n$ that fix both $1$ and $n$.
\item[(2)] For each $u_1\cdots u_n$, identify the places of strong
fixed points by finding all $i$s such that $u_i=i$ and $u_k>i$ for all
$k>i$.
\item[(3)] For each $u_1\cdots u_n$ select a subset
$\{j_1,\ldots,j_l\}$ of the set of strong fixed points satisfying
$1<j_1<\cdots<j_l<n$, set $j_0=1$ and $j_{l+1}=n$, and decrease the
value of each $u_{j_t}$ (for $t=1,\ldots,l+1$) to any number in
$\{1,\ldots,j_{t-1}\}$. Output these words as the PNTs of
connected permutations.
\end{itemize}
Steps $(1)$ and $(3)$ involve nothing more than listing words using
some lexicographic order; step $(2)$ may be performed after reading each
word once.
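A direct implementation of the three steps is short. In the Python sketch below (ours), step $(1)$ enumerates the PNTs of the permutations fixing $1$ and $n$ by requiring $u_i\geq 2$ for $1<i<n$, and step $(3)$ lowers the entries at $j_1,\ldots,j_l$ and at $n$, with $j_0=1$, as in the proof of Theorem~\ref{T_epi}; the number of words produced matches the connected permutation counts:

```python
from itertools import product

def strong_fixed_places(u):
    """Places i with u_i = i and u_k > i for all k > i (Lemma L_ifp)."""
    n = len(u)
    return [i for i in range(1, n + 1)
            if u[i - 1] == i and all(u[k] > i for k in range(i, n))]

def connected_pnts(n):
    """PNTs of all connected permutations of rank n >= 2."""
    out = []
    # step (1): u_1 = 1, u_n = n, and 2 <= u_i <= i in between
    for mid in product(*(range(2, i + 1) for i in range(2, n))):
        u = [1, *mid, n]
        sf = [i for i in strong_fixed_places(u) if 1 < i < n]   # step (2)
        for mask in range(1 << len(sf)):                        # step (3)
            js = [1] + [s for t, s in enumerate(sf) if mask >> t & 1] + [n]
            lowered = product(*(range(1, js[t - 1] + 1)
                                for t in range(1, len(js))))
            for vals in lowered:
                w = u[:]
                for j, val in zip(js[1:], vals):
                    w[j - 1] = val
                out.append(tuple(w))
    return out

assert [len(connected_pnts(n)) for n in range(2, 7)] == [1, 3, 13, 71, 461]
assert len(set(connected_pnts(5))) == 71        # each PNT is produced once
```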
As a consequence of Theorem~\ref{T_epi} we obtain the following formula
for the number of connected permutations of rank $n\geq 2$:
$$
\kappa_n=\sum_{\substack{\sigma\in S_n\\\sigma(1)=1, \sigma(n)=n}} \prod_{i\in
\mbox{SF}(\sigma)\setminus\{1,n\}} (i+1).
$$
After removing the redundant letters $\sigma(1)=1$ and $\sigma(n)=n$ and
decreasing all remaining letters by $1$, we obtain that
\begin{equation}
\label{E_IF}
\kappa_n=\sum_{\sigma\in S_{n-2}} \prod_{i\in
\mbox{SF}(\sigma)} (i+2)\quad\mbox{holds for $n\geq 2$}.
\end{equation}
Equation (\ref{E_IF}) offers a new combinatorial model for the numbers
counting the connected permutations of rank $n\geq 2$: it is the total weight
of all permutations of rank $n-2$, using a weighting which assigns the
most value to those permutations which have the most strong fixed points
and are thus in a sense the farthest from being connected.
\section{The polynomial Bernoulli game of the second kind, indexed by
$x$}
\label{s_pb2}
This game is defined in~\cite{Hetyei-EKP} on triplets of words
$(u_1\cdots u_n, v_1\cdots v_n, w_1\cdots w_n)$ for
$n\geq 0$ such that $1\leq u_i\leq i$, $1\leq v_i\leq i+1$ and
$1\leq w_i\leq x$ hold for $i\geq 1$, furthermore we require
$w_i\leq w_{i+1}$ for all $i\leq n-1$.
A valid move consists of replacing $(u_1\cdots
u_n,v_1\cdots v_n, w_1\cdots w_n)$ with $(u_1\cdots u_m,v_1\cdots v_m,
w_1\cdots w_m)$ for some $m\geq 0$ satisfying
$w_{m+1}=w_{m+2}=\cdots=w_n=x$ and $u_{m+1}< v_j$ for $j=m+1,\ldots, n$.
Theorem~\ref{T_sBt} is applicable to this game, because of the following
isomorphism.
\begin{lemma}
\label{L_rpb2}
Let $\Lambda={\mathbb P}\times {\mathbb P}\times \{1,\ldots,x\}$ where $x\in
{\mathbb P}$. The polynomial Bernoulli game of the second kind, indexed by $x$, is
isomorphic to the strongly Bernoulli type truncation game, induced by
$$
M:=\{(u_1,v_1,x)\cdots (u_n,v_n,x)\::\: u_1< v_1, \ldots, v_n\}
$$
on the set of positions
$$
P:=\{(u_1,v_1,w_1)\cdots(u_n,v_n,w_n)\::\: 1\leq u_i\leq i, 1\leq
v_i\leq i+1, w_1\leq\cdots\leq w_n\}.
$$
This isomorphism is given by sending each triplet
$(u_1\cdots u_n, v_1\cdots v_n, w_1\cdots w_n)\in {\mathbb P}^*\times
{\mathbb P}^* \times \{1,\ldots,x\}^*$ into
$(u_1,v_1,w_1)\cdots (u_n,v_n,w_n)\in ({\mathbb P}\times {\mathbb
P}\times \{1,\ldots,x\})^*$.
\end{lemma}
\begin{theorem}
\label{T_b2p}
The number $\kappa_n$ of kernel positions of rank $n$ in the polynomial
Bernoulli game of the second kind, indexed by $x$, is
\begin{align*}
\kappa_n=&\sum_{m=0}^{n-1} \binom{x+m-2}{m} m!(m+1)!
\sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}\\
&+\binom{x+n-2}{n} n!(n+1)!.
\end{align*}
\end{theorem}
\begin{proof}
Consider the isomorphic game $(P,M)$, given in Lemma~\ref{L_rpb2}.
Since in a valid move all truncated letters $(u_j,v_j,w_j)$ satisfy
$w_j=x$, we have to distinguish two types of elementary kernel factors:
those which contain a letter $(u_i,v_i,w_i)$ with $w_i<x$ and those
which do not. If the elementary kernel factor contains a $(u_i,v_i,w_i)$
with $w_i<x$, it must consist of the single letter $(u_i,v_i,w_i)$. We
call such a factor an {\em elementary kernel factor of type
$(i;w_i)$}. Clearly, their number is
\begin{equation}
\label{E_iw}
\kappa(i;w_i)=i(i+1),
\end{equation}
since $u_i\in \{1,\ldots,i\}$ and $v_i\in \{1,\ldots,
i+1\}$ may be selected independently. The elementary kernel factors containing
only $x$ in the $w$-components of their letters are similar to the ones
considered in Lemma~\ref{L_ekf}.
We call an elementary kernel factor of the form $(u_i,v_i,x)\cdots (u_j, v_j, x)$ an
elementary kernel factor of type $(i,j;x)$.
A calculation completely analogous to the one in Lemma~\ref{L_ekfc}
shows that their number is
\begin{equation}
\label{E_ijx}
\kappa(i,j;x)=\sum_{u=1}^i
u\frac{(j-u)!j!}{(i-u)!i!}=(j-i)!^2\binom{j}{i}\binom{j+1}{i-1}.
\end{equation}
Because of $w_1\leq \cdots \leq w_n$, the factors of type $(i;w_i)$
must precede the factors of type $(i,j;x)$. Thus we obtain
\begin{align*}
\kappa_n=&
\sum_{m=0}^{n-1}
\sum_{1\leq w_1\leq \cdots \leq w_m\leq x-1} \prod_{i=1}^m \kappa(i;w_i)
\sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k \kappa(i_j+1,i_{j+1};x)\\
&+\sum_{1\leq w_1\leq \cdots \leq w_n\leq x-1} \prod_{i=1}^n \kappa(i;w_i)
\end{align*}
The statement now follows from (\ref{E_iw}), (\ref{E_ijx}), from
$\prod_{i=1}^m i(i+1)=m!(m+1)!$,
and from the fact that the number of words $w_1\cdots w_m$ satisfying
$1\leq w_1\leq \cdots \leq w_m\leq x-1$ is
$$
\left(\binom{x-1}{m}\right)=\binom{x+m-2}{m}.
$$
\end{proof}
We already know~\cite[Theorem 4.2]{Hetyei-EKP} that we also have
$$\kappa_n=(-1)^n(n+1)!b_n(-x)$$
for all positive integers $x$. Since two polynomial functions are equal
if they agree for infinitely many substitutions, we obtain a valid
expansion of the polynomial $(-1)^n(n+1)!b_n(-x)$.
Substituting $-x$ for $x$ and
rearranging yields the expansion of $b_n(x)$ in the basis
$\{\binom{x+1}{n}\::\: n\geq 0\}$.
\begin{corollary}
\label{C_b2p}
Introducing $c_{n,n}=n!$ and
$$
c_{n,m}=\frac{(-1)^{n-m}m!(m+1)!}{(n+1)!}
\sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}
$$
for $0\leq m<n$, we have
$$
b_n(x)=\sum_{m=0}^{n} c_{n,m}\binom{x+1}{m}.
$$
\end{corollary}
\begin{example}
For $n=2$, Corollary~\ref{C_b2p} gives
\begin{align*}
b_2(x)=& \frac{0!1!}{3!}\left(1!^2\binom{2}{1}\binom{3}{0}
+0!^2\binom{1}{1}\binom{2}{0}\cdot 0!^2\binom{2}{2}\binom{3}{1}\right)
-\frac{1!2!}{3!}\binom{x+1}{1}\cdot
0!^2\binom{2}{2}\binom{3}{1}+\binom{x+1}{2}\cdot 2!\\
=&\frac{5}{6}-(x+1)+(x+1)x=x^2-\frac{1}{6}.
\end{align*}
Thus $b_2(x)/2!=x^2/2-1/12$ which agrees with the formula given
in~\cite[\S 92]{Jordan}.
\end{example}
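The coefficients $c_{n,m}$ are straightforward to evaluate exactly. The Python sketch below (ours, using exact rational arithmetic) reproduces $b_2(x)=x^2-1/6$ at small nonnegative integer substitutions:

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial

def c_coeff(n, m):
    """The coefficient c_{n,m} of Corollary C_b2p, as an exact rational."""
    if m == n:
        return Fraction(factorial(n))
    total = 0
    for k in range(n - m):                        # k = 0, ..., n - m - 1
        for mids in combinations(range(m + 1, n), k):
            chain = (m,) + mids + (n,)
            term = 1
            for a, b in zip(chain, chain[1:]):
                term *= factorial(b - a - 1) ** 2 * comb(b, a + 1) * comb(b + 1, a)
            total += term
    return Fraction((-1) ** (n - m) * factorial(m) * factorial(m + 1) * total,
                    factorial(n + 1))

def b_poly(n, x):
    """b_n(x) at a nonnegative integer x, via the corollary."""
    return sum(c_coeff(n, m) * comb(x + 1, m) for m in range(n + 1))

# b_2(x) = x^2 - 1/6, as computed in the example
assert all(b_poly(2, x) == Fraction(6 * x * x - 1, 6) for x in range(6))
```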
We may also obtain a new formula for the Bernoulli numbers of the second
kind by substituting $x=0$ into Corollary~\ref{C_b2p}. We obtain
$b_n=c_{n,0}+c_{n,1}$, i.e.,
\begin{equation}
\begin{aligned}
b_n=&\frac{(-1)^{n}}{(n+1)!}
\sum_{k=0}^{n-1}\sum_{0=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}\\
&+
\frac{(-1)^{n-1}\cdot 2}{(n+1)!}
\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}=n}
\prod_{j=0}^k (i_{j+1}-i_j-1)!^2
\binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}
\end{aligned}
\end{equation}
for $n\geq 2$.
\section{The flat Bernoulli game}
\label{s_fB}
This game is defined in~\cite{Hetyei-EKP} on words $u_1\cdots
u_n$ for $n\geq 0$ such that each $u_i\in{\mathbb P}$ satisfies
$1\leq u_i\leq i$. A valid move consists of replacing $u_1\cdots
u_n$ with $u_1\cdots u_m$ if $m\geq 1$ and $u_{m+1}<u_j$ holds for all $j>m+1$.
In analogy to Lemma~\ref{L_oBiso}, we have the following result.
\begin{lemma}
\label{L_fBiso}
The flat Bernoulli game is isomorphic to the
strongly Bernoulli type truncation game induced by
$$
M=\{(p_1,u_1)\cdots (p_n,u_n)\::\: p_1\neq 1, u_1<u_2,\ldots, u_n\},
$$
on the set of positions
$$
P=\{(1,u_1)\cdots (n,u_n)\::\: 1\leq u_i\leq i\}\subset
({\mathbb P}^2)^*.
$$
The isomorphism is given by sending each word
$u_1\cdots u_n\in {\mathbb P}^*$
into the word $(1,u_1)(2,u_2)\cdots (n,u_n)\in ({\mathbb P}^2)^*$.
\end{lemma}
\begin{theorem}
\label{T_fB}
For $n\geq 2$, the number $\kappa_n$ of kernel positions of rank $n$ in
the flat Bernoulli game is
$$
\kappa_n=\sum_{k=0}^{\lfloor(n-3)/2\rfloor}
\sum_{\substack{1=i_0<i_1<\cdots<i_{k+1}=n\\ i_{j+1}-i_j\geq 2}}
\prod_{j=0}^k (i_{j+1}-i_j-2)!\binom{i_{j+1}}{i_j}.
$$
\end{theorem}
\begin{proof}
Consider the isomorphic representation given in Lemma~\ref{L_fBiso}.
Note first that, in any
kernel position $(1,u_1)\cdots (n,u_n)$, the letter $(1,u_1)=(1,1)$ is
an elementary kernel factor of type $(1,1)$ and we have $\kappa(1,1)=1$.
For $2\leq i< j$, let $\kappa(i,j)$ be the number of elementary
kernel factors $(i,u_i)\cdots (j,u_j)$ of type
$(i,j)$. A calculation completely analogous to the one in Lemma~\ref{L_ekfc}
shows
\begin{equation}
\label{E_ijf}
\kappa(i,j)=\sum_{u=1}^i u \frac{(j-1-u)!}{(j-1-i)!}=(j-1-i)!\binom{j}{i-1}.
\end{equation}
Note that for $i\geq 2$ there is no elementary kernel factor of type
$(i,i)$ since removing only the last letter is always a valid move,
provided at least one letter is left. The statement now follows from
Equation (\ref{E_ijf}) and the obvious formula
$$
\kappa_n=\kappa(1,1)\cdot \sum_{k=0}^{\lfloor(n-3)/2\rfloor}
\sum_{\substack{1=i_0<i_1<\cdots<i_{k+1}=n\\ i_{j+1}-i_j\geq 2}}
\prod_{j=0}^k \kappa(i_j+1,i_{j+1}).
$$
\end{proof}
Introducing $m_j:=i_{j}-i_{j-1}$ for $j\geq 1$ and shifting the index
$k$ by $1$, we may rewrite the equation in Theorem~\ref{T_fB} as
\begin{equation}
\label{E_fB}
\kappa_n=n\cdot \sum_{k=1}^{\lfloor(n-1)/2\rfloor}
\sum_{\substack{m_1+\cdots + m_{k}=n-1\\ m_1,\ldots, m_k\geq 2}}
\binom{n-1}{m_1, \ldots, m_k} (m_1-2)!\cdots (m_k-2)!.
\end{equation}
A more direct proof of this equation follows from
Corollary~\ref{C_fBperm} below.
\begin{example}
For $n=5$, (\ref{E_fB}) yields
$$
\kappa_5=5\left(\binom{4}{4} 2!+\binom{4}{2,2}0!0!\right)=40.
$$
Thus $\kappa_5/5!=1/3$ which agrees with the number given in~\cite[Table
1]{Hetyei-EKP}.
\end{example}
We already know~\cite[Proposition 7.3]{Hetyei-EKP} that the exponential
generating function of the numbers $\kappa_n$ is
\begin{equation}
\label{E_fBgen}
\sum_{n=1}^{\infty}\frac{\kappa_n}{n!} t^n
=\frac{t}{(1-t)(1-\ln(1-t))}.
\end{equation}
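Equation (\ref{E_fB}) and the generating function (\ref{E_fBgen}) can be cross-checked by machine. The Python sketch below (ours) expands the generating function with exact rational arithmetic and compares it with the composition sum:

```python
from fractions import Fraction
from math import factorial

def kappa_comp(n):
    """kappa_n for the flat Bernoulli game via the composition formula."""
    def comps(s):
        if s == 0:
            yield ()
        for first in range(2, s + 1):
            for rest in comps(s - first):
                yield (first,) + rest
    total = 0
    for ms in comps(n - 1):
        if not ms:
            continue
        num, den = factorial(n - 1), 1
        for m in ms:
            num *= factorial(m - 2)
            den *= factorial(m)
        total += num // den            # always an exact division
    return n * total

def kappa_series(N):
    """kappa_1, ..., kappa_N from t / ((1-t)(1 - ln(1-t)))."""
    g = [Fraction(1)] + [Fraction(1, k) for k in range(1, N + 1)]  # 1 - ln(1-t)
    h = [Fraction(1)]                                              # 1 / g
    for n in range(1, N + 1):
        h.append(-sum(g[k] * h[n - k] for k in range(1, n + 1)))
    # multiplying by t/(1-t) = t + t^2 + ... sums the lower coefficients
    return [factorial(n) * sum(h[:n]) for n in range(1, N + 1)]

ks = kappa_series(8)
assert ks[4] == 40                                  # kappa_5 / 5! = 1/3
assert [kappa_comp(n) for n in range(2, 9)] == ks[1:]
```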
Just like in Section~\ref{s_MR}, we may use place-based non-inversion
tables to find a permutation enumeration model for
the numbers $\kappa_n$.
\begin{lemma}
\label{L_PNT<}
Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$.
Then, for all $i<j$, $\pi(i)<\pi(j)$ implies $u_i<u_j$.
The following partial converse is also true: $u_i<u_{i+1},\ldots, u_j$
implies $\pi(i)<\pi(i+1),\ldots, \pi(j)$.
\end{lemma}
\begin{proof}
If $\pi(i)<\pi(j)$ then the set $\{k<i\::\: \pi(k)<\pi(i)\}$ is a proper
subset of $\{k<j\::\: \pi(k)<\pi(j)\}$ (the index $i$ belongs only to
the second subset). Thus $u_i<u_j$. The converse may be shown by
induction on $j-i$. For $j=i+1$, $\pi(i)>\pi(i+1)$ implies that the set
$\{k<i+1\::\: \pi(k)<\pi(i+1)\}$ is a subset of $\{k<i\::\:
\pi(k)<\pi(i)\}$, thus $u_i\geq u_{i+1}$. Therefore $u_i<u_{i+1}$
implies $\pi(i)<\pi(i+1)$. Assume now that $u_i< u_{i+1},\ldots, u_j$
holds and that we have already shown $\pi(i)<\pi(i+1),\ldots,
\pi(j-1)$. Assume, by way of contradiction, that $\pi(i)>\pi(j)$
holds. Then there is no $k$ satisfying $i<k<j$ and $\pi(k)<\pi(j)$ thus
$\{k<j\::\: \pi(k)<\pi(j)\}$ is a subset of $\{k<i\::\:
\pi(k)<\pi(i)\}$, implying $u_i\geq u_j$, a contradiction. Therefore we
obtain $\pi(i)<\pi(j)$.
\end{proof}
\begin{corollary}
Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$. Then
$u_i\cdots u_j$ satisfies $u_i<u_{i+1},\ldots, u_{j-1}$ and $u_i\geq u_j$
if and only if $\pi(j)<\pi(i)< \pi(i+1),\ldots,\pi(j-1)$ holds.
\end{corollary}
\begin{corollary}
\label{C_fBperm}
Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$. Then
$u_1\cdots u_n$ is a kernel position in the flat Bernoulli game, if and
only if there exists a set of indices $1=i_0<i_1<\cdots<i_{k+1}=n$ such
that for each $j\in\{0,\ldots,k\}$ we have
$\pi(i_{j+1})<\pi(i_j+1)<\pi(i_j+2), \pi(i_j+3),\ldots,\pi(i_{j+1}-1)$.
\end{corollary}
Equation (\ref{E_fB}) also follows from Corollary~\ref{C_fBperm}.
In fact, there are $n$ ways to select $\pi(1)$. Then, introducing
$m_{j+1}:=i_{j+1}-i_{j}$ for $j\geq 0$, we have
$\binom{n-1}{m_1,\ldots,m_{k+1}}$ ways to select the partitioning
$$
\{1,\ldots,n\}\setminus \{\pi(1)\}=\biguplus_{j=0}^k
\pi\left(\{i_j+1,\ldots,i_{j+1}\}\right)
$$
and, for each $j$, there are $(i_{j+1}-i_j-2)!=(m_{j+1}-2)!$ ways to select
the partial permutation $\pi(i_j+1)\cdots\pi(i_{j+1})$. Both Equation
(\ref{E_fB}) and Corollary~\ref{C_fBperm} suggest looking at the numbers
\begin{equation}
\label{E_K}
K_n=\kappa_{n+1}/(n+1)=
\sum_{k=1}^{\lfloor n/2\rfloor}
\sum_{\substack{m_1+\cdots + m_{k}=n\\ m_1,\ldots, m_k\geq 2}}
\binom{n}{m_1, \ldots, m_k} (m_1-2)!\cdots (m_k-2)!\quad\mbox{for $n\geq
0$}.
\end{equation}
It is easy to check the following statement.
\begin{proposition}
$K_n$ is the number of kernel positions of rank $n$ in the {\em exception-free}
variant of the flat Bernoulli game, where removing the entire word if
$u_1<u_2,\ldots, u_n$ is also a valid move, and the empty word is a
valid position.
\end{proposition}
Corollary~\ref{C_fBperm} may be rephrased as follows.
\begin{corollary}
\label{C_fBpermK}
$K_n$ is the number of those permutations $\pi\in S_n$ for which
there exists a set of indices $0=i_0<i_1<\cdots<i_{k+1}=n$ such
that for each $j\in\{0,\ldots,k\}$ we have
$\pi(i_{j+1})<\pi(i_j+1)<\pi(i_j+2), \pi(i_j+3),\ldots,\pi(i_{j+1}-1)$.
\end{corollary}
The exponential generating function of the numbers $K_n$ is
\begin{equation}
\label{E_Kgen}
\sum_{n=0}^{\infty} \frac{K_n}{n!} t^n
=\frac{1}{(1-t)(1-\ln(1-t))}.
\end{equation}
This formula may be derived not only from $K_n=\kappa_{n+1}/(n+1)$
and (\ref{E_fBgen}), but also from Corollary~\ref{C_fBpermK} and the
compositional formula for exponential generating
functions~\cite[Thm.\ 5.5.4]{Stanley-EC2}. We only need to observe that
$$
\frac{1}{(1-t)(1-\ln(1-t))}=\frac{1}{1-t}\circ \left(t+(1-t)\ln(1-t)\right),
$$
where
$$
\frac{1}{1-t}=\sum_{n=0}^{\infty} \frac{n! t^n}{n!}
$$
is the exponential generating function of linear orders, whereas
$$
t+(1-t)\ln(1-t)=-t\ln(1-t)-(-\ln(1-t)-t)
=\sum_{n=1}^{\infty}\frac{t^{n+1}}{n}-\sum_{n=2}^{\infty}\frac{t^{n}}{n}
=\sum_{n=2}^{\infty}\frac{(n-2)! t^{n}}{n!}
$$
is the exponential generating function of linear orders of
$\{1,\ldots,n\}$, listing $1$ last and $2$ first.
By taking the antiderivative on both sides of (\ref{E_Kgen}) we obtain
$$
\sum_{n=0}^\infty \frac{K_n}{(n+1)!} t^{n+1}=\int \frac{1}{(1-t)(1-\ln(1-t))}\
dt=\ln(1-\ln(1-t))+K_{-1}.
$$
Introducing $K_{-1}:=0$, the numbers $K_{-1}, K_0,K_1,\ldots$ are listed
as sequence A089064 in the On-Line Encyclopedia of Integer
Sequences~\cite{OEIS}. There we may also find the formula
\begin{equation}
\label{E_st}
K_n=(-1)^{n}\sum_{k=1}^{n+1} s(n+1,k)\cdot (k-1)!
\end{equation}
expressing them in terms of the Stirling numbers of the first kind.
Using the well-known formulas
$$
\sum_{k=1}^{n+1} s(n+1,k) x^k=x(x-1)\cdots (x-n)
\quad
\mbox{and}
\quad
n!=\int_0^{\infty} x^n e^{-x}\ dx,
$$
Equation (\ref{E_st}) is equivalent to
\begin{equation}
\label{E_Kint}
K_n=(-1)^{n}\int_0^{\infty} (x-1)\cdots(x-n) e^{-x}\ dx.
\end{equation}
This formula may be directly verified by substituting it into the left
hand side of (\ref{E_Kgen}) and obtaining
$$
\int_{0}^{\infty} e^{-x}
\sum_{n=0}^{\infty}\binom{x-1}{n}(-t)^n\ dx
=\int_{0}^{\infty} e^{-x} (1-t)^{x-1}\ dx
=\frac{1}{(1-t)(1-\ln(1-t))}.
$$
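The equality of (\ref{E_K}) and (\ref{E_st}) is also easy to confirm numerically. In the Python sketch below (ours), the signed Stirling numbers $s(n+1,k)$ are read off as the coefficients of $x(x-1)\cdots(x-n)$:

```python
from math import factorial

def falling_coeffs(n):
    """Coefficients c[k] of x^k in x(x-1)...(x-n); c[k] = s(n+1, k)."""
    c = [0, 1]                                   # the polynomial x
    for a in range(1, n + 1):                    # multiply by (x - a)
        c = [(c[k - 1] if k >= 1 else 0) - a * (c[k] if k < len(c) else 0)
             for k in range(len(c) + 1)]
    return c

def K_stirling(n):
    """K_n from the Stirling number formula (E_st)."""
    s = falling_coeffs(n)
    return (-1) ** n * sum(s[k] * factorial(k - 1) for k in range(1, n + 2))

def K_comp(n):
    """K_n from the composition formula (E_K)."""
    def comps(s):
        if s == 0:
            yield ()
        for first in range(2, s + 1):
            for rest in comps(s - first):
                yield (first,) + rest
    total = 0
    for ms in comps(n):
        if not ms:
            continue
        num, den = factorial(n), 1
        for m in ms:
            num *= factorial(m - 2)
            den *= factorial(m)
        total += num // den
    return total

assert [K_comp(n) for n in range(1, 8)] == [K_stirling(n) for n in range(1, 8)]
assert K_comp(4) == 8 and K_comp(5) == 26
```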
We conclude this section with an intriguing conjecture.
By inspection of (\ref{E_fBgen}) and (\ref{E_Kgen}) we obtain the
following formula.
\begin{lemma}
For $n\geq 1$,
$$a_{n}:=(-1)^{n}\frac{(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1}
=(-1)^{n}\left(K_n-n\cdot K_{n-1}\right)
$$
is the coefficient of $t^{n}/n!$ in $1/(1-\ln(1+t))$.
\end{lemma}
The numbers $a_0,a_1,\ldots $ are listed as sequence A006252 in the
On-Line Encyclopedia of Integer Sequences~\cite{OEIS}. The first $11$
entries are positive, then $a_{12}=-519312$ is negative, and the subsequent
entries seem to have alternating signs. The conjecture that this
alternation continues indefinitely may be rephrased as follows.
\begin{conjecture}
\label{C_novice}
For $n\geq 12$ we have $n\cdot\kappa_{n-1}> \kappa_n$.
Equivalently, $n\cdot K_{n-1}>K_n$ holds for $n\geq 11$.
\end{conjecture}
We may call Conjecture~\ref{C_novice} the {\em novice's chance}. Imagine
that the first player asks a novice friend to replace him or her for
just the first move in a flat Bernoulli game starting from a random
position of rank $n\geq 12$. If Conjecture~\ref{C_novice} is correct
then the novice could simply remove the last letter,
because the number of nonkernel positions in which this is the first
move of the first player's winning strategy still exceeds the
number of all kernel positions. We should note that for the original
Bernoulli game a novice has no such chance. In that game the removal of
a single letter at the end of both words is not always a valid move,
but we could advise our novice to remove the last letters at the end
of both words if this is a valid move and make a random valid move
otherwise. Our novice would have a chance if
$$
\kappa_{n-1}\cdot \left(n^2-\binom{n}{2}\right)
=\kappa_{n-1}\cdot \binom{n+1}{2}\geq \kappa_n
$$
were true for all large $n$. However, it is known~\cite[\S 93]{Jordan} that
we have
\begin{equation}
\label{E_b2bound}
\frac{n-2}{n}\left|\frac{b_{n-1}}{(n-1)!}\right|
<\left|\frac{b_{n}}{n!}\right|
<
\frac{n-1}{n}\left|\frac{b_{n-1}}{(n-1)!}\right|,\quad\mbox{implying}
\end{equation}
$$
(n-2)(n+1)\kappa_{n-1}<\kappa_n<(n-1)(n+1)\kappa_{n-1}.
$$
On the page of A006252 in~\cite{OEIS} we find that the
coefficient of $t^{n}/n!$ in $1/(1-\ln(1+t))$ is
\begin{equation}
\label{E_novicest}
\frac{(-1)^{n}(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1}
=(-1)^{n}\left(K_n-n\cdot K_{n-1}\right)=\sum_{k=0}^{n} s(n,k) k!.
\end{equation}
Equivalently,
\begin{equation}
\label{E_noviceint}
\frac{(-1)^{n}(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1}=
(-1)^{n}\left(K_n-n\cdot K_{n-1}\right)=\int_0^{\infty}
x(x-1)\cdots (x-n+1) e^{-x}\ dx.
\end{equation}
Equations (\ref{E_novicest}) and (\ref{E_noviceint}) may be verified the
same way as the analogous formulas (\ref{E_st}) and (\ref{E_Kint}).
Therefore we may rewrite Conjecture~\ref{C_novice} as
follows:
\begin{equation}
\label{E_novice}
(-1)^{n}\int_0^{\infty} x(x-1)\cdots(x-n+1) e^{-x}\ dx
>0\quad\mbox{holds for $n\geq 11$.}
\end{equation}
This form indicates well the complication that arises, compared to the
original Bernoulli game. To prove (\ref{E_b2bound}),
Jordan~\cite[\S 93]{Jordan} uses the formula
$$
\frac{b_n}{n!}=\int_0^1 \binom{x}{n} \ dx
$$
and is able to use the mean value theorem to compare $b_n/n!$ with
$b_{n+1}/(n+1)!$, because the function
$\binom{x}{n}$ does not change sign on the interval $(0,1)$. Proving Equation
(\ref{E_novice}) is equivalent to a similar estimate of the change of
the integral $(-1)^n\int_0^{\infty} (x-1)\cdots(x-n) e^{-x}\ dx$ as we
increase $n$, however, this integrand does change the sign several times
on the interval $(0,\infty)$.
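The numbers $a_n$ are easy to tabulate from the right hand side of (\ref{E_novicest}). The Python sketch below (ours) reproduces the value $a_{12}=-519312$ quoted above:

```python
from math import factorial

def a_seq(N):
    """a_n = sum_k s(n,k) k! for n = 0, ..., N (equation E_novicest)."""
    out = []
    for n in range(N + 1):
        c = [1]                                  # coefficients of the empty product
        for j in range(n):                       # multiply by (x - j)
            c = [(c[k - 1] if k >= 1 else 0) - j * (c[k] if k < len(c) else 0)
                 for k in range(len(c) + 1)]
        out.append(sum(c[k] * factorial(k) for k in range(len(c))))
    return out

a = a_seq(12)
assert a[:5] == [1, 1, 1, 2, 4]
assert a[12] == -519312
```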
\section{Concluding remarks}
\label{s_c}
Conjecture~\ref{C_novice}, if true, would be an intriguing
example of a sequence ``finding its correct signature pattern'' after a
relatively long ``exceptional initial segment''. Many such examples seem
to exist in analysis, and it is perhaps time for combinatorialists to start
developing a method of proving some of them.
Some of the most interesting questions
arising in connection with this paper seem to be related to the
instant Bernoulli game, presented in Section~\ref{s_MR}. The fact
that our decomposition into elementary kernel factors is
bijectively equivalent to King's~\cite{King} construction
raises the suspicion that this decomposition may
also have an algebraic importance beyond the combinatorial one. This
suspicion is underscored by the fact that the correspondence between our
decomposition and King's is via some modified inversion table, whereas
Aguiar and Sottile~\cite{Aguiar-Sottile} highlight the importance of the weak
order to the structure of the Malvenuto-Reutenauer Hopf algebra, first
pointed out by Loday and Ronco~\cite{Loday-Ronco}. The weak order is
based on comparing the sets of inversions of two
permutations. Depending on the way we choose the basis of the self-dual
Malvenuto-Reutenauer Hopf algebra, expressing one of the product and
coproduct seems easy in terms of place-based non-inversion tables,
whereas the other seems very difficult. If we choose the representation
considered by Poirier and Reutenauer~\cite{Poirier-Reutenauer} where
connected permutations form the free algebra basis, then the product of
two permutations is easily expressed in terms of PNTs, thus the
elementary kernel factor decomposition might indicate the presence of a
larger algebra ``looming on the horizon'' in which the multiplicative
indecomposables of the Malvenuto-Reutenauer Hopf algebra become decomposable.
We should also mention that the decomposition that is equivalent to the
removal of the last elementary kernel factor is only the first phase in
King's construction~\cite{King}; a lot of hard work is done
afterwards to find the transposition Gray code, while recursing
on these reduction steps. Our presentation allows one to better visualize King's
entire ``rough'' decomposition ``at once'' and thus may be suitable to
attack the open question of finding an adjacent transposition Gray code.
Finally, the degenerate Bernoulli game indexed with
$(p,q)$~\cite[\S 6]{Hetyei-EKP} can also be shown to be isomorphic to a
strongly Bernoulli type truncation game. For this game, the number of kernel
positions of rank $n$ is $(-q)^n (n+1)!\beta_n(p/q,0)$~\cite[Thm.\
6.2]{Hetyei-EKP}, where $\beta_n(p/q,0)$ is a degenerate Bernoulli
number. We leave the detailed analysis of this game
to a future occasion.
\section*{Acknowledgements} This work was completed while the author was
on reassignment of duties sponsored by the University of North Carolina
at Charlotte. The author wishes to thank two anonymous referees for
helping substantially improve both the presentation and the contents of
this paper and Christian Krattenthaler for remembering the
exercise in Stanley's book~\cite[Ch.\ 1, Exercise 32b]{Stanley-EC1} on
strong fixed points.
\section{Introduction}
\label{sec:intro}
The dwarf spheroidal galaxy (dSph) companions of the Milky Way (MW)
are excellent laboratories for investigating the chemical evolution
and star formation histories of dwarf galaxies. These galaxies have
undergone at most a few star formation episodes \citep{hol06} and are
dynamically simple \citep{wal07}. The dSphs of the MW provide an
opportunity to examine closely the processes that establish the galaxy
luminosity-metallicity relation \citep[e.g.,][]{sal09}.
The MW dSphs are also considered to be strong candidates for members of a
population of dwarf galaxies that were tidally stripped by the young
Galaxy and eventually incorporated into the Galactic halo. This
scenario has become central to our picture of how large galaxies form
\citep{sea78,rob05}. Important tests of this scenario are to compare
the details of the metallicity distribution function of the collection
of dSphs to that of the Galactic halo stars and to compare abundance
ratio patterns seen in dSphs to those measured for the halo
\citep[e.g.,][]{ven04}.
To date, each of these areas has been hampered by the small sample of
dSph stars for which high-quality measurements of [Fe/H] and abundance
ratios for other elements have been available. \citet{lan04} compared
their models of dSphs less massive than Sagittarius to six or fewer
stars per galaxy. The usual approach for high-quality detailed
abundance determinations is to use high-resolution spectroscopy (HRS,
$R > 20000$) of individual stars. Because of the large distances to
even the nearest dSphs, these are time-consuming observations even
using the largest telescopes.
Our approach is to derive abundances from medium-resolution
spectroscopy (MRS, $R \sim 6500$) using the Deep Imaging Multi-Object
Spectrometer \citep[DEIMOS,][]{fab03} on the Keck~II telescope. As
demonstrated by \citet{kir08a,kir08b}, accurate measurements can be
made for Fe and some $\alpha$ elements (Mg, Si, Ca, and Ti) with these
individual stellar spectra. \citet{she09} demonstrated similarly
precise results using the Keck~I LRIS spectrometer on a sample of
individual stars in the \object[NAME Leo II dSph]{Leo~II dSph}. In a
typical dSph, the DEIMOS field of view allows between 80 and 150 red
giant stars to be targeted per multi-object mask. Samples of several
hundred giants can be observed in a given dSph. The Dwarf Abundances
and Radial Velocities team \citep[DART,][hereafter T04]{tol04} has
been collecting a combination of MRS and HRS in dSphs to exploit the
advantages of both techniques.
This paper is the first in a series that explores the multi-element
abundances of stellar systems measured with MRS. The particular focus
of this series is to characterize the distributions of [Fe/H]\ and [$\alpha$/Fe]\
in MW dSphs. These measurements will provide insight into the role of
dSphs in building the Galactic stellar halo
\citep[e.g.,][]{sea78,whi78}.
Our first target is the \object[NAME SCULPTOR dSph]{Sculptor dSph}
\citep[$\alpha = 1^{\rm{h}}00^{\rm{m}}$, $\delta = -33^{\circ}43'$,
$M_V = -11.1$,][]{mat98}. Sculptor has been a favored HRS and MRS
target for the past ten years. Of all the dSphs, it appears most
often in explanations of dSph chemical evolution and galaxy
formation \citep[e.g., T04,][]{she03,gei07}. T04 discovered that
Sculptor is actually ``two galaxies'' in one, with two stellar
populations that are kinematically and compositionally distinct.
\citet{bat06} later showed that \object[NAME FORNAX dSph]{Fornax}
also displays multiple stellar populations with different
kinematics, spatial extents, and metallicities. But Sculptor is
also unique in that it is the only MW dSph known to rotate
\citep{bat08a}. Recently, \citet{wal09} published radial velocities
for 1365 Sculptor members, and \citet{ven05,ven08} presented
high-resolution abundance measurements of Mg, Ca, Ti, and Fe for 91
stars in Sculptor. They also measured Y, Ba, and Eu for some of
those stars.
This paper consists of six sections and an appendix.
Section~\ref{sec:obs} introduces the spectroscopic target selection
and observations, and Sec.~\ref{sec:prep} explains how the spectra are
prepared for abundance measurements. Section~\ref{sec:measure}
describes the technique to extract abundances, which builds on the
method described by \citet*[][hereafter KGS08]{kir08a}. In
Sec.~\ref{sec:abund}, we present the metallicity distribution and
multi-element abundance trends of Sculptor. In Sec.~\ref{sec:concl},
we summarize our findings in the context of dSph chemical evolution
and the formation of the Galaxy. Finally, we devote the appendix to
quantifying the uncertainties in our MRS measurements, including
comparisons to independent HRS of the same stars.
\section{Observations}
\label{sec:obs}
\subsection{Target Selection}
We selected targets from the Sculptor photometric catalog of
\citet{wes06}. The catalog includes photometry in three filters: $M$
and $T_2$ in the Washington system, and the intermediate-width DDO51
filter (henceforth called $D$) centered at 5150~\AA. This band probes
the flux from a spectral region susceptible to absorption by the
surface gravity-sensitive \ion{Mg}{1} and MgH lines. \citet{maj00}
and \citet{wes06} outlined the procedure for distinguishing between
distant red giant stars and foreground Galactic dwarf stars using
these three filters. We followed the same procedure to select a
sample of red giant candidates from the Sculptor $MT_2D$ catalog.
\begin{deluxetable*}{lccccc}
\tablewidth{0pt}
\tablecolumns{6}
\tablecaption{Targets with Previous High-Resolution Abundances\label{tab:hrslist}}
\tablehead{\colhead{name} & \colhead{reference} & \colhead{RA} & \colhead{Dec} & \colhead{$M$} & \colhead{$T_2$}}
\startdata
H482 & \citet{she03} & $00^{\mathrm{h}} 59^{\mathrm{m}} 58 \fs 2$ & $-33 \arcdeg 41 \arcmin 08 \arcsec$ & $17.967 \pm 0.030$ & $16.324 \pm 0.020$ \\
H459 & \citet{she03} & $01^{\mathrm{h}} 00^{\mathrm{m}} 12 \fs 5$ & $-33 \arcdeg 43 \arcmin 01 \arcsec$ & $18.465 \pm 0.032$ & $16.924 \pm 0.031$ \\
H479 & \citet{she03} & $01^{\mathrm{h}} 00^{\mathrm{m}} 12 \fs 7$ & $-33 \arcdeg 41 \arcmin 15 \arcsec$ & $17.562 \pm 0.023$ & $15.860 \pm 0.030$ \\
H400 & \citet{she03} & $01^{\mathrm{h}} 00^{\mathrm{m}} 17 \fs 0$ & $-33 \arcdeg 45 \arcmin 13 \arcsec$ & $18.413 \pm 0.030$ & $17.140 \pm 0.027$ \\
H461 & \citet{she03} & $01^{\mathrm{h}} 00^{\mathrm{m}} 18 \fs 2$ & $-33 \arcdeg 42 \arcmin 12 \arcsec$ & $17.806 \pm 0.028$ & $16.166 \pm 0.027$ \\
1446 & \citet{gei05} & $00^{\mathrm{h}} 59^{\mathrm{m}} 46 \fs 4$ & $-33 \arcdeg 41 \arcmin 23 \arcsec$ & $17.618 \pm 0.023$ & $15.695 \pm 0.022$ \\
195 & \citet{gei05} & $00^{\mathrm{h}} 59^{\mathrm{m}} 55 \fs 6$ & $-33 \arcdeg 46 \arcmin 39 \arcsec$ & $17.515 \pm 0.022$ & $15.845 \pm 0.018$ \\
982 & \citet{gei05} & $01^{\mathrm{h}} 00^{\mathrm{m}} 16 \fs 2$ & $-33 \arcdeg 42 \arcmin 37 \arcsec$ & $17.433 \pm 0.025$ & $15.552 \pm 0.028$ \\
770 & \citet{gei05} & $01^{\mathrm{h}} 00^{\mathrm{m}} 23 \fs 8$ & $-33 \arcdeg 42 \arcmin 17 \arcsec$ & $17.623 \pm 0.025$ & $15.857 \pm 0.026$ \\
\enddata
\end{deluxetable*}
Nine stars, listed in Table~\ref{tab:hrslist}, have previously
published HRS abundance measurements \citep{she03,gei05}. These stars
were observed and provide the basis for demonstrating the accuracy of
the MRS abundance measurements, described in the appendix.
\subsection{Slitmask Design}
We designed the DEIMOS slitmasks with the IRAF software module
\texttt{dsimulator}.\footnote{\url{http://www.ucolick.org/$^\sim$phillips/deimos\_ref/masks.html}}
Each slitmask subtended approximately $16' \times 4'$. In order to
adequately subtract night sky emission lines, we required a minimum
slit length of $4''$. The minimum space between slits was $0 \farcs
35$. When these constraints forced the selection of one among
multiple possible red giant candidates, the brightest object was
selected. The slits were designed to be at the approximate
parallactic angle at the anticipated time of observation
($-25^{\circ}$). This choice minimized the small light losses due to
differential atmospheric refraction. This configuration was
especially important for Sculptor, which was visible from Keck
Observatory only at a low elevation. The slitmasks' sky position
angle (PA) was $-35^{\circ}$. The $10^{\circ}$ offset between the
slit PA and the slitmask PA tilted the night sky emission lines
relative to the CCD pixel grid to increase the subpixel wavelength
sampling and improve sky subtraction.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f1.eps}
\caption{DEIMOS slitmask footprints laid over a map of sources from
the photometric catalog. Targets selected for spectroscopy are
shown in red. Targets observed in more than one mask are shown in
green. Blue diamonds enclose stars with previous HRS abundance
measurements. The left and bottom axis scales show the angular
displacement in arcmin from the center of the galaxy
\citep[$\alpha_0 = 1^{\rm{h}}00^{\rm{m}}09^{\rm{s}}$, $\delta_0 =
-33^{\circ}42'30''$,][]{mat98}, and the right and top axis scales
show the physical displacement for an assumed distance of 85.9~kpc
\citep{pie08}.\label{fig:coords}}
\end{figure}
Figure~\ref{fig:coords} shows the coordinates of all the objects in
the catalog regardless of their probability of membership in Sculptor.
Five DEIMOS slitmask footprints enclose the spectroscopic targets:
scl1, scl2, scl3, scl5, and scl6 (see Tab.~\ref{tab:obs}). The scl5
slitmask included 24 targets also included on other masks. These
duplicate observations provide estimates of uncertainty in radial
velocity and abundance measurements (Sec.~\ref{sec:velocities} and
Sec.~\ref{sec:duplicate}). The spectral coverage of each slit is not
the same. The minimum and maximum wavelengths of spectra of targets
near the long, straight edge of the DEIMOS footprint can be up to
400~\AA\ lower than for targets near the irregularly shaped edge of
the footprint (upper left and lower right of the slitmask footprints
in Fig.~\ref{fig:coords}, respectively). Furthermore, spectra of
targets near either extreme of the long axis of the slitmask suffered
from vignetting which reduced the spectral range. It is important to
keep these differences of spectral range in mind when interpreting the
differences of measurements derived from duplicate observations.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f2.eps}
\caption{Color-magnitude diagram in the Washington and Cousins systems
for the sources within the right ascension and declination ranges
shown in Fig.~\ref{fig:coords}. The symbols have the same meanings
as in Fig.~\ref{fig:coords}. The transformation from the Washington
system ($M$ and $T_2$) to the Cousins system ($V_{\rm C}$ and
$I_{\rm C}$) is $I_{\rm C} = T_2$ and $V_{\rm C} - I_{\rm C} =
0.800(M-T_2) - 0.006$ \citep{maj00}.\label{fig:cmd}}
\end{figure}
Figure~\ref{fig:cmd} shows the color-magnitude diagram (CMD) of the
targets within the right ascension and declination ranges of the axes
in Fig.~\ref{fig:coords}. The $MT_2D$ membership criteria caused the
selected red giants to form a tight sequence. This selection may have
imposed a metallicity bias on the spectroscopic sample. Although only
a tiny fraction of stars lay outside the main locus of the red giant
branch, some may have been spectroscopically untargeted members of
Sculptor. For example, if Sculptor contained any old stars with
${\rm [Fe/H]} \ga -0.5$, they would have been too red to be included in
the spectroscopic sample. Any such metallicity bias should have
excluded at most a few stars.
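The Washington-to-Cousins transformation quoted in the caption of Fig.~\ref{fig:cmd}, which is also used later for the photometric temperatures, is straightforward to apply. The following one-function sketch is purely illustrative (the function name is ours):

```python
def washington_to_cousins(M, T2):
    """Transform Washington M, T2 magnitudes to Cousins V_C, I_C using
    the relations quoted from Majewski et al. (2000):
      I_C = T2  and  V_C - I_C = 0.800 (M - T2) - 0.006."""
    I_C = T2
    V_C = I_C + 0.800 * (M - T2) - 0.006
    return V_C, I_C
```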
\subsection{Spectroscopic Configuration and Exposures}
\begin{deluxetable}{lcccc}
\tablewidth{0pt}
\tablecolumns{5}
\tablecaption{DEIMOS Observations\label{tab:obs}}
\tablehead{\colhead{Slitmask} & \colhead{Targets} & \colhead{UT Date} & \colhead{Exposures} & \colhead{Seeing}}
\startdata
scl1 & \phn86 & 2008 Aug 3\phn & $3 \times 1200$~s & $0 \farcs 8$ \\
scl2 & 106 & 2008 Aug 3\phn & $2 \times 900$~s\phn & $0 \farcs 8$ \\
scl3 & \phn87 & 2008 Aug 4\phn & $1 \times 462$~s\phn & $0 \farcs 9$ \\
& & 2008 Aug 31 & $1 \times 1000$~s & $0 \farcs 8$ \\
& & 2008 Aug 31 & $1 \times 834$~s\phn & $0 \farcs 8$ \\
scl5 & \phn95 & 2008 Sep 1\phn & $3 \times 720$~s\phn & $0 \farcs 8$ \\
scl6 & \phn91 & 2008 Sep 1\phn & $3 \times 720$~s\phn & $1 \farcs 2$ \\
\enddata
\tablecomments{The scl4 slitmask was not observed.}
\end{deluxetable}
Our observing strategy was nearly identical to that of \citet{sim07}
and \citet{kir08a}. In summary, we used the 1200 lines~mm$^{-1}$
grating at a central wavelength of 7800~\AA. The slit widths were
$0\farcs 7$, yielding a spectral resolution of $\sim 1.3$~\AA\ FWHM
(resolving power $R \sim 6500$ at 8500~\AA). The OG550 filter blocked
diffraction orders higher than $m=1$. The spectral range was about
6400--9000~\AA\ with variation depending on the slit's location along
the dispersion axis. Exposures of Kr, Ne, Ar, and Xe arc lamps
provided wavelength calibration, and exposures of a quartz lamp
provided flat fielding. Table~\ref{tab:obs} lists the number of
targets for each slitmask, the dates of observations, the exposure
times, and the approximate seeing.
\section{Data Reduction}
\label{sec:prep}
\subsection{Extraction of One-Dimensional Spectra}
We reduced the raw frames using version 1.1.4 of the DEIMOS data
reduction pipeline developed by the DEEP Galaxy Redshift
Survey.\footnote{\url{http://astro.berkeley.edu/$^{\sim}$cooper/deep/spec2d/}}
\citet{guh06} give the details of the data reduction. We also made
use of the optimizations to the code described by \citet[Sec.~2.2 of
their article]{sim07}. These modifications provided better
extraction of unresolved stellar sources.
In summary, the pipeline traced the edges of slits in the flat field
to determine the CCD location of each slit. The wavelength solution
was given by a polynomial fit to the CCD pixel locations of arc lamp
lines. Each exposure of stellar targets was rectified and then
sky-subtracted based on a B-spline model of the night sky emission
lines. Next, the exposures were combined with cosmic ray rejection
into one two-dimensional spectrum for each slit. Finally, the
one-dimensional stellar spectrum was extracted from a small spatial
window encompassing the light of the star in the two-dimensional
spectrum. The product of the pipeline was a wavelength-calibrated,
sky-subtracted, cosmic ray-cleaned, one-dimensional spectrum for each
target.
Some of the spectra suffered from unrecoverable defects, such as a
failure to find an acceptable polynomial fit to the wavelength
solution. There were 53 such spectra. An additional 2\ spectra
had such poor signal-to-noise ratios (SNR) that abundance measurements
were impossible, leaving 410\ useful spectra, comprising 393\
unique targets and 17\ duplicate measurements.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f3.eps}
\caption{Examples of small regions of DEIMOS spectra of four different
stars. The continuum in each spectrum has been normalized to unity.
The $I_{\rm C}$ magnitude, measured effective temperature, and
measured [Fe/H]\ are given for each star. The top two panels show two
stars with very different [Fe/H], and the bottom two panels show two
stars with nearly the same temperature and [Fe/H]\ but different SNR.
The colors show the regions used to measure each of the Fe, Mg, Si,
Ca, and Ti abundances (see
Fig.~\ref{fig:coadd}).\label{fig:snexamples}}
\end{figure}
Figure~\ref{fig:snexamples} shows four example spectra at a variety of
$I_{\rm C}$ magnitudes, effective temperatures, and [Fe/H]. The two
upper panels show stars in the top 10\% of the SNR distribution. The
two lower panels show stars from the middle and bottom 10\% of the
distribution.
The one-dimensional DEIMOS spectra needed to be prepared for abundance
measurements. The preparation included velocity measurement, removal
of telluric absorption, and continuum division. KGS08 (their Sec.~3)
described these preparations in detail. We followed the same process
with some notable exceptions, described below.
\subsection{Telluric Absorption Correction}
We removed the absorption introduced into the stellar spectra by the
earth's atmosphere in the same manner as KGS08: division by a hot star
template spectrum. However, the high airmass of the Sculptor
observations caused much stronger absorption than KGS08 observed in
globular cluster (GC) spectra. Even after scaling the hot star
template spectrum by the airmass, large residuals in the Sculptor
stellar spectra remained. Consequently, we masked spectral regions of
heavy telluric absorption before measuring abundances. These regions
are 6864--6932~\AA, 7162--7320~\AA, 7591--7703~\AA, 8128--8351~\AA,
and 8938--10000~\AA\ (see Fig.~\ref{fig:coadd}).
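To make the masking concrete, excising the heavy telluric bands from a wavelength array can be sketched as follows (an illustration only, not the actual reduction code; the names are ours):

```python
import numpy as np

# Telluric regions masked before the abundance analysis (Angstroms),
# as listed in the text.
TELLURIC_BANDS = [(6864.0, 6932.0), (7162.0, 7320.0),
                  (7591.0, 7703.0), (8128.0, 8351.0),
                  (8938.0, 10000.0)]

def telluric_mask(wavelength):
    """Return a boolean array that is True for pixels kept in the
    abundance analysis, i.e., outside the heavy telluric bands."""
    wavelength = np.asarray(wavelength, dtype=float)
    keep = np.ones(wavelength.shape, dtype=bool)
    for lo, hi in TELLURIC_BANDS:
        keep &= ~((wavelength >= lo) & (wavelength <= hi))
    return keep
```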
\subsection{Radial Velocities and Spectroscopic Membership Determination}
\label{sec:velocities}
Our primary interest in this paper is chemical abundances, and we
measured radial velocities only to determine membership and to shift
the spectra into the rest frame.
Following KGS08, we measured stellar radial velocities by
cross-correlation with a template spectrum. However, KGS08
cross-correlated the observed spectra against synthetic spectra
whereas we cross-correlated the observed spectra against high SNR
template spectra of stars observed with DEIMOS. Templates observed
with the same instrument should provide more accurate radial velocity
measurements than synthetic templates. \citet{sim07} provided their
template spectra to us. For the rest of the analysis, the spectra
were shifted to the rest frame.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f4.eps}
\caption{Distribution of measured radial velocities for targets in the
Sculptor field along with the best-fit Gaussian. The top left label
gives the mean and standard deviation of this Gaussian fit. The
five stars outside of the dashed lines are not considered Sculptor
members. The velocity range of this plot includes all stars for
which a velocity measurement was possible.\label{fig:vhist}}
\end{figure}
Although the $MT_2D$ selection eliminated almost all of the foreground
MW contaminants from the spectroscopic sample, we checked the
membership of each target by radial velocity selection.
Figure~\ref{fig:vhist} shows the distribution of radial velocities in
this spectroscopic data set along with the best-fit Gaussian. We
consider the radial velocity limits of Sculptor membership to be
$84.8$~km~s$^{-1} < v_r < 138.3$~km~s$^{-1}$. We chose these limits
because beyond them, the expected number of Sculptor members per
2~km~s$^{-1}$ bin \citep[the approximate maximum velocity resolution
of DEIMOS,][]{sim07} is fewer than 0.5. This selection eliminated
just 5\ out of 393\ unique targets.
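The velocity window follows from the Gaussian fit by asking where the expected member count per 2~km~s$^{-1}$ bin falls below 0.5 stars. A minimal sketch (the function, and the illustrative input values in the note below, are ours):

```python
import numpy as np

def membership_window(mu, sigma, n_members, bin_width=2.0, min_count=0.5):
    """Velocity limits beyond which the expected number of members per
    bin_width (km/s) bin of a Gaussian(mu, sigma) distribution of
    n_members stars drops below min_count."""
    # Expected counts per bin at an offset of x*sigma from the mean:
    #   n_members * bin_width * exp(-x**2 / 2) / (sigma * sqrt(2*pi))
    peak = n_members * bin_width / (sigma * np.sqrt(2.0 * np.pi))
    x = np.sqrt(2.0 * np.log(peak / min_count))  # offset in units of sigma
    return mu - x * sigma, mu + x * sigma
```

With a mean near 111.6~km~s$^{-1}$, a fitted width near 9~km~s$^{-1}$, and a few hundred members, this reproduces limits close to the quoted 84.8--138.3~km~s$^{-1}$.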
As a check on our procedure, we compared some derived quantities from
the velocity distribution to previous measurements. The mean velocity
of our sample is $\langle v_{\rm{helio}}\rangle = 111.6 \pm
0.5$~km~s$^{-1}$ with a dispersion of $\sigma_v = 8.0
\pm 0.7$~km~s$^{-1}$. The velocity dispersion is the
per-measurement velocity error subtracted in quadrature from the
$1\sigma$ width of the velocity distribution. The per-measurement
error is 3.9~km~s$^{-1}$, which is the standard deviation
of the differences in measured velocities for the 17\ duplicate
spectra. In comparison, \citet{wes06} found $\langle
v_{\rm{helio}}\rangle = 110.4 \pm
0.8$~km~s$^{-1}$ (difference of $+1.2\sigma$)
and $\sigma_v = 8.8 \pm 0.6$~km~s$^{-1}$
(difference of $-0.9\sigma$). The comparison of the
velocity dispersions depends on the assumed binary fraction
\citep{que95} and---given the presence of multiple kinematically and
spatially distinct populations in Sculptor (T04)---the region of
spectroscopic selection. Furthermore, \citet{wal07,wal09} reported
velocity dispersion gradients, and \citet{bat08a} reported mean
velocity gradients along the major axis, indicating rotation. We
choose not to address the kinematic complexity of this system in this
paper.
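The quoted dispersion follows from subtracting the per-measurement error in quadrature from the $1\sigma$ width of the fitted velocity distribution, e.g. (a trivial sketch with our own function name; the numerical values in the test are illustrative):

```python
import numpy as np

def intrinsic_dispersion(gaussian_width, per_measurement_error):
    """Subtract the per-measurement velocity error in quadrature from
    the 1-sigma width of the velocity distribution (km/s)."""
    return np.sqrt(gaussian_width**2 - per_measurement_error**2)
```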
\subsection{Continuum Determination}
In the abundance analysis described in Sec.~\ref{sec:measure}, it is
necessary to normalize each stellar spectrum by dividing by the slowly
varying stellar continuum. KGS08 determined the continuum by
smoothing the regions of the stellar spectrum free from strong
absorption lines. Instead of smoothing, we fit a B-spline with a
breakpoint spacing of 150~\AA\ to the same ``continuum regions''
defined by KGS08. Each pixel was weighted by its inverse variance in
the fit. Furthermore, the fit was performed iteratively such that
pixels that deviated from the fit by more than $5\sigma$ were removed
from the next iteration of the fit.
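A minimal implementation of such an iterative, inverse-variance-weighted spline fit might look as follows. This is a sketch, not the pipeline code: it uses SciPy's least-squares spline as a stand-in for the actual B-spline fitter, and all names are ours.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_continuum(wave, flux, ivar, spacing=150.0, nsigma=5.0, maxiter=10):
    """Iteratively fit a cubic B-spline with ~`spacing`-Angstrom
    breakpoints, weighting each pixel by its inverse variance and
    rejecting >nsigma outliers between iterations."""
    good = ivar > 0
    spl = None
    for _ in range(maxiter):
        # Interior knots every `spacing` Angstroms within the good pixels.
        knots = np.arange(wave[good].min() + spacing,
                          wave[good].max() - spacing, spacing)
        spl = LSQUnivariateSpline(wave[good], flux[good], knots,
                                  w=np.sqrt(ivar[good]), k=3)
        resid = (flux - spl(wave)) * np.sqrt(ivar)
        keep = good & (np.abs(resid) < nsigma)
        if keep.sum() == good.sum():
            break
        good = keep
    return spl(wave)
```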
The spline fit results in a smoother continuum determination than
the smoothing approach. Whereas the smoothed continuum value may be influenced
heavily by one or a few pixels within a relatively small smoothing
kernel, the spline fit is a global fit. It is more likely to be
representative of the true stellar continuum than a smoothed spectrum.
\citet{she09} pointed out the importance of determining the continuum
accurately when measuring weak lines in medium-resolution spectra.
They refined their continuum determinations by iteratively fitting a
high-order spline to the quotient of the observed spectrum and the
best-fitting synthetic spectrum. We adopted this procedure as well.
As part of the iterative process described in
Sec.~\ref{sec:iterations}, we fit a B-spline with a breakpoint spacing
of 50~\AA\ to the observed spectrum divided by the best-fitting
synthetic spectrum. We divided the observed spectrum by this spline
before the next iteration of abundance measurement.
\section{Abundance Measurements}
\label{sec:measure}
The following section details some improvements on the abundance
measurement techniques of KGS08. Aspects of the technique not
mentioned here were unchanged from the technique of KGS08. In
summary, each observed spectrum was compared to a large grid of
synthetic spectra. The atmospheric abundances were adopted from the
synthetic spectrum with the lowest $\chi^2$.
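In outline, the adoption step amounts to a $\chi^2$ search over the synthetic grid, along these lines (a schematic, not the actual code; it assumes the synthetic spectra have already been degraded to the DEIMOS resolution and resampled onto the observed wavelengths, and that masked pixels carry zero inverse variance):

```python
import numpy as np

def adopt_best_fit(obs_flux, obs_ivar, grid_fluxes, grid_params):
    """Return the parameters of the synthetic spectrum (one row of
    grid_fluxes) with the lowest chi^2 against the observed spectrum."""
    chi2 = np.sum((grid_fluxes - obs_flux)**2 * obs_ivar, axis=1)
    best = int(np.argmin(chi2))
    return grid_params[best], chi2[best]
```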
A major improvement was our measurement of four individual elemental
abundances in addition to Fe: Mg, Si, Ca, and Ti. We chose these
elements because they are important in characterizing the star
formation history of a stellar population and because a significant
number of lines represent each of them in the DEIMOS spectral range.
\subsection{Model Atmospheres}
Like KGS08, we built synthetic spectra based on ATLAS9 model
atmospheres \citep{kur93} with no convective overshooting
\citep{cas97}. KGS08 chose to allow the atmospheres to have ${\rm [\alpha/Fe]}
= +0.4$ or ${\rm [\alpha/Fe]} = 0.0$. This choice allowed them to use the
large grid of ATLAS9 model atmospheres computed with new opacity
distribution functions \citep{cas04}. However, we found that
best-fitting model spectra computed by KGS08 tended to cluster around
${\rm [\alpha/Fe]} = +0.2$ due to the discontinuity in $\chi^2$ caused by the
abrupt switch between alpha-enhanced and solar-scaled models.
\begin{deluxetable}{lccc}
\tablewidth{0pt}
\tablecolumns{4}
\tablecaption{New Grid of ATLAS9 Model Atmospheres\label{tab:atlas}}
\tablehead{\colhead{Parameter} & \colhead{Minimum Value} & \colhead{Maximum Value} & \colhead{Step}}
\startdata
$T_{\rm{eff}}$~(K) & 3500 & 5600 & 100 \\
& 5600 & 8000 & 200 \\
$\log g$~(cm~s$^{-2}$) & 0.0 ($T_{\rm eff} < 7000$~K) & 5.0 & 0.5 \\
& 0.5 ($T_{\rm eff} \ge 7000$~K) & 5.0 & 0.5 \\
{[A/H]} & $-4.0$ & $0.0$ & $0.5$ \\
[$\alpha$/Fe] & $-0.8$ & $+1.2$ & $0.1$ \\
\enddata
\end{deluxetable}
To avoid this discontinuity, we recomputed ATLAS9 model atmospheres on
the grid summarized in Table~\ref{tab:atlas}. The new grid required
computing new opacity distribution functions (ODFs), for which we
used the DFSYNTHE code \citep{cas05}. Unlike the grid of
\citet{cas04}, we adopted the solar composition of \citet{and89},
except for Fe, for which we followed \citet[][see the note in
Table~\ref{tab:solarspec}]{sne92}. One opacity distribution
function was computed for each of the 189 combinations of [A/H] and
[$\alpha$/Fe]\ specified in Table~\ref{tab:atlas}. The abundances of all the
elements except H and He were augmented by [A/H]. Additionally, the
abundances of O, Ne, Mg, Si, Ar, Ca, and Ti were augmented by [$\alpha$/Fe].
These ODFs were used to compute one ATLAS9 model atmosphere for each
grid point in Table~\ref{tab:atlas} and for two values of
microturbulent velocity, for a total of 139104 model atmospheres.
\subsection{Microturbulent Velocity}
\label{sec:vt}
In order to reduce the number of parameters required to determine a
stellar abundance, KGS08 assumed that the microturbulent velocity
($\xi$) of the stellar atmosphere was tied to the surface gravity
($\log g$). They chose to fit a line to the spectroscopically measured
$\xi$ and $\log g$\ of the giant stars in \citeauthor{ful00}'s
(\citeyear{ful00}) sample:
\begin{equation}
\xi~(\mathrm{km~s}^{-1}) = 2.70 - 0.51\,\log g
\end{equation}
We also adopted a relation between $\xi$ and $\log g$, but we
re-determined this relation from the GC red giant sample of KGS08
combined with \citeauthor{kir09}'s (\citeyear{kir09}) compilation of
high-resolution spectroscopic measurements from the literature
\citep[][and references from
KGS08]{fre09,gei05,joh02,lai07,she01,she03}. The best-fit line
between the spectroscopically measured $\xi$ and $\log g$\ is
\begin{equation}
\label{eq:vtlogg} \xi~(\mathrm{km~s}^{-1}) = (2.13 \pm 0.05) - (0.23 \pm 0.03)\,\log g \: ,
\end{equation}
\noindent
corresponding roughly to a 0.0--0.5~km~s$^{-1}$ decrease in $\xi$
relative to the KGS08 relation, depending on $\log g$. In the
generation of the grid of synthetic
stellar spectra described in Sec.~\ref{sec:generation}, $\xi$ was not
a free parameter, but was fixed to $\log g$\ via Eq.~\ref{eq:vtlogg}.
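Tying $\xi$ to $\log g$\ via Eq.~\ref{eq:vtlogg} reduces to a one-line relation, e.g. (using the central values of the fit; the function name is ours):

```python
def microturbulence(logg):
    """Microturbulent velocity (km/s) fixed to surface gravity via the
    best-fit linear relation xi = 2.13 - 0.23 * log g."""
    return 2.13 - 0.23 * logg
```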
In general, a decrease in $\xi$ increases the measurement of [Fe/H].
Therefore, this change tended to increase the derived values of [Fe/H].
A typical change in [Fe/H]\ was $\la +0.05$~dex. This change would be
more severe in an HRS analysis based on equivalent widths (EWs). In
our $\chi^2$ minimization, the abundance measurement was most
sensitive to lines with large $d({\rm EW})/d{\rm [Fe/H]}$. Such lines are
the weak, unsaturated transitions whose strength does not depend on
$\xi$. The DEIMOS spectra contain enough of these weak lines that
$\xi$ did not play a large role in the abundance determination.
\subsection{Line List}
\label{sec:linelist}
We compared the \ion{Fe}{1} oscillator strengths ($\log gf$) in the
KGS08 line list to values measured in the laboratory \citep{fuh06}.
Most of the KGS08 oscillator strengths were stronger than the
laboratory measurements. The average offset was 0.13~dex. Because
KGS08 calibrated their line list to the solar spectrum, we interpreted
this offset as a systematic error in the solar model atmosphere, solar
spectral synthesis, and/or solar composition. Accepting the
laboratory-measured values as more accurate than the solar
calibration, we replaced \ion{Fe}{1} oscillator strengths with
\citeauthor{fuh06} where available, and we subtracted 0.13~dex from
$\log gf$ for all other \ion{Fe}{1} transitions in the KGS08 line
list. All other data remained unchanged.
Decreasing the oscillator strengths requires a larger [Fe/H]\ to match
the observed spectrum. The amount of change in [Fe/H]\ depends on the
atmospheric parameters as well as the saturation of the measured Fe
lines. From comparison of results with the old and new line lists, we
estimate a typical change in [Fe/H]\ to be $\sim +0.1$~dex.
\subsection{Generation of Synthetic Spectra}
\label{sec:generation}
The spectra were synthesized as described in KGS08. Specifically, the
current version of the local thermodynamic equilibrium (LTE) spectrum
synthesis software MOOG \citep{sne73} generated one spectrum for each
point on the grid. The spectral grid was more finely spaced in
[Fe/H]\ than the model atmosphere grid. The spacing is 0.1~dex for each
of [Fe/H]\ and [$\alpha$/Fe], yielding a total of 316848 synthetic spectra.
\begin{deluxetable}{lr|lr}
\tablewidth{0pt}
\tablecolumns{4}
\tablecaption{Adopted Solar Composition\label{tab:solarspec}}
\tablehead{\colhead{Element} & \colhead{$12 + \log \epsilon$} & \colhead{Element} & \colhead{$12 + \log \epsilon$}}
\startdata
Mg & 7.58 & Ti & 4.99 \\
Ca & 6.36 & Fe & 7.52 \\
Si & 7.55 & & \\
\enddata
\tablecomments{This composition is adopted from \protect\citet{and89},
except for Fe. For justification of the adopted Fe solar abundance,
see \citet{sne92}. The abundance of an element X is defined as its
number density relative to hydrogen: $12+ \log \epsilon_{\rm{X}} =
12 + \log (n_{\rm{X}}) - \log (n_{\rm{H}})$.}
\end{deluxetable}
The solar composition used in the generation of the synthetic spectra
was identical to the solar composition used in the computation of the
model atmospheres. Table~\ref{tab:solarspec} lists the adopted solar
abundances for the five elements for which we measure abundances in
Sculptor stars.
\subsection{Effective Temperatures and Surface Gravities}
\label{sec:tefflogg}
Different spectroscopic studies of chemical abundances rely on
different sources of information for determining the effective
temperature ($T_{\rm{eff}}$) and surface gravity ($\log g$) of the stellar
atmosphere. KGS08 consulted Yonsei-Yale model isochrones
\citep{dem04} to determine the temperature and gravity that correspond
to a dereddened color and an extinction-corrected absolute magnitude.
They also considered Victoria-Regina \citep{van06} and Padova
\citep{gir02} model isochrones, as well as an empirical
color-temperature relation \citep{ram05}.
The Fe lines accessible in DEIMOS spectra span a large range of
excitation potential. Together, these different lines provide a
constraint on $T_{\rm{eff}}$. KGS08 (their Sec.~5.1) showed that---without any
photometric information---the synthesis analysis of medium-resolution
spectra of GC stars yielded values of $T_{\rm{eff}}$\ very close to values
previously measured from HRS. Therefore, we chose to measure
$T_{\rm{eff}}$\ from photometry and spectroscopy simultaneously.
To begin, we converted extinction-corrected \citep{sch98} Washington
$M$ and $T_2$ magnitudes to Cousins $V_{\rm C}$ and $I_{\rm C}$
magnitudes \citep{maj00}. With these magnitudes, we computed
$T_{\rm{eff}}$\ from the Yonsei-Yale, Victoria-Regina, and Padova model
isochrones, as well as the \citet{ram05} empirical color-based $T_{\rm{eff}}$.
For each measurement, we estimated the effect of photometric error by
measuring the standard deviation of $T_{\rm{eff}}$\ determined from 1000 Monte
Carlo realizations of $V_{\rm C}$ and $I_{\rm C}$. In each
realization, $V_{\rm C}$ and $I_{\rm C}$ were chosen from a normal
distribution with a mean of the measured, extinction-corrected
magnitude and a standard deviation of the photometric error. We call
this error $\delta T_{{\rm eff,}i}$, where $i$ represents each of the
four photometric methods of determining $T_{\rm{eff}}$. In order to arrive at
a single photometric $T_{\rm{eff}}$, we averaged the four $T_{{\rm eff,}i}$
together with appropriate error weighting. We also estimated the
random and systematic components of error. In summary,
\begin{eqnarray}
\label{eq:teffphot} \overline{T_{\rm{eff}}} &=& \frac{\sum_i T_{{\rm eff,}i} \delta T_{{\rm eff,}i}^{-2}}{\sum_i \delta T_{{\rm eff,}i}^{-2}} \\
\delta_{\rm{rand}} T_{\rm{eff}} &=& \frac{\sum_i \delta T_{{\rm eff,}i}^{-1}}{\sum_i \delta T_{{\rm eff,}i}^{-2}} \\
\delta_{\rm{sys}} T_{\rm{eff}} &=& \sqrt{\frac{\sum_i \delta T_{{\rm eff,}i}^{-2} \sum_i \delta T_{{\rm eff,}i}^{-2} \left(T_{{\rm eff,}i} - \overline{T_{\rm{eff}}}\right)^2}{\left(\sum_i \delta T_{{\rm eff,}i}^{-2}\right)^2 - \sum_i \delta T_{{\rm eff,}i}^{-4}}} \\
\label{eq:tefferr} \delta_{\rm{total}} T_{\rm{eff}} &=& \sqrt{(\delta_{\rm{rand}} T_{\rm{eff}})^2 + (\delta _{\rm{sys}}T_{\rm{eff}})^2}
\end{eqnarray}
For the stars in this data set, the median random, systematic, and
total errors on $T_{\rm{eff}}$\ were 98~K, 58~K,
and 117~K respectively. The somewhat large errors on
the photometric temperatures indicated that the spectra may help
constrain $T_{\rm{eff}}$. Therefore, Eq.~\ref{eq:teffphot} does not show the
final temperature used in the abundance determination.
Section~\ref{sec:iterations} describes the iterative process for
determining $T_{\rm{eff}}$\ and elemental abundances from spectroscopy.
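The error-weighted combination of the four photometric temperatures can be transcribed as follows. This is a sketch with our own variable names: the random component follows Eq.~4 as written, and the systematic component uses the standard unbiased weighted-variance form.

```python
import numpy as np

def combine_teff(teff, dteff):
    """Error-weighted combination of the photometric T_eff estimates:
    weighted mean plus random, systematic, and total error components."""
    teff = np.asarray(teff, dtype=float)
    d = np.asarray(dteff, dtype=float)
    w = d**-2                          # inverse-variance weights
    mean = np.sum(teff * w) / np.sum(w)
    rand = np.sum(d**-1) / np.sum(w)   # random component, per Eq. 4
    # Systematic component: unbiased weighted variance of the estimates
    # about the weighted mean.
    sys = np.sqrt(np.sum(w) * np.sum(w * (teff - mean)**2)
                  / (np.sum(w)**2 - np.sum(w**2)))
    return mean, rand, sys, np.hypot(rand, sys)
```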
We followed a similar procedure for determining
$\log g$\ photometrically, except that we used only the three model
isochrones and not any empirical calibration. The error on the true
distance modulus \citep[$19.67 \pm 0.12$,][]{pie08} was included in
the Monte Carlo determination of the error on $\log g$. The median
random, systematic, and total errors on $\log g$\ were
0.06, 0.01, and 0.06. These
errors are very small, and the medium-resolution, red spectra have
little power to help constrain $\log g$\ because there are so few ionized
lines visible. Therefore, we assumed the photometric value of
$\log g$\ for the abundance analysis.
\subsection{Wavelength Masks}
\label{sec:masks}
The procedure described in the next section consisted of separately
measuring the abundances of five elements: Mg, Si, Ca, Ti, and Fe.
The procedure relied on finding the synthetic spectrum that best
matched an observed spectrum. In order to make this matching most
sensitive to a particular element, we masked all spectral regions that
were not significantly affected by abundance changes of that element.
To make the wavelength masks, we began with a base spectrum that
represented the solar composition in which the abundances of all the
metals were scaled down by 1.5~dex ($\rm{[A/H]} = -1.5$). The
temperature and gravity of the synthetic star were $T_{\rm eff} =
4000$~K and $\log g = 1.0$. Then, we created one pair of spectra
for each of the five elements. In one spectrum, the abundance of the
element was enhanced by 0.3~dex, and in the other, depleted by
0.3~dex. Spectral regions where the flux difference between these two
spectra exceeds 0.5\% were used in the abundance determination of that
element. This small threshold assured that weak lines, which
experience large fractional changes in EW as [Fe/H]\ changes, were
included in the analysis. We repeated this procedure for spectra with
$T_{\rm eff} = 5000$~K, 6000~K, 7000~K, and 8000~K. Additional spectral
regions that passed the 0.5\% flux difference criterion were also
included in the abundance determination of that element. All other
wavelengths were masked.
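As a concrete sketch, the masking criterion amounts to a pixel-wise comparison of the two perturbed synthetic spectra. The function below is illustrative only; the name and interface are ours, not the authors' code.

```python
import numpy as np

def element_mask(flux_enhanced, flux_depleted, threshold=0.005):
    """Flag pixels where perturbing one element's abundance by +/-0.3 dex
    changes the synthetic flux by more than 0.5% (the criterion above)."""
    return np.abs(flux_enhanced - flux_depleted) > threshold
```

The final mask for an element is then the logical OR of these flags over the 4000--8000~K grid of synthetic spectra.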
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f5.eps}
\caption{A coaddition of all Sculptor stars with the continuum
normalized to unity. The high SNR provided by the coaddition makes
stellar absorption lines readily apparent. The colored regions show
the wavelength masks used in the determination of the abundance of
each element. Regions susceptible to telluric absorption are
labeled with blue text. Because large residuals from the telluric
absorption correction remain, we eliminate these regions from the
abundance analysis. Some stellar features excluded from the
abundance measurement are also labeled.\label{fig:coadd}}
\end{figure}
The result was one wavelength mask for each of Mg, Si, Ca, Ti, and Fe,
shown in Fig.~\ref{fig:coadd}. We also created one ``$\alpha$'' mask
as the union of the Mg, Si, Ca, and Ti masks. The $\alpha$
element regions do not overlap with each other, but they do overlap
with the Fe regions. The most severe case
is the Ca mask, where $\sim 35\%$ of the pixels are shared with the Fe
mask. However, the overlap did not introduce interdependence in the
abundance measurements. The $\alpha$ element abundances were held
fixed while [Fe/H]\ was measured, and the Fe abundance was held fixed
while [$\alpha$/Fe]\ was measured. The measurements of [Fe/H]\ and [$\alpha$/Fe]\ were
performed iteratively (see the next subsection). We tested the
independence of the measurements by removing all overlapping pixels
from consideration. Abundance measurements changed on average by only
0.01~dex.
\subsection{Measuring Atmospheric Parameters and Elemental Abundances}
\label{sec:iterations}
A Levenberg-Marquardt algorithm \citep[the IDL routine MPFIT, written
by][]{mark09} found the best-fitting synthetic spectrum in ten
iterative steps. In each step, the $\chi^2$ was computed between an
observed spectrum and a synthetic spectrum degraded to match the
resolution of the observed spectrum. First, we interpolated the
synthetic spectrum onto the same wavelength array as the observed
spectrum. Then, we smoothed the synthetic spectrum through a Gaussian
filter whose width was the observed spectrum's measured resolution as
a function of wavelength.
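The resolution matching described here can be sketched as follows. For brevity this sketch uses a single smoothing width, whereas the actual procedure lets the width vary with wavelength; the function name is ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_synthetic(wave_obs, wave_syn, flux_syn, sigma_pix):
    """Interpolate a synthetic spectrum onto the observed wavelength
    array, then smooth it with a Gaussian whose width (here a constant,
    in pixels) stands in for the observed spectral resolution."""
    flux = np.interp(wave_obs, wave_syn, flux_syn)
    return gaussian_filter1d(flux, sigma_pix)
```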
\begin{enumerate}
\item $T_{\rm{eff}}$\ and [Fe/H], first pass: An observed spectrum was compared
to a synthetic spectrum with $T_{\rm{eff}}$\ and $\log g$\ determined as
described in Sec.~\ref{sec:tefflogg} and [Fe/H]\ determined from
Yonsei-Yale isochrones. For this iteration, [$\alpha$/Fe]\ was fixed at 0.0
(solar), and only spectral regions most susceptible to Fe absorption
(Sec.~\ref{sec:masks}) were considered. The two quantities $T_{\rm{eff}}$\
and [Fe/H]\ were varied, and the algorithm found the best-fitting
synthetic spectrum by minimizing $\chi^2$. We sampled the parameter
space between grid points by linearly interpolating the synthetic
spectra at the neighboring grid points. $T_{\rm{eff}}$\ was also loosely
constrained by photometry. As the spectrum caused $T_{\rm{eff}}$\ to stray
from the photometric values, $\chi^2$ increased, and it increased
more sharply for smaller photometric errors (as calculated in
Eq.~\ref{eq:tefferr}). Therefore, both photometry and spectroscopy
determined $T_{\rm{eff}}$. Photometry alone determined $\log g$.
\item {[$\alpha$/Fe]}, first pass: For this iteration, $T_{\rm{eff}}$, $\log g$, and [Fe/H]\
were fixed. Only [$\alpha$/Fe]\ was allowed to vary. In the model stellar
atmosphere, the abundances of the $\alpha$ elements with respect to
Fe varied together. Only the spectral regions susceptible to
absorption by Mg, Si, Ca, or Ti were considered.
\item Continuum refinement: The continuum-divided, observed spectrum
was divided by the synthetic spectrum with the parameters determined
in steps 1 and 2. The result approximated a flat noise spectrum.
To better determine the continuum, we fit a B-spline with a
breakpoint spacing of 50~\AA\ to the residual spectrum. We divided
the observed spectrum by the spline fit.
\item {[Fe/H]}, second pass: We repeated step 1 with the revised spectrum,
but $T_{\rm{eff}}$\ was held fixed at the previously determined value.
\item{[Mg/Fe]: We repeated step 2. However, only Mg spectral lines
were considered in the abundance measurement.}
\item{[Si/Fe]: We repeated step 5 for Si instead of Mg.}
\item{[Ca/Fe]: We repeated step 5 for Ca instead of Mg.}
\item{[Ti/Fe]: We repeated step 5 for Ti instead of Mg.}
\item {[$\alpha$/Fe]}, second pass: We repeated step 2 for all of the $\alpha$
elements instead of just Mg. This step was simply a different way to
average the $\alpha$ element abundances than combining the
individual measurements of [Mg/Fe], [Si/Fe], [Ca/Fe], and [Ti/Fe].
\item {[Fe/H]}, third pass: The value of [$\alpha$/Fe]\ affected the measurement of
[Fe/H]\ because [$\alpha$/Fe]\ can affect the structure of the stellar
atmosphere. Specifically, the greater availability of electron
donors with an increased [$\alpha$/Fe]\ ratio allows for a higher density of
H$^{-}$ ions. The subsequent increase in continuous opacity
decreases the strength of Fe and other non-$\alpha$ element lines.
With [$\alpha$/Fe]\ fixed at the value determined in step 9, we re-measured
[Fe/H]. Typically, [Fe/H]\ changed from the value determined in step 1
by much less than 0.1~dex.
\end{enumerate}
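The joint spectroscopic--photometric constraint on $T_{\rm{eff}}$\ in step 1 amounts to appending a prior term to the residual vector being minimized. The toy fit below illustrates the idea; the model spectrum and all names are invented for illustration and bear no relation to the actual synthetic grid.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_model(wave, teff, feh):
    # Invented stand-in for grid interpolation: the continuum slope
    # tracks Teff and the line depth tracks metallicity.
    cont = 1.0 + 1e-5 * (teff - 5000.0) * (wave - wave.mean())
    line = 10.0 ** feh * np.exp(-0.5 * ((wave - 8600.0) / 0.5) ** 2)
    return cont - line

def residuals(params, wave, flux, err, teff_phot, teff_phot_err):
    teff, feh = params
    spec = (flux - toy_model(wave, teff, feh)) / err
    prior = (teff - teff_phot) / teff_phot_err  # photometric Teff term
    return np.append(spec, prior)

wave = np.linspace(8590.0, 8610.0, 200)
flux = toy_model(wave, 4500.0, -1.5)          # noiseless mock observation
err = np.full_like(wave, 0.01)
fit = least_squares(residuals, x0=[5000.0, -1.0],
                    args=(wave, flux, err, 4500.0, 100.0))
```

Because the prior residual scales inversely with the photometric error, a smaller photometric error steepens the $\chi^2$ penalty for straying from the photometric temperature, as described in step 1.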
\subsection{Correction to [Fe/H]}
\label{sec:correction}
In comparing our MRS measurements of [Fe/H]\ to HRS measurements of the
same stars (see the appendix), we noticed that our measurements of
metal-poor stars were consistently $\sim 0.15$~dex lower. The same
pattern is also visible in the \citet{kir08a} GC measurements (see
their Figs.~6, 7, 10, and 11).
We have thoroughly examined possible sources of this difference of
scale. The changes to the microturbulent velocity relation
(Sec.~\ref{sec:vt}) and the line list (Sec.~\ref{sec:linelist}) were
intended to yield a more accurate and standardized estimation of [Fe/H],
but the offset still remained. Restricting the analysis to narrow
spectral regions did not reveal any systematic trend of [Fe/H]\ with
wavelength.
A possible explanation for this offset is overionization
\citep{the99}. Ultraviolet radiation in stellar atmospheres can
ionize Fe more than would be expected in LTE. Therefore, the
abundance of \ion{Fe}{1} would seem to be lower than the abundance of
\ion{Fe}{2} in an LTE analysis. \ion{Fe}{2} does not suffer from this
effect. However, the effect is smaller at higher [Fe/H], and we do not
observe a trend with metallicity for the offset of our values relative
to HRS studies.
In order to standardize our measurements with previous HRS studies, we
added 0.15~dex to all of our measurements of [Fe/H]. This offset and
the microturbulent velocity-surface gravity relation are the only ways
in which previous HRS studies inform our measurements. Furthermore,
this offset is not intended to change the standardization of our
abundances. All of the abundances in this article, including those
from other studies, are given relative to the solar abundances quoted
in Table~\ref{tab:solarspec}.
\subsection{Error Estimation}
\label{sec:error}
\begin{deluxetable}{lr|lr}
\tablewidth{0pt}
\tablecolumns{4}
\tablecaption{Systematic Abundance Errors\label{tab:syserr}}
\tablehead{\colhead{Element Ratio} & \colhead{$\delta_{\rm sys}$} & \colhead{Element Ratio} & \colhead{$\delta_{\rm sys}$}}
\startdata
\protect[Fe/H] & 0.136 & [Ca/Fe] & 0.087 \\
\protect[Mg/Fe] & 0.108 & [Ti/Fe] & 0.101 \\
\protect[Si/Fe] & 0.179 & & \\
\enddata
\end{deluxetable}
We repeated the error estimation procedure described by KGS08 (their
Sec.~6), applying their abundance analysis to GC stars with the above
modifications. We no longer found a convincing trend of
$\delta{\rm [Fe/H]}$ with [Fe/H]. Instead, we estimate the total error on
[Fe/H]\ by adding a systematic error in quadrature with the
SNR-dependent uncertainty of the synthetic spectral fit. The
magnitude of $\delta_{\rm{sys}}{\rm [Fe/H]} = 0.136$ was the value
required to force HRS and MRS [Fe/H]\ estimates of the same GC stars to
agree at the 1$\sigma$ level. We also estimated systematic errors for
each of [Mg/Fe], [Si/Fe], [Ca/Fe], and [Ti/Fe] in the same manner as
for [Fe/H]. These are listed in Table~\ref{tab:syserr}.
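The total error is a quadrature sum; as a minimal sketch (the function name is ours):

```python
import numpy as np

def total_error(fit_err, sys_err):
    """Total abundance error: the SNR-dependent uncertainty of the
    spectral fit added in quadrature with the systematic floor
    (e.g., 0.136 for [Fe/H])."""
    return np.hypot(fit_err, sys_err)
```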
\section{Results}
\label{sec:abund}
In this section, we discuss the interpretation of the abundance
measurements in Sculptor, all of which are presented in
Table~\ref{tab:data} on the last page of this manuscript.
\subsection{Metallicity Distribution}
The metallicity distribution function (MDF) of a dwarf galaxy can
reveal much about its star formation history. In chemical evolution
models of dwarf galaxies \cite[e.g.,][]{lan04,mar06,mar08}, the
duration of star formation affects the shape of the MDF. The MDF also
has implications for the formation of the MW. If the MW halo was
built from dSphs \citep{sea78,whi78}, then it is important to find
dSph counterparts to halo field stars at all metallicities, as pointed
out by \citet[][hereafter H06]{hel06}.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f6.eps}
\caption{The metallicity distribution in Sculptor. The red curve is
the maximum likelihood fit to a galactic chemical evolution model
with pre-enrichment (Eq.~\ref{eq:gce}), and the green curve is the
maximum likelihood fit to a model of star formation in the presence
of infalling, zero-metallicity gas (Eq.~\ref{eq:infallgce}). The
long, metal-poor tail is typical for systems with non-instantaneous
star formation.\label{fig:feh_hist}}
\end{figure}
Figure~\ref{fig:feh_hist} shows the MDF of Sculptor. The shape of the
MDF is highly asymmetric, with a long, metal-poor tail \citep[as
predicted by][]{sal09}. The inverse-variance weighted mean is
$\langle{\rm [Fe/H]}\rangle = -1.58$ with a standard deviation of
$0.41$. The median is $-1.58$ with a median absolute
deviation of $0.33$ and an interquartile range of $0.67$.
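The quoted statistics are standard; a sketch of their computation, assuming arrays of [Fe/H]\ values and their errors, is:

```python
import numpy as np

def mdf_stats(feh, feh_err):
    """Summary statistics quoted for the MDF: inverse-variance weighted
    mean, median, median absolute deviation, and interquartile range."""
    feh = np.asarray(feh, dtype=float)
    w = 1.0 / np.asarray(feh_err, dtype=float) ** 2
    wmean = np.sum(w * feh) / np.sum(w)
    med = np.median(feh)
    mad = np.median(np.abs(feh - med))
    q75, q25 = np.percentile(feh, [75, 25])
    return wmean, med, mad, q75 - q25
```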
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f7.eps}
\caption{Regions of the DEIMOS spectrum of the extremely metal-poor
star S1020549, which has ${\rm [Fe/H]} = -3.80 \pm
0.28$. The spectrum appears particularly noisy because
the $y$-axis range is small. Some Fe absorption lines are barely
detectable, but all together, they contain enough signal to make a
quantitative measurement of [Fe/H]. The shading corresponds to the
same spectral region shown in Fig.~\ref{fig:coadd}. Frebel, Kirby,
\& Simon (in preparation) will present a high-resolution spectrum of
this star, which confirms the extremely low
metallicity.\label{fig:1020549}}
\end{figure}
The MDF boasts an exceptionally metal-poor star, S1020549. The
metallicity is ${\rm [Fe/H]} = -3.80 \pm 0.28$.
Figure~\ref{fig:1020549} shows how weak the Fe absorption lines are in
this star. Frebel, Kirby, \& Simon (in preparation) have confirmed
this extremely low metallicity with a high-resolution spectrum.
Sculptor is now the most luminous dSph in which an extremely
metal-poor (EMP, ${\rm [Fe/H]} < -3$) star has been detected.
[\citet{kir08b} discovered 15 EMP stars across eight ultra-faint dwarf
galaxies, and \citet{coh09} discovered one EMP star in the
\object[NAME DRACO DSPH GALAXY]{Draco dSph}.] Stars more metal-poor
than S1020549 are known to exist only in the Milky Way field. This
discovery hints that dSph galaxies like Sculptor may have
contributed to the formation of the metal-poor component of the halo.
We discuss Sculptor's link to the halo further in
Sec.~\ref{sec:halomdf}.
The $MT_2D$ photometric selection of spectroscopic targets may have
introduced a tiny [Fe/H]\ bias. Figure~\ref{fig:cmd} shows that the RGB
is sharply defined in Sculptor. Because the number density of stars
redward and blueward of the RGB is much lower than the number density
on the RGB, the number of very young or very metal-poor stars
(blueward) or very metal-rich stars (redward) missed by photometric
pre-selection must be negligible. Furthermore, the hard color cut (as
opposed to one that depends on $M-D$ color) was $0.6 < (M - T_2)_0 <
2.2$. The CMD gives no reason to suspect Sculptor RGB members outside
of these limits, but it is possible that some extremely blue Sculptor
members have been excluded.
\subsubsection{Possible Explanation of the Discrepancy with Previous Results}
Our measured MDF and our detection of EMP stars in Sculptor are at
odds with the findings of H06. Whereas our MDF peaks at ${\rm [Fe/H]}
\sim -1.3$, theirs peaks at ${\rm [Fe/H]} \sim -1.8$. Furthermore, our
observed MDF is much more asymmetric than that of H06, which may even
be slightly asymmetric in the opposite sense (a longer metal-{\it
rich} tail). The greater symmetry would indicate a less extended star
formation history or early infall of a large amount of gas
\citep{pra03}.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f8.eps}
\caption{Sculptor's metallicity distribution as observed in this study
(MRS, {\it black}) and by B08b (HRS, {\it red}), which is a subset
of the MDF observed by H06 (Ca triplet, {\it green}). The CaT-based
MDF is more metal-poor probably because the sample of H06 is more
spatially extended than the other two
samples.\label{fig:feh_hist_bat08}}
\end{figure}
\citet[][hereafter B08b]{bat08b} observed a subset of the H06 stars at
high resolution. The MDFs from the two studies have noticeably
different shapes. Figure~\ref{fig:feh_hist_bat08} shows that the HRS
MDF peaks at ${\rm [Fe/H]} \sim -1.3$, which is also the peak that we
observe. The mean and standard deviation of their MDF are
$-1.56$ and $0.38$. However, the MDF of H06 peaks
at ${\rm [Fe/H]} \sim -1.8$, and the mean and standard deviation are
$-1.82$ and $0.35$. The overlapping stars between
the samples of B08b and H06 agree very well.
The most likely explanation for the different MDFs is the different
spatial sampling of the three studies. Sculptor has a steep radial
metallicity gradient \citep[][also see
Sec.~\ref{sec:gradients}]{tol04,wes06,wal09}. The stars in the
center of Sculptor are more metal-rich than stars far from the center.
H06 sampled stars out to the tidal radius \citep[$r_t =
76.5$~arcmin,][]{mat98}, but we and B08b sampled stars only out to
about 11~arcmin. As a result, the mean metallicity of the H06 CaT
sample is lower than that of our MRS sample and the B08b HRS sample. In the
next subsection, we address the chemical evolution of Sculptor based
on its MDF. Our conclusions are based only on stars within the
central 11~arcmin.
\subsubsection{Quantifying Chemical Evolution in Sculptor}
In chemical evolution models, extended star formation produces a long,
metal-poor tail. \citet{pra08} described the shape of the
differential metallicity distribution derived from a ``simple model''
of galactic chemical evolution. Expressed in terms of [Fe/H]\ instead
of metal fraction $Z$, the predicted distribution is
\begin{equation}
\frac{dN}{d{\rm [Fe/H]}} = A\left(10^{{\rm [Fe/H]}} - 10^{{\rm [Fe/H]}_0}\right) \exp \left(-\frac{10^{{\rm [Fe/H]}}}{p}\right)\label{eq:gce}
\end{equation}
\noindent where $p$ is the effective yield in units of the solar metal
fraction ($Z_\sun$) and ${\rm [Fe/H]}_0$ is the initial gas metallicity.
An initial metallicity is needed to resolve the Galactic G dwarf
problem \citep{van62,sch63}. $A$ is a normalization that depends on
$p$, ${\rm [Fe/H]}_0$, the final metallicity ${\rm [Fe/H]}_f$, and the number
of stars in the sample $N$:
\begin{equation}
A = \frac{(N \ln 10)/p}{\exp\left(-\frac{10^{{\rm [Fe/H]}_0}}{p}\right) -
\exp\left(-\frac{10^{{\rm [Fe/H]}_f}}{p}\right)}\label{eq:norm}
\end{equation}
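For reference, the model distribution and its normalization can be evaluated directly; the sketch below mirrors the two expressions above term by term (variable names are ours).

```python
import numpy as np

def simple_model(feh, p, feh_init, feh_final, n_stars):
    """Differential MDF of the pre-enriched Simple Model: the
    normalization A multiplied by (10**feh - 10**feh_init) * exp(-10**feh / p),
    exactly as printed above."""
    a = (n_stars * np.log(10.0) / p) / (np.exp(-10.0 ** feh_init / p)
                                        - np.exp(-10.0 ** feh_final / p))
    return a * (10.0 ** feh - 10.0 ** feh_init) * np.exp(-10.0 ** feh / p)
```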
The red curve in Figure~\ref{fig:feh_hist} is the two-parameter,
maximum likelihood fit to Eq.~\ref{eq:gce}. The likelihood $L_i$ that
star $i$ is drawn from the probability distribution defined by
Eq.~\ref{eq:gce} is the integral of the product of the error
distribution for the star and the probability distribution. The total
likelihood $L = \prod_i L_i$. The most likely $p$ and ${\rm [Fe/H]}_0$
are the values that maximize $L$. For display, the curve has been
convolved with an error distribution, which is a composite of $N$ unit
Gaussians. $N$ is the total number of stars in the observed
distribution, and the width of the $i^{\rm th}$ Gaussian is the
estimated total [Fe/H]\ error on the $i^{\rm th}$ star. This
convolution approximates the effect of measurement error on the model
curve under the assumption that the error on [Fe/H]\ does not depend on
[Fe/H]. This assumption seems to be valid because our estimates of
$\delta{\rm [Fe/H]}$ do not show a trend with [Fe/H].
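The per-star likelihood described here, the integral of the model distribution against each star's Gaussian error distribution, can be sketched with a simple grid quadrature (the grid limits are illustrative):

```python
import numpy as np

def log_likelihood(feh_obs, feh_err, model_pdf):
    """Total log-likelihood: each star contributes the integral of the
    normalized model distribution times that star's Gaussian error
    distribution, evaluated on a uniform [Fe/H] grid."""
    grid = np.linspace(-5.0, 0.5, 2001)
    dx = grid[1] - grid[0]
    pdf = model_pdf(grid)
    pdf = pdf / (pdf.sum() * dx)  # normalize the model to unit area
    logl = 0.0
    for f, e in zip(feh_obs, feh_err):
        gauss = np.exp(-0.5 * ((grid - f) / e) ** 2) / (e * np.sqrt(2.0 * np.pi))
        logl += np.log((pdf * gauss).sum() * dx)
    return logl
```

The most likely parameters are then those that maximize this quantity over the model grid.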
The most likely yield---largely determined by the [Fe/H]\ at the peak of
the MDF---is $p = 0.031 Z_\sun$. [From the MDF of H06,
\citet{pra08} calculated $p = 0.016 Z_\sun$.] We also measure
${\rm [Fe/H]}_0 = -2.92$. H06 also measured ${\rm [Fe/H]}_0 = -2.90
\pm 0.21$ for Sculptor, even though they included stars out to the
tidal radius, which are more metal-poor on average than the centrally
concentrated stars in our sample. (Instead of finding the maximum
likelihood model, they performed a least-squares fit to the cumulative
metallicity distribution without accounting for experimental
uncertainty. In general, observational errors exaggerate the extrema
of the metallicity distribution, and the least-squares fit converges
on a lower ${\rm [Fe/H]}_0$ than the maximum likelihood fit.) One
explanation that they proposed for this non-zero initial metallicity
was pre-enrichment of the interstellar gas that formed the first
stars. Pre-enrichment could result from a relatively late epoch of
formation for Sculptor, after the supernova (SN) ejecta from other
galaxies enriched the intergalactic medium from which Sculptor formed.
However, our observation of a star at ${\rm [Fe/H]} = -3.80$ is
inconsistent with pre-enrichment at the level of ${\rm [Fe/H]}_0 = -2.9$.
\citet{pra08} instead interpreted the apparent dearth of EMP stars as
an indication of early gas infall \citep{pra03}, wherein star
formation begins from a small amount of gas while the majority of gas
that will eventually form dSph stars is still falling in. In order to
test this alternative to pre-enrichment, we have also fit an Infall
Model, the ``Best Accretion Model'' of \citet[][also see
\citeauthor{pag97} \citeyear{pag97}]{lyn75}. It is one of the
models of time-decaying gas infall that admits an analytic
solution. The model assumes that the gas mass $g$ in units
of the initial mass is related quadratically to the stellar mass $s$
in units of the initial mass:
\begin{equation}
g(s) = \left(1 - \frac{s}{M}\right)\left(1 + s - \frac{s}{M}\right)\label{eq:g}
\end{equation}
\noindent
where $M$ is a parameter greater than 1. When $M=1$, Eq.~\ref{eq:g}
reduces to $g = 1 - s$, which describes the Closed Box Model.
Otherwise, $M$ monotonically increases with the amount of gas infall
and with the departure from the Simple Model. Following \citet{lyn75}
and \citet{pag97}, we assume that the initial and infalling gas
metallicity is zero. The differential metallicity distribution is
described by two equations.
\begin{eqnarray}
{\rm [Fe/H]}(s) &=& \log \Big\{p \left(\frac{M}{1 + s -
\frac{s}{M}}\right)^2 \times \label{eq:s} \\
\nonumber & & \left[\ln \frac{1}{1 - \frac{s}{M}} - \frac{s}{M} \left(1 - \frac{1}{M}\right)\right]\Big\} \\
\frac{dN}{d{\rm [Fe/H]}} &=& A \frac{10^{{\rm [Fe/H]}}}{p} \times \label{eq:infallgce} \\
\nonumber & & \frac{1 + s\left(1 - \frac{1}{M}\right)}{\left(1 - \frac{s}{M}\right)^{-1} - 2 \left(1 - \frac{1}{M}\right) \times \frac{10^{{\rm [Fe/H]}}}{p}}
\end{eqnarray}
\noindent
Equation~\ref{eq:s} is transcendental, and it must be solved for $s$
numerically. Equation~\ref{eq:infallgce} decouples the peak of the
MDF from the yield $p$. As $M$ increases, the MDF peak decreases
independently of $p$.
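Equation~\ref{eq:s} can be inverted by bracketed root finding, since $s$ runs from 0 (no stars) to $M$ (gas exhausted) and [Fe/H]\ increases monotonically along the way. The sketch below uses illustrative parameter values; the variable names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def feh_of_s(s, p, m):
    """[Fe/H] as a function of stellar mass s in the Infall Model
    (m is the infall parameter M > 1)."""
    u = s / m
    return np.log10(p * (m / (1.0 + s - u)) ** 2
                    * (np.log(1.0 / (1.0 - u)) - u * (1.0 - 1.0 / m)))

def s_of_feh(feh, p, m, eps=1e-9):
    """Numerically invert the transcendental relation on (0, M)."""
    return brentq(lambda s: feh_of_s(s, p, m) - feh, eps, m - eps)
```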
The green line in Fig.~\ref{fig:feh_hist} shows the most likely Infall
Model convolved with the error distribution as described above. The
Infall Model has $M = 1.76$, which is only a small departure
from the Simple Model.
Neither the Simple Model nor the Infall Model fits the data
particularly well. Both models fail to reproduce the sharp peak at
${\rm [Fe/H]} \sim -1.3$ and the steep metal-rich tail. However, the
Infall Model does reproduce the metal-poor tail about as well as the
Simple Model. Therefore, the Infall Model is a reasonable alternative
to pre-enrichment, and it allows the existence of the star at
${\rm [Fe/H]} = -3.80$. In reality, a precise explanation of the
MDF will likely incorporate the radial metallicity gradients and
multiple, superposed populations. It is tempting to conclude from
Fig.~\ref{fig:feh_hist} that Sculptor displays two metallicity
populations. We have not attempted a two-component fit, but that
would seem to be a reasonable approach for future work, especially in
light of \citeauthor{tol04}'s (\citeyear{tol04}) report of two
distinct stellar populations in Sculptor.
Searches for the lowest metallicity stars in the MW halo have revealed
some exquisitely metal-poor stars \citep[e.g., ${\rm [Fe/H]} =
-5.96$,][]{fre08}. Such exotic stars have not yet been discovered
in any dSph. However, if Sculptor was not pre-enriched, a large
enough sample of [Fe/H]\ measurements in Sculptor---and possibly other
dSphs---may reveal stars as metal-poor as the lowest metallicity stars
in the MW halo.
\subsubsection{Comparison to the Milky Way Halo MDF}
\label{sec:halomdf}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f9.eps}
\caption{The metal-poor tails of the MDFs in Sculptor ({\it black}) and
Galactic halo field stars \citep[{\it red},][]{sch08} shown as
cumulative distributions, all normalized to the number of stars with
${\rm [Fe/H]} < -2$. The green line shows the MDF measured by the DART
team \citep{hel06} with a calibration based on the Ca triplet. The
calibration may overpredict very low metallicities. The
synthesis-based metallicities ({\it black}, this work) are valid at
lower [Fe/H]\ than the Ca triplet [Fe/H]. Regardless, the halo has a
steeper metal-poor tail than Sculptor in both representations.
Galaxies such as Sculptor were probably not the dominant
contributors to the halo.\label{fig:halo_compare}}
\end{figure}
\citet{sea78} and \citet{whi78} posited that the MW halo formed from
the accretion and dissolution of dwarf galaxies. The dSphs that exist
today may be the survivors from the cannibalistic construction of the
Galactic halo. \citet{hel06} suggested that at least some of the halo
field stars could not have come from counterparts to the surviving
dSphs because the halo field contained extremely metal-poor stars
whereas the dSphs do not. However, \citet{sch08} showed that the
Hamburg/ESO Survey's halo MDF, after correction for selection bias,
actually looks remarkably like the MDFs of the dSphs Fornax,
\object[NAME UMi dSph]{Ursa Minor}, and Draco. Furthermore,
\citet{kir08b} presented MRS evidence for a large fraction of EMP
stars in the ultra-faint dSph sample of \citet{sim07}, suggesting that
today's surviving dSphs contain stars that span the full range of
metallicities displayed by the Galactic field halo population.
We revisit the halo comparison with the present MDF for Sculptor.
Figure~\ref{fig:halo_compare} shows the metal-poor tail (${\rm [Fe/H]} <
-2$) of the MRS synthesis-based Sculptor MDF presented here, the
CaT-based Sculptor MDF \citep{hel06}, and the MW halo MDF
\citep{sch08}. As observed in the comparisons to other dSphs
presented by \citeauthor{sch08}, the halo seems to have a steeper
metal-poor tail than the CaT-based Sculptor MDF, despite the evidence
that CaT-based metallicities overpredict [Fe/H]\ at ${\rm [Fe/H]} \la -2.2$
\citep[e.g.,][]{koc08,nor08}. The synthesis-based MDF does not rely
on empirical calibrations, and the technique has been shown to work at
least down to ${\rm [Fe/H]} = -3$ \citep{kir08b}.
This MDF shows that the halo has a much steeper metal-poor tail than
Sculptor. This result is consistent with a merging scenario wherein
several dwarf galaxies significantly larger than Sculptor contributed
most of the stars to the halo field \citep[e.g.,][]{rob05,fon06}. In
these models, the more luminous galaxies have higher mean
metallicities. Galaxies with a Sculptor-like stellar mass are
minority contributors to the halo field star population. Less
luminous galaxies are even more metal-poor \citep{kir08b}. Therefore,
Sculptor conforms to the luminosity-metallicity relation for dSphs,
and the difference between Sculptor's MDF and the MW halo MDF does not
pose a problem for hierarchical assembly.
\subsection{Alpha Element Abundances}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f10.eps}
\caption{Multi-element abundances in Sculptor ({\it black}). The
point sizes reflect the quadrature sum of the errors on [Fe/H]\ and
[X/Fe], where larger points have smaller errors. The bottom panel
shows the average of the four elements shown in the other panels.
For comparison, the red error bars show the means and standard
deviations from the seven GCs of KGS08. Because the Sculptor and GC
abundances were measured in the same way, the comparison
demonstrates that [X/Fe] declines with increasing [Fe/H]\ in Sculptor,
but not the GCs.\label{fig:alphafe_feh_gc}}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f11.eps}
\caption{As in Fig.~\ref{fig:alphafe_feh_gc}, the black points show
medium-resolution multi-element abundances in Sculptor. The points
with error bars show published high-resolution data
(\citeauthor{she03} \citeyear{she03}, {\it blue}, and
\citeauthor{gei05} \citeyear{gei05}, {\it green}). The green line
is the inverse variance-weighted average of at least 20 stars within
a window of $\Delta{\rm [Fe/H]} = 0.25$. The red line shows the
chemical evolution model of \citet[][updated 2009]{lan04}. The
onset of Type~Ia SNe causes the decline in [X/Fe] with [Fe/H]. Mg
declines steadily because it is produced exclusively in Type~II SNe,
but Si, Ca, and Ti are produced in both Type~Ia and II
SNe.\label{fig:alphafe_feh_lf}}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f12.eps}
\caption{As in Fig.~\ref{fig:alphafe_feh_gc}, the black points show
medium-resolution multi-element abundances in Sculptor. The colored
points show different components of the Milky Way \citep{ven04}: the
thin disk ({\it red}), the thick disk ({\it green}), and the field
halo ({\it cyan}). The dashed lines are moving averages of the MW
data in 0.75~dex bins of [Fe/H]. In Sculptor, [$\alpha$/Fe]\ falls at lower
[Fe/H]\ than in the halo, indicating that the halo field stars were
less polluted by Type~Ia SNe and therefore formed more rapidly than
Sculptor stars.\label{fig:alphafe_feh_venn}}
\end{figure}
The discrepancy between halo and dSph abundances extends beyond the
MDF. In the first HRS study of stars in a dSph, \citet*{she98} found
that the [Ca/Fe] ratio of metal-poor stars in Draco appeared solar, in
contrast to the enhanced halo field stars. \citet*{she01} and
\citet{she03} confirmed the same result in \object[NAME Sextans
dSph]{Sextans}, Ursa Minor, Sculptor, Fornax, Carina, and \object[NAME
LEO I dSph]{Leo~I}, and they included other $\alpha$ elements in
addition to Ca.
Here, we present the largest sample of [$\alpha$/Fe]\ measurements in any dSph.
Figure~\ref{fig:alphafe_feh_gc} shows [Mg/Fe], [Si/Fe], [Ca/Fe], and [Ti/Fe]
versus [Fe/H] for Sculptor. The figure also shows the mean and
standard deviations of all of the individual stellar abundance
measurements for each of the seven GCs in the sample of KGS08. All of
the modifications to the KGS08 technique described in
Secs.~\ref{sec:prep} and \ref{sec:measure} apply to the GC
measurements in Fig.~\ref{fig:alphafe_feh_gc}. Although our
discussion in the appendix demonstrates that our measurements are
accurate on an absolute scale by comparing to several different HRS
studies in Sculptor, it is also instructive to compare abundances
measured with the same technique in two types of stellar systems. All
four element ratios slope downward with [Fe/H]\ in Sculptor but remain
flat in the GCs. Additionally, the larger spread of [Mg/Fe] than
other element ratios in the GCs is not due to larger measurement
uncertainties but to the known intrinsic spread of Mg abundance in
some GCs \citep[see the review by][]{gra04}. [Si/Fe], [Ca/Fe], and
[Ti/Fe] are less steeply sloped than [Mg/Fe] in Sculptor because both
Type~Ia and Type~II SNe produce Si, Ca, and Ti, but Type~II SNe are
almost solely responsible for producing Mg \citep{woo95}. Finally, to
maximize the SNR of the element ratio measurements, we average the
four ratios together into one number called [$\alpha$/Fe]. The [$\alpha$/Fe]\ ratio is
flat across the GCs, but it decreases with increasing [Fe/H]\ in
Sculptor.
Quantitative models of chemical evolution in dwarf galaxies are
consistent with these trends. At a certain time corresponding to a
certain [Fe/H]\ in the evolution of the dSph, Type~Ia SNe begin to pollute
the interstellar medium with gas at subsolar [$\alpha$/Fe]. More metal-rich
stars that form from this gas will have lower [$\alpha$/Fe]\ than the more
metal-poor stars. \citet{lan04} have developed a sophisticated model
that includes SN feedback and winds. They predicted the abundance
distributions of six dSphs, including Sculptor.
Figure~\ref{fig:alphafe_feh_lf} shows our measurements with their
predictions (updated with new SN yields, G.~Lanfranchi 2009, private
communication). As predicted, the range of [Mg/Fe] is larger than the
range of [Ca/Fe] or [Si/Fe] because Mg is produced exclusively in
Type~II SNe whereas Si and Ca are produced in both Type~Ia and II SNe
\citep{woo95}. We do not observe strong evidence for a predicted
sharp steepening in slope of both elements at ${\rm [Fe/H]} \sim -1.8$,
but observational errors and intrinsic scatter may obscure this
``knee.'' Also, the observed [Fe/H]\ at which [Mg/Fe] begins to drop is
higher than the model predicts, indicating a less intense wind than
used in the model. Note that the element ratios [X/Fe] become
negative (subsolar) at high enough [Fe/H], as predicted by the models.
\citet{lan04} do not predict [Ti/Fe] because it behaves more like an
Fe-peak element than an $\alpha$ element.
In addition to trends of [$\alpha$/Fe]\ with [Fe/H], \citet{mar06,mar08}
predicted the distribution functions of [Fe/H]\ and [$\alpha$/Fe]\ of a
Draco-like dSph. The range of [Fe/H]\ they predicted is nearly
identical to the range we observe in Sculptor, and the shapes of both
distributions are similar. The outcome of the models depends on the
mass of the dSph. Sculptor is ten times more luminous than Draco
\citep{mat98} and therefore may have a larger total mass. [However,
\citet{str08} find that all dSphs have the same dynamical mass within
300~pc of their centers. It is unclear whether the total masses of
the original, unstripped dark matter halos are the same.] In
principle, these chemical evolution models could be used to measure
the time elapsed since different epochs of star formation and their
durations. We defer such an analysis until the advent of a model
based on a Sculptor-like luminosity or mass.
In Fig.~\ref{fig:alphafe_feh_venn}, we compare individual stellar
abundances in Sculptor to MW halo and disk field stars
\citep[compilation by][]{ven04}. As has been seen in many previous
studies of individual stellar abundances in dSphs, [$\alpha$/Fe]\ falls at a
significantly lower [Fe/H]\ in Sculptor than in the MW halo. The drop
is particularly apparent in [Mg/Fe], which is the element ratio most
sensitive to the ratio of the contributions of Type~II to Type~Ia SNe.
The other element ratios also drop sooner in Sculptor than in the
halo, but appear lower than in the halo at all metallicities. Along
with the MDF comparison in Sec.~\ref{sec:halomdf}, this result is
consistent with the suggestion by \citet{rob05} that galaxies
significantly more massive than Sculptor built the inner MW halo.
Their greater masses allowed them to retain more gas and experience
more vigorous star formation. By the time Type~Ia SNe diluted [$\alpha$/Fe]\
in the massive halo progenitors, the metallicity of the star-forming
gas was already as high as ${\rm [Fe/H]} = -0.5$. In Sculptor, the
interstellar [Fe/H]\ reached only $-1.5$ before the onset of Type~Ia SNe
pollution.
\subsection{Radial Abundance Distributions}
\label{sec:gradients}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{kir09a_f13.eps}
\caption{Spatial abundance distributions in Sculptor. Point sizes are
larger for stars with smaller measurement uncertainties. The red
points reflect the mean values in 1~arcmin bins, along with the
errors on the means. The red lines are the least-squares linear
fits. We detect a gradient of $-0.030 \pm 0.003$~dex per
arcmin in [Fe/H]\ and $+0.013 \pm 0.003$~dex per arcmin in
[$\alpha$/Fe].\label{fig:feh_dist}}
\end{figure}
Because dSphs interact with the MW, they can lose gas through tidal or
ram pressure stripping \citep{lin83}. The gas preferentially leaves
from the dSph's outskirts, where the gravitational potential is
shallow. If the dSph experiences subsequent star formation, it must
occur in the inner regions where gas remains. Sculptor's MDF suggests
a history of extended star formation. Sculptor might then be expected
to exhibit a radial abundance gradient in the sense that the inner
parts of the dSph are more metal-rich than the outer parts.
The detection of a radial metallicity gradient in Sculptor has been
elusive. In a photometric study, \citet{hur00} found no evidence for
an age or metallicity gradient. Based on HRS observations of five
stars \citep[the same sample as][]{she03}, \citet{tol03} found no
correlation between [Fe/H]\ and spatial position. Finally, in a sample
of 308 stars with CaT-based metallicities, T04 detected a significant
segregation in Sculptor: a centrally concentrated, relatively
metal-rich component and an extended, relatively metal-poor component.
\citet{wes06} arrived at the same conclusion, and \citet{wal09}
confirmed the existence of a [Fe/H]\ gradient in a sample of 1365
Sculptor members.
In order to detect a gradient, those studies targeted Sculptor stars
at distances of more than 20~arcmin. The maximum elliptical radius of
this study is 11~arcmin. Therefore, this study is not ideally
designed to detect radial gradients. Figure~\ref{fig:feh_dist} shows
the radial distribution of [Fe/H]\ and [$\alpha$/Fe]\ in Sculptor. The $x$-axis
is the length of the semi-major axis of an ellipse defined by
Sculptor's position angle and ellipticity \citep{mat98}. Although
this study is limited in the spatial extent of targets, we do detect a
gradient of $-0.030 \pm 0.003$~dex per arcmin. This
estimate is very close to the gradient observed by T04. \citet{wal09}
measure a shallower gradient, but they present their results against
circular radius instead of elliptical radius.
\citet{mar08} predict radial gradients in both [Fe/H]\ and [$\alpha$/Fe]\ in
dSphs. In particular, they expect shallower [Fe/H]\ gradients for
longer durations of star formation. The gradient we observe is
stronger than any of their models. They also expect very few stars
with low [$\alpha$/Fe]\ at large radius. Given that [$\alpha$/Fe]\ decreases with [Fe/H]\
and [Fe/H]\ decreases with distance, it seems reasonable to expect that
[$\alpha$/Fe]\ increases with radius. In fact, we detect an [$\alpha$/Fe]\ gradient of
$+0.013 \pm 0.003$~dex per arcmin.
\section{Conclusions}
\label{sec:concl}
Sculptor is one of the best-studied dwarf spheroidal satellites of the
Milky Way. In the past ten years, at least five spectroscopic
campaigns at both low and high resolution have targeted this galaxy.
More than any other dSph, Sculptor has aided in the understanding of
the chemical evolution of dSphs and the construction of the Milky Way
stellar halo.
We have sought to increase the sample of multi-element abundances in
Sculptor through MRS. The advantages over HRS include higher
throughput per resolution element, the ability to target fainter
stars, and multiplexing. The large sample sizes will enable detailed
comparisons to chemical evolution models of [$\alpha$/Fe]\ and [Fe/H]\ in dSphs.
The disadvantages include larger uncertainties, particularly for
elements with few absorption lines in the red, and the inability to
measure many elements accessible to HRS. MRS is not likely to soon
provide insight into the evolution of neutron-capture elements in
dSphs.
In order to make the most accurate measurements possible, we have made
a number of improvements to the technique of \citet*{kir08a}. We have
consulted independent HRS of the same stars to confirm the accuracy of
our measurements of [Fe/H], [Mg/Fe], [Ca/Fe], and [Ti/Fe]. In the case
of [Fe/H]\ and the average [$\alpha$/Fe], our MRS measurements are only slightly
more uncertain than HRS measurements.
Some of the products of this study include
\begin{enumerate}
\item {\bf An unbiased metallicity distribution for Sculptor.}
Because the synthesis-based abundances do not rely on any empirical
calibration, their applicability is unrestricted with regard to
[Fe/H]\ range. The MDF is asymmetric with a long, metal-poor tail, as
predicted by chemical evolution models of dSphs. Furthermore, fits
to simple chemical evolution models show that Sculptor's MDF is
consistent with a model that requires no pre-enrichment.
\item {\bf The largest sample of [$\alpha$/Fe]\ and [Fe/H]\ measurements in any
single dSph: 388\ stars.} We have confirmed the trend for [$\alpha$/Fe]\
to decrease with [Fe/H], as shown by \citet{gei07} with just nine
stars from the studies of \citet{she03} and \citet{gei05}. Chemical
evolution models may be constructed from these measurements to
quantify the star formation history of Sculptor.
\item {\bf The detection of radial [Fe/H]\ and [$\alpha$/Fe]\ gradients.} Our
sample probes a smaller range than previous studies; nonetheless, we
find a $-0.030 \pm 0.003$~dex per arcmin gradient in [Fe/H]\
and a $+0.013 \pm 0.003$~dex per arcmin gradient in [$\alpha$/Fe].
\item {\bf The discovery of a Sculptor member star with
$\rm{\bf{[Fe/H]}} \mathbf{= -3.80 \pm 0.28}$.}
This discovery suggests that since-disrupted galaxies similar to
Sculptor may have played a role in the formation of the Milky Way
metal-poor halo. High-resolution spectroscopy of individual stars
will confirm or refute this indication.
\end{enumerate}
Much more can be done with this technique in other galaxies. The
stellar population of a dSph depends heavily on its stellar mass. For
instance, \citet{lan04} and \citet{rob05} predict that more massive
satellites have an [$\alpha$/Fe]\ ``knee'' at higher [Fe/H]. In the next papers
in this series, we intend to explore the multi-element abundance
distributions of other dSphs and compare them to each other. We will
observe how the shapes of the MDFs and the [$\alpha$/Fe]--[Fe/H]\ diagrams change
with dSph luminosity or stellar mass. These observations should aid
our understanding of star formation, chemical evolution, and the
construction of the Galaxy.
\acknowledgments We thank Kyle Westfall for providing the photometric
catalog, Gustavo Lanfranchi and Francesca Matteucci for providing
their chemical evolution model, David Lai for thoughtful
conversations, and the anonymous referee for helpful comments that
improved this manuscript. The generation of synthetic spectra made
use of the Yale High Performance Computing cluster Bulldog. ENK is
grateful for the support of a UC Santa Cruz Chancellor's Dissertation
Year Fellowship. PG acknowledges NSF grants AST-0307966, AST-0607852,
and AST-0507483. CS acknowledges NSF grant AST-0607708.
{\it Facility:} \facility{Keck:II (DEIMOS)}
\section{Introduction}
We assume everywhere $X$ to be a connected, compact polyhedron and $f:X\rightarrow X$ to be a continuous map. Let $p:\tilde{X}\rightarrow X$ be the universal cover of $X$ and $\tilde{f}:\tilde{X}\rightarrow \tilde{X}$ a lifting
of $f$, {i.e.,} $p\circ\tilde{f}=f\circ p$. Two lifts $\tilde{f}$ and $\tilde{f}^\prime$ are called \emph{conjugate} if there is a $\gamma\in\Gamma\cong\pi_1(X)$ such that $\tilde{f}^\prime = \gamma\circ\tilde{f}\circ\gamma^{-1}$. The subset $p(\mathrm{Fix}(\tilde{f}))\subset \mathrm{Fix}(f)$ is called the \emph{fixed point class} of $f$ determined by the lifting class $[\tilde{f}]$. A fixed point class is called \emph{essential} if its index is nonzero. The number of lifting classes of $f$ (and hence the number of fixed point classes, empty or not) is called the \emph{Reidemeister number} of $f$, denoted {by} $R(f)$. This is a positive integer or infinity. The number of essential fixed point classes is called the \emph{Nielsen number} of $f$, denoted by $N(f)$ \cite{Jiang}.
The Nielsen number is always finite. $R(f)$ and $N(f)$ are homotopy invariants. In the category of compact, connected polyhedra the Nielsen number of a map is, apart from in certain exceptional cases, equal to the least number of fixed points of maps with the same homotopy type as $f$.
Let $G$ be a group and $\phi:G\rightarrow G$ an endomorphism. Two elements $\alpha, \alpha^\prime\in G$ are said to be \emph{ $\phi$-conjugate} if and only if there exists $\gamma \in G$ such that $\alpha^\prime=\gamma\alpha\phi(\gamma)^{-1}$.
The number of $\phi$-conjugacy classes is called the \emph{Reidemeister number} of $\phi$, denoted by $R(\phi)$. This is a positive integer or infinity.
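For instance (a standard illustration, added here for concreteness), let $G=\mathbb{Z}^n$ and let $\phi$ be given by an integer matrix $M$. Since $G$ is abelian, $\alpha'=\gamma+\alpha-M\gamma=\alpha+(I-M)\gamma$, so the $\phi$-conjugacy classes are exactly the cosets of the subgroup $(I-M)\mathbb{Z}^n$ and
$$
R(\phi)=\#\,\mathrm{coker}(I-M)=
\begin{cases}
\left|\det(I-M)\right|, & \det(I-M)\neq 0,\\
\infty, & \det(I-M)=0.
\end{cases}
$$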
Taking a dynamical point of view, we consider the iterates of $f$ and $\phi$, and we may define, following \cite{Fel84, PilFel85, Fel88, Fel91}, several zeta functions connected with Nielsen fixed point theory.
The Reidemeister zeta functions of $f$ and $\phi$ and the Nielsen zeta function of $f$ are defined as power series:
\begin{align*}
R_\phi(z)&=\exp\left(\sum_{n=1}^\infty \frac{R(\phi^n)}{n}z^n\right),\\
R_f(z)&=\exp\left(\sum_{n=1}^\infty \frac{R(f^n)}{n}z^n\right),\\
N_f(z)&=\exp\left(\sum_{n=1}^\infty \frac{N(f^n)}{n}z^n\right).
\end{align*}
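To fix ideas, consider an illustrative example (included here for orientation): the Anosov automorphism $f$ of the torus $\mathbb{R}^2/\mathbb{Z}^2$ induced by $M=\left[\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right]$. Here $R(f^n)=N(f^n)=|\det(I-M^n)|=\lambda^n+\lambda^{-n}-2$ with $\lambda=(3+\sqrt{5})/2$, and the series sum to a rational function:
$$
R_f(z)=N_f(z)
=\exp\left(\sum_{n=1}^\infty\frac{\lambda^n+\lambda^{-n}-2}{n}z^n\right)
=\frac{(1-z)^2}{(1-\lambda z)(1-\lambda^{-1}z)}
=\frac{(1-z)^2}{1-3z+z^2}.
$$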
Whenever we mention the Reidemeister zeta function $R_f(z)$, we shall assume that it is well-defined and so $R(f^n)<\infty$ and $R(\phi^n)<\infty$ for all $n>0$. Hence $R_f(z)=N_f(z)$ on infra-nilmanifolds by Theorem~\ref{AV-all} below and on infra-solvmanifolds of type $(\mathrm{R})$ by Corollary~\ref{R-fix}. However, there are spaces and maps for which $R_f(z)$ is not defined. The zeta functions $R_f(z)$ and $N_f(z)$ are homotopy invariants. {The function $N_f(z)$ has a positive radius of convergence for any continuous map $f$ \cite{PilFel85}.} The above zeta functions are directly analogous to the Lefschetz zeta function
$$
L_f(z) := \exp\left(\sum_{n=1}^\infty \frac{L(f^n)}{n} z^n \right),
$$
where
\begin{equation*}\label{Lef}
L(f^n) := \sum_{k=0}^{\dim X} (-1)^k \mathrm{tr}\Big[f_{*k}^n:H_k(X;\mathbb{Q})\to H_k(X;\mathbb{Q})\Big]
\end{equation*}
is the Lefschetz number of the iterate $f^n$ of $f$. The Lefschetz zeta function is a rational function of $z$ and is given by the formula:
$$
L_f(z) = \prod_{k=0}^{\dim X}
\det\big(I-f_{*k}\,z\big)^{(-1)^{k+1}}.
$$
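As a quick check of this product formula, take a degree-$d$ self-map of the circle: $f_{*0}=(1)$ on $H_0$ and $f_{*1}=(d)$ on $H_1$, so $L(f^n)=1-d^n$; the $k=0$ factor enters with exponent $-1$ and the $k=1$ factor with exponent $+1$, giving
$$
L_f(z)=\det(1-z)^{-1}\det(1-dz)=\frac{1-dz}{1-z}.
$$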
The following problem was investigated: for which spaces and maps and for which groups and endomorphisms are the Nielsen and Reidemeister zeta functions rational functions? Are these functions algebraic functions?
The knowledge that a zeta function is a rational function is important because it shows that the infinite sequence of coefficients of the corresponding power series is closely interconnected, and is given by the finite set of zeros and poles of the zeta function.
In \cite{Fel91, FelHil, fhw, Li94, Fel00}, the rationality of the Reidemeister zeta function $R_\phi(z)$ was proven in the following cases: the group is finitely generated and an endomorphism is eventually commutative; the group is finite; the group is a direct sum of a finite group and a finitely generated free Abelian group; the group is finitely generated, nilpotent and torsion free.
In \cite[Theorem 4]{Wong01} the rationality of the Reidemeister and Nielsen zeta functions was proven for infra-nilmanifold under some (rather technical) sufficient conditions.
It is also known that the Reidemeister numbers of the iterates of an automorphism of an {almost polycyclic group} satisfy remarkable Gauss congruences \cite{crelle, ft}.
In this paper we investigate the Reidemeister and the Nielsen zeta functions on infra-solvmanifolds of type $(\mathrm{R})$. Our main technical tool is the averaging formulas for the Lefschetz numbers, the Nielsen numbers and the Reidemeister numbers on infra-nilmanifolds and on infra-solvmanifolds of type $(\mathrm{R})$.
Recently, using these averaging formulas, K. Dekimpe and G.-J. Dugardein \cite{DeDu,Du} calculated the Nielsen numbers via Lefschetz numbers and proved the rationality of the Nielsen zeta functions on infra-nilmanifolds.
In this paper we prove the rationality and the functional equations of the Nielsen and the Reidemeister zeta functions of continuous maps on infra-solvmanifolds of type $(\mathrm{R})$, and we calculate their radii of convergence. We
find a connection between the Reidemeister and Nielsen zeta functions and the Reidemeister torsions of the corresponding mapping tori. We show that if the Reidemeister zeta function is defined for a homeomorphism of an infra-solvmanifold of type $(\mathrm{R})$, then this manifold is an infra-nilmanifold.
We also prove that a map on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map minimizes the topological entropy in its homotopy class and has a rational Artin-Mazur zeta function. Finally, we prove the Gauss congruences for the Reidemeister and Nielsen numbers of any map on an infra-solvmanifold of type $(\mathrm{R})$ whenever all the Reidemeister numbers of iterates of the map are finite.
Let us present the contents of the paper in more detail.
In Section~\ref{DeDu} we describe the averaging formulas for the Lefschetz numbers, the Nielsen numbers and the Reidemeister numbers on infra-nilmanifolds and Dekimpe-Dugardein's formula for the Nielsen numbers.
In Section~\ref{Coin-S}, we obtain a partial generalization of K. Dekimpe and G.-J. Dugardein's formula from fixed points on infra-nilmanifolds to coincidences on infra-solvmanifolds of type $(\mathrm{R})$ when the holonomy group is a cyclic group.
The rationality and the functional equations for the Reidemeister and the Nielsen zeta functions on infra-solvmanifolds of type $(\mathrm{R})$ are proven in Sections~\ref{Rationality} and~\ref{jiang}.
After studying the asymptotic Nielsen numbers on infra-solvmanifolds of type $(\mathrm{R})$ in Section~\ref{Asymptotic}, we discuss the relationship between the topological entropies, the asymptotic Nielsen numbers and the radius of convergence of the Nielsen and the Reidemeister zeta functions in Section~\ref{EC}.
We also prove in Section~\ref{EC} that a map on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map minimizes the topological entropy in its homotopy class.
In Section~\ref{Tor}, we find a connection between the Nielsen and the Reidemeister zeta functions and the Reidemeister torsions of the corresponding mapping tori. In Section~\ref{jiang}, we obtain the averaging formula for the Reidemeister numbers on infra-solvmanifolds of type $(\mathrm{R})$ and we are able to show that the Reidemeister zeta functions on infra-solvmanifolds of type $(\mathrm{R})$ coincide with the Nielsen zeta functions. In Section~\ref{No R}, we show
that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type $(\mathrm{R})$, then this manifold is an infra-nilmanifold.
In Section~\ref{AM} we prove that the Artin-Mazur zeta function coincides with the Nielsen zeta function and is a rational function with a functional equation for a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map. In Section~\ref{Gauss cong} we prove the Gauss congruences for the Reidemeister and Nielsen numbers of any map on an infra-solvmanifold of type $(\mathrm{R})$ whenever all the Reidemeister numbers of iterates of the map are finite.
\smallskip
\noindent
\textbf{Acknowledgments.}
The first author is indebted to the Institut des Hautes \'Etudes Scientifiques
(Bures-sur-Yvette) for the support and hospitality and the
possibility of the present research during his visit there.
The authors are grateful to Karel Dekimpe and Gert-Jan Dugardein for helpful comments and
valuable discussions. The authors would like to thank the referee for making careful corrections to a few expressions and suggesting some relevant references in the original version of the article. This helped improve some of the results.
\section{Averaging formulas and Dekimpe-Dugardein's formula}\label{DeDu}
We consider almost Bieberbach groups $\Pi\subset G\rtimes \mathrm{Aut}(G)$, where $G$ is a connected, simply connected nilpotent Lie group, and infra-nilmanifolds $M=\Pi\backslash{G}$. {It is known that these are exactly the class of almost flat Riemannian manifolds \cite{Ruh}.} It is L. Auslander's result (see, for example, \cite{LR}) that $\Gamma:=\Pi\cap G$ is a lattice of $G$, and is the unique maximal normal nilpotent subgroup of $\Pi$. The group $\Phi=\Pi/\Gamma$ is the \emph{holonomy group} of $\Pi$ or $M$. Thus we have the following commutative diagram:
$$
\CD
1@>>>G@>>>G\rtimes\mathrm{Aut}(G)@>>>\mathrm{Aut}(G)@>>>1\\
@.@AAA@AAA@AAA\\
1@>>>\Gamma@>>>\Pi@>{p}>>\Phi@>>>1
\endCD
$$
Thus $\Phi$ sits naturally in $\mathrm{Aut}(G)$. Denote by $\rho:\Phi\to\mathrm{Aut}(\mathfrak{G})$ the map $A\mapsto A_*$, the differential of $A$.
Let $M=\Pi\backslash{G}$ be an infra-nilmanifold. Any continuous map $f:M\to M$ induces a homomorphism $\phi:\Pi\to\Pi$. Due to \cite[Theorem~1.1]{KBL95}, we can choose an affine element $(d,D)\in G\rtimes \mathrm{Endo}(G)$ such that
\begin{equation}\label{KBL}
\phi(\alpha)\circ (d,D)=(d,D)\circ\alpha,\quad \forall\alpha\in\Pi.
\end{equation}
This implies that the affine map $(d,D):G\to G$ induces a continuous map on the infra-nilmanifold $M=\Pi\backslash{G}$, which is homotopic to $f$. That is, $f$ has an affine homotopy lift $(d,D)$.
By \cite[Lemma~3.1]{LL-JGP}, we can choose a fully invariant subgroup $\Lambda\subset\Gamma$ of $\Pi$ which is of finite index. Therefore $\phi(\Lambda)\subset\Lambda$ and so $\phi$ induces the following commutative diagram
\begin{equation*
\CD
1@>>>\Lambda@>>>\Pi@>>>\Psi@>>>1\\
@.@VV{\phi'}V@VV{\phi}V@VV{\bar\phi}V\\
1@>>>\Lambda@>>>\Pi@>>>\Psi@>>>1
\endCD
\end{equation*}
where $\Psi=\Pi/\Lambda$ is finite. Applying \eqref{KBL} for $\lambda\in\Lambda\subset\Pi$, we see that
$$
\phi(\lambda)=dD(\lambda)d^{-1}=(\tau_d D)(\lambda)
$$
where $\tau_d$ is the conjugation by $d$. The homomorphism $\phi':\Lambda\to\Lambda$ induces a unique Lie group homomorphism $F=\tau_dD:G\to G$, and hence a Lie algebra homomorphism $F_*:\mathfrak{G}\to\mathfrak{G}$. On the other hand, since $\phi(\Lambda)\subset\Lambda$, $f$ has a lift $\bar{f}:N\to N$ on the nilmanifold $N:=\Lambda\backslash{G}$ which finitely and regularly covers $M$ and has $\Psi$ as its group of covering transformations.
\begin{Thm}[Averaging Formula {\cite[Theorem~3.4]{LL-JGP}, \cite[Theorem~6.11]{HLP}}]\label{AV-all}
Let $f$ be a continuous map on an infra-nilmanifold $\Pi\backslash{G}$ with holonomy group $\Phi$. Let $f$ have an affine homotopy lift $(d,D)$ and let $\phi:\Pi\to\Pi$ be the homomorphism induced by $f$. Then we have
\begin{align*}
L(f)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}\det(I-A_*F_*)=\frac{1}{|\Phi|}\sum_{A\in\Phi}\det(I-A_*D_*),\\
N(f)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(I-A_*F_*)|=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(I-A_*D_*)|,\\
R(f)=R(\phi)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det(A_*-F_*)\right)=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det(A_*-D_*)\right)
\end{align*}
where $\sigma:\mathbb{R}\to\mathbb{R}\cup\{\infty\}$ is defined by $\sigma(0)=\infty$ and $\sigma(x)=|x|$ for all $x\ne0$.
\end{Thm}
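For orientation, note what the theorem says in the special case of a nilmanifold, i.e., when the holonomy group $\Phi$ is trivial: the averages collapse to a single term,
$$
L(f)=\det(I-F_*),\qquad N(f)=\left|\det(I-F_*)\right|,\qquad
R(f)=\sigma\!\left(\det(I-F_*)\right),
$$
recovering the classical formulas on nilmanifolds and, in particular, Anosov's theorem $N(f)=|L(f)|$.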
Recently, Dekimpe and Dugardein in \cite{DeDu} showed the following: Let $f:M\to M$ be a continuous map on an infra-nilmanifold $M$. Then the Nielsen number $N(f)$ is either equal to $|L(f)|$ or equal to the expression $|L(f)-L(f_+)|$, where $f_+$ is a lift of $f$ to a $2$-fold covering of $M$. By exploiting the exact nature of this relationship for all powers of $f$, they proved that the Nielsen zeta function $N_f(z)$ is always a rational function.
Let $M=\Pi\backslash{G}$ be an infra-nilmanifold with the holonomy group $\Phi$ and let $f:M\to M$ be a continuous map with an affine homotopy lift $(d,D)$.
Let $A\in\Phi$. Then we can choose $g\in G$ so that $\alpha=(g,A)\in\Pi$. Write $\phi(\alpha)=(g',A')$. By (\ref{KBL}), we have $(g',A')(d,D)=(d,D)(g,A) \Rightarrow A'D=DA$. Thus $\phi$ induces a function $\hat\phi:\Phi\to\Phi$ given by $\hat\phi(A)=A'$ so that it satisfies that
\begin{equation}\label{Dekimpe-eq}
\hat\phi(A)D=DA, \quad \hat\phi(A)_*D_*=D_*A_*
\end{equation}
for all $A\in\Phi$.
In what follows, we shall give a brief description of main results in \cite{DeDu}. We can choose a linear basis of $\mathfrak{G}$ so that $\rho(\Phi)=\Phi_*\subset\mathrm{Aut}(\mathfrak{G})$ can be expressed as diagonal block matrices
$$
\left[\begin{matrix}\Phi_1&0\\0&\Phi_2\end{matrix}\right]\subset\mathrm{GL}(n_1,\mathbb{R})\times\mathrm{GL}(n_2,\mathbb{R})
\subset\mathrm{GL}(n,\mathbb{R})
$$
and $D_*$ can be written in block triangular form
$$
\left[\begin{matrix}D_1&*\\0&D_2\end{matrix}\right]
$$
where $D_1$ and $D_2$ have eigenvalues of modulus $\le1$ and $>1$, respectively. We can assume $\Phi=\Phi_1\times\Phi_2$. Every element $\alpha\in\Pi$ is of the form $(a,A)\in G\rtimes\mathrm{Aut}(G)$ and $\alpha$ is mapped to $A=(A_1,A_2)$. We define
$$
\Pi_+=\{\alpha\in\Pi\mid \det A_2=1\}.
$$
Then $\Pi_+$ is a subgroup of $\Pi$ of index at most $2$. If $[\Pi:\Pi_+]=2$, then $\Pi_+$ is also an almost Bieberbach group and the corresponding infra-nilmanifold $M_+=\Pi_+\backslash{G}$ is a double covering of $M=\Pi\backslash{G}$; the map $f$ lifts to a map $f_+:M_+\to M_+$ which has the same affine homotopy lift $(d,D)$ as $f$. If $D_*$ has no eigenvalues of modulus $>1$, then for any $A\in\Phi$, $A=A_1$ and in this case we take $\Pi_+=\Pi$. Now, a main result, Theorem~4.4, of \cite{DeDu} is the following:
\begin{Thm}[{\cite[Theorem~4.4]{DeDu}; when $\Pi=\Pi_+$ see also the proof of Theorem~\ref{infra}}]
\label{T4.4}
Let $f$ be a continuous map on an infra-nilmanifold $\Pi\backslash{G}$ with an affine homotopy lift $(d,D)$.
Then the Nielsen numbers of $f^k$ are
$$
N(f^k)= \begin{cases}
(-1)^{p+(k+1)n}L(f^k),&\text{when $\Pi=\Pi_+$;}\\
(-1)^{p+(k+1)n}\left(L(f_+^k)-L(f^k)\right),&\text{when $\Pi\ne\Pi_+$},
\end{cases}
$$
where $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$.
\end{Thm}
\begin{Rmk}
1) In \cite[Theorem~4.4]{DeDu} the Nielsen numbers $N(f^n)$ are expressed in terms of the Lefschetz numbers $L(f^n)$ and $L(f_+^n)$ via a table given by the parity of $n$. \\
2) The proof of our Theorem~\ref{infra} covers the case when $\Pi=\Pi_+$ in Theorem \ref{T4.4} above because in this case $N(f)=|L(f)|$.
\end{Rmk}
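When $\Pi=\Pi_+$, Theorem~\ref{T4.4} already makes the rationality of the Nielsen zeta function transparent; we record the short computation here for the reader (the case $\Pi\ne\Pi_+$ is handled analogously in \cite{DeDu}). Since $(-1)^{p+(k+1)n}=(-1)^{p+n}\bigl((-1)^n\bigr)^k$, we have
$$
N_f(z)=\exp\left(\sum_{k=1}^\infty\frac{N(f^k)}{k}z^k\right)
=\exp\left((-1)^{p+n}\sum_{k=1}^\infty\frac{L(f^k)}{k}\bigl((-1)^nz\bigr)^k\right)
=L_f\bigl((-1)^nz\bigr)^{(-1)^{p+n}},
$$
which is a rational function because $L_f(z)$ is.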
\section{Coincidences on infra-solvmanifolds of type $(\mathrm{R})$ with a cyclic holonomy group}
\label{Coin-S}
In this section, we will be concerned with a generalization of Theorem~\ref{T4.4} when $k=1$ (that is, $N(f)=|L(f)|$ or $|L(f_+)-L(f)|$) from fixed points on infra-nilmanifolds to coincidences on infra-solvmanifolds of type $(\mathrm{R})$. We obtain a partial result for coincidences on infra-solvmanifolds of type $(\mathrm{R})$ when the holonomy group is a cyclic group.
Let $S$ be a connected, simply connected solvable Lie group of type $(\mathrm{R})$, and let $C$ be a compact subgroup of $\mathrm{Aut}(S)$. Let $\Pi\subset S\rtimes C$ be torsion free and discrete which is a finite extension of the lattice $\Gamma=\Pi\cap S$ of $S$. Such a group $\Pi$ is called an SB-\emph{group} modeled on $S$. The quotient space $\Pi\backslash{S}$ is called an \emph{infra-solvmanifold} of type $(\mathrm{R})$ with holonomy group $\Phi=\Pi/\Gamma$. When $\Pi\subset S$, $\Pi\backslash{S}$ is a \emph{special solvmanifold} of type $(\mathrm{R})$. Thus the infra-solvmanifold $\Pi\backslash{S}$ is finitely and regularly covered by the special solvmanifold $\Gamma\backslash{S}$ with the group of covering transformations $\Phi$. For more details, we refer to \cite{LL-Nagoya}.
Let $M=\Pi\backslash{S}$ be an infra-solvmanifold of type $(\mathrm{R})$ with the holonomy group $\Phi$. Then $\Phi$ sits naturally in $\mathrm{Aut}(S)$. Write $\rho:\Phi\to\mathrm{Aut}(\mathfrak{S})$, $A\mapsto A_*$. Let $f,g:M\to M$ be maps with affine homotopy lifts $(d,D), (e,E):S\to S$, respectively. Then $f$ and $g$ induce homomorphisms $\phi,\psi:\Pi\to\Pi$ by the following rules:
\begin{align*}
\phi(\alpha)\circ(d,D)=(d,D)\circ\alpha,\ \
\psi(\alpha)\circ(e,E)=(e,E)\circ\alpha\ \ \forall\alpha\in\Pi.
\end{align*}
In turn, we obtain functions $\hat\phi, \hat\psi:\Phi\to\Phi$ satisfying
\begin{align*}
\hat\phi(A)D=DA\ \text{ and }\ \hat\psi(A)E=EA\ \ \forall A\in\Phi.
\end{align*}
Thus
\begin{align}\label{*}
\hat\phi(A)_*D_*=D_*A_*\ \text{ and }\ \hat\psi(A)_*E_*=E_*A_*\ \ \forall A\in\Phi.
\end{align}
\bigskip
Recall the following well-known facts from representation theory:
\begin{Thm}[{H. Maschke}]\label{Maschke}
Let $\rho:\Phi\to \mathrm{GL}(n,\mathbb{R})$ be a representation. Then there exist irreducible representations $\rho_i:\Phi\to\mathrm{GL}(n_i,\mathbb{R})$ such that $\rho$ is similar to $\rho_1\oplus\cdots\oplus\rho_s$.
\end{Thm}
\begin{Thm}\label{Schur}
Let $\Phi=\langle A\rangle$ be a {cyclic group} of order $n$ and let $\rho:\Phi\to\mathrm{GL}(m,\mathbb{R})$ be a faithful $\mathbb{R}$-irreducible representation. If $n=1$ then $\rho$ is the trivial representation $\rho_{\mathrm{triv}}$. If $n=2$, then $m=1$ and $\rho(A)=-1$. In this case, we denote $\rho$ by $\tau$. If $n>2$, then there exists $k\in\mathbb{Z}$ such that $\gcd(n,k)=1$ and $\rho$ is similar to the irreducible rotation given by
$$
\Phi\longrightarrow\mathrm{GL}(2,\mathbb{R}),\ A\longmapsto
\left[\begin{matrix}\cos\frac{2k\pi}{n}&-\sin\frac{2k\pi}{n}\\
\sin\frac{2k\pi}{n}&\hspace{8pt}\cos\frac{2k\pi}{n}\end{matrix}\right].
$$
\end{Thm}
\bigskip
Consider the case where the infra-solvmanifold $M$ of type $(\mathrm{R})$ is {orientable} ({for coincidences}) with holonomy group $\Phi$ a {cyclic group} with a generator $A_0$. By Theorem~\ref{Maschke}, the natural representation $\rho:\Phi\to\mathrm{Aut}(S)\cong\mathrm{Aut}(\mathfrak{S})$ is similar to a sum of irreducible representations. If $\sigma:\Phi\to\mathrm{GL}(m,\mathbb{R})$ is irreducible, then the induced representation $\bar\sigma:\Phi/\ker\rho\to\mathrm{GL}(m,\mathbb{R})$ is faithful and irreducible. By Theorem~\ref{Schur}, $\bar\sigma$ is similar to $\rho_{\mathrm{triv}}$, $\tau$ or a rotation. Thus we may assume that $\rho=m\rho_{\mathrm{triv}}\oplus k\tau\oplus\rho_1\oplus\cdots\oplus\rho_t$, where $\rho_i:\Phi\to\mathrm{GL}(2,\mathbb{R})$ is an irreducible rotation. That is, there is a linear basis of $\mathfrak{S}$ so that $\rho(A_0)\in\mathrm{Aut}(\mathfrak{S})$ can be represented as diagonal block matrices
$$
\rho(A_0)=\left[\begin{matrix}I_m&&&&\\&-I_k&&&\\&&\Phi_1&&\\&&&\ddots&\\&&&&\Phi_t
\end{matrix}\right]\
\text{ where }
\Phi_i=\rho_i(A_0)\in\mathrm{GL}(2,\mathbb{R}).
$$
Remark that if $k>0$ then the order of $\Phi$ is even, and $\det(\rho_i(A_0))=1$ for all $i$. Hence $\det(\rho(A_0))=1$ if and only if $k$ is even. Since the infra-solvmanifold is assumed to be orientable, this is the case, and hence $k$ is even.
Using the identities (\ref{*}), we can write $D_*$ and $E_*$ as block matrices
\begin{align*}
D_*=\left[\begin{matrix}D_{\mathrm{triv}}&0&0\\0&D_\tau&0\\{*}&*&\hat{D}\end{matrix}\right],\quad
E_*=\left[\begin{matrix}E_{\mathrm{triv}}&0&0\\0&E_\tau&0\\{*}&*&\hat{E}\end{matrix}\right]
\end{align*}
where $D_{\mathrm{triv}}, E_{\mathrm{triv}}$ are $m\times m$, $D_\tau, E_\tau$ are $k\times k$ and $\hat{D}, \hat{E}$ are $2t\times2t$.
For $A\in\Phi$, $A=A_0^p$ for some $p$ and
$$
A_*=\left[\begin{matrix}I_m&&\\&(-1)^p I_k&\\&&A_*\end{matrix}\right].
$$
Write $\hat\rho=\rho_1\oplus\cdots\oplus\rho_t:\Phi\to\mathrm{GL}(2t,\mathbb{R}), A\mapsto \hat\rho(A)=A_*$ (abusing the notation: $\rho(A)=A_*$). Then the identities (\ref{*}) induce
\begin{align*}
\hat\phi(A)_*\hat{D}=\hat{D}A_*,\ \hat\psi(A)_*\hat{E}=\hat{E}A_*.
\end{align*}
Hence, for all $A=A_0^p$ and $B=A_0^q\in\Phi$, we have that
\begin{align}\label{gen}
\det&(E_*-A_*D_*)\det(E_*-B_*D_*)\\
&=\det(E_{\mathrm{triv}}-D_{\mathrm{triv}})^2\det(E_\tau-(-1)^pD_\tau)\det(E_\tau-(-1)^qD_\tau)\notag\\
&\qquad\times\det(\hat{E}-A_*\hat{D})\det(\hat{E}-B_*\hat{D}).\notag
\end{align}
Note here that $\det(\hat{E}-A_*\hat{D})\det(\hat{E}-B_*\hat{D})\ge0$, see \cite[Lemma~6.3]{DP}.
From \cite[Theorem~4.5]{HL-a}, immediately we have:
\begin{Thm}[{Compare with \cite[Theorem~6.1]{DP}}]
Let $f$ and $g$ be continuous maps on an orientable infra-solvmanifold of type $(\mathrm{R})$ with cyclic holonomy group $\Phi=\langle A_0\rangle$. If $\rho(A_0)$ has no eigenvalue $-1$, i.e., if $k=0$, then $N(f,g)=|L(f,g)|$.
\end{Thm}
Assume now that $k>0$; since $k$ is even, $\Phi=\langle A_0\rangle$ is of even order. Let $\Phi_0=\langle A_0^2\rangle$ and let $\Pi_0$ be the subgroup of $\Pi$ induced by the inclusion $\Phi_0\hookrightarrow \Phi$. Remark also that if $D_\tau=0$ or $E_\tau=0$, then we still have $N(f,g)=|L(f,g)|$; we therefore also assume that $D_\tau\ne0$ and $E_\tau\ne0$.
\begin{Thm}
Under the assumptions above, $\Pi_0$ is a subgroup of $\Pi$ of index $2$, $\Pi_0$ is also an {\rm SB}-group and the corresponding infra-solvmanifold $M_0=\Pi_0\backslash{S}$ is a double covering of $M=\Pi\backslash{S}$; the maps $f,g$ lift to maps $f_0, g_0:M_0\to M_0$ which have the same affine homotopy lifts $(d,D), (e,E)$ as $f$ and $g$.
\end{Thm}
\begin{proof}
It is clear that $[\Pi:\Pi_0]=2$ and that $\Pi_0$ is also an SB-group and the corresponding infra-solvmanifold $\Pi_0\backslash{S}$ is a double covering of $\Pi\backslash{S}$.
To prove the last assertion, we may assume that $(d,D):S\to S$ induces $f$ and that $\phi:\Pi\to\Pi$ is a homomorphism such that
$$
\phi(\alpha)(d,D)=(d,D)\alpha, \quad \forall\alpha\in\Pi.
$$
We need to show that $(d,D)$ also induces a map on $\Pi_0\backslash{S}$. For this purpose, it is enough to show that $\phi(\Pi_0)\subset\Pi_0$. For any $\beta=(a,A)\in\Pi_0$, let $\phi(\beta)=(b,\hat\phi(A))$. Since $(a,A)\in\Pi_0$, we have $A\in\Phi_0$. The above identity implies that
\begin{align*}
\hat\phi(A)_*D_*=D_*A_* \Rightarrow D_\tau=0 \text{ or } \hat\phi(A)\in\Phi_0.
\end{align*}
Since $D_\tau\ne0$, this finishes the proof of the last assertion.
\end{proof}
For any $A=A_0^p\in\Phi$, we recall from (\ref{gen}) that
\begin{align*}
&\det(E_*-A_*D_*)=\det(E_{\mathrm{triv}}-D_{\mathrm{triv}})\det(E_\tau-(-1)^pD_\tau)\det(\hat{E}-A_*\hat{D})
\end{align*}
and
\begin{align*}
&\det(\hat{E}-\hat{D})\det(\hat{E}-A_*\hat{D})\ge0.
\end{align*}
Let
\begin{align*}
\epsilon_o=\mathrm{sign}\det(E_\tau-D_\tau),\qquad
\epsilon_e=\mathrm{sign}\det(E_\tau+D_\tau).
\end{align*}
Then $\epsilon_o=\pm\epsilon_e$. {Notice that the values $\epsilon_o$ and $\epsilon_e$ depend both on $f$ and $g$.} When $\epsilon_o=\epsilon_e$, we still have $N(f,g)=|L(f,g)|$. When $\epsilon_o=-\epsilon_e$, we have that
\begin{align*}
N(f,g)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(E_*-A_*D_*)|\\
&=\frac{1}{|\Phi|}\left(\sum_{A\in\Phi_0}|\det(E_*-A_*D_*)|+\sum_{A\notin\Phi_0}|\det(E_*-A_*D_*)|\right)\\
&=\frac{\epsilon_o}{|\Phi|}\left(\sum_{A\in\Phi_0}\det(E_*-A_*D_*)
-\sum_{A\notin\Phi_0}\det(E_*-A_*D_*)\right)\\
&=\frac{\epsilon_o}{|\Phi|}\left(2\sum_{A\in\Phi_0}\det(E_*-A_*D_*)
-\sum_{A\in\Phi}\det(E_*-A_*D_*)\right)\\
&=\epsilon_o\left(\frac{1}{|\Phi_0|}\sum_{A\in\Phi_0}\det(E_*-A_*D_*)
-\frac{1}{|\Phi|}\sum_{A\in\Phi}\det(E_*-A_*D_*)\right)\\
&={\epsilon_o(L(f_0,g_0)-L(f,g))}.
\end{align*}
Therefore, we can summarize what we have observed as follows:
\begin{Thm}
Let $M=\Pi\backslash{S}$ be an orientable infra-solvmanifold of type $(\mathrm{R})$ with cyclic holonomy group $\Phi=\langle A_0\rangle$. Let $\rho:\Phi\to\mathrm{Aut}(\mathfrak{G})$ be the natural representation. Then $\rho$ is similar to the sum of irreducible representations $m\rho_{\mathrm{triv}}\oplus k\tau\oplus\rho_1\oplus\cdots\oplus\rho_t$, where $\rho_{\mathrm{triv}}:\Phi\to\mathrm{GL}(1,\mathbb{R})$ is the trivial representation, $\tau:\Phi\to\mathrm{GL}(1,\mathbb{R})$ is the representation given by $\tau(A_0)=-1$, and $\rho_i:\Phi\to\mathrm{GL}(2,\mathbb{R})$ is an irreducible rotation. Let $f,g:M\to M$ be continuous maps with affine homotopy lifts $(d,D),(e,E)$ respectively. Then $D_*$ and $E_*$ can be expressed as block matrices
\begin{align*}
D_*=\left[\begin{matrix}D_{\mathrm{triv}}&0&0\\0&D_\tau&0\\{*}&*&\hat{D}\end{matrix}\right],\quad
E_*=\left[\begin{matrix}E_{\mathrm{triv}}&0&0\\0&E_\tau&0\\{*}&*&\hat{E}\end{matrix}\right]
\end{align*}
where $D_{\mathrm{triv}}, E_{\mathrm{triv}}$ are $m\times m$, $D_\tau, E_\tau$ are $k\times k$ and $\hat{D}, \hat{E}$ are $2t\x2t$. Moreover, we have that:
\begin{enumerate}
\item[$(1)$] If $k=0$, then $N(f,g)=|L(f,g)|$.
\item[$(2)$] If $k>0$ and $\det(E_\tau-D_\tau)\det(E_\tau+D_\tau)\ge0$, then $N(f,g)=|L(f,g)|$.
\item[$(3)$] If $k>0$ and $\det(E_\tau-D_\tau)\det(E_\tau+D_\tau)<0$, then the maps $f,g$ lift to maps $f_0, g_0:M_0\to M_0$ on a double covering $M_0$ of $M$ which have the same affine homotopy lifts as $f,g$ respectively, so that the following formula holds
$$
N(f,g)=|L(f_0,g_0)-L(f,g)|.
$$
\end{enumerate}
\end{Thm}
\begin{proof}
We are left to notice only one thing: If $D_\tau=0$ or $E_\tau=0$, then $k>0$ is even and so $\det(E_\tau-D_\tau)\det(E_\tau+D_\tau)\ge0$.
\end{proof}
\section{The rationality and the functional equation}\label{Rationality}
We start with an example that shows how different the Nielsen, the Reidemeister and the Lefschetz zeta functions can be.
\begin{Example}[\cite{Fel00}]\label{wedge}
Let $f:S^2\vee S^4\rightarrow S^2\vee S^4$ be a continuous map of the bouquet of spheres such that the restriction $f|_{S^4}=id_{S^4}$ and the degree of the restriction $f|_{S^2}:S^2\rightarrow S^2$ is equal to $-2$. Then $L(f)=0$, hence
$N(f)=0$ since $ S^2\vee S^4$ is simply connected. For $k>1$ we have $L(f^k)=2+(-2)^k\not=0$, therefore $N(f^k)=1$. $R(f^k)=1$ for all $k\geq 1$ since $ S^2\vee S^4$ is simply connected. From this we have by direct calculation that
\begin{equation*}
N_f(z)=\exp(-z)\cdot \frac{1}{1-z};\ R_f(z)= \frac{1}{1-z};\ L_f(z)= \frac{1}{(1-z)^2(1+2z)}.
\end{equation*}
Hence $N_f(z)$ is a meromorphic function, and $R_f(z)$ and $L_f(z) $ are rational and different.
\end{Example}
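The three closed forms above can be verified numerically by truncating the defining exponential series. The following sketch (not part of the text, for verification only) compares the truncated series with the rational expressions at a sample point:

```python
import math

def zeta_from_counts(counts, z, M=200):
    # truncated exp(sum_{k=1}^{M} counts(k)/k * z^k); valid for small |z|
    return math.exp(sum(counts(k) / k * z**k for k in range(1, M + 1)))

z = 0.1
# Nielsen numbers: N(f) = 0, N(f^k) = 1 for k > 1
Nf = zeta_from_counts(lambda k: 0 if k == 1 else 1, z)
assert abs(Nf - math.exp(-z) / (1 - z)) < 1e-12
# Reidemeister numbers: R(f^k) = 1 for all k >= 1
Rf = zeta_from_counts(lambda k: 1, z)
assert abs(Rf - 1 / (1 - z)) < 1e-12
# Lefschetz numbers: L(f^k) = 2 + (-2)^k (which is 0 for k = 1)
Lf = zeta_from_counts(lambda k: 2 + (-2)**k, z)
assert abs(Lf - 1 / ((1 - z)**2 * (1 + 2*z))) < 1e-12
```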
We give now some other examples of the Nielsen and the Reidemeister zeta functions on infra-nilmanifolds.
For the explicit computation of the zeta functions, the following is useful.
\begin{Prop}\label{RZ}
Let $f$ be a continuous map on an infra-nilmanifold $\Pi\backslash{G}$ with holonomy group $\Phi$. Let $f$ have an affine homotopy lift $(d,D)$ and let $\phi:\Pi\to\Pi$ be the homomorphism induced by $f$. Then
\begin{equation*}
\begin{split}
N_f(z)=\prod_{A\in\Phi}\sqrt[|\Phi|]{\exp\left(\sum_{n=1}^\infty\frac{|\det(A_*-D_*^n)|}{n}z^n\right)}.
\end{split}
\end{equation*}
When $R_f(z)$ is defined, $R_f(z)=R_\phi(z)=N_f(z)$.
\end{Prop}
\begin{proof}
We may assume $R_f(z)$ is defined. By Theorem~\ref{AV-all}, we have that $R_f(z)=R_\phi(z)=N_f(z)$ and
\begin{equation*}
\begin{split}
R_\phi(z)&=\exp\left(\sum_{n=1}^\infty\frac{R(\phi^n)}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(A_*-D_*^n)|}{n}z^n\right)\\
&=\prod_{A\in\Phi}\left(\exp\left(\sum_{n=1}^\infty\frac{|\det(A_*-D_*^n)|}{n}z^n\right)\right)^{\frac{1}{|\Phi|}}\\
&=\prod_{A\in\Phi}\sqrt[|\Phi|]{\exp\left(\sum_{n=1}^\infty\frac{|\det(A_*-D_*^n)|}{n}z^n\right)}.\qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{Example}\label{ex1}
This is an example used by Anosov to show that the Anosov relation does not hold when the manifold is not a nilmanifold \cite{Anosov}.
Let $\alpha=(a,A)$ and $t_i=(e_i, I_2)$ be elements of $\mathbb{R}^2\rtimes\mathrm{Aut}(\mathbb{R}^2)$, where
$$
a=\left[\begin{matrix}\tfrac{1}{2}\\0\end{matrix}\right],\
A=\left[\begin{matrix}1&\hspace{8pt}0\\0&-1\end{matrix}\right],\
e_1=\left[\begin{matrix}1\\0\end{matrix}\right],\
e_2=\left[\begin{matrix}0\\1\end{matrix}\right].
$$
Then $A$ has period 2, $(a,A)^2=(a+Aa,I_2)=(e_1,I_2)$, and $t_2\alpha=\alpha t_2^{-1}$. Let $\Gamma$ be the subgroup generated by $t_1$ and $t_2$. Then it forms a lattice in $\mathbb{R}^2$ and the quotient space $\Gamma\backslash\mathbb{R}^2$ is the 2-torus. It is easy to check that the subgroup
$$
\Pi=\langle \Gamma, (a,A)\rangle\subset \mathbb{R}^2\rtimes\mathrm{Aut}(\mathbb{R}^2)
$$
generated by the lattice $\Gamma$ and the element $(a,A)$ is discrete and torsion free. Furthermore, $\Gamma$ is a normal subgroup of $\Pi$ of index 2. Thus $\Pi$ is an (almost) Bieberbach group, which is the Klein bottle group, and the quotient space $\Pi\backslash\mathbb{R}^2$ is the Klein bottle. Thus $\Gamma\backslash\mathbb{R}^2\to\Pi\backslash\mathbb{R}^2$ is a double covering projection.
Let $K:\mathbb{R}^2\to\mathbb{R}^2$ be the linear automorphism given by
$$
K=\left[\begin{matrix}-1&0\\\hspace{8pt}0&2\end{matrix}\right].
$$
It is not difficult to check that $K$ induces $\bar{f}:\Gamma\backslash\mathbb{R}^2\to\Gamma\backslash\mathbb{R}^2$ and $f:\Pi\backslash\mathbb{R}^2\to\Pi\backslash\mathbb{R}^2$ so that the following diagram is commutative:
$$
\CD
\mathbb{R}^2@>K>>\mathbb{R}^2\\
@VVV@VVV\\
\Gamma\backslash\mathbb{R}^2@>{\bar{f}}>>\Gamma\backslash\mathbb{R}^2\\
@VVV@VVV\\
\Pi\backslash\mathbb{R}^2@>{f}>>\Pi\backslash\mathbb{R}^2
\endCD
$$
Note that all the vertical maps are the natural covering maps. In particular, $\Gamma\backslash\mathbb{R}^2\to\Pi\backslash\mathbb{R}^2$ is a double covering whose group of covering transformations is $\Pi/\Gamma\cong\Phi=\{I,A\}\cong\mathbb{Z}_2$, the holonomy group of $\Pi$.
By Theorem~\ref{AV-all}, we have
\begin{align*}
L(f^n)&=\frac{1}{2}\left(\det(I-K^n)+\det(I-AK^n)\right)=1-(-1)^n,\\
N(f^n)&=2^n(1-(-1)^n).
\end{align*}
In particular, $R(f^n)=2^{n+1}$ when $n$ is odd; otherwise, $R(f^n)=\infty$.
Therefore, the Reidemeister zeta function $R_{f}(z)$ is not defined, and
\begin{align*}
L_{f}(z)&=\exp\left(\sum_{n=1}^\infty\frac{2}{2n-1}z^{2n-1}\right)=\frac{1+z}{1-z},\\
N_{f}(z)&=\exp\left(\sum_{n=1}^\infty\frac{2^{2n}}{2n-1}z^{2n-1}\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{2}{2n-1}(2z)^{2n-1}\right)=\frac{1+2z}{1-2z}.\\
\end{align*}
\end{Example}
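The computations in this example can be reproduced with a few lines of numpy; this sketch (illustrative only) evaluates the averaging formulas for $L(f^n)$ and $N(f^n)$ directly:

```python
import numpy as np

K = np.array([[-1.0, 0.0], [0.0, 2.0]])   # differential of the map
A = np.array([[1.0, 0.0], [0.0, -1.0]])   # nontrivial holonomy element
I = np.eye(2)

for n in range(1, 8):
    Kn = np.linalg.matrix_power(K, n)
    # averaging formulas over the holonomy group {I, A}
    L = 0.5 * (np.linalg.det(I - Kn) + np.linalg.det(I - A @ Kn))
    N = 0.5 * (abs(np.linalg.det(I - Kn)) + abs(np.linalg.det(I - A @ Kn)))
    assert round(L) == 1 - (-1)**n
    assert round(N) == 2**n * (1 - (-1)**n)
```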
\begin{Example}\label{ex3}
Consider Example~3.5 of \cite{LL-JGP} in which an infra-nilmanifold $M$ modeled on the 3-dimensional Heisenberg group $\mathrm{Nil}$ has the holonomy group of order $2$ generated by $A$ and a self-map $f$ on $M$ is induced by the automorphism $D:\mathrm{Nil}\to\mathrm{Nil}$ given by
$$
D:\left[\begin{matrix}1&x&z\\0&1&y\\0&0&1\end{matrix}\right]
\longmapsto
\left[\begin{matrix}1&-4x-y&z'\\0&1&6x+2y\\0&0&1\end{matrix}\right]
$$
where $z'=-2z-(12x^2+10xy+y^2)$.
Then with respect to the ordered (linear) basis for the Lie algebra of $\mathrm{Nil}$
$$
{\bf e}_1=\left[\begin{matrix}0&0&1\\0&0&0\\0&0&0\end{matrix}\right],\
{\bf e}_2=\left[\begin{matrix}0&1&0\\0&0&0\\0&0&0\end{matrix}\right],\
{\bf e}_3=\left[\begin{matrix}0&0&0\\0&0&1\\0&0&0\end{matrix}\right],
$$
the differentials of $A$ and $D$ are
$$
A_*=\left[\begin{matrix}1&\hspace{8pt}0&\hspace{8pt}0\\0&-1&\hspace{8pt}0\\0&\hspace{8pt}0&-1\end{matrix}\right],\
D_*=\left[\begin{matrix}-2&\hspace{8pt}0&\hspace{8pt}0\\\hspace{8pt}0&-4&-1
\\\hspace{8pt}0&\hspace{8pt}6&\hspace{8pt}2\end{matrix}\right].
$$
By Proposition~\ref{RZ}, we have
\begin{align*}
R_\phi(z)&=\sqrt{\exp\left(\sum_{n=1}^\infty\frac{|\det(I-D_*^n)|}{n}z^n\right)}\sqrt{\exp\left(\sum_{n=1}^\infty\frac{|\det(A_*-D_*^n)|}{n}z^n\right)}.
\end{align*}
Note that $A_*$ is a block diagonal matrix with $1\x1$ block $I_1$ and $2\x2$ block $-I_2$. We have
\begin{align*}
|\det(A_*-D_*^n)|&=|\det(I_1-D_1^n)\det(-I_2-D_2^n)|\\
&=|\det(I_1-D_1^n)||\det(I_2+D_2^n)|\\
&=|(1-(-2)^n)|(-1)^n\det(I_2+D_2^n)\\
&=(2^n-(-1)^n)(-1)^n\sum_i\mathrm{tr}({\bigwedge}^{\!i}D_2^n).
\end{align*}
Consequently, we obtain
\begin{align*}
&\exp\left(\sum_{n=1}^\infty\frac{|\det(A_*-D_*^n)|}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{(2^n-(-1)^n)(-1)^{n}\sum_i\mathrm{tr}({\bigwedge}^{\!i}D_2^n)}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{\sum_i\mathrm{tr}({\bigwedge}^{\!i}D_2^n)}{n}(-2z)^n-\sum_{n=1}^\infty\frac{\sum_i\mathrm{tr}({\bigwedge}^{\!i}D_2^n)}{n}z^n\right)\\
&=\prod_i\frac{\det(I-z{\bigwedge}^{\!i}D_2)}{\det(I+2z{\bigwedge}^{\!i}D_2)}\\
&=\frac{1-z}{1+2z}\cdot\frac{1+2z-2z^2}{1-4z-8z^2}\cdot\frac{1+2z}{1-4z}.
\end{align*}
In a similar fashion, we compute
\begin{align*}
&\exp\left(\sum_{n=1}^\infty\frac{|\det(I-D_*^n)|}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{|\det(I_1-D_1^n)\det(I_2-D_2^n)|}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{(2^n-(-1)^n)(-1)^{n+1}\sum_i(-1)^{i}\mathrm{tr}({\bigwedge}^{\!i}D_2^n)}{n}z^n\right)\\
&=\prod_i\left(\frac{\det(I+2z{\bigwedge}^{\!i}D_2)}{\det(I-z{\bigwedge}^{\!i}D_2)}\right)^{(-1)^i}\\
&=\frac{1+2z}{1-z}\cdot\frac{1+2z-2z^2}{1-4z-8z^2}\cdot\frac{1-4z}{1+2z}.
\end{align*}
The last identity of the above computations follows from the definition of ${\bigwedge}^{\!i}D_2$ (see \cite[Lemma~3.2]{HLP11}). Namely, we have
$$
{\bigwedge}^{\!0}D_2=1,\ {\bigwedge}^{\!1}D_2=D_2,\ {\bigwedge}^{\!2}D_2=\det(D_2)=-2.
$$
In all, we obtain that
\begin{align*}
&N_f(z)=R_f(z)=\frac{1+2z-2z^2}{1-4z-8z^2}.
\end{align*}
\end{Example}
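As a numerical cross-check of the closed form (a sketch, not part of the text): since $N_f(z)=R_f(z)=\exp\left(\sum_n R(\phi^n)z^n/n\right)$, the coefficients of $z\,\frac{d}{dz}\log N_f(z)$ must be the Reidemeister numbers $R(\phi^n)$ computed from the averaging formula:

```python
import numpy as np

D = np.array([[-2.0, 0.0, 0.0], [0.0, -4.0, -1.0], [0.0, 6.0, 2.0]])
A = np.diag([1.0, -1.0, -1.0])
I = np.eye(3)

def R(n):
    # Reidemeister number of f^n via the averaging formula
    Dn = np.linalg.matrix_power(D, n)
    return 0.5 * (abs(np.linalg.det(I - Dn)) + abs(np.linalg.det(A - Dn)))

def log_deriv_series(P, Q, M):
    # coefficients (up to order M) of z * d/dz log(P(z)/Q(z)),
    # with P, Q given as coefficient lists [c0, c1, ...]
    def recip(C):  # power-series inverse of C, C[0] != 0
        r = [1.0 / C[0]]
        for n in range(1, M + 1):
            s = sum(C[k] * r[n - k] for k in range(1, min(n, len(C) - 1) + 1))
            r.append(-s / C[0])
        return r
    def ld(C):  # coefficients of z*C'(z)/C(z)
        d = [k * c for k, c in enumerate(C)]      # z*C'(z) has coeffs k*C[k]
        inv = recip(C)
        return [sum(d[k] * inv[n - k] for k in range(min(n, len(d) - 1) + 1))
                for n in range(M + 1)]
    return [p - q for p, q in zip(ld(P), ld(Q))]

# N_f(z) = (1 + 2z - 2z^2) / (1 - 4z - 8z^2)
coeffs = log_deriv_series([1.0, 2.0, -2.0], [1.0, -4.0, -8.0], 6)
for n in range(1, 7):
    assert abs(coeffs[n] - R(n)) < 1e-6
```

For instance $R(\phi)=\tfrac{1}{2}(3+9)=6$, matching the linear coefficient of $z\,\frac{d}{dz}\log N_f(z)$.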
\begin{Thm}\label{infra}
Let $f$ be a continuous map on an infra-nilmanifold with an affine homotopy lift $(d,D)$.
Assume $N(f)=|L(f)|$. Then the Nielsen zeta function $N_f(z)$ is a rational function and is equal to
\begin{equation*}
N_f(z)=L_f((-1)^qz)^{(-1)^r}
\end{equation*}
where $q$ is the number of real eigenvalues of $D_*$ which are $<-1$ and
$r$ is the number of real eigenvalues of $D_*$ of modulus $>1$.
When the Reidemeister zeta function $R_f(z)$ is defined, we have
$$
R_f(z)=R_\phi(z)=N_f(z).
$$
\end{Thm}
\begin{proof}
By \cite[Theorem~8.2.2]{P} $N(f)=|L(f)|$ implies $N(f^n)=|L(f^n)|$ for all $n$.
Let $\epsilon_n$ be the sign of $\det(I-D^n_*)$. Let $q$ be the number of real eigenvalues of $D_*$ which are less than $-1$ and $r$ be the number of real eigenvalues of $D_*$ of modulus $>1$. Then $\epsilon_n=(-1)^{r+qn}$.
By Theorem~\ref{AV-all}, we have that $\epsilon_1\det(I-A_*D_*)\ge0$ for all $A\in\Phi$. In particular, we have
$$
\det(I-A_*D_*)\det(I-B_*D_*)\ge0\quad \text{for all $A,B\in\Phi$}.
$$
Choose arbitrary $n>0$. By \cite[Lemma~8.2.1]{P},
$$
\det(I-A_*D_*^n)\det(I-D^n_*)\ge0\quad \text{for all $A\in\Phi$}.
$$
Hence we have $N(f^n)=\epsilon_n L(f^n)=(-1)^{r+qn} L(f^n)$. Consequently,
\begin{align*}
N_f(z)&=\exp\left(\sum_{n=1}^\infty\frac{N(f^n)}{n}z^n\right)\\
&=\exp\left(\sum_{n=1}^\infty\frac{(-1)^{r+qn} L(f^n)}{n}z^n\right)\\
&=\left(\exp\left(\sum_{n=1}^\infty\frac{L(f^n)}{n}((-1)^qz)^n\right)\right)^{(-1)^r}\\ &=L_f((-1)^qz)^{(-1)^r}
\end{align*}
is a rational function.
Assume $R_f(z)$ is defined. So, $R(f^n)=R(\phi^n)<\infty$ for all $n>0$. On infra-nilmanifolds, by Theorem~\ref{AV-all}, it is equivalent to saying that $\det(A_*-D_*^n)\ne0$ for all $A\in\Phi$ and all $n$, and hence $\sigma\left(\det(A_*-D_*^n)\right)=|\det(A_*-D_*^n)|$. Thus
\begin{align*}
R(f^n)=R(\phi^n)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det(A_*-D_*^n)\right)\\
&=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(A_*-D_*^n)|=N(f^n).
\end{align*}
This implies that $R_f(z)=R_\phi(z)=N_f(z)$.
\end{proof}
Therefore, for those classes of maps on infra-nilmanifolds for which the Anosov relation $N(f)=|L(f)|$ holds \cite{KL, Malfait, DRM} and for those classes of infra-nilmanifolds on which the Anosov relation $N(f)=|L(f)|$ holds for \emph{all} maps \cite{Anosov, DRM-JGP, DRM, DRP}, the Nielsen zeta functions and the Reidemeister zeta functions are rational functions.
In the general case, using the results of Theorem~\ref{T4.4}, Dekimpe and Dugardein described the Nielsen zeta function of $f$ as follows:
\begin{Thm}[{\cite[Theorem~4.5]{DeDu}}]\label{T4.5}
Let $f$ be a continuous map on an infra-nilmanifold $\Pi\backslash{G}$ with an affine homotopy lift $(d,D)$. Then the Nielsen zeta function is a rational function and is equal to
\begin{equation*}
N_f(z)=\begin{cases}
L_f((-1)^nz)^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
\left(\frac{L_{f_+}((-1)^nz)}{L_f((-1)^nz)}\right)^{(-1)^{p+n}}&\text{when $\Pi\ne\Pi_+$,}
\end{cases}
\end{equation*}
where $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$.
When the Reidemeister zeta function $R_f(z)$ is defined, we have
$$
R_f(z)=R_\phi(z)=N_f(z).
$$
\end{Thm}
\begin{Rmk}\label{NtoS1}
In \cite[Theorem~4.5]{DeDu} the Nielsen zeta function is expressed in terms of Lefschetz zeta functions $L_f(z)$ and $L_{f_+}(z)$ via a table given by parity of $p$ and $n$.
{The class of infra-solvmanifolds of type $(\mathrm{R})$ contains the class of infra-nilmanifolds and shares many of its properties, such as the averaging formula for Nielsen numbers, see \cite{HLP11,LL-Nagoya}. Therefore, Theorem~\ref{T4.4} and the statement about $N_f(z)$ in Theorem~\ref{T4.5} can be generalized directly to the class of infra-solvmanifolds of type $(\mathrm{R})$, see Remark in \cite[Sec.~\!4]{DeDu}.}
\end{Rmk}
To write down a functional equation for the Reidemeister and the Nielsen zeta function, we recall the following functional equation for the Lefschetz zeta function:
\begin{Lemma}[{\cite[Proposition~8]{Fri1}}, see also \cite{del}] \label{Fried}
{Let $M$ be a closed orientable manifold of dimension $m$ and let $f:M\to M$ be a continuous map of degree $d$. Then
$$
L_{f}\left(\frac{\alpha}{dz}\right)=\epsilon\,(-\alpha dz)^{(-1)^m\chi(M)}\,L_{f}(\alpha z)^{(-1)^m}
$$
where $\alpha=\pm1$ and $\epsilon\in\mathbb{C}$ is a non-zero constant such that if $|d|=1$ then $\epsilon=\pm1$.}
\end{Lemma}
\begin{proof}
In the Lefschetz zeta function formula \eqref{Lef}, we may replace $f_*$ by $f^*:H^*(M;\mathbb{Q})\to H^*(M;\mathbb{Q})$. Let $\beta_k=\dim H_k(M;\mathbb{Q})$ be the $k$th Betti number of $M$ and let $\lambda_{k,j}$ be the (complex and distinct) eigenvalues of ${f_*}_k:H_k(M;\mathbb{Q})\to H_k(M;\mathbb{Q})$.
Via the natural non-singular pairing in cohomology $H^k(M;\mathbb{Q})\otimes H^{m-k}(M;\mathbb{Q})\to\mathbb{Q}$, the operator $f^*_{m-k}$ is adjoint to $d\,(f^*_k)^{-1}$. Hence, since $\lambda_{k,j}$ is an eigenvalue of $f^*_k$, $\mu_{\ell,j}=d/\lambda_{k,j}$ is an eigenvalue of $f^*_{m-k}=f^*_\ell$, where $\ell=m-k$. Furthermore, $\beta_k=\beta_{m-k}=\beta_\ell$.
Consequently, we have
\begin{align*}
L_{f}\left(\frac{\alpha}{dz}\right)&=\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \left(1-\lambda_{k,j}\frac{\alpha}{dz}\right)^{(-1)^{k+1}}\\
&=\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \left({1-\frac{d}{\lambda_{k,j}}\alpha z}\right)^{(-1)^{k+1}}\left(-\frac{\alpha dz}{\lambda_{k,j}}\right)^{(-1)^{k}}\\
&=\prod_{\ell=0}^{m}\prod_{j=1}^{\beta_{m-\ell}} \left({1-\mu_{\ell,j}\alpha z}\right)^{(-1)^{m-\ell+1}}\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \left(-\frac{\alpha dz}{\lambda_{k,j}}\right)^{(-1)^{m-\ell}}\\
&=\left(\prod_{\ell=0}^{m}\prod_{j=1}^{\beta_{\ell}} \left({1-\mu_{\ell,j}\alpha z}\right)^{(-1)^{\ell+1}}\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \left(-\frac{\alpha dz}{\lambda_{k,j}}\right)^{(-1)^{\ell}}\right)^{(-1)^m}\\
&=L_{f}(\alpha z)^{(-1)^m}\cdot(-\alpha dz)^{\sum_{\ell=0}^m(-1)^\ell\beta_\ell}\cdot\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \lambda_{k,j}^{(-1)^{k+1}}\\
&=L_{f}(\alpha z)^{(-1)^m}\,\epsilon (-\alpha dz)^{(-1)^m\chi(M)}.
\end{align*}
Here,
\begin{align*}
\epsilon&=\prod_{k=0}^{m}\prod_{j=1}^{\beta_k} \lambda_{k,j}^{(-1)^{k+1}}=\pm\prod_{k=0}^m\det(f^*_k). \qedhere
\end{align*}
\end{proof}
We obtain:
\begin{Thm}[{Functional Equation}]\label{FE-case1}
Let $f$ be a continuous map on an {orientable infra-nilmanifold $M=\Pi\backslash{G}$} with an affine homotopy lift $(d,D)$. Then the Reidemeister zeta function, whenever it is defined, and the Nielsen zeta function have the following functional equations:
\begin{equation*}
R_{f}\left(\frac{1}{dz}\right)
=\begin{cases}
R_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
R_f(z)^{(-1)^m}\epsilon^{-1}&\text{when $\Pi\ne\Pi_+$}
\end{cases}
\end{equation*}
and
\begin{equation*}
N_{f}\left(\frac{1}{dz}\right)
=\begin{cases}
N_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
N_f(z)^{(-1)^m}\epsilon^{-1}&\text{when $\Pi\ne\Pi_+$}
\end{cases}
\end{equation*}
where $d$ is the degree of $f$, $m=\dim M$, $\epsilon$ is a constant in $\mathbb{C}^\times$, $\sigma=(-1)^n$, $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$. If $|d|=1$, then $\epsilon=\pm1$.
\end{Thm}
\begin{proof}
Assume $\Pi=\Pi_+$. Then $R_f(z)= N_f(z)=L_f(\sigma z)^{(-1)^{p+n}}$. By Lemma~\ref{Fried}, we have
\begin{align*}
R_f\left(\frac{1}{dz}\right)=N_f\left(\frac{1}{dz}\right)&= L_f\left(\frac{\sigma}{dz}\right)^{(-1)^{p+n}}\\
&=\left(\epsilon(-\sigma dz)^{(-1)^m\chi(M)}L_f(\sigma z)^{(-1)^m}\right)^{(-1)^{p+n}} \\
&=N_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}(-\sigma dz)^{(-1)^{m+p+n}\chi(M)}\\
&= R_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}(-\sigma dz)^{(-1)^{m+p+n}\chi(M)}.
\end{align*}
Assume now that $\Pi\ne\Pi_+$. First we claim that $f$ and $f_+$ have the same degree. Let $\pi:M_+\to M$ be the natural double covering projection. Then $\Pi/\Pi_+\cong\mathbb{Z}_2$ is the group of covering transformations of $\pi$. By \cite[III.2]{Bredon}, the homomorphism $\pi^*:H^m(M;\mathbb{Q})\to H^m(M_+;\mathbb{Q})$ restricts to an isomorphism onto the fixed subspace $H^m(M_+;\mathbb{Q})^{\Pi/\Pi_+}$. In particular, $\pi^*$ is injective.
If $x$ is the nontrivial covering transformation, we have the commutative diagram
\centerline {\xymatrix{M_+ \ar[dr]_\pi \ar[rr]^{x}&& M_+\ar[dl]^{\pi}\\
&M&}}
\smallskip
\noindent
This induces the following commutative diagram
\centerline {\xymatrix{H^m(M_+;\mathbb{Q}) \ar[rr]^{x^*}&& H^m(M_+;\mathbb{Q})\\
&H^m(M;\mathbb{Q})\ar[ul]^{\pi^*}\ar[ur]_{\pi^*}&}}
\smallskip
\noindent
We denote generators of $H^m(M;\mathbb{Q})$ and $H^m(M_+;\mathbb{Q})$ by $[M]$ and $[M_+]$, respectively.
The above diagram shows that $x^*(\pi^*([M]))=\pi^*([M])$, which implies that $x^*([M_+])=[M_+]$ as $\pi^*$ is injective, and hence $x$ acts on $H^m(M_+;\mathbb{Q})$ trivially. In other words, $H^m(M_+;\mathbb{Q})=H^m(M_+;\mathbb{Q})^{\Pi/\Pi_+}$ and $\pi^*:H^m(M;\mathbb{Q})\to H^m(M_+;\mathbb{Q})$ is an isomorphism. This implies that $f$ and $f_+$ have the same degree.
By Theorem~\ref{T4.5} and Lemma~\ref{Fried}, we have
\begin{align*}
R_f\left(\frac{1}{dz}\right) = N_f\left(\frac{1}{dz}\right) &= L_{f_+}\left(\frac{\sigma}{dz}\right)^{(-1)^{p+n}}\cdot
L_f\left(\frac{\sigma}{dz}\right)^{(-1)^{p+n+1}} \\
&= \left(\epsilon(-\sigma dz)^{(-1)^m\chi(M)}L_{f_+}(\sigma z)^{(-1)^m}\right)^{(-1)^{p+n}}\\
&\quad\times\left(\epsilon(-\sigma dz)^{(-1)^m\chi(M)}L_{f}(\sigma z)^{(-1)^m}\right)^{(-1)^{p+n+1}}\\
&=N_f(z)^{(-1)^m}\left(\epsilon(-\sigma dz)^{(-1)^m\chi(M)}\right)^{-1}\\
&= R_f(z)^{(-1)^m}\left(\epsilon(-\sigma dz)^{(-1)^m\chi(M)}\right)^{-1}.
\end{align*}
On the other hand, it is known that $\chi(M)=0$, e.g. see the remark below, which finishes our proof.
\end{proof}
\begin{Rmk}
Let $G$ be a torsion-free polycyclic group. Then $\chi(G)=0$. For, by induction, we may assume that $G$ is an extension of $\mathbb{Z}^m$ by $\mathbb{Z}^n$; then as $\chi(\mathbb{Z})=0$, we have $\chi(G)=\chi(\mathbb{Z}^m)\chi(\mathbb{Z}^n)=0$, \cite[Theorem~6.4.2]{Dekimpe}. {Another proof: A solvmanifold is aspherical and its fundamental group contains a nontrivial Abelian normal subgroup. By Gottlieb's theorem, its Euler characteristic is zero.} If $S$ is a torsion-free extension of $G$ by a finite group of order $k$, then $k\cdot\chi(S)=\chi(G)=0\Rightarrow \chi(S)=0$.
\end{Rmk}
\begin{Rmk}\label{NtoS2}
As it is mentioned above, since Theorem~\ref{T4.5} is true for the Nielsen zeta functions on infra-solvmanifolds of type $(\mathrm{R})$, the functional equation for the Nielsen zeta functions in Theorem~\ref{FE-case1} is true on infra-solvmanifolds of type $(\mathrm{R})$ (see Theorem~\ref{zeta-S} for the Reidemeister zeta functions).
\end{Rmk}
\section{Asymptotic Nielsen numbers}\label{Asymptotic}
The growth rate of a sequence $a_n$ of complex numbers is defined by
$$
\grow (a_n):=\max \left\{1, \limsup_{n \rightarrow \infty} |a_n|^{1/n}\right\}.
$$
We define the asymptotic Nielsen number \cite{I} and the asymptotic Reidemeister number to be the growth rate
$N^{\infty}(f) := \grow(N(f^n))$ and $R^{\infty}(f) := \grow(R(f^n))$, respectively.
These asymptotic numbers are homotopy type invariants.
We denote by $\mathrm{sp}(A)$ the spectral radius of the matrix or the operator $A$, $\mathrm{sp}(A)=\lim_n \sqrt[n]{\| A^n \|}$, which coincides with {the largest modulus of an eigenvalue of $A$}.
We denote by $\bigwedge F_* :=\bigoplus_{\ell=0}^m\bigwedge^\ell F_*$ the linear operator induced by $F_*$ on the exterior algebra $\bigwedge^* \mathbb{R}^m :=\bigoplus_{\ell=0}^m\bigwedge^\ell \mathbb{R}^m$ of $\mathfrak{G}$, considered as the linear space $\mathbb{R}^m$.
\begin{Thm}[{see also the proof of \cite[Theorem~1.5]{mp}}]\label{MP-nil}
Let $M=\Gamma\backslash{S}$ be a {special solvmanifold of type $(\mathrm{R})$} and let $f$ be a continuous map on $M$ with a Lie group homomorphism $D:S\to S$ as a homotopy lift. Then we have
$$
N^{\infty}(f) =\mathrm{sp}(\bigwedge D_{*})
$$
provided that $1$ is not in the spectrum of $D_{*}$.
\end{Thm}
\begin{proof}
We give a \emph{very elementary proof} of this theorem. {Compare with the proof of \cite[Theorem~1.5]{mp} in which the authors are interested only in the case of positive topological entropy which excludes the case $N^\infty(f)=1$.}
By \cite[Theorem~2.2]{LL-Nagoya}, we may assume that $f$ is induced by a Lie group homomorphism $D:S\to S$.
Let $\{\lambda_1,\cdots,\lambda_m\}$ be the eigenvalues of $D_*$, counted with multiplicities.
First we note from definition that
\begin{align*}
\mathrm{sp}(\bigwedge D_*)=\begin{cases}\prod_{|\lambda_j|>1}|\lambda_j|&\text{when $\mathrm{sp}(D_*)>1$}\\
1 &\text{when $\mathrm{sp}(D_*)\le1$.}\end{cases}
\end{align*}
In the case when $\mathrm{sp}(D_*)\le1$, the eigenvalues of $\bigwedge^{q\ge1} D_*$ are products of eigenvalues of $D_*$, hence of modulus $\le1$. On the other hand $\bigwedge^0D_*=\mathrm{id}$, and hence $\mathrm{sp}(\bigwedge D_*)=1$.
Recalling $N(f^n)=|\det(I-D_*^n)|=\prod_{j=1}^m|1-\lambda_j^n|$, we first consider the case where $N(f^n)\ne0$ for all $n$. Then we have
\begin{align*}
\log\limsup_{n\to\infty} N(f^n)^{1/n}&=\limsup_{n\to\infty}\frac{1}{n}\sum_{j=1}^m\log|1-\lambda_j^n|\\
&=\sum_{j=1}^m\limsup_{n\to\infty}\frac{1}{n}\log|1-\lambda_j^n|.
\end{align*}
If $|\lambda|\le1$ then $\limsup_n\frac{1}{n}\log|1-\lambda^n|=0$. For, $\log|1-\lambda^n|\le\log2$. If $|\lambda|>1$ then using L'H\^{o}pital's rule, we have
$$
|\lambda|^n-1\le|1-\lambda^n|\le |\lambda|^n+1
\Rightarrow
\lim_{n\to\infty}\frac{1}{n}\log|1-\lambda^n|=\log|\lambda|.
$$
Hence
\begin{align*}
N^\infty(f)&=\max\left\{1,\limsup_{n\to\infty}N(f^n)^{1/n}\right\}\\
&=\max\left\{1,\prod_{|\lambda|>1}|\lambda|\right\}\\
&=\mathrm{sp}(\bigwedge D_*).
\end{align*}
Next we consider the case where $N(f^n)=0$ for some $n$. Thus some $\lambda_j$ is an $n$th root of unity. For each such $\lambda_j$, consider all $k$'s for which $|1-\lambda_j^k|\ne0$. Since by the assumption $\lambda_j\ne1$, there are infinitely many such $k$'s. Furthermore, there are infinitely many $k$'s for which $|1-\lambda_j^k|\ne0$ for all such (finitely many) $\lambda_j$. Therefore, {when $\mathrm{sp}(D_*)>1$} we have
\begin{align*}
\log\limsup_{n\to\infty} N(f^n)^{1/n}
&=\limsup_{k\to\infty}\frac{1}{k}\log N(f^k)\\
&=\limsup_{k\to\infty}\frac{1}{k}\sum_{j=1}^m\log|1-\lambda_j^k|\\
&=\sum_{|\lambda|>1}\limsup_{k\to\infty}\frac{1}{k}\log|1-\lambda^k|\\
&=\log\left(\prod_{|\lambda|>1}|\lambda|\right);
\end{align*}
{when $\mathrm{sp}(D_*)\le1$ we have $\log\limsup_{n} N(f^n)^{1/n}=0$.}
This completes the proof.
\end{proof}
In fact, what we have shown in the above proof is the following:
\begin{Cor}\label{MP-m}
Let $D$ be a matrix with eigenvalues $\lambda_1,\cdots,\lambda_m$, counted with multiplicities. Let $L(D^n)=\det(I-D^n)$. If $1$ is not in the spectrum of $D$, then
$$
\mathrm{Growth}\left(L(D^n)\right)=\mathrm{sp}(\bigwedge D).
$$
\end{Cor}
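Corollary~\ref{MP-m} is easy to test numerically; this sketch (illustrative only) uses a matrix with eigenvalues $2$ and $-3$, so $1$ is not in the spectrum and $\mathrm{sp}(\bigwedge D)=|2|\cdot|{-3}|=6$:

```python
import numpy as np

# upper triangular D with eigenvalues 2 and -3 (1 is not in the spectrum)
D = np.array([[2.0, 1.0], [0.0, -3.0]])
n = 200
Ln = np.linalg.det(np.eye(2) - np.linalg.matrix_power(D, n))
growth = abs(Ln) ** (1.0 / n)
# Growth(L(D^n)) should approach the product of the moduli of eigenvalues > 1
assert abs(growth - 6.0) < 0.1
```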
\bigskip
Recall that if $f:M\to M$ is a continuous map on an infra-nilmanifold $M=\Pi\backslash{G}$ with the holonomy group $\Phi$ and if $f$ has an affine homotopy lift $(d,D)$, then $f$ induces a homomorphism $\phi:\Pi\to\Pi$ defined by the rule:
$$
\forall\alpha\in\Pi,\ \phi(\alpha)\circ(d,D)=(d,D)\circ\alpha.
$$
Furthermore, the homomorphism $\phi$ induces a {function} $\hat\phi:\Phi\to\Phi$ satisfying the identity (\ref{Dekimpe-eq}):
$$
\forall A\in\Phi,\ \hat\phi(A)D=DA.
$$
For any $n\ge1$, we can observe that:
\begin{enumerate}
\item $f^n$ has an affine homotopy lift $(d,D)^n=(*,D^n)$,
\item $f^n$ induces a homomorphism $\phi^n:\Pi\to\Pi$,
\item the homomorphism $\phi^n$ induces a function $\widehat{\phi^n}={\hat\phi}^n:\Phi\to\Phi$.
\end{enumerate}
Recall from Theorem~\ref{AV-all} the averaging formula:
$$
N(f^n)=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(I-A_*D_*^n)|.
$$
Since
$$
\frac{1}{|\Phi|}|\det(I-D_*^n)|\le N(f^n),
$$
we have
\begin{align*}
&\frac{1}{n}\log N(f^n)\ge\frac{1}{n}\left(\log|\det(I-D_*^n)|-\log|\Phi|\right)\\
&\Rightarrow
\limsup\frac{1}{n}\log N(f^n)\ge\limsup\frac{1}{n}\left(\log|\det(I-D_*^n)|\right).
\end{align*}
It follows from Corollary~\ref{MP-m} that
\begin{align*}
\mathrm{sp}(\bigwedge D_*)=\mathrm{Growth}(L(D_*^n))\le N^\infty(f).
\end{align*}
Next we recall \cite[Lemma~3.1]{DRM}: Given $A\in\Phi$, we can choose a sequence $(B_i)_{i\in\mathbb{N}}$ of elements in $\Phi$ by taking $B_1=A$ and $B_{i+1}=\hat\phi(B_i)$, where $\hat\phi$ is associated to $f$. Since $\Phi$ is finite, this sequence will become periodic from a certain point onwards. Namely, there exist $j,k\ge1$ such that $B_{j+k}=B_j$. It is shown in \cite[Lemma~3.1]{DRM} that
\begin{enumerate}
\item $\forall i\in\mathbb{N},\ \det(I-\hat\phi(B_i)_*D_*)=\det(I-\hat\phi(B_{i+1})_*D_*)$,
\item $\exists \ell\in\mathbb{N}$ such that $(\hat\phi(B_j)_*D_*)^\ell=D_*^\ell$,
\end{enumerate}
Since $A$ is of finite order, $\det A_*=\pm1$.
Let $\lambda_1,\cdots,\lambda_m$ be the eigenvalues of $D_*$ counted with multiplicities and let $\mu_1,\cdots,\mu_m$ be the eigenvalues of $\hat\phi(B_j)_*D_*$ counted with multiplicities. Since $(\hat\phi(B_j)_*D_*)^\ell=D_*^\ell$, $(\hat\phi(B_j)_*D_*)^\ell$ has the eigenvalues
$$
\{\lambda_1^\ell, \cdots, \lambda_m^\ell\}=\{\mu_1^\ell,\cdots,\ \mu_m^\ell\}.
$$
We may assume that $\lambda_i^\ell=\mu_i^\ell$ for all $i=1,\cdots,m$. Thus $|\mu_i|=|\lambda_i|$.
Now, \begin{align*}
|\det(I-A_*D_*)|&=|\det A_*\det(A_*^{-1}-D_*)|=|\det(I-D_*A_*)|\\
&=|\det(I-\hat\phi(B_j)_*D_*)|\ \text{ (by (1))}\\
&=\prod_{i=1}^m|1-\mu_i|\le \prod_{i=1}^m(1+|\mu_i|) \ \text{ (by triangle inequality)}\\
&=\prod_{i=1}^m(1+|\lambda_i|).
\end{align*}
Applying the above argument to $D^n$, we obtain that
$$
|\det(I-A_*D_*^n)|\le \prod_{i=1}^m\left(1+|\lambda_i|^n\right).
$$
By the averaging formula, we have
\begin{align*}
N(f^n)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(I-A_*D_*^n)|\\
&\le\frac{1}{|\Phi|}\sum_{A\in\Phi}\prod_{i=1}^m\left(1+|\lambda_i|^n\right)
=\prod_{i=1}^m\left(1+|\lambda_i|^n\right),
\end{align*}
which induces
\begin{align*}
\limsup\frac{1}{n}\log N(f^n)&\le\sum_{i=1}^m\
\limsup\frac{1}{n}\log\left(1+|\lambda_i|^n\right)\\
&=\sum_{|\lambda|>1} \log|\lambda|=\log\left(\prod_{|\lambda|>1}|\lambda|\right).
\end{align*}
Hence it follows that
\begin{align*}
N^\infty(f)\le\mathrm{sp}(\bigwedge D_*).
\end{align*}
Because the above (algebraic) properties \cite{DRM} and the averaging formula for the Nielsen number \cite{LL-Nagoya} on infra-nilmanifolds can be generalized to infra-solvmanifolds of type $(\mathrm{R})$, altogether we have proved:
\begin{Thm}\label{MP-infranil}
Let $f$ be a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$. Then we have
$$
N^{\infty}(f) =\mathrm{sp}(\bigwedge D_{*})
$$
provided that $1$ is not in the spectrum of $D_{*}$.
\end{Thm}
\begin{Rmk}
The above theorem was proved when $M$ is a special solvmanifold of type $(\mathrm{R})$, see Theorem~\ref{MP-nil} and the proof of \cite[Theorem~1.5]{mp}. {In the paper \cite{mp}, it is assumed that $\mathrm{sp}(D_*)>1$. Since $N(f)=|\det(I-D_*)|$, $1$ is not in the spectrum of $D_*$ if and only if $f$ is not homotopic to a fixed point free map.}
\end{Rmk}
\bigskip
Now we provide an example of asymptotic Nielsen numbers.
\begin{Example}
Let $f:\Pi\backslash\mathbb{R}^2\to\Pi\backslash\mathbb{R}^2$ be any continuous map on the Klein bottle $\Pi\backslash\mathbb{R}^2$ of type $(r,\ell,q)$. Recall from \cite[Theorem~2.3]{KLY} and its proof that $r$ is odd or $q=0$, and
\begin{align*}
N(f^n)&=\begin{cases}
|q^n(1-r^n)|&\text{when $r$ is odd and $q\ne0$;}\\
|1-r^n|&\text{when $q=0$,}
\end{cases}\\
D_*&=\begin{cases}
\left[\begin{matrix}r&0\\0&q\end{matrix}\right]&\text{when $r$ is odd and $q\ne0$;}\\
\left[\begin{matrix}r&0\\2\ell&0\end{matrix}\right]&\text{when $q=0$.}
\end{cases}
\end{align*}
Assume $q=0$. If $|r|\le1$, then $N(f^n)\le2$ and so $N^\infty(f)=1$; if $|r|>1$ then
\begin{align*}
\log\limsup_{n\to\infty} N(f^n)^{1/n}&=\limsup_{n\to\infty}\frac{1}{n}\log|1-r^n|
=\log|r|.
\end{align*}
Thus $N^\infty(f)=\max\{1,|r|\}$.
Assume $q\ne0$ and $r$ is odd. If $r=1$ then $N(f^n)=0\Rightarrow N^\infty(f)=1$. If $r\ne1$ is odd, then
\begin{align*}
\log\limsup_{n\to\infty} N(f^n)^{1/n}&=\limsup_{n\to\infty}\left(\log|q|+\frac{1}{n}\log|1-r^n|\right)\\
&=\begin{cases}
\log|q|&\text{when $|r|\le1$, i.e., $r=-1$;}\\
\log|qr|&\text{when $|r|>1$.}
\end{cases}
\end{align*}
Thus
\begin{align*}
N^\infty(f)=\begin{cases}
1&\text{when $q\ne0$ and $r=1$;}\\
\max\{1, |q|, |qr|\}&\text{when $q\ne0$ and $r\ne1$ is odd}.
\end{cases}
\end{align*}
On the other hand, since $\mathrm{sp}(\bigwedge D_*)$ is the largest modulus of an eigenvalue of $\bigwedge D_*$, it follows that
$$
\mathrm{sp}(\bigwedge D_*)=\max\{1,|r|,|q|,|qr|\}.
$$
Hence:
\begin{enumerate}
\item If $r=1$, then $N^\infty(f)=1$ and $\mathrm{sp}(\bigwedge D_*)=|q|\ge1$ (since $r=1$ is odd and so $q\ne0$).
\item If $r=0$ (even), then $q$ must be $0$ and so $N^\infty(f)=\mathrm{sp}(\bigwedge D_*)=1$.
\item Otherwise, $N^\infty(f)=\mathrm{sp}(\bigwedge D_*)$.
\end{enumerate}
We observe explicitly in this example that the condition that $1$ is not in the spectrum of $D_*$
implies the identity $N^\infty(f)=\mathrm{sp}(\bigwedge D_*)$. {If $q=0$ then $\mathrm{sp}(D_*)=|r|$ and so $N^\infty(f)=\max\{1,|r|\}=\mathrm{sp}(\bigwedge D_*)$.} If $q\ne0$ then $r$ is odd and $\mathrm{sp}(D_*)=\max\{|r|,|q|\}>1$; if $\mathrm{sp}(D_*)=|r|\ge|q|$ then $|r|>1$ and so $N^\infty(f)=|qr|=\mathrm{sp}(\bigwedge D_*)$; if $\mathrm{sp}(D_*)=|q|\ge|r|$ then $|q|>1$ and $|r|>1$ or $r=-1$ (because $r$ cannot be $1$) so $N^\infty(f)=|qr|=\mathrm{sp}(\bigwedge D_*)$.
\end{Example}
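The case analysis above can be checked mechanically; the following sketch (illustrative only) uses the formulas for $N(f^n)$ quoted from \cite{KLY} and compares the empirical growth rate with $\mathrm{sp}(\bigwedge D_*)$ for several types $(r,\ell,q)$:

```python
def growth(seq_vals):
    # crude growth-rate estimate: max{1, a_n^(1/n)} over the given (n, a_n)
    return max([1.0] + [abs(a) ** (1.0 / n) for n, a in seq_vals if a != 0])

def N(n, r, q):
    # Nielsen numbers of f^n on the Klein bottle for a map of type (r, l, q)
    return abs(q**n * (1 - r**n)) if q != 0 else abs(1 - r**n)

def sp_wedge(r, q):
    return max(1, abs(r), abs(q), abs(q * r))

# types respecting the constraint "r is odd or q = 0"
for (r, q) in [(3, 2), (-1, 5), (2, 0), (-4, 0)]:
    vals = [(n, N(n, r, q)) for n in range(150, 160)]
    assert abs(growth(vals) - sp_wedge(r, q)) < 0.2

# the exceptional case r = 1: N(f^n) = 0 for all n, so N^infty(f) = 1 < |q|
assert growth([(n, N(n, 1, 3)) for n in range(1, 20)]) == 1.0
assert sp_wedge(1, 3) == 3
```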
\section{Topological entropy and the radius of convergence}\label{EC}
The most widely used measure for the complexity of a dynamical system is the topological
entropy. For the convenience of the reader, we include its definition.
Let $ f: X \rightarrow X $ be a self-map of a compact metric space. For given $\epsilon > 0 $
and $ n \in \mathbb{N} $, a subset $E \subset X$ is said to be $(n,\epsilon)$-separated under $f$ if for
each pair $x \not= y$ in $E$ there is $0 \leq i <n $ such that $ d(f^i(x), f^i(y)) > \epsilon$.
Let $s_n(\epsilon,f)$ denote the largest cardinality of any $(n,\epsilon)$-separated subset $E$
under $f$. Thus $s_n(\epsilon,f)$ is the greatest number of orbit segments $\{x,f(x),\dots,f^{n-1}(x)\}$
of length $n$ that can be distinguished one from another provided we can only distinguish
between points of $X$ that are at least $\epsilon$ apart. Now let
$$
h(f,\epsilon):= \limsup_{n} \frac{1}{n}\log \,s_n(\epsilon,f)
$$
and
$$
h(f):=\limsup_{\epsilon \rightarrow 0} h(f,\epsilon).
$$
The number $0\leq h(f) \leq \infty$, which can be shown to be independent of the metric $d$ used, is called the topological entropy of $f$.
If $ h(f,\epsilon)> 0$ then, up to resolution $ \epsilon >0$, the number $s_n(\epsilon,f)$ of
distinguishable orbit segments of length $n$ grows exponentially with $n$. So $h(f)$
measures the growth rate in $n$ of the number of orbit segments of length $n$
with arbitrarily fine resolution.
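For orientation, we recall a classical computation (standard, and included here only as an illustration, not taken from the text above): the entropy of a hyperbolic toral automorphism.

```latex
% The "cat map" f_A on T^2 = R^2/Z^2 is induced by A = [[2,1],[1,1]],
% with eigenvalues \lambda = (3+\sqrt5)/2 > 1 and \lambda^{-1} < 1.
% For small \epsilon the count of (n,\epsilon)-separated orbit segments
% grows like \lambda^n, so
\begin{align*}
h(f_A) &= \lim_{\epsilon\to0}\limsup_{n\to\infty}\frac{1}{n}\log s_n(\epsilon,f_A)
       = \log\lambda = \log\frac{3+\sqrt{5}}{2}.
\end{align*}
```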
A basic relation between topological entropy $h(f)$ and Nielsen numbers was found by N. Ivanov
\cite{I}. We present here a very short proof, due to B. Jiang, of Ivanov's inequality.
\begin{Lemma}[{\cite{I}}]
\label{Iv}
Let $f$ be a continuous map on a compact connected polyhedron $X$. Then
$$
h(f) \geq {\log N^\infty(f)}
$$
\end{Lemma}
\begin{proof}
Let $\delta$ be such that every loop in $X$ of diameter $ < 2\delta $ is contractible.
Let $\epsilon >0$ be a smaller number such that $d(f(x),f(y)) < \delta $ whenever $ d(x,y)<2\epsilon $. Let $E_n \subset X $ be a set consisting of one point from each essential fixed point class of $f^n$. Thus $|E_n| =N(f^n) $. By the definition of $h(f)$, it suffices to show that $E_n$ is $(n,\epsilon)$-separated. Suppose it is not. Then there would be two points $x\not=y \in E_n$ such that $ d(f^i(x), f^i(y)) \leq \epsilon$ for $0\leq i< n$, hence for all $i\geq 0$. Pick a path $c_i$ from $f^i(x)$ to $f^i(y)$ of diameter $< 2\epsilon$ for $ 0\leq i< n$ and let $c_n=c_0$. By the choice of $\delta$ and $\epsilon$, $f\circ c_i \simeq c_{i+1} $ for all $i$, so $f^n\circ c_0\simeq c_n=c_0$.
This means that $x$ and $y$ are in the same fixed point class of $f^n$, contradicting the construction of $E_n$.
\end{proof}
This inequality is remarkable in that it does not require smoothness of the map and provides a common lower bound for the topological entropy of all maps in a homotopy class.
Let $H^*(f): H^*(M;\mathbb{R}) \to H^*(M;\mathbb{R})$ be the linear map induced by $f$ on the total cohomology $H^*(M;\mathbb{R})$ of $M$ with real coefficients. By $\mathrm{sp}(f)$ we denote the spectral radius of $H^*(f)$, which is a homotopy invariant. In 1974 Michael Shub asked \cite{Shub} to what extent the inequality
$$
h(f) \ge \log(\mathrm{sp}(f))
$$
holds. Since then this inequality has usually been called the Entropy Conjecture. Later A. Katok conjectured \cite{Katok} that the Entropy Conjecture holds for every continuous map of a manifold $M$ whose universal cover is homeomorphic to $\mathbb{R}^m$. In \cite{mp-a}, this was confirmed for every continuous map on an infra-nilmanifold.
\begin{Thm}
\label{AB}
Let $f$ be a continuous map on an infra-solvmanifold $M$ of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$.
If $1$ is not in the spectrum of $D_*$, then
$$
h(f)\ge \log(\mathrm{sp}(f)).
$$
If $\bar{f}$ is the map on $M$ induced by the affine map $(d,D)$, then
\begin{align*}
&h(f)\ge h(\bar{f})\ge\log\mathrm{sp}(f),\\
&h(\bar{f})=\log\mathrm{sp}(\bigwedge D_*)=\log N^\infty(\bar{f})=\log N^\infty(f).
\end{align*}
Hence $\bar{f}$ minimizes the entropy in the homotopy class of $f$.
\end{Thm}
\begin{proof}
Let $\bar{f}$ be the map on $M$ induced by the affine map $(d,D)$. Thus $f$ is homotopic to $\bar{f}$. By \cite[Lemma~2.1]{LL-Nagoya}, there is a special solvmanifold which regularly and finitely covers the infra-solvmanifold $M$ so that $f$ can be lifted to $\hat{f}$ on the solvmanifold. We also remark that the Lie group homomorphism $\tau_dD$ induces a map $\bar{\hat{f}}$ on the solvmanifold so that $\bar{f}$ lifts to $\bar{\hat{f}}$, $\hat{f}$ is homotopic to $\bar{\hat{f}}$, the linearization $D_*$ of $f$ is also a linearization of the lift $\hat{f}$, and the topological entropies of $f,\bar{f}$ coincide with those of their lifts $\hat{f}, \bar{\hat{f}}$, i.e., $h(f)=h(\hat{f})$ and $h(\bar{f})=h(\bar{\hat{f}})$. Moreover, since the spectral radius is a homotopy invariant, $\mathrm{sp}(f)=\mathrm{sp}(\bar{f})$ and $\mathrm{sp}(\hat{f})=\mathrm{sp}(\bar{\hat{f}})$. It is also known that $\mathrm{sp}(f)\le\mathrm{sp}(\hat{f})$. See, for example, \cite[Proposition~2]{mp-a}.
Now observe that
\begin{align*}
\log\mathrm{sp}(\bigwedge D_*)&=\log N^\infty(f)\ \text{ (Theorem~\ref{MP-infranil})}\\
&=\log N^\infty(\bar{f})\ \text{ (homotopy invariance of $N^\infty(\cdot)$)}\\
&\le h(\bar{f})\ \text{ (Lemma~\ref{Iv})}\\
&=h(\bar{\hat{f}})\ \text{ (lift under a finite regular cover)}\\
&\le\log\mathrm{sp}(\bigwedge D_*).
\end{align*}
The crucial point is the last inequality. It follows from the estimate for the topological entropy of a $C^1$ self-map $f$ of a compact manifold $M$
$$
h(f)\le\limsup_{n\to\infty}\frac{1}{n}\log\sup_{x\in M}||\bigwedge Df^n(x)||
$$
given in \cite{pz} (see \cite{mp} for another reference on this estimate). Next we observe that for an affine map $f=(d,D)$ the right-hand side reduces to $\limsup_{n\to\infty}\frac{1}{n}\log||\bigwedge D_*^n||=\log\mathrm{sp}(\bigwedge D_*)$, by the spectral radius formula.
This implies that
$$
\log\mathrm{sp}(\bigwedge D_*)=\log N^\infty(f)=\log N^\infty(\bar{f})=h(\bar{f}).
$$
Furthermore,
\begin{align*}
\log\mathrm{sp}(\bigwedge D_*)&\ge \log\mathrm{sp}(\bar{\hat{f}})\ \text{ (\cite[Theorem~2.1]{mp})}\\
&=\log\mathrm{sp}(\hat{f})\ \text{ (homotopy invariance of $\mathrm{sp}(\cdot)$)}\\
&\ge\log\mathrm{sp}(f)\ \text{ (lift under a finite regular cover)}.
\end{align*}
Thus we have
\begin{align*}
\log\mathrm{sp}(f) \le \log\mathrm{sp}(\bigwedge D_*)&=\log N^\infty(f)=\log N^\infty(\bar{f})=h(\bar{f})
\\&\le h(f).
\end{align*}
The last inequality follows from Ivanov's inequality, Lemma~\ref{Iv}.
\end{proof}
\begin{Rmk}
If $\mathrm{sp}(D_*)\le1$, then $\mathrm{sp}(\bigwedge D_*)=1$ and so we have the obvious inequality
$$
h(f)\ge0=\log\mathrm{sp}(\bigwedge D_*)\ge\log\mathrm{sp}(f).
$$
\end{Rmk}
\begin{Rmk}
The first inequality $h(f)\ge\log\mathrm{sp}(f)$ in Theorem~\ref{AB}, i.e. the Entropy Conjecture, also follows from \cite[Proposition~4.2 and Theorem~1.5]{mp} and by taking a regular finite covering to a special solvmanifold of type $(\mathrm{R})$. The second inequality in Theorem~\ref{AB} generalizes the corresponding results \cite[Theorem~4.13]{mp} and \cite[Theorem~B]{mp-a} on nilmanifolds and infra-nilmanifolds.
\end{Rmk}
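Theorem~\ref{AB} can be checked directly in a familiar special case (a standard example, offered here only as an illustration): the cat map on the torus, viewed as a nilmanifold and hence an infra-solvmanifold of type $(\mathrm{R})$.

```latex
% f = f_A on T^2 with A = [[2,1],[1,1]]; D_* = A has eigenvalues
% \lambda = (3+\sqrt5)/2 and \lambda^{-1}, so 1 is not in the spectrum of D_*.
\begin{align*}
\mathrm{sp}(\bigwedge D_*) &= \lambda
  \quad(\text{the largest product of eigenvalues over subsets}),\\
N(f^n) &= |\det(I-A^n)| = \lambda^n+\lambda^{-n}-2,
  \qquad N^\infty(f)=\lambda,\\
h(f) &= \log N^\infty(f) = \log\mathrm{sp}(\bigwedge D_*)
      = \log\lambda = \log\mathrm{sp}(f),
\end{align*}
% so the affine map minimizes entropy in its homotopy class and the
% Entropy Conjecture holds here with equality.
```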
We denote by $R$ the radius of convergence of the zeta functions $N_f(z)$ or $R_f(z)$.
\begin{Thm}\label{RC}
Let $f$ be a continuous map on an infra-nilmanifold with an affine homotopy lift $(d,D)$. Then the Nielsen zeta function $N_f(z)$ and the Reidemeister zeta function $R_f(z)$, whenever it is defined, have the same positive radius of convergence $R$, which admits the following estimate
\begin{equation*}
R \geq \exp(-h)>0,
\end{equation*}
where $h=\inf \{h(g)\mid g\simeq f\}$.
If $1$ is not in the spectrum of $D_{*}$, the radius $R$ of convergence of $R_f(z)$ is
$$
R=\frac{1}{N^{\infty}(f)}=\frac{1}{\exp h(\bar{f})}
=\frac{1}{\mathrm{sp}(\bigwedge D_{*})}.
$$
\end{Thm}
\begin{proof}
When $R_f(z)$ is defined, as it was observed before, $R(f^n)<\infty$ and so $R(f^n)=N(f^n)>0$ for all $n>0$ on infra-nilmanifolds. In particular, $R_f(z)=N_f(z)$. By the Cauchy-Hadamard formula,
$$
\frac{1}{R}=\limsup_{n\to\infty}\left(\frac{N(f^n)}{n}\right)^{1/n}
=\limsup_{n\to\infty}N(f^n)^{1/n}.
$$
Since $N(f^n)\ge1$ for all $n>0$, it follows that $\limsup_{n\to\infty}N(f^n)^{1/n}\ge1$. Thus
$$
\frac{1}{R}= N^\infty(f)\le\exp h(f).
$$
This yields the inequality $R\ge\exp(-h)$ by the homotopy invariance of the radius $R$ of the Reidemeister zeta function $R_f(z)$. Indeed, consider a smooth map $g:M\rightarrow M$ which is homotopic to $f$. As is known from \cite{pz}, the entropy $h(g)$ is finite. Thus $\exp(-h) \geq \exp(-h(g)) > 0$. Now the identities in our theorem follow from Theorem~\ref{AB}.
Consider next the Nielsen zeta function $N_f(z)$. If $\limsup_{n\to\infty}N(f^n)^{1/n}\ge1$, then we obtain the same inequality for $R$ as for $R_f(z)$. Thus, we assume $\limsup_{n\to\infty}N(f^n)^{1/n}<1$. This happens only when $N(f^n)=0$ for all but finitely many $n$. In this case, $1/R=\limsup_{n\to\infty}N(f^n)^{1/n}=0$ and so $R=\infty$ and $N^\infty(f)=1$.
\end{proof}
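As a quick sanity check of Theorem~\ref{RC} (again a standard example rather than part of the text), one may compute the radius of convergence for an expanding circle map, using the well-known count $N(f^n)=|d^n-1|$ for a degree-$d$ self-map of $S^1$:

```latex
% f(z) = z^d on S^1 with |d| >= 2; here D_* = (d), so 1 is not in the
% spectrum of D_*, and sp(\wedge D_*) = max{1,|d|} = |d|.
\begin{align*}
\frac{1}{R} &= \limsup_{n\to\infty} N(f^n)^{1/n}
            = \limsup_{n\to\infty}|d^n-1|^{1/n} = |d|,\\
R &= \frac{1}{|d|} = \frac{1}{N^\infty(f)}
   = \frac{1}{\mathrm{sp}(\bigwedge D_*)} = \exp(-h(\bar{f})),
\end{align*}
% in agreement with the identities of the theorem.
```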
\section{Zeta functions and the Reidemeister torsion of the mapping torus}\label{Tor}
The Reidemeister torsion
is a graded version of the absolute value of the determinant
of an isomorphism of vector spaces.
Let $d^i:C^i\rightarrow C^{i+1}$ be a cochain complex $C^*$
of finite dimensional vector spaces over $\mathbb{C}$ with
$C^i=0$ for $i<0$ and large $i$.
If the cohomology $H^i=0$ for all $i$ we say that
$C^*$ is {\it acyclic}.
If one is given positive densities $\Delta_i$ on $C^i$
then the Reidemeister torsion $\tau(C^*,\Delta_i)\in(0,\infty)$
for acyclic $C^*$ is defined as follows:
\begin{Def}
Consider a chain contraction $\delta^i:C^i\rightarrow C^{i-1}$,
i.e., a linear map such that $d\circ\delta + \delta\circ d = \mathrm{id}$.
Then $d+\delta$ determines a map
$ (d+\delta)_+ : C^+:=\oplus C^{2i} \rightarrow C^- :=\oplus C^{2i+1}$
and a map $ (d+\delta)_- : C^- \rightarrow C^+ $.
Since the map $(d+\delta)^2 = \mathrm{id} + \delta^2$ is unipotent,
$(d+\delta)_+$ must be an isomorphism.
One defines $\tau(C^*,\Delta_i):= |\det(d+\delta)_+|$.
\end{Def}
Reidemeister torsion is defined in the following geometric setting.
Suppose $K$ is a finite complex and $E$ is a flat, finite dimensional,
complex vector bundle with base $K$.
We recall that a flat vector bundle over $K$ is essentially the
same thing as a representation of $\pi_1(K)$ when $K$ is
connected.
If $p\in K$ is a base point then one may move the fibre at $p$
in a locally constant way around a loop in $K$. This
defines an action of $\pi_1(K)$ on the fibre $E_p$ of $E$
above $p$. We call this action the holonomy representation
$\rho:\pi\to GL(E_p)$.
Conversely, given a representation $\rho:\pi\to GL(V)$
of $\pi$ on a finite dimensional complex vector space $V$,
one may define a bundle $E=E_\rho=(\tilde{K}\times V) / \pi$.
Here $\tilde{K}$ is the universal cover of $K$, and
$\pi$ acts on $\tilde{K}$ by covering transformations and on $V$
by $\rho$.
The holonomy of $E_\rho$ is $\rho$, so the two constructions
give an equivalence of flat bundles and representations of $\pi$.
If $K$ is not connected then it is simpler to work with
flat bundles. One then defines the holonomy as a
representation of the direct sum of $\pi_1$ of the
components of $K$. In this way, the equivalence of
flat bundles and representations is recovered.
Suppose now that one has on each fibre of $E$ a positive density
which is locally constant on $K$.
In terms of $\rho_E$ this assumption just means
$|\det\rho_E|=1$.
Let $V$ denote the fibre of $E$.
Then the cochain complex $C^i(K;E)$ with coefficients in $E$
can be identified with the direct sum of copies
of $V$ associated to each $i$-cell $\sigma$ of $K$.
The identification is achieved by choosing a basepoint in each
component of $K$ and a basepoint from each $i$-cell.
By choosing a flat density on $E$ we obtain a
preferred density $\Delta_i$ on $C^i(K;E)$. A case of particular interest is when $E$ is an acyclic bundle, meaning that the twisted cohomology of $E$ is zero ($H^i(K;E)=0$). In this case one defines the R-torsion of $(K,E)$ to be $\tau(K;E)=\tau(C^*(K;E),\Delta_i)\in(0,\infty)$. It does not depend on the choice of flat density on $E$.
The Reidemeister torsion of an acyclic bundle $E$ on $K$ has many nice properties.
Suppose that $A$ and $B$ are subcomplexes of $K$. Then we have a multiplicative law:
\begin{equation}\label{viet}
\tau(A\cup B;E)\cdot \tau(A\cap B;E) =\tau(A;E)\cdot \tau(B;E)
\end{equation}
that is interpreted as follows. If three of the bundles $E| A\cup B, \> E |A\cap B, \> E|A,\> E|B$
are acyclic then so is the fourth and the equation (\ref{viet}) holds.
Another property is the simple homotopy invariance of the Reidemeister torsion.
In particular $\tau$ is invariant under
subdivision. This implies that for a smooth manifold, one can unambiguously define $\tau(K;E)$ to be the torsion of any smooth triangulation of $K$.
In the case $K= S^1$ is a circle, let $A$ be the holonomy of a generator of the fundamental group
$\pi_1(S^1)$. One has that $E$ is acyclic if and only if $I-A$ is invertible and then
\begin{equation*}
\tau(S^1;E)= |\det(I-A)|.
\end{equation*}
Note that the choice of generator is irrelevant as $I-A^{-1}= (-A^{-1})(I-A) $ and $|\det(-A^{-1})|=1$.
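With a one-dimensional fibre this normalization becomes completely explicit (a direct specialization of the formula above, with $\lambda$ a unit scalar introduced for illustration):

```latex
% One-dimensional case: the holonomy A is a scalar \lambda with
% |\lambda| = 1; E is acyclic iff \lambda \ne 1, and then
\begin{equation*}
\tau(S^1;E) = |\det(I-A)| = |1-\lambda|
            = 2\left|\sin\tfrac{\theta}{2}\right|,
\qquad \lambda = e^{i\theta},\ \theta\notin 2\pi\mathbb{Z}.
\end{equation*}
```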
These three properties of the Reidemeister torsion are the analogues of the properties of Euler
characteristic (cardinality law, homotopy invariance and normalization on a point), but there are
differences. Since a point has no acyclic representations ($H^0\not=0$) one cannot normalize
$\tau$ on a point as we do for the Euler characteristic, and so one must use $S^1$ instead.
The multiplicative cardinality law for the Reidemeister torsion can be made additive just by using $\log\tau$, so the difference here is inessential. More important for some purposes is that
the Reidemeister torsion is not an invariant under a general homotopy equivalence: as mentioned earlier this is in fact why it was first invented.
It might be expected that the Reidemeister torsion counts something geometric (like the Euler characteristic).
D. Fried \cite{Fri2} showed that it counts the periodic orbits of a flow and the
periodic points of a map.
We will show that the Reidemeister torsion counts the periodic point classes of a map (fixed point classes of the iterations of the map).
Some further properties of $\tau$ describe its behavior under bundles.
Let $p: X\rightarrow B$ be a simplicial bundle with fiber $F$, where $F$, $B$, $X$ are finite
complexes and $p^{-1}$ sends subcomplexes of $B$ to subcomplexes of $X$.
We assume here that $E$ is a flat, complex vector bundle over $B$. We form its pullback $p^*E$
over $X$.
Note that the vector spaces $H^i(p^{-1}(b),\mathbb{C})$ with
$b\in B$ form a flat vector bundle over $B$,
which we denote $H^i F$. The integral lattice in
$H^i(p^{-1}(b),\mathbb{R})$ determines a flat density by the condition
that the covolume of the lattice is $1$.
We suppose that the bundle $E\otimes H^i F$ is acyclic for all
$i$. Under these conditions D. Fried \cite{Fri2} has shown that the bundle
$p^* E$ is acyclic, and
\begin{equation*}
\tau(X;p^* E) = \prod_i \tau(B;E\otimes H^i F)^{(-1)^i}.
\end{equation*}
Let $f:X\rightarrow X$ be a homeomorphism of
a compact polyhedron $X$.
Let $T_f := (X\times I)/(x,0)\sim(f(x),1)$ be the
mapping torus of $f$.
We shall consider the bundle $p:T_f\rightarrow S^1$
over the circle $S^1$.
We assume here that $E$ is a flat, complex vector bundle with
finite dimensional fibre and base $S^1$. We form its pullback $p^*E$
over $T_f$.
Note that the vector spaces $H^i(p^{-1}(b),\mathbb{C})$ with
$b\in S^1$ form a flat vector bundle over $S^1$,
which we denote $H^i F$. The integral lattice in
$H^i(p^{-1}(b),\mathbb{R})$ determines a flat density by
the condition
that the covolume of the lattice is $1$.
We suppose that the bundle $E\otimes H^i F$ is acyclic for all
$i$. Under these conditions D. Fried \cite{Fri2} has shown that the bundle
$p^* E$ is acyclic, and we have
\begin{equation}\label{ReidTor}
\tau(T_f;p^* E) = \prod_i
\tau(S^1;E\otimes H^i F)^{(-1)^i}.
\end{equation}
Let $g$ be the preferred generator of the group
$\pi_1 (S^1)$ and let $A=\rho(g)$ where
$\rho:\pi_1 (S^1)\rightarrow GL(V)$.
Then the holonomy around $g$ of the bundle $E\otimes H^i F$
is $A\otimes (f^*)^i$.
Since $\tau(S^1;E)=|\det(I-A)|$ it follows from (\ref{ReidTor})
that
\begin{equation*}
\tau(T_f;p^* E) = \prod_i |\det(I-A\otimes (f^*)^i)|^{(-1)^i}.
\end{equation*}
We now consider the special case in which $E$ is one-dimensional,
so $A$ is just a complex scalar $\lambda$ of modulus one.
Then, in terms of the rational function $L_f(z)$, we have:
\begin{equation}\label{ReidTorLef}
\tau(T_f;p^* E) = \prod_i |\det(I-\lambda (f^*)^i)|^{(-1)^i}
= |L_f(\lambda)|^{-1}.
\end{equation}
This means that the special value of the Lefschetz zeta function is given by the Reidemeister torsion of the corresponding mapping torus. Let us consider an infra-nilmanifold $M=\Pi\backslash{G}$ and a continuous map $f$ on $M$.
As in Section~1, we consider the subgroup $\Pi_+$ of $\Pi$ of index at most $2$. Then $\Pi_+$ is also an almost Bieberbach group as $\Pi$ itself and the corresponding infra-nilmanifold $M_+=\Pi_+\backslash{G}$ is a double covering of the infra-nilmanifold $M=\Pi\backslash{G}$; the map $f$ lifts to a map $f_+:M_+\to M_+$ which has the same affine homotopy lift $(d,D)$ as $f$. Let $T_f$ and $T_{f_{+}}$ be the mapping torus of $f$ and $f_{+}$ correspondingly.
We shall consider two bundles $p:T_f\rightarrow S^1$ and $p_{+}:T_{f_{+}}\rightarrow S^1$ over the circle $S^1$.
We assume here that $E$ is a flat, complex vector bundle with one dimensional fibre and base $S^1$. We form its pullback $p^*E$ over $T_f$ and pullback $p_{+}^*E$ over $T_{f_{+}}$. We suppose that the bundles $E\otimes H^i M$ and $E\otimes H^i M_{+}$ are acyclic for all $i$.
Then Theorem~\ref{T4.5} and the formula (\ref{ReidTorLef}) imply
the following result about special values of the Reidemeister and Nielsen zeta functions:
\begin{Thm} \label{tor}
Let $f$ be a homeomorphism on an infra-nilmanifold $\Pi\backslash{G}$ with an affine homotopy lift $(d,D)$. Then
\begin{align*}
&|R_f((-1)^n\lambda)^{(-1)^{p+n}}| = |R_\phi((-1)^n\lambda)^{(-1)^{p+n}}|
=|N_f((-1)^n\lambda)^{(-1)^{p+n}}|\\
&=\begin{cases}
|L_f(\lambda)|=\tau(T_f;p^* E)^{-1}&\text{when $\Pi=\Pi_+$;}\\
|L_{f_+}(\lambda)L_f(\lambda)^{-1}| =\tau(T_f; p^* E)\tau(T_{f_{+}}; p_{+}^* E)^{-1} &\text{when $\Pi\ne\Pi_+$,}
\end{cases}
\end{align*}
where $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$.
\end{Thm}
\section{Jiang-type spaces and averaging formula for the Reidemeister numbers on infra-solvmanifolds of type $(\mathrm{R})$}\label{jiang}
A closed manifold $M$ is called a \emph{Jiang-type space} if for all continuous maps $f:M\to M$,
\begin{align*}
L(f)=0&\Rightarrow N(f)=0;\\
L(f)\ne0&\Rightarrow N(f)=R(f).
\end{align*}
A closed orientable manifold $M$ is called a \emph{Jiang-type space for coincidences} (\cite{GW01}) if for any continuous maps $f,g:N\to M$ where $N$ is any closed orientable manifold of equal dimension,
\begin{align*}
L(f,g)=0&\Rightarrow N(f,g)=0;\\
L(f,g)\ne0&\Rightarrow N(f,g)=R(f,g).
\end{align*}
It is well-known that Jiang spaces are of Jiang-type for coincidences. When $N=M$ is a nilmanifold and $\phi,\psi$ are homomorphisms on the group of covering transformations induced by self-maps $f,g$ on $N$, it is proven in \cite[Theorem~2.3]{Gon} that
\begin{align*}
N(f,g)>0 \Leftrightarrow
\mathrm{coin}(\phi,\psi)=1 \Leftrightarrow
R(f,g)<\infty.
\end{align*}
Further if one of the above holds then
$$
R(f,g)=N(f,g)=|L(f,g)|.
$$
Furthermore, nilmanifolds are Jiang-type spaces for coincidences, see \cite{GW01}. Recall that if $N$ is a finite connected complex and $M$ is a nilmanifold then $N(f,g)\ne0\Rightarrow R(f,g)<\infty$; if both $N$ and $M$ are nilmanifolds of equal dimension, then the two conditions are equivalent and in that case we have $N(f,g)=R(f,g)$.
\bigskip
Recall what C. McCord proved in \cite[Sec.\!~2]{mccord}.
Let $S_i$ be simply connected solvable Lie groups of type $(\mathrm{E})$ with equal dimension, and let $\Gamma_i$ be lattices of $S_i$. Let $D_i:S_1\to S_2$ be Lie group homomorphisms such that $D_i(\Gamma_1)\subset\Gamma_2$. Write $\phi_i=D_i|_{\Gamma_1}:\Gamma_1\to\Gamma_2$. Thus the $D_i$ induce maps $f_i$ between the orbit spaces $M_i=\Gamma_i\backslash{S_i}$, special solvmanifolds of type $(\mathrm{E})$. When the $S_i$ are of type $(\mathrm{R})$, we can always assume that every $f_i$ is induced by a Lie group homomorphism $D_i$; see \cite[Theorem~2.2]{LL-Nagoya} or \cite[Theorem~4.2]{HL-a}.
Denote $C_\gamma:=\mathrm{coin}(\gamma\circ D_1,D_2)$ and $\mathbb{S}_\gamma=p_1\left(\mathrm{coin}(\gamma\circ D_1,D_2)\right)$ for each $\gamma\in\Gamma_2$. We also consider the map $D:S_1\to S_2$ defined by $D(s)=D_1(s)^{-1}D_2(s)$ for $s\in S_1$.
\begin{Lemma}[{\cite[Lemmas~2.6 and ~2.7, and Theorem~2.1]{mccord}}]\label{mcc}
The following are equivalent:
\begin{enumerate}
\item[$(1)$] $\mathrm{coin}(\phi_1,\phi_2)=1$.
\item[$(2)$] $\dim(C_1)=0$.
\item[$(3)$] $D$ is injective.
\item[$(4)$] $C_1=\mathbb{S}_1$.
\item[$(5)$] $\mathrm{ind}(\mathbb{S}_1)=\pm1$.
\item[$(6)$] $\mathrm{ind}(\mathbb{S}_1)\ne0$.
\end{enumerate}
These statements are also valid for any other coincidence class $\mathbb{S}_\gamma$, and all $\mathrm{ind}(\mathbb{S}_\gamma)$ have the same sign. Hence $N(f_1,f_2)=|L(f_1,f_2)|$.
\end{Lemma}
We generalize \cite[Theorem~2.3]{Gon} from nilmanifolds to special solvmanifolds of type $(\mathrm{R})$.
\begin{Thm} \label{Jiang-type}
Let $f_1$ and $f_2$ be maps on a special solvmanifold $\Gamma\backslash{S}$ of type $(\mathrm{R})$. Let $\phi_1,\phi_2:\Gamma\to\Gamma$ be homomorphisms induced by $f_1,f_2$ respectively. Then the following are equivalent:
\begin{enumerate}
\item[$(\mathrm{a})$] $N(f_1,f_2)>0$.
\item[$(\mathrm{b})$] $\mathrm{coin}(\phi_1,\phi_2)=1$.
\item[$(\mathrm{c})$] $R(f_1,f_2)<\infty$.
\end{enumerate}
Further if one of the above holds then
$$
R(f_1,f_2)=N(f_1,f_2)=|L(f_1,f_2)|.
$$
\end{Thm}
\begin{proof}
By Lemma~\ref{mcc}, $(\mathrm{a})\Leftrightarrow (\mathrm{b})$. Now we will show $(\mathrm{b})\Rightarrow (\mathrm{c})$, and $(\mathrm{c})\Rightarrow (\mathrm{a})$ together with the identity $R(f_1,f_2)=N(f_1,f_2)$.
Let $S$ be a simply connected solvable Lie group of type $(\mathrm{R})$. Let $N=[S,S]$ and $\Lambda=S/N$. Then $N$ is nilpotent and $\Lambda\cong\mathbb{R}^{k}$ for some $k>0$. A lattice $\Gamma$ of $S$ yields a lattice $N\cap\Gamma$ of $N$. Moreover, the lattice $\Gamma$ induces a short exact sequence $1\to N\cap\Gamma\to\Gamma\to\Gamma/N\cap\Gamma\cong\Gamma\cdot N/N\to1$ so that the following diagram is commutative
$$
\CD
1@>>>N@>>>S@>>>\Lambda=S/N@>>>0\\
@.@AAA@AAA@AAA\\
1@>>> N\cap\Gamma@>>>\Gamma@>>>\Gamma\cdot N/N@>>>0
\endCD
$$
This gives rise to the fibration, called a \emph{Mostow fibration},
$$
N\cap\Gamma\backslash{N}\longrightarrow M=\Gamma\backslash{S}\longrightarrow \Gamma\cdot N\backslash{S}
$$
over a torus base $\Gamma\cdot{N}\backslash{S}$ with compact nilmanifold fiber $N\cap\Gamma\backslash{N}$. It is known that this fibration is orientable if and only if the solvmanifold $M$ is a nilmanifold.
Let $E:S\to S$ be a homomorphism. Then $E$ induces a homomorphism $E':N\to N$ and hence a homomorphism $\bar{E}:\Lambda\to\Lambda$ so that the following diagram is commutative
$$
\CD
1@>>>N@>>>S@>>>\Lambda@>>>0\\
@.@VV{E'}V@VV{E}V@VV{\bar{E}}V\\
1@>>>N@>>>S@>>>\Lambda@>>>0
\endCD
$$
Hence the following diagram is commutative
$$
\CD
1@>>> N\cap\Gamma@>>>\Gamma@>>>\Gamma\cdot N/N@>>>0\\
@.@VV{\phi'_i}V@VV{\phi_i}V@VV{\bar\phi_i}V\\
1@>>> N\cap\Gamma@>>>\Gamma@>>>\Gamma\cdot N/N@>>>0
\endCD
$$
Denote $\Gamma'=N\cap\Gamma$ and let $\bar\Gamma=\Gamma\cdot N/N$.
By \cite[Theorem~2.2]{LL-Nagoya} or \cite[Theorem~4.2]{HL-a}, we may assume that $f_1,f_2$ are induced by Lie group homomorphisms $D_1,D_2:S\to S$ respectively. Then
$$
\phi_i(\gamma)\circ D_i=D_i\circ\gamma\ \ \forall\gamma\in\Gamma.
$$
Evaluating at the identity of $S$, we obtain that $\phi_i(\gamma)=D_i(\gamma)$ for all $\gamma\in\Gamma$. So, $\phi_i$ is the restriction of $D_i$ on $\Gamma$.
Assume (b): $\mathrm{coin}(\phi_1,\phi_2)=1$.
Then $\mathrm{coin}(D_1,D_2)=1$ by Lemma~\ref{mcc}. By taking differential, we see that $\mathrm{coin}({D_1}_*, {D_2}_*)=0$, or ${D_2}_*-{D_1}_*$ is a linear isomorphism. We can write ${D_2}_*-{D_1}_*$ as
$$
{D_2}_*-{D_1}_*=\left[\begin{matrix}\bar{D}_{2_*}-\bar{D}_{1_*}&0\\{*}&{D'_2}_*-{D'_1}_*
\end{matrix}\right]
$$
with respect to some linear basis of the Lie algebra of $S$. This implies that $\bar{D}_{2_*}-\bar{D}_{1_*}$ is an isomorphism, and so $\mathrm{coin}(\bar{D}_{2_*},\bar{D}_{1_*})=0$, i.e., $\mathrm{coin}(\bar{D}_1,\bar{D}_2)=\bar{1}=\mathrm{coin}(\bar\phi_1,\bar\phi_2)$. This happens on $\Lambda\cong\mathbb{R}^k$ with the lattice $\bar\Gamma$ and so on the torus $\Gamma\cdot{N}\backslash{S}=\bar\Gamma\backslash\Lambda$. Hence $\mathrm{coin}(\bar\phi_1,\bar\phi_2)=\bar{1}$ implies $R(\bar\phi_1, \bar\phi_2)<\infty$.
On the other hand, since $\mathrm{coin}(\phi_1, \phi_2)=1$ implies $\mathrm{coin}(\phi'_1, \phi'_2)=1$, we have $R(\phi_1', \phi_2')<\infty$ by \cite[Theorem~2.3]{Gon}. Now the above commutative diagram induces a short exact sequence of the sets of Reidemeister classes
$$
\mathcal{R}(\phi'_1,\phi'_2)\longrightarrow \mathcal{R}(\phi_1,\phi_2)\longrightarrow \mathcal{R}(\bar\phi_1,\bar\phi_2)\longrightarrow 1.
$$
Because both sets $\mathcal{R}(\phi'_1,\phi'_2)$ and $\mathcal{R}(\bar\phi_1,\bar\phi_2)$ are finite, it follows that the middle set $\mathcal{R}(\phi_1,\phi_2)$ is also finite. Hence $R(\phi_1,\phi_2)<\infty$.
Assume (c): $R(\phi_1,\phi_2)<\infty$. Then $R(\bar\phi_1,\bar\phi_2)<\infty$ on the torus $\bar\Gamma\backslash\Lambda$. We already know that this implies $0<N(\bar{f}_1, \bar{f}_2)=R(\bar\phi_1, \bar\phi_2)$ and $\mathrm{coin}(\bar\phi_1,\bar\phi_2)=\bar{1}$. Assume that $R(\phi'_1,\phi'_2)=\infty$. By \cite[Theorem~2.3]{Gon}, $\mathrm{coin}(\phi'_1,\phi'_2)\ne1$ and then by Lemma~\ref{mcc}, $\mathrm{coin}(D_1',D_2')\ne1$; hence $D'_{2_*}-D'_{1_*}$ is singular, which implies that $D_{2_*}-D_{1_*}$ is also singular, contradicting $\mathrm{coin}(\phi_1,\phi_2)=1$. Hence $R(\phi'_1,\phi'_2)<\infty$ on the nilmanifold $\Gamma'\backslash{N}$. This implies that $0<N(f_1',f_2')=R(\phi'_1, \phi'_2)$. Hence we have
\begin{align*}
N(f_1,f_2)&=|L(f_1,f_2)|\ (\text{\cite[Theorem~2.1]{mccord}})\\
&=|\det(D_{2_*}-D_{1_*})|\ (\text{\cite[Theorem~3.1]{HLP11}})\\
&=|\det(\bar{D}_{2_*}-\bar{D}_{1_*})||\det(D'_{2_*}-D'_{1_*})|\\
&=N(\bar{f}_1,\bar{f}_2)N(f'_1,f'_2)=R(\bar\phi_1,\bar\phi_2)R(\phi'_1,\phi'_2)\\
&\ge R(\phi_1,\phi_2)\ (\text{exactness and finiteness of each Reidemeister set}).
\end{align*}
Consequently, since it is always true that $N(f_1,f_2)\le R(\phi_1,\phi_2)$, we have the identity $N(f_1,f_2)= R(\phi_1,\phi_2)$.
\end{proof}
Immediately, from Theorem~\ref{Jiang-type} we obtain the following: for any maps $f_1,f_2:M\to M$ on a special solvmanifold $M$ of type $(\mathrm{R})$, we have
\begin{align*}
L(f_1,f_2)=0&\Rightarrow N(f_1,f_2)=0;\\
L(f_1,f_2)\ne0&\Rightarrow N(f_1,f_2)=R(f_1,f_2).
\end{align*}
\begin{Example}
Consider the closed $3$-manifolds with $\mathrm{Sol}$-geometry. We refer to \cite[Sec.\!~6]{HL-a} for details about the Reidemeister numbers on these manifolds. These are infra-solvmanifolds $\Pi\backslash\mathrm{Sol}$ of type $(\mathrm{R})$. When $\Pi=\Pi_0$ or $\Pi_2^\pm$, the corresponding manifold is a torus bundle over $S^1$, and when $\Pi=\Pi_3$ or $\Pi_6$, the manifold is a sapphire space. Only $\Pi_0\backslash\mathrm{Sol}$ is the special solvmanifold and the remaining manifolds are non-special, infra-solvmanifolds. For any \emph{homeomorphism} $f:\Pi\backslash\mathrm{Sol}\to\Pi\backslash\mathrm{Sol}$, let $F_*$ be its linearization. Then the following can be found in \cite[Sec.\!~6]{HL-a}:
\begin{enumerate}
\item When $\Pi=\Pi_0$ or $\Pi_2^+$, $L(f)=N(f)=R(f)=4$ only when $F_*$ is of type (II) with $\det F_*=-1$; otherwise, $L(f)=N(f)=0$ and $R(f)=\infty$.
\item When $\Pi=\Pi_2^-$, $F_*$ is always of type (I) and $L(f)=N(f)=0$, but $R(f)=\infty$.
\item When $\Pi=\Pi_3$, $L(f)=N(f)=0$, but $R(f)=\infty$.
\item When $\Pi=\Pi_6$, $L(f)=N(f)$, which is $0$ or $2$ according as $\det F_*=1$ or $-1$, but $R(f)=\infty$.
\end{enumerate}
These results show that Theorem~\ref{Jiang-type} (i.e., $N(f)>0\Leftrightarrow R(f)<\infty$; in this case, $N(f)=R(f)$) is true for the special solvmanifold $\Pi_0\backslash\mathrm{Sol}$ and infra-solvmanifolds $\Pi_2^\pm\backslash\mathrm{Sol}$ and $\Pi_3\backslash\mathrm{Sol}$, but not true anymore for the infra-solvmanifold $\Pi_6\backslash\mathrm{Sol}$.
\end{Example}
Now we can state a practical formula for the Reidemeister number of a pair of continuous maps on an infra-solvmanifold of type $(\mathrm{R})$. This is a straightforward generalization of \cite[Theorem~6.11]{HLP} and its proof from infra-nilmanifolds.
\begin{Thm}\label{R-coin}
Let $M=\Pi\backslash{S}$ be an infra-solvmanifold of type $(\mathrm{R})$ with holonomy group $\Phi$. Let $f,g:M\to M$ be continuous maps with affine homotopy lifts $(d,D), (e,E)$ respectively. Then
$$
R(f,g)=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det(E_*-A_*D_*)\right),
$$
where $A_*, D_*$ and $E_*$, induced by $A,D$ and $E$ respectively, are expressed with respect to a preferred basis of $\Pi\cap S$, and where $\sigma:\mathbb{R}\to\mathbb{R}\cup\{\infty\}$ is given by $\sigma(0)=\infty$ and $\sigma(x)=|x|$ for all $x\ne0$.
\end{Thm}
\begin{proof}
Choose a fully invariant subgroup $\Lambda\subset\Gamma:=\Pi\cap S$ of $\Pi$ with finite index (\cite[Lemma~2.1]{LL-Nagoya}). Then $f,g$ lift to maps $\bar{f},\bar{g}$ on the special solvmanifold $\Lambda\backslash{S}$ of type $(\mathrm{R})$ and by \cite[Corollary~1.3]{HL} we have
$$
R(f,g)=\frac{1}{[\Pi:\Lambda]}\sum_{\bar\alpha\in\Pi/\Lambda}R(\bar\alpha\bar{f},\bar{g}).
$$
By Theorem~\ref{Jiang-type}, $R(\bar\alpha\bar{f},\bar{g})=\sigma\!\left(N(\bar\alpha\bar{f},\bar{g})\right)$ for all $\bar\alpha\in\Pi/\Lambda$.
On the other hand, we may assume that $f,g$ are induced by the affine maps $(d,D), (e,E)$ respectively. It follows that $\bar{f}, \bar{g}$ are induced by the Lie group homomorphisms $\mu(d)\circ D, \mu(e)\circ E:S\to S$, where $\mu(\cdot)$ is conjugation. If $(a,A)\in\Pi$ is a preimage of $\bar\alpha\in\Pi/\Lambda$, then the transformation $\bar\alpha$ on $\Lambda\backslash{S}$ is induced by the Lie group automorphism $\mu(a)\circ A$. By \cite[Theorem~3.1]{HLP11} and Lemma~\ref{mcc}, we have that
$$
N(\bar\alpha\bar{f},\bar{g})=|\det(\mathrm{Ad}(e)E_*-\mathrm{Ad}(a)A_*\mathrm{Ad}(d)D_*)|
=|\det_\Lambda(E_*-A_*D_*)|
$$
with respect to any preferred basis of $\Lambda$. If we regard this as a basis of $\Gamma$, then we can see that
$$
[\Gamma:\Lambda]\det_\Lambda(E_*-A_*D_*)=\det_\Gamma(E_*-A_*D_*),
$$
for example see the proof of \cite[Theorem~6.11]{HLP}. Hence
\begin{align*}
R(f,g)&=\frac{1}{[\Pi:\Lambda]}\sum_{\bar\alpha\in\Pi/\Lambda}\sigma\!\left(N(\bar\alpha\bar{f},\bar{g})\right)\\
&=\frac{1}{[\Pi:\Lambda]}\sum_{A\in\Phi}[\Gamma:\Lambda]\ \sigma\!\left(\det_\Lambda(E_*-A_*D_*)\right)\\
&=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det_\Gamma(E_*-A_*D_*)\right).\qedhere
\end{align*}
\end{proof}
{The following corollaries generalize \cite[Theorems~5.1 and 5.2]{DP11} from infra-nilmanifolds to infra-solvmanifolds of type $(\mathrm{R})$.}
\begin{Cor}
Let $M=\Pi\backslash{S}$ be an orientable infra-solvmanifold of type $(\mathrm{R})$. Let $f,g:M\to M$ be continuous maps. If $R(f,g)<\infty$, then $R(f,g)=N(f,g)$.
\end{Cor}
\begin{proof}
Because $M$ is orientable, the Nielsen number $N(f,g)$ is defined and is equal to, by \cite[Theorem~4.5]{HL-a},
$$
N(f,g)=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(E_*-A_*D_*)|.
$$
Since $R(f,g)<\infty$, by Theorem~\ref{R-coin}, $\sigma\!(\det(E_*-A_*D_*))$ is finite for all $A\in\Phi$. By the definition of $\sigma$, we have $\sigma\!(\det(E_*-A_*D_*))=|\det(E_*-A_*D_*)|$ for all $A\in\Phi$. This finishes the proof.
\end{proof}
\begin{Cor}\label{R-fix}
Let $M=\Pi\backslash{S}$ be an infra-solvmanifold of type $(\mathrm{R})$ with holonomy group $\Phi$. Let $f:M\to M$ be a continuous map with an affine homotopy lift $(d,D)$. Then
$$
R(f)=\frac{1}{|\Phi|}\sum_{A\in\Phi}\sigma\left(\det(I-A_*D_*)\right),
$$
and if $R(f)<\infty$ then $R(f)=N(f)$.
\end{Cor}
{By Remarks~\ref{NtoS1} and ~\ref{NtoS2}, since the averaging formulas for the Lefschetz number and the Nielsen number are generalized from infra-nilmanifolds to infra-solvmanifolds of type $(\mathrm{R})$ (see \cite{HLP11, LL-Nagoya}), all results and proofs concerning the Nielsen number and the Nielsen zeta function in this article directly generalize to the class of infra-solvmanifolds of type $(\mathrm{R})$.}
By Corollary~\ref{R-fix} and \cite[Theorem~4.3]{LL-Nagoya}, the averaging formulas for the Reidemeister number and the Nielsen number on infra-solvmanifolds of type $(\mathrm{R})$, we can generalize all results and proofs concerning the Reidemeister zeta function, whenever it is defined, to the class of infra-solvmanifolds of type $(\mathrm{R})$. If $R_f(z)$ is defined, then $R(f^n)<\infty$ and so by Corollary~\ref{R-fix} $R(f^n)=N(f^n)>0$ for all $n>0$ and thus $R_f(z)=N_f(z)$. For example, we can generalize Theorems~\ref{infra}, ~\ref{T4.5}, ~\ref{FE-case1}, and ~\ref{tor}, and their proofs from infra-nilmanifolds to infra-solvmanifolds of type $(\mathrm{R})$ to obtain the
following:
\begin{Thm}\label{infrasolv}
Let $f$ be a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$.
Assume $N(f)=|L(f)|$. Then the Nielsen zeta function $N_f(z)$ is a rational function and is equal to
\begin{equation*}
N_f(z)=L_f((-1)^qz)^{(-1)^r}
\end{equation*}
where $q$ is the number of real eigenvalues of $D_*$ which are $<-1$ and
$r$ is the number of real eigenvalues of $D_*$ of modulus $>1$. When the Reidemeister zeta function $R_f(z)$ is defined, we have $R_f(z)=R_\phi(z)=N_f(z)$.
\end{Thm}
\begin{Thm}\label{zeta_infrasolv}
Let $f$ be a continuous map on an infra-solvmanifold $\Pi\backslash{S}$ of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$.
Then the Reidemeister zeta function, whenever it is defined, is a rational function and is equal to
\begin{equation*}
R_f(z)=N_f(z)=\begin{cases}
L_f((-1)^nz)^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
\left(\frac{L_{f_+}((-1)^nz)}{L_f((-1)^nz)}\right)^{(-1)^{p+n}}&\text{when $\Pi\ne\Pi_+$,}
\end{cases}
\end{equation*}
where $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$.
\end{Thm}
\begin{Thm}[{Functional Equation}]\label{zeta-S}
Let $f$ be a continuous map on an {orientable infra-solvmanifold $M=\Pi\backslash{S}$ of type $(\mathrm{R})$} with an affine homotopy lift $(d,D)$. Then the Reidemeister zeta function, whenever it is defined, and the Nielsen zeta function have the following functional equations:
\begin{equation*}
R_{f}\left(\frac{1}{dz}\right)
=\begin{cases}
R_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
R_f(z)^{(-1)^m}\epsilon^{-1}&\text{when $\Pi\ne\Pi_+$}
\end{cases}
\end{equation*}
and
\begin{equation*}
N_{f}\left(\frac{1}{dz}\right)
=\begin{cases}
N_f(z)^{(-1)^m}\epsilon^{(-1)^{p+n}}&\text{when $\Pi=\Pi_+$;}\\
N_f(z)^{(-1)^m}\epsilon^{-1}&\text{when $\Pi\ne\Pi_+$}
\end{cases}
\end{equation*}
where $d$ is the degree of $f$, $m=\dim M$, $\epsilon$ is a constant in $\mathbb{C}^\times$, $\sigma=(-1)^n$, $p$ is the number of real eigenvalues of $D_*$ which are $>1$, and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$. If $|d|=1$, then $\epsilon=\pm1$.
\end{Thm}
\begin{Thm}
Let $f$ be a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$. Then the Nielsen zeta function $N_f(z)$ and the Reidemeister zeta function $R_f(z)$, whenever it is defined, have the same positive radius of convergence $R$, which admits the following estimate
\begin{equation*}
R \geq \exp(-h)>0,
\end{equation*}
where $h=\inf \{h(g)\mid g\simeq f\}$.
If $1$ is not in the spectrum of $D_{*}$, the radius $R$ of convergence of $R_f(z)$ is
$$
R=\frac{1}{N^{\infty}(f)}=\frac{1}{\exp h(\bar{f})}
=\frac{1}{\mathrm{sp}(\bigwedge D_{*})}.
$$
\end{Thm}
\begin{Thm} \label{tor-S}
Let $f$ be a homeomorphism on an infra-solvmanifold $\Pi\backslash{S}$ of type $(\mathrm{R})$ with an affine homotopy lift $(d,D)$. Then
\begin{align*}
&|N_f((-1)^n\lambda)^{(-1)^{p+n}}|\\
&=\begin{cases}
|L_f(\lambda)|=\tau(T_f;p^* E)^{-1}&\text{when $\Pi=\Pi_+$;}\\
|L_{f_+}(\lambda)L_f(\lambda)^{-1}| =\tau(T_f; p^* E)\tau(T_{f_{+}}; p_{+}^* E)^{-1} &\text{when $\Pi\ne\Pi_+$,}
\end{cases}
\end{align*}
where $p$ is the number of real eigenvalues of $D_*$ which are $>1$ and $n$ is the number of real eigenvalues of $D_*$ which are $<-1$.
\end{Thm}
\begin{Rmk}
One may formulate the above theorem also for the Reidemeister zeta function of a homeomorphism $f$ on an infra-solvmanifold of type $(\mathrm{R})$. However, it will be seen in Theorem~\ref{Bour} that in the case of $R_f(z)$ such a manifold must be an infra-nilmanifold.
\end{Rmk}
\begin{Rmk}
{For any map $f$ on an infra-solvmanifold of type $(\mathrm{R})$, Theorem~\ref{T4.4} states the relation between the Lefschetz numbers and the Nielsen numbers of iterates of $f$ and Corollary~\ref{R-fix} states the relation of the Nielsen numbers with the Reidemeister numbers of iterates of $f$ when these are finite. Via these relations some of the arithmetic, analytic, and asymptotic properties of the sequences $N(f^n)$ and $R(f^n)$ can be determined from the corresponding properties of the sequence $L(f^n)$. For the sequence $L(f^n)$, all these properties were
thoroughly discussed in \cite[Sect.~\!3.1]{JM}, see also \cite{BaBo}.}
\end{Rmk}
\section{The Reidemeister zeta function is never defined for any homeomorphism of an infra-solvmanifold of type $(\mathrm{R})$ which is not an infra-nilmanifold}
\label{No R}
Consider now as an example closed $3$-manifolds with $\mathrm{Sol}$-geometry. We refer to \cite[Sec.\!~6]{HL-a} for details about the Reidemeister numbers on these manifolds. These are infra-solvmanifolds $\Pi\backslash\mathrm{Sol}$ of type $(\mathrm{R})$. Let $\Pi_1$ be a lattice of $\mathrm{Sol}$:
$$
\Pi_1=\Gamma_{\!A}=\langle{a_1,a_2,\tau\mid [a_1,a_2]=1,\tau a_i\tau^{-1}=A(a_i)}\rangle,
$$
where $A$ is a $2\x2$-integer matrix of determinant $1$ and trace $>2$. Consider a homomorphism $\phi$ on $\Pi_1$ of type (III), i.e., $\phi$ is of the form
\begin{align*}
\phi(a_1)=\phi(a_2)=1, \phi(\tau)=a_1^pa_2^q\tau^r, \ r\ne\pm1.
\end{align*}
Then it is shown in \cite[Theorem~6.1]{HL-a} that $R(\phi)=|1-r|$. We can observe easily that $\phi^n$ is also of type (III) and $R(\phi^n)=|1-r^n|$ for all $n>0$. Hence
$$
R_\phi(z)=\exp\left(\sum_{n=1}^\infty\frac{|1-r^n|}{n}z^n\right)
=\begin{cases}
\frac{1}{1-z}&\text{when $r=0$;}\\
\frac{1-\frac{r}{|r|}z}{1-|r|z}&\text{when $|r|>1$.}
\end{cases}
$$
It can be seen also that if $\phi$ is not of type (III), then $R(\phi)=\infty$ or $R(\phi^2)=\infty$. Thus the associated Reidemeister zeta function is not defined.
A similar phenomenon happens for the infra-solvmanifold $\Pi^\pm\backslash\mathrm{Sol}$. For the remaining infra-solvmanifolds $\Pi_3\backslash\mathrm{Sol}$ and $\Pi_6\backslash\mathrm{Sol}$, it is shown that only the trivial map has a finite Reidemeister number, which is $1$. That is, only the trivial map defines the Reidemeister zeta function. {The homomorphisms above are eventually commutative, and in fact, for every eventually commutative homomorphism the Reidemeister zeta function, whenever it is defined, is a rational function; see Theorems~9 and 10 in \cite{Fel00}.}
We will show now that if the Reidemeister zeta function is defined for a homeomorphism on an infra-solvmanifold of type $(\mathrm{R})$, then the manifold must be an infra-nilmanifold.
Recall the following
\begin{Prop}[{\cite[Ex.\,21(b), p.\,97]{Bourbaki}, \cite[Proposition~3.6]{Smale}}]
\label{BS}
Let $\sigma$ be a Lie algebra automorphism. If none of the eigenvalues of $\sigma$ is a root of unity, then the Lie algebra must be nilpotent.
\end{Prop}
\begin{Thm}\label{Bour}
If the Reidemeister zeta function $R_f(z)$ is defined for a homeomorphism $f$ on an infra-solvmanifold $M$ of type $(\mathrm{R})$, then $M$ is an infra-nilmanifold.
\end{Thm}
\begin{proof}
Let $f$ be a {homeomorphism} on an infra-solvmanifold $M=\Pi\backslash{S}$ of type $(\mathrm{R})$. By \cite[Theorem~2.2]{LL-Nagoya}, we may assume that $f$ has an affine map as a homotopy lift. By \cite[Lemma~2.1]{LL-Nagoya}, there is a special solvmanifold $N=\Lambda\backslash{S}$ which covers $M$ finitely and on which $f$ has a lift $\bar{f}$, which is induced by a Lie group automorphism $D$ on the solvable Lie group $S$.
From \cite[Corollary~1.3]{HL}, we have an averaging formula for Reidemeister numbers:
$$
R(f^n)=\frac{1}{[\Pi:\Lambda]}\sum_{\bar\alpha\in\Pi/\Lambda}R(\bar\alpha\bar{f}^n).
$$
Assume now that $f$ defines the Reidemeister zeta function. Then $R(f^n)<\infty$ for all $n>0$. The above averaging formula implies that $R(\bar{f}^n)<\infty$ for all $n$. By Theorem \ref{Jiang-type}, we must have
$$
R(\bar{f}^n)=N(\bar{f}^n)=|L(\bar{f}^n)|>0.
$$
Since $L(\bar{f}^n)=\det(I-D_*^n)\ne0$ for all $n>0$ by \cite[Theorem~3.1]{HLP11}, this would imply that the differential $D_*$ of $D$ has no roots of unity. By Proposition~\ref{BS}, $S$ must be nilpotent.
\end{proof}
\begin{Rmk} Let $A$ be an Anosov diffeomorphism on an infra-nilmanifold. Then every iterate $A^n$, $n\ge1$, is also an Anosov diffeomorphism. The Reidemeister number of an Anosov diffeomorphism is always finite \cite{DRPAn}. Hence the Reidemeister zeta function $R_A(z)$ is well-defined.
From Theorem~\ref{T4.5} and Theorem~\ref{FE-case1} it follows that the Reidemeister zeta function $R_A(z)$ of an Anosov diffeomorphism on an infra-nilmanifold is a rational function with functional equation.
It is known that a nilmanifold modelled on a free $c$-step nilpotent Lie group on $r$ generators admits an Anosov diffeomorphism if and only if
$r > c$ \cite{Da}. Hence the Reidemeister zeta function of an Anosov diffeomorphism on such a nilmanifold is well-defined if $r > c$ and is a rational function with a functional equation.
\end{Rmk}
\section{The Artin-Mazur zeta functions on infra-solvmanifolds of type $(\mathrm{R})$} \label{AM}
Let $f$ be a continuous map on a topological space $X$. Then the \emph{Artin-Mazur zeta function} of $f$ is defined as follows:
$$
AM_f(z)=\exp\left(\sum_{n=1}^\infty\frac{F(f^n)}{n}z^n\right)
$$
where $F(f)$ is the number of isolated fixed points of $f$.
\begin{Prop}[{\cite[Proposition~1]{KL}}]\label{KL}
Let $f$ be a continuous map on an infra-solvmanifold $\Pi\backslash{S}$ of type $(\mathrm{E})$ induced by an affine map $F:S\to S$. For any $\alpha\in\Pi$, $\mathrm{Fix}(\alpha\circ F)$ is either empty or path-connected. Hence every nonempty fixed point class of $f$ is path-connected, and every isolated fixed point class forms an essential fixed point class.
\end{Prop}
\begin{proof}
Let $x,y\in\mathrm{Fix}(\alpha\circ F)$. So, the affine map $\alpha F$ fixes $x$ and $y$. Writing $\alpha\circ F=(d,D)\in S\rtimes\mathrm{Endo}(S)$, we see that
\begin{itemize}
\item $(d,D)({x})={x}\Rightarrow D({x})=d^{-1}{x}$,
\item $(d,D)({y})={y}\Rightarrow D({y})=d^{-1}{y}$,
\item $({x},I)^{-1}(\alpha\circ F)({x},I)=({x},I)^{-1}(d,D)({x},I)=({x}^{-1}dD({x}),D)=(1,D)$ and $D$ fixes $1$ and ${x}^{-1}{y}$.
\end{itemize}
Since $S$ is of type $(\mathrm{E})$, $\exp:\mathfrak{S}\to S$ is a diffeomorphism with inverse $\log$. Let $X=\log({x}^{-1}{y})\in\mathfrak{S}$. Then the $1$-parameter subgroup $\{\exp(tX)\mid t\in\mathbb{R}\}$ of $S$ is fixed by the endomorphism $D$. Consequently, the affine map $\alpha\circ F$ fixes the `line' connecting the points ${x}$ and ${y}$. In particular, $p(\mathrm{Fix}(\alpha\circ F))$ is isolated $\{\bar{x}\}$ if and only if $\mathrm{Fix}(\alpha\circ F)$ is isolated $\{x\}$, where $p:S\to\Pi\backslash{S}$ is the covering projection. Further, the index of the fixed point class $p(\mathrm{Fix}(\alpha\circ F))=\{\bar{x}\}$ is
{
$$
\det(I-df_{\bar{x}})=\pm\det(I-d(\alpha\circ F)_x)=\pm\det(I-D_*)
$$
}
where the second identity follows from the fact that $x^{-1}(\alpha\circ F)x=D$. Since $D$ fixes only the identity element of $S$, $D_*$ fixes only the zero element of $\mathfrak{S}$ and so $I-D_*$ is nonsingular. Hence the fixed point class $p(\mathrm{Fix}(\alpha\circ F))$ is essential.
\end{proof}
\begin{Rmk}
The above proposition is a straightforward generalization of \cite[Proposition~1]{KL} from infra-nilmanifolds to infra-solvmanifolds of type $(\mathrm{E})$. Further, the linear part of the affine map $F$ need not be an automorphism.
\end{Rmk}
Proposition~\ref{KL} was proved in Lemma~\ref{mcc} (see \cite{mccord}) in the case where the manifold is a special solvmanifold of type $(\mathrm{E})$ and the map is induced by a homomorphism. In fact, the converse is also proved there: every essential fixed point class consists of a single element. We will prove the converse of the proposition on infra-solvmanifolds of type $(\mathrm{R})$.
\begin{Prop}\label{single}
Let $f$ be a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map. Then every essential fixed point class of $f$ consists of a single element.
\end{Prop}
\begin{proof}
Let $\tilde{f}=(d,D)$ be the affine map on the connected, simply connected solvable Lie group $S$ of type $(\mathrm{R})$ which induces $f:\Pi\backslash{S}\to\Pi\backslash{S}$. Then $f$ induces a homomorphism $\phi:\Pi\to\Pi$.
By \cite[Lemma~2.1]{LL-Nagoya}, we can choose a fully invariant subgroup $\Lambda\subset\Pi\cap S$ of $\Pi$ with finite index. Hence $\phi(\Lambda)\subset\Lambda$. This implies that $\tilde{f}$ induces a map $\bar{f}$ on $\Lambda\backslash{S}$.
Then we have an averaging formula, \cite[Theorem~4.2]{LL-Nagoya},
$$
N(f)=\frac{1}{[\Pi:\Lambda]}\sum_{\bar\alpha\in\Pi/\Lambda}N(\bar\alpha\circ\tilde{f}).
$$
Assume that $f$ has an essential fixed point class. The averaging formula tells that this essential fixed point class of $f$ is lifted to an essential fixed point class of some $\bar\alpha\circ\bar{f}$. That is, there is $\alpha=(a,A)\in\Pi$ such that the fixed point class $p'(\mathrm{Fix}(\alpha\circ\tilde{f}))$ of $\bar\alpha\circ\bar{f}$ is essential (and so $p(\mathrm{Fix}(\alpha\circ\tilde{f}))$ is an essential fixed point class of $f$). It suffices to show that the fixed point class $p'(\mathrm{Fix}(\alpha\circ\tilde{f}))$ consists of only one point.
Let $F=\alpha\circ\tilde{f}=(a,A)(d,D):=(e,E)$ be the affine map on $S$, and let $\bar{F}=\bar\alpha\circ\bar{f}$. Then $p'(\mathrm{Fix}(F))$ is essential and $N(\bar{F})=|\det(I-E_*)|\ne0$. Choose $x\in\mathrm{Fix}(F)=\mathrm{Fix}((e,E))$. Then the left multiplication by $x^{-1}$,
$$
\ell_{x^{-1}}:y\in\mathrm{Fix}((e,E))\mapsto x^{-1}y\in\mathrm{Fix}(E),
$$
is a bijection. Further, since $\exp:\mathfrak{S}\to S$ is a diffeomorphism, it induces a bijection $\mathrm{Fix}(E)\leftrightarrow\mathrm{fix}(E_*)=\ker(I-E_*)$. Since $I-E_*$ is invertible, we see that $\mathrm{Fix}(F)$, and hence $p'(\mathrm{Fix}(F))$ and $p(\mathrm{Fix}(F))$, consist of a single element.
\end{proof}
\begin{Rmk}\label{isolated}
In Propositions~\ref{KL} and \ref{single}, we have shown that for any continuous map $f$ on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map, the isolated fixed points of $f$ are exactly the essential fixed point classes of $f$. That is, $F(f)=N(f)$. Similarly, $F(f^n)=N(f^n)$ for all $n$.
\end{Rmk}
Therefore, by Theorem \ref{zeta_infrasolv} and Theorem~\ref{zeta-S} we have
\begin{Thm}
Let $f$ be a continuous map on an infra-solvmanifold of type $(\mathrm{R})$ induced by an affine map. Then
$
AM_f(z)=N_f(z),
$
i.e., $AM_f(z)$ is a rational function with functional equation.
\end{Thm}
By the main result in \cite{mccord94}, if $f$ is a map on an infra-solvmanifold of type $(\mathrm{R})$ which is induced by an affine map and is homotopically periodic, then we have $AM_f(z)=N_f(z)=L_f(z)$ as $N(f^n)=L(f^n)$.
According to Theorem~\ref{virtual unipotency}, if $f$ is a virtually unipotent affine diffeomorphism on an infra-solvmanifold of type $(\mathrm{R})$, then we still have $AM_f(z)=N_f(z)=L_f(z)$.
\section{The Nielsen numbers of virtually unipotent maps on infra-solvmanifolds of type $(\mathrm{R})$}
A square matrix is \emph{unipotent} if all of its eigenvalues are $1$. A square matrix is called \emph{virtually unipotent} if some power of it is unipotent.
Let $M=\Pi\backslash{S}$ be an infra-solvmanifold of type $(\mathrm{R})$. Let $f:M\to M$ be a continuous map with an affine homotopy lift $(d,D)\in\mathrm{Aff}(S)$. Then $f$ is homotopic to the diffeomorphism on $M$ induced by the affine map $(d,D)$, called an \emph{affine diffeomorphism}. If, in addition, $D_*$ is virtually unipotent then we say that $f$ is a \emph{virtually unipotent} map.
Now we observe the following:
\begin{enumerate}
\item A matrix is virtually unipotent if and only if all of its eigenvalues have absolute value $1$, see \cite[Lemma~11.6]{ST}.
\item Let $\Phi$ be a finite subgroup of $\mathrm{GL}(n,\mathbb{R})$ and let $D \in \mathrm{GL}(n,\mathbb{R})$ normalize $\Phi$. If $D$ is virtually unipotent, then for all $A \in \Phi$, $AD$ is virtually unipotent, see \cite[Lemma~3.2]{Malfait}.
\end{enumerate}
\begin{Example}
Consider the $3$-dimensional Lie group $\mathrm{Sol}=\mathbb{R}^2\rtimes_\sigma\mathbb{R}$, where
\begin{align*}
\sigma(t)=\left[\begin{matrix}e^t&0\\0&e^{-t}\end{matrix}\right].
\end{align*}
Let $g=((x,y),t)\in\mathrm{Sol}$. Then it can be seen easily that $\tau_g:\mathrm{Sol}\to\mathrm{Sol}$ is given by
$$
\tau_g:((u,v),s)\mapsto((e^tu-e^sx+x,\,e^{-t}v-e^{-s}y+y),s),
$$
and $\mathrm{Ad}(g):\mathfrak{sol}\to\mathfrak{sol}$ is given by
$$
\mathrm{Ad}(g)=\left[\begin{matrix}e^t&0&-x\\0&e^{-t}&\hspace{8pt}y\\0&0&\hspace{8pt}1\end{matrix}\right]
$$
for some basis of $\mathfrak{sol}$. Hence $\mathrm{Ad}(g)$ is not virtually unipotent unless $t=0$.
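The two displayed formulas can be checked numerically (a small Python sketch using central finite differences; the sample values of $x,y,t$ below are arbitrary): the Jacobian of $\tau_g$ at the identity agrees with the matrix $\mathrm{Ad}(g)$, and since the matrix is upper triangular its eigenvalues $e^t$, $e^{-t}$, $1$ do not all have modulus $1$ unless $t=0$.

```python
import math

def tau_g(x, y, t, u, v, s):
    # conjugation by g = ((x, y), t) on Sol, in coordinates ((u, v), s)
    return (math.exp(t) * u - math.exp(s) * x + x,
            math.exp(-t) * v - math.exp(-s) * y + y,
            s)

def ad_matrix(x, y, t):
    # the displayed matrix Ad(g) with respect to the same coordinates
    return [[math.exp(t), 0.0, -x],
            [0.0, math.exp(-t), y],
            [0.0, 0.0, 1.0]]

def jacobian_at_identity(f, h=1e-6):
    # central finite differences of f : R^3 -> R^3 at the origin
    J = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        ep = [0.0, 0.0, 0.0]; ep[j] = h
        em = [0.0, 0.0, 0.0]; em[j] = -h
        fp, fm = f(*ep), f(*em)
        for i in range(3):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J
```

The last point reflects observation (1) above: a matrix is virtually unipotent only if all of its eigenvalues have absolute value $1$.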
Now consider the infra-solvmanifold $\Pi_2^+\backslash\mathrm{Sol}$.
The holonomy group of $\Pi_2^+\backslash\mathrm{Sol}$ is
$$
\Phi_2^+=\left\langle
\left[\begin{matrix}-1&\hspace{8pt}0&0\\\hspace{8pt}0&-1&0\\\hspace{8pt}0&\hspace{8pt}0&1\end{matrix}\right]
\right\rangle
$$
and thus $\mathrm{Sol}^\Phi=\{((x,y),t)\in\mathrm{Sol}\mid x=y=0\}$. Fix $g=((0,0),t)\in\mathrm{Sol}^\Phi$ with $t\ne0$ and consider $(g,\tau_{g^{-1}})\in\mathrm{Aff}(\mathrm{Sol})$. Then $(g,\tau_{g^{-1}})$ centralizes $\Pi_2^+$ and $(g,\tau_{g^{-1}})$ induces an affine diffeomorphism $f$ on $\Pi_2^+\backslash\mathrm{Sol}$ given by $\bar{x}\mapsto \bar{x}\bar{g}$. Hence the affine diffeomorphism $f$ is homotopic to the identity map. However $f$ is not virtually unipotent since $(\tau_{g^{-1}})_*=\mathrm{Ad}(g^{-1})$ is not virtually unipotent.
\end{Example}
\begin{Rmk}
Recall \cite[Lemma~3.6]{Malfait}, which states that if an affine diffeomorphism $f$ on an infra-nilmanifold $M$ is homotopic to a virtually unipotent affine diffeomorphism on $M$, then $f$ is virtually unipotent. However, the above example shows that this statement is not true in general for infra-solvmanifolds of type $(\mathrm{R})$. Namely, there is an affine diffeomorphism on an infra-solvmanifold of type $(\mathrm{R})$ which is not virtually unipotent, but is homotopic to a virtually unipotent affine diffeomorphism.
Furthermore, in the above example, $f\simeq\mathrm{id}$ is a homotopically periodic map which is not virtually unipotent. Therefore \cite[Proposition~3.11]{Malfait} is not true in general for infra-solvmanifolds of type $(\mathrm{R})$. Note also that there is a unipotent affine diffeomorphism on the torus which is not homotopically periodic, see \cite[Remark~3.12]{Malfait}.
Consequently, on infra-nilmanifolds homotopically periodic maps are virtually unipotent maps. But on infra-solvmanifolds of type $(\mathrm{R})$, there is no relation between homotopically periodic maps and virtually unipotent maps.
\end{Rmk}
\begin{Thm}\label{virtual unipotency}
If $f$ is a virtually unipotent map on an infra-solvmanifold of type $(\mathrm{R})$, then $L(f)=N(f)$.
\end{Thm}
\begin{proof}
Let $M$ be an infra-solvmanifold of type $(\mathrm{R})$ with holonomy group $\Phi$. Then we can assume $f$ is an affine diffeomorphism induced by an affine map $(d,D)$ such that $D_*$ is virtually unipotent. This implies that $(d,D)$ normalizes $\Pi$ and hence it follows that $D$ normalizes the holonomy group $\Phi$. By the previous observation (2), since $D_*$ is virtually unipotent, so are all $A_*D_*$ where $A\in\Phi$ and hence by \cite[Lemma~4.2]{Malfait}, $\det(I-A_*D_*)\ge0$. Using the averaging formula \cite[Theorem~4.3]{LL-Nagoya}, we obtain
\begin{align*}
N(f)&=\frac{1}{|\Phi|}\sum_{A\in\Phi}|\det(I-A_*D_*)|\\
&=\frac{1}{|\Phi|}\sum_{A\in\Phi}\det(I-A_*D_*)=L(f).\qedhere
\end{align*}
\end{proof}
\section{Gauss congruences for the Nielsen and Reidemeister numbers}\label{Gauss cong}
In number theory, the following Gauss congruence for integers holds:
$$
\sum_{d\mid n}\mu(d)\ a^{n/d}\equiv 0\mod n
$$
for any integer $a$ and any natural number $n$. Here $\mu$ is the M\"obius function. In the case of a prime power $n=p^r$, the Gauss congruence turns into the Euler congruence. Indeed, for $n=p^r$ the M\"obius function $\mu(n/d)=\mu(p^r/d)$ is different from zero only in two cases: when $d=p^r$ and when $d=p^{r-1}$. Therefore, from the Gauss congruence we obtain the Euler congruence
$$
a^{p^r}\equiv a^{p^{r-1}}\mod p^r
$$
This congruence is equivalent to the following classical Euler's theorem:
$$
a^{\varphi(n)}\equiv 1\mod n
$$
where $(a,n)=1$.
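Both congruences are easy to confirm by direct computation (a Python sketch; the ranges of $a$, $n$, $p$, $r$ tested below are illustrative):

```python
def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def gauss_sum(a, n):
    """sum_{d | n} mu(d) * a^(n/d), which the Gauss congruence asserts
    is divisible by n for every integer a."""
    return sum(mobius(d) * a**(n // d) for d in range(1, n + 1) if n % d == 0)
```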
These congruences have been generalized from integers $a$ to other mathematical invariants, such as the traces of powers of an arbitrary integer matrix $A$ and the Lefschetz numbers of iterates of a map, {see \cite{mp99,Z}}:
\begin{align}
\label{Gauss}
&\sum_{d\mid n}\mu(d)\ \mathrm{tr}(A^{n/d})\equiv 0\mod n,\\
\label{Euler}
&\mathrm{tr}(A^{p^r})\equiv\mathrm{tr}(A^{p^{r-1}})\mod p^r.
\end{align}
{A. Dold in \cite{Dold} proved, by a geometric argument, the following congruence for the fixed point index of iterates of a map $f$ on a compact ANR $X$ and any natural number $n$:
$$
\sum_{d\mid n}\mu(d)\ \mathrm{ind}(f^{n/d},X)\equiv 0\mod n,
$$
thus consequently
\begin{align}
\label{Dold}
\sum_{d\mid n}\mu(d)\ L(f^{n/d})\equiv 0\mod n\tag{DL}
\end{align}
by using the Hopf theorem. These congruences are now called the Dold congruences.} It is also shown in \cite{mp99} (see also \cite[Theorem~9]{Z}) that the above congruences \eqref{Gauss}, \eqref{Euler} and \eqref{Dold} are equivalent. For example, $\eqref{Gauss}\Rightarrow \eqref{Dold}$ follows easily from the following observation: Let $A_i$ be an integer matrix obtained from the homomorphism $f_{i_*}:H_i(X;\mathbb{Q})\to H_i(X;\mathbb{Q})$. Then
\begin{align*}
\sum_{d\mid n}\mu\!\left(\frac{n}{d}\right) L(f^{d})
&=\sum_{d\mid n}\mu\!\left(\frac{n}{d}\right)\left(\sum_i(-1)^i \mathrm{tr}(A_i^d)\right)\\
&=\sum_i(-1)^i\left(\sum_{d\mid n}\mu\!\left(\frac{n}{d}\right)\mathrm{tr}(A_i^d)\right)\\
&\equiv\sum_i(-1)^i\ 0=0\mod n.
\end{align*}
Moreover, we have
\begin{align}\label{EL}
L(f^{p^r})&=\sum_i(-1)^i \mathrm{tr}(A_i^{p^r})
\equiv\sum_i(-1)^i\mathrm{tr}(A_i^{p^{r-1}})\tag{EL}\\
&=L(f^{p^{r-1}})\mod p^r.\notag
\end{align}
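The observation above can be tested directly on sample integer matrices (a Python sketch; the two matrices below, standing in for induced maps on $H_0$ and $H_1$ of some space, are arbitrary choices):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = mat_mul(R, A)
    return R

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def lefschetz(signed_mats, n):
    # L(f^n) = sum_i (-1)^i tr(A_i^n), encoded as pairs (sign, A_i)
    return sum(sign * trace(mat_pow(A, n)) for sign, A in signed_mats)
```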
Now we shall consider the following congruences for the Nielsen numbers and Reidemeister numbers
\begin{align}
\label{DR}
&\sum_{d\mid n}\mu(d)\ R(f^{n/d})\equiv 0\mod n,\tag{DR}\\
\label{ER}
&R(f^{p^r})\equiv R(f^{p^{r-1}})\mod p^r,\tag{ER}\\
\label{DN}
&\sum_{d\mid n}\mu(d)\ N(f^{n/d})\equiv 0\mod n,\tag{DN}\\
\label{EN}
&N(f^{p^r})\equiv N(f^{p^{r-1}})\mod p^r\tag{EN}
\end{align}
and find the relations between them and the conditions on spaces, groups and/or on maps for which the congruences hold true.
\begin{Example}
Let $f$ be a map on an infra-solvmanifold of type $(\mathrm{R})$ which is homotopically periodic or virtually unipotent. Then $N(f^n)=L(f^n)$ for all $n>0$. The congruence \eqref{Dold} immediately implies the congruences \eqref{DN} and \eqref{EN}.
\end{Example}
\begin{Example}
Let $f:S^2\vee S^4\rightarrow S^2\vee S^4$ be the map considered in Example~\ref{wedge}. Then
\begin{align*}
&L(f)=N(f)=0,\ L(f^k)=2+(-2)^k,\ N(f^k)=1\ \ \forall k>1,\\
&R(f^k)=1\ \ \forall k\geq 1.
\end{align*}
Thus the congruence \eqref{DN} fails, while the congruences \eqref{DR} and \eqref{Dold} hold.
\end{Example}
\begin{Example}
Let $f$ be a map on the circle $S^1$ of degree $d$. Then $N(f^n)=|L(f^n)|=|1-d^n|$, and $R(f^n)=|1-d^n|$ when $d\ne\pm1$. When $d\ne\pm1$, all $R(f^n)<\infty$ and so the congruences hold. When $d=1$, the congruence for the Nielsen number is obviously true. We assume $d=-1$. So, $N(f^n)=2$ for odd $n$ and $0$ for even $n$. For $n=2\cdot3^2\cdot5$, we have
\begin{align*}
\sum_{d\mid n}\mu(d)\ N(f^{n/d})&=\sum_{\substack{d\mid n\\ d\text{ even}}}\mu(d)\ 2\\
&=2\left(\mu(2)+\mu(2\cdot3)+\mu(2\cdot3^2)+\mu(2\cdot3\cdot5)+\mu(2\cdot3^2\cdot5)\right)\\
&=2\left((-1)+1+0+(-1)+0\right)=-2\\
&\ne0\mod 2\cdot3^2\cdot5.
\end{align*}
Thus the congruence \eqref{DN} fails.
Next we consider the congruences \eqref{EN} and \eqref{ER}.
If $d\ge0$, then $L(f^n)=1-d^n=-N(f^n)=-R(f^n)$. The congruence \eqref{EL} $L(f^{p^r})\equiv L(f^{p^{r-1}})\mod p^r$ implies the other congruences \eqref{EN} and \eqref{ER}. Assume $d<0$. The congruence \eqref{EL} $L(f^{p^r})\equiv L(f^{p^{r-1}})\mod p^r$ is exactly $1-d^{p^r}\equiv 1-d^{p^{r-1}}\mod p^r$, which implies that $d^{p^r}\equiv d^{p^{r-1}}\mod p^r$ and so $d^{p^r}\pm1\equiv d^{p^{r-1}}\pm1\mod p^r$. Thus the other congruences \eqref{EN} and \eqref{ER} hold true.
In summary, \eqref{EN} and \eqref{ER} are true, but \eqref{DN} is not true.
\end{Example}
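For the circle maps of the last example, the Euler congruences \eqref{EN} and \eqref{ER} can be confirmed by brute force (a Python sketch; the degrees and prime powers tested are illustrative, and $R(f^n)=N(f^n)$ whenever $d\ne\pm1$):

```python
def nielsen_circle(d, n):
    # N(f^n) = |L(f^n)| = |1 - d^n| for a degree-d self-map of S^1
    return abs(1 - d**n)

def euler_congruence_holds(d, p, r):
    # checks N(f^{p^r}) = N(f^{p^{r-1}})  (mod p^r)
    return (nielsen_circle(d, p**r) - nielsen_circle(d, p**(r - 1))) % p**r == 0
```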
The congruence (\ref{DR}) was previously known for automorphisms of almost polycyclic groups (\cite[p.~\!195]{crelle}) and, provided that all Reidemeister numbers of iterates of the maps are finite, for all continuous maps on nilmanifolds (\cite[Theorem~58]{Fel00}). We generalize these results to infra-solvmanifolds of type $(\mathrm{R})$.
\begin{Thm}\label{congruence}
Let $f$ be any continuous map on an infra-solvmanifold of type $(\mathrm{R})$ such that all $R(f^n)$ are finite. Then we have
$$
\sum_{d\mid n}\mu(d)\ R(f^{n/d})=\sum_{d\mid n}\mu(d)\ N(f^{n/d})\equiv0\mod n
$$
for all $n>0$.
\end{Thm}
\begin{proof}
We define
\begin{align*}
P^n(f)&=\text{the set of isolated periodic points of $f$ with period $n$},\\
P_d(f)&=P^d(f)-\bigcup_{k\mid d,\,k\ne d}P^k(f)\\
&=\text{the set of isolated periodic points of $f$ with least period $d$}.
\end{align*}
Then we have
$$
P^n(f)=\coprod_{d\mid n}P_d(f)\ \text{ or } \#P^n(f)=\sum_{d\mid n}\#P_d(f).
$$
By the M\"obius inversion formula when all terms are finite, we have
$$
\#P_n(f)=\sum_{d\mid n} \mu(d)\ \#P^{n/d}(f).
$$
On the other hand, if $x\in P_n(f)$ then $f(x)\in P_n(f)$. For, $f^k(f(x))=f(x)\Rightarrow f^{n-1}(f^k(f(x)))=f^{n-1}(f(x))\Rightarrow f^k(x)=x$, showing that $x$ and $f(x)$ have the same least period $n$. It remains to show that if $x$ is isolated, then $f(x)$ is isolated. Let $U$ be a neighborhood of $x$ containing no other periodic points of period $n$. Then the inverse image $V$ of $U$ under $f^{n-1}$ is a neighborhood of $f(x)$. If $y\in V$ is a periodic point of $f$ with period $n$, then $f^{n-1}(y)\in U$ and so $f^{n-1}(y)=x \Rightarrow y=f^n(y)=f(x)$, which shows that $f(x)$ is isolated. Thus $f$ maps $P_n(f)$ into $P_n(f)$ and this implies that $P_n(f)$ is the disjoint union of $f$-orbits, each of length $n$. So, when $\#P_n(f)$ is finite, it is a multiple of $n$.
Let $M$ be an infra-solvmanifold of type $(\mathrm{R})$ and let $f$ be a map on $M$. Since we are working with the Nielsen numbers and the Reidemeister numbers of iterates of $f$, we may assume that $f$ is induced by an affine map on $S$.
Assume $R(f^n)<\infty$; then $N(f^n)=R(f^n)>0$ by Corollary~\ref{R-fix}. By Remark~\ref{isolated}, $N(f^n)$ is the number of isolated periodic points of $f$ with period $n$; $N(f^n)=\#P^n(f)$.
Consequently, what we have shown is that if all $R(f^{n})<\infty$, then
\begin{align*}
\sum_{d\mid n}\mu(d)\ R(f^{n/d})=\sum_{d\mid n}\mu(d)\ N(f^{n/d})=\#P_n(f)\equiv0\mod n.
\end{align*}
This proves our theorem.
\end{proof}
\begin{Cor}\label{cor:congruence}
Let $f$ be a map on an infra-solvmanifold of type $(\mathrm{R})$ which is homotopic to an affine diffeomorphism induced by an affine map $(d,D)$. If $D_*$ has no eigenvalue that is a root of unity, then all $R(f^n)$ are finite. Hence the Gauss congruences \eqref{DR} for the Reidemeister and \eqref{DN} for the Nielsen numbers hold true.
\end{Cor}
\begin{proof}
This follows from a straightforward generalization of \cite[Proposition~4.3]{DRPAn} from infra-nilmanifolds to infra-solvmanifolds of type $(\mathrm{R})$.
Let $M=\Pi\backslash{S}$ be the infra-solvmanifold of type $(\mathrm{R})$ with holonomy group $\Phi$. Recall that $f$ induces a homomorphism $\phi:\Pi\to\Pi$ given by $\phi(\alpha)\circ(d,D)=(d,D)\circ\alpha$ for all $\alpha\in\Pi$. That is, $\phi=\tau_{(d,D)}$. This implies that $(d,D)$ normalizes $\Pi$ and hence $D$ normalizes $\Phi$. So $AD^n$ normalizes $\Phi$ for all $A\in\Phi$ and all $n$.
Assume that $R(f^n)=\infty$. By Corollary~\ref{R-fix}, there exists $A\in\Phi$ such that $A_*D_*^n$ has eigenvalue $1$. By \cite[Lemma~3.2]{Malfait}, $D_*^n=A_*^{-1}(A_*D_*^n)$ is virtually unipotent. Thus $D_*$ is virtually unipotent, a contradiction.
\end{proof}
\begin{Example}
Let $f$ be an Anosov diffeomorphism on an infra-nilmanifold. By \cite[Lemma~4.2]{DRPAn}, $f$ has an affine homotopy lift $(d,D)$ with hyperbolic $D_*$. From the above corollary, the Gauss congruences \eqref{DR} and \eqref{DN} hold true.
\end{Example}
\begin{Example}[{\cite[Example~11]{Fel00}, \cite{LL-JGP}}]
Let $f:M\to M$ be an expanding smooth map on a closed smooth manifold. It is known in \cite{Gromov} that $f$ is topologically conjugate to an expanding map on an infra-nilmanifold. Thus we can assume that $M$ is an infra-nilmanifold and $f$ is a map induced by an affine map $(d,D)$, where all the eigenvalues of $D_*$ are of modulus $>1$.
Since $(d,D)$ satisfies the conditions of Corollary~\ref{cor:congruence}, all $R(f^n)$ are finite and so the congruences \eqref{DR} and \eqref{DN} hold true.
On the other hand, by \cite{Shub69}, the set $\mathrm{Fix}(f^n)$ of fixed points of the expanding map $f^n$ is nonempty and finite. Thus by Proposition~\ref{KL} and Corollary~\ref{R-fix} we have $N(f^n)=\#\mathrm{Fix}(f^n)=R(f^n)$.
\end{Example}
\section{Introduction}
\label{sec:introduction}
Since the late eighteenth century, there have been increased movements toward
equality (\cite{piketty_2022}).
%
{\em Political equality} is one of the most important types of equality.
%
As a core element of political equality,
{\em proportionality in representation} (PR), or {\em proportional representation},
is well-known.
%
As indicated by the slogan ``one person, one vote'',
PR reflects the subgroups of a
population (or an electorate, or votes; in what follows, we use ``population'' for
ease of writing) {\em proportionally} in a legislature or an elected body.
%
PR is considered the ideal system for ensuring the
equality of individuals since it considers the contribution of {\em all} people and votes
(see, e.g., \cite{lijphart_1998}).
%
It has been adopted worldwide in modern comparative politics, apportionment, and elections
(see, e.g., \cite{allen_2017,benoit_2000,lijphart_1998,lijphart_2012,pukelsheim_2017,puyenbroeck_2008,samuels_2001,tg_2003,pr}).
To study the degree of representation inequality between two groups, PR schemes
use an indicator, namely, the {\em proportion of seats to population} (PSP),
to estimate the contributions (weights) of individuals within a group.
%
For example, in apportioning seats in a legislative body to different subgroups,
the PSP of subgroup $i$ is defined as
${\rm PSP}_i = \frac{s_i}{p_i}$, where $s_i$ and $p_i$ denote the number of distributed seats
and the population of subgroup $i$, respectively.
%
PR approaches require that ${\rm PSP}_i=c$, or equivalently, that
$s_i = c p_i$ for all $i$ for some {\em constant} $c>0$.
%
An apportionment with different PSPs between subgroups is
referred to as a {\em malapportionment} and is considered to violate
representation equality among individuals
(see, e.g., \cite{auerbach_1964,eckman_2021,frederick_2008,huntington_1942,samuels_2001}).
Unfortunately, as we will see later, the PSP has a bias in estimating
the true contribution; hence,
PR is insufficient for ensuring equality.
%
This study was motivated by a paradox of real-world PR schemes.
%
For each subgroup $i$, the number of seats $s_i$ is said to be ``proportional'' to the population $p_i$;
however, {\em no proportionality can be observed between the total number of seats $S = \sum s_i$
and the overall population $P = \sum p_i$} in the real world, as discussed in Subsection~\ref{subsec:number}.
%
We observed that this {\em inconsistency} occurs because the PSP
(i.e., $\frac{S}{P} = \frac{\sum s_i}{\sum p_i}$)
is {\em not} constant in real-world scenarios.
%
We explain why this inconsistency is critical by posing the following question.
\begin{question}\label{que:contribution}
Suppose that a group of $p$ people contributed equally to an outcome $s$.
%
What is the equal contribution of an individual within that group?
\end{question}
A simple answer is the proportion $\frac{s}{p}$.
%
However, this estimate depends on the group size.
%
We use an example to study this bias on equality, as shown in Figure~\ref{fig:illustration}.
%
Assume that $s = \sqrt{p}$, a value proportional to the {\em perimeter}
of the square consisting of $p$ people, for all $p$.
%
Then, $\frac{s}{p} = \frac{1}{\sqrt{p}}$ estimates that the contribution
of an individual in a larger group is {\em less}
than that of an individual in a smaller group, even if the two individuals are completely identical.
%
Moreover, if $s>0$ is {\em independent} of $p$, the intuitive contribution
of an individual is {\em zero}, as the outcome does not change with the number of people.
%
However, $\frac{s}{p} \neq 0$.
%
This population-dependent bias exists unless $s \propto p$
(e.g., $s$ is the {\em area} of the square in Figure~\ref{fig:illustration}).
%
Summarizing the above observation, we propose the following theorem.
\begin{theorem}\label{theorem:main}
The proportion of outcome to population has a population-dependent bias in estimating the
contribution of an individual within a group unless
the outcome is proportional to the population.
\end{theorem}
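For the power-law outcomes that recur throughout this article, the bias is immediate; the following short derivation is a sketch under that assumption (it is not part of the formal statement above).

```latex
% Sketch: suppose the outcome follows s = c\,p^{\gamma} for constants c>0 and \gamma.
% Then the proportion evaluates to
\frac{s}{p} \;=\; \frac{c\,p^{\gamma}}{p} \;=\; c\,p^{\gamma-1},
% which depends on the population p whenever \gamma \neq 1.
% For example, s=\sqrt{p} gives s/p = p^{-1/2}, and a constant outcome
% (\gamma = 0) gives s/p = c/p \neq 0; only \gamma = 1 (s \propto p) is unbiased.
```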
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{figure-1}
\caption{Illustration of the bias in the proportion
for evaluating the contribution of an individual.
%
Identical individuals in different groups are evaluated differently unless
the outcome is proportional to the group size.}
\label{fig:illustration}
\end{figure}
This theorem implies that, in general, the PSP, and hence PR schemes,
cannot correctly estimate the contribution of an individual.
%
A common belief underlying the PR theory is that,
as long as the PSP between two subgroups is equal for a single apportionment,
there is no (relative) inequality.
%
However, this assumption is not true.
%
Consider the example shown in Figure~\ref{fig:illustration}.
%
Assume that $s^*=\sqrt{p}$ is a {\em standard} (e.g., the average)
number of seats for a population $p$.
%
Suppose that there are $5$ seats to apportion.
%
The PR scheme assigns $\frac{9}{25} \times 5 = 1.8$ and
$\frac{16}{25} \times 5 = 3.2$ seats to Groups A and B, respectively
(with the PSP $0.2$).
%
These numbers are rounded to the nearest whole numbers, namely, $2$ and $3$ seats, respectively.
%
With this apportionment, the PR scheme claims that people in {\em Group B}
are {\em underrepresented}, since ${\rm PSP}_B = \frac{3}{16} < \frac{2}{9} = {\rm PSP}_A$.
However, this argument neglects the population-dependent bias of the PSP.
%
In fact, since 2 seats is the standard number of seats for 4 people
according to the given assumption ($2 = \sqrt{4}$),
assigning 2 seats to Group A gives the 9 people
in Group A the same weight as 4 people, with respect to the standard.
%
Thus, the {\em true} (effective) weight of an individual in Group A is $\frac{4}{9}$.
%
Analogously, assigning 3 seats to Group B gives the
16 people in Group B the same weight as $3^2=9$ people.
%
Therefore, the true weight of an individual in Group B is $\frac{9}{16}$.
%
Since $\frac{4}{9} < \frac{9}{16}$, the people underrepresented are those
within Group A, not Group B.
%
This shows that the bias of the PR scheme is critical.
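The arithmetic of this example is easy to misread, so a minimal sketch may help. The helper names (`psp`, `effective_weight`) are ours, and the standard $s^*=\sqrt{p}$ is the illustrative assumption of Figure~\ref{fig:illustration}, not a real-world formula.

```python
def psp(seats, pop):
    """Proportion of seats to population (the biased indicator)."""
    return seats / pop

def effective_weight(seats, pop, f_inv=lambda s: s ** 2):
    """Weight of an individual: effective population f^{-1}(s) over real population.

    f_inv defaults to the inverse of the illustrative standard s* = sqrt(p).
    """
    return f_inv(seats) / pop

# Groups A (9 people, 2 seats) and B (16 people, 3 seats) after rounding.
assert psp(3, 16) < psp(2, 9)        # PSP: Group B looks underrepresented ...
assert effective_weight(2, 9) < effective_weight(3, 16)  # ... but 4/9 < 9/16
```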
This example demonstrates that the PSP, and thus the PR scheme, is insufficient for ensuring equality.
%
The PR approach advocates that assigning $1.8$ and $3.2$ seats to Groups A and B, respectively, is equal.
%
However, there is a gap in the weights as large as $(1.8^2/9):(3.2^2/16)=9:16$,
with respect to a standard $s^*=\sqrt{p}$.
%
To ensure equality among individuals, an indicator and apportioning scheme should be developed
to counteract this population-dependent bias with an {\em appropriate} standard.
%
This concept is analogous to the design of the body mass index (BMI).
%
The BMI is defined as $\frac{{\rm weight}}{{\rm height}^2}$ as opposed to $\frac{{\rm weight}}{{\rm height}}$,
because empirical studies have shown that the standard (i.e., {\em average}) weight of an adult
is approximately proportional to the square of their height (\cite{eknoyan_2007}).
%
The PSP and PR approach are limited because they implicitly assume proportionality
between the optimal (or average) number of representatives (seats) and the population,
which is unfortunately not observed in real-world scenarios, as discussed in Section~\ref{sec:review}.
In the following work, we propose a {\em standardized representation} theory
with an unbiased {\em population seat index} (PSI)
by linking the number of representatives
(seats) with an apportioning scheme (Section~\ref{sec:theory}).
%
Moreover, we show that, in the real world,
the PR approach {\em overestimates} ({\em underestimates})
the weights of individuals in less (more) populous groups,
thus resulting in {\em underrepresentation} ({\em overrepresentation}) for the people within those groups.
%
In contrast, the proposed scheme, which is a special case of degressive proportionality,
guarantees equality (Section~\ref{sec:discussion}).
\section{Literature review}\label{sec:review}
Determining the optimal number of representatives (seats) for a population is sometimes called
the most susceptible political problem
(\cite{madison_1788}).
%
The problem remained open for more than 130 years after the late eighteenth century and was divided into two subproblems in the 1920s (\cite{chafee_1929}).
%
The first subproblem is determining the optimal size of a legislative body, i.e., the {\em total} number of seats.
%
The other subproblem is optimally apportioning a {\em fixed} number of seats to different subgroups.
%
These two subproblems were originally linked by the Framers of the U.S. Constitution
(\cite{madison_1789,kromkowski_1991}; see also the 1790--1830 data in Figure~\ref{fig:us_house})
but were unlinked by Title 2 of the U.S. Code (\cite{kromkowski_1991}).
%
However, as discussed in Section~\ref{sec:introduction},
these two problems are indeed {\em dependent} (Theorem~\ref{theorem:main}).
%
In this section, we review the literature on both problems and relink them in the next section.
\subsection{Standard number of representatives (seats) for a population}\label{subsec:number}
There are many studies on the standard number $s^*=f(p)$ of representatives
(seats) for a population $p$.
%
Note that $s^*=f(p)$ assumes the equality of individuals since it depends
only on the population and not on any individual properties.
%
Table~\ref{tab:number} shows a summary of the key results.
%
We observe that all of these models can be formulated or approximated
with different values of $\gamma$ in Formula~\ref{eqn:general},
except for the logarithmic model developed by \cite{ismail_2018}, which has a
polynomial voting cost (omitted in Table~\ref{tab:number}).
%
\begin{equation}\label{eqn:general}
s^* \ = \ f(p) \ \propto \ {p}^{\gamma} \ \mbox{ for a constant $\gamma$, $0 \le \gamma \le 1$}.
\end{equation}
\begin{table}[htb]
\caption{Key studies on the standard number of representatives (see
Formula~\ref{eqn:general} for the meaning of $\gamma$).}\label{tab:number}
\centering
\begin{tabular}{c c c l}
{Scheme} & {$\gamma$} & {Reference} & {Data/Model/Remark} \\\hline\hline
Empirical & $\approx 0.37$ & \cite{stigler_1976} & 37 democratic countries\\
(Regression) & $\approx 0.41$ & \cite{auriol_2012} & 111 countries\\
& $\approx 0.39$ & \cite{zhao_2020} & 192 countries\\
& $\approx 0.37$ & Figure~\ref{fig:us} in this article & U.S. Congress between 1790 and 1920 \\ \hline \hline
Theoretical & & Sorted by $\gamma$ & \\\hline
Fixed-size & $0$ & U.S. Senate & 100 (two seats per state)\\
& & U.S. House after the 1920s & 435 \\\hline
Cubic root & $1/3$ & \cite{taagepera_1972} & Social mobilization-based model\\\hline
Sublinear & $\frac{1}{3} \le \gamma \le \frac{5}{9}$ & \cite{zhao_2020} & Social network-based model\\
& (e.g.) $0.4$ & (Same as above) & First model matching real-world data \\\hline
Square root & $1/2$ & \cite{penrose_1946} & Voting power-based model\\
& & \cite{auriol_2012} & Mechanism design\\
& & \cite{godefroy_2018} & Mechanism design\\
& & \cite{gamberi_2021} & Complex network-based model\\
& & \cite{margaritondo_2021} & Revision of \cite{taagepera_1972}\\
& & \cite{blonder_2021}& Derived from \cite{madison_1789}\\\hline
Proportional & $1$ & U.S. House before 1830 & Approximately (see Figure~\ref{fig:us_house})\\
& & \cite{ismail_2018} & Voting model (fixed voting cost)\\
& & \cite{revel_2022} & Voting model \\\hline
\end{tabular}
\end{table}
For clarity, we refer to the numbers obtained by empirical studies
as the {\em average} numbers since they were determined through regression.
%
The other values, i.e., the numbers obtained by theoretical studies, are referred to as
the {\em optimal} numbers since they were determined through optimization models.
%
The real-world data
showed $\gamma \approx 0.4$ (\cite{auriol_2012,stigler_1976,zhao_2020};
Figure~\ref{fig:us} of this study).
%
This phenomenon is surprising, suggesting {\em the existence of a standard number of representatives
and that this number depends largely on the population and
little on other factors}, such as location, race, culture, religion, economics,
or political system (\cite{stigler_1976,taagepera_1972,zhao_2020}).
%
Moreover, voting game- and mechanism design-based theoretical models
(\cite{auriol_2012,godefroy_2018,ismail_2018,penrose_1946,revel_2022})
are limited in explaining this phenomenon,
as they have different values of $\gamma$ and lack theoretical connections to social representations.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{pop_seats_1790-1920_US-allometry.pdf}
\caption{Population of the U.S. (x-axis) and seats of the U.S. Congress (y-axis)
between 1790 and 1920 in log-scale.
%
Black dots denote the total size of the Congress (Senate + House),
whereas white dots denote the size of the House.
%
The regression result (thick line) shows that $y \propto x^{0.37}$, with a p-value of $1.8\times 10^{-9}$
and an adjusted $R^2=0.95$.
For reference, we also plot a standard formula $y = \frac{1}{3}x^{0.4}$
(\cite{zhao_2020}, dashed line).
Data source: \cite{usa-population} for population data and \cite{usa-house} for seat data.}
\label{fig:us}
\end{figure}
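The empirical $\gamma$ values above come from log-log regressions of this kind. As a sketch only, the following fits synthetic (population, seats) pairs generated from the formula of \cite{zhao_2020}; the data points are illustrative stand-ins, not the actual country or census data behind Figure~\ref{fig:us}.

```python
import numpy as np

# Synthetic (population, seats) pairs generated from s = (1/3) * p^0.4
# (the standard formula of zhao_2020); illustrative stand-ins for real data.
pops = np.array([1e6, 5e6, 2e7, 1e8, 5e8, 1.3e9])
seats = (1.0 / 3.0) * pops ** 0.4

# Fitting log(seats) against log(population) by least squares recovers
# the exponent gamma (slope) and the constant c (exp of the intercept).
gamma, log_c = np.polyfit(np.log(pops), np.log(seats), 1)

assert abs(gamma - 0.4) < 1e-9
assert abs(np.exp(log_c) - 1.0 / 3.0) < 1e-9
```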
In particular, we remark that the formula $f(p) = \frac{1}{3} \times p^{0.4}$
proposed by \cite{zhao_2020} is the first model derived from a theoretical analysis that matches
the value of $\gamma$ observed in current real-world data.
%
Surprisingly, this $\gamma$ value also matches the size of the U.S. Congress
(Senate + House of Representatives)
between 1790 and 1920 with high accuracy (Figure~\ref{fig:us}).
%
The existence of this social network analysis-derived formula
supports that the (optimal) number of representatives may be largely
determined by social connections,
which is the basic idea underlying \cite{taagepera_1972} (see also \cite{margaritondo_2021}),
\cite{gamberi_2021}, and \cite{zhao_2020}.
%
We note that $\gamma < 1$ suggests that human society aggregates
public voices efficiently: the number of representatives required grows sublinearly with the population.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{us_house.pdf}
\caption{Population of the U.S. and the size of the House between 1790 and 1920 (after 1929, the size
was fixed to 435). Data sources: \cite{usa-population} and \cite{usa-house}.}
\label{fig:us_house}
\end{figure}
Formula \ref{eqn:general} with $\gamma < 1$ is called {\em subproportional}.
%
James Madison (1751--1836) was the first to note a subproportionality
between the number of representatives and the population.
%
He proposed an amendment to the U.S. Constitution (\cite{madison_1789})
that can be considered a square root scheme (\cite{blonder_2021}).
%
The most interesting idea of his proposal is that it is
{\em subproportional} in the {\em long-term} but {\em proportional} in the {\em short-term}
(\cite{blonder_2021,zhao_2022}).
%
This piecewise subproportionality is considered important, as it is easier
to accept and implement than the straightforward nonlinear formula shown in (\ref{eqn:general}).
The idea proposed by Madison still affects the House today,
as shown by changes in the size of the House.
%
The U.S. Constitution states that the seats in the House must be reapportioned
to the states according to their populations every ten years (Article I, Section 2, Clause 3).
%
It seems that this clause had an implicit understanding
that the size of the House should increase {\em proportionally} to the total population
(\cite{madison_1789,kromkowski_1991}),
which can be approximately observed between 1790 and 1830 (Figure~\ref{fig:us_house}).
%
However, the size of the House between 1840 and 1920 increased by substantially
{\em less} than the size of the population (see Figure~\ref{fig:us_house}).
%
Therefore, the size of the House followed Madison's piecewise subproportionality proposal.
This piecewise subproportionality scheme went to an extreme in the 1920s,
when the size of the House was fixed to 435.
%
The reason behind this decision was that Congress failed to reapportion seats since
they did not have an optimal method for {\em rounding fractional seats to whole numbers}
(\cite{chafee_1929}).
%
As a result, the House failed to perform a reapportionment according to the 1920 census,
thereby violating the Constitution.
%
Finally, after several failed attempts,
due to the pressure of the approaching 1930 census,
Congress ``hastily
passed reapportionment legislation'' (\cite{kromkowski_1991}, 134) and
permanently fixed the size of the House for {\em ease of operation}.
The discord between the total number of representatives and apportionments
has led to several issues (\cite{kromkowski_1991,kromkowski_1992}).
%
We note that this discord also introduces an inconsistency into the philosophy of the
Constitution: fixing the size of the House suggests that the number
of representatives is {\em independent} of the population,
whereas the Constitution states that the number of representatives
should be proportional to the population.
%
Many people have suggested increasing the size of the House
(\cite{bowen_2021,frederick_2008,frederick_2009,kromkowski_1991,kromkowski_1992,leib_1998,lijphart_1998}).
%
Our study supports these voices.
\subsection{Seat apportionment schemes}\label{subsec:apportion}
There are three schemes for apportioning a (fixed) number of seats to different subgroups:
%
\begin{enumerate}
\item {\em Fixed Apportionment} (FA): This scheme assigns a fixed number of seats to
each subgroup.
%
An FA scheme is adopted by the U.S. Senate (two seats per state).
%
Single and fixed-member district electoral systems are also examples of this type of scheme.
\item {\em Proportional Apportionment} (PA): This scheme assigns seats proportionally according to
the populations of subgroups.
%
The House and many legislatures worldwide use this type of scheme.
\item {\em Degressive Proportionality} (DP):
This is a new type of apportionment scheme that has been adopted by the
European Parliament (\cite{european_2018}).
%
Let us discuss it in the following.
\end{enumerate}
%
As one of the most important steps toward individual equality, PA and PR schemes
began to dominate
the literature in the late eighteenth century (\cite{pr}).
%
Nevertheless, they have been challenged by DP schemes recently.
%
According to the official definition in \cite{european_2018}
(see also \cite{cegielka_2019,grimmett_2017}), DP is defined as follows.
%
For any two subgroups A and B with populations $p_A$ and $p_B$, respectively,
the numbers of seats $s_A>0$ and $s_B>0$ before rounding to whole
numbers should satisfy the following constraints:
%
\begin{eqnarray}
\left(\frac{p_A}{s_A} - \frac{p_B}{s_B}\right) \left(p_A - p_B\right) > 0 & \mbox{if $p_A \neq p_B$,}\label{eqn:dp1}\\
(s_A - s_B)(p_A - p_B) > 0 & \mbox{if $p_A \neq p_B$,}\label{eqn:dp2}\\
s_A = s_B & \mbox{otherwise ($p_A = p_B$).}\label{eqn:dp3}
\end{eqnarray}
%
For example, Formula~\ref{eqn:general} is a type of DP when $0<\gamma<1$.
%
Note that proportionality requires that $\frac{p_A}{s_A} = \frac{p_B}{s_B}$,
thereby satisfying (\ref{eqn:dp2}) and (\ref{eqn:dp3}) but never satisfying (\ref{eqn:dp1}).
%
Hence, despite its name, DP is {\em not} a type of proportionality approach.
%
Instead, we propose the term ``subproportionality'' in place of DP.
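The three defining conditions can be checked mechanically for any pair of groups; the sketch below encodes them directly (the function name is ours, and the inputs are fractional seat counts before rounding).

```python
def is_degressive(p_a, s_a, p_b, s_b):
    """Check the three DP conditions (before rounding) for one pair of groups."""
    if p_a == p_b:
        return s_a == s_b                              # equal populations, equal seats
    cond1 = (p_a / s_a - p_b / s_b) * (p_a - p_b) > 0  # larger group: more people per seat
    cond2 = (s_a - s_b) * (p_a - p_b) > 0              # larger group: still more seats
    return cond1 and cond2

# s = sqrt(p) is degressively proportional ...
assert is_degressive(9, 3.0, 16, 4.0)
# ... while exact proportionality (s = p/5) fails the first condition.
assert not is_degressive(9, 1.8, 16, 3.2)
```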
The nonproportionality of DP schemes has received criticism for ``unequal'' representations of individuals.
%
The European Parliament has explained (see \cite{grimmett_2017}) that DP {\em compromises} between
{\em individual} equality (per capita) and the equality of {\em states} (per state).
%
This ``compromise'' can also be observed in Canada, Germany, and the EU Council (\cite{allen_2017}).
%
In fact, the U.S. also utilizes this compromise with its {\em two} chambers:
The Senate follows a per-state principle, whereas the House follows a per-capita principle.
%
By adopting these two principles simultaneously, the U.S. Congress implements a type of
DP scheme.
%
Therefore, we used the total number in our regression study (Figure~\ref{fig:us}).
%
Nevertheless, later we will show that there is actually {\em no compromise}, and the DP method is better than PA/PR approaches in terms of {\em individual} equality.
Finally, we remark on the minor but extensively studied issue
of an ``optimal'' method for {\em rounding} fractional numbers of seats
to whole numbers.
%
The only consensus is that no method is optimal in all respects.
%
We refer readers to
\cite{balinski_2001,chafee_1929,huntington_1942,kromkowski_1992,squire_2005} for discussions, and
\cite{benoit_2000,kalandrakis_2021,puyenbroeck_2008,samuels_2001,tg_2003}
for measuring disproportionality.
\section{Standardizing representation with an unbiased inequality indicator}
\label{sec:theory}
Thus far, we have described how the PSP and PR scheme are limited.
%
In this section, we propose an indicator to quantify the weight of an individual
in a subgroup without the bias of the PSP.
%
Then, we use this indicator to derive an apportionment scheme with unbiased individual equality.
We assume that a function $f = f(p)$ for the standard
number of representatives (seats) for population $p$ is available.
%
As discussed in Section~\ref{sec:introduction},
such a standard function is necessary to ensure equality among individuals.
%
It can be determined according to the average number obtained by an empirical study, the optimal number obtained by a theoretical study, or a model adopted by
policy makers (e.g., the formula adopted by the European Parliament).
%
See Subsection~\ref{subsec:number} for examples.
We first assume that $f$ is invertible.
%
This assumption is true for all existing models (Table~\ref{tab:number})
except for the $\gamma=0$ case.
%
Let $f^{-1}$ denote the inverse function of $f$.
%
Suppose that $s$ seats are assigned to population $p$.
%
As discussed in Section~\ref{sec:introduction},
for estimating the weight of an individual,
we should use the ratio $\frac{p^*}{p}$ of the
{\em effective} population $p^*$ to the {\em real} population $p$,
where the effective population is the standard population that deserves $s$ seats,
i.e., $p^* = f^{-1}(s)$.
%
Therefore, we propose
the following {\em population seat index} (PSI) as an indicator for estimating the
contribution (weight) of an individual.
%
\begin{equation}\label{eqn:main}
w(s, p) \ = \ \frac{f^{-1}(s)}{p}.
\end{equation}
%
Note that (\ref{eqn:main}) calculates a {\em scalar}.
%
This value is equal to $1$ if $s=f(p)$ is the standard number of seats for population $p$.
%
Thus, we can estimate the {\em absolute inequality} of an assignment
with respect to the standard value.
%
If the value calculated by Formula \ref{eqn:main} is greater than 1,
the number of seats $s$ is greater than the standard number, i.e.,
overrepresentation; however, if the value is less than 1, the number
of seats $s$ is less than the standard number, i.e., underrepresentation.
%
This fact is independent of the population $p$.
%
Therefore, the proposed PSI indicator has no population-dependent bias.
We illustrate this concept with the example in Section~\ref{sec:introduction},
where the standard is $f(p)= \sqrt{p}$ (see Figure~\ref{fig:illustration}).
%
Suppose that we assign 2 seats to Group A (9 people) and 3 seats to Group B (16 people).
%
According to Formula \ref{eqn:main}, the (unbiased) weight of an individual
in Group A is $w(2, 9) = \frac{2^2}{9} = \frac{4}{9}$, which is less than the weight $w(3, 16)=\frac{9}{16}$
of Group B.
%
Therefore, people in Group A are {\em less} represented than people in Group B,
in contrast to the (biased) analysis according to the PSP.
%
In fact, we can determine an apportionment with {\em relative} equality by solving
$w(s_A, p_A) = w(s_B, p_B) \Leftrightarrow \frac{s_A^2}{p_A} = \frac{s_B^2}{p_B}$
for $p_A = 9$, $p_B = 16$, and $s_B = 5-s_A$.
%
A simple calculation shows that $s_A = 15/7$ and $s_B = 20/7$ (with weight $25/49$).
%
Rounding these values to the nearest whole numbers, we obtain $2$ and $3$ seats, respectively.
%
This apportionment gives less representation
to Group A ($15/7 > 2$) and more representation to Group B ($20/7<3$),
matching the above analysis.
%
Note that the absolute equality condition, i.e., $w(s, p) = 1$ for all groups, occurs
if and only if the total number of seats is $3+4=7$.
In general, we can use the PSI to determine an apportionment with
absolute or relative individual equality.
%
Assume that there are $k$ groups with populations $p_1, p_2, \ldots, p_k$.
%
Given a total number $S$ of seats, the apportionment problem with (unbiased)
{\em relative} equal weight $w^*$ can be formulated as determining the number $s_i$ of seats assigned to group $i$, where $i=1, 2, \ldots, k$, as follows:
\begin{eqnarray}
w^* = \frac{f^{-1}(s_1)}{p_1} = \frac{f^{-1}(s_2)}{p_2} = \cdots = \frac{f^{-1}(s_k)}{p_k},\label{eqn:equation1}\\
s_1 + s_2 + \cdots + s_k = S.\label{eqn:equation2}
\end{eqnarray}
%
According to (\ref{eqn:equation1}), we have
\begin{equation}\label{eqn:solution}
s_i = f(w^* p_i) \ \mbox{ for $i=1,2,\ldots,k$}.
\end{equation}
%
Then, according to (\ref{eqn:equation2}), the weight $w^*$ can be calculated by solving:
\begin{equation}\label{eqn:weight}
\sum_{i=1}^{k}f(w^* p_i) \ = \ S.
\end{equation}
%
Once $w^*$ is found, the apportionment of seats can be determined with Formula~\ref{eqn:solution}.
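For a general $f$, Formula~\ref{eqn:weight} has no closed form, but $w^*$ can be found numerically; the sketch below uses bisection, assuming only that $f$ is increasing (the function name, bracket values, and tolerance are ours).

```python
def apportion(pops, S, f, w_lo=1e-12, w_hi=1e6, tol=1e-10):
    """Solve sum_i f(w * p_i) = S for the common weight w, then apportion.

    Assumes f is increasing, so the left-hand side is increasing in w.
    Returns (w, [s_1, ..., s_k]) with fractional seats; rounding is separate.
    """
    total = lambda w: sum(f(w * p) for p in pops)
    while w_hi - w_lo > tol:
        mid = (w_lo + w_hi) / 2.0
        if total(mid) < S:
            w_lo = mid
        else:
            w_hi = mid
    w = (w_lo + w_hi) / 2.0
    return w, [f(w * p) for p in pops]

# The example from Section 1: f(p) = sqrt(p), groups of 9 and 16 people, 5 seats.
w, seats = apportion([9, 16], 5, lambda p: p ** 0.5)
assert abs(w - 25 / 49) < 1e-6        # common weight 25/49
assert abs(seats[0] - 15 / 7) < 1e-6  # s_A = 15/7
assert abs(seats[1] - 20 / 7) < 1e-6  # s_B = 20/7
```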
%
We note that this scheme is the same as the traditional PR scheme when the standard function
$f(p) \propto p$, i.e., the total number of seats is proportional to the population.
%
The solution of Formula~\ref{eqn:weight} depends on $f$.
%
If $f(p) = cp^\gamma$ for some $c > 0$
and $\gamma \neq 0$, which is a special case of Formula~\ref{eqn:general}
that has been observed worldwide, the proposed indicator PSI is
\begin{equation}\label{eqn:simple}
w(s, p) \ = \ \frac{s^{1/\gamma}}{c^{1/\gamma}p}.
\end{equation}
%
The constant $\frac{1}{c^{1/\gamma}}$ can be removed if we are interested in only
{\em relative} equality.
%
This situation is sometimes convenient since not all theoretical models for determining the optimal
number of seats provide a simple estimate of $c$.
%
With this simplification, the $\gamma=1$ case degenerates to the
PSP, i.e., $\frac{s}{p}$.
%
For a general $c>0$ and $\gamma \neq 0$, we can derive an apportionment scheme
with relative individual equality as follows.
%
A simple calculation with (\ref{eqn:weight}) shows that the weight is
\begin{equation}\label{eqn:weight2}
w^* \ = \ \left(\frac{S}{c \sum_{i=1}^{k} p_i^\gamma}\right)^{1/\gamma}.
\end{equation}
%
Therefore, according to (\ref{eqn:solution}), the number of seats can be calculated as
\begin{equation}\label{eqn:solution2}
s_i = \frac{p_i^\gamma}{\sum_{j=1}^{k} p_j^\gamma} \times S \ \mbox{ for $i=1,2,\ldots,k$}.
\end{equation}
%
Thus, the proposed scheme distributes seats proportionally to the
$\gamma$-th powers of the populations.
%
For absolute equality, i.e., weight $w^*=1$,
the total number of seats must be equal to $S = c\sum_{j=1}^{k} p_j^\gamma$.
Finally, we consider a function $f$ with no inverse function.
%
We consider only $f = c$ for a constant $c>0$, which is adopted by the U.S. Senate.
%
Let $f_\epsilon(p) = c + \epsilon p$,
where $\epsilon > 0$ is a small number.
%
We define the weight $w(s, p)$ for $f$ according to the limit of $w_\epsilon(s, p)$
when $\epsilon \to 0$:
\begin{eqnarray}
w_\epsilon(s, p) = \frac{s-c}{\epsilon p} \to \left\{\begin{array}{lc}
0, & s=c,\ \epsilon \to 0,\\
+\infty, & s>c,\ \epsilon \to 0,\\
-\infty, & s<c,\ \epsilon \to 0.
\end{array}\right.
\end{eqnarray}
%
The weight is $w(s, p)=0$ if $s=c$.
%
This result matches our intuition, as no one contributes to the number of seats.
%
Otherwise, if $s>c$ (respectively, $s<c$), the weight is $+\infty$ ($-\infty$),
which is reasonable.
%
In either case, the result is independent of the population.
%
The only equal apportionment is $s_i = c$ for all $i$ (thus, $S = kc$).
%
Therefore, the apportionment of the U.S. Senate is consistent with respect to individual equality, with each individual having a constant weight of zero.
\section{Implications}
\label{sec:discussion}
We discuss the implications of the proposed theory for existing studies.
%
First, we empirically compare the PSI with the PSP using data from G20 countries (except for the EU).
%
For the standard function, we used $f(p) = \frac{1}{3}p^{0.4}$ (\cite{zhao_2020}), as
this function shows the average size of the congress in the world through regression.
%
Figure~\ref{fig:unbiased_g20} shows the results,
where ``Effective weight'' denotes the PSI calculated by Formula~\ref{eqn:main}.
%
We note several interesting findings in the figure.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{biased_g20_ipu_2021_10_16.pdf}
\includegraphics[width=0.9\textwidth]{unbiased_g20_ipu_2021_10_16.pdf}
\caption{Comparing G20 countries (except for the EU) according to the indicators
PSP (upper) and PSI (lower), where the total size of the congress is used for a bicameral country.
%
For the PSI, $f(p) = \frac{1}{3}p^{0.4}$ (\cite{zhao_2020}) is used as
the standard function, as it shows the global average.
%
Note that the PSP provides no measure of the appropriateness,
while the proposed PSI metric does: ${\rm PSI}=1$ indicates absolute compatibility
with the global standard (average); ${\rm PSI}>1$ (respectively, ${\rm PSI}<1$)
indicates that the size of the congress is more (less) than the global average.
%
Differences in the order due to different indicators can be confirmed,
e.g., Canada and Germany, Saudi Arabia and the USA, etc.
%
Additionally, Australia and Germany have almost the same PSP value but considerably different PSI values.
%
Data source: \cite{ipu_2021}.
}
\label{fig:unbiased_g20}
\end{figure}
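As a sketch of how such a comparison is computed, the snippet below evaluates the PSI for two G20 countries using rough 2021 figures; these round numbers are ours for illustration and are not the exact IPU values used in the figure.

```python
def psi(seats, pop, c=1.0 / 3.0, gamma=0.4):
    """PSI under the standard f(p) = c * p^gamma, i.e., f^{-1}(s) = (s/c)^(1/gamma)."""
    return (seats / c) ** (1.0 / gamma) / pop

# Rough 2021 figures (total congress size, population); illustration only.
usa = psi(535, 331e6)      # House 435 + Senate 100
germany = psi(805, 83e6)   # Bundestag ~736 + Bundesrat 69

assert usa < 1 < germany   # USA below, Germany above the global standard
```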
Next, we consider the theoretical implications for existing apportioning schemes.
%
Suppose that the standard number of seats for a population $p$ is given by the function
$f(p)=c_1 p^{\gamma_1}$ for constants $c_1>0$ and $\gamma_1$.
%
We use ``Absolute Eq.'' and
``Relative Eq.'' to show the conditions for absolute equality
(i.e., an unbiased weight of $1$) and relative equality (i.e., the same weight for
two different groups).
%
For the latter, we use $p_1$ and $p_2$ to denote the populations of the two groups.
Table~\ref{tab:fixed} summarizes the results for an FA scheme.
%
Note that for a fixed-member district electoral system,
regardless of how the function $f(p)$ is chosen,
relative equality is always possible if the populations are equal.
%
Thus, our theory is compatible with equal redistricting (see \cite{auerbach_1964}).
%
We note that absolute equality can be achieved under certain conditions.
\begin{table}[htb]
\caption{Implications for the fixed apportionment (FA) scheme (assuming it assigns $c_2$ seats to each group).}\label{tab:fixed}
\centering
\begin{tabular}{c|c|c|c}
{Standard function $f(p)$} & {PSI ($=w(c_2,p)$)} & {Absolute Eq.} & {Relative Eq.} \\\hline\hline
$c_1$ (by $c_1+\epsilon p$, $\epsilon\to 0$) & 0 if $c_1 = c_2$, $\infty$ otherwise & $c_1 = c_2$ & Always \\\hline
$c_1 p^{\gamma_1}$, where $\gamma_1 \ne 0$ & $\frac{c_2^{1/\gamma_1}}{c_1^{1/\gamma_1}p} \propto \frac{1}{p}$ & $p = \left(\frac{c_2}{c_1}\right)^{1/\gamma_1}$ & $p_1 = p_2$\\\hline
\end{tabular}
\end{table}
Table~\ref{tab:sub-proportional} summarizes the results of the subproportional
apportionment (SA) scheme (i.e., the DP scheme),
assuming that $c_2p^{\gamma_2}$ seats are assigned to population $p$
for some constants $c_2 > 0$ and $\gamma_2$, with $0<\gamma_2<1$.
%
Relative equality can be achieved by selecting $\gamma_2 = \gamma_1$
or by redistricting groups to equal populations.
%
For the European Parliament, since equal populations are not an option,
$\gamma_2 = \gamma_1$ must be used,
in contrast to existing ``compromise'' proposals which try to keep $\gamma_2=1$.
%
We note that absolute equality can be achieved under certain conditions.
\begin{table}[htb]
\caption{Implications for the subproportional apportionment (SA) scheme (i.e., the DP scheme) with $\gamma_2 > 0$.}\label{tab:sub-proportional}
\centering
\begin{tabular}{c|c|c|c}
{Standard function $f(p)$} & {PSI ($=w(c_2 p^{\gamma_2}, p)$)} & {Absolute Eq.} & {Relative Eq.} \\\hline\hline
$c_1$ (by $c_1+\epsilon p$, $\epsilon\to 0$) & 0 if $p=(\frac{c_1}{c_2})^{1/\gamma_2}$, else $\infty$ & $p=(\frac{c_1}{c_2})^{1/\gamma_2}$ & $\frac{c_2p_1^{\gamma_2}-c_1}{p_1}=\frac{c_2p_2^{\gamma_2}-c_1}{p_2}$\\\hline
$c_1 p^{\gamma_1}$, $\gamma_1 \ne 0$ & $\left(\frac{c_2}{c_1}\right)^{1/\gamma_1}p^{\gamma_2/\gamma_1-1} $ & \vtop{\hbox{\strut $\gamma_1=\gamma_2$, $c_1=c_2$ or}\hbox{$\gamma_1\neq \gamma_2$, $p=\left(\frac{c_1}{c_2}\right)^{\frac{1}{\gamma_2-\gamma_1}}$}} & $\gamma_1 = \gamma_2$ or $p_1 = p_2$ \\\hline
\end{tabular}
\end{table}
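The relative-equality column of Table~\ref{tab:sub-proportional} can be verified numerically; the sketch below uses the global-average standard $f(p)=\frac{1}{3}p^{0.4}$ and an arbitrary $c_2$ (the function name and test populations are ours).

```python
def psi_power(p, c1=1.0 / 3.0, g1=0.4, c2=1.0, g2=0.4):
    """PSI of an SA/DP assignment s = c2 * p^g2 under the standard f(p) = c1 * p^g1."""
    s = c2 * p ** g2
    return (s / c1) ** (1.0 / g1) / p

# g2 == g1: the PSI is population-independent (relative equality) ...
assert abs(psi_power(9e6) - psi_power(1.6e7)) < 1e-9 * psi_power(9e6)
# ... g2 == 1 (proportional assignment): larger groups get larger weights.
assert psi_power(9e6, g2=1.0) < psi_power(1.6e7, g2=1.0)
```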
For the proportional apportionment (PA) scheme,
the results can be obtained by simply
substituting $\gamma_2 = 1$ in Table~\ref{tab:sub-proportional}.
%
However, since $f(p) \propto p^{\gamma_1}$ for some $0<\gamma_1<1$ in the real world,
the only option for individual equality with the PA (i.e., the PR) scheme is choosing $p_1=p_2$,
i.e., equal redistricting.
%
This means that the PA/PR scheme can be completely replaced by the SA/DP scheme, since equal redistricting is
also supported by the SA/DP scheme.
We remark that the PA scheme adopted by the House and various
legislatures worldwide is not truly proportional.
%
A general idea of such an approach is to first determine a population size $d$,
which should ideally be $d = \frac{\sum_j p_j}{S}$ for a total of $S$ seats,
then, assign $\frac{p_i}{d}$ seats (before rounding to whole numbers) to group $i$
with population $p_i$ for all $i$.
%
However, in general, $d$ is not a constant; thus,
$\frac{p_i}{d} = \frac{S p_i}{\sum_j p_j}$ is not proportional to $p_i$.
%
As previously discussed, this approach has a population-dependent bias and thus
cannot ensure representation equality among individuals.
%
To achieve true equality, we should either resize the total number of
seats in proportion to the total population, as the House did in 1790--1830,
or adopt a subproportional apportionment scheme.
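As a purely numerical illustration of the divisor approach and its subproportional alternative (the population figures, $S$, and $\gamma$ below are hypothetical, chosen only for display), one can compare the quotas $p_i/d = S p_i/\sum_j p_j$ with seats distributed in proportion to $p_i^{\gamma}$:

```python
def pa_quotas(populations, seats):
    """Proportional (PA) quotas p_i / d with d = sum_j p_j / S."""
    d = sum(populations) / seats
    return [p / d for p in populations]

def sa_quotas(populations, seats, gamma):
    """Subproportional (SA) quotas: seats split in proportion to p_i**gamma."""
    weights = [p ** gamma for p in populations]
    return [seats * w / sum(weights) for w in weights]

pops = [8_000_000, 2_000_000, 500_000]   # hypothetical group sizes
S = 100                                  # total seats

pa = pa_quotas(pops, S)
sa = sa_quotas(pops, S, gamma=0.4)       # illustrative gamma in (0, 1)

# Under PA every group gets the same number of seats per capita (before
# rounding); under SA with gamma < 1 smaller groups get more seats per capita.
for p, qa, qb in zip(pops, pa, sa):
    print(f"pop={p:>9,}  PA={qa:6.2f}  SA={qb:6.2f}  SA seats/100k={1e5*qb/p:.3f}")
```

Whether equal per-capita seats (PA) or the subproportional allocation better equalizes individual representation weight is exactly the question the standard function $f(p)$ settles.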
\section{Conclusion}
\label{sec:conclusion}
In the previous century, the literature noted the existence of a standard
(average or optimal) number of representatives (seats) for a population,
which follows a subproportional scheme with respect to the size of the population.
%
Based on these previous works, this article pointed out a bias inherent in the PSP metric
and thus in the PR scheme in estimating the contribution (weight) of an individual.
%
To address this issue, it introduced a standard function $f(p)$ for the number of seats
for a population $p$ and a nonproportional indicator, the PSI.
The proposed indicator was shown to be free of the bias inherent in the PSP.
%
By using it as the indicator to develop an apportionment scheme,
a standardized representation theory is proposed with absolute or relative individual equality.
%
In particular, if $f(p) \propto p^\gamma$ for some constant $\gamma$,
the proposed scheme distributes seats proportionally to the
$\gamma$-th power of the populations.
%
Because $0 \le \gamma < 1$ in the real world,
it is concluded that the PR scheme represents people in smaller groups {\em less}
than people in larger groups, whereas the proposed subproportionality scheme,
which is a type of degressive proportionality approach, guarantees equality.
In the future, rounding methods should be revised
since existing methods were designed for PR schemes only.
%
Empirical studies are also required to investigate the issue in detail,
e.g., the implications for the apportionment of the EU Parliament.
%
We also suggest that related sectors reconsider existing apportionment schemes
to better ensure individual equality.
\section*{Funding}
This work was supported by the Japan Society for the Promotion of Science (JSPS),
Grant-in-Aid for Scientific Research (C) [JP8K11182].
\section*{Acknowledgments}
We thank Prof. Takashi Sekiyama for the valuable discussions.
\printbibliography
\end{document}
\section{Introduction}%
\label{sec:intro}
In models of warm inflation~\cite{Berera:1995wh,Berera:1995ie}, the inflaton interacts with a thermal bath of relativistic particles with a slowly evolving temperature $T$. In order to prevent the temperature from redshifting away, the thermal bath must be continuously replenished by some interaction with the inflaton, the form of the interaction being model dependent. The spectrum of metric scalar perturbations in warm inflation has been studied in several works, see e.g.~\cite{Berera:1995wh,Berera:1995ie,Hall:2003zp,Graham:2009bf}, and its expression depends on the specific form of the interaction between the thermal bath and the inflaton. The tensor perturbations will see, as usual, their vacuum fluctuations amplified by the accelerated expansion, which leads to a contribution to their power spectrum with amplitude ${\cal P}^t_{\rm vac}=\frac{2}{\pi^2}\frac{H^2}{M_P^2}$. The thermal bath will provide an additional source of tensors. In this work we compute this contribution.
Since the interaction of gravitational waves with the thermal bath depends only on the properties of the latter, our results will not depend on the specifics of the inflaton sector. They will, however, depend on the strength of the interactions that maintain the thermal bath in equilibrium.
Besides the Hubble parameter $H$ and the temperature $T$, a relevant scale for our system will be given by the mean free path $\ell_{\rm mfp}$ of the particles in the thermal bath. For warm inflation to be at work, the hierarchy $T\gtrsim \ell_{\rm mfp}^{-1}\gg H$ must be realized. The first inequality derives from the fact that one cannot define a mean free path shorter than the thermal wavelength, while the second is equivalent to the requirement of thermal equilibrium in an expanding Universe. In Section~\ref{sec:short} we will compute the contribution to the tensor spectrum from modes at length scales much shorter than $\ell_{\rm mfp}$, whereas in Section~\ref{sec:long} we will compute the contributions from larger scales, which give the dominant effect.
\section{The sourced tensor spectrum}%
We work in conformal time, and consider only transverse-traceless perturbations $h_{ij}({\bf x},\,\tau)$ around a flat Friedmann-Robertson-Walker background $ds^2=a(\tau)^2\left[-d\tau^2+\left(\delta_{ij}+h_{ij}\right)dx^idx^j\right]$. We will approximate the inflating Universe with a de Sitter space, $a(\tau)=-1/(H\tau)$. Then, in the presence of a stress-energy tensor $T_{ab}({\bf x},\,\tau)$, that we assume to be generated by a bath of relativistic particles, the tensor fluctuations satisfy the equation
\begin{align}\label{eq:eq1}
h_{ij}''({\bf x},\,\tau)+2\frac{a'}{a}\,&h_{ij}'({\bf x},\,\tau)-\Delta h_{ij}({\bf x},\,\tau)\nonumber\\
&=\frac{2}{M_P^2}\Pi_{ij}{}^{ab}(\partial_{\bf x})\,T_{ab}({\bf x},\,\tau)\,,
\end{align}
where $\Pi_{ij}{}^{ab}(\partial_{\bf x})=\Pi_i^a(\partial_{\bf x})\,\Pi_j^b(\partial_{\bf x})-\frac12 \Pi_{ij}(\partial_{\bf x})\,\Pi^{ab}(\partial_{\bf x})$ is the projector on the transverse-traceless modes, with $\Pi_{ij}(\partial_{\bf x})=\delta_{ij}-\partial_i\partial_j/\Delta$, while a prime denotes a derivative with respect to the conformal time $\tau$. The stress-energy tensor is defined in such a way that $T_{ab}\sim \partial_a\phi\,\partial_b\phi+\ldots$ for a scalar field whose kinetic term is normalized as $\int d\tau\,d^3{\bf x}\,\frac{a^2}{2}\phi'{}^2$. A transformation to canonically normalized fields brings $T_{ab}\rightarrow \frac{1}{a^2}T_{ab}^{(c)}$, where the index ${}^{(c)}$ refers to comoving quantities. Note that eq.~(\ref{eq:eq1}) does not assume thermalization of the gravitational waves. This possibility has been considered in~\cite{Ferreira:2017lnd} where it was shown that such a situation cannot be achieved consistently in warm inflation.
After taking the Fourier transform of eq.~(\ref{eq:eq1}), and solving it in terms of the Green's function $G_p(\tau,\,\tau')$, we obtain the correlator
\begin{align}\label{eq:main_corr}
&\langle h_{ij}({\bf p},\,\tau)h_{ij}({\bf p}',\,\tau)\rangle_{\rm s}=\frac{4}{M_P^4}\int^\tau \frac{d\tau'}{a(\tau')^2}\int^\tau \frac{d\tau''}{a(\tau'')^2}\nonumber\\
&\quad\times G_p(\tau,\,\tau')\,G_{p'}(\tau,\,\tau'')\,\Pi_{ij}{}^{ab}(-i{\bf p})\,\Pi_{ij}{}^{cd}(-i{\bf p}')\nonumber\\
&\quad\times\int \frac{d^3{\bf x}\,d^3{\bf x}'}{(2\pi)^{3}}e^{-i{\bf p}{\bf x}-i{\bf p}'{\bf x}'}\langle T{}^{(c)}_{ab}({\bf x},\,\tau')\,T{}^{(c)}_{cd}({\bf x}',\,\tau'')\rangle\,,
\end{align}
where $\langle ...\rangle_{\rm s}$ refers to the component of the correlator sourced by the thermal bath, and where the propagator, in the approximation of exact de Sitter background, reads
\begin{align}
G_p(\tau,\tau')&=\frac{1}{p^3\,\tau'{}^2}\Big[\left(1+p^2\,\tau\,\tau'\right)\sin \left(p\left(\tau-\tau'\right)\right)\nonumber\\
& - \left(p\left(\tau-\tau'\right)\right) \,\cos \left(p\left(\tau-\tau'\right)\right)\Big]\,\Theta\left(\tau-\tau'\right)\,.
\end{align}
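As a quick numerical sanity check (a verification aid, not part of the original derivation; the values of $p$ and $\tau'$ below are arbitrary), the following sketch verifies by finite differences that this propagator solves the homogeneous mode equation $G''-(2/\tau)G'+p^2G=0$ away from $\tau'$ (using $a'/a=-1/\tau$ in de Sitter), vanishes at $\tau=\tau'$, and has a unit jump in its $\tau$-derivative there:

```python
import math

def G(p, tau, taup):
    """Retarded Green's function for tensor modes in de Sitter (tau > taup, both < 0)."""
    d = tau - taup
    return ((1 + p**2 * tau * taup) * math.sin(p * d)
            - p * d * math.cos(p * d)) / (p**3 * taup**2)

p, taup = 1.3, -2.0   # arbitrary sample values
h = 1e-5

# 1) away from tau', G solves the homogeneous mode equation
#    G'' - (2/tau) G' + p^2 G = 0, since a'/a = -1/tau for a = -1/(H tau)
for tau in (-1.5, -1.0, -0.5):
    G0 = G(p, tau, taup)
    G1 = (G(p, tau + h, taup) - G(p, tau - h, taup)) / (2 * h)
    G2 = (G(p, tau + h, taup) - 2 * G0 + G(p, tau - h, taup)) / h**2
    residual = G2 - (2 / tau) * G1 + p**2 * G0
    assert abs(residual) < 1e-4, residual

# 2) G vanishes at tau = tau' and its tau-derivative jumps to 1 there,
#    as required by the unit delta-function source
assert abs(G(p, taup, taup)) < 1e-12
slope = (G(p, taup + h, taup) - G(p, taup, taup)) / h
assert abs(slope - 1.0) < 1e-3
print("Green's function checks passed")
```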
In what follows we will consider the tensor spectrum evaluated at the end of inflation, $\tau=-1/H$, at large scales $p\ll H$, so that we will set $\tau=0$ in the propagator.
\section{Contribution from short wavelength modes}
\label{sec:short}
Let us start by computing the contribution to the graviton two point function from the stress-energy correlators when both comoving distances and (conformal) time differences are much shorter than the comoving mean free path $\ell^{(c)}_{\rm mfp}$. In this regime we can neglect the effects of interactions and treat our theory as that of a free field.
For definiteness we will assume that our system is given by a conformally coupled, canonically normalized massless scalar field $\varphi$ in thermal equilibrium at comoving temperature $T^{(c)}$. As a consequence, the stress-energy tensor correlator appearing in eq.~(\ref{eq:main_corr}) takes the form
\begin{widetext}
\begin{align}
\langle T{}^{(c)}_{ab}({\bf x},\,\tau')T{}^{(c)}_{cd}({\bf x}',\,\tau'')\rangle=&\partial_{y_1^a}\partial_{y_2^b}\partial_{y_3^c}\partial_{y_4^d}\Big[\langle\varphi({\bf y}_1,\,\tau')\varphi({\bf y}_2,\,\tau')\varphi({\bf y}_3,\,\tau'')\varphi({\bf y}_4,\,\tau'')\rangle\nonumber\\
&-\langle\varphi({\bf y}_1,\,\tau')\varphi({\bf y}_2,\,\tau')\rangle\,\langle\varphi({\bf y}_3,\,\tau'')\varphi({\bf y}_4,\,\tau'')\rangle\Big]\Big|_{{\bf y}_1={\bf y}_2={\bf x},\,{\bf y}_3={\bf y}_4={\bf x}'}\,,
\end{align}
\end{widetext}
where we ignored the part of stress-energy tensor proportional to $\delta_{ab}$ that is projected out by $\Pi_{ij}{}^{ab}(\partial_{\bf x})$.
To compute $\langle\varphi({\bf y}_1,\,\tau')\,\varphi({\bf y}_2,\,\tau')\,\varphi({\bf y}_3,\,\tau'')\,\varphi({\bf y}_4,\,\tau'')\rangle$ in a thermal state we Wick-rotate to Euclidean spacetime with periodic imaginary (conformal) time, $\varphi(i\tau+1/T^{(c)})=\varphi(i\tau)$ and we use Wick's theorem to decompose the four-point correlator into products of thermal Green's functions. The thermal Green's function at comoving temperature $T^{(c)}$, in terms of the Euclidean conformal time $\tau_E=i\tau$ reads
\begin{align}
&G_T(x,\,\tau_E)=-T^{(c)}\int \frac{d^3{\bf k}}{(2\pi)^3}\sum_{n=-\infty}^\infty \frac{e^{2\pi i n T^{(c)}\tau_E+i{\bf k}\cdot{\bf x}}}{(2\pi n\,T^{(c)})^2+{\bf k}^2}\nonumber\\
&\qquad=-\frac{T^{(c)}}{4\pi\, x}\frac{\sinh\left(2\pi T^{(c)} x\right)}{\cosh\left(2\pi T^{(c)} x\right)-\cos\left(2\pi T^{(c)} \tau_E\right)}\,,
\end{align}
that, rotating back to real conformal time, turns into~\cite{Eftekharzadeh:2010qp}
\begin{align}
&G_T(x,\,\tau)=-\frac{T^{(c)}}{4\pi\,x}\frac{\sinh\left(2\pi T^{(c)} x\right)}{\cosh\left(2\pi T^{(c)} x\right)-\cosh\left(2\pi T^{(c)} \tau\right)}\,.
\end{align}
Note that in the limit $T^{(c)}\to 0$ we obtain the Minkowskian Green's function for a massless field,
\begin{align}
&G_0(x,\,\tau)=-\frac{1}{4\pi^2}\,\frac{1}{x^2-\tau^2}\,.
\end{align}
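As a quick numerical check of this limit (a verification aid with arbitrary sample points), the relative deviation of $G_T$ from $G_0$ shrinks like $(2\pi T^{(c)}x)^2$ as $T^{(c)}\to0$:

```python
import math

def G_T(x, tau, T):
    """Thermal Green's function at comoving temperature T (real conformal time)."""
    a = 2 * math.pi * T
    return -(T / (4 * math.pi * x)) * math.sinh(a * x) / (math.cosh(a * x) - math.cosh(a * tau))

def G_0(x, tau):
    """Zero-temperature Minkowski Green's function of a massless field."""
    return -1.0 / (4 * math.pi**2 * (x**2 - tau**2))

x, tau = 0.7, 0.2   # arbitrary sample point with x > |tau|
for T in (1e-2, 1e-3, 1e-4):
    rel = abs(G_T(x, tau, T) / G_0(x, tau) - 1.0)
    assert rel < (2 * math.pi * T * x)**2   # deviation shrinks like (2 pi T x)^2

print("G_T -> G_0 as T -> 0: verified")
```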
To renormalize away the effects of the zero temperature fluctuations of $\varphi$, we will work with the subtracted Green's function
\begin{align}
G^{\rm sub}_T(x,\,\tau)=G_T(x,\,\tau)-G_0(x,\,\tau)\,.
\end{align}
We are now in a position to compute
\begin{align}
&\langle T{}^{(c)}_{ab}({\bf x},\,\tau)T{}^{(c)}_{cd}({\bf 0},\,0)\rangle=2\,\hat{\bf x}_a\hat{\bf x}_b\hat{\bf x}_c\hat{\bf x}_d\,G_{T,\,xx}^{\rm sub}(x,\,\tau)^2,
\end{align}
where we denote $G_{T,\,xx}^{\rm sub}(x,\,\tau)=\partial_x^2\,G^{\rm sub}_T(x,\,\tau)$, and where a hat denotes a vector with unit length. We thus obtain
\begin{align}\label{eq:hh_noscattering}
&\langle h_{ij}({\bf p},\,0)\,h{}^{ij}({\bf p}',\,0)\rangle_{\rm s,\ short}=\frac{4}{M_P^4}\delta({\bf p}+{\bf p}')\nonumber\\
&\times\int \frac{d\tau'}{a(\tau')^2}\int \frac{d\tau''}{a(\tau'')^2} G_p(0,\,\tau')\,G_{p'}(0,\,\tau'')\, {\cal I}(p,\,\tau'-\tau''),
\end{align}
where we have defined
\begin{align}
&{\cal I}(p,\,\Delta\tau)\equiv 2\,\Pi_{ij}{}^{ab}(-i{\bf p})\,\Pi^{ij}{}^{cd}(-i{\bf p})\nonumber\\
&\qquad\times\int d^3{\bf x}\,e^{-i{\bf p}\cdot{\bf x}}\,\hat{{\bf x}}_a\,\hat{{\bf x}}_b\,\hat{{\bf x}}_c\,\hat{{\bf x}}_d\,G_{T,\,xx}^{\rm sub}(x,\,\Delta\tau)^2\,.
\end{align}
Since here we are considering only the short-distance modes, the upper limit of integration in $d{\bf x}$ in the integral above is given by $\approx \ell_{\rm mfp}^{(c)}$, but, since $G^{\rm sub}_T(x,\,\tau)\to 0$ for $2\pi T^{(c)}\,x\gtrsim 1$, we can approximate it by infinity assuming $2\pi T^{(c)}\,\ell_{\rm mfp}^{(c)}\gg 1$.
Numerical evaluation then gives that for $p\ll 2\pi T^{(c)}$,
\begin{align}\label{eq:calI_num}
{\cal I}(p,\,\Delta\tau)\simeq {\cal I}_0(p|\Delta\tau|)\,\, T^{(c)}{}^5\,,
\end{align}
where the function ${\cal I}_0(x)$ is plotted in Figure~\ref{fig:i0}. The modes with $p\gtrsim 2\pi T^{(c)}$ are suppressed and irrelevant.
\begin{figure}[h]
\centering
\includegraphics[scale=.6]{ploti0.pdf}
\caption{The function ${\cal I}_0(x)$, defined in eq.~(\ref{eq:calI_num}).}
\label{fig:i0}
\end{figure}
The comoving temperature $T^{(c)}$ appearing in eq.~(\ref{eq:calI_num}) is time-dependent, as it is given by $a\,T$, where the physical temperature $T$ is approximately constant during warm inflation. This raises the question of whether $T^{(c)}$ should be evaluated at time $\tau'$ or at time $\tau''$. The fact that we are considering short distance modes helps us here. In fact, for those modes $|\tau'-\tau''|\lesssim \ell_{\rm mfp}^{(c)}=-\tau' \left(\ell_{\rm mfp}H\right)$ and since thermalization requires $\left(\ell_{\rm mfp}H\right)\ll 1$, we have $|\tau'-\tau''|\ll |\tau'|\simeq |\tau''|$ in our integral. As a consequence, the short wavelength contribution to the graviton correlator will be confined to the region of integration with $\tau'\simeq\tau''$ and it makes no difference whether $T^{(c)}$ is evaluated at $\tau'$ or $\tau''$. To keep things symmetric, we will assume $T^{(c)}=\sqrt{a(\tau')\,a(\tau'')}\,T$ inside the integral.
The condition $|\tau'-\tau''|\lesssim \ell_{\rm mfp}^{(c)}$ also helps to simplify the next step. Since the propagators multiplied by the factor $T^{(c)}{}^5=a(\tau')^{5/2}\,a(\tau'')^{5/2}\,T^5$ give suppressed contribution unless $|p\tau'|\approx |p\tau''|={\cal O}(1)$, we obtain that $p|\tau'-\tau''|\lesssim |p\tau'| \left(\ell_{\rm mfp}H\right)={\cal O}\left(\ell_{\rm mfp}H\right)\ll 1$, so that we can approximate
\begin{align}
{\cal I}(p,\,\Delta\tau)\approx \left\{
\begin{array}{ll}
{\cal I}_0(0)\, T^{(c)}{}^5\simeq .02\, T^{(c)}{}^5, & p|\Delta\tau|\lesssim \left(\ell_{\rm mfp}H\right)\\
0, &p|\Delta\tau|\gtrsim \left(\ell_{\rm mfp}H\right).
\end{array}
\right.
\end{align}
We finally find the approximate result
\begin{align}
&\int^\tau \frac{d\tau'}{a(\tau')^2}\int^\tau \frac{d\tau''}{a(\tau'')^2} G_p(\tau,\,\tau')\,G_{p'}(\tau,\,\tau'')\, {\cal I}(p,\,\tau'-\tau'')\nonumber\\
&\qquad\approx \frac{\left(\ell_{\rm mfp}H\right)}{p}\int^\tau \frac{d\tau'}{a(\tau')^4} G_p(\tau,\,\tau')^2\times .02\, T^{(c)}{}^5\nonumber\\
&\qquad= \frac{5\times 10^{-3}}{p^3}\ell_{\rm mfp}\,T^5\,.
\end{align}
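The coefficient can be checked numerically (a verification aid, not the paper's method): substituting $u=-p\tau'$ turns $G_p(0,\tau')$ into $j_1(u)/p$ with $j_1(u)=(\sin u-u\cos u)/u^2$, and the remaining integral reduces to $(0.02\,\ell_{\rm mfp}T^5/p^3)\int_0^\infty j_1(u)^2/u\,du$ with $\int_0^\infty j_1(u)^2/u\,du=1/4$, so that $0.02/4=5\times10^{-3}$:

```python
import math

def j1(u):
    """Spherical Bessel function j_1(u) = (sin u - u cos u) / u^2."""
    if u < 1e-4:
        return u / 3.0 - u**3 / 30.0   # small-u expansion avoids cancellation
    return (math.sin(u) - u * math.cos(u)) / u**2

def integrand(u):
    return 0.0 if u == 0.0 else j1(u) ** 2 / u

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# I = int_0^inf j_1(u)^2 / u du; the tail beyond U oscillates around
# cos(u)^2 / u^3 and contributes roughly 1/(4 U^2), added as a correction.
U = 400.0
I = simpson(integrand, 0.0, U, 400_000) + 1.0 / (4 * U**2)
assert abs(I - 0.25) < 1e-3, I     # known identity: 1/(2 l (l+1)) = 1/4 for l = 1

coeff = 0.02 * I                   # 0.02 from I_0(0), times the u-integral
print(f"I = {I:.6f}, prefactor = {coeff:.4f}")
```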
Introducing the tensor power spectrum ${\cal P}^t$ through $\langle h_{ij}({\bf p},\,\tau)h_{ij}({\bf p}',\,\tau)\rangle=\frac{2\pi^2}{p^3}{\delta^{(3)}({\bf p}+{\bf p}')}\,{\cal P}^t(p)$, we finally obtain
\begin{align}\label{eq:Ptshort}
{\cal P}^t_{\rm s,\ short}(p)\approx 10^{-3}\frac{\ell_{\rm mfp}\,T^5}{M_P^4}\,.
\end{align}
We will now consider the contribution from hydrodynamic modes with wavelength larger than the mean free path, and we will find that they give the dominant contribution to the sourced correlator.
\section{Contribution from hydrodynamic modes}
\label{sec:long}
In the hydrodynamic regime (in which either distances or time differences are larger than the mean free path of the particles) we can apply a treatment analogous to that used in~\cite{Ghiglieri:2015nfa,Ghiglieri:2020mhm} for the case of a radiation dominated Universe. We start from the relation~\cite{lifschitz}
\begin{align}
&\langle T{}^{(c)}_{ab}({\bf x},\,\tau)\,T{}^{(c)}_{cd}({\bf x}',\,\tau')\rangle=2\,T{}^{(c)}\,\Bigg[\eta{}^{(c)}\,(\delta_{ac}\,\delta_{bd}+\delta_{ad}\,\delta_{bc})\nonumber\\
&\quad \quad +\left(\zeta{}^{(c)}-\frac23\eta{}^{(c)}\right)\delta_{ab}\,\delta_{cd}\Bigg]\delta({\bf x}-{\bf x}')\,\delta(\tau-\tau')\,,
\end{align}
where $\eta{}^{(c)}$ and $\zeta{}^{(c)}$ are respectively the comoving shear and the bulk viscosity. Inserting the expression above into eq.~(\ref{eq:main_corr}) we obtain
\begin{align}\label{eq:final_hydro_pt}
{\cal P}^t_{\rm s,\ long}(p)=\frac{24\,p^3}{\pi^2\,M_P^4}\int \frac{d\tau'}{a(\tau')^4} G_p(0,\,\tau')^2\,T^{(c)}(\tau')\,\eta^{(c)}(\tau'),
\end{align}
where we used $\Pi_{ij}{}^{ab}(-i{\bf p})\,\Pi_{ij}{}^{ab}(-i{\bf p})=3$.
Eq.~(\ref{eq:final_hydro_pt}) is our main result. To proceed we need to specify the expression of $\eta^{(c)}$, that depends on the details of the interactions within the thermal bath.
The shear viscosity can take values between two limits.
A lower bound on $\eta^{(c)}$ is conjectured~\cite{Kovtun:2004de} to be
\begin{align}\label{eq:eta_small}
\eta^{(c)}\ge \frac{s^{(c)}}{4\pi}\,,
\end{align}
where $s^{(c)}=\frac{2\pi^2}{45}g_{*,S}\, T^{(c)}{}^3$ is the comoving entropy density of the thermal gas, with $g_{*,S}$ denoting the effective number of degrees of freedom in entropy. Applying the inequality~(\ref{eq:eta_small}), we obtain
\begin{align}\label{eq:lower_pt1}
{\cal P}^t_{\rm s,\ long}(p)\ge \frac{4}{15\pi M_P^4}\,g_{*,S}\,T{}^4\,p^3\int d\tau'\,G_p(0,\,\tau')^2
\end{align}
where we have assumed that the physical temperature $T=T^{(c)}/a$ is approximately constant. Evaluation of the integral in $d\tau'$ gives
\begin{align}\label{eq:lower_pt2}
{\cal P}_{\rm s,\ long}^t(p)\ge \frac{2}{45\,M_P^4}g_{*,S}\,T^4\simeq \frac{4}{3\pi^2}\frac{\rho_r}{M_P^4}
\end{align}
where in the last step we have introduced the energy density in the radiation, $\rho_r=\frac{\pi^2}{30}g_*\,T^4$ assuming $g_*\simeq g_{*,S}$.
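The last step can be checked numerically (again a verification aid): with $u=-p\tau'$ one has $G_p(0,\tau')=j_1(u)/p$, hence $p^3\int d\tau'\,G_p(0,\tau')^2=\int_0^\infty j_1(u)^2\,du=\pi/6$, and $(4/(15\pi))\times(\pi/6)=2/45$:

```python
import math

def j1(u):
    """Spherical Bessel function j_1(u) = (sin u - u cos u) / u^2."""
    if u < 1e-4:
        return u / 3.0 - u**3 / 30.0
    return (math.sin(u) - u * math.cos(u)) / u**2

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# J = int_0^inf j_1(u)^2 du = pi/6; the tail beyond U averages to ~1/(2U).
U = 400.0
J = simpson(lambda u: j1(u) ** 2, 0.0, U, 400_000) + 1.0 / (2 * U)
assert abs(J - math.pi / 6) < 1e-3, J

# with u = -p tau':  p^3 * int dtau' G_p(0, tau')^2 = J, hence the prefactor
coeff = (4 / (15 * math.pi)) * J
assert abs(coeff - 2 / 45) < 1e-4
print(f"J = {J:.6f} (pi/6 = {math.pi/6:.6f}), prefactor = {coeff:.6f}")
```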
Since by assumption the radiation must be subdominant with respect to the inflaton energy, $\rho_r\ll 3\,H^2M_P^2$, eq.~(\ref{eq:lower_pt2}) shows that if the inequality~(\ref{eq:eta_small}) is saturated, ${\cal P}^t_{\rm s,\ long}\ll {\cal P}^t_{\rm vac}\equiv\frac{2}{\pi^2}\,\frac{H^2}{M_P^2}$.
An upper bound on ${\cal P}^t_{\rm s,\ long}$ is induced by an upper bound on $\eta^{(c)}$. The shear viscosity is approximately given by
\begin{align}
\eta^{(c)}\approx \ell_{{\rm mfp}}^{(c)}\,T^{(c)}{}^4\,.
\end{align}
Imposing that the mean free path is much shorter than the horizon radius $\ell_{{\rm mfp}}^{(c)}\ll (a\,H)^{-1}$, we obtain the upper bound
\begin{align}
{\cal P}^t_{\rm s,\ long}(p)\ll \frac{p^3}{M_P^4}\int^\tau \frac{d\tau'}{a(\tau')^4} G_p(\tau,\,\tau')^2\,\frac{T^{(c)}(\tau')^5}{a(\tau')\,H}\simeq \frac{T^5}{H\,M_P^4}
\end{align}
that, for relatively large values of the temperature, can exceed $ {\cal P}^t_{\rm vac}$ even in a regime in which the energy density in radiation is subdominant with respect to that in the background, $T\ll \sqrt{H\,M_P}$.
\subsection{An example}
To work out a specific example, let us consider a model where the thermal bath is given by a real scalar field $\varphi$ with negligible mass and with self-interaction $V(\varphi)=\frac{\lambda}{4!}\,\varphi^4$.
The shear viscosity for this model, in the $\lambda\ll 1$ limit, was computed in~\cite{Jeon:1994if}, where it was found that $\eta^{(c)}\simeq 2860\,T^{(c)}{}^3/\lambda^2$. The mean free path is given by $\left[\ell_{\rm mfp}^{(c)}\right]^{-1}=\sigma^{(c)}\,n^{(c)}$, where for a relativistic boson the comoving number density reads $n^{(c)}=\frac{\zeta(3)}{\pi^2}T^{(c)}{}^3$ and the cross section is $\sigma^{(c)}= \frac{\lambda^2}{32\pi^2\,s_{\rm Man}^{(c)}}\simeq\frac{\lambda^2}{128\pi^2\,T^{(c)}{}^2}$ (using the comoving Mandelstam invariant $s_{\rm Man}^{(c)}\simeq (2T^{(c)})^2$).
Using these formulae we obtain
\begin{align}
\eta^{(c)}\simeq 2860\,T^{(c)}{}^3\times\frac{\zeta(3)}{128\pi^4}\ell_{{\rm mfp}}^{(c)}\,T^{(c)}\,\simeq .2\, \ell_{{\rm mfp}}^{(c)}\,T^{(c)}{}^4
\end{align}
and going back to physical quantities, we finally obtain
\begin{align}\label{eq:Ptexample}
{\cal P}^t_{\rm s\ long}(p)\simeq .3\,\frac{\ell_{{\rm mfp}}\,T^5}{M_P^4}
\end{align}
where thermalization requires the model-dependent quantity $\ell_{{\rm mfp}}\ll 1/H$. Comparison of the amplitude of eq.~(\ref{eq:Ptexample}) with that of eq.~(\ref{eq:Ptshort}) shows that the hydrodynamic modes dominate the sourced component of the tensor spectrum.
\smallskip
If, to fix ideas, we set $\ell_{{\rm mfp}}H\simeq .2$, we see that a tensor spectrum as large as $\sim 10^{-10}$ (that saturates the current observational bounds) can be obtained for temperatures $T\simeq 10^{13}\left(H/{\rm GeV}\right)^{1/5}$GeV, where the condition that the radiation density is subdominant by a factor of at least $5$ with respect to the background inflaton energy $\simeq 3\, H^2\,M_P^2$ allows for a Hubble parameter during inflation as low as $\sim 2\times 10^{12}$~GeV. For such a value of the Hubble parameter one gets ${\cal P}^t_{\rm vac}\simeq 10^{-12}$. For this choice of parameters, therefore, the presence of the thermal bath enhances the tensor spectrum by about $2$ orders of magnitude.
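The quoted numbers can be reproduced roughly as follows (assuming the reduced Planck mass $M_P\simeq2.4\times10^{18}$~GeV and $g_*=1$ for the single-scalar example; both conventions are our assumptions, not stated in the text):

```python
import math

M_P = 2.435e18    # reduced Planck mass in GeV (our assumed convention)
P_target = 1e-10  # tensor amplitude saturating current bounds
lH = 0.2          # ell_mfp * H, as in the text

# P^t ~ 0.3 ell T^5 / M_P^4 with ell = lH / H  =>  T = (P M^4 H / (0.3 lH))^(1/5)
def T_of_H(H):
    return (P_target * M_P**4 * H / (0.3 * lH)) ** 0.2

pref = T_of_H(1.0)            # coefficient of (H/GeV)^(1/5), in GeV
assert 5e12 < pref < 2e13     # ~10^13 GeV, as quoted

# radiation subdominant by a factor >= 5 (taking g_* = 1 for the single scalar):
# (pi^2/30) T^4 <= (3/5) H^2 M_P^2; solve for the smallest allowed H by bisection
def excess(H):
    return (math.pi**2 / 30) * T_of_H(H)**4 - 0.6 * H**2 * M_P**2

lo, hi = 1e10, 1e16
for _ in range(200):
    mid = math.sqrt(lo * hi)
    lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)

H_min = hi
assert 1e12 < H_min < 5e12    # ~2e12 GeV, as quoted
print(f"T ~ {pref:.2e} (H/GeV)^(1/5) GeV, minimal H ~ {H_min:.2e} GeV")
```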
\bigskip
To sum up, we have found that the spectrum of gravitational waves generated during warm inflation includes a component $\propto \ell_{\rm mfp}\,T^5/M_P^4$, sourced by long wavelength thermal modes, that can dominate over the vacuum component in a viable region of parameter space. Our analysis has been agnostic regarding perturbations in the scalar sector, that depend on the details of the interactions between the thermal bath and the inflaton. For this reason, in particular, we give no expression of the amplitude of the tensor-to-scalar ratio (note however that~\cite{Mirbabayi:2014jqa} discussed how mechanisms sourcing tensor modes will generally source scalar perturbations with higher efficiency). It should also be noted that this mechanism might lead to a large amplitude of tensor modes towards the end of inflation (where one expects the effects of temperature to be more important) that might be detectable by gravitational interferometers, as discussed for instance in~\cite{Cook:2011hg}.
\acknowledgements We thank Paul Anderson for useful discussions. This work is partially supported by the US-NSF grant PHY-1820675.
\section{Overview of term structure modeling}
The purpose of this paper is to study the generalized Fong--Vasicek two-factor interest rate model with stochastic volatility. In this model the dispersion of the stochastic short rate (square of volatility) is assumed to be stochastic as well and it follows a non-negative process with volatility proportional to the square root of dispersion. The drift of the stochastic process for the dispersion is assumed to be in a rather general form including, in particular, a linear function having one root (yielding the original Fong--Vasicek model, cf. \cite{fv}) or a cubic-like function having three roots (yielding a generalized Fong--Vasicek model for the description of volatility clustering, see e.g. \cite{iscam05}). We consider averaged bond prices with respect to the limiting distribution of stochastic dispersion. The averaged bond prices depend on time and on the current level of the short rate, as is the case in many popular one-factor interest rate models, including in particular the Vasicek and Cox--Ingersoll--Ross models. However, as a main result of this paper we show that there is no such one-factor model yielding the same bond prices as the averaged values described above.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{obr/vynosy2.eps}
\caption{Examples of yield curves of governmental bonds: Australia, Brazil, Japan, United Kingdom
(27th May 2008).
Source: http://www.bloomberg.com}
\label{graf-vynosy}
\end{center}
\end{figure}
The term structure of bond prices (or yields) is a function of time to maturity, state variables like e.g. instantaneous interest rate as well as several model parameters. It describes a functional dependence between the time to maturity of a discount bond and its present price. The yield of bonds, as a function of maturity, forms a term structure of interest rates. Figure~\ref{graf-vynosy} shows the different shapes of term structures observed on the market based on data by Bloomberg. We can observe various type of functional dependences including non-monotone term structures (Brazil) or term structures having two local maxima (UK). Interest rate models are often formulated in terms of stochastic differential equations (SDE) for the instantaneous interest rate (or short rate) as well as equations for other relevant quantities like e.g. volatility of the short rate process. In one-factor models there is a single
stochastic differential equation for the short rate. The volatility of
the short rate process is specified deterministically. Such models can be written in the form
\begin{equation}
dr= \mu(t,r) dt + \sigma(t,r) dw,
\label{dr-1f}
\end{equation}
where $w$ is a Wiener process. We recall that a stochastic process $\{ w(t), t \geq 0 \}$ is called a Wiener process if $w(0)=0$, every increment $w(t+\Delta t)-w(t)$ has the normal distribution $N(0,\Delta t)$, the increments $w(t_n)-w(t_{n-1})$, $w(t_{n-1})-w(t_{n-2})$, $\dots$, $w(t_2)-w(t_1)$ for $0 \leq t_1 < \dots < t_n$ are independent and paths of the process are continuous (see e.g. \cite{kwok}). The function $\mu$ in (\ref{dr-1f}) determines the trend in evolution of the short rate, function $\sigma$ the nature of stochastic fluctuations. The price of a discount bond $P(t,r)$ at time $t$ when the value of short rate is $r$, is known to be a solution of the partial differential equation
\begin{equation}
\frac{\partial P}{\partial t} + (\mu(t,r) - \lambda \sigma(t,r)) \frac{\partial P}{\partial r} + \frac{\sigma^2(t,r)}{2} \frac{\partial^{2}P}{\partial r^2} - rP = 0,
\label{pdr-1f}
\end{equation}
with the terminal condition $P(T,r)=1$ for any $r\ge 0$ where $T>0$ is a maturity of the bond. Here $\lambda$ stands for the so-called market price of risk. The above linear parabolic PDE is derived by constructing a riskless portfolio of bonds and using It\=o's lemma for evaluating differentials of a stochastic portfolio that is balanced by buying or selling bonds with different maturities. We refer the reader for a comprehensive overview of term structure modeling to the book by Kwok \cite[Chapter 7]{kwok} for details.
We also recall that bond prices determine interest rates $R(t,r)$ by the formula $P=e^{-R (T-t)}$, i.e.
\[
R(t,r)=-\frac{1}{(T-t)} \log P(t,r).
\]
One of the first models of the class (\ref{dr-1f}) has been proposed by Old\v rich Va\v s\'\i\v cek in \cite{vasicek}. In this model, the short rate process is assumed to follow a stochastic differential equation:
\begin{equation}
dr=\kappa(\theta-r)dt+ \sigma dw,
\label{vasicek-sde}
\end{equation}
where $\kappa, \theta, \sigma >0$ are positive constants. Here $\sigma>0$ stands for the volatility of random fluctuations of the short rate process. The deterministic part of the process, $\kappa(\theta-r)$, represents mean reversion with a limit $\theta$, referred to as the long-term interest rate. The speed of reversion is given by the parameter $\kappa > 0$. In this model, for a constant market price of risk $\bar\lambda$, the corresponding PDE for bond prices
\begin{equation}
\frac{\partial P}{\partial t} + (\kappa(\theta-r) - \bar\lambda) \frac{\partial P}{\partial r} + \frac{\sigma^2}{2} \frac{\partial^{2}P}{\partial r^2} - rP = 0
\label{pdr-vasicek}
\end{equation}
has an explicit solution $P(t,r)$ satisfying the terminal condition $P(T,r)=1$ for any $r\ge 0$. It has the form
\begin{equation}
P(t,r)=A(t) e^{-B(t) r},
\label{tvar-riesenia}
\end{equation}
where the functions $A$ and $B$ can be expressed in a closed form (see, e.g. \cite{vasicek} or \cite{kwok}):
$$B(t)=\frac{1-e^{-\kappa(T-t)}}{\kappa}, \; \ln A(t)= (B(t)-(T-t))(\theta-\frac{\bar\lambda}{\kappa} -
\frac{\sigma^2}{2 \kappa^2} ) - \frac{\sigma^2}{4 \kappa} B(t)^2. $$
There is a rich variety of several other models in which the SDE for the short rate is given by a general process of the form:
\begin{equation}
dr=(a+ b r)dt + \sigma r^{\gamma} dw.
\label{ckls}
\end{equation}
This class of short rate models includes the well-known Cox--Ingersoll--Ross model \cite{cir} with $\gamma=1/2$. A thorough comparison of these models is the topic of the paper by Chan, Karolyi, Longstaff and Sanders \cite{ckls}. Using the generalized method of moments they estimated the model (\ref{ckls}) and studied the restrictions on parameters imposed in these models. Their result that the optimal value of the parameter $\gamma$ is approximately $3/2$ (which is higher than previous models assumed) started a broad discussion on the correct
form of volatility. Let us note that their result is not universal, e.g. in \cite{gmm-libor}, using the same
estimation methodology but for LIBOR rates, $\gamma$ was estimated to be less than unity (which means that volatility is less than proportional to the short rate, unlike in the result due to Chan, Karolyi, Longstaff and Sanders). Approximate formulae for bond prices when the short rate follows (\ref{ckls}) have been developed recently in \cite{1f-approximation} and \cite{1f-approximation-2}.
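A minimal Euler--Maruyama sketch of the process (\ref{ckls}) is shown below; the flooring of near-zero rates is a discretization expedient, not part of the model, and the parameter values are purely illustrative:

```python
import math
import random

def simulate_ckls(r0, a, b, sigma, gamma, dt, n_steps, rng):
    """Euler-Maruyama sketch of dr = (a + b r) dt + sigma * r^gamma dw.
    Rates are floored at a small positive value to keep r^gamma defined."""
    r = r0
    path = [r]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        r += (a + b * r) * dt + sigma * r**gamma * dw
        r = max(r, 1e-8)   # crude floor; a scheme detail, not model content
        path.append(r)
    return path

rng = random.Random(1)
# illustrative parameters: mean-reversion level -a/b = 0.05, gamma = 3/2
path = simulate_ckls(r0=0.08, a=0.05, b=-1.0, sigma=0.4, gamma=1.5,
                     dt=1/252, n_steps=2520, rng=rng)

assert all(r > 0 for r in path)
mean_tail = sum(path[len(path)//2:]) / (len(path) - len(path)//2)
print(f"final rate = {path[-1]:.4f}, mean of second half = {mean_tail:.4f}")
```

After the initial relaxation, the second-half average hovers near the reversion level $-a/b$.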
In one-factor models, the term structure of interest rates is a function of the short rate and of the model parameters. However, it means that as soon as the parameters of the model are chosen, the term structure corresponding to a given short rate is uniquely determined. This is a simplification of reality, as can be seen in Fig. \ref{real-term-structures-eu}, showing examples from EURIBOR data. To capture this feature, two-factor models are introduced. In two-factor models there are two sources of uncertainty, yielding different term structures for the same short rate; the term structure may depend on the value of the other factor. Moreover, two-factor models admit a wider variety of possible shapes of term structures.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{obr/euro-2.eps}
\caption{Examples of real EURIBOR term structures. Source: http://www.euribor.org}
\label{real-term-structures-eu}
\end{center}
\end{figure}
A general two-factor model with the factors $x$, $y$ is given by the system of SDEs:
\begin{eqnarray}
dx&=& \mu_x dt + \sigma_x dw_1, \nonumber \\
dy&=& \mu_y dt + \sigma_y dw_2, \nonumber
\end{eqnarray}
where the correlation between $dw_1$ and $dw_2$ is assumed to be a constant $\rho\in[-1,1]$, i.e. $E(dw_1 dw_2)=\rho dt$. The short rate is a function of these two factors, i.e. $r=r(x,y)$.
Let us denote by $P(t,x,y)$ the price of a zero coupon bond with maturity $T$ at time $t$, when the values of the factors are $x$ and $y$.
The bond price then satisfies the PDE (cf. \cite[Chapter 7]{kwok})
\begin{eqnarray}
\frac{\partial P}{\partial t} &+& (\mu_x - \lambda_x \sigma_x)\frac{\partial P}{\partial x} + (\mu_y - \lambda_y \sigma_y) \frac{\partial P}{\partial y}
\nonumber \\
&+& \frac{\sigma_x^2}{2} \frac{\partial^2 P}{\partial x^2} + \frac{\sigma_y^2}{2} \frac{\partial^2 P}{\partial y^2}
+
\rho \sigma_x \sigma_y \frac{\partial^2 P}{\partial x \partial y} - r(x,y) P =0.
\label{2f-general-pde}
\end{eqnarray}
The parameters $\lambda_x, \lambda_y$ stand for market prices of risk corresponding to factors $x$ and $y$, resp. The equation for the bond price is equipped with the terminal condition at $t=T$, $P(T,x,y)=1$ for any $x$, $y$.
There are several ways of incorporating the second stochastic factor.
Based on experimental data from real financial market it is reasonable to make an assumption that the market changes the volatility of the underlying process for the short rate. More precisely, the volatility of the stochastic process for the short rate is stochastic as well. An empirical confirmation of such an assumption can be found e.g. in the recent paper by the authors \cite{kybernetika}. In the so-called two-factor models with a stochastic volatility we allow the volatility to have a stochastic behavior driven
by another stochastic differential equation. As an example of such a two-factor model one can consider the Fong--Vasicek model (cf. \cite{fv}), in which the stochastic volatility follows a mean-reverting Bessel square-root process. Another possibility is to consider the generalized Fong--Vasicek model in which the drift function is no longer linear but may have a cubic-like behavior with three distinct roots representing possible steady states for the dispersion. By this we can model the so-called volatility clustering phenomenon observed in real markets (see \cite{kybernetika} for details). Now, as a consequence of the multidimensional It\=o's lemma, the corresponding equation for the bond price
is a linear parabolic equation in two space dimensions. These spatial dimensions
correspond to the short rate and volatility.
Let us consider a Bessel square root process with a general drift function $\alpha$,
\begin{equation}
dy=\alpha(y)dt+ \omega\sqrt{y}dw.
\label{general-proc}
\end{equation}
It is well known that the density distribution of a stochastic process is a solution to the Fokker--Planck partial differential equation (see \cite{goodman}).
Recall that the cumulative distribution function $\tilde F=\tilde F(y,t)=Prob(y(t)<y|y(0)=y_0)$ of the process $y=y(t)$ satisfying (\ref{general-proc}) and starting almost surely from the initial datum $y_0$ can be obtained from a solution $\tilde f=\partial \tilde F/\partial y$ to the so-called Fokker--Planck equation for the density function:
\begin{equation}
\frac{\partial \tilde f}{\partial t} = \frac{\omega^2}{2} \frac{\partial^2}{\partial y^2} ( y \tilde f) - \frac{\partial}{\partial y}(\alpha(y) \tilde f) , \quad \tilde f(y,0)=\delta(y-y_0)\,.
\label{focker-planck}
\end{equation}
Here $\delta(y-y_0)$ denotes the Dirac delta function located at $y_0$. The limiting density $f(y)=\lim_{t\to\infty} \tilde f(y,t)$ of the process is therefore a stationary solution to the Fokker--Planck equation (\ref{focker-planck}). A stationary solution satisfying $f(y)=0$ for $y\le0$ then solves the differential equation (see \cite{kybernetika})
\begin{equation}
\frac{\omega^2}{2} \frac{\partial}{\partial y} (y f) - \alpha(y) f = 0.
\label{focker-planck-limit}
\end{equation}
Concerning the structural assumptions made on the drift function $\alpha:R\to R$ we shall henceforth assume the following hypothesis:
\[
(A)\qquad\qquad
\alpha \ \ \hbox{is a } C^1 \ \hbox{function on } [0,\infty),\ \ \frac{2\alpha(0)}{\omega^2} >1,\ \ \limsup_{y\to\infty} \frac{\alpha(y)}{y} <0.
\]
Now it follows from the assumption (A) made on the drift function $\alpha$ and \cite[Lemma 2]{kybernetika} that the stationary Fokker--Planck equation (\ref{focker-planck-limit}) has a solution $f$ that can be explicitly expressed as:
\[
f(y) = C y^{-1}\exp\left(\frac{2}{\omega^2} \int_1^y \frac{\alpha(\xi)}{\xi} d\xi\right) = C y^{\frac{2\alpha(0)}{\omega^2} -1} \exp\left(\frac{2}{\omega^2} \int_1^y \hat\alpha(\xi) d\xi\right)
\]
for $y>0$ and $f(y)=0$ for $y \leq 0$. Here $\hat\alpha(y)= (\alpha(y)-\alpha(0))/y$ and $C>0$ is a normalization constant such that $\int_0^\infty f(y) dy =1$. For example, if we consider a mean reverting Bessel square root process for the stochastic dispersion $y$, i.e. the drift function is linear, $\alpha(y)=\kappa_y(\theta_y -y)$, then the limiting density $f$ is the Gamma distribution with shape parameter $2\kappa_y \theta_y/\omega^2$ and rate parameter $2\kappa_y/\omega^2$ (see Kwok \cite{kwok}).
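For the linear-drift case this closed form is easy to check numerically. The following sketch (with illustrative parameter values $\kappa_y=2$, $\theta_y=0.1$, $\omega=0.5$, chosen only for the demonstration and not taken from any calibration) normalizes the Gamma-form density and verifies that its mean equals $\theta_y$ and its second moment equals $\theta_y^2+\theta_y\omega^2/(2\kappa_y)$:

```python
import numpy as np

# Illustrative parameters for alpha(y) = kappa_y*(theta_y - y); not calibrated values
kappa_y, theta_y, omega = 2.0, 0.1, 0.5

# Gamma-form stationary density: shape 2*kappa_y*theta_y/omega^2, rate 2*kappa_y/omega^2
shape = 2.0 * kappa_y * theta_y / omega**2
rate = 2.0 * kappa_y / omega**2

y = np.linspace(1e-9, 6.0, 600_001)
dy = y[1] - y[0]
f = y**(shape - 1.0) * np.exp(-rate * y)
f /= f.sum() * dy                    # fix the normalization constant C numerically

mean = (y * f).sum() * dy            # <y>, should equal theta_y
second = (y**2 * f).sum() * dy       # <y^2> = theta_y^2 + theta_y*omega^2/(2*kappa_y)
print(mean, second)
```

The mean $\theta_y$ reflects the mean reversion of the drift; the variance $\theta_y\omega^2/(2\kappa_y)$ shrinks as the reversion speed $\kappa_y$ grows.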
\section{Averaging with respect to stochastic volatility and relation to one-factor models}
Knowing the density distribution $f$ of the stochastic
volatility we are able to perform averaging of the bond price and the term structure
with respect to volatility. Unlike the short rate which is known from the market data on daily
basis, the volatility of the short rate process is unknown. The exact value of the stochastic volatility is not observable on the market; we can only observe its statistical properties.
Therefore such a volatility averaging is of special importance for practitioners.
We shall consider the following model with stochastic volatility:
\begin{eqnarray}
dr&=&\kappa(\theta-r)dt + \sqrt{y} dw_r, \\
dy&=&\alpha(y)dt+\omega \sqrt{y} dw_y, \label{sde-2f-model}
\end{eqnarray}
with uncorrelated increments $dw_r$ and $dw_y$ of a Wiener process. The market prices of risk are assumed to have the form $\lambda_r(r,y)=\lambda \sqrt{y}$ and $\lambda_y=\frac{\tilde{\lambda}}{\omega} \sqrt{y}$. Then the bond price $\pi(t,r,y)$ satisfies the following PDE:
\begin{eqnarray}
&&\frac{\partial \pi}{\partial t} + (\kappa(\theta-r) - \lambda y)\frac{\partial \pi}{\partial r} + (\alpha(y) - \tilde{\lambda} y) \frac{\partial \pi}{\partial y} + \frac{1}{2} y \frac{\partial^2 \pi }{\partial r^2} + \frac{\omega^2}{2} y \frac{\partial^2 \pi }{\partial y^2} - r \pi =0
\end{eqnarray}
with the terminal condition $\pi(T,r,y)=1$ for any $r,y\ge 0$. The explicit solution can be written as
\begin{equation}
\pi(t,r,y)=A(t,y) e^{-B(t)r}
\end{equation}
with the terminal conditions $A(T,y)=1$ for any $y>0$ and $B(T)=0$. The solution can be obtained by solving the following differential equations for the functions $A$ and $B$:
\begin{eqnarray}
&&-B^\prime + \kappa B-1=0, \label{B-eq}\\
&&\frac{\partial A}{\partial t} - B(\kappa \theta - \lambda y - \frac{y}{2}B)A + (\alpha(y)-\tilde{\lambda} y)\frac{\partial A}{\partial y}+ \frac{\omega^2}{2}y \frac{\partial^2 A}{\partial y^2} =0. \label{A-eq}
\end{eqnarray}
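Equation (\ref{B-eq}) with the terminal condition $B(T)=0$ has the familiar Vasicek-type closed form $B(t)=\left(1-e^{-\kappa(T-t)}\right)/\kappa$. A quick symbolic sanity check (a sketch, not part of the derivation above):

```python
import sympy as sp

t, T, kappa = sp.symbols('t T kappa', positive=True)

# Candidate closed-form solution of -B' + kappa*B - 1 = 0 with B(T) = 0
B = (1 - sp.exp(-kappa * (T - t))) / kappa

residual = -sp.diff(B, t) + kappa * B - 1   # left-hand side of (B-eq)
print(sp.simplify(residual))                # 0, so B solves the ODE
print(sp.simplify(B.subs(t, T)))            # 0, terminal condition holds
```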
In what follows, we shall denote by $\langle\psi\rangle$ the averaged value of a function $\psi: [0,\infty)\to R$ with respect to the limiting density $f$, i.e. $\langle\psi\rangle = \int_0^\infty \psi(y)f(y)\,dy$, where $f$ satisfies the stationary Fokker--Planck equation (\ref{focker-planck-limit}).
The averaged bond price with respect to the limiting distribution of the volatility is given by
\begin{equation}
P(t,r)=\langle \pi(t,r,.) \rangle = a(t) e^{-B(t)r},
\end{equation}
where $a(t)=\int_0^{\infty} A(t,y) f(y) dy$.
The function $P=P(t,r)$ is a function of time $t$ and the short rate $r$ only. Notice that this is the same functional dependence as for the bond price in one-factor models, including, in particular, a solution to (\ref{vasicek-sde}) given by (\ref{tvar-riesenia}).
Now we are in a position to state the main result of the paper. We are going to prove that there is no one-factor interest rate model whose bond prices coincide with the volatility averaged bond price $P(t,r)$ for all $t\in[0,T]$ and $r\ge0$.
Suppose to the contrary that $P=P(t,r)$ is a bond price from one-factor model in which the short rate is assumed to follow a general SDE
\begin{equation}
dr=\mu(r)dt+\Omega(r)dw.
\label{general-proc2}
\end{equation}
Then it satisfies the PDE
\begin{equation}
\frac{\partial P}{\partial t} + (\mu(r) -\Lambda(r))\frac{\partial P}{\partial r} + \frac{1}{2} \Omega^2(r) \frac{\partial^2 P}{\partial r^2} - rP=0,
\label{general-pde}
\end{equation}
where $\mu(r)$ is the drift of the short rate process, $\Omega(r)$ is its volatility and $\Lambda(r)$ is the product of the volatility and the corresponding market price of risk of the model. Substituting the form of the solution we obtain that
\begin{equation}
\frac{a^\prime(t)}{a(t)B(t)} = \kappa r + \mu(r) - \Lambda(r) - \frac{1}{2} \Omega^2(r) B(t).
\end{equation}
We see that the left hand side is a function of $t$ only. We denote it by $\phi(t)$, i.e. $\phi(t)=a^\prime(t)/(a(t)B(t))$. Then,
\begin{equation}
\kappa r + \mu(r) - \Lambda(r) = \phi(t) + \frac{1}{2} \Omega^2(r) B(t),
\end{equation}
and so the right hand side is constant with respect to $t$. Hence for any $t$ it equals $\phi(T) + \frac{1}{2} \Omega^2(r) B(T) = \phi(T)$, which is a constant denoted by $K$. We have
\begin{equation}
\kappa r + \mu(r) - \Lambda(r) = \frac{1}{2} \Omega^2(r)B(t)+\phi(t) = K,
\end{equation}
from which it follows (as $B$ is not constant in $t$) that $\Omega(r)\equiv \bar\Omega$ is a constant and that $\mu(r)-\Lambda(r)=K-\kappa r$.
Let us denote by $\sigma^2$ and $d$ the first two statistical moments of the random variable $y$ with respect to the limiting density function $f$, i.e.
\begin{eqnarray}
\sigma^2&=&\langle y \rangle = \int_0^{\infty} y f(y) dy, \quad
d = \langle y^2 \rangle = \int_0^{\infty} y^2 f(y) dy. \nonumber
\end{eqnarray}
We know that $a(T)=1$ and from the expression
\begin{equation}
a'(t)=\left(K-\frac{\bar\Omega^2}{2}B(t)\right)a(t)B(t)
\end{equation}
we can recursively compute the values of the time derivatives of the function $a$ at time $T$:
\begin{eqnarray}
a'(T)&=&0, \label{eq-1} \\
a''(T)&=&-K, \label{eq-2} \\
a'''(T)&=&-K \kappa -\bar\Omega^2, \label{eq-3} \\
a''''(T) &=& 3 K^2 - 3\bar\Omega^2 \kappa - K\kappa^2 . \label{eq-4}
\end{eqnarray}
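These derivatives can be double-checked symbolically. The sketch below works in the backward variable $s=T-t$, where $B=(1-e^{-\kappa s})/\kappa$, Picard-iterates the ODE for $a$, and reads off $a^{(n)}(T)=(-1)^n\,n!\,c_n$ from the Taylor coefficients $c_n$ of $a$ around $s=0$:

```python
import sympy as sp

s, kappa, K, Om2 = sp.symbols('s kappa K Omega2')   # Om2 plays the role of \bar{Omega}^2
N = 6                                                # truncation order of the series

# B as a function of s = T - t, expanded as a polynomial in s
B = sp.series((1 - sp.exp(-kappa * s)) / kappa, s, 0, N).removeO()

# Picard iteration for a(T - s):  d a/ds = -(K - Om2*B/2) * a * B,  a(0) = 1
a = sp.Integer(1)
for _ in range(N):
    rhs = sp.expand(-(K - sp.Rational(1, 2) * Om2 * B) * a * B)
    rhs = (rhs + sp.O(s**N)).removeO()               # truncate to order s^(N-1)
    a = sp.expand(1 + sp.integrate(rhs, s))

# a^{(n)}(T) = (-1)^n * n! * [coefficient of s^n in a]
deriv = lambda n: sp.expand((-1)**n * sp.factorial(n) * a.coeff(s, n))
print(deriv(1))   # a'(T)   = 0
print(deriv(2))   # a''(T)  = -K
print(deriv(3))   # a'''(T) = -K*kappa - Omega2
print(deriv(4))   # a''''(T)
```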
\medskip
Another way of computing these derivatives is using the expression
\[
a(t)=\int_0^{\infty}A(t,y)f(y)dy
\]
and the partial differential equation (\ref{A-eq}) for the function $A$ and the stationary Fokker--Planck equation for the limiting density function $f$. Indeed, by integration by parts and taking into account the boundary conditions $yf(y)\to 0$ as $y\to 0^+$ and $y\to\infty$ for the limiting density $f$ we obtain
\begin{eqnarray}
&&\int_0^\infty \left(\alpha(y) \frac{\partial A}{\partial y} + \frac{\omega^2}{2}y \frac{\partial^2 A}{\partial y^2}\right) f(y) dy \nonumber \\
&&= \int_0^\infty \left(- \frac{\partial }{\partial y}(\alpha(y) f(y)) + \frac{\omega^2}{2} \frac{\partial^2 }{\partial y^2}(yf(y)) \right) A dy = 0.\nonumber
\end{eqnarray}
Furthermore, by (\ref{focker-planck-limit}), we have
\[
\int_0^\infty \frac{\partial A}{\partial y} y f(y) dy
= - \frac{2}{\omega^2} \int_0^\infty A \alpha(y) f(y) dy.
\]
Therefore
\[
a^\prime(t) = \int_0^\infty \left(B(t)\left(\kappa\theta - \lambda y -\frac{y}{2} B(t)\right) - \frac{2\tilde\lambda}{\omega^2}\alpha(y) \right) A(t,y) f(y) dy.
\]
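The integration-by-parts identities used above can be sanity-checked numerically. The sketch below takes the linear-drift (Gamma-density) case with illustrative values $\kappa_y=2$, $\theta_y=0.1$, $\omega=0.5$ and the test function $A(y)=e^{-y}$ (all chosen only for illustration), and compares the two sides of $\int_0^\infty \frac{\partial A}{\partial y}\, y f(y)\, dy = -\frac{2}{\omega^2}\int_0^\infty A\, \alpha(y) f(y)\, dy$:

```python
import numpy as np

# Illustrative parameters (assumptions for the check, not calibrated values)
kappa_y, theta_y, omega = 2.0, 0.1, 0.5
alpha = lambda y: kappa_y * (theta_y - y)      # linear mean-reverting drift

# Stationary Gamma density: shape 2*kappa_y*theta_y/omega^2, rate 2*kappa_y/omega^2
shape, rate = 2 * kappa_y * theta_y / omega**2, 2 * kappa_y / omega**2
y = np.linspace(1e-9, 6.0, 600_001)
dy = y[1] - y[0]
f = y**(shape - 1) * np.exp(-rate * y)
f /= f.sum() * dy

A = np.exp(-y)                                 # smooth test function A(y)
dA = -np.exp(-y)                               # its derivative A'(y)

lhs = (dA * y * f).sum() * dy                  # int A'(y) y f(y) dy
rhs = -(2.0 / omega**2) * (A * alpha(y) * f).sum() * dy
print(lhs, rhs)                                # the two sides agree
```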
Now taking into account the PDE (\ref{A-eq}) for the function $A(t,y)$ we can recursively evaluate
\[
A(T,y)=1,\
\frac{\partial A}{\partial t}(T,y) = 0,\
\frac{\partial^2 A}{\partial t^2}(T,y) = -\kappa\theta +\lambda y,
\]
\[
\frac{\partial^3 A}{\partial t^3}(T,y) = -\kappa^2\theta - (1-\kappa\lambda) y -\lambda(\alpha(y)-\tilde\lambda y),\
\]
for any $y>0$. Using the above expressions and the identities $B(T)=0, B^\prime(T) = -1, B^{\prime\prime}(T)=-\kappa, B^{\prime\prime\prime}(T)=-\kappa^2$, after straightforward computations we obtain
\begin{eqnarray}
a'(T)&=&0, \label{eq-5} \\
a''(T)&=&-\kappa \theta + \lambda \sigma^2 \label{eq-6}
\end{eqnarray}
and by comparing (\ref{eq-2}) and (\ref{eq-6}) we obtain the expression for the constant $K$ in terms of the model parameters as
\begin{equation}
K=\kappa \theta - \lambda \sigma^2.
\end{equation}
Computing the next derivative we end up with
\begin{equation}
a'''(T)=\tilde{\lambda} \lambda \sigma^2 - \kappa^2 \theta + \kappa \lambda \sigma^2 - \sigma^2 \label{eq-7}
\end{equation}
and by comparing (\ref{eq-3}) and (\ref{eq-7}) we can express the volatility $\bar\Omega$ as
\begin{equation}
\bar\Omega^2 = \sigma^2 (1 - \tilde{\lambda} \lambda ) .
\end{equation}
Notice that the PDE for the averaged bond price now reads as follows:
\[
\frac{\partial P}{\partial t} + (\kappa(\theta -r)-\lambda\sigma^2)\frac{\partial P}{\partial r} + \frac{\sigma^2(1 - \tilde{\lambda} \lambda) }{2} \frac{\partial^2 P}{\partial r^2} - rP=0,
\]
which is the PDE corresponding to the classical one-factor Vasicek interest rate model. We have now fully determined the drift minus the market price of risk term $\mu(r)-\Lambda(r)$ as well as the volatility function $\Omega(r)$ in (\ref{general-pde}).
In order to arrive at a contradiction we finally compute the fourth derivative as
\begin{eqnarray}
a''''(T) &=& 3 \lambda^2 d + (-6 \kappa \theta \lambda + \kappa^2 \lambda - 3 \kappa + \tilde{\lambda}(\kappa \lambda -1 + \lambda \tilde{\lambda})) \sigma^2 \nonumber \\
&& + 3 \kappa^2 \theta^2 -\kappa^3 \theta + \frac{2}{\omega^2} \tilde{\lambda} \lambda \int_0^{\infty} \alpha^2(y)f(y) dy.\label{eq-8}
\end{eqnarray}
Comparing (\ref{eq-4}) and (\ref{eq-8}) we get the condition
\begin{equation}
\sigma^2 (2 \kappa \tilde{\lambda} \lambda + \tilde{\lambda} - \lambda \tilde{\lambda}^2) = \frac{2}{\omega^2} \tilde{\lambda} \lambda \int_0^{\infty} \alpha^2(y)f(y) dy + 3 \lambda^2 (d-\sigma^4).
\end{equation}
However, the latter equality cannot be satisfied for a general choice of the model parameters. Indeed, setting $\lambda=0$ and $\tilde\lambda\neq0$, we obtain $0=\sigma^2=\int_0^\infty y f(y) dy$, which is not possible as $f(y)>0$ for $y>0$.
\bigskip
Summarizing, we have shown the following theorem:
\medskip
\begin{theorem}
Consider the generalized Fong--Vasicek two-factor model with stochastic volatility (\ref{sde-2f-model}) and the averaged bond price $P(t,r)$ with respect to the limiting distribution of the stochastic dispersion. Then there is no one-factor interest rate model (\ref{general-proc2}) with corresponding PDE for the bond price (\ref{general-pde}) yielding the same bond prices as the averaged values $P(t,r)$ from the two-factor model.
\end{theorem}
\section*{Acknowledgment}
The support from the grant VEGA 1/3767/06 is kindly acknowledged.
\section{Introduction}
Propylene oxide, CH$_3$C$_2$H$_3$O (\textit{alias} methyloxirane or propene oxide), is a stable chiral molecule whose microwave spectrum up to 40 GHz was first reported by Herschbach \& Swalen in 1957 \citep{Swalen.1957,Herschbach.1958}. Its high vapor pressure at room temperature and its commercial availability in high enantiomeric purity make it a well-suited object for studying chiral properties of gas-phase molecules. Propylene oxide (PO) is a simple epoxide ring with one hydrogen atom substituted by a methyl group. The large amplitude internal torsion of the methyl group in a triple well potential causes rotational levels of PO to split into sub-levels of A and E symmetry. From the analysis of A-E split rotational lines in the ground, first and second excited torsional states, Herschbach \& Swalen derived a potential barrier height of 895(5) cm$^{-1}$. In 2017 Mesko \textit{et al.} reported new ground state spectra of PO up to 1 THz which enabled improved molecular parameters and a more accurate potential barrier height \citep{Mesko.2017}. Recently, rotational spectra of PO in the first torsionally excited state up to 1 THz were analyzed, which show large A-E splittings \citep{Stahl.2021}. These new measurements helped to derive more accurate tunneling parameters and molecular constants of the large amplitude motion in the vibrationally excited state. Ground state spectra of $\mathrm{^{13}C}$-substituted isotopologues were reported in 1977 by Creswell \textit{et al.} \citep{Creswell.1977}, and multiply deuterated and $\mathrm{^{18}O}$-substituted species were measured by Imachi \& Kuczkowski \citep{Imachi.1983}, which allowed them to experimentally derive the equilibrium structure of propylene oxide. In these studies, only lines of A symmetry were considered, leaving the question of the influence of isotopes on the tunneling barrier height unresolved.\\
In 2016, the detection of propylene oxide as the first chiral molecule discovered in space attracted great astrophysical interest. The observation of rotational lines towards the north core (N) of the giant molecular cloud SgrB2 \citep{McGuire.2016}
raised the fundamental question of how chiral molecules are formed in space. Possible chemical formation routes have been discussed
\citep{Ellinger.2020},
which also include reactions in or on cryogenic solids induced by electromagnetic radiation. If PO is formed via a solid state or surface reaction route at low temperatures, hydrogen--deuterium exchange reactions might come into play with significantly enhanced concentrations of deuterium, thus giving important hints on the astrochemical formation processes. The detection of deuterated species and the investigation of their abundances relative to the respective main isotopologue may allow chemical pathways in the formation of complex organic molecules (COMs) in space to be modeled \citep{Herbst.2009}. \\
In the presented study we analyze the A-E internal rotational splitting of the doubly deuterated propylene oxide,
$\mathrm{CH_{3}CHCD_{2}O}$, PO$-$d$_{5,6}$, which has two deuterium atoms substituted at the epoxide ring (see Fig. \ref{fig:structure}). To our knowledge, this is the first study to address the effect of deuterium substitution on the internal torsional motion of the methyl group. The symmetric substitution of the (5,6)-hydrogen atoms maintains the simple structure of a single chirality center. Furthermore, the new results may also foster the search for deuterated PO in cold molecular regions of space. A line list with transition frequencies of $\mathrm{CH_{3}CHCD_{2}O}$ is provided, which helps radio astronomers to search for this molecule in suitable astronomical sources.
\begin{figure}[hbtp]
\centering
\includegraphics[width=\hsize]{MOD_V3.png}
\caption{Threefold potential function $V_{3}$ for the internal rotation of the methyl group with A and E levels for different excited torsional states (left). The structure of doubly deuterated R-propylene oxide, calculated at the B3LYP/aug-cc-pVTZ level of theory using Gaussian 16 \citep{Frisch.2016} (right). The two deuterium atoms are located at positions 5 and 6 at carbon C-1.}
\label{fig:structure}
\end{figure}
\section{Synthesis of the deuterated PO sample} \label{syn}
Racemic PO$-$d$_{5,6}$ was synthesized starting from racemic alanine which was converted to 2-chloropropanoic acid following a published procedure \citep{Koppenhoefer.2003}. 2-Chloropropanoic acid was reduced with \ce{LiAlD4} to 2-chloropropan-1,1-d2-1-ol based on adapted literature procedures \citep{Koppenhoefer.2003b, Karakalos.2016}. Cyclization of neat 2-chloropropan-1,1-d2-1-ol (2.2\,g, 23.3\,mmol) was effected with 2\,g of KOH, dissolved in 5\,ml of water, in a distillation apparatus, where the product, PO$-$d$_{5,6}$, was collected in a trap cooled to -80$\degree$C, at 100\,mbar pressure. To remove water liberated during the reaction, the crude product was treated with \ce{CaH2} (2\,g) at 0$\degree$C until the gas evolution ceased and then was subjected to a second distillation furnishing 1.0\,g (17.2\,mmol, 74\,\%) PO$-$d$_{5,6}$. The product was characterized with ${^1}$H and $^{13}$C-NMR spectroscopy confirming its identity and purity: $\delta$(${^1}$H): 1.31 (d, $^{3}J_{\mathrm{HH}}$ = 5.2\,Hz, \ce{CH3}), 2.97 (q, $^{3}J_{\mathrm{HH}}$ = 5.2\,Hz, CH); $\delta$($^{13}$C{${^1}$H}): 18.1 (s, \ce{CH3}), 47.5 (q, $^{1}J_{\mathrm{CD}}$ = 26.6 Hz, \ce{CD2}), 48.3 (s, CH).
\section{Experimental setup} \label{meas}
Rotational spectra of PO$-$d$_{5,6}$ were recorded in the ranges 83$-$110\,GHz, 170$-$220\,GHz, and 260$-$330\,GHz using the Kassel THz spectrometer, which utilises a 2$f$ frequency lock-in modulation technique. The details of this spectrometer and of the data reduction are reported elsewhere \citep{Herberth.2019,Stahl.2020}, thus only a brief description of the experiment is given. In our experiment, the deuterated PO sample was probed in a static vacuum glass cell with a total length of 3\,m at an operating pressure of about 1$-$2\,Pa. The cell was evacuated using a rotary vane pump in combination with a turbo molecular pump. The liquid sample was placed in a long-necked flat-bottomed glass flask, which was connected to the vacuum cell via a needle valve. PO has a high vapour pressure at room temperature\footnote{The main isotopologue has a vapour pressure of about 580\,hPa at room temperature \citep{Bott.1966}}. The needle valve allowed the operating pressure in the glass cell to be adjusted accurately without heating the sample. In this experiment, it was crucial to operate at a low pressure to resolve the small A-E splitting of the ground state, which is otherwise blended at higher pressures.
\section{Results \& Discussion}
\subsection{Quantum chemical calculations}
Anharmonic frequency calculations on the B3LYP/aug-cc-pVTZ level of theory were performed using the computational chemistry program Gaussian 16 \citep{Frisch.2016}.
The rotational constants and the centrifugal distortion constants up to sextic order were determined and are summarised in Table \ref{tab:results1}.
A harmonic hindered rotor calculation of the barrier height to internal rotation resulted in $V_{3}^{harm}=2.4\,\mathrm{kcal\,mol^{-1}}=10.04\,\mathrm{kJ\,mol^{-1}}=839.44\,\mathrm{cm^{-1}}$ (1\,atm, $T=298.15$\,K). The electric dipole moment components were calculated from the equilibrium structure of PO$-$d$_{5,6}$ and are $\mu_{a}=0.87$\,D, $\mu_{b}=1.72$\,D, $\mu_{c}=0.59$\,D, and the total dipole moment is $\mu=2$\,D.
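For reference, the unit conversion behind these numbers uses the standard factors 1\,kcal = 4.184\,kJ and 1\,kcal\,mol$^{-1}$ $\approx$ 349.755\,cm$^{-1}$:

```python
# Convert the computed barrier height between common spectroscopic units
V3_kcal_per_mol = 2.4

KCAL_TO_KJ = 4.184                     # thermochemical calorie
KCAL_PER_MOL_TO_CM1 = 349.755          # 1 kcal/mol expressed in cm^-1

print(V3_kcal_per_mol * KCAL_TO_KJ)            # ~10.04 kJ/mol
print(V3_kcal_per_mol * KCAL_PER_MOL_TO_CM1)   # ~839.4 cm^-1
```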
\subsection{Spectral analysis}
At the beginning of the assignment procedure of the A-E splittings of $\mathrm{CH_{3}CHCD_{2}O}$, the strong A state transitions, which resemble those of an asymmetric rigid rotor without internal rotation \citep{Swalen.1957}, were assigned using the software PGOPHER \citep{Western.2014,Western.2017}. For the initial prediction of the A states, the rotational constants and centrifugal distortion constants from the anharmonic frequency calculations were taken. Hundreds of A state transitions were assigned, until a robust fit of the A state was achieved. An initial prediction of both the A and E state transitions was obtained using the program XIAM, where the barrier height to internal rotation $V_{3}$ and the structural parameters $\rho$, $\beta$, and $\gamma$, which describe the internal rotor axis relative to the remaining frame of the molecule, were included. The initial values of these parameters were based on the knowledge of previous work on the main isotopologue \citep{Herschbach.1958,Herschbach.1959,Mesko.2017,Stahl.2021}. For further assignments, we then utilised the AABS package \citep{Kisiel.2005, Kisiel.2012} from the PROSPE website \citep{Demaison.2001}, which is able to handle torsional quantum number labels, hence, to assign A and E levels in a combined line list. A and E state transitions were assigned to the line list from the XIAM prediction with the help of AABS. The AABS assignments were fit with XIAM to obtain a more accurate prediction, and the AABS line list was complemented by new assignments from the observations. Iteratively, the XIAM parameters were fit and refined to the successively updated line list until a robust parameter set for the XIAM Hamiltonian was found. The conversion between the XIAM format and the AABS format was achieved with the help of a homemade python code. The final prediction and the experimental results are presented below.
\\
The spectrum of the ground torsional state of PO$-$d$_{5,6}$ shows Q-branches and strong R-branch transitions. At $T=300$\,K, the maximum intensity of the spectrum is between 600$-$800\,GHz with strong R-branch transitions dominating the spectrum above 350\,GHz. The spacing of the Q branches is, to a good approximation, given by $2\cdot(A-\dfrac{B+C}{2})\approx 20\,\mathrm{GHz}$, which is clearly distinct from the 23.4\,GHz spacing of the main isotopologue \citep{Stahl.2021}.
\\
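The quoted Q-branch spacing follows directly from the fitted rotational constants in Table \ref{tab:results1}:

```python
# Ground-state rotational constants of CH3CHCD2O from Table 1 (MHz)
A = 15916.944077
B = 6246.890290
C = 5545.600172

# Approximate spacing of successive rQ-branches for a near-prolate top
spacing_MHz = 2.0 * (A - (B + C) / 2.0)
print(spacing_MHz / 1000.0)   # ~20.0 GHz, vs 23.4 GHz for the main isotopologue
```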
Rotational levels of low $K_a$ are split by the molecular asymmetry. Due to the internal rotation of the methyl group, each asymmetry level is further split into one component of non-degenerate A symmetry and one of degenerate E symmetry, which leads to A and E rotational transitions of comparable intensity (A:E = 1:1) \citep{LIN.1959}. For high $K_a$ states the asymmetry splitting vanishes, resulting in two A components of the same energy and two separated components of E symmetry. The spectra of high $K_a$ states show two separated lines (E) and one single line (A) of twice the intensity (E:E:A = 1:1:2). This behavior was described in detail by Herschbach \& Swalen in Refs. \citep{Swalen.1957,Herschbach.1958}. The A-E splitting varies strongly with $J$ and $K_a$. For the ground state transitions of PO$-$d$_{5,6}$ the splitting is typically below 1\,MHz, leaving some of the splittings unresolved.
Example spectra and the simulated room temperature spectra based on parameters derived from a least-squares-fit analysis with XIAM (Tables \ref{tab:results1} and \ref{tab:results2}) are shown in Figures \ref{fig:Qbranch}, \ref{fig:Rbranch}, and \ref{fig:blended_Qbranch}.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\hsize]{MOD_Q_190G_zoom.png}
\caption{
Spectrum of deuterated propylene oxide around 190\,GHz measured in a static cell at room temperature. Top graph: the rQ$_{9}$-branch from $J''=10$ to $J''=22$ can be seen as a measured spectrum (black), and as a stick spectrum prediction of the A (blue) and E (red) states, based on a least-squares-fit analysis with XIAM (Tables \ref{tab:results1} and \ref{tab:results2}). Bottom graph: the detailed view of the rQ$_{9}$-branch shows the splittings of rotational transitions into A and E components.
}
\label{fig:Qbranch}
\end{figure*}
Figure \ref{fig:Qbranch} shows the rQ$_{9}$-branch with transitions from $J''=10$ to $J''=22$ around 190\,GHz (with $J''$ denoting the quantum number of the lower rotational level). A stick spectrum prediction of the A and E states\footnote{In this work, the A levels are marked in blue, the E levels are marked in red.}, based on a least-squares-fit analysis with XIAM (Tables \ref{tab:results1} and \ref{tab:results2}), is also given. The lines are assigned and labeled by asymmetric rotor quantum numbers. Note, that for high $K_a$ states the line splitting is caused by the internal rotation rather than by asymmetry splitting. To be consistent with the assignment of transitions used in our fit routines we kept the labeling of asymmetric top molecules, although the involved energy states are mixtures of both asymmetry components. In the detailed view of the Q-branch in Figure \ref{fig:Qbranch}, the small A-E splitting of the \textit{b}-type transitions with $J''=12$ can be seen. The A states are blended, whereas the predicted E states are split by about 140\,kHz. In the measured spectrum, the doublets of A and E states are equal in intensity. There are more lines in the observed spectrum than lines predicted by XIAM. These lines could belong to higher torsionally excited states or to impurities in the sample cell.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\hsize]{MOD_R_257G_2.png}
\caption{
Spectrum of deuterated propylene oxide around 257.7\,GHz measured in a static cell at room temperature. R-branch transitions with $J''=22$ can be seen in the measured spectrum (black) with the stick spectrum prediction of the A (blue) and E (red) states, based on a least-squares-fit analysis with XIAM (Tables \ref{tab:results1} and \ref{tab:results2}). The transitions are labeled by their quantum numbers. The outer pairs of transitions are \textit{b}-type transitions, which are split up into their A and E components. In the case of the inner lines, which are \textit{a}-type transitions, the A and E states are blended.
}
\label{fig:Rbranch}
\end{figure*}
Figure \ref{fig:Rbranch} shows four doublets of A and E state transitions in the R-branch with $23_{K'_{a},23}\leftarrow 22_{K''_{a},22}$. The outer doublets in Figure \ref{fig:Rbranch} are \textit{b}-type transitions, which are split into well-separated A and E components. The $23_{0,23}\leftarrow 22_{1,22}$ doublet has a splitting of about 800\,kHz, the $23_{1,23}\leftarrow 22_{0,22}$ doublet has a splitting of 600\,kHz. In contrast, the two inner lines of Figure \ref{fig:Rbranch} are \textit{a}-type transitions, $23_{1,23}\leftarrow 22_{1,22}$ and $23_{0,23}\leftarrow 22_{0,22}$, whose A and E components are not resolved; however, line broadening due to the A-E splitting of the $23_{0,23}\leftarrow 22_{0,22}$ transition can be seen.
In Figure \ref{fig:blended_Qbranch}, Q$_{13}$-branch transitions with $J''=15$, $J''=16$, and $J''=17$ can be seen together with the predicted spectrum from the XIAM program. The asymmetry splitting of A states has vanished, whereas the E state transitions are split by 790\,kHz. As can be seen from the line intensities, one of the E components is blended with the corresponding A component.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\hsize]{MOD_R_270G_new.png}
\caption{
Spectrum of deuterated propylene oxide around 270\,GHz measured in a static cell at room temperature. The rQ$_{13}$-branch transitions from $J''=15$ to $J''=17$ can be seen in the measured spectrum (black) with the stick spectrum prediction of the A (blue) and E (red) states, based on a least-squares-fit analysis with XIAM (Tables \ref{tab:results1} and \ref{tab:results2}). The transitions are labeled by their quantum numbers.
The A state asymmetry components are blended, whereas the E state asymmetry components are split, with one of them blended with the respective A state line.
}
\label{fig:blended_Qbranch}
\end{figure*}
\subsection{Spectral results \& discussion}
The results of our investigation of the A-E splitting due to internal rotation with the help of the program XIAM are shown in Tables \ref{tab:results1} and \ref{tab:results2}.
\begin{table*}[!ht]
\centering
\caption{Rotational constants and centrifugal distortion constants of $\mathrm{CH_{3}CHCD_{2}O}$ from the XIAM analysis. The Hamiltonian is described in Watson's A-reduction and in $I^{r}$ representation. Quantum chemical calculations and literature values are also given for comparison.
The rotational constants from Imachi \& Kuczkowski \citep{Imachi.1983} (originally taken from Reference 8 in \citep{Imachi.1983}) are calculated from the moments of inertia by using the conversion factor 505379.05\,MHz\,amu\,\AA$^2$.
}
\begin{tabular}{ccccc}
\hline
Parameter& This work (XIAM) & Imachi \& Kuczkowski &B3LYP/aug-cc-pVTZ \\
\hline
$A$ /MHz & \tablenum{15916.944077(329)} & \tablenum{15916.90897} &\tablenum{16076.69}\\
$B$ /MHz & \tablenum{6246.890290(572)} &\tablenum{6246.59075 }&\tablenum{6224.51} \\
$C$ /MHz & \tablenum{5545.600172(584)} &\tablenum{5545.75072}& \tablenum{5528.22}\\
$\Delta_{J}$ /kHz & \tablenum{2.468321(137)} & & \tablenum{2.44} \\
$\Delta_{JK}$ /kHz & \tablenum{3.122962(310)}&& \tablenum{3.19 } \\
$\Delta_{K}$ /kHz & \tablenum{14.387857(1143) }& & \tablenum{14.46 } \\
$\delta_{J}$ /kHz & \tablenum{0.213706(18)}&& \tablenum{0.21} \\
$\delta_{K}$ /kHz & \tablenum{2.962002(632)} & & \tablenum{2.92} \\
$\Phi_{J}$ /Hz & \tablenum{0.000820(38) } & & \tablenum{ 0.0014} \\
$\Phi_{JK}$ /Hz & & & \tablenum{0.0019 } \\
$\Phi_{KJ}$ /Hz & \tablenum{ -0.008553(754) } & & \tablenum{-0.0087} \\
$\Phi_{K}$ /Hz & \tablenum{ 0.214783(2840) } && \tablenum{ 0.0645 } \\
$\phi_{J}$ /Hz & \tablenum{ 0.000155(4) } & & \tablenum{ 0.0002} \\
$\phi_{JK}$ /Hz & \tablenum{ 0.002964(208) } &&\tablenum{ 0.0050 } \\
$\phi_{K}$ /Hz & \tablenum{ 0.100146(1277) } & & \tablenum{ 0.1324 } \\
\hline
No. of lines $N$ & \tablenum{4199} & & \\
$\sigma$ /kHz & \tablenum{ 81.7 }& \multicolumn{2}{c}{$\sqrt{\frac{1}{N}\sum \left(x_{obs}-x_{calc}\right)^2}$} \\
\hline
\end{tabular}
\label{tab:results1}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Tunneling and structure parameters of $\mathrm{CH_{3}CHCD_{2}O}$ from the XIAM analysis. The Hamiltonian is described in Watson's A-reduction and in $I^{r}$ representation. Literature values from Refs. \citep{Stahl.2021,Mesko.2017,Swalen.1957,Herschbach.1958} for the main isotopologue are also given for comparison.
}
\begin{tabular}{ccccc}
\hline
Parameter& This work (XIAM) & Stahl \textit{et al.}$^{a}$ \citep{Stahl.2021}&Mesko \textit{et al.}$^{a}$ \citep{Mesko.2017} &Herschbach and Swalen$^{a}$ \citep{Swalen.1957,Herschbach.1958} \\
\hline
$V_{3}$/$\mathrm{cm^{-1}}$ & 882.7835(1.7179) & 898.6611(894) & 892.71 (58)& 895(5) \\
$\rho\cdot 10^{-3}$ & 92.248198$^{b}$ & 102.562756& 102.248991 & 103 \\
$\beta$ /rad & -0.1707(11) & 0.1647 &0.1726& \\
$\gamma$ /rad & -1.7013(351) &1.574 &1.5457& \\
$\epsilon$ /rad & 1.4548$^{b}$& 1.574(80) &1.55 (12) & \\
$\delta$ /rad & 0.4588$^{b}$ &0.46661(12) &0.4858 (19)& \\
$\angle(i,a)$ /$^\circ$ & 26.2900(1690) &26.7345(65) &27.8368(1098) & 26.8616702 \\
$\angle(i,b)$ /$^\circ$ & 87.0610(8049) &90.1(2.1) &89.4020(3.3326) & 89.7364385 \\
$\angle(i,c)$ /$^\circ$ & 63.9002(2563) & 63.2656(1052) &62.1708(2602) & 63.1407711 \\
$F_{0}$ /GHz & 156.98027(26010)& 159.0197(1628) &158.2278$^{c}$ & 158.2278 \\
$F$ /GHz & 172.211630665$^{b}$ & 176.281928 &175.277860737 & 175.55846 \\
$I_{\alpha}$ & 3.21938(533) & 3.178091(325) &3.193997$^{d}$& 3.194 \\
$s$ & 68.301456$^{b}$ & 67.924493 &67.861583$^{d}$ & 68.0 \\
\hline
\multicolumn{5}{l}{$^{a}$ These parameters are the values from the main isotopologue $\mathrm{CH_{3}C_{2}H_{3}O}$.} \\
\multicolumn{5}{l}{$^{b}$ These parameters were derived from the fit parameters in the XIAM analysis.} \\
\multicolumn{5}{l}{$^{c}$ The $F_{0}$ value in Mesko \textit{et al.} was fixed to the value of \citep{Herschbach.1958}.} \\
\multicolumn{5}{l}{$^{d}$ Taken from the XIAM output file of Mesko \textit{et al.} \citep{Mesko.2017},}\\
\multicolumn{5}{l}{ ~~ available under \url{https://doi.org/10.1016/j.jms.2017.02.003}.}\\
\end{tabular}
\label{tab:results2}
\end{table*}
The rotational constants $A$, $B$, and $C$, the quartic centrifugal distortion parameters $\Delta_{J}$, $\Delta_{JK}$, $\Delta_{K}$, $\delta_{J}$ and $\delta_{K}$, and the sextic centrifugal distortion constants $\Phi_{J}$, $\Phi_{KJ}$, $\Phi_{K}$, $\phi_{J}$, $\phi_{JK}$, and $\phi_{K}$ were used in Watson's A-reduction in I$^{r}$ representation. Moreover, the barrier height to internal rotation $V_{3}$ and structure parameters have been included to describe the line splitting into A and E components.
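For reference, the rotational and quartic part of the A-reduced Hamiltonian in the $I^{r}$ representation takes the standard form (the sextic terms $\Phi$ and $\phi$ follow the analogous pattern at sixth order in the angular momentum):
\begin{eqnarray}
H_{rot} &=& A J_{a}^{2} + B J_{b}^{2} + C J_{c}^{2} - \Delta_{J} J^{4} - \Delta_{JK} J^{2} J_{a}^{2} - \Delta_{K} J_{a}^{4} \nonumber\\
&& -\, 2\delta_{J} J^{2}\left(J_{b}^{2}-J_{c}^{2}\right) - \delta_{K}\left[J_{a}^{2}\left(J_{b}^{2}-J_{c}^{2}\right)+\left(J_{b}^{2}-J_{c}^{2}\right)J_{a}^{2}\right] . \nonumber\\
\end{eqnarray}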
In the XIAM analysis, the A and E states were fitted to an uncertainty level of 82\,kHz, including 4199 transitions with $J\leq 67$, $K_{a}\leq 28$, and $K_{c}\leq 62$ between 83\,GHz and 330\,GHz. The obtained fit parameters $V_{3}$, $\beta$, and $\gamma$, and the reduced rotational constant $F_{0}$ allowed the derivation of the rho axis vector $\rho$, the angles between the principal axes and the internal axis $\angle(i,a/b/c)$, the internal moment of inertia $I_{\alpha}$, and the dimensionless parameter $s$.
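As a consistency check, the dimensionless parameter $s$ in Table \ref{tab:results2} can be reproduced from the standard relation $s=4V_{3}/(9F)$; the short sketch below converts $F$ from GHz to $\mathrm{cm^{-1}}$ via the speed of light:

```python
V3 = 882.7835          # barrier height / cm^-1 (this work)
F_GHz = 172.211630665  # reduced rotational constant F / GHz (this work)
c = 29.9792458         # conversion: 1 cm^-1 = 29.9792458 GHz

# reduced barrier s = 4 V3 / (9 F), with F expressed in cm^-1
s = 4.0 * V3 / (9.0 * F_GHz / c)
print(round(s, 3))  # -> 68.301, matching s in Table 2
```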
A description of the internal rotor parameters and the XIAM angles can be found in Refs. \citep{LIN.1959,Lister.1978} and \citep{Hartwig.1996,Kleiner.2010}, respectively.
The additional XIAM tunneling parameters $\Delta_{\pi2J}$, $\Delta_{\pi2K}$, $\Delta_{\pi2-}$, $D_{c3J}$, $D_{c3K}$, and $D_{c3-}$ \cite{Hansen.1999,Herbers.2020,Herbers.2020b} did not improve the fit and were therefore omitted. This also holds for $\Delta_{\pi2K}$, which was used in the main isotopologue study \citep{Mesko.2017}.
\\
\\
Imachi \& Kuczkowski \citep{Imachi.1983} determined the structure of propylene oxide from an isotopologue analysis, including three $^{13}$C and several D substitutions. They also reported the moments of inertia\footnote{The moments of inertia reported in Ref. \citep{Imachi.1983} are based on the A state transitions.} of the doubly deuterated propylene oxide with the deuterium atoms at positions 5 and 6, which are taken from Ref.~8 in \citep{Imachi.1983}. The rotational constants given in Table \ref{tab:results1} were calculated from the moments of inertia using the conversion factor 505379.05\,MHz\,amu\,\AA$^2$. The $A$, $B$, and $C$ reported by Imachi \& Kuczkowski are close to the values of this work; for example, the relative deviation for $A$ ($\frac{A-A_{lit}}{A}$, with the literature value $A_{lit}$) is $2.2\cdot 10^{-6}$.
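The same conversion factor links any rotational constant to its moment of inertia; as a sketch, it also reproduces the tabulated $F_{0}$ of Table \ref{tab:results2} from $I_{\alpha}$:

```python
CONV = 505379.05  # MHz * amu * Angstrom^2

def rotational_constant_MHz(I):
    """Rotational constant in MHz from a moment of inertia in amu*Angstrom^2."""
    return CONV / I

# consistency check against Table 2: I_alpha = 3.21938 amu*Angstrom^2
F0_GHz = rotational_constant_MHz(3.21938) / 1000.0
print(round(F0_GHz, 3))  # -> 156.98, matching F0 = 156.98027 GHz in Table 2
```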
The rotational constants retrieved from the B3LYP/aug-cc-pVTZ anharmonic frequency calculations have a relative deviation of 1\% and below compared to the experimental results with XIAM. The calculated $A$ value is overestimated by about 160\,MHz, whereas the $B$ and $C$ values are underestimated by about 20\,MHz compared to the experiment. The quartic centrifugal distortion constants are in good agreement with the experiment, but larger deviations are found for the sextic centrifugal distortion constants. The calculated barrier height to internal rotation $V_{3}^{harm}= 839.44\,\mathrm{cm^{-1}}$ is about 40$\,\mathrm{cm^{-1}}$ below the experimental value of $V_{3}=882.7835(1.7179)\,\mathrm{cm^{-1}}$.
\\
\\
In comparison to the main isotopologue $\mathrm{CH_{3}C_{2}H_{3}O}$, the barrier height to internal rotation of PO$-$d$_{5,6}$ is lower: Herschbach and Swalen \citep{Herschbach.1958} reported $V_{3}=895(5)\,\mathrm{cm^{-1}}$, and Mesko \textit{et al.} \citep{Mesko.2017} reported $V_{3}=892.71(58)\,\mathrm{cm^{-1}}$, which is about 10\,$\mathrm{cm^{-1}}$ higher than for PO$-$d$_{5,6}$.
The larger uncertainty of 1.7179\,$\mathrm{cm^{-1}}$ in this work compared to 0.58\,$\mathrm{cm^{-1}}$ from Ref. \citep{Mesko.2017} can be explained as follows: the main isotopologue study included 15200 lines up to 1\,THz compared to 4199 lines up to 330\,GHz in this work; moreover, the value of $F_{0}$ was fixed in Ref. \citep{Mesko.2017}, whereas in this work the reduced rotational constant was fitted, which directly affects the value of $V_{3}$.
Compared to the value $V_{3}=898.6611(894)\,\mathrm{cm^{-1}}$ of the analysis of PO's first excited torsional state \citep{Stahl.2021}, the barrier height of PO$-$d$_{5,6}$ is also lower by about $16\,\mathrm{cm^{-1}}$. Thus, the influence of deuterating PO's main isotopologue at positions 5 and 6 (see Figure \ref{fig:structure}) on the barrier height to internal rotation is small but not negligible. As was done in the previous work on PO \citep{Herschbach.1957,Herschbach.1958,Mesko.2017,Stahl.2021}, the sixfold potential function $V_{6}$ and higher order contributions to the potential function describing the internal rotation were omitted.
Deviations between PO$-$d$_{5,6}$ and the main isotopologue \citep{Herschbach.1957,Herschbach.1958,Mesko.2017,Stahl.2021} can also be seen in the other structure parameters, namely $\rho$, $\beta$, and $\gamma$, and the angles $\angle(i,a/b/c)$. The $\rho$ value of $92.248\cdot 10^{-3}$ is smaller than the main isotopologue values $\rho_{PO}=102.249\cdot 10^{-3}$ \citep{Mesko.2017}, $\rho_{PO}=103\cdot 10^{-3}$ \citep{Swalen.1957,Herschbach.1958}, and $\rho_{PO}=102.52334(449)\cdot 10^{-3}$ and $\rho_{PO}=102.562756\cdot 10^{-3}$ from the ERHAM and XIAM analyses in \citep{Stahl.2021}, respectively. The reduced rotational constants $F_{0}$ and $F$, which are linked to the barrier height to internal rotation, are also smaller than the main isotopologue values.
Differences between the results of the PO$-$d$_{5,6}$ analysis and those of the main isotopologue PO can be seen in the comparison with the literature values of the main isotopologue (see Table \ref{tab:results2}). The deuterium atoms affect both the characteristics of the asymmetric rotor, for example the rotational constants, and the internal rotational motion, as can be seen from the structure and tunneling parameters in Tables \ref{tab:results1} and \ref{tab:results2}.
\\
In total, the results of the XIAM analysis of PO$-$d$_{5,6}$ are in good agreement with the previous work of Imachi \& Kuczkowski \citep{Imachi.1983}, and the quantum chemical calculations are in good agreement with the experiment. The XIAM prediction, which is based on the least-squares-fit analysis given in Tables \ref{tab:results1} and \ref{tab:results2}, matches the observed spectra with an uncertainty below 82\,kHz. It includes 4199 lines up to 330\,GHz and describes the A-E splitting due to internal rotation. The uncertainty level of our analysis and the inclusion of higher-order centrifugal distortion parameters enable the prediction of A and E lines beyond the measured range of up to 330\,GHz.
\ \\
The main isotopologue has been found in space towards SgrB2(N) \citep{McGuire.2016}, and a search for deuterated PO species appears promising. In particular, in cold clouds and in prestellar and protostellar cores, a deuterium fractionation significantly higher than the elemental abundance D/H$=(1.5\pm 0.1)\cdot 10^{-5}$ \citep{Linsky.1998,Roueff.2005,Herbst.2009} can be expected. Ethylene oxide, the simplest epoxide $\mathrm{C_{2}H_{4}O}$, has been detected in space towards several sources, among them SgrB2(N)
\citep{Dickens.1997}
and the solar-type protostellar binary IRAS 16293-2422 \citep{Lykke.2017}. The millimeter and submillimeter spectra of its singly deuterated isotopologues are known from the literature \citep{Albert.2019}, but these species have not yet been detected in space. Although the methyl radical $\mathrm{CH_{3}}$ has been observed in space \citep{Feuchtgruber.2000}, it is unlikely to be directly involved in the formation of propylene oxide. Other methyl-bearing molecules, such as methanol $\mathrm{CH_{3}OH}$, have been detected towards SgrB2(N) \citep{Ball.1970} and many other sources.
In the case of SgrB2(N), only a tentative detection of the deuterated species $\mathrm{CH_{2}DOH}$ has been reported \citep{Belloche.2016}.
However, methanol can be highly deuterated, as shown by the triply deuterated methanol found towards IRAS 16293-2422, i.e. the same source in which ethylene oxide has also been detected \citep{Parise.2004}. The spectra of the singly deuterated species of propylene oxide in the 10$-$40\,GHz range have been studied by Imachi \& Kuczkowski \citep{Imachi.1983}. If propylene oxide is found in IRAS 16293-2422 or in similar sources, deuteration might occur more readily and might also allow for doubly deuterated species, such as $\mathrm{CH_{3}CHCD_{2}O}$. For a possible detection,
we provide a line list with A and E level transitions of the ground state of PO$-$d$_{5,6}$ in the millimeter and submillimeter-wave range. In case of its astronomical detection, the new measurements will enable the calculation of accurate column densities which further elucidate astrochemical formation processes of chiral molecules in the interstellar medium.
\section{Conclusions}
In this work, the high-resolution internal rotation analysis of the ground state of the chiral molecule $\mathrm{CH_{3}CHCD_{2}O}$ is presented. The least-squares-fit analysis using XIAM included 4199 A and E state transitions between 83\,GHz and 330\,GHz. The overall Hamiltonian in XIAM with the parameters from Tables \ref{tab:results1} and \ref{tab:results2} describes the A-E splitting up to submillimeter-wave frequencies with an uncertainty of 81.7\,kHz. Furthermore, the barrier height to internal rotation was determined to be $V_{3}=882.7835(1.7179)\,\mathrm{cm^{-1}}$. Anharmonic frequency calculations were performed at the B3LYP/aug-cc-pVTZ level of theory; the calculated rotational constants and quartic centrifugal distortion parameters are in good agreement with the experiment, whereas the calculated sextic centrifugal distortion parameters show larger deviations, although these were not relevant in the initial assignment procedure. The rotational constants are in good agreement with the values reported in Ref. \citep{Imachi.1983}. The results of PO$-$d$_{5,6}$ were compared to those of the main isotopologue, and the influence of the deuterium substitution on the structure and the internal rotational motion was briefly discussed.
\section{Acknowledgements}
TFG, RP, DK and PS acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 328961117 – CRC 1319 ELCH. PS and GWF also gratefully acknowledge the funding from the DFG-FU 715/3-1 project.
\section{Introduction}
\label{sec:intro}
The memristor is the first (and possibly the only) non-linear fundamental circuit element and, as such, has a lot to offer to those interested in non-linear dynamics. By virtue of being a fundamental element, we can be reasonably certain that it is the simplest non-linear circuit element. It is a physical device, announced to the world in 2008~\cite{strukov08}, although its existence was predicted many years earlier, in 1971~\cite{Chua1971}. The non-linear behaviour arises because the memristor relates the total history of the charge (the time integral of the current) that has passed through the device to the total history of the magnetic flux (the time integral of the voltage)~\cite{Chua1971}.
Since the first physical realization of a memristor \cite{strukov08}, a huge amount of effort has been devoted to the study of materials and manufacturing techniques for memristor devices, as well as theoretical models, as is covered in detail in a recent review~\cite{RevMemReRAM}. In terms of materials, the first and archetypal memristor is a device made of titanium dioxide sandwiched between platinum electrodes. Studies on the TiO$_2$ memristor~\cite{strukov08} have shown that the main cause of the memristive effect is the motion of oxygen vacancies from the high resistance layer, the TiO$_2$ layer, to the sub-oxide (TiO$_{2-x}$), or low resistance, layer under the application of an external bias: it is this interconversion that changes the resistance of the device. The realization of the active layer requires expensive techniques, such as nano-imprint lithography and atomic layer deposition, followed by subsequent annealing at high temperature. Most memristors are fabricated by atomic layer deposition as perfect crystals, and annealing is usually required to introduce vacancies, since these are defects in a perfect crystal. The fabrication techniques significantly influence the device performance and the cost-effectiveness of the device. For this reason, new methods typical of the printing industry, such as screen printing or ink-jet printing, have also been investigated for the realization of memristive devices \cite{duraisamy,memZno}. Despite the numerous realizations of memristors, a breakthrough in ease of manufacturing was the fabrication of a device by spin-coating a titanium isopropoxide solution on a flexible plastic substrate \cite{hackett}. Unlike devices created by atomic deposition, sol-gel and ink-jet printed devices already have vacancy defects, so they require no annealing step for the formation of the active layer.
In this paper, we make use of flexible TiO$_2$ memristors \cite{ella2011}.
Memristors belong to the class of resistive switching devices, which in recent years have attracted a lot of interest for memory, logic and neuromorphic applications. In particular, the memory effect is due to the possibility of switching the status of the device, through SET and RESET processes, between two states: the high resistance state (HRS) and the low resistance state (LRS). In bipolar RRAM, the SET and RESET processes occur through the formation/dissolution of conductive filaments \cite{ielmini2011}. This mechanism is controlled by a compliance current. Depending on the compliance current, during the switching of a metal-insulator-metal structure between the HRS and the LRS, random telegraph noise (RTN) may be observed. RTN can be reduced with proper SET and RESET processes but, on the contrary, fluctuations of the current at different resistance states can be exploited for non-destructive studies of switching phenomena at the microscopic scale \cite{ielmini10}. RTN is also used as the operating principle for true random number generators based on a contact resistive RAM \cite{Huang}.
As resistive switching devices, the memristors are also of interest for implementation of Boolean logic gates \cite{borghetti2010,Rose2012,Ella2013,Kvatinsky2014,Vourkas2014}, for applications as nonvolatile memories \cite{nonvolatileMem2011}, and as synapses in neuromorphic circuits \cite{Jo2010}.
Due to its non-linearity, the use of the memristor as nonlinear component in chaotic circuits \cite{muthuswamy10} has been envisaged. To this aim the memristor has been used to substitute for Chua's diode in versions of Chua's circuits \cite{itoh08,muthuswamy10,253} and simulated chaos has been observed \cite{82,61,70,232}. Most of the memristor-based oscillators used in these chaotic circuit simulations assume ideal characteristics for the memristor, \emph{e.g.} cubic or piece-wise-linear (PWL) nonlinearities \cite{itoh08}. Since the Strukov memristor is a passive non-symmetrical element having a non-linearity different from the one most frequently investigated, its use for chaotic circuits design is not trivial. Specifically, the non-linearity assumed in literature is usually a continuously varying function or a piece-wise linear-continuously varying function for filamentary type devices.
We would like to demonstrate experimentally that the memristor can be used as the non-linear element in a chaotic circuit. The first step is to demonstrate that simulated circuits including a realistic model of the memristor exhibit chaos; this has been done in~\cite{buscarino12,Buscarino2013}, where a gallery of chaotic circuits using the HP model has recently been proposed. These papers utilised a particular configuration of two memristors connected in anti-parallel, which allowed the authors to obtain a symmetrical characteristic suitable for chaos generation. This sub-circuit has been experimentally realised in~\cite{spike}, where circuits made of 2 or 3 memristors were found to give rise to complex dynamics. What has not been done is to conclusively demonstrate chaotic dynamics in an experimental circuit containing a memristor, and to do so with only one non-linearity in the circuit (i.e. only one memristor). In our experiments we do not apply specific SET and RESET processes, but we drive the memristor by a signal which is a function of the actual status of the device, exploiting its switching properties to obtain deterministic chaos.
The aim of this work is to provide an experimental demonstration of chaos with memristive devices. For this purpose, a simple experimental set-up is proposed in Sec.~\ref{sec:modelA}, where chaos can be uniquely attributed to the presence of the memristor, as described in Sec.~\ref{sec:resultsA}. We then perform a numerical simulation of the set-up as described in Sec.~\ref{sec:modelB} using the simplest model available and explore the parameter space to demonstrate that the chaos is a robust phenomenon observed over a large parameter space in Sec.~\ref{sec:resultsB}. In Sec.~\ref{sec:conclusion} the conclusions of the work are drawn.
\section{Methodology}
\label{sec:model}
\subsection{Experimental}
\label{sec:modelA}
The idea underlying the experimental set-up originates from the way in which memristors are usually characterized. One of the main fingerprints of memristors is the pinched hysteresis loop in the $v-i$ plane measured under periodic excitation \cite{Chua2011,Biolek2013}. A commonly used technique to obtain the $v-i$ characteristics is to apply a sinusoidal voltage input to the memristor (alternatively, the time response is studied through the application of pulse functions) and to measure the current through the device. To do this, a programmable sourcemeter (Keithley 2602) is commonly used because it allows the tuning of the input parameters and the recording of the current. The core of this work is to dynamically change the parameters of the input waveforms as a function of the actual state of the memristor; to our knowledge, this is the first experiment using such a feedback loop.
The memristor used in this work has a sandwich structure of Al-TiO$_2$-Al: the aluminium electrodes were sputter-coated onto PET plastic substrate via a mask as in \cite{hackett,ella2014}, the sol-gel layer \cite{ella2014} was deposited via spin-coating (spun at 33r/s for 60s) and then left under vacuum to hydrolyse for an hour as in \cite{ella2011}. The electrodes are 4mm wide, crossed at 90$^{\circ}$, giving an active area of 16mm$^2$, the titanium gel layer is 40nm thick. In contrast to other more expensive techniques for the fabrication of the active layer, the spin-coated memristor was created with a simple procedure, and without the need for a forming step.
The experimental methodology consists of two separate steps. In the first, we test the memristive behavior by exciting the device with a periodic input and observing the behavior in the $v-i$ plane. In the second step, we drive the memristor with pulses which are related to the actual state of the memristor through a law discussed in detail below. The first step corresponds to operating the memristor in open loop conditions, the second to operating it in feedback mode. In feedback conditions the whole system operates without external inputs, so the actual status of the system (which includes the memristor status and the voltage applied to it) is the result of an autonomous evolution.
The memristor characterisation analysis (first step of our methodology) was performed by setting the sourcemeter to run linear voltage sweeps between a range of $\pm 1V$, with a voltage step size of $0.02V$ and a settling time of $0.01s$ (settling time is the delay period after which the measurement is made). The signal may be viewed as the DC equivalent of an AC waveform with a frequency of $0.5Hz$. The measurements have been repeated several times to test the device performance over iterated cycles.
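The stated 0.5\,Hz equivalence follows directly from the sweep arithmetic: 100 steps of 0.02\,V from $-1$\,V to $+1$\,V and 100 steps back, at 0.01\,s per step, give a 2\,s period. A quick check (values from the text):

```python
step, settle = 0.02, 0.01   # V per step, s per step (values from the text)
n_up = round(2.0 / step)    # number of steps from -1 V to +1 V: 100
period = 2 * n_up * settle  # full up-and-down sweep: 2 s
print(round(1.0 / period, 3))  # -> 0.5, the equivalent AC frequency in Hz
```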
In the second step, we investigate the effect of establishing a relationship between the measured current and the next sample of the applied voltage signal, that is, we drive the memristor on the basis of the current flowing through it. This is particularly simple to realize since it only requires a memristor and a programmable sourcemeter.
The scheme adopted to test for non-linear dynamics is illustrated in Fig.~\ref{fig:schema2}, where $v(t)$ indicates the voltage applied to the memristor, i.e. a digital-to-analog converted signal. The memristor current, indicated as $i(t)$, is sampled as a sequence of measurements $i_h$. Each sample $i_h$ is used to generate the next voltage sample $v_{h+1}$ through the relation:
\begin{equation}
\label{eq:vh1}
v_{h+1}=r(1-k i_h)
\end{equation}
\noindent where $k$ and $r$ are constant tunable parameters. The sequence of samples $v_{h}$ is converted into the analog signal $v(t)$ through the zero-order hold (ZOH), so that:
\begin{equation}
\label{eq:vt}
\begin{array}{ll}
v(t)=v_h, & t_h \leq t < t_{h+1}
\end{array}
\end{equation}
\noindent with $t_h=h\Delta T$, where $\Delta T$ is the sampling time. The sampling of $i(t)$ occurs immediately before the sweep of the voltage from $v_h$ to $v_{h+1}$, that is, $i_h=i(t)|_{t=t_{h+1}^-}$. We note that Eq. (\ref{eq:vh1}) is linear, so that any non-linearity in the system comes from the memristor.
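To see that the feedback law alone cannot generate chaos, consider Eq. (\ref{eq:vh1}) with the memristor replaced by a hypothetical linear resistor (the 1\,k$\Omega$ value below is an arbitrary assumption, as are the chosen $r$ and $k$): the closed loop reduces to an affine one-dimensional map, which simply converges to a fixed point.

```python
def feedback_step(i_h, r=0.25, k=1750.0):
    """Eq. (1): the next voltage level computed from the sampled current."""
    return r * (1.0 - k * i_h)

# hypothetical linear resistor in place of the memristor
R = 1000.0
v = 0.25
for _ in range(100):
    i = v / R           # Ohm's law: no memory, no nonlinearity
    v = feedback_step(i)
print(round(v, 6))  # -> 0.173913, the fixed point r / (1 + r*k/R)
```

With any linear device the loop contracts to this fixed point whenever $|rk/R|<1$; irregular dynamics can therefore only originate in the device itself.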
The sampler, the processor implementing Eq. (\ref{eq:vh1}) and the ZOH are all implemented in the sourcemeter. During operations in feedback conditions, in the sourcemeter Eq. (\ref{eq:vh1}) is iterated, so that the voltage levels are automatically generated for a given time window during which a number of samples of the waveforms is acquired. Specifically, we used a Keithley 2602 programmable sourcemeter to apply a step-wise sweep voltage (both the step amplitude and the duration can be programmed). We used steps with fixed duration so that the applied waveform is a continuous-time signal generated by the conversion of a discrete-time signal through a zero-order hold. The measurement is made at each step after a specified delay period (the sampling time $\Delta T$). Lyapunov exponents of the data were calculated using the TISEAN package \cite{tisean}, which is based on Rosenstein's algorithm \cite{Rosenstein}.
\begin{figure}
\centering {{\includegraphics[width=8.5cm]{schema2.eps}}}
\caption{\label{fig:schema2} The memristor based circuit.}
\end{figure}
\subsection{Simulational}
\label{sec:modelB}
Many models of memristors, ranging from very simple ones to ones incorporating a very detailed description of the phenomena occurring in the device, have been proposed in the literature. The first electronic engineering model of a memristor was Chua's original model of 1971 \cite{Chua1971}; this model offered a way to relate the circuit measurables to the state of the system, but was too abstract to model specific systems. The first materials science model of a memristor was in the original Strukov paper \cite{strukov08}; this included a very simple approach based on modelling the memristor as a space-conserving variable resistor, where the boundary, $w$, between TiO$_2$ and TiO$_{2-x}$ (Fig.~\ref{fig:HPmem}) is used as a state variable which holds the memory of the device (this fits the chemistry of the system: when the voltage is removed, the oxygen vacancies do not dissipate like electrons or holes do in semiconductors, but remain, `remembering' a state). The Strukov model assumes a linear dopant drift under a finite uniform field, which is widely thought to be unlikely in thin films~\cite{86}. Several papers have attempted to improve on this by introducing a window function to prevent the system from going outside of its bounds of 0 and $D$, the thickness of the device, and slowing the boundary down near the edges \cite{2,46,306}. Nonetheless, the original model allows ease of modelling and simplicity in the interpretation of results, as long as the simulation code takes care to avoid $w$ taking unphysical values, and has been widely adopted for simulations \cite{345,346}. More realistic models have also been proposed; an example is \cite{220}, which gives good agreement with the experimental data, but requires 8 fitting parameters and contains difficult-to-simulate terms.
\begin{figure}
\centering {{\includegraphics[width=8.5cm]{HPmem.eps}}}
\caption{\label{fig:HPmem} Schematic representation of a TiO$_2$ memristor.}
\end{figure}
We show here that the presence of chaos is demonstrated even with a very simple model capturing only the main characteristics of the memristive behavior. Specifically, numerical results are obtained by substituting the memristor in the scheme of Fig.~\ref{fig:schema2} with its modelling equations as proposed by Strukov et al. \cite{strukov08}: the resulting simulational scheme is shown in Fig.~\ref{fig:schema1}. The relationship between current and voltage in the memristor is governed by:
\begin{equation} \label{eqn:hp model}
i(t)=v(t)/ \left( R_{ON}{w(t)\over{D}}+{R_{OFF}\left( {1-{w(t)\over{D}}} \right)} \right) \: .
\end{equation}
The variable $w(t)$ (the width of the doped region) represents an internal memory variable limited to values between zero and $D$, the thickness of the device, and is characterized by the following dynamics:
\begin{equation} \label{eq:w2}
\dot{w}(t)={\eta \frac{\mu_v R_{ON}}{D}} i(t)
\end{equation}
\begin{figure}
\centering {{\includegraphics[width=8.5cm]{schema1.eps}}}
\caption{\label{fig:schema1} Scheme used for numerical analysis of the memristor based circuit.}
\end{figure}
\noindent where $\eta$ characterizes the polarity of the memristor ($\eta=1$ if the doped region is expanding, $\eta=-1$ otherwise), $\mu_v$ is the oxygen vacancy ion mobility, and $R_{ON}$ ($R_{OFF}$) is the resistance of the device in its lowest (highest) resistance state.
Following \cite{strukov08} Eqs.~(\ref{eqn:hp model}) and (\ref{eq:w2}) may be rescaled to obtain:
\begin{equation} \label{eqn:hp model2}
i(t)=v(t)/ \left( \bar{w}(t)+{\beta\left({1-\bar{w}(t)}\right)} \right )
\end{equation}
\noindent and
\begin{equation} \label{eq:w2_2}
\dot{\bar{w}}(t)={\eta} i(t)
\end{equation}
\noindent where $\beta=R_{OFF}/R_{ON}$ and time is now expressed in the units of $t_0=D^2/\mu_v v_0$, with $v_0=1V$. The numerical simulations have been based on Eqs. (\ref{eqn:hp model2}) and (\ref{eq:w2_2}), which are dimensionless and contain only one parameter ($\beta$).
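As a sketch of how Eqs. (\ref{eqn:hp model2}) and (\ref{eq:w2_2}) can be combined with the feedback law of Eq. (\ref{eq:vh1}), a simple Euler integration with a zero-order hold can be written as follows. The sampling interval, integration step, initial condition, and the $\pm 10$ voltage compliance limit are assumptions of this sketch, not values taken from the experiment; the internal state is clipped to $[0,1]$ as discussed above.

```python
def simulate(beta=100.0, k=10.0, r=1.0, eta=1.0,
             dT=0.05, dt=1e-3, n_samples=2000):
    """Euler integration of the dimensionless Strukov model inside the
    linear feedback loop v_{h+1} = r*(1 - k*i_h) with a zero-order hold.
    dT, dt, the initial state, and the +/-10 compliance are assumed values."""
    w, v = 0.5, r                # initial internal state and first voltage level
    vs, ws = [], []
    for _ in range(n_samples):
        t = 0.0
        while t < dT:            # hold v constant between samples (ZOH)
            R = w + beta * (1.0 - w)                  # dimensionless memristance
            i = v / R
            w = min(max(w + eta * i * dt, 0.0), 1.0)  # Euler step, clipped to [0,1]
            t += dt
        i_h = v / (w + beta * (1.0 - w))  # sample the current just before the sweep
        v = r * (1.0 - k * i_h)           # feedback law, Eq. (1)
        v = max(min(v, 10.0), -10.0)      # sourcemeter-like compliance (assumption)
        vs.append(v)
        ws.append(w)
    return vs, ws

vs, ws = simulate()
```

The returned sequences can then be inspected for periodicity or passed to a Lyapunov-exponent estimator; note that the qualitative behavior of such a sketch depends on the assumed sampling interval.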
\section{Results}
\label{sec:results}
\subsection{Experimental results}
\label{sec:resultsA}
The experimental results are reported for two devices, indicated as memristor 1 and memristor 2. The two memristors were fabricated with the same procedure; however, parametric tolerances due to the fabrication process do appear. We first describe the $v-i$ characterization under periodic excitation of the memristors analyzed in this work. The results, which refer to memristor 1, are reported in Fig.~\ref{fig:v-iMv47}, showing that the resistance change is not fully reversed over the range of the hysteresis loop. However, in view of the application as a chaos generator, this does not represent a severe issue, since the non-linearity is not required to be time-invariant. Similar behavior has been obtained for memristor 2.
\begin{figure}
\centering {{\includegraphics[width=8.5cm]{v-iMv47.eps}}}
\caption{\label{fig:v-iMv47} Two $v-i$ curves for one of the memristors (memristor 1) used in the experiments (similar results were seen for memristor 2 and are not shown). The input signal is a linear voltage sweep between a range of $\pm 1V$, with a voltage step size of $0.02V$ and a settling time of $0.01s$, that is the DC equivalent of an AC waveform with a frequency of $0.5Hz$.}
\end{figure}
The spin-coated TiO$_2$ memristor was then controlled by a feedback signal generated by using the scheme of Fig.~\ref{fig:schema2}. In both cases chaos has been observed, but the two devices required a different parameter tuning of the control law (\ref{eq:vh1}). For memristor 1 we have chosen $r=0.25$ and $k=1750$ and obtained the waveforms shown in Fig.~\ref{fig:timeevolutioniandv2}. For memristor 2 we have chosen $k=850$ and $r=0.25$; the time evolution of $i(t)$ and $v(t)$ is reported in Fig.~\ref{fig:timeevolutioniandv}. Starting from the acquired data reported in Figs.~\ref{fig:timeevolutioniandv2} and~\ref{fig:timeevolutioniandv} we have estimated the largest Lyapunov exponents: for the data referring to memristor 1 (Fig.~\ref{fig:timeevolutioniandv2}) the largest Lyapunov exponent is $\lambda_{max}=0.7947$, and for the data from memristor 2 (Fig.~\ref{fig:timeevolutioniandv}) $\lambda_{max}=0.9230$. The existence of positive values for the largest Lyapunov exponent indicates that the behavior is chaotic. As the rest of the set-up was linear, this chaos arises from the memristor, and thus we have demonstrated chaotic dynamics from a single memristor device.
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=8cm]{i.eps}}
\subfigure[]{\includegraphics[width=8cm]{v.eps}}\\
\caption{\label{fig:timeevolutioniandv2} Experimental results: time evolution of current and voltage in the memristor 1 based circuit for $r=0.25$ and $k=1750$.}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=8cm]{iM49.eps}}
\subfigure[]{\includegraphics[width=8cm]{vM49.eps}}\\
\caption{\label{fig:timeevolutioniandv} Experimental results: time evolution of current and voltage in the memristor 2 based circuit for $r=0.25$ and $k=850$.}
\end{figure}
\subsection{Numerical results}
\label{sec:resultsB}
In order to demonstrate that the chaos observed with our devices was a real effect generally due to the memristor behaviour, we simulated the set-up and numerically analyzed it with respect to the parameters $k$ and $\beta$. Several regions in the parameter space for which chaotic behavior arises were found. An example of chaotic behavior is reported in Fig.~\ref{fig:trend} where the time evolutions of the variables $v(t)$ and $i(t)$ are shown for $k=10$ and $\beta=100$. In Fig.~\ref{fig:bifk} the bifurcation diagram with respect to $k$ (for $\beta=100$) is illustrated, it shows alternating windows of periodic behavior and chaos. The bifurcation diagram with respect to $\beta$ is reported in Fig.~\ref{fig:bifbeta} (for $k=10$), showing how chaos is preserved for a wide range of values of the constitutive parameter $\beta$. Since $\beta$ may vary in real memristors due to technology, fabrication process and device characteristics, the robustness with respect to this parameter is particularly important.
\begin{figure}
\centering
\subfigure[]{{\includegraphics[width=8cm]{iSim.eps}}}
\subfigure[]{{\includegraphics[width=8cm]{vSim.eps}}}
\caption{\label{fig:trend} Trend of the current (a) and voltage (b) of the memristor circuit for $k=10$ and $\beta=100$.}
\end{figure}
\begin{figure}
\centering {{\includegraphics[width=8cm]{bif_k.eps}}}
\caption{\label{fig:bifk} Bifurcation diagram of the memristor circuit with respect to $k$ ($\beta=100$, $r=1$).}
\end{figure}
\begin{figure}
\centering {{\includegraphics[width=8cm]{bif_beta_k10.eps}}}
\caption{\label{fig:bifbeta} Bifurcation diagram of the memristor circuit with respect to $\beta$ ($k=10$, $r=1$).}
\end{figure}
We observe that the numerical results have been obtained with a dimensionless set of equations; the parameters of the control law ($k$ and $r$) have been rescaled during the experimental tests. The numerical results show that chaos can be generated using this approach. However, the waveforms observed in the experiments and in the numerical simulations do not coincide, indicating that the simplified mathematical model of Eqs. (\ref{eqn:hp model})-(\ref{eq:w2}) is able to capture the general behavior of the system, but not the details of the waveforms observed in the experiments. This shows that the chaotic behaviour comes from an aspect of the memristor that is captured qualitatively by the simplest possible memristor model. However, we strongly suspect that further work using more detailed models will produce numerical results closer to the experimental data, and this might provide memristor modellers with a test case for the improvement of their models.
It is interesting to ask why the memristor allows chaos to arise in such a simple system, and the fact that such a simple model of memristance exhibits chaotic dynamics provides a clue. As mentioned earlier, the memristor non-linearity is not time-invariant, and memristors are known to possess a memory. We posit that it is specifically the interaction of this time-varying memory with the time-based feedback that gives rise to chaos in this system (since the voltage at each step is generated from the data of the previous step, there is a memory in the experimental system as well). The simulation equations are so simple that there is only one time-varying variable, $w(t)$, which is the internal state of the memristor and its memory. This variable depends on $i(t)$ (which is linearly updated through the feedback), but also on the past history of the memristor through a functional which, instead, is non-linear.
\section{Conclusions}
\label{sec:conclusion}
In this work experimental findings on the generation of chaos with a spin-coated memristor are presented. The experimental set-up consists of a single memristor driven by a linear control law relating the applied voltage to the actual value of the current flowing in it. This feedback law does not introduce any non-linearity, so that the observed behavior can be uniquely attributed to the memristor characteristics.
Chaotic waveforms have been experimentally observed in several samples of spin-coated memristors and confirmed by numerical analysis. The largest Lyapunov exponent has been calculated from the experimental time series, and the positive value obtained indicates chaotic dynamics. A simple model of memristive behavior has been shown to be sufficient to reproduce the onset of chaos, although capturing all the features of the waveforms observed in the experiments requires a more detailed model. The experiments showing the irregularity of the circuit's behavior demonstrate the suitability of this simple approach for generating chaos with memristors, and suggest that memristors might be useful components for hardware chaos-based encryption.
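As an aside on methodology, the largest Lyapunov exponent of a scalar time series can be estimated with a Rosenstein-style nearest-neighbour procedure. The sketch below is our own (with our parameter choices and a logistic-map toy signal), not the paper's actual pipeline.

```python
import numpy as np

def largest_lyapunov(x, m=3, tau=1, theiler=10, t_max=8):
    """Rosenstein-style estimate of the largest Lyapunov exponent from a
    scalar time series (a sketch; parameters and method choice are ours)."""
    N = len(x) - (m - 1) * tau
    emb = np.column_stack([x[j * tau:j * tau + N] for j in range(m)])  # delay embedding
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)      # pairwise distances
    idx = np.arange(N)
    d[np.abs(idx[:, None] - idx[None, :]) <= theiler] = np.inf         # exclude close times
    nn = np.argmin(d, axis=1)                                          # nearest neighbours
    mean_logs = []
    for t in range(1, t_max):
        ok = (idx + t < N) & (nn + t < N)
        sep = np.linalg.norm(emb[idx[ok] + t] - emb[nn[ok] + t], axis=1)
        mean_logs.append(np.mean(np.log(sep[sep > 0])))
    # slope of the mean log-separation versus time approximates the exponent
    return np.polyfit(np.arange(1, t_max), mean_logs, 1)[0]

# chaotic logistic map as a toy test signal (known exponent ln 2)
x = np.empty(900); x[0] = 0.123
for n in range(899):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])
lam = largest_lyapunov(x[100:])   # discard the transient
```

A positive slope, as obtained here for the logistic map, is the signature of chaotic divergence of nearby trajectories.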
The analysis performed with the Strukov model suggests that the main mechanism behind the chaos in both the experimental and simulated behavior is the pinched hysteretic characteristic. However, since specific SET and RESET processes are not applied in the experiments, the presence of RTN cannot be excluded; on the contrary, it could be a possible explanation for the differences between the experimental and simulation results.
\section{Introduction}
\label{sect:intro}
Gravitationally lensed quasars offer unique insights into a number of cosmological and astrophysical questions \citep[e.g.,][and references therein]{CSS02}. For example, they can be used to infer properties of the quasar host galaxy \citep{Peng:2006p236}, measure the gravitational profile of the lensing object, and probe the nature of dark matter via measurement of substructure. With the addition of time domain information, they can be used for cosmography by using the time-delay between multiple images as a distance indicator \citep[e.g.][]{Ref64,Sch++97,K+F99,Ogu07,Suy++10,Suy++13,Suy++14,T+M16}.
Currently, the main limitation to detailed analyses is the small sample size of known lensed quasars. Only on the order of 100 lenses have been found so far, including only 10-20 of the most valuable kinds like quadruply imaged systems, or highly variable sources. The small sample size is due to the fact that lenses are rare objects and are difficult to find. With the capabilities of current ground based surveys, only about 0.4 are expected per square degree \citep{O+M10}. For example,\footnote{Acronyms of surveys mentioned throughout this paper: 2MASS \citep{Skr06}; DES \citep{DES14}; SDSS \citep{SDSSDR12}; UKIDSS \citep{Hew06}; VHS \citep{VHS13}; VST-ATLAS \citep{VST15}; WISE \citep{WISE10}.}
in the Dark Energy Survey (DES), 1146 lenses are expected to be found in the 5000 deg$^2$ footprint with roughly 120 of those brighter than 21 in the $i$-band. Of those brighter than 21, about 20\% are expected to be quadruply imaged. Visual inspection in the first 80 deg$^2$ of the Hyper-Suprime Camera \citep[HSC,][]{HSC12} survey has led to the `serendipitous' discovery of a quadruply lensed AGN \citep{Anu16}, owing to a combination of depth and excellent image quality. Similarly, inspection of $\approx6000$ objects in the DES Y1A1 release ($\approx1500\rm{deg}^{2}$), selected as blue point sources near luminous red galaxies, has yielded one new quad lens (Lin et al. 2016, in prep.).
In order to collect a larger lens sample, we need to be able to search through many surveys to pick out potential lens candidates for more detailed follow-up. Due to the size of modern surveys and the lack of readily available spectroscopic data for most objects, we need efficient selection strategies that give a high true positive detection rate based on purely photometric information. Following the strategy and terminology outlined by \citet{Agn++15a}, this selection will result in a smaller and more tractable list of {\it targets}. In turn, those can be subject to computationally more expensive analysis based on the two-dimensional survey images to identify {\it candidates} for spectroscopic or high resolution imaging follow-up, with the aim of minimizing time lost to false positives \citep[see also][for an alternative approach]{Sch++16}. This purely photometric strategy differs from the one adopted by the SDSS Quasar Lens Search \citep{Og++06,Ina++12} and its continuation by \citet{Anu16a}, who have relied on objects that already had a confirmed quasar component based on fibre spectra.
The purely photometric selection of targets/candidates can be solved with suitable applications of classification methods in machine learning \citep[see][for a general review of data mining and classification problems in astronomy]{bb10}. One such method is described by \citet{Agn++15a}, where artificial neural networks (ANNs) are used to classify objects as lensed quasars, pairs of closely aligned quasars, alignments of a luminous red galaxy and an unlensed quasar, or blue-cloud galaxies based on their multi-band photometry. This method led to the successful discovery of the first two lensed quasars in DES \citep{Agn++15b}. However, ANNs and similar methods applied to catalog data are very sensitive to systematic differences in the photometry between simulated and real datasets, that can arise from varying image quality conditions or systematics in the creation of catalogs. Furthermore, the machinery cannot be easily carried over from survey to survey, as the photometry of objects (especially those with extended morphology) depends appreciably on survey specifics like image quality, depth, and filter bandpasses. Hence, supervised methods like ANNs require a dedicated training set, which must be tailored closely to the survey being investigated. Another issue is that of extrapolation and generalization, as the properties of objects in different ANN classes reflect directly those that were used as training/validating sets. Finally, ANNs amount to drawing a finite set of hyperplanes in feature space (in our case, colours and magnitudes), governed by the number of nodes chosen, which however cannot be made arbitrarily large before starting to overfit features specific to the training set.
Population mixture models offer several potential advantages with respect to ANNs. Different classes of (known) objects occupy specific regions of colour-magnitude space, so we may model the multi-band properties of objects in wide-field surveys as a superposition of populations, each with its own parent distribution function (PDF). Hence, with a population mixture model one may associate \textit{membership probabilities} to each object in a survey, yielding a smooth (rather than hard-edge) classification. Furthermore, the structural parameters of the PDFs can be fit from the data themselves, allowing for survey-specific adjustments of the classification scheme to account for e.g. different magnitude calibrations. This phenomenon of \textit{augmentation} allows us to initialize the single classes upon a small training set and then adjust them on a large set of un-labelled objects to fit their distribution in feature space.
We will describe the colour-magnitude properties of objects to be classified using a mixture of Gaussian probability distribution functions, whose parameters can be determined recursively with Expectation-Maximization techniques. This approach has been already used in the past for different purposes. It is known alternatively as \textit{Extreme Deconvolution} (XD), as detailed by \citet{bov11a} and applied to quasar identification from point sources in the SDSS. Quasar selection and photometric redshift estimation, among point-sources in the SDSS DR12, has been performed with XD by \citet{DiP15} using optical and WISE magnitudes. A combination of XD and colour cuts was used to select and spectroscopically confirm a sample of $\approx10^4$ quasars in the VST-ATLAS footprint by \citet{che16}. The use of XD tailored to identify lensed quasar targets/candidates has been initiated by Marshall et al.\footnote{https://github.com/drphilmarshall/PS1QLS} for the Panoramic Survey Telescope and Rapid Response System\footnote{Pan-STARRS, http://pan-starrs.ifa.hawaii.edu/public/}. \citet{Fe++16} have discovered at least one new lensed quasar in the DES footprint, among objects classified as quasars through a population mixture approach on optical and infrared magnitudes\footnote{We should point out that, despite the claim of `morphology-independent' data mining, any technique that involves \texttt{psf} and \texttt{model} magnitudes or stellarity is inherently dependent on morphology.}.
In a completely different context, population mixture models have been exploited by \citet{wal09} for the case of multiple stellar populations in nearby dwarf Spheroidal galaxies, then adapted for chemo-kinematic separation of stellar populations and substructure \citep{wp11,kop11,amo12,amo14}, Globular Cluster populations around early-type galaxies \citep{pot13,Agn++14} and substructure detection \citep{lon15}.
This paper is structured as follows. In Section~\ref{sect:comparison} we briefly illustrate the cross-calibration of magnitudes in the SDSS and VST-ATLAS for extended objects. In Section~\ref{sect:popmix} we outline the population-mixture model used to classify the objects in this work. In Section \ref{sect:EMperf}, we examine the performance of our model in SDSS and VST-ATLAS. In Section \ref{sect:candidates}, we present objects selected by the model as potential lens candidates. We conclude in Section~\ref{sect:concl}.
All magnitudes are given in their native system (AB for SDSS and Vega for WISE and VST-ATLAS), and a standard concordance
cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $h=0.7$ is assumed
when necessary.
\section{Photometry of Extended blue objects in SDSS and VST-ATLAS}
\label{sect:comparison}
Photometry can vary appreciably within one survey, and especially from one survey to another, due to differences in image quality and depth. This is a concern when methods akin to ANNs are used to classify objects outside of the survey for which they were designed.
We explore the importance of photometry differences between surveys by examining the cross-calibration of magnitudes in the SDSS and VST-ATLAS, finding the regression that best translates SDSS magnitudes to VST-ATLAS magnitudes. We are interested in both PSF magnitudes and model magnitudes, as one of the key features identifying lensed quasars with image separation comparable to or smaller than the seeing is that they have colours similar to quasars but are extended, and therefore should be clearly separable from non-lensed quasars based on the difference between \texttt{model} and \texttt{psf} magnitudes.
A tight fit between SDSS and VST-ATLAS magnitudes is required for ANNs to translate well across the two surveys. Conversely, if there is an appreciable scatter around the regression, this will cause issues for ANNs, illustrating the need for a more robust classification procedure. VST-ATLAS has already been calibrated against SDSS to a precision of $0.02$ magnitudes in zeropoints for point sources, using bright stars \citep[$i<16$,][]{sha15}. In our case, we are interested in the properties of extended and fainter objects, for which the magnitude conversion can differ and scatter can be appreciable.
The SDSS DR12 and VST-ATLAS DR2 (public) footprints overlap mainly in a region with right ascension (r.a.) and declination (dec.) approximately $150^\circ < \text{r.a.} < 230^\circ$ and $-4^\circ < \text{dec.} < -2^\circ$ and one with $-20^\circ < \text{r.a.} < 32^\circ$ and $-12^\circ < \text{dec.} < -10^\circ$. Using matched objects in this region, we can compare the photometry of the two surveys. This will then enable the selection of plausible quasars with extended hosts, or lensed quasar candidates, separating them from classes of contaminants like Seyfert and narrow-line galaxies.
Magnitudes are cross-calibrated between SDSS and VST-ATLAS as follows. We first look at objects that lie within a 5$^{\prime\prime}$ radius of each other in the two surveys. For cross-matched objects, we find that $\Delta\mathrm{r.a.}=-0.08\pm0.81$ and $\Delta\mathrm{dec.}=0.11\pm0.66$ (in arcseconds), where $\Delta\mathrm{r.a.}=\mathrm{r.a.}_{\rm atlas}-\mathrm{r.a.}_{\rm sdss}$ and $\Delta\mathrm{dec.}=\mathrm{dec.}_{\rm atlas}-\mathrm{dec.}_{\rm sdss}$. The distance between matched objects is $\mathrm{dist.} = 0.44 \pm 0.95$ arcseconds. For each band and \texttt{psf} or \texttt{model} magnitude choice, its SDSS and VST-ATLAS counterparts are fit by finding the best offset between the two surveys. In the case of VST-ATLAS, we use the aperture magnitude \texttt{AperMag3} with a 1 arcsecond aperture for our \texttt{psf} magnitudes, and we use \texttt{AperMag6} with a $2\sqrt{2}$ arcsecond aperture for our \texttt{model} magnitudes. The residuals are then fit against adjacent colours, using the maximum likelihood estimator\footnote{Openly available at https://github.com/cristobal-sifon/lnr} \texttt{lnr.mle}, to account for the different shape of the response curves and hence the dependence on the object SED (e.g. quasar or blue galaxy). The resulting regressions for the \texttt{model} magnitudes are
\begin{align}
\nonumber ~&~&~&\text{Intrinsic scatter} \\
\nonumber u_{\rm{atlas}} & = & u_{\rm{sdss}} -0.231(u_{\rm{sdss}}-g_{\rm{sdss}}) - 0.055 &~~~~~~~0.417
\\
\nonumber g_{\rm{atlas}} & = & g_{\rm{sdss}} - 0.242 (g_{\rm{sdss}}-r_{\rm{sdss}})+ 0.238 &~~~~~~~0.363
\\
\nonumber r_{\rm{atlas}} & = & r_{\rm{sdss}} + 0.042(g_{\rm{sdss}}-r_{\rm{sdss}}) + 0.035 &~~~~~~~0.273
\\
\nonumber i_{\rm{atlas}} & = & i_{\rm{sdss}} - 0.005(i_{\rm{sdss}}-z_{\rm{sdss}}) + 0.077 &~~~~~~~0.287
\\
z_{\rm{atlas}} & = & z_{\rm{sdss}}+0.402(i_{\rm{sdss}}-z_{\rm{sdss}}) - 0.084 &~~~~~~~0.284
\label{eq:modcal}
\end{align}
For the \texttt{psf} magnitudes, we obtain
\begin{align}
\nonumber ~&~&~&\text{Intrinsic scatter} \\
\nonumber g_{\rm{atlas}} & = & g_{\rm{sdss}} - 0.317(g_{\rm{sdss}}-r_{\rm{sdss}})+ 0.158 &~~~~~~~0.343 \\
\nonumber r_{\rm{atlas}} & = & r_{\rm{sdss}} -0.577(r_{\rm{sdss}}-i_{\rm{sdss}})+ 0.113 &~~~~~~~0.241 \\
i_{\rm{atlas}} & = & i_{\rm{sdss}}-0.108(r_{\rm{sdss}}-i_{\rm{sdss}})-0.021 &~~~~~~~0.247
\label{eq:psfcal}
\end{align}
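For reference, the \texttt{model}-magnitude regressions of Eq.~(\ref{eq:modcal}) can be applied directly in code; a minimal sketch (the function name and signature are ours):

```python
def sdss_to_atlas_model(u, g, r, i, z):
    """Translate SDSS model magnitudes to VST-ATLAS using the fitted
    regressions above. The ~0.3 mag intrinsic scatter means this is an
    ensemble relation, not a precise per-object conversion."""
    return (u - 0.231 * (u - g) - 0.055,
            g - 0.242 * (g - r) + 0.238,
            r + 0.042 * (g - r) + 0.035,
            i - 0.005 * (i - z) + 0.077,
            z + 0.402 * (i - z) - 0.084)
```

For an object with all SDSS colours equal to zero, the conversion reduces to the constant offsets of the regressions.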
The appreciable intrinsic scatter in the translated magnitudes, for the object classes of our interest, is larger than the magnitude uncertainties of single objects.
\section{Population Mixture Models}
\label{sect:popmix}
To deal with issues in translating from SDSS to other surveys, or simply from training sets to real data within a survey, we need a technique that is not sensitive to small shifts and scatter in photometry. Population mixture models offer a classification scheme that can be adjusted to the data themselves. In this sense, the model can fine-tune itself to correct for any small photometric differences.
In a population mixture model, we attempt to describe our data as a superposition of parent distribution functions (PDFs), where each PDF captures a different class of objects. For a set of $K$ PDFs each described by parameters $\bm{\theta}_k$, we can construct a log-likelihood function
\begin{equation}
l(\bm{\theta}) = \log p(\{\bm{x}_i\} | \bm{\theta}) = \log \prod_i^N \sum_k^K p(\bm{x}_i | \bm{\theta}_k),
\end{equation}
where $N$ is the number of objects, $\bm{x}_i$ is a vector of features for each object, and $p(\bm{x}_i | \bm{\theta}_k)$ is the probability that the object $\bm{x}_i$ belongs to class $k$. For computational simplicity, we choose Gaussians as our PDFs, so
\begin{equation}
p(\bm{x}_i | \bm{\theta}_k) = \frac{\alpha_k}{(2\pi)^{P/2} |\bm{\Sigma}_k |^{1/2}} e^{-\frac{1}{2}(\bm{x}_i - \bm{\mu}_{k})^T\bm{\Sigma}_k^{-1}(\bm{x}_i - \bm{\mu}_{k})},
\end{equation}
where $\bm{\mu}_k$ is the mean, $\bm{\Sigma}_k$ is the covariance matrix, $P$ is the number of features, and $\alpha_k$ is a weight parameter defined such that $\sum_k^K \alpha_k = 1$.
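As a sketch (our own code, not the paper's implementation), the mixture log-likelihood of the two equations above can be evaluated as:

```python
import numpy as np

def mixture_loglike(X, alphas, mus, sigmas):
    """l(theta) = sum_i log sum_k p(x_i | theta_k) for Gaussian components
    with weights alpha_k, means mu_k, and covariances Sigma_k."""
    N, P = X.shape
    comp = np.zeros((N, len(alphas)))
    for k, (a, mu, S) in enumerate(zip(alphas, mus, sigmas)):
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)  # Mahalanobis term
        norm = a / ((2.0 * np.pi) ** (P / 2.0) * np.sqrt(np.linalg.det(S)))
        comp[:, k] = norm * np.exp(-0.5 * maha)
    return float(np.sum(np.log(comp.sum(axis=1))))
```

In practice one would work in log space (log-sum-exp) to avoid underflow for objects far from every component.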
In general, one can account for noise in the measurements by convolving with the noise PDF. Since uncertainties are likely dominated by systematic errors such as PSF mismatch and mismatch between the model and the data, the random error is likely a lower limit to the error rather than being representative of the true error. We do not account for noise in this exploration, which is done in the more thorough approach of Extreme Deconvolution \citep{bov11a}. This is appropriate in our context, where random errors are negligible compared to systematic errors associated with model magnitudes, and compared to the intrinsic scatter of the transformations between surveys.
In the case of missing data, we can adjust our model by marginalizing over the missing features. In the case of Gaussian PDFs, this corresponds to restricting $\bm{\Sigma}_k$ and $\bm{\mu}_k$ only to the non-empty entries for each object.
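A sketch of this restriction (with NaN marking a missing feature; the code is ours):

```python
import numpy as np

def gauss_prob_missing(x, alpha, mu, Sigma):
    """Weighted Gaussian density for one object, marginalized over missing
    (NaN) features by restricting mu and Sigma to the observed entries."""
    obs = ~np.isnan(x)
    if not obs.any():
        return alpha                      # no features observed: only the weight remains
    d = x[obs] - mu[obs]
    S = Sigma[np.ix_(obs, obs)]           # covariance restricted to observed entries
    P = int(obs.sum())
    norm = alpha / ((2.0 * np.pi) ** (P / 2.0) * np.sqrt(np.linalg.det(S)))
    return float(norm * np.exp(-0.5 * d @ np.linalg.solve(S, d)))
```

For a diagonal covariance this simply drops the missing dimensions from the product of one-dimensional Gaussians.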
Given this model, our classification problem becomes a matter of finding the parameters $\bm{\theta}$ that maximize the likelihood function. This can be done iteratively using the Expectation Maximization algorithm, briefly described in the next section and explained in detail in Appendix \ref{sect:EMap}.
The main benefit of using a population mixture model is its flexibility. If two classes are well separated in feature space in our training sets, we expect that they will also be well separated in real survey data, assuming our training sets are good representations of the survey data. Similarly, if they are well separated in one survey, they should be similarly separated in other surveys, despite photometric differences. The changes between different data sets should be only minor changes that can be captured by small adjustments to our parameters, $\bm{\theta}$. Thus, we can train our model in one survey and then let the PDFs adjust themselves via the EM algorithm to translate to other surveys, eliminating the need for separate training sets for each survey.
Further, population mixture models offer a smooth classification of objects in the survey. Each object is assigned a membership probability vector of length $K$, where each entry is the probability that the object belongs to a given class. Finally, adjusting the classes on the whole survey enables a generalization from the objects used in the training (and validating) sets to those that can be met in reality.
\subsection{Expectation-Maximization}
The Expectation Maximization (EM) algorithm is a two-step iterative procedure for finding the parameters that maximize the likelihood function. In the context of our model, we initialize the algorithm with guesses for the parameters $\bm{\theta}$. These can either be random or informed, based on our knowledge of the properties of the classes we wish to describe. In principle, the algorithm should converge to the same parameters regardless of the initial guesses, but as is often the case in high-dimensional spaces, a good guess helps significantly in guaranteeing rapid convergence to the absolute maximum. In the case of random assignment, we would examine the objects that lie near the mean of each Gaussian at the end of the EM procedure in order to determine which class of objects each Gaussian describes. However, since we know where the objects we wish to categorize lie in feature space, we make this identification at the beginning of the procedure by initializing the parameters based on the expected features for each class.
At each iteration, the \textit{Expectation} step computes the expected value of the log-likelihood function, given the current parameters. This step also computes \textit{membership probabilities}, i.e. a vector for each object giving the probability of belonging to each class. Using these membership probabilities, one can find the parameters that maximize the expected value of the log-likelihood function (i.e. a \textit{Maximization} step). The details on how parameters are updated are described fully in Appendix \ref{sect:EMap}.
In order to keep parameters from over-adjusting at each step, we introduce a regularization parameter which limits the update of each parameter to a fraction of that proposed by the Maximization step. Since we operate under the assumption that our initial guesses for the parameters are close to the true best parameters, we expect that any updates will be small. By constraining the allowable size of the updates, we avoid any spurious changes to the parameters that might be indicative of one of the Gaussians attempting to encompass multiple classes of objects, or simply due to noise.
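The damped iteration can be sketched as follows. The blending form and the factor name `eta` are our choices for illustration; the exact updates are given in Appendix \ref{sect:EMap}.

```python
import numpy as np

def em_step(X, alphas, mus, sigmas, eta=0.2):
    """One damped EM iteration: E-step membership probabilities, then
    M-step proposals blended with the old parameters by a regularization
    factor eta (eta=1 recovers standard EM)."""
    N, P = X.shape
    K = len(alphas)
    resp = np.zeros((N, K))
    for k in range(K):                              # E-step: component densities
        d = X - mus[k]
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(sigmas[k]), d)
        norm = alphas[k] / ((2.0 * np.pi) ** (P / 2.0)
                            * np.sqrt(np.linalg.det(sigmas[k])))
        resp[:, k] = norm * np.exp(-0.5 * maha)
    resp /= resp.sum(axis=1, keepdims=True)         # membership probabilities

    new_alphas, new_mus, new_sigmas = [], [], []
    for k in range(K):                              # M-step with damping
        w = resp[:, k]
        Nk = w.sum()
        mu_hat = (w[:, None] * X).sum(axis=0) / Nk
        d = X - mu_hat
        S_hat = (w[:, None, None] * np.einsum('ij,ik->ijk', d, d)).sum(axis=0) / Nk
        # move only a fraction eta toward the M-step proposal
        new_alphas.append((1 - eta) * alphas[k] + eta * Nk / N)
        new_mus.append((1 - eta) * mus[k] + eta * mu_hat)
        new_sigmas.append((1 - eta) * sigmas[k] + eta * S_hat)
    return new_alphas, new_mus, new_sigmas, resp
```

Because the new weights are a convex blend of the old weights and the M-step proposals, they remain normalized at every iteration.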
In addition, we explored the use of {\it adaptive second moments} to suppress contributions from data lying far from the means of the PDFs. This works by multiplying by an additional windowing factor when calculating the new covariances in the Maximization step. We used a Gaussian centered on the mean of the PDF as the windowing factor so that points farther from the mean would be given less weight. However, this typically led to shrinking the covariances too far to the point that many of the PDF weights went to zero and so we chose not to implement this further.
\subsection{Implementation}
For our particular implementation, we want to distinguish between lensed quasars and various contaminants such as unlensed quasars of different types and blue-cloud galaxies. Typical lensed quasars have the colours of quasars mixed with those of the red lensing galaxy. In particular, their mid-IR colours are similar to those of quasars, while their other colours may be slightly redder. In addition, lenses should be extended and so we expect them to have higher \texttt{psf-model} magnitudes than unlensed QSOs. The value of \texttt{psf-model} will depend on colour and seeing, since objects may become deblended in some bands and because the relative contributions of the lens and source vary with wavelength. Blue-cloud galaxies and mergers should have colours similar to QSOs in the optical, but significantly different colours in the mid-IR.
If we construct features as combinations of colours, then the lenses and contaminants will occupy different locations in feature space. Thus, by fitting a PDF to each locus of objects, we can piece together a model describing the overall distribution of our objects.
\subsubsection{SDSS SpecPhoto training sets}
To develop our training sets, we use objects that are
spectroscopically identified in SDSS SpecPhoto. In particular, we use
objects identified as `QSO' for our unlensed quasar classes and those identified as `GALAXY' for our galaxy class. These will be preferentially brighter than the objects that we wish to classify, but differences in their features should be small. Thus, going from the training sets to real data will require only small adjustments, determined by the EM algorithm. The validity of this hypothesis is quantified in Section~\ref{sect:confusion}.
At different redshifts, quasar spectral features will shift between
filters, introducing additional complexity to their location in feature
space. To account for this, we break up our quasar classes into six
redshift bins: $z < 0.35$, $0.35 < z < 0.75$, $0.75 < z < 1.2$, $1.2 <
z < 1.75$, $1.75 < z < 2.4$, and $z > 2.4$.
At lower redshifts, we expect some quasar colours to be affected by strong contributions from their host galaxies, making them look more extended. To encapsulate the range of host galaxy contributions, we further break our quasar class into `point-like QSOs' with \texttt{psf - model} magnitudes $<$ 0.15 in $g$, $r$, and $i$ bands, and `extended QSOs' with \texttt{psf - model} $>$ 0.15 in the $i$ band.
For our galaxy contaminants, we consider only blue-cloud galaxies, selected according to
\begin{align}
\nonumber &u_\text{mod}-r_\text{mod} < 2.2,\\
\nonumber &g_\text{mod}-r_\text{mod} > 0.55 - 0.66(u_\text{mod} - g_\text{mod} - 0.6),\\
\nonumber &g_{\text{psf}} - g_{\text{mod}} > 0.15,\\
\nonumber &r_{\text{psf}} - r_{\text{mod}} > 0.15,\\
&i_{\text{psf}} - i_{\text{mod}} > 0.2
\end{align}
We experimented with including additional galaxy classes, but found that the weights reduced to zero, indicating that they were unnecessary classes that did not describe any new populations in our data.
We use 1000 objects for each class in our training sets. While this will not give the proper weights for the PDFs representing the different classes, it ensures that there are sufficient objects for each class for the machinery to train on.
\subsubsection{Simulated lensed quasar training sets}
Due to the paucity of known lensed quasars needed to create a full training set, we need to introduce mock lens systems. These are simulated according to the distributions given by \citet{O+M10}. We further divide the lens class into five separate classes to better capture their diversity.
First, we split the lens objects into those with lower and higher $W1-W2$. Those with lower $W1 - W2$ typically have $i_{\text{psf}} - i_{\text{mod}} < 0.2$ and have colours similar to those of galaxies. These are likely objects with a significant contribution from the lensing galaxy, either due to a larger and brighter galaxy or due to a fainter QSO.
We break the set of objects with higher $W1-W2$ into four subclasses:
\begin{itemize}
\item Those with $W1-W2 > 1.5$: These typically have $g - i \sim 1.301$, $i - W1 \sim 3.426$, and $W1 - W2 \sim 1.7$ or higher
\item Those with higher redshift sources: These have $g - i \sim 0.555$, $i - W1 \sim 3.28$, and $W1 - W2 \sim 1.34$
\item Those with lower redshift sources: These have $g - i \sim 0.533$, $i - W1 \sim 4.285$, and $W1 - W2 \sim 1.06$
\item Redder objects: These have $g - i \sim 1.6$, $i - W1 \sim 3.60$, and $W1 - W2 \sim 1.29$
\end{itemize}
In total, our simulated lensed quasar training sets have 2000 objects, 1000 with high $W1-W2$ and 1000 with low $W1-W2$. The simulated lens sample is cut to retain just objects with $W1-W2>0.55$, where most known lenses lie.
\begin{figure*}
\centering
\includegraphics[width=6.5in]{plots/colourcolour.png}
\caption{Colour-colour plots showing the locations of our training sets in feature space. The red, green, blue, and orange points represent the point-like QSOs, extended QSOs, simulated lensed quasars, and blue cloud galaxies, respectively. The black stars are known lensed quasars that lie in the SDSS footprint. The dashed lines show the colour cuts listed in Equation \ref{eq:colourcuts}. Dotted lines represent either cuts that are within `or' statements or other cuts often used to eliminate contaminants.}
\label{fig:colourcolour}
\end{figure*}
\subsubsection{Large SDSS data set}
\label{sect:largesdss}
After training our model with the training set data, we run the Expectation Maximization algorithm with a large data set. To keep things from becoming too computationally expensive, we first make colour cuts to remove obvious contaminants, and we only take a subset of those objects that satisfy the cuts. The cuts we use are
\begin{eqnarray}
\nonumber &W1 - W2 > 0.35,~W2 - W3 < 4.5,~W2 < 17.5,\\
\nonumber &(W1 - W2 > 0.375 + 0.25\cdot (W2 - 14.76)\\
\nonumber &\text{ or } W2 - W3 < 3.15 + 1.5\cdot(W1 - W2 - 1.075)\\
\nonumber &\text{ or } W1 - W2 > 1.075) \\
\nonumber &1.25 < i_\text{mod} - W1 < 5.75,~i_\text{mod} - W3 < 11.0\\
\nonumber &(g_\text{mod} - i_\text{mod} < 1.2\cdot(i_\text{mod} - W1) - 1.4 \\
\nonumber &\text{ or } g_\text{mod} - i_\text{mod} < 0.65)\\
\nonumber &r_\text{psf} - r_\text{mod} \ge 0.075,~i_\text{psf} - i_\text{mod} \ge 0.075 \\
\nonumber &g_\text{mod} - i_\text{mod} < 3.85,~r_\text{mod} - z_\text{mod} < 2.5 \\
&15.0 < i_\text{mod} < 20.5,~ u_\text{mod} - g_\text{mod} < 1.3
\label{eq:colourcuts}
\end{eqnarray}
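Written out in code, the preselection of Eq.~(\ref{eq:colourcuts}) reads as follows (a sketch; the dict key names are ours, not the survey column names):

```python
def passes_cuts(m):
    """Apply the preselection colour cuts to one object, given as a dict
    of magnitudes with (our own) keys like 'W1', 'g_mod', 'i_psf'."""
    if not (m['W1'] - m['W2'] > 0.35 and m['W2'] - m['W3'] < 4.5 and m['W2'] < 17.5):
        return False
    if not (m['W1'] - m['W2'] > 0.375 + 0.25 * (m['W2'] - 14.76)
            or m['W2'] - m['W3'] < 3.15 + 1.5 * (m['W1'] - m['W2'] - 1.075)
            or m['W1'] - m['W2'] > 1.075):
        return False
    if not (1.25 < m['i_mod'] - m['W1'] < 5.75 and m['i_mod'] - m['W3'] < 11.0):
        return False
    if not (m['g_mod'] - m['i_mod'] < 1.2 * (m['i_mod'] - m['W1']) - 1.4
            or m['g_mod'] - m['i_mod'] < 0.65):
        return False
    if not (m['r_psf'] - m['r_mod'] >= 0.075 and m['i_psf'] - m['i_mod'] >= 0.075):
        return False
    if not (m['g_mod'] - m['i_mod'] < 3.85 and m['r_mod'] - m['z_mod'] < 2.5):
        return False
    return 15.0 < m['i_mod'] < 20.5 and m['u_mod'] - m['g_mod'] < 1.3
```

In a real pipeline the same conditions would be expressed as a vectorized boolean mask (or a database query) rather than a per-object function.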
There are $\sim$6.4 million objects that satisfy these conditions,
which are rather loose. In what follows, we use a random subset of
the top $75\times10^4$ query results. As noted in Table
\ref{tab:colourcuts}, we lose nearly 60 known lenses when we make even
these loose cuts, but these are typically deblended, large separation
lenses. Our search is aimed primarily at the blended, small-separation
($\lesssim 2''$) systems, which are the most abundant systems
\citep{Oguri06}, and the least represented in previous searches.
So these cuts are acceptable for our purposes.
We also see that as we impose even stricter cuts, we can decrease the
number of SDSS objects by an order of magnitude, while keeping nearly
all of the known lenses. We do not use the stricter cuts in this
implementation, but they could be used in the future with larger data
sets to keep things more tractable.
\begin{table}
\centering
\begin{tabular}{l c c}
\hline
Cut & Number in SDSS & Number of lenses \\
\hline
Eq. \ref{eq:colourcuts} cuts & $6.4\times 10^6$ & 53 \\
All magnitudes exist & $5.1\times 10^6$ & 53 \\
$i < 20.5,$ $W2 < 15.6$ & $3.6\times 10^6$ & 53 \\
$W1 - W2 > 0.55$ & $2.5\times 10^6$ & 53 \\
Additional (see caption) & $0.5 \times 10^6$ & 50\\
\hline
\end{tabular}
\caption{The number of SDSS objects and known lenses (out of 128) that survive colour cuts increasing in strictness. The SDSS numbers are found by doing a `select count(*)' query, which may count duplicate objects. For this reason, the SDSS values should be treated as order-of-magnitude estimates. The additional colour cuts in the last row are $(u - g < 0.4$ or $g - r < 0.6 - 0.8\cdot (u - g - 0.6)$ or $g - r < 0.4)$. We also note that when we do not include any cuts in \texttt{psf} magnitudes, the number of remaining known lenses increases to 97. When we remove the cuts in $u - g$ as well, the sample increases to 114.}
\label{tab:colourcuts}
\end{table}
\subsubsection{Choosing features}
\label{sect:feats}
We build our features from combinations of magnitudes in different bands. Figure \ref{fig:colourcolour} shows the distribution of our training set data in various colour and magnitude spaces. As can be seen, different kinds of objects occupy specific locations in feature space. By choosing features that best distinguish between types of objects and combining them into one high dimensional space, our model can more easily separate the different populations of objects.
The black stars represent 128 known lenses that lie in SDSS, found by various selection techniques. Many have similar colours to point-like quasars, indicative of the colour selection step described by \citet{Og++06}. Others have redder colours, corresponding to either Type-II AGN sources or bright lensing galaxies. We are mainly concerned with finding blended, small-separation lenses, since the large separation counterparts are more likely to have been found already. Therefore, we are less interested in the objects that fall on the point-like QSO locus.
The dashed lines in the plot show the typical cuts that we use when selecting objects. In order to include lenses that have larger contributions from the lensing galaxy, we use relatively loose cuts in $u-g$ and $W1-W2$. However, known lenses do not lie predominantly at low $W1-W2$ or high $u-g$. Additional metrics, such as those using the $K$ band, can be useful for trimming contaminants in these regimes, as is done by \citet{Fe++16}, but this introduces two issues for our purposes. First, many of the simulated lenses are particularly bright in $K$. This is because the lenses are constructed via sparse interpolation over 2MASS magnitudes, which suffer from Malmquist bias. Second, we are faced with a depth-versus-footprint trade-off. Surveys such as the Two Micron All-Sky Survey (2MASS) cover the whole sky, but are not always deep enough for our purposes. In fact, only 16\% of our SDSS training set objects have valid $K$ magnitudes in 2MASS. Deeper surveys such as the UKIRT Infrared Deep Sky Survey (UKIDSS) and the VISTA Hemisphere Survey (VHS), on the other hand, cover only a fraction of the SDSS footprint near the equator.
Finally, we see that most of the known lenses have $i_\text{psf} - i_\text{mod} < 0.8$, with only 10 being more extended. The more extended objects are nearly all lenses with a strong component from a bright lens galaxy. Of the known lenses that do not have $i_\text{psf} - i_\text{mod} \approx 0$, most tend to have $(g_\text{psf} - r_\text{psf}) < (g_\text{mod} - r_\text{mod}),$ indicating that they have a redder extended component. This can be interpreted as the mixed contribution of the blue (point-like) quasar images and the red (extended) lens galaxy.
For our Gaussian mixture model implementation, we choose to use magnitudes in the SDSS $griz$ bands, avoiding the $u$ band since it is not available in many surveys, for instance, the Dark Energy Survey. In addition, we use the WISE infrared $W1$, $W2$, and $W3$ bands, but exclude the $K$ band for the reasons discussed above. In our `bare-bones' Gaussian mixture model, we use six features: $\texttt{W2}$, $\texttt{mod\_g} - \texttt{mod\_r}$, $\texttt{mod\_g} - \texttt{mod\_i}$, $\texttt{mod\_r} - \texttt{mod\_z}$, $\texttt{mod\_i} - \texttt{W1}$, $\texttt{W1} - \texttt{W2}$. We also experiment with two extensions of the model to include \texttt{W3} magnitudes and \texttt{psf} magnitudes, when available. With the \texttt{W3} magnitudes, we introduce a $\texttt{W2} - \texttt{W3}$ feature, bringing us to 7 features. The second extension of our model includes $\texttt{psf} - \texttt{model}$ magnitudes as a measure of extendedness. Specifically, we use $\texttt{psf\_i} - \texttt{mod\_i}$, $(\texttt{psf\_g} - \texttt{mod\_g}) - (\texttt{psf\_r} - \texttt{mod\_r})$, and $(\texttt{psf\_r} - \texttt{mod\_r}) - (\texttt{psf\_i} - \texttt{mod\_i})$, bringing us to 9 features.
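As a concrete, if schematic, illustration, the six `bare-bones' features can be assembled directly from catalogue magnitudes. The Python sketch below uses a placeholder dictionary of per-band magnitudes; the key names are illustrative, not the actual survey schema:

```python
def build_features(obj):
    """Six 'bare-bones' features for one object: W2 plus five colours
    built from model magnitudes and the WISE bands. The dictionary keys
    here are placeholders, not the real catalogue column names."""
    return [
        obj["W2"],
        obj["mod_g"] - obj["mod_r"],
        obj["mod_g"] - obj["mod_i"],
        obj["mod_r"] - obj["mod_z"],
        obj["mod_i"] - obj["W1"],
        obj["W1"] - obj["W2"],
    ]

# A toy object with made-up magnitudes.
obj = {"mod_g": 20.1, "mod_r": 19.8, "mod_i": 19.5,
       "mod_z": 19.3, "W1": 15.2, "W2": 14.7}
feats = build_features(obj)   # one 6-element feature vector
```

The 7- and 9-feature extensions would simply append the extra colour and extendedness terms to this list.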
\subsubsection{Running EM}
\label{sect:running}
We run the EM algorithm in three consecutive steps: First, we take only our lens classes and our lens training sets. Using initial guesses for the parameters that best describe the five classes, we initialize the EM procedure. This then outputs the parameters that maximize the likelihood function. Next, we use all 18 of our classes and include the remainder of our training sets. We use the parameters found in the previous step as our new initial guesses for the lens classes and use our best guesses to initialize the parameters for the remaining classes. Finally, using the parameters found in the second step, we apply the EM algorithm to the large data sets of different surveys.
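The warm-start logic of these consecutive steps can be sketched as follows. This is a minimal one-dimensional, two-component stand-in (not the actual pipeline, which uses many features and 18 classes): the parameters fitted in one step seed the initialization of the next.

```python
import math
import random

def em_1d(x, means, sds, weights, n_iter=100):
    """Vanilla EM for a 1-D Gaussian mixture. The initial means/sds/weights
    act as the warm start, so parameters learned in one step can seed the
    next run, mirroring the three-step procedure in the text."""
    for _ in range(n_iter):
        # E-step: membership probabilities under the current parameters
        # (the common 1/sqrt(2*pi) factor cancels in the normalization).
        resp = []
        for xi in x:
            p = [w * math.exp(-0.5 * ((xi - m) / s) ** 2) / s
                 for m, s, w in zip(means, sds, weights)]
            tot = sum(p)
            resp.append([pj / tot for pj in p])
        # M-step: re-estimate weights, means, and standard deviations.
        for j in range(len(means)):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(x)
            means[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            sds[j] = math.sqrt(
                sum(r[j] * (xi - means[j]) ** 2 for r, xi in zip(resp, x)) / nj)
    return means, sds, weights, resp

random.seed(0)
lens = [random.gauss(0.0, 0.3) for _ in range(200)]    # stand-in lens set
other = [random.gauss(3.0, 0.3) for _ in range(600)]   # stand-in non-lens set

# Step 1: fit the lens class alone (for a single Gaussian this reduces
# to the sample mean and standard deviation).
m0 = sum(lens) / len(lens)
s0 = math.sqrt(sum((v - m0) ** 2 for v in lens) / len(lens))

# Step 2: all classes together, seeding the lens component with step 1
# and the remaining component with a rough guess.
means, sds, weights, resp = em_1d(lens + other, [m0, 2.5], [s0, 1.0], [0.5, 0.5])
# Step 3 would repeat the call on the full survey data, initialized
# from the step-2 parameters.
```

The warm start is what lets the model carry class identities across steps: each component stays anchored near the population it was originally fitted to.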
\section{Gaussian Mixture Model Performance}
\label{sect:EMperf}
When training our model, we withhold 30 percent of the objects from the training sets and save them in \textit{validating sets}. We then use these to examine how the model classifies objects that come from the same population as the training sets, but were hidden from the training process. This serves as a check against overtraining, i.e., fitting too closely to the specifics of the training data while failing to accurately represent the class populations as a whole.
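The withholding step amounts to a simple random split; schematically, with placeholder objects:

```python
import random

objects = list(range(1000))        # stand-in training-set objects
random.seed(42)
random.shuffle(objects)
n_val = int(0.3 * len(objects))    # withhold 30 per cent
validating, training = objects[:n_val], objects[n_val:]
```

Only the `training` portion is seen by the EM fit; the `validating` portion is scored afterwards to probe overtraining.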
After calculating the membership probabilities of objects in the validating sets, we evaluate the performance of our model by means of confusion matrices and receiver operating characteristic (ROC) curves. Section \ref{sect:confusion} presents the confusion matrices, square matrices showing how objects are classified based on the classes to which they truly belong. These give us insight into how different types of objects are misclassified. Section \ref{sect:roc} shows ROC curves which illustrate how the true positive and false positive lens detection rates change as we vary the acceptance threshold of being a lens, i.e., a minimum lens probability that we use to identify an object as a lens.
\subsection{Confusion Matrices}
\label{sect:confusion}
The confusion matrices are constructed as follows: First, we compute the membership probabilities for all the objects in the validating set, given our output parameters. Next, for each object, we add these probabilities to the cells along the row of the class from which the object truly derives. Finally, we normalize the rows such that the sum of the cells across each row is 1. This, in essence, gives the mean membership probability vector for each class of objects. For a perfect classification scheme, we expect to see ones on the diagonal and zeros elsewhere.
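A minimal sketch of this construction, assuming per-object membership-probability vectors and true class labels:

```python
def soft_confusion(true_classes, probs, n_classes):
    """Row-normalized 'soft' confusion matrix: each object contributes
    its full membership-probability vector to the row of its true class,
    and each row is then scaled to sum to 1 (the mean membership vector)."""
    cm = [[0.0] * n_classes for _ in range(n_classes)]
    for c, p in zip(true_classes, probs):
        for j in range(n_classes):
            cm[c][j] += p[j]
    return [[v / sum(row) for v in row] for row in cm]

# Toy example: two classes, three validating objects.
probs = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]]
truth = [0, 0, 1]
cm = soft_confusion(truth, probs, 2)
```

A perfect classifier would yield the identity matrix; off-diagonal mass shows which classes get confused with which.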
We calculate the confusion matrices at two stages: after running the EM algorithm with the training set data and after running the algorithm with the full SDSS data set. This allows us to see how the model performs with the real training set data, and then how the performance changes after being `mixed up' by real data. In the latter stage, we examine both what happens when we let only the PDF weights evolve, and then what happens when we let all parameters evolve. The former is akin to adjusting for the relative abundances of the different class populations, but assuming that the training sets are otherwise perfect representations of the real data. By comparing this to the results of adjusting all parameters, we can determine whether there is anything to be gained from adjusting the means and covariances as well.
The three left panels of Figure \ref{fig:confusiontrain} show confusion matrices generated from the parameters after training on the training set. The top, middle, and bottom panels give the results from the 6, 7, and 9 feature implementations, respectively. We see that the algorithm has difficulties distinguishing between the different `Extended QSO' classes. However, the Point-like QSOs, lensed QSO, and Blue Cloud galaxy classes are well classified. As we add more features, we see further improvement in lens classification. Further, we can note that adding the \texttt{psf\_mag - model\_mag} features helps significantly in distinguishing between the point-like and extended QSOs, as we expected.
\begin{figure*}
\centering
\includegraphics[height=7.5in]{plots/SDSSconfusion.png}
\caption{Confusion matrices showing how validating set objects are classified, based on the parameters derived from running the Expectation Maximization algorithm on the training set data. The $y$-axis shows the true class of the input object and the $x$-axis shows the class to which the object was assigned. The point-like (PL) QSO and extended (Ext) QSO labels are in order of increasing redshift bins from top to bottom on the $y$-axis and left to right on the $x$-axis. The lensed quasar classes are labelled L QSO, and the blue cloud galaxy class is labelled BC gal.}
\label{fig:confusiontrain}
\end{figure*}
The middle column of panels shows the results from running the EM algorithm with the large SDSS data set, but only adjusting the weights, while the rightmost column shows the results after adjusting all parameters. In all three implementations, we see drastic improvement in the classification of point-like QSOs as well as a slight improvement in the classification of all other objects. The improvement demonstrates that adjusting the means and covariances does, indeed, improve the classification abilities of the model beyond that obtained by merely carrying over the same parameters from our training sets to the real data. This step highlights the power of our Gaussian mixture model over more rigid classification schemes.
\subsection{ROC Curves}
\label{sect:roc}
Since our end goal is identifying lensed quasars in large surveys, we are interested in the relationship between the true positive and false positive selection rates of our model. Figure~\ref{fig:roccurve} gives the relationship between these two values in the form of a receiver operating characteristic curve. The true and false positive rates are computed by varying the acceptance threshold for identifying a lens. Objects with a combined lens probability above this threshold are identified as lenses and those below the threshold as non-lenses. Given a certain threshold, the true and false positive rates are
\begin{equation}
\text{TPR} = \frac{\text{\# lensing objects identified as lenses}}{\text{\# lensing objects}}
\end{equation}
and
\begin{equation}
\text{FPR} = \frac{\text{\# non-lensing objects identified as lenses}}{\text{\# non-lensing objects}}.
\end{equation}
If we were to randomly classify each object as a `lens' or `non-lens', we would expect TPR and FPR to be identical, corresponding to a line of slope 1 passing through the origin in a ROC curve. The better the performance of the classification scheme, the higher and further left its curve should appear on the diagram. Figure \ref{fig:roccurve} indicates better lens selection rates as we move to models with more features. Note that this shows a zoomed-in portion of the ROC curve with the $x$-axis spanning from 0 to 0.5 and the $y$-axis from 0.5 to 1.0.
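For concreteness, a single (TPR, FPR) point of the ROC curve can be computed per threshold as sketched below, using made-up probabilities and labels:

```python
def roc_point(p_lens, is_lens, threshold):
    """True and false positive rates for one acceptance threshold:
    objects with p >= threshold are identified as lenses."""
    tp = sum(1 for p, y in zip(p_lens, is_lens) if p >= threshold and y)
    fp = sum(1 for p, y in zip(p_lens, is_lens) if p >= threshold and not y)
    n_lens = sum(is_lens)
    n_other = len(is_lens) - n_lens
    return tp / n_lens, fp / n_other

p = [0.95, 0.7, 0.4, 0.2, 0.8, 0.1]   # hypothetical lens probabilities
y = [1, 1, 1, 0, 0, 0]                # 1 = lensing object
curve = [roc_point(p, y, t) for t in (0.9, 0.5, 0.05)]
```

Sweeping the threshold from 1 down to 0 traces the full curve from (0, 0) to (1, 1).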
\begin{figure}
\centering
\includegraphics[width=0.3\textheight]{plots/roc.png}
\caption{Receiver operating characteristic curve, showing the performance of the EM algorithm with the training data. The red, green, and blue lines show the performance based on the 6, 7, and 9 feature implementations, respectively. The solid lines show the results from using the training sets, the dashed lines from adjusting only the weights with the SDSS data, and the dotted lines from adjusting all parameters with the SDSS data.}
\label{fig:roccurve}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.2in]{plots/ATLASconfusionfixed.png}
\caption{Confusion matrices showing the performance of the model when trained on ATLAS data. The point-like (PL) QSO and extended (Ext) QSO labels are in order of increasing redshift bins from top to bottom on the $y$-axis and left to right on the $x$-axis. The lensed quasar classes are labelled L QSO, and the blue cloud galaxy class is labelled BC gal.}
\label{fig:ATLASconfusion}
\end{figure}
\subsection{Testing on SDSS/VST-ATLAS Overlap}
We expect that the PDFs that describe the data from SDSS should be similar to the PDFs that describe the data from other surveys. Only small tweaks should need to be made to the parameters, which can be done using the Expectation Maximization algorithm. We examine this by translating our model trained on the SDSS data to describe similar data in VST-ATLAS.
First, we take the parameters found by training the model on the SDSS data and convert them to VST-ATLAS colours, based on the conversions found in Section \ref{sect:comparison}. If
\begin{equation}
\mu_\text{atlas} = {\bf A}\mu_\text{sdss}+{\bf B},
\end{equation} where ${\bf A}$ and ${\bf B}$ come from Equations \ref{eq:modcal} and \ref{eq:psfcal}, then
\begin{equation}
\Sigma_\text{atlas} = {\bf A}\Sigma_\text{sdss}{\bf A}^T
\end{equation}
and
\begin{equation}
\alpha_\text{atlas} = \alpha_\text{sdss}.
\end{equation}
We then use the converted parameters to initialize the EM algorithm with the VST-ATLAS data. After letting the algorithm run, we obtain a new set of parameters to describe the objects in VST-ATLAS.
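Schematically, the conversion of one component's parameters proceeds as below; the matrix and offset here are illustrative stand-ins, not the fitted conversion coefficients of Equations \ref{eq:modcal} and \ref{eq:psfcal}:

```python
def mat_mul(A, B):
    """Plain matrix product for small lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def convert_gaussian(mu, Sigma, A, B):
    """Push one Gaussian component through the affine colour conversion
    x_atlas = A x_sdss + B: the mean maps to A mu + B, the covariance
    to A Sigma A^T, and the weight is unchanged."""
    mu_new = [sum(a * m for a, m in zip(row, mu)) + b
              for row, b in zip(A, B)]
    Sigma_new = mat_mul(mat_mul(A, Sigma), transpose(A))
    return mu_new, Sigma_new

A = [[1.0, 0.1], [0.0, 0.9]]   # illustrative conversion matrix
B = [0.02, -0.05]              # illustrative zero-point offsets
mu, Sigma = [1.0, 0.5], [[0.04, 0.01], [0.01, 0.09]]
mu_a, Sigma_a = convert_gaussian(mu, Sigma, A, B)
```

Note that the transformed covariance remains symmetric (and positive definite for invertible $\bf A$), so the converted parameters are valid initial guesses for EM.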
Since we do not have validating sets in VST-ATLAS, we use the SDSS training sets to evaluate the performance, first converting the SDSS colours to VST-ATLAS colours, again using Equations \ref{eq:modcal} and \ref{eq:psfcal}. The resulting confusion matrices are shown in Figure \ref{fig:ATLASconfusion}. The left column of matrices shows the results when we adjust only the weights. This would be the case if we did not account for differences in photometry and instead fit only for the relative abundance of the populations. The right column shows the results when we adjust all parameters.
We can see that the 6 feature and 7 feature models do well at classifying point-like QSOs, but the performance of the model suffers when the psf-model features are added, which is the opposite of what we see in SDSS. This is likely a sign that, unfortunately, the VST-ATLAS AperMag6 and AperMag3 magnitudes are not good proxies for the SDSS model and psf magnitudes.
\subsection{Classification of known lenses in SDSS}
We finalize our tests of the Gaussian mixture model by examining how it classifies known lenses. From our list of known lenses, 128 lie in the SDSS footprint, where we can obtain colours and calculate membership probabilities. Figure \ref{fig:knownlensscores} shows the typical scores assigned to such lenses by the 6, 7, and 9 feature models. The distribution is strongly peaked at the very high probability end and the very low probability end. This is especially pronounced in the 9 feature implementation, where 80 percent of all objects have either $p(\text{lens}) > 0.9$ or $p(\text{lens}) < 0.1$. This shows that the 9 feature implementation is very decisive in making `lens' or `not a lens' assignments. We expect this to be the case as we increase the number of features in the model.
\begin{figure}
\centering
\includegraphics[width=3in]{plots/knownlensscores.png}
\caption{Membership probabilities for the 128 known lenses in the SDSS footprint for the 6, 7, and 9 feature models. Note the strong peak at very low and very high probabilities, especially in the 9 feature model.}
\label{fig:knownlensscores}
\end{figure}
\newcommand{0.6}{0.6}
\newcommand{0.1in}{0.1in}
\begin{figure}
\begin{flushleft} (a)\end{flushleft}
\centering
\includegraphics[scale=0.6]{plots/known1.jpeg}\hspace*{0.1in}
\includegraphics[scale=0.6]{plots/known2.jpeg}
\hspace*{0.1in}
\includegraphics[scale=0.6]{plots/known3.jpeg}\\
p(lens)=1.0000\hspace*{0.1in}p(lens)=0.9915\hspace*{0.1in}
p(lens) = 0.9600\\
\vspace*{0.1in}
\includegraphics[scale=0.6]{plots/known4.jpeg}\hspace*{0.1in}
\includegraphics[scale=0.6]{plots/known5.jpeg}
\hspace*{0.1in}
\includegraphics[scale=0.6]{plots/known6.jpeg}\\
p(lens) = 0.0501\hspace*{0.1in}
p(lens) = 0.0135\hspace*{0.1in}
p(lens) = 0.0002\\
\vspace*{0.1in}
\begin{flushleft} (b)\end{flushleft}
\vspace*{-12pt}
\includegraphics[width=3in]{plots/knownlenses_blendedvsdeblended.png}
\caption{(a): Image cutouts of known lensed quasars with the lens probabilities assigned by our model. (b): Membership probabilities for the 128 known lenses, divided into `blended' and `deblended' categories. `Blended' objects look like those in the top three images of part (a) while `deblended' objects look like those in the bottom three images of part (a). The $x$-axis gives the lens probability and the combined lens and extended QSO probabilities in the top and bottom plots, respectively.}
\label{fig:knowncutouts}
\end{figure}
Examining the SDSS image cutouts, we can see a pattern that the objects with the highest lens probability are typically blended, while those with the lowest probabilities are often very well separated. A select few cutouts are shown in Figure \ref{fig:knowncutouts} (a), along with their $p$(lens) score. In Figure \ref{fig:knowncutouts} (b), we show the score distribution for deblended and blended objects. The top plot displays only the lens probability while the bottom plot uses the sum of the lens probability and the extended QSO probabilities. Approximately one-third of the blended objects have a combined lens and extended QSO probability greater than 0.95. Similarly, approximately one-third of the deblended objects have a combined lens and extended QSO probability less than 0.05, with the highest scores coming from the point-like QSO classes. The complete list of lenses is given in Table \ref{tab:knownlensscores}, along with selected scores. We note that, owing to selection effects, wide separation systems are over-represented in the list of known lenses, with respect to what is expected in nature.
\begin{table*}
\centering
\setlength{\tabcolsep}{4.0pt}
{\renewcommand{\arraystretch}{0.98}
\begin{tabular}{c r r r c c c c c}
\hline
Name & ra (deg) & dec (deg) & p(lens) & $\begin{array}{c}p\text{(PL QSO)}\\\text{all redshifts}\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ 1.2 < z < 1.75\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ 1.75 < z < 2.4\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ z > 2.4\end{array}$ & Sep ('') \\
\hline
Q0015+0239 & 4.5474477 & 2.9444539 & 0.00014 & 0.99772 & 0.00000 & 0.00206 & 0.00000 & 2.20 \\
TEX0023+171A & 6.4040475 & 17.4680462 & 0.00278 & 0.48241 & 0.00000 & 0.00779 & 0.00001 & 4.80 \\
SDSSJ00480-1051A & 12.0032122 & -10.8634797 & 0.06170 & 0.93825 & 0.00000 & 0.00002 & 0.00000 & 3.60 \\
PMNJ0134-0931 & 23.6486210 & -9.5175140 & 0.99999 & 0.00001 & 0.00000 & 0.00000 & 0.00000 & 0.73 \\
Q0142-100 & 26.3191555 & -9.7548180 & 0.69409 & 0.30590 & 0.00000 & 0.00000 & 0.00000 & 2.24 \\
PHL1222 & 28.4745284 & 5.0491993 & 0.46181 & 0.53818 & 0.00000 & 0.00000 & 0.00000 & 3.30 \\
SDSSJ02452-0113A & 41.3005190 & -1.2205661 & 0.00071 & 0.99840 & 0.00000 & 0.00087 & 0.00000 & 4.50 \\
SDSS0246-0825 & 41.6420778 & -8.4267136 & 0.36548 & 0.63451 & 0.00000 & 0.00000 & 0.00000 & 1.19 \\
SDSSJ02483+0009A & 42.0866741 & 0.1657625 & 0.00038 & 0.99953 & 0.00000 & 0.00005 & 0.00000 & 6.90 \\
MG0414+0534 & 63.6573133 & 5.5784262 & 0.99779 & 0.00000 & 0.00000 & 0.00221 & 0.00000 & 2.40 \\
B0445+123 & 72.0918766 & 12.4654827 & 0.23227 & 0.00000 & 0.00000 & 0.64398 & 0.00229 & 1.35 \\
SDSSJ07402+2926A & 115.0560383 & 29.4467821 & 0.57241 & 0.42759 & 0.00000 & 0.00000 & 0.00000 & 2.60 \\
SDSS J0743+2457 & 115.9692451 & 24.9621262 & 0.00408 & 0.65394 & 0.00002 & 0.01489 & 0.00093 & 1.03 \\
SDSS0746+4403 & 116.7210121 & 44.0642878 & 0.06333 & 0.88548 & 0.00000 & 0.00077 & 0.00003 & 1.11 \\
SDSSJ07479+4318A & 116.9959352 & 43.3014976 & 0.01982 & 0.97833 & 0.00000 & 0.00001 & 0.00000 & 9.20 \\
SDSS0806+2006 & 121.5987673 & 20.1088663 & 0.03328 & 0.96555 & 0.00000 & 0.00001 & 0.00000 & 1.40 \\
HS0810+2554 & 123.3803634 & 25.7508530 & 0.65535 & 0.34465 & 0.00000 & 0.00000 & 0.00000 & 0.96 \\
SDSSJ08199+5356 & 124.9992399 & 53.9400507 & 0.65332 & 0.00000 & 0.07066 & 0.00041 & 0.00313 & 4.04 \\
SDSSJ08202+0812 & 125.0671257 & 8.2044367 & 0.00219 & 0.99698 & 0.00000 & 0.00002 & 0.00000 & 2.30 \\
HS0818+1227 & 125.4122634 & 12.2916624 & 0.10073 & 0.89729 & 0.00000 & 0.00092 & 0.00000 & 2.10 \\
APM08279+5255 & 127.9237751 & 52.7548742 & 0.99989 & 0.00011 & 0.00000 & 0.00000 & 0.00000 & 0.38 \\
SDSS J0832+0404 & 128.0708254 & 4.0681127 & 0.37806 & 0.62059 & 0.00000 & 0.00006 & 0.00000 & 1.98 \\
SDSS0903+5028 & 135.8955917 & 50.4720356 & 0.86865 & 0.00000 & 0.00000 & 0.00024 & 0.00027 & 2.99 \\
SDSS J0904+1512 & 136.0173140 & 15.2151511 & 0.37964 & 0.62023 & 0.00000 & 0.00000 & 0.00000 & 1.13 \\
RXJ0911+0551 & 137.8650616 & 5.8483616 & 0.26514 & 0.73404 & 0.00000 & 0.00073 & 0.00000 & 2.47 \\
SBS0909+523 & 138.2542954 & 52.9913607 & 0.99151 & 0.00849 & 0.00000 & 0.00000 & 0.00000 & 1.17 \\
RXJ0921+4529 & 140.3034237 & 45.4844531 & 0.01664 & 0.98311 & 0.00000 & 0.00001 & 0.00000 & 6.97 \\
SDSS0924+0219 & 141.2325339 & 2.3236837 & 0.37140 & 0.62585 & 0.00000 & 0.00001 & 0.00000 & 1.75 \\
SDSS J0946+1835 & 146.5199732 & 18.5943605 & 0.00491 & 0.00000 & 0.17294 & 0.00226 & 0.27944 & 3.06 \\
FBQ0951+2635 & 147.8440691 & 26.5872082 & 0.72388 & 0.27612 & 0.00000 & 0.00000 & 0.00000 & 1.11 \\
BRI0952-0115 & 148.7503824 & -1.5018700 & 0.00320 & 0.00011 & 0.10729 & 0.03752 & 0.07157 & 1.00 \\
SDSSJ09591+5449A & 149.7811562 & 54.8184576 & 0.00001 & 0.99991 & 0.00000 & 0.00007 & 0.00000 & 3.90 \\
Q0957+561 & 150.3368322 & 55.8971172 & 0.42382 & 0.57618 & 0.00000 & 0.00000 & 0.00000 & 6.26 \\
SDSS1001+5027 & 150.3692164 & 50.4657960 & 0.55823 & 0.44177 & 0.00000 & 0.00000 & 0.00000 & 2.82 \\
J1004+1229 & 151.1037295 & 12.4895218 & 0.99618 & 0.00002 & 0.00000 & 0.00375 & 0.00000 & 1.54 \\
SDSS1004+4112 & 151.1455147 & 41.2118874 & 0.02993 & 0.96983 & 0.00000 & 0.00000 & 0.00000 & 15.99 \\
SDSS1011+0143 & 152.8729178 & 1.7231710 & 0.00144 & 0.00000 & 0.41954 & 0.00155 & 0.28144 & 3.67 \\
LBQS1009-0252 & 153.0650962 & -3.1169016 & 0.08552 & 0.91415 & 0.00000 & 0.00024 & 0.00000 & 1.54 \\
SDSS1021+4913 & 155.2959315 & 49.2250908 & 0.33645 & 0.65626 & 0.00000 & 0.00009 & 0.00000 & 1.14 \\
FSC10214+4724 & 156.1440054 & 47.1526938 & 0.05473 & 0.51565 & 0.00000 & 0.00042 & 0.00000 & 1.59 \\
SDSSJ10287+3929A & 157.1819591 & 39.4935992 & 0.00024 & 0.98231 & 0.00000 & 0.01161 & 0.00002 & 7.50 \\
B1030+074 & 158.3917723 & 7.1905771 & 0.38193 & 0.61359 & 0.00000 & 0.00001 & 0.00000 & 1.65 \\
SDSSJ10353+0752A & 158.8306849 & 7.8827901 & 0.06756 & 0.93243 & 0.00000 & 0.00001 & 0.00000 & 2.70 \\
SDSS J1054+2733 & 163.6701660 & 27.5517870 & 0.72399 & 0.27601 & 0.00000 & 0.00000 & 0.00000 & 1.27 \\
SDSS J1055+4628 & 163.9393929 & 46.4776390 & 0.13538 & 0.85958 & 0.00000 & 0.00142 & 0.00021 & 1.15 \\
SDSSJ10567-0059A & 164.1870081 & -0.9926159 & 0.00000 & 0.99815 & 0.00000 & 0.00171 & 0.00001 & 7.20 \\
HE1104-1805 & 166.6399556 & -18.3569516 & 0.64122 & 0.35878 & 0.00000 & 0.00000 & 0.00000 & 3.19 \\
SDSSJ11161+4118A & 169.0488890 & 41.3059807 & 0.37839 & 0.49905 & 0.00000 & 0.04115 & 0.00058 & 13.00 \\
PG1115+080 & 169.5706277 & 7.7661757 & 0.79756 & 0.20244 & 0.00000 & 0.00000 & 0.00000 & 2.32 \\
SDSSJ11202+6711 & 170.0504960 & 67.1877758 & 0.03567 & 0.92827 & 0.00000 & 0.00097 & 0.00001 & 1.50 \\
UM425 & 170.8363735 & 1.6298543 & 0.95997 & 0.04003 & 0.00000 & 0.00000 & 0.00000 & 6.50 \\
SDSSJ11249+5710A & 171.2302020 & 57.1823829 & 0.32289 & 0.66965 & 0.00000 & 0.00437 & 0.00000 & 2.20 \\
SDSS J1128+2402 & 172.0770482 & 24.0381996 & 0.12290 & 0.87701 & 0.00000 & 0.00000 & 0.00000 & 0.84 \\
SDSS J1131+1915 & 172.9905204 & 19.2576997 & 0.23650 & 0.76314 & 0.00000 & 0.00031 & 0.00000 & 1.46 \\
SDSS1138+0314 & 174.5155742 & 3.2493912 & 0.01296 & 0.95709 & 0.00000 & 0.00040 & 0.00002 & 1.34 \\
SDSSJ11381+6807A & 174.5383643 & 68.1274026 & 0.18903 & 0.81097 & 0.00000 & 0.00000 & 0.00000 & 2.60 \\
SDSS1155+6346 & 178.8222976 & 63.7727990 & 0.04832 & 0.00050 & 0.00002 & 0.00002 & 0.00002 & 1.95 \\
B1152+200 & 178.8262178 & 19.6617257 & 0.58448 & 0.41552 & 0.00000 & 0.00000 & 0.00000 & 1.59 \\
SDSSJ11583+1235A & 179.5948971 & 12.5884958 & 0.00432 & 0.98312 & 0.00000 & 0.00003 & 0.00000 & 3.60 \\
SDSS1206+4332 & 181.6235366 & 43.5382141 & 0.12480 & 0.87505 & 0.00000 & 0.00000 & 0.00000 & 2.90 \\
1208+1011 & 182.7376442 & 9.9074865 & 0.60758 & 0.30387 & 0.00063 & 0.07242 & 0.01046 & 0.45 \\
SDSSJ12167+3529 & 184.1918625 & 35.4948606 & 0.00195 & 0.99741 & 0.00000 & 0.00007 & 0.00000 & 1.50 \\
HS1216+5032A & 184.6708726 & 50.2599576 & 0.53323 & 0.46677 & 0.00000 & 0.00000 & 0.00000 & 8.90 \\
SDSSJ12257+5644A & 186.4405459 & 56.7446074 & 0.00076 & 0.97687 & 0.00001 & 0.01671 & 0.00004 & 6.00 \\
\hline
\end{tabular}}
\caption{\textit{Continued on next page.}}
\end{table*}
\addtocounter{table}{-1}
\begin{table*}
\centering
\setlength{\tabcolsep}{4.0pt}
{\renewcommand{\arraystretch}{0.98}
\begin{tabular}{c r r r c c c c c}
\hline
Name & ra (deg) & dec (deg) & p(lens) & $\begin{array}{c}p\text{(PL QSO)}\\\text{all redshifts}\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ 1.2 < z < 1.75\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ 1.75 < z < 2.4\end{array}$ & $\begin{array}{c}p\text{(Ext QSO})\\ z > 2.4\end{array}$ & Sep ('') \\
\hline
SDSS1226-0006 & 186.5334332 & -0.1006261 & 0.64129 & 0.35862 & 0.00000 & 0.00002 & 0.00000 & 1.26 \\
SDSSJ12511+2935 & 192.7815624 & 29.5945841 & 0.76803 & 0.18325 & 0.00000 & 0.00001 & 0.00000 & 1.79 \\
SDSSJ12543+2235 & 193.5789495 & 22.5934873 & 0.03605 & 0.26113 & 0.00314 & 0.07851 & 0.04694 & 1.56 \\
SDSSJ12583+1657 & 194.5801292 & 16.9549317 & 0.25021 & 0.74965 & 0.00000 & 0.00006 & 0.00000 & 1.28 \\
SDSSJ12599+1241A & 194.9817306 & 12.6982776 & 0.00006 & 0.99936 & 0.00000 & 0.00023 & 0.00000 & 3.60 \\
SDSSJ13034+5100A & 195.8590567 & 51.0131162 & 0.00183 & 0.99740 & 0.00000 & 0.00070 & 0.00000 & 3.80 \\
SDSS J1304+2001 & 196.1816022 & 20.0178274 & 0.01713 & 0.98218 & 0.00000 & 0.00002 & 0.00000 & 1.87 \\
SDSSJ13136+5151 & 198.4166011 & 51.8579110 & 0.29308 & 0.70692 & 0.00000 & 0.00000 & 0.00000 & 1.24 \\
SDSS J1320+1644 & 200.2465628 & 16.7340505 & 0.03072 & 0.96907 & 0.00000 & 0.00003 & 0.00000 & 8.59 \\
SDSS J1322+1052 & 200.6517436 & 10.8776181 & 0.16367 & 0.83628 & 0.00000 & 0.00000 & 0.00000 & 2.00 \\
SDSS J1330+1810 & 202.5776972 & 18.1755935 & 0.76688 & 0.23279 & 0.00000 & 0.00001 & 0.00000 & 1.76 \\
SDSS1332+0347 & 203.0943281 & 3.7944053 & 0.00938 & 0.00905 & 0.00000 & 0.00009 & 0.00004 & 1.14 \\
SDSS J1334+3315 & 203.5058238 & 33.2595348 & 0.02000 & 0.79386 & 0.00000 & 0.01041 & 0.00720 & 0.83 \\
LBQS1333+0113 & 203.8949735 & 1.3015460 & 0.42463 & 0.57506 & 0.00000 & 0.00000 & 0.00000 & 1.63 \\
SDSSJ13372+6012A & 204.3047305 & 60.2018269 & 0.00503 & 0.99495 & 0.00000 & 0.00001 & 0.00000 & 3.10 \\
SDSS J1339+1310 & 204.7797429 & 13.1776846 & 0.01348 & 0.98534 & 0.00000 & 0.00032 & 0.00000 & 1.69 \\
SDSSJ13494+1227A & 207.3743497 & 12.4519370 & 0.45317 & 0.54683 & 0.00000 & 0.00000 & 0.00000 & 3.00 \\
SDSS1353+1138 & 208.2764435 & 11.6346476 & 0.81294 & 0.18704 & 0.00000 & 0.00000 & 0.00000 & 1.41 \\
SDSS J1400+3134 & 210.0532059 & 31.5817065 & 0.00149 & 0.99490 & 0.00000 & 0.00013 & 0.00000 & 1.74 \\
SDSSJ14002+3134 & 210.0535531 & 31.5813192 & 0.00839 & 0.98560 & 0.00000 & 0.00022 & 0.00000 & 1.74 \\
B1359+154 & 210.3982833 & 15.2233761 & 0.21200 & 0.12899 & 0.00000 & 0.11572 & 0.04951 & 1.71 \\
SDSS1402+6321 & 210.6175989 & 63.3592669 & 0.03197 & 0.00000 & 0.00180 & 0.00006 & 0.02126 & 1.35 \\
SDSSJ14050+4447A & 211.2580744 & 44.7999512 & 0.32177 & 0.67821 & 0.00000 & 0.00002 & 0.00000 & 7.40 \\
SDSS J1405+0959 & 211.3142594 & 9.9920306 & 0.07177 & 0.92672 & 0.00000 & 0.00001 & 0.00000 & 1.98 \\
SDSS1406+6126 & 211.6034810 & 61.4447165 & 0.00423 & 0.97378 & 0.00000 & 0.01210 & 0.00000 & 1.98 \\
SDSSJ14098+3919A & 212.4739213 & 39.3333624 & 0.00000 & 0.99201 & 0.00000 & 0.00764 & 0.00006 & 6.80 \\
HST14113+5211 & 212.8320325 & 52.1916252 & 0.00000 & 0.00000 & 0.00032 & 0.64556 & 0.30989 & 1.80 \\
J141546.24+112943.4 & 213.9426675 & 11.4953999 & 0.96943 & 0.03057 & 0.00000 & 0.00000 & 0.00000 & 1.35 \\
HST14176+5226 & 214.3989070 & 52.4462055 & 0.00000 & 0.00000 & 0.64088 & 0.00082 & 0.11522 & 2.83 \\
SDSSJ14189+2441A & 214.7308964 & 24.6858055 & 0.01147 & 0.98760 & 0.00000 & 0.00001 & 0.00000 & 4.50 \\
B1422+231 & 216.1587845 & 22.9335307 & 1.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 1.68 \\
SDSSJ14260+0719A & 216.5177783 & 7.3238325 & 0.02721 & 0.96679 & 0.00000 & 0.00597 & 0.00000 & 4.30 \\
SDSS J1455+1447 & 223.7580380 & 14.7929914 & 0.20414 & 0.79581 & 0.00000 & 0.00000 & 0.00000 & 1.73 \\
SDSSJ15087+3328A & 227.1758283 & 33.4673933 & 0.35610 & 0.64390 & 0.00000 & 0.00000 & 0.00000 & 2.90 \\
SDSS J1515+1511 & 228.9108052 & 15.1933151 & 0.18959 & 0.80792 & 0.00000 & 0.00003 & 0.00000 & 2.00 \\
SBS1520+530 & 230.4368204 & 52.9134682 & 0.58626 & 0.41373 & 0.00000 & 0.00001 & 0.00000 & 1.59 \\
SDSSJ15247+4409 & 231.1900916 & 44.1637501 & 0.28051 & 0.00485 & 0.00001 & 0.00007 & 0.00081 & 1.70 \\
SDSS J1527+0141 & 231.8338784 & 1.6943353 & 0.11274 & 0.88725 & 0.00000 & 0.00000 & 0.00000 & 2.58 \\
SDSS J1529+1038 & 232.4120988 & 10.6344195 & 0.29694 & 0.70278 & 0.00000 & 0.00000 & 0.00000 & 1.27 \\
SDSSJ15306+5304A & 232.6606914 & 53.0677617 & 0.00052 & 0.99830 & 0.00000 & 0.00035 & 0.00000 & 4.10 \\
HST15433+5352 & 235.8370915 & 53.8645264 & 0.00000 & 0.01033 & 0.00099 & 0.01248 & 0.01926 & 1.18 \\
MG1549+3047 & 237.3013867 & 30.7879099 & 0.00013 & 0.00000 & 0.00002 & 0.00000 & 0.00043 & 1.70 \\
SDSSJ16002+0000 & 240.0645942 & 0.0126311 & 0.22794 & 0.77205 & 0.00000 & 0.00001 & 0.00000 & 1.80 \\
B1600+434 & 240.4187611 & 43.2796578 & 0.00189 & 0.00220 & 0.00011 & 0.00082 & 0.00486 & 1.40 \\
SDSSJ16060+2900A & 241.5117058 & 29.0135566 & 0.24344 & 0.75656 & 0.00000 & 0.00000 & 0.00000 & 3.40 \\
SDSS J1620+1203 & 245.1089177 & 12.0616814 & 0.26150 & 0.68051 & 0.00000 & 0.00251 & 0.00000 & 2.77 \\
1WGAJ16290+3724A & 247.2608250 & 37.4085587 & 0.16648 & 0.83351 & 0.00000 & 0.00000 & 0.00000 & 4.30 \\
PMNJ1632-0033 & 248.2403586 & -0.5558697 & 0.00504 & 0.31223 & 0.00000 & 0.05356 & 0.00759 & 1.47 \\
FBQ1633+3134 & 248.4541069 & 31.5699816 & 0.89503 & 0.10497 & 0.00000 & 0.00000 & 0.00000 & 0.75 \\
SDSSJ16351+2911A & 248.7922560 & 29.1890689 & 0.01140 & 0.98689 & 0.00000 & 0.00027 & 0.00000 & 4.90 \\
KP1634.9+26.7A & 249.2538546 & 26.6027532 & 0.00003 & 0.99978 & 0.00000 & 0.00017 & 0.00000 & 3.80 \\
QJ1643+3156B & 250.7974541 & 31.9390721 & 0.12417 & 0.87517 & 0.00000 & 0.00000 & 0.00000 & 2.30 \\
SDSS1650+4251 & 252.6810110 & 42.8637037 & 0.26140 & 0.73860 & 0.00000 & 0.00000 & 0.00000 & 1.23 \\
MG1654+1346 & 253.6741318 & 13.7725911 & 0.00030 & 0.00000 & 0.08538 & 0.00089 & 0.25019 & 2.10 \\
SDSSJ17232+5904A & 260.8225806 & 59.0795939 & 0.05343 & 0.94657 & 0.00000 & 0.00000 & 0.00000 & 3.70 \\
B2108+213 & 317.7255259 & 21.5161064 & 0.89651 & 0.00000 & 0.00647 & 0.00015 & 0.00446 & 4.57 \\
B2114+022 & 319.2115894 & 2.4295637 & 0.56744 & 0.07080 & 0.00000 & 0.00001 & 0.00001 & 1.31 \\
SDSSJ22144+1326A & 333.6126343 & 13.4491709 & 0.00023 & 0.99124 & 0.00000 & 0.00348 & 0.00000 & 5.80 \\
Q2237+030 & 340.1259423 & 3.3584156 & 0.55174 & 0.11150 & 0.00000 & 0.00001 & 0.00000 & 1.78 \\
B2319+052 & 350.4199408 & 5.4599435 & 0.00000 & 0.00000 & 0.01405 & 0.03596 & 0.83984 & 1.36 \\
PSS2322+1944 & 350.5298481 & 19.7397163 & 0.02608 & 0.04620 & 0.01689 & 0.04352 & 0.45616 & 1.49 \\
SDSSpJ23365-0107 & 354.1489629 & -1.1260454 & 0.36245 & 0.63754 & 0.00000 & 0.00001 & 0.00000 & 1.70 \\
SDSS J2343-0050 & 355.7997514 & -0.8428717 & 0.34923 & 0.64959 & 0.00000 & 0.00093 & 0.00001 & 1.51 \\
Q2345+007A & 357.0816039 & 0.9559653 & 0.00123 & 0.99781 & 0.00000 & 0.00093 & 0.00000 & 7.10 \\
\hline
\end{tabular}}
\caption{List of all known lenses in SDSS, along with selected membership probabilities. (\textit{Continued from previous page})}
\label{tab:knownlensscores}
\end{table*}
The discrepancy in the scores is likely due to the fact that we trained our model on blended, small separation lens systems and so the model is designed to pick out similar objects. Since the colours of deblended objects come only from a single image, they appear as stand-alone, point-like QSOs, as in the bottom three images in Figure \ref{fig:knowncutouts} (a). Blended objects, such as those in the top three images of Figure \ref{fig:knowncutouts} (a), will be extended and will have the colours of quasars. Thus, our model will identify them either as lenses or as extended QSOs.
In some cases, lenses that are blended in SDSS will become deblended in surveys such as DES, where the seeing and image quality are better. This may pose a problem with our model as those systems will likely receive low lens membership probabilities when examined in DES. In order to capture as many lenses as possible, it becomes increasingly important to extend our training sets to include objects of all image configurations and separations.
\section{Lens Candidates}
\label{sect:candidates}
The membership probabilities produced by our model allow for numerous
methods of selecting lens candidates. The simplest method is to take
as targets the objects with the highest combined lens probabilities,
while alternative choices may place upper limits on, say, the
blue-cloud galaxy probabilities. After a list of targets is compiled,
candidate selection can be based for example on visual inspection of
the images, machine learning pixel based techniques \citep{Agn++15a},
or fast lens modeling \citep{Marshall:2009p593,Cha++15}.
As an illustration, we make a simple list of candidates by first examining the objects assigned the highest combined lens probabilities, summed over all three implementations. Taking the top 2000 candidates, we select those with available SDSS spectroscopy, in order to simulate a potential follow-up campaign. Of the 2000 objects in the list, 458 have spectra. We identify those with `QSO' or `Galaxy AGN' spectra as QSOs, those with `Galaxy,' `Galaxy starburst,' or `Galaxy starforming' spectra as Galaxy contaminants, and those with stellar spectra as stellar contaminants. The distribution of all objects with spectra is shown in the top frame of Figure \ref{fig:spectrabreakdown}. As hoped, nearly 80\% of the selected objects with spectra are quasars, with the majority of contaminants accounted for by other galaxies.
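The selection and spectral tally described above can be sketched in a few lines of Python. This is an illustration only: the field names (\texttt{p\_lens}, \texttt{spec\_class}) are hypothetical stand-ins for the actual catalogue columns, and the real pipeline queries SDSS directly.

```python
# Sketch of the candidate-selection step: rank objects by combined lens
# probability (summed over the three implementations), keep the top N,
# and tally the spectroscopic classes of those with spectra.
# Field names are hypothetical placeholders.
def select_candidates(objects, n_top=2000):
    ranked = sorted(objects, key=lambda o: sum(o["p_lens"]), reverse=True)
    top = ranked[:n_top]
    qso_classes = {"QSO", "Galaxy AGN"}
    gal_classes = {"Galaxy", "Galaxy starburst", "Galaxy starforming"}
    tally = {"QSO": 0, "galaxy": 0, "star": 0, "no_spectrum": 0}
    for obj in top:
        spec = obj.get("spec_class")
        if spec is None:
            tally["no_spectrum"] += 1
        elif spec in qso_classes:
            tally["QSO"] += 1
        elif spec in gal_classes:
            tally["galaxy"] += 1
        else:
            tally["star"] += 1      # remaining spectra are stellar
    return top, tally
```

In this scheme, the purity quoted in the text corresponds to \texttt{tally["QSO"]} divided by the number of objects with spectra.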
\begin{figure}
\centering
\includegraphics[width=3in]{plots/spectrabreakdown.png}
\caption{Distribution of the 458 objects with spectra from our list of 2000 objects receiving the highest $p$(lens) scores. The bottom panel shows the fraction of objects with spectra that are, indeed, QSOs. The dashed red line indicates the mean fraction.\hspace{\textwidth}
$^\dag$ We define $f_{\text{QSO}} = N_\text{objects with QSO spectra}/N_\text{objects with spectra}$}
\label{fig:spectrabreakdown}
\end{figure}
\newcommand{0.7in}{0.7in}
\begin{figure*}
\centering
\includegraphics[width=0.7in]{plots/candidate01.png}
\includegraphics[width=0.7in]{plots/candidate02.jpeg}
\includegraphics[width=0.7in]{plots/candidate03.png}
\includegraphics[width=0.7in]{plots/candidate04.jpeg}
\includegraphics[width=0.7in]{plots/candidate05.jpeg}
\includegraphics[width=0.7in]{plots/candidate06.jpeg}
\includegraphics[width=0.7in]{plots/candidate07.png}
\includegraphics[width=0.7in]{plots/candidate08.jpeg}
\includegraphics[width=0.7in]{plots/candidate09.jpeg}
\includegraphics[width=0.7in]{plots/candidate10.jpeg}
\includegraphics[width=0.7in]{plots/candidate11.jpeg}
\includegraphics[width=0.7in]{plots/candidate12.jpeg}
\includegraphics[width=0.7in]{plots/candidate13.jpeg}
\includegraphics[width=0.7in]{plots/candidate14.jpeg}
\includegraphics[width=0.7in]{plots/candidate15.jpeg}
\includegraphics[width=0.7in]{plots/candidate16.jpeg}
\includegraphics[width=0.7in]{plots/candidate17.jpeg}
\includegraphics[width=0.7in]{plots/candidate18.jpeg}
\includegraphics[width=0.7in]{plots/candidate19.jpeg}
\includegraphics[width=0.7in]{plots/candidate20.jpeg}
\includegraphics[width=0.7in]{plots/candidate21.jpeg}
\includegraphics[width=0.7in]{plots/candidate22.jpeg}
\includegraphics[width=0.7in]{plots/candidate23.jpeg}
\includegraphics[width=0.7in]{plots/candidate24.jpeg}
\includegraphics[width=0.7in]{plots/candidate25.jpeg}
\includegraphics[width=0.7in]{plots/candidate26.jpeg}
\includegraphics[width=0.7in]{plots/candidate27.jpeg}
\includegraphics[width=0.7in]{plots/candidate28.jpeg}
\includegraphics[width=0.7in]{plots/candidate29.jpeg}
\includegraphics[width=0.7in]{plots/candidate30.jpeg}
\includegraphics[width=0.7in]{plots/candidate31.png}
\includegraphics[width=0.7in]{plots/candidate32.jpeg}
\includegraphics[width=0.7in]{plots/candidate33.jpeg}
\includegraphics[width=0.7in]{plots/candidate34.jpeg}
\includegraphics[width=0.7in]{plots/candidate35.jpeg}
\includegraphics[width=0.7in]{plots/candidate36.jpeg}
\includegraphics[width=0.7in]{plots/candidate37.jpeg}
\includegraphics[width=0.7in]{plots/candidate38.jpeg}
\includegraphics[width=0.7in]{plots/candidate39.png}
\includegraphics[width=0.7in]{plots/candidate40.jpeg}
\includegraphics[width=0.7in]{plots/candidate41.jpeg}
\includegraphics[width=0.7in]{plots/candidate42.jpeg}
\includegraphics[width=0.7in]{plots/candidate43.jpeg}
\caption{Image cutouts of the 43 SDSS objects receiving the highest combined lensed quasar probability, corresponding to those listed in Table \ref{tab:candcoord}. All the objects are spectroscopically confirmed to host an active galactic nucleus. The first object in Table \ref{tab:candcoord} corresponds to the top left image. Subsequent objects are in order of left to right first and then top to bottom. Each image is 12 arcseconds on a side with North up and East to the left. The five encircled objects already have HST imaging counterparts (two known lenses in green and three singly imaged AGN in red), as discussed in the text and fig.~\ref{fig:HSTimages}.}
\label{fig:candidatecutouts}
\end{figure*}
\begin{table*}
\centering
\setlength{\tabcolsep}{4.5pt}
\begin{tabular}{r r r c c c c c c c c c c c c}
\hline
ra & dec & u & g & r & i & z & W1 & W2 & psf\_g & psf\_r & psf\_i & W3 & redshift & $p$(lens) \\
\hline
26.3191345 & -9.7547843 & 17.79 & 17.28 & 17.07 & 16.63 & 16.85 & 13.47 & 12.63 & 17.05 & 16.89 & 16.66 & 9.52 & 2.725 & 0.92514 \\
322.2760283 & -7.2703745 & 19.25 & 19.01 & 18.67 & 19.45 & 18.74 & 14.97 & 13.68 & 19.03 & 18.65 & 19.04 & 11.30 & 1.215 & 0.90465 \\
203.8949676 & 1.3015746 & 18.74 & 18.81 & 18.70 & 17.97 & 18.20 & 14.06 & 12.79 & 18.57 & 18.38 & 17.79 & 9.61 & 1.576 & 0.99458 \\
39.0260148 & -8.7158960 & 17.59 & 17.38 & 17.32 & 17.36 & 17.32 & 13.52 & 12.18 & 17.35 & 17.22 & 17.33 & 9.29 & 0.893 & 0.82064 \\
356.5790860 & -9.5192272 & 20.39 & 19.80 & 18.98 & 18.91 & 19.07 & 14.26 & 13.39 & 19.96 & 19.15 & 19.17 & 10.94 & 1.111 & 0.99347 \\
245.3698819 & -0.6473633 & 20.00 & 19.58 & 18.88 & 19.16 & 18.63 & 14.66 & 13.71 & 19.61 & 18.91 & 19.17 & 11.11 & 1.061 & 0.97723 \\
24.8632464 & 1.2630244 & 22.06 & 21.65 & 20.58 & 19.69 & 19.84 & 15.13 & 13.81 & 22.35 & 21.06 & 20.27 & 9.39 & 0.230 & 0.96277 \\
232.9027584 & 4.2669427 & 18.92 & 18.25 & 18.68 & 18.43 & 18.17 & 14.98 & 14.03 & 18.81 & 19.23 & 19.00 & 11.01 & 2.000 & 0.95502 \\
257.0625197 & 34.8380944 & 19.37 & 19.15 & 19.69 & 19.12 & 18.92 & 14.47 & 13.17 & 18.95 & 18.93 & 18.53 & 10.29 & 1.600 & 0.90956 \\
189.0097472 & -3.5249859 & 17.09 & 16.96 & 16.92 & 16.68 & 16.74 & 13.58 & 12.28 & 16.98 & 16.91 & 16.66 & 9.14 & 1.824 & 0.90437 \\
\\
36.7783671 & -9.1214118 & 19.79 & 19.16 & 18.60 & 18.59 & 18.34 & 13.87 & 12.63 & 19.48 & 18.97 & 19.04 & 9.84 & 0.960 & 0.82243 \\
191.2738981 & 3.1463089 & 20.57 & 19.55 & 18.36 & 17.95 & 17.71 & 12.91 & 11.78 & 19.62 & 18.43 & 18.02 & 9.10 & 1.097 & 0.81733 \\
115.3586716 & 42.2585368 & 17.62 & 17.49 & 17.53 & 17.13 & 17.05 & 13.96 & 12.59 & 17.53 & 17.60 & 17.12 & 9.27 & 1.850 & 0.98504 \\
18.6724107 & -9.2997739 & 18.78 & 18.38 & 18.30 & 18.43 & 19.28 & 14.11 & 13.22 & 18.36 & 18.31 & 18.44 & 10.91 & 0.763 & 0.98000 \\
211.1369562 & 1.1384257 & 18.84 & 18.56 & 18.77 & 18.77 & 19.45 & 14.62 & 13.56 & 18.57 & 18.70 & 18.73 & 10.86 & 0.634 & 0.92927 \\
26.8469730 & 14.7225253 & 18.75 & 18.53 & 18.03 & 18.11 & 17.93 & 13.62 & 12.58 & 18.62 & 18.34 & 18.28 & 9.73 & 0.433 & 0.91726 \\
26.0004127 & 13.2088043 & 19.46 & 19.07 & 18.48 & 18.21 & 17.74 & 13.95 & 13.34 & 19.61 & 19.16 & 18.93 & 10.82 & 0.289 & 0.87390 \\
158.1535243 & -1.1030481 & 17.09 & 16.99 & 16.77 & 16.72 & 16.80 & 13.43 & 12.00 & 17.05 & 16.77 & 16.71 & 8.97 & 1.263 & 0.86483 \\
33.2371116 & -9.4852927 & 19.72 & 19.47 & 19.43 & 18.80 & 18.55 & 14.43 & 13.47 & 19.65 & 19.65 & 19.06 & 10.66 & 0.415 & 0.84925 \\
211.5935059 & -1.2085686 & 20.43 & 19.58 & 18.96 & 18.72 & 18.68 & 14.44 & 12.87 & 19.74 & 19.12 & 18.88 & 9.61 & 1.154 & 0.83553 \\
\\
57.1105520 & -0.8798747 & 20.84 & 20.41 & 19.41 & 19.07 & 18.53 & 14.88 & 14.12 & 20.67 & 19.76 & 19.43 & 12.10 & 0.266 & 0.81980 \\
352.8876504 & -9.0462571 & 18.70 & 18.00 & 18.06 & 17.78 & 17.31 & 14.37 & 13.39 & 18.05 & 18.20 & 17.90 & 9.80 & 2.455 & 0.93794 \\
10.4610855 & 0.1244942 & 20.64 & 19.97 & 20.24 & 18.78 & 18.56 & 13.52 & 12.57 & 19.96 & 20.22 & 18.81 & 9.75 & 0.456 & 1.00000 \\
200.7151939 & 0.7818887 & 18.65 & 18.42 & 18.76 & 18.53 & 19.38 & 13.64 & 12.98 & 18.42 & 18.65 & 18.49 & 10.75 & 0.520 & 0.99882 \\
113.1172315 & 38.4402326 & 19.06 & 18.85 & 20.84 & 18.57 & 18.43 & 14.83 & 13.38 & 19.09 & 21.38 & 18.87 & 10.72 & 1.138 & 0.99872 \\
146.0592382 & 1.0485406 & 19.76 & 20.13 & 18.73 & 18.53 & 20.21 & 13.98 & 12.95 & 19.84 & 18.70 & 18.55 & 10.24 & 0.693 & 0.98932 \\
6.1838017 & 0.5393128 & 17.18 & 16.93 & 16.58 & 16.89 & 16.40 & 12.67 & 11.57 & 16.91 & 16.77 & 16.90 & 9.28 & 0.402 & 0.98864 \\
123.6649547 & 47.3068383 & 20.09 & 20.85 & 20.18 & 20.15 & 20.63 & 14.52 & 13.60 & 20.23 & 19.86 & 19.84 & 10.64 & 0.782 & 0.98531 \\
202.8579850 & 0.7374107 & 19.01 & 18.71 & 19.72 & 18.20 & 17.99 & 16.10 & 14.45 & 18.85 & 19.98 & 18.39 & 10.95 & 2.018 & 0.98444 \\
157.5410952 & 1.4848465 & 18.99 & 18.74 & 18.32 & 17.89 & 18.21 & 14.49 & 13.14 & 18.77 & 18.32 & 17.79 & 10.58 & 1.277 & 0.95529 \\
\\
12.6721473 & -9.4847679 & 16.63 & 16.24 & 15.81 & 15.57 & 15.33 & 13.38 & 12.39 & 16.31 & 15.79 & 15.52 & 9.78 & 1.192 & 0.94780 \\
163.5762432 & 0.1043169 & 19.52 & 19.26 & 18.71 & 18.33 & 17.81 & 13.92 & 13.32 & 19.75 & 19.30 & 18.97 & 10.79 & 0.349 & 0.93549 \\
124.4711822 & 45.8888824 & 19.31 & 19.19 & 19.44 & 18.77 & 18.77 & 15.69 & 14.31 & 19.28 & 19.73 & 18.91 & 11.21 & 1.742 & 0.90558 \\
43.9399370 & -0.8650154 & 19.62 & 19.38 & 19.25 & 19.98 & 19.70 & 14.68 & 13.79 & 19.41 & 19.27 & 19.68 & 11.31 & 0.751 & 0.89951 \\
112.6878885 & 36.5752726 & 19.93 & 19.06 & 18.37 & 18.34 & 18.02 & 13.77 & 12.46 & 19.06 & 18.36 & 18.39 & 9.78 & 1.063 & 0.89873 \\
122.4955314 & 45.7172141 & 19.39 & 19.14 & 18.68 & 18.39 & 17.83 & 14.30 & 13.60 & 19.75 & 19.51 & 19.30 & 10.87 & 0.366 & 0.89737 \\
358.1587047 & 1.0978856 & 18.98 & 18.16 & 17.54 & 17.23 & 17.07 & 14.05 & 12.84 & 18.11 & 17.54 & 17.21 & 9.25 & 2.994 & 0.88778 \\
209.9342104 & 1.4693924 & 19.84 & 19.67 & 18.97 & 18.85 & 18.74 & 14.75 & 13.50 & 19.88 & 19.30 & 19.27 & 10.06 & 1.096 & 0.88693 \\
39.2512724 & -1.0251575 & 19.86 & 19.35 & 18.68 & 18.47 & 18.00 & 14.39 & 13.80 & 19.71 & 19.06 & 18.86 & 10.88 & 0.344 & 0.88225 \\
15.0132242 & 15.8495361 & 17.66 & 17.32 & 17.07 & 16.59 & 16.70 & 13.68 & 12.73 & 17.39 & 17.15 & 16.65 & 9.32 & 0.109 & 0.86729 \\
\\
14.6031392 & 0.6870584 & 17.02 & 17.05 & 16.94 & 16.65 & 16.53 & 13.78 & 12.53 & 17.09 & 17.00 & 16.69 & 9.29 & 1.921 & 0.85083 \\
197.0064644 & 0.0958673 & 20.92 & 20.12 & 19.44 & 18.69 & 18.54 & 14.43 & 13.53 & 21.07 & 20.64 & 19.92 & 10.42 & 0.480 & 0.83156 \\
33.2483206 & -0.5081826 & 17.90 & 17.65 & 17.67 & 17.49 & 17.08 & 14.34 & 13.21 & 17.70 & 17.73 & 17.56 & 9.94 & 0.395 & 0.83109 \\
\hline
\end{tabular}
\caption{Coordinates and colours of the 43 SDSS objects selected by visual inspection. The candidates are ranked primarily by visual inspection score and secondarily by lens score.}
\label{tab:candcoord}
\end{table*}
Taking the list of objects with spectra, we visually inspect them and assign a score of 0--3, where 0 corresponds to `not a lens' and 3 corresponds to `likely a lens.' We assign the rankings blind to the spectra so as not to bias our score assignments. We use the spectra afterwards as a check on our visual inspection step, to ensure that we do not assign high scores to the contaminant classes. The distribution of our scores is shown in Figure \ref{fig:ourrankings}. We first note the difficulty in distinguishing between QSOs and stellar contaminants. However, the stellar contaminants constitute a very small sample, making up only $\sim$2\% of all objects in the list. We are much better at identifying galaxy contaminants, assigning fewer than 3\% of them scores greater than 1.5.
\begin{figure}
\centering
\includegraphics[width=3in]{plots/ourrankings.png}
\caption{Distributions of visual inspection scores, split into QSOs, galaxy contaminants, and stellar contaminants. Note that the stellar contaminants come from a small sample size of $N=11$.}
\label{fig:ourrankings}
\end{figure}
From our visual inspection scores, we have 43 objects with scores greater than 1.5, after eliminating all contaminants based on the spectroscopic information. These are listed in Table \ref{tab:candcoord}, sorted first by visual inspection score and secondarily by the lens probability assigned by our model. Image cutouts of the objects are shown in Figure \ref{fig:candidatecutouts}.
Of the 43 selected candidates, 5 objects have imaging available in the Hubble Legacy Archive \citep{hst1,hst2,hst3,hst4,hst5}. The images of these objects are shown below their SDSS counterparts in Figure \ref{fig:HSTimages}. Of the five objects, two are known lensed quasars while the others are singly imaged AGN. The two known lenses correspond to ranks 1 and 3 in our list of candidates, providing more confidence in our visual inspection step.
\newcommand{1.3in}{1.3in}
\begin{figure*}
\includegraphics[width=1.3in]{plots/sdss1.png}
\includegraphics[width=1.3in]{plots/sdss2.png}
\includegraphics[width=1.3in]{plots/sdss3.png}
\includegraphics[width=1.3in]{plots/sdss4.png}
\includegraphics[width=1.3in]{plots/sdss5.png}
\includegraphics[width=1.3in]{plots/HST1.png}
\includegraphics[width=1.3in]{plots/HST2.png}
\includegraphics[width=1.3in]{plots/HST3.png}
\includegraphics[width=1.3in]{plots/HST4.png}
\includegraphics[width=1.3in]{plots/HST5.png}
\caption{Five of our top 43 candidates with imaging data available in the Hubble Legacy Archive. From left to right, the objects correspond to the 31st, 7th, 1st, 39th, and 3rd objects in Table \ref{tab:candcoord}. The middle and rightmost objects are both lenses.}
\label{fig:HSTimages}
\end{figure*}
It is important to remember that we only used the probabilities from about 10\% of the objects in SDSS passing our initial colour cuts (Section \ref{sect:largesdss}), so we expected to recover at best 10\% of the known lenses in the SDSS footprint. In our list of 2000 objects with the highest lens probability, the lowest probability was near 0.8. In a full examination of the entire survey, a combination of cuts would be needed: more demanding cuts at query level, e.g. on the limiting $W2$ magnitude and WISE colours; upper cuts in membership probabilities relative to other classes; a cutoff in the minimum $p(\rm{lens})$ score; and possibly a re-modulation of other membership probabilities, such as extended quasars at $1.2<z<1.75,$ to account for the artificial clustering around specific classes.
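A full-survey scan of the kind outlined above could combine these cuts into a single filter; the following is a minimal sketch, in which all field names and thresholds are hypothetical placeholders rather than values used in this work.

```python
# Illustrative combination of query-level and probability cuts for a
# full-survey selection.  Thresholds and field names are placeholders.
def passes_selection(obj, w2_limit=17.0, p_lens_min=0.8, p_other_max=0.5):
    if obj["W2"] > w2_limit:         # query-level cut on limiting W2 magnitude
        return False
    if obj["p_lens"] < p_lens_min:   # cutoff on minimum lens probability
        return False
    # upper cuts on the membership probabilities of competing classes
    if max(obj["p_qso"], obj["p_galaxy"], obj["p_star"]) > p_other_max:
        return False
    return True
```

Re-modulating other membership probabilities (e.g. extended quasars at $1.2<z<1.75$) would amount to rescaling the class probabilities before applying the upper cuts.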
\section{Summary and Conclusions}
\label{sect:concl}
We have demonstrated the use of Gaussian mixture models as a possible method of searching for strongly lensed quasars using purely photometric data. We begin our search by first identifying different types of possible contaminants. For each of these classes, we develop training sets either with simulated objects, in the case of rare objects such as lenses, or with real objects from SDSS for the more populated classes. Using these training sets together with the Expectation Maximization algorithm, we fit our model to the data in order to best describe the different populations. Our main results can be summarized as follows:
\begin{enumerate}
\item Our Gaussian mixture model is capable of discriminating between our chosen classes, and the performance improves by adding features. After training, the model is flexible enough to adjust itself to real and larger SDSS datasets, and further improve its performance.
\item When translated to VST-ATLAS, the model trained on SDSS is still able to sort most objects into their proper classes.
However, we find that the \texttt{psf} and \texttt{model} magnitudes, which prove useful in the SDSS implementation, cannot be easily replaced by the \texttt{AperMag3} and \texttt{AperMag6} magnitudes in VST-ATLAS.
\item When tested on the known lenses in SDSS, the model typically assigns either very high or very low lens probabilities. The model performs well on small separation blended systems that are the main focus of our search. Conversely, lenses that receive low probabilities typically receive high point-like QSO probabilities and are preferentially lenses with large image separation. In future implementations, one could include additional training sets designed to find this additional class of lenses.
\item Of the objects receiving high lens probabilities, roughly 80\% are QSOs and 20\% are either galaxy or stellar contaminants. An additional classification step is necessary before they can be considered viable lens candidates in order to minimize contamination. We illustrate this step using visual classification, and produce a list of 43 lensed quasar candidates selected from 10\% of SDSS, all of which are known to contain an active galactic nucleus from SDSS spectroscopy. Five of the candidates happen to have archival HST images, including two known lenses.
\end{enumerate}
A word of caution is also in order. The expectation maximization algorithm used on a Gaussian mixture model remains a semi-supervised machine learning method. The Gaussian parameters adjust themselves based on the data, but the method requires user supervision to ensure that the classes assigned by the model do, indeed, correspond to the classes they were programmed to describe. Since the EM algorithm can only maximize the likelihood function and is not penalized for misclassifying objects in the training set, it is possible that one of the Gaussians might describe a different class of objects than it was initially set to describe.
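The caution above can be made concrete with a toy example: a two-component one-dimensional mixture fitted by EM, followed by the kind of supervision check advocated here. This is purely illustrative (our actual model is multi-dimensional and multi-class), and the data are synthetic.

```python
import numpy as np

# Minimal 1-D, two-component EM fit.  Each Gaussian is initialized from a
# "training" guess for its class; after fitting we verify that the
# components still describe the classes they were set up to describe.
def em_gmm(x, mu, sigma, weight, n_iter=50):
    mu, sigma, weight = map(np.array, (mu, sigma, weight))
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each datum
        dens = (weight / (np.sqrt(2 * np.pi) * sigma)
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood parameter updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        weight = nk / len(x)
    return mu, sigma, weight

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, weight = em_gmm(x, mu=[-1.0, 1.0], sigma=[1.0, 1.0],
                           weight=[0.5, 0.5])
# Supervision step: component 0 was meant to describe the x ~ -2 class;
# flag the fit if the components swapped or merged.
assert mu[0] < 0 < mu[1], "components no longer match their intended classes"
```

In a real application, this check would compare each fitted Gaussian against held-out labeled objects of its intended class, since the likelihood alone does not penalize such swaps.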
In conclusion, our study demonstrates the power of Gaussian mixture
models for selecting samples of lensed quasar candidates with high
purity, suitable for spectroscopic and/or high-resolution imaging
follow-up. The strategy illustrated here, however, is by no means
unique, and different choices and improvements are possible. For
example, one could include stricter colour cuts before the visual
inspection step in an effort to increase the purity of the sample. As
we showed with the known lenses in SDSS, one can reduce the total
sample by an order of magnitude while losing only a small fraction of
the true lenses. Furthermore, we have seen that adding features, and
selecting them carefully, can have a drastic effect on the
performance of the model. By extending the model to higher dimensions,
it will be possible to improve the selection, but with increased
computational complexity. Alternatively, the visual inspection step
could be replaced with pixel-based machine learning techniques
\citep[e.g.][]{Agn++15a} or model-based techniques \citep{Marshall:2009p593,Cha++15}.
\section*{Acknowledgments}
We are indebted to Philip J. Marshall for the suggestion of using
population-based approaches and for insightful discussions during the
development of the method that helped improve this paper. We thank our
colleagues in the STrong lensing Insights into the Dark Energy Survey
(STRIDES\footnote{STRIDES is a Dark Energy Survey Broad External
Collaboration; PI: Treu. \url{http://strides.astro.ucla.edu}})
collaboration for many stimulating conversations on lens finding. We
acknowledge support from the Packard Foundation through a Packard
Research Fellowship and from the National Science Foundation through
grant AST-1450141.
The code used to implement Gaussian mixture models, as well as our tables of training sets, is available through the GitHub repository \url{https://github.com/tommasotreu/WAT_GMM}. Please email TT to request access.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:introduction}
The initial discovery of the astrophysical neutrino flux around PeV energies \citep{Aartsen2013} a few years ago marked the beginning of high-energy neutrino astronomy. Since then, its properties have been measured with increasing accuracy \citep{Aartsen2015}. The most recent results indicate a soft spectrum with a spectral index of $-2.5 \pm 0.1$ between around 10 TeV and 2 PeV with no significant deviation from an equal flavor composition at Earth \citep{Aartsen2015a}.
The neutrino signal has been found to be compatible with an isotropic distribution on the sky. This apparent isotropy suggests that a significant fraction of the observed neutrinos is of extragalactic origin, a result which is also supported by \citet{Ahlers2015}. However, there are also indications for a 3-$\sigma$ anisotropy \citep{Neronov2016} if low-energy events ($<100$ TeV) are omitted. Further data are required to settle this issue.
Potential extragalactic source candidates
are Active Galactic Nuclei (\textbf{AGN}), where both radio-quiet \citep{Stecker1991} and radio-loud \citep{Mannheim1995} objects have been considered as sites of neutrino production for many years.
Blazars, a subset of radio-loud active galactic nuclei with relativistic jets pointing towards Earth \citep{Urry1995}, are investigated in this paper. They are commonly classified based on the properties of the spectral energy distribution (\textbf{SED}) of their electromagnetic emission. The blazar SED features two distinctive peaks: a low-energy peak between infrared and X-ray energies,
attributed to synchrotron emission of energetic electrons, and a high-energy peak at $\gamma$-ray energies, which can be explained by several and possibly competing interaction and radiation processes of high-energy electrons and high-energy nuclei \citep{Boettcher2013}.
Several works suggest that blazar SEDs follow a sequence \citep{Fossati1998, Bottcher2002, Cavaliere2002,Meyer2011}, in which the peak energy of the synchrotron emission spectrum decreases with increasing blazar luminosity. Accordingly, blazars can be classified into low synchrotron peak (\textbf{LSP}), intermediate synchrotron peak (\textbf{ISP}) and high synchrotron peak (\textbf{HSP}) objects\footnote{This scheme is a generalization of the XBL/RBL classification of BL\,Lac objects introduced by \citet{Padovani1995}.}, a classification scheme introduced in \citet{Abdo2010} which we use throughout this work. A second classifier is based on the prominence of emission lines in the SED over the non-thermal continuum emission of the jet. Flat Spectrum Radio Quasars (\textbf{FSRQ}s) show Doppler-broadened optical emission lines \citep{Stickel1991}, while in so called BL\,Lac objects emission lines are hidden under a strong continuum emission.
Many calculations of high-energy neutrino emission from the jets of blazars can be found in the literature. Neutrinos could be produced via charged pion decay in interactions of high-energy protons
with gas (pp-interactions) in the jets \citep{Schuster2002} or in interactions of protons with internal \citep{Mannheim1995} or
external \citep{Atoyan2001} photon fields (p$\gamma$-interactions).
Early models for the neutrino emission from blazars made no explicit distinction based on the blazar class. Some of these have already been explicitly excluded at $90 \%$ C.L. by past diffuse neutrino flux measurements \citep{Abbasi2011, Aartsen2014a}, for example the combined pp+p$\gamma$ predictions in \citet{Mannheim1995}.
More recent publications, on the other hand, differentiate between specific classes of blazars and are largely not constrained by experiment, yet. The neutrino production of BL\,Lac objects is modeled e.g.
in \citet{Muecke2003,Tavecchio2014,Padovani2015} while neutrino production of FSRQs is calculated e.g. in \citet{Becker2005,Murase2014}. The models by \citet{Tavecchio2014} and \citet{Padovani2015} were in particular constructed to explain parts or all of the astrophysical neutrino flux.
With the analysis presented here, we are able to test large parts of the parameter space of many of these models for the first time.
We do not consider theoretical calculations from the literature for individual objects, since these are not directly comparable to our results.
The neutrinos predicted by most models are produced in charged pion decays which come with an associated flux of $\gamma$-rays from neutral pion decays.
Even if the hadronic fraction is sub-dominant, one could on average expect a higher neutrino luminosity for a higher observed $\gamma$-luminosity \citep{Murase2014}.
On a source-by-source basis, however, variations in the exact $\nu/\gamma$ correlation are likely. One strategy to cope with this uncertainty, which we follow in this paper, is to analyze large samples of objects and thereby to investigate average properties. We use the Fermi-LAT 2LAC catalogue\footnote{The successor catalogue 3LAC \citep{Ackermann2015} was not yet published when this analysis was carried out. For the $\gamma$-weighting scheme (see section \ref{sec:gamma_weighting}), the results are expected to be nearly identical. The 2LAC sample already resolves the majority of the GeV-blazar flux and the brightest blazars are also expected to be bright in the 3LAC catalogue in the quasi-steady approximation.} \citep{Ackermann2011} to define search positions for our analysis (see section \ref{section:blazar_populations}). The blazars in the 2LAC catalogue comprise the majority ($ \approx 70 \%$) of the total $\gamma$-ray flux emitted from all GeV-blazars in the observable universe between $100 \ \mathrm{MeV}$ and $100 \ \mathrm{GeV}$ (see appendix \ref{appendix:correction_factor}). Compared to other Fermi catalogues starting at higher energies, 1FHL \citep{Ackermann2013} or 2FHL \citep{Ackermann2016}, the 2LAC contains more than twice the number of blazars.
The goal is to look for a cumulative neutrino flux excess from all 862 2LAC blazars or from specifically selected sub-populations using muon-track data with an angular resolution of about a degree in an unbinned maximum-likelihood stacking approach. We use two different "weighting schemes" (see section \ref{section:weighting_schemes}) to define the probability density functions (\textbf{PDF}s) for the neutrino signal, expressing different assumptions about the relative neutrino flux for each source. Each weighting scheme represents its own test of the data.
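Schematically, the unbinned stacking search combines per-source signal PDFs through the chosen source weights. The following sketch shows the structure of such a likelihood under strong simplifying assumptions (a flat-space 2-D Gaussian spatial PDF and a constant background PDF); the PDFs in the actual analysis also include energy information and the detector response.

```python
import numpy as np

# Source-weighted sum of per-source spatial PDFs (2-D Gaussian
# approximation in flat-sky coordinates; illustrative only).
def stacked_signal_pdf(event_pos, src_pos, src_weight, sigma):
    d2 = np.sum((event_pos[None, :] - src_pos) ** 2, axis=1)
    per_src = np.exp(-0.5 * d2 / sigma**2) / (2 * np.pi * sigma**2)
    w = src_weight / src_weight.sum()   # the weighting scheme enters here:
    return np.sum(w * per_src)          # equal weights, gamma-flux weights, ...

# Standard point-source likelihood with ns signal events out of n_tot:
# L = prod_i [ ns/n_tot * S_i + (1 - ns/n_tot) * B_i ]
def neg_log_likelihood(ns, events, bkg_pdf, src_pos, src_weight, sigma):
    n_tot = len(events)
    s = np.array([stacked_signal_pdf(e, src_pos, src_weight, sigma)
                  for e in events])
    return -np.sum(np.log(ns / n_tot * s + (1 - ns / n_tot) * bkg_pdf))
```

In this picture, the equal weighting scheme corresponds to \texttt{src\_weight} set to ones, while a $\gamma$-ray weighting would set it proportional to the observed $\gamma$-ray fluxes.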
The analysis differs from previous point source searches most drastically in two points. \begin{enumerate}
\item The blazar populations comprise nearly 2 orders of magnitude more sources.
\item For the first time, we use a model-independent weighting scheme. In this test of the data, we make nearly no assumption about the exact $\nu/\gamma$ correlation, except that the neutrino flux originates from the defined blazar positions.
\end{enumerate}
Section \ref{section:blazar_populations} defines the five blazar populations considered in this analysis.
Section \ref{section:data} describes the muon track dataset used for this search. Section \ref{section:analysis}
summarizes the analysis, including the technique of the unbinned stacking search, a description of different weighting schemes,
the confidence interval construction, and a discussion on potential biases from non-hadronic contributions to the $\gamma$-ray flux.
Section \ref{section:results} presents the analysis results and section \ref{section:discussion} discusses their implications.
\section{The blazar populations}
\label{section:blazar_populations}
The Fermi-LAT 2LAC catalogue \citep{Ackermann2011} contains 862 GeV-emitting blazars at high galactic latitudes $|b|>10^{\circ}$ that are not affected by potential source confusion\footnote{No source confusion means that the \texttt{CLEAN} flag from the
catalogue for a particular source is set.}. The data for this catalogue were taken between August 2008 and August 2010. We use the spectroscopic classification into FSRQ and
BL\,Lac objects \citep{Stickel1991} and the independent classification into LSP, ISP and HSP
objects \citep{Abdo2010} to define sub-populations of these 862 objects.
We do not impose any other cuts (e.g. on the $\gamma$-ray flux) because the exact neutrino flux expectations are unknown as outlined in section \ref{sec:introduction}.
The motivations for the particular sub-samples are described in the following.
\begin{description}
\item[All 2LAC Blazars (862 objects)]{
The evolutionary blazar sequence \citep{Cavaliere2002, Bottcher2002} suggests that blazars form a continuous spectrum of objects that are connected via cosmological evolution. A recent study by \citet{Ajello2014} supports this hypothesis. Since the corresponding evolution of the neutrino emission is not known, the most unbiased assumption is to group all blazars together.
This is especially justified for the
analysis using the equal weighting scheme discussed in section \ref{section:weighting_schemes}.
}
\item[FSRQs (310 objects)]{
FSRQs show strong, broad emission lines that potentially act as intense radiation targets for photomeson
production of neutrinos \citep{Atoyan2001,Murase2014}.
}
\item[LSPs (308 objects)]{
The majority of FSRQs are LSP objects. \citet{Giommi2012} argue that LSP-BL\,Lacs are actually physically similar to FSRQs, but with emission lines that are overwhelmed by the strong jet continuum. This sample therefore groups all LSP objects together.}
\item[ISP+HSPs (301 objects)]{
HSP objects differ from LSP objects in terms of their luminosities and mainly consist of BL\,Lacs \citep{Ajello2014}.
The peak-frequency boundary between LSP and HSP is only defined artificially, with ISP objects filling the gap.
In order to have a larger sample of objects, the HSP objects are grouped together
with the ISP objects in one combined sample for this analysis.}
\item[LSP-BL\,Lacs (68 objects)]{
Objects that share the LSP and BL\,Lac classification have been specifically considered for neutrino emission
in \citet{Muecke2003}. Therefore we test them as a separate sample. They form the smallest sub-population in this analysis.
}
\end{description}
The distributions of the sources on the sky for the largest sample (all 2LAC blazars) and the smallest sample (LSP-BL\,Lacs) are shown in
figure \ref{fig:blazar_distribution}. A modest LAT-exposure deficit and lower sky coverage by optical surveys in the southern sky lead to a slight deficit of objects in the southern hemisphere \citep{Ackermann2011}. The effect is most prominent for the BL\,Lac-dominated samples. However, blazars without optical association are also included in the 2LAC catalogue and partly make up for this asymmetry in the total sample. For simplicity, we assume a quasi-isotropic source distribution for all populations (excluding the source-free region around the galactic plane) for the calculation of quasi-diffuse fluxes. This assumption also appears reasonable in view of the weight distributions of sources (equal weighting) in figures \ref{fig:histogrammed_weights} (a)--(e), appendix \ref{appendix:supplementary}. Figure \ref{fig:population_overlap} shows the overlap between the samples.
The LSP-BL\,Lac, FSRQ and ISP+HSP samples are nearly independent, with a small overlap of 3 sources between the FSRQ and ISP+HSP samples. The largest overlap exists between the FSRQ and LSP samples, which share around $60 \%$ of their sources. The all-blazar sample contains 167 sources that are not shared with any sub-sample. These are sources that are either unclassified or classified only as BL\,Lac objects, with no corresponding synchrotron-peak classification.
\begin{figure*}
\epsscale{1.1}
\plottwo{skymap_allblazars.pdf}{skymap_lspbllacs.pdf}
\caption{Distribution of sources in the sky for the largest and smallest sample of blazars (in equatorial Mollweide projection) --- (left) The largest sample, all 2LAC blazars (862 sources) --- (right) The smallest sample, LSP-BL\,Lacs (68 sources). The excluded region of the catalogue ($|b|\leq 10 ^\circ$) is highlighted in red.}
\label{fig:blazar_distribution}
\end{figure*}
\begin{figure}
\epsscale{1.1}
\plotone{overlap.pdf}
\caption{Visualization of the source overlap between the different blazar populations.}
\label{fig:population_overlap}
\end{figure}
\section{Data selection}
\label{section:data}
IceCube is a neutrino telescope located at the geographic South Pole. It consists of about one $\mathrm{km}^3$ of Antarctic ice that is instrumented with 5160 optical photo sensors which are connected via cables (``strings'') to the data acquisition system at the surface. The photo sensors detect Cherenkov light emitted by charged particles that are produced in the interactions of neutrinos with nuclei and electrons in the glacial ice. The geometry and sensitivity of the photo sensors lead to an effective energy threshold for neutrinos of about $100 \ \mathrm{GeV}$. A more detailed description of the detector and the data acquisition can be found in \citet{Abbasi2009}.
Two main signatures can be distinguished for the recorded events: ``track-like'' and ``shower-like''. Only track-like events are of
interest for the analysis here. They are the characteristic signature of muons produced in the charged-current interactions of muon neutrinos\footnote{We neglect track-like signals from $\nu_{\tau} + N \rightarrow \tau + X \rightarrow \mu + X$, i.e. muons as end products of a $\nu_{\tau}$ charged-current interaction chain. The $\tau \rightarrow \mu$ decay happens with a branching fraction of only $ 17 \%$ \citep{Olive2014}, and the additional decay step lowers the outgoing muon energy, leading to an even further suppression of the $\nu_\tau$ contribution in a sample of track-like events. For hard fluxes (spectral index 1-2) above PeV energies, where the $\nu_\tau$-influence becomes measurable due to $\nu_\tau$-regeneration \citep{Bugaev2004}, this treatment is conservative.}.
IceCube was constructed between 2006 and 2011 with a final configuration of 86 strings.
We use data from the 59-string (\textbf{IC-59}), 79-string \textbf{(IC-79}) and 86-string (\textbf{IC-86}) configurations of the IceCube detector recorded between May 2009 and April 2012.
In contrast to previous publications, we do not include data from the 40-string configuration here, since the ice model description in the IC-40 Monte Carlo datasets is substantially different and the sensitivity gain would be marginal.
The track event selection for the three years of data is similar to the ones described in \citet{Aartsen2013a} and \citet{Aartsen2014}. The angular resolution of the majority of events in the track sample is better than $1^{\circ}$ for events with reconstructed muon energies above 10 TeV \citep{Aartsen2014}. The angular reconstruction uncertainty is calculated following the prescription given in \citet{Neunhoffer2006}. We apply one additional minor selection criterion for the estimated angular uncertainty of the reconstructed tracks ($\sigma_{est.} \leq 5^{\circ}$) for computational reasons. The removed events do not have any measurable effect on the sensitivity.
Event numbers for the individual datasets are summarized in table \ref{table:dataset_statistics}.
\begin{table}
\centering
\begin{tabular}{ c||c|c|c }
\hline
dataset & all sky & northern sky & southern sky \\
\hline
\hline
IC-59 & 107011 & 42781 & 64230 \\
\hline
IC-79 & 93720 & 48782 & 44938 \\
\hline
IC-86 & 136245 & 61325 & 74920 \\
\hline
\end{tabular}
\caption{Total number of data events in the respective datasets of IC-59, IC-79 and IC-86 for each celestial hemisphere. ``Northern sky'' means the zenith angle $\theta$ for the incoming particle directions is equal to or larger than $90^{\circ}$. ``Southern sky'' means $\theta < 90^{\circ}$.
}
\label{table:dataset_statistics}
\end{table}
The dataset is dominated by bundles of atmospheric muons produced in cosmic-ray air shower interactions for tracks coming from the southern hemisphere ($\theta < 90^{\circ}$).
Tracks from the northern hemisphere ($\theta \geq 90^{\circ}$) originate mostly from atmospheric neutrino interactions that produce muons.
In order to reduce the overwhelming background of direct atmospheric muons to an acceptable level, it is necessary to impose a high-energy cut for events from the southern hemisphere. The cut raises the effective neutrino energy threshold to approximately 100 TeV \citep{Aartsen2014}, reducing the sensitivity to neutrino sources in this region by at least one order of magnitude for spectra softer than $\mathrm{E}^{-2}$. Only for harder spectra does the southern sky contribute significantly to the overall sensitivity.
The northern sky does not require such an energy cut, as upgoing tracks can only originate from neutrino interactions, which have a much lower incidence rate. However, at very high energies (again around $100 \ \mathrm{TeV}$), the Earth absorbs a substantial fraction of neutrinos, which also reduces the expected astrophysical signal. Charged-current $\nu_\mu$-interactions can happen far outside the instrumented volume and still be detected, as high-energy muons may travel several kilometers through the glacial ice before entering the detector. This effect increases the effective detection area for certain arrival directions, mostly around the horizon.
The most sensitive region is therefore around the celestial equator, which does not require a high energy cut, provides ample target material surrounding the detector, i.e. a large effective area, and does not suffer from absorption of neutrinos above $100$ TeV.
However, these zenith-dependent sensitivity changes are mostly important for the interpretation of the results (see e.g. section \ref{section:generic_upper_limits}). The likelihood approach takes these differences into account with the ``acceptance'' term in eq. (\ref{eq:weighting_term}), section \ref{section:llh}, and a separation into several zenith-dependent analyses is not necessary. For more details on the properties of the datasets and the zenith-dependent sensitivity behaviour, we refer to \citet{Aartsen2013a} and \citet{Aartsen2014}.
\section{Analysis}
\label{section:analysis}
\subsection{The likelihood function for unbinned ML stacking}
\label{section:llh}
The analysis is performed via an extended unbinned maximum likelihood fit \citep{Barlow1990}. The likelihood function consists
of two PDFs, one PDF $B(\overline{x})$ for a background hypothesis and one PDF $S(\overline{x})$ for a signal hypothesis.
Requiring the total number of observed events to be the sum of the signal and background events, the log-likelihood function can be written as
\begin{equation}
\begin{split}
\mathrm{ln}(L)\{n_s, \Gamma_{\mathrm{SI}}\} &= \sum_{i=1}^{N} \mathrm{ln}
\left( \frac{n_s}{N} \cdot S(\delta_i, \mathrm{RA}_i, \sigma_i, \varepsilon_i; \Gamma_{\mathrm{SI}})
\right. \\ &\left. + \left(1-\frac{n_s}{N} \right)\cdot B(\mathrm{sin}(\delta_i), \varepsilon_i) \right) ,
\end{split}
\label{eq:llh}
\end{equation}
where $i$ indexes individual neutrino events.
The likelihood function depends on two free parameters: the normalization factor $n_s$ and spectral index $\Gamma_{\mathrm{SI}}$ of the total blazar signal. For computational reasons we assume that each source of a given population shares the same spectral index.
The background evaluation for each event depends on the reconstructed declination $\delta_i$
and the reconstructed muon energy $\varepsilon_i$. The signal part additionally depends on the reconstructed right ascension $\mathrm{RA}_i$, the angular error estimator $\sigma_i$ and the power-law spectral index $\Gamma_{\mathrm{SI}}$.
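For illustration, the per-event mixture of equation \ref{eq:llh} can be sketched as follows (a minimal Python sketch with toy per-event PDF values; variable names are ours and this is not the analysis code):

```python
import numpy as np

def log_likelihood(ns, S, B):
    """ln L of the mixture model: sum over events of
    log( ns/N * S_i + (1 - ns/N) * B_i )."""
    N = len(S)
    return np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

# toy example: 1000 events, flat background PDF, arbitrary signal PDF values
rng = np.random.default_rng(0)
B = np.full(1000, 1.0 / (2.0 * np.pi))
S = rng.uniform(0.0, 1.0, 1000)
# with ns = 0 the likelihood reduces to the pure-background term
assert log_likelihood(0.0, S, B) == np.sum(np.log(B))
```

In the actual fit, $n_s$ and $\Gamma_{\mathrm{SI}}$ are varied to maximize this quantity.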
The background PDF is constructed from binning the recorded data in reconstructed declination and energy. It is evaluated as
\begin{equation}
B(\mathrm{sin}(\delta_i), \varepsilon_i)=\frac{1}{2 \pi} \cdot f(\mathrm{sin}(\delta_i), \varepsilon_i),
\end{equation}
where $\frac{1}{2 \pi}$ arises from integration over the right ascension and $f$ is the normalized joint probability distribution of the events in declination $\mathrm{sin}(\delta)$ and energy $\varepsilon$.
The signal PDF that describes a given blazar population is a superposition of the individual PDFs for each source,
\begin{equation}
\begin{split}
&S(\delta_i, \mathrm{RA}_i, \sigma_i, \varepsilon_i; \Gamma_{\mathrm{SI}})
\\ &= \frac{\sum_{j=1}^{N_{src}}{w_{j} \cdot S_j(\delta_i, \mathrm{RA}_i, \sigma_i, \varepsilon_i; \Gamma_{\mathrm{SI}})}}{\sum_{j=1}^{N_{src}}{w_{j}}} ,
\end{split}
\label{eq:cumulative_signal}
\end{equation}
where $w_j$ is a weight determining the relative normalization of the PDF $S_j$ for source $j$. This weight therefore accounts for the relative contribution of source $j$ to the combined signal. In general, different choices of $w_j$ are possible.
The two choices used in this work are discussed in section \ref{section:weighting_schemes}. Each term $S_j$ in equation \ref{eq:cumulative_signal} is evaluated as
\begin{equation}
\begin{split}
&S_j(\delta_i, \mathrm{RA}_i, \sigma_i, \varepsilon_i; \Gamma_{\mathrm{SI}}) \\ = \frac{1}{2\pi {\sigma_i}^2}\cdot &\mathrm{exp}\left( -\frac{1}{2} \cdot \left(\frac{ \Psi_{ij}[\delta_i, \mathrm{RA}_i]}{\sigma_i}\right)^2\right) \cdot g_j(\varepsilon_i; \Gamma_{\mathrm{SI}}) ,
\end{split}
\end{equation}
where the spatial term is expressed as a 2D symmetric normal distribution and $g_j$ is the normalized PDF for the reconstructed muon
energy for source $j$. The term $\Psi_{ij}$ is the angular
separation between event $i$ and source $j$.
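The spatial part of $S_j$ can be sketched as follows (illustrative Python; the great-circle formula and the toy event/source geometry are ours):

```python
import numpy as np

def angular_separation(dec1, ra1, dec2, ra2):
    """Great-circle angle Psi (radians) between two directions."""
    c = (np.sin(dec1) * np.sin(dec2)
         + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def spatial_pdf(psi, sigma):
    """2D symmetric Gaussian spatial term of S_j."""
    return np.exp(-0.5 * (psi / sigma) ** 2) / (2.0 * np.pi * sigma ** 2)

# an event reconstructed 1 degree away from a source, with sigma = 1 degree
sigma = np.radians(1.0)
psi = angular_separation(0.0, 0.0, 0.0, np.radians(1.0))
assert abs(psi - np.radians(1.0)) < 1e-9
assert spatial_pdf(0.0, sigma) > spatial_pdf(2.0 * sigma, sigma)
```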
\subsection{Weighting Schemes}
\label{section:weighting_schemes}
The term $w_j$ in equation \ref{eq:cumulative_signal} parametrizes the relative contribution of source $j$ to the combined signal. It corresponds to the expected number of events for source $j$, which can be expressed as
\begin{equation}
w_{j} = \int_{E_{\nu,min}}^{E_{\nu,max}}{ \Phi_{0,j} \cdot h_j(E_{\nu}) \cdot A_{\mathrm{eff}}(\theta_{j} , E_{\nu} ) \ dE_{\nu} } ,
\end{equation}
where $A_{\mathrm{eff}}(\theta_{j}, E_{\nu})$ is the effective area for incoming muon neutrinos from a given source direction at a given energy,
$h_j(E_{\nu})$ denotes the normalized neutrino energy spectrum for source $j$, and $\Phi_{0,j}$ its overall flux normalization. The integration bounds $E_{\nu,\mathrm{min}}$ and $E_{\nu,\mathrm{max}}$ are set to $10^{2} \ \mathrm{GeV}$ and $10^9 \ \mathrm{GeV}$ respectively, except for the differential analysis (see section \ref{section:statistical_tests}), in which they are defined for the given energy band.
Under the assumption that all sources share the same spectral power-law shape,
$w_j$ further simplifies via
\begin{align}
w_{j} &= \left[\Phi_{0,j}\right] \cdot \left[\int_{E_{\nu,min}}^{E_{\nu,max}}{ h(E_\nu; \Gamma_{\mathrm{SI}}) \cdot A_{\mathrm{eff}}(\theta_{j} , E_{\nu} ) \ dE_{\nu} } \right] \nonumber \\ &= \left[C \cdot w_{j, \mathrm{model}} \right] \cdot \left[w_{j, acc.}(\theta_{j}, \Gamma_{\mathrm{SI}})\right] ,
\label{eq:weighting_term}
\end{align}
and splits into a ``model'' term, where $w_{j, \mathrm{model}}$ is proportional to the expected relative neutrino flux of source $j$, and into an ``acceptance'' term, which is fixed by the position of the source and the global energy spectrum.
The term $w_{j, \mathrm{model}}$ is not known, and its choice defines the ``weighting scheme'' for the stacking analysis.
The following two separate weighting schemes are used for the signal PDF in the likelihood analysis, leading to two different sets of tests.
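The acceptance term $w_{j, acc.}$ in equation (\ref{eq:weighting_term}) can be evaluated numerically as sketched below (illustrative Python; the effective area used here is a toy function, not IceCube's real, declination-dependent response):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integration on an arbitrary grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def acceptance_weight(a_eff, gamma, e_min=1e2, e_max=1e9, n=2000):
    """w_{j,acc.}: integral of the normalized power-law spectrum
    h(E) ~ E^-gamma times the effective area A_eff(E) over the
    energy range used in the analysis."""
    E = np.logspace(np.log10(e_min), np.log10(e_max), n)
    h = E ** (-gamma)
    h = h / trap(h, E)
    return trap(h * a_eff(E), E)

# toy effective area rising with energy (NOT IceCube's real response)
a_eff = lambda E: 1e-4 * np.sqrt(E)
# a harder assumed spectrum yields a larger acceptance weight
assert acceptance_weight(a_eff, 2.0) > acceptance_weight(a_eff, 2.7)
```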
\subsubsection{$\gamma$-weighting}
\label{sec:gamma_weighting}
For this weighting scheme we first have to assume that the $\gamma$-ray flux can be modeled as being quasi-steady between 2008 and 2010, the time period which forms the basis for the 2LAC catalog. This makes it possible to extrapolate the flux expectation of each source to other time periods, e.g. into the non-overlapping part of the data-taking period of the IceCube data for this analysis (2009--2012).
Each model weight, i.e. the relative neutrino flux expected to arrive from a given source, is then given by the source's $\gamma$-ray energy flux observed by Fermi-LAT, integrated over the energy range from $100 \ \mathrm{MeV}$ to $100 \ \mathrm{GeV}$.
\begin{equation}
w_{j, \mathrm{model}}=\int_{100 \mathrm{MeV}}^{100 \mathrm{GeV}}{E_{\gamma} \frac{d \phi_{\gamma, j}}{d E_{\gamma}} \ dE_{\gamma}}
\end{equation}
This is motivated by the fact that a similar amount of energy is channeled into the neutrino and $\gamma$-ray emission if pion decay from $pp$ or $p\gamma$ interactions dominates the high-energy interaction. While the source environment is transparent to high-energy neutrinos, it might not be for $\gamma$-rays. Reprocessing of $\gamma$-rays due to $\gamma \gamma$ interactions might then shift the energies of the photons to GeV and sub-GeV energies before they can leave the sources, which would make them detectable by the Fermi-LAT. This might even be expected in $p\gamma$ scenarios \citep{Murase2015}. Since a large fraction of blazars are located at high redshifts $z\geq 1$\footnote{With the exception of HSP objects, see \citet{Ackermann2011}.}, this reprocessing will also take place during propagation of the photons in the extragalactic background light (\textbf{EBL}),
shifting $\gamma$-ray energies below a few hundred GeV for such sources \citep{Dominguez2013}. This places them potentially again in the energy range of the Fermi-LAT 2LAC catalogue.
Even in the case that synchrotron contributions (e.g. muon or pion synchrotron radiation) dominate over pion decay in the $\mathrm{MeV}$-$\mathrm{GeV}$ range,
which has been considered in particular for BL\,Lac objects \citep{Muecke2003}, one would expect the overall $\gamma$-ray emission to be proportional to the neutrino emission. This is also the case in models where inverse Compton processes dominate the high-energy $\gamma$-ray emission \citep{Murase2014}.
The preceding arguments in favour of a $\gamma$-weighting scheme assume that all sources show equal proportionality. On a source-by-source basis, however, the proportionality factor can vary, as already mentioned in section \ref{sec:introduction}.
One contributing factor is that Fermi probes different sections of the blazar $\gamma$-ray peak for each source relative to the peak position. For simplicity, we do not perform a spectral source-by-source fit in this paper, leaving this aspect for potential future work. This is also mostly an issue for the ``All 2LAC-Blazar'' sample, since the other sub-classifications described in section \ref{section:blazar_populations} depend on the peak position and this effect is largely mitigated.
There are additional reasons for source-by-source fluctuations in the $\gamma/\nu$ correlation due to EBL reprocessing. First, the EBL absorption might not be sufficient for close-by sources, such that emerging high-energy $\gamma$-rays are not reprocessed into the energy range of the 2LAC catalogue which ends at $100 \ \mathrm{GeV}$.
Second, EBL reprocessing differs between sources depending on the line-of-sight magnetic fields which deflect charged particle pairs produced in EBL cascades \citep{Aharonian1994} differently.
Third, strong in-source $\gamma \gamma$ reprocessing could lead to $\gamma$-rays at even lower energies than $100 \ \mathrm{MeV}$ \citep{Murase2015} which would be below the 2LAC energy range.
All results presented in section \ref{section:results} making use of the $\gamma$-weighting scheme assume that the potential source-to-source fluctuations in the $\gamma-\nu$ correlation described here average out for large source populations and can be neglected. More information on the distribution of the weights in dependence of declination can be found in figures \ref{fig:histogrammed_weights} (a)--(e), appendix \ref{appendix:supplementary}.
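For a pure power-law $\gamma$-ray spectrum, the energy-flux weight defined above has a closed form, which can be sketched as follows (illustrative Python; the two sources and their spectral indices are hypothetical):

```python
import numpy as np

def energy_flux(phi0, index, e1=0.1, e2=100.0):
    """Integral of E * dphi/dE for dphi/dE = phi0 * (E/GeV)^-index,
    between e1 = 0.1 GeV (100 MeV) and e2 = 100 GeV (analytic)."""
    if index == 2.0:
        return phi0 * np.log(e2 / e1)
    p = 2.0 - index
    return phi0 * (e2 ** p - e1 ** p) / p

# two hypothetical sources with equal normalization phi0 = 1
w_hard = energy_flux(1.0, 2.0)   # harder Fermi-LAT spectrum
w_soft = energy_flux(1.0, 2.5)   # softer Fermi-LAT spectrum
assert w_hard > w_soft           # the harder source gets the larger weight
```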
\subsubsection{Equal weighting}
The $\gamma$-weighting scheme is optimal under the assumption that the neutrino flux follows the measured $\gamma$-energy flux exactly. Given the uncertainties discussed in section \ref{sec:gamma_weighting}, we also use another weighting scheme,
\begin{equation}
w_{j,\mathrm{model}}=1 ,
\end{equation}
which we expect to be ultimately more sensitive if the actual $\gamma-\nu$ correlation varies strongly from source to source. It provides a complementary and model-independent test in which we are maximally agnostic to the degree of correlation between $\gamma$-ray and neutrino luminosities.
We do not assume a specific neutrino emission in a given source when calculating the flux upper limits for the equal weighting scheme, in particular no equal emission. We only assume, to some approximation, that the differential source count distributions (\textbf{SCD}s) of $\gamma$-rays and neutrinos have comparable shapes. The differential source count distribution, $dN/dS$, describes how the energy-integrated flux $S$ is distributed over all sources and is a crucial property of any cosmological source population.
Section \ref{section:ensemble_simulations} provides more information on the technical aspects of neutrino flux injection in the equal weighting test. Appendix \ref{appendix:scd_dependence} then discusses why the methodology is robust against variations in the actual shape of the $dN/dS$ distribution for the neutrino flux in the IceCube energy range, and why the final result is valid even if the neutrino SCD is different from the $\gamma$-ray SCD.
\subsection{Statistical tests}
\label{section:statistical_tests}
We perform statistical tests for each population of blazars.
The log-likelihood difference $\lambda$ defines our test statistic (\textbf{TS}), given by
\begin{equation}
\begin{split}
\lambda= &-2 \cdot \mathrm{log}(L)\{n_s=0\} \\ &+ 2 \cdot \mathrm{log}(L)\{n_s=n_{s,\mathrm{max}}, \Gamma_{\mathrm{SI}}=\Gamma_{\mathrm{SI},\mathrm{max}}\} ,
\end{split}
\end{equation}
where $n_{s,\mathrm{max}}$ and $\Gamma_{\mathrm{SI},\mathrm{max}}$
are the number of signal events and the signal spectral index that maximize
the TS. We simulate an ensemble of background-only skymaps and compare the resulting TS distribution with the TS value obtained from the data. The p-value is then defined as the fraction of skymaps in the background ensemble that have a larger TS value than the one observed. Ensembles of skymaps with different injected signal strengths are then used to calculate the resulting confidence interval.
See section \ref{section:ensemble_simulations} for details on the skymap simulations.
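The p-value construction from the background ensemble can be sketched as follows (illustrative Python; the chi-squared-like TS distribution is a toy stand-in for the simulated ensemble):

```python
import numpy as np

def p_value(ts_obs, ts_background):
    """Fraction of background-only skymaps with a TS larger than observed."""
    return float(np.mean(np.asarray(ts_background) > ts_obs))

# toy background ensemble with a chi2-like TS distribution
rng = np.random.default_rng(1)
ts_bg = rng.chisquare(df=1, size=100_000)
assert p_value(0.0, ts_bg) > 0.99    # TS = 0 is typical of background
assert p_value(25.0, ts_bg) < 1e-3   # TS = 25 would be a rare fluctuation
```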
In total we perform two distinct types of tests for which p-values are calculated. The first (``integral'') assumes a power law spectrum for the blazar emission over the full energy range observable with IceCube (unless stated otherwise).
The second (``differential'') assumes a neutrino signal that is confined to a small energy range (half a decade in energy) and has a power law spectrum with a spectral index of $-2$ within this range. We perform the differential test for 14 energy ranges between $100 \ \mathrm{GeV}$ and $1 \ \mathrm{EeV}$.
\subsection{Simulations}
\label{section:ensemble_simulations}
We estimate the sensitivity of our searches in both weighting schemes using an ensemble of simulated skymaps containing both background and signal events.
We simulate the background by drawing events from the experimental data sample, then randomizing their right ascensions to remove any correlation with blazar positions. This is the same method used in previous IceCube point source searches \citep{Aartsen2013a,Aartsen2014} and mitigates systematic uncertainties in the background description due to the data-driven event injection.
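The scrambling step can be sketched as follows (a minimal Python sketch with three toy events; the real analysis scrambles the full experimental sample):

```python
import numpy as np

def scrambled_background(dec, energy, rng):
    """One background-only skymap: keep each event's declination and
    energy, draw a new right ascension uniformly in [0, 2*pi)."""
    ra = rng.uniform(0.0, 2.0 * np.pi, size=len(dec))
    return ra, dec, energy

rng = np.random.default_rng(0)
dec = np.radians([10.0, -5.0, 45.0])
energy = np.array([1e3, 5e4, 2e3])
ra, dec_s, e_s = scrambled_background(dec, energy, rng)
assert np.all((0.0 <= ra) & (ra < 2.0 * np.pi))
assert np.all(dec_s == dec) and np.all(e_s == energy)
```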
The injection for signal differs depending on the weighting schemes. For the $\gamma$-weighting scheme, we inject signal events with the relative flux contribution of each source determined by the weight factors $w_{j,\mathrm{model}}$ that are used in the PDF.
In the equal weighting scheme, following the same approach would lead to a simulated signal of $n$ equally bright sources, which
is not realistic for a population distributed widely in redshift and luminosity. Therefore
we inject events using a relative neutrino flux contribution that follows a realistic SCD. Since the neutrino $dN/dS$ distribution of blazars is unknown, we have chosen to use the blazar $\gamma$-ray SCD published in \citet{Abdo2010a} as a template\footnote{This blazar SCD strictly stems from the 1FGL catalogue \citep{Abdo2010b}, but any SCD based on a newer catalog is not expected to change significantly since a large fraction of the total $\gamma$-ray flux is already resolved in the 1FGL.}.
Here we assume that for the population under investigation, the relative contributions to the total neutrino flux
are distributed in a similar fashion as the relative contributions to the total $\gamma$-ray flux.
However, there are no assumptions about the correlation of the neutrino and $\gamma$-ray flux for individual sources.
There are two reasons to choose the $\gamma$-ray SCD as the primary template for the shape of the neutrino SCD. The first is that we select the populations based on their $\gamma$-ray emission to start with.
The second is that the form of the high-energy $\gamma$-ray SCD is quite general, and has also been observed for AGN detected in the radio \citep{Hopkins2003} and X-ray \citep{Georgakakis2008a} bands. It starts with quasi-Euclidean behavior ($S^{5/2} \cdot dN/dS \approx \mathrm{const.}$) at high fluxes,
and then changes to a harder power law index towards smaller flux values which ensures that the total flux from the population remains finite.
The skymap simulations are performed for many possible SCD realizations by sampling from the $dN/dS$ distribution. This is necessary since the number of signal events expected in IceCube for a given neutrino flux varies greatly over the two hemispheres (see section \ref{section:data}). The value of the resulting confidence interval therefore depends on how the neutrino flux is distributed over the individual sources.
The shape of the SCD and the flux sampling range have an additional impact. See appendix \ref{appendix:scd_dependence} for further details in the context of confidence interval construction.
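One realization of such a sampling can be sketched as follows (illustrative Python; for simplicity a single power law $dN/dS \propto S^{-2.5}$ is drawn via the inverse CDF, whereas the analysis uses the broken-power-law template of \citet{Abdo2010a}, and the flux range here is arbitrary):

```python
import numpy as np

def sample_fluxes(n_src, s_min, s_max, alpha=2.5, rng=None):
    """Draw n_src source fluxes from dN/dS ~ S^-alpha between s_min and
    s_max via inverse-CDF sampling (single power law for simplicity)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n_src)
    p = 1.0 - alpha
    return (s_min ** p + u * (s_max ** p - s_min ** p)) ** (1.0 / p)

# one SCD realization for 862 sources over three decades in flux
rng = np.random.default_rng(2)
s = sample_fluxes(862, 1e-12, 1e-9, rng=rng)
assert len(s) == 862
assert s.min() >= 0.99e-12 and s.max() <= 1.01e-9
```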
\section{Results}
\label{section:results}
\subsection{Observed p-values}
Table \ref{table:integral_pvalues} summarizes p-values
for the ``integral'' test (see section \ref{section:statistical_tests}). Nine out of the ten tests show over-fluctuations, but no significant excess.
We find the strongest over-fluctuation, a $6 \%$ p-value, using the equal-weighting scheme for all 2LAC blazars. We omit a trial-factor correction because the populations have a large overlap and the result is not significant.
Figure \ref{fig:differential_pvalues} shows the p-values from the corresponding ``differential'' test. The largest excess is visible in the $5$--$10 \ \mathrm{TeV}$ energy band with a pre-trial p-value of $4 \cdot 10^{-3}$. This outcome is fully compatible with a fluctuation of the background, since the effect of multiple trials has to be taken into account, which reduces the significance of the observation substantially. An accurate calculation of the trial-corrected p-value is again difficult, as neither the five blazar samples nor the 14 tested energy ranges per sample are independent. We again omit it for simplicity.
Comparing the differential p-value plot of all 2LAC blazars with the other populations (see figures \ref{fig:differential_limits_and_pvalues} (a)--(e) in appendix \ref{appendix:supplementary}), one finds that the over-fluctuation is caused by the LSP-BL\,Lac, FSRQ and ISP/HSP populations, which are nearly independent and each show a small excess in the 5--20 TeV region. In the $\gamma$-weighting scheme, the ISP/HSP p-value distribution is nearly flat, which leads to the weaker over-fluctuation in the ``all 2LAC blazar'' sample compared to the equal-weighting scenario.
\begin{table}
\centering
\begin{tabular}{ c||c|c }
\hline
\multirow{2}[0]{*}{Population} & \multicolumn{2}{c}{p-value} \\
& $\gamma$-weighting & equal weighting \\
\hline
\hline
All 2LAC blazars & $36 \%$ $(+0.4 \sigma)$ & $6 \%$ $(+1.6 \sigma)$ \\
\hline
FSRQs & $34 \%$ $(+0.4 \sigma)$ & $34 \%$ $(+0.4 \sigma)$ \\
\hline
LSPs & $36 \%$ $(+0.4 \sigma)$ & $28 \%$ $(+0.6 \sigma)$ \\
\hline
ISP/HSPs & $>50 \%$ & $11 \%$ $(+1.2 \sigma)$ \\
\hline
LSP-BL\,Lacs & $13 \%$ $(+1.1 \sigma)$ & $7 \%$ $(+1.5 \sigma)$ \\
\hline
\end{tabular}
\caption[P-values for the spectrum integrated search]{P-values and the corresponding significance in units of standard normal deviations in the power-law test. The table shows the results for both weighting schemes.
The values do not include a trial factor correction.}
\label{table:integral_pvalues}
\end{table}
\begin{figure}
\epsscale{1.1}
\plotone{pvalues_all_differential.pdf}
\caption{Local p-values for the sample containing all 2LAC blazars using the equal-weighting scheme (black) and $\gamma$-weighting scheme (green) in the differential test.}
\label{fig:differential_pvalues}
\end{figure}
\subsection{Flux upper limits}
\label{section:upper_limits}
Since no statistically significant neutrino emission from the analyzed source populations was found, we calculate flux upper limits using various assumptions about their energy spectrum. We use the $CL_s$ upper limit construction \citep{Read2000}. It is more conservative than the standard Neyman construction used, e.g., in \citet{Aartsen2014}, but allows for a proper evaluation of under-fluctuations of the background, which is needed for the construction of differential flux upper limits.
We give all further results in intensity units and calculate the quasi-diffuse flux\footnote{The flux divided by the solid angle of the sky above 10 degrees galactic latitude, i.e. $0.83 \times 4 \pi$. See section \ref{section:blazar_populations} for a justification.} for each population.
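The quasi-diffuse conversion is a simple solid-angle normalization, sketched below (the total flux value is purely hypothetical):

```python
import numpy as np

# solid angle of the sky above |b| = 10 deg galactic latitude
omega = 0.83 * 4.0 * np.pi            # ~10.43 sr
total_flux = 4.0e-8                   # hypothetical total flux [GeV cm^-2 s^-1]
quasi_diffuse = total_flux / omega    # intensity [GeV cm^-2 s^-1 sr^-1]
assert abs(omega - 10.43) < 0.01
```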
The flux upper limits in the equal weighting scheme are calculated using multiple samplings from an assumed neutrino SCD for the blazars, as already outlined in section \ref{section:ensemble_simulations}. Please refer to appendix \ref{appendix:scd_dependence}
for further details about the dependence of the flux upper limit on the choice of the SCD and a discussion of the robustness of the equal-weighting results. In general, the equal weighting upper limit results do not correspond to a single flux value, but span a range of flux values.
For each upper limit\footnote{With the exception of the differential upper limit.} we determine the valid energy range according to the procedure in appendix \ref{appendix:energy_range}. This energy range specifies where IceCube has exclusion power for a particular model, and is also used for visualization purposes in all figures.
Systematic effects influencing the upper limits are dominated by uncertainties on the absorption and scattering properties of the Antarctic ice and the detection efficiency of the optical modules.
Following \citet{Aartsen2014}, the total systematic uncertainty on the upper limits is estimated to be $21 \%$. Since we are dealing with upper limits only, we conservatively include the uncertainty additively in all figures and tables.
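The $CL_s$ construction of \citet{Read2000} can be sketched as follows (illustrative Python with toy Gaussian TS ensembles; the real construction uses the simulated skymap ensembles of section \ref{section:ensemble_simulations}):

```python
import numpy as np

def cl_s(ts_obs, ts_signal, ts_background):
    """CLs = CL_{s+b} / CL_b: the signal hypothesis is excluded at
    90% C.L. when CLs < 0.1.  TS is assumed larger for signal-like data."""
    p_sb = np.mean(np.asarray(ts_signal) <= ts_obs)      # CL_{s+b}
    p_b = np.mean(np.asarray(ts_background) <= ts_obs)   # CL_b
    return p_sb / p_b

rng = np.random.default_rng(3)
ts_b = rng.normal(0.0, 1.0, 10_000)    # background-only ensemble
ts_sb = rng.normal(8.0, 1.0, 10_000)   # ensemble with injected signal flux
assert cl_s(0.0, ts_sb, ts_b) < 0.1    # such a flux would be excluded
assert cl_s(0.0, ts_b, ts_b) > 0.5     # no exclusion without separation
```

Normalizing by $CL_b$ is what keeps under-fluctuations of the background from producing spuriously strong limits.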
\subsection{Generic upper limits}
\label{section:generic_upper_limits}
Table \ref{table:generic_upper_limits} shows flux upper limits assuming a generic power-law spectrum for the tested blazar populations, calculated for the three different spectral indices $-1.5$, $-2.0$, and $-2.7$.
\begin{table}
\begin{tabular}{ c|c|c }
\hline
\multicolumn{3}{c}{ \rule{0pt}{2.5ex} Spectrum: $\Phi_0 \cdot (E/\mathrm{GeV})^{-1.5}$ }\\
\hline
\multirow{2}[3]{*}{Blazar Class} &
\multicolumn{2}{c}{\rule{0pt}{3ex} $ {\Phi_{0}}^{90 \%} [\mathrm{GeV}^{-1} \mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1}]$} \\
& $\gamma$-weighting & equal weighting \\
\hline
\hline
\rule{0pt}{2.5ex} All 2LAC Blazars & {\color{black}$1.6 \times 10^{-12}$} & {\color{black}$4.6 \ (3.8 - 5.3) \times 10^{-12}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} FSRQs & {\color{black}$0.8 \times 10^{-12}$} & {\color{black}$2.1 \ (1.0 - 3.1) \times 10^{-12}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSPs & {\color{black}$1.0 \times 10^{-12}$} & {\color{black}$1.9 \ (1.2 - 2.6) \times 10^{-12}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} ISPs/HSPs & {\color{black}$1.8 \times 10^{-12}$} & {\color{black}$2.6 \ (2.0 - 3.2) \times 10^{-12}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSP-BL\,Lacs & {\color{black}$1.1 \times 10^{-12}$} & {\color{black}$1.4 \ (0.5 - 2.3) \times 10^{-12}$} \\
\cline{1-3}
\hline\hline
\multicolumn{3}{c}{ \rule{0pt}{2.5ex} Spectrum: $\Phi_0 \cdot (E/\mathrm{GeV})^{-2.0}$ }\\
\hline
\multirow{2}[3]{*}{Blazar Class} &
\multicolumn{2}{c}{\rule{0pt}{3ex} $ {\Phi_{0}}^{90 \%} [\mathrm{GeV}^{-1} \mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1}]$} \\
& $\gamma$-weighting & equal weighting \\
\hline
\hline
\rule{0pt}{2.5ex} All 2LAC Blazars & {\color{black}$1.5 \times 10^{-9}$} & {\color{black}$4.7 \ (3.9 - 5.4) \times 10^{-9}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} FSRQs & {\color{black}$0.9 \times 10^{-9}$} & {\color{black}$1.7 \ (0.8 - 2.6) \times 10^{-9}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSPs & {\color{black}$0.9 \times 10^{-9}$} & {\color{black}$2.2 \ (1.4 - 3.0) \times 10^{-9}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} ISPs/HSPs & {\color{black}$1.3 \times 10^{-9}$} & {\color{black}$2.5 \ (1.9 - 3.1) \times 10^{-9}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSP-BL\,Lacs & {\color{black}$1.2 \times 10^{-9}$} & {\color{black}$1.5 \ (0.5 - 2.4) \times 10^{-9}$} \\
\cline{1-3}
\hline\hline
\multicolumn{3}{c}{ \rule{0pt}{2.5ex} Spectrum: $\Phi_0 \cdot (E/\mathrm{GeV})^{-2.7}$ }\\
\hline
\multirow{2}[3]{*}{Blazar Class} &
\multicolumn{2}{c}{\rule{0pt}{3ex} $ {\Phi_{0}}^{90 \%} [\mathrm{GeV}^{-1} \mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1}]$} \\
& $\gamma$-weighting & equal weighting \\
\hline
\hline
\rule{0pt}{2.5ex} All 2LAC Blazars & {\color{black}$2.5 \times 10^{-6}$} & {\color{black}$8.3 \ (7.0 - 9.7) \times 10^{-6}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} FSRQs & {\color{black}$1.7 \times 10^{-6}$} & {\color{black}$3.3 \ (1.6 - 5.1) \times 10^{-6}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSPs & {\color{black}$1.6 \times 10^{-6}$} & {\color{black}$3.8 \ (2.4 - 5.2) \times 10^{-6}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} ISPs/HSPs & {\color{black}$1.6 \times 10^{-6}$} & {\color{black}$4.6 \ (3.5 - 5.6) \times 10^{-6}$} \\
\cline{1-3}
\rule{0pt}{2.5ex} LSP-BL\,Lacs & {\color{black}$2.2 \times 10^{-6}$} & {\color{black}$2.8 \ (1.0 - 4.6) \times 10^{-6}$} \\
\cline{1-3}
\hline\end{tabular}
\caption{$90 \% $ C.L. upper limits on the diffuse ($\nu_\mu+\overline{\nu}_\mu$)-flux from the different blazar populations tested. The table contains results for power-law spectra with spectral indices $-1.5$, $-2.0$, and $-2.7$. The equal-weighting column shows the median flux upper limit
and the $90 \%$ central interval of different sample realizations of the Fermi-LAT source count contribution (in parentheses). All values include systematic uncertainties.}
\label{table:generic_upper_limits}
\end{table}
The distribution of the $\gamma$-ray energy flux among the sources in each population governs the flux upper limit in the $\gamma$-weighting scheme.
It is mostly driven by the declination of the strongest sources in the population, due to the strong declination dependence of IceCube's effective area. For FSRQs, the two sources with the
largest $\gamma$-weights (\texttt{3C 454.3} at $\mathrm{DEC}_{2000}=16^{\circ}$ and \texttt{PKS1510-08} at $\mathrm{DEC}_{2000}=-9^{\circ}$)
carry around $15 \%$ of the total $\gamma$-weight of all FSRQs. Their positions close to the equator place them in the most sensitive region
for the IceCube detector, and the $\gamma$-weighting upper limits for FSRQs are more than a factor of 2 lower than the corresponding equal-weighting limits. For the LSP-BL\,Lacs, the two strongest sources (\texttt{PKS 0426-380} at $\mathrm{DEC}_{2000}=-38^{\circ}$
and \texttt{PKS 0537-441} at $\mathrm{DEC}_{2000}=-44^{\circ}$) carry nearly $30 \%$ of the total $\gamma$-weight but are located in the southern sky, where IceCube is not very sensitive. The $\gamma$-weighting upper limit is therefore comparable to the equal-weighting upper limit. The reader is referred to appendix \ref{appendix:supplementary} for more information on the weight distribution.
Figure \ref{fig:diff_upper_limit_brazilian} shows the differential upper limit in comparison to the median sensitivity for all 2LAC blazars using the equal-weighting scheme. This population showed the largest over-fluctuation. We plot here the upper limit derived from the median SCD sampling outcome, since in general the equal-weighting upper limit depends on the neutrino flux realization of the SCD (see appendix \ref{appendix:scd_dependence}). As expected, the differential limit is slightly higher, by a factor of about 2, than the median outcome in the energy range between 5~TeV and 10~TeV where the largest excess is observed. This is the average behavior for a soft flux with spectral index of about $-3.0$\footnote{This can be read off in figure \ref{fig:energy_interval}. The ratio function indicates in which energy range a given flux function appears first, on average.}, if one assumes a simple power-law fit to explain the data. While such a physical interpretation cannot be made yet, it will be interesting to observe this excess with future IceCube data. For information on the differential upper limits from the other samples the reader is referred to appendix \ref{appendix:supplementary}.
\begin{figure}
\epsscale{1.1}
\plotone{differential_equal_brazilian_v2.pdf}
\caption{Differential $90 \% \ \mathrm{C.L.}$ upper limit on the ($\nu_\mu + \overline{\nu}_\mu$)-flux using equal weighting for all 2LAC blazars.
The $\pm 1 \sigma$ and $\pm 2 \sigma$ null expectation is shown in green and yellow, respectively. The upper limit and expected regions correspond to the median SCD sampling outcome.}
\label{fig:diff_upper_limit_brazilian}
\end{figure}
\subsection{The maximal contribution to the diffuse astrophysical flux}
\label{section:contribution_to_diffuse}
\begin{figure}
\epsscale{1.1}
\plotone{blazar_contribution.pdf}
\caption{$90 \% \ \mathrm{C.L.}$ flux upper limits for all 2LAC blazars in comparison to the observed astrophysical diffuse neutrino flux. The latest combined diffuse neutrino flux results from \citet{Aartsen2015a} are plotted as the best-fit power-law with spectral index $-2.5$, and as a differential flux unfolding using $68 \%$ central and $90 \%$ U.L. confidence intervals.
The flux upper limit is shown using both weighting schemes for a power-law with spectral index $-2.5$ (blue). Percentages denote the fraction of the upper limit compared to the astrophysical best fit value. The equal-weighting upper limit for a flux with a harder spectral index of $-2.2$ is shown in green.}
\label{fig:blazar_contribution2}
\end{figure}
The astrophysical neutrino flux is observed
between 10~TeV and 2~PeV \citep{Aartsen2015a}. Its spectrum has been found to be compatible with a single power-law and a spectral index of $-2.5$ over most of this energy range. Accordingly, we use a power-law with the same spectral index and a minimum neutrino energy of 10~TeV for the signal injected into the simulated skymaps when calculating the upper limit for a direct comparison. Figure \ref{fig:blazar_contribution2} shows the flux upper limit for an $E^{-2.5}$ power-law spectrum starting at 10~TeV for both weighting schemes in comparison to the most recent global fit of the astrophysical diffuse neutrino flux, assuming an equal composition of flavors arriving at Earth.
The equal-weighting upper limit constrains the contribution of the total 2LAC blazar sample to at most $19 \%$--$27 \%$ of the observed best-fit value of the astrophysical neutrino
flux, including systematic uncertainties. This limit is independent of the detailed correlation between the $\gamma$-ray and neutrino flux from these sources. The only assumption is that the respective neutrino and $\gamma$-ray SCDs have similar shapes (see section \ref{section:upper_limits} for details on signal injection). We use the Fermi-LAT blazar SCD as published in \citet{Abdo2010a} as a template for sampling.
However, we find that even if the shape of the SCD differs from this template, the upper limit still holds and is robust. In appendix \ref{appendix:scd_dependence} we discuss the effect of different SCD shapes and show how the combination with existing point-source constraints \citep{Aartsen2015b} leads to a nearly SCD-independent result, since a point-source analysis and a stacking search with equal weights effectively trace opposite parts of the available parameter space for the $dN/dS$ distribution.
If we assume a proportionality between the $\gamma$-ray and neutrino luminosities of the sources, the $\gamma$-weighting limit constrains the maximal flux contribution of all 2LAC blazars to $7 \%$ of the observed neutrino flux in the full 10~TeV to 2~PeV range. Since the blazars resolved in the 2LAC account for $70 \%$ of the total $\gamma$-ray emission from all GeV blazars \citep{Ajello2015},
this further implies that at most $10 \%$ of the astrophysical neutrino flux stems from all GeV blazars extrapolated to the whole universe, again in the full 10~TeV to 2~PeV range and under the same $\gamma$-weighting assumption.
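The extrapolation from resolved sources to the full parent population is a one-line rescaling by the catalogue's $\gamma$-ray completeness. A minimal sketch in Python, using the $7\%$ limit and $70\%$ completeness quoted above:

```python
# Rescale a flux-fraction limit on resolved 2LAC blazars to the full parent
# population of GeV blazars, assuming the gamma-weighting holds for all of them.

def extrapolate_limit(resolved_limit, completeness):
    """Limit on the whole population = limit on the resolved part / completeness."""
    return resolved_limit / completeness

# Values from the text: 7% gamma-weighting limit; the resolved 2LAC blazars
# account for ~70% of the total GeV blazar gamma-ray emission.
limit_all = extrapolate_limit(0.07, 0.70)
print(f"Maximal contribution of all GeV blazars: {limit_all:.0%}")
```

The same rescaling underlies the factor $1/0.7 \approx 1.43$ applied to the model upper limits later in the paper.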
Table \ref{table:relative_contribution} summarizes the maximal contributions for all populations, including the $\gamma$-weighting result scaled to the respective total population of sources in the observable universe.
It is interesting to compare these numbers directly to the $\gamma$-ray sector. \citet{Ajello2015} show that GeV blazars ($100 \ \mathrm{MeV}-100 \ \mathrm{GeV}$) contribute approximately $50 \%$ to the extragalactic gamma-ray background (\textbf{EGB}). The resolved 1FGL \citep{Abdo2010b} blazar component in particular contributes around $35 \%$. This estimate should be rather similar for the 2LAC blazars studied here, which are defined based on the more recent 2FGL catalogue \citep{Nolan2012} (see appendix \ref{appendix:correction_factor} for a discussion). The 2LAC blazar contribution to the astrophysical neutrino flux is therefore smaller than the corresponding extragalactic contribution in the $\gamma$-ray regime by at least a factor of $0.75$. The difference between the two sectors becomes substantial ($7 \%$ maximally allowed contribution for neutrinos versus $35 \%$ for $\gamma$-rays) if one assumes a $\gamma$/$\nu$-correlation.
\begin{table}
\centering
\begin{tabular}{ c||c|c|c }
\hline
\hline
\multirow{2}[2]{*}{Population} & \multicolumn{3}{c}{ \rule{0pt}{2.5ex} weighting scheme } \\ & equal & $\gamma$ & $\gamma$ (extrapol.) \\
\hline
\hline
all 2LAC blazars & $19 \%-27 \%$ & $7 \%$ & $10 \%$ \\
\hline
FSRQs & $5 \%-17 \%$ & $5 \%$ & $7 \%$ \\
\hline
LSPs & $6 \%-15 \%$ & $5 \%$ & $7 \%$ \\
\hline
ISP/HSPs & $9 \%-15 \%$ & $5 \%$ & $7 \%$ \\
\hline
LSP-BL Lacs & $3 \%-13 \%$ & $6 \%$ & $9 \%$ \\
\hline
\end{tabular}
\caption{Maximal contributions to the best-fit diffuse flux from \citet{Aartsen2015a} assuming equipartition of neutrino flavors.
The equal-weighting case shows this maximal contribution for the $90 \%$ central outcomes of potential $dN/dS$ realizations. The last column shows the maximal contribution of the integrated emission from the total parent population in the observable universe, exploiting the $\gamma$-ray completeness of the 2LAC blazars (see appendix \ref{appendix:correction_factor}).
}
\label{table:relative_contribution}
\end{table}
Figure \ref{fig:blazar_contribution2} also shows the equal-weighting constraint for a harder neutrino spectrum with a spectral index of $-2.2$. This harder spectral index is about 3 standard deviations away from the best-fit value derived in \citet{Aartsen2015a}, and can be used as an extremal case given the current observations. The comparison of this upper limit with the hard end of the ``butterfly'' shows that even in this case less than half of the bulk emission can originate in the 2LAC blazars, with minimal assumptions about the relative neutrino emission strengths. Given the low number of events in the data, we omit tests of multi-component power-law spectra at this point. However, one can estimate the constraints for more complicated models using figure \ref{fig:energy_interval} in appendix \ref{appendix:energy_range}, which shows the energy range for a given spectrum that contributes the dominant fraction to the sensitivity. The sensitivity for a possible two-component model, for example one with a soft component at TeV energies and a hard component in the PeV range, would be dominated by the soft regime, since the contribution to the ``ratio function'' (see appendix \ref{appendix:energy_range}, figure \ref{fig:energy_interval}) by the hard component above a PeV is negligible. In such a scenario we expect the constraint to be rather similar to our result from the simple power-law test with spectral index $-2.5$.
\subsection{Upper limits on models for diffuse neutrino emission}
\label{section:upper_limits_models}
For the experimental constraints on existing theoretical calculations, we only considered models for the diffuse emission from blazar populations, not predictions for specific objects. These include the calculations by
\citet{Mannheim1995}, \citet{Halzen1997} and \citet{Protheroe1997} for generic blazars, the calculations by \citet{Becker2005} and \citet{Murase2014} for FSRQs, and calculations by \citet{Muecke2003}, \citet{Tavecchio2014}, \citet{Tavecchio2015} and \citet{Padovani2015} for BL\,Lacs.
The upper limits in this section are calculated using the $\gamma$-weighting scheme and therefore assume a correlation between the neutrino flux and the measured $\gamma$-ray energy flux. This allows us to account for the fraction of the neutrino emission that arises from the blazars not detected in $\gamma$-rays.
The fraction of $\gamma$-ray emission from resolved 2LAC blazars in general (including BL\,Lacs), and of FSRQs in particular,
is about $70 \%$ \citep{Ajello2015,Ajello2012}. Therefore, the flux upper limits for the entire population are a factor $1/0.7 \approx 1.43$ weaker than those derived for the quasi-diffuse flux of the 2LAC blazars. See appendix \ref{appendix:correction_factor} for more details on this factor.
\begin{table*}
\centering
\begin{threeparttable}
\begin{tabular}{c|cc|c}
\hline
Type & \multicolumn{2}{c|}{Model} & MRF
\\
\hline
\hline
\multirow{4}[4]{*}{Generic blazars} & \multirow{2}[0]{*}{\citep{Mannheim1995}} & (A) & $1.30$ \\ [0.1cm]
& & (B) & $<0.1$ \\
\cline{2-4}
& \citep{Halzen1997} & & $<0.1$ \\
\cline{2-4}
& \citep{Protheroe1997} & & $<0.1$ \\
\hline
\multirow{5}[5]{*}{FSRQs} & \citep{Becker2005} & & $2.28$ \\
\cline{2-4}
& \multirow{4}[5]{*}{\citep{Murase2014}} & $\Gamma_{\mathrm{SI}}=-2.0$ (BLR)& $ \xi_{\mathrm{CR}}<12$ \\
& & $\Gamma_{\mathrm{SI}}=-2.0$ (blazar) & $ \xi_{\mathrm{CR}}<21$ \\
& & $\Gamma_{\mathrm{SI}}=-2.3$ (BLR) & $\xi_{\mathrm{CR}}<153 $ \\
& & $\Gamma_{\mathrm{SI}}=-2.3$ (blazar) & $ \xi_{\mathrm{CR}}<241$ \\
\hline
\multirow{3}[8]{*}{BL\,Lacs} & \multirow{2}[0]{*}{\citep{Muecke2003}} & HSP (optimistic) & $76.29$ \\
& & LSP (optimistic) & $5.78$ \\
\cline{2-4}
& \multirow{2}[0]{*}{ \begin{tabular}{c} \citep{Tavecchio2014} \\ \end{tabular} } & HSP-dominated (1) & $1.06$ \\
& \tnote{a} & HSP-dominated (2) & $0.35$ \\
& \citep{Tavecchio2015} & LSP-dominated & $0.21$ \\
\cline{2-4}
& \citep{Padovani2015} & HSP (baseline) & $0.75$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] Predictions from \citet{Tavecchio2014,Tavecchio2015} enhanced by a factor 3 in correspondence with the authors.
\end{tablenotes}
\end{threeparttable}
\caption{Summary of constraints and model rejection factors for the diffuse neutrino flux predictions from blazar populations. The values include a correction factor for unresolved sources (see appendix \ref{appendix:correction_factor}) and systematic uncertainties. For models involving a range of flux predictions we calculate the MRF with respect to the lower flux of the optimistic templates \citep{Muecke2003} or constraints on baryon-to-photon luminosity ratios $\xi_{\mathrm{CR}}$ \citep{Murase2014}. }
\label{table:model_rejection_factors}
\end{table*}
\begin{figure*}
\centering
\begin{tabular}[b]{@{}p{0.45\textwidth}@{}}
\centering\includegraphics[width=.45\textwidth]{generic_blazars.pdf} \\
\centering\small (a) generic blazars
\end{tabular}%
\quad
\begin{tabular}[b]{@{}p{0.45\textwidth}@{}}
\centering\includegraphics[width=.45\textwidth]{bllacs.pdf} \\
\centering\small (b) BL\,Lacs
\end{tabular} \\
\begin{tabular}[b]{@{}p{0.45\textwidth}@{}}
\centering\includegraphics[width=.45\textwidth]{fsrqs1.pdf} \\
\centering\small (c) FSRQs - 1
\end{tabular}%
\quad
\begin{tabular}[b]{@{}p{0.45\textwidth}@{}}
\centering\includegraphics[width=.45\textwidth]{fsrqs2.pdf} \\
\centering\small (d) FSRQs - 2
\end{tabular}
\caption{$90 \% \ \mathrm{C.L.}$ upper limits on the ($\nu_\mu + \overline{\nu}_\mu$)-flux for models of the neutrino emission from (a) generic blazars \citep{Mannheim1995,Halzen1997,Protheroe1997}, (b) BL\,Lacs \citep{Muecke2003,Tavecchio2015,Padovani2015} and (c)+(d) FSRQs \citep{Becker2005,Murase2014}. The upper limits include a correction factor that takes into account the flux from unresolved sources (see appendix \ref{appendix:correction_factor}) and systematic uncertainties. The astrophysical diffuse neutrino flux measurement \citep{Aartsen2015a} is shown in green for comparison.}
\label{fig:neutrino_model_uls}
\end{figure*}
Table \ref{table:model_rejection_factors} summarizes model rejection
factors \citep{Hill2003}\footnote{The flux upper limit divided by the
flux predicted in the model.} for all considered models.
Many of these models can be constrained by this analysis. Figure \ref{fig:neutrino_model_uls} a)-d) visualizes the flux upper limits in comparison to the neutrino flux predictions.
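The model rejection factor defined in the footnote is a simple ratio; a short Python sketch (the flux numbers here are hypothetical placeholders, not taken from any of the cited models):

```python
# Model rejection factor (MRF): flux upper limit divided by the predicted flux.
# An MRF below 1 means the prediction is excluded at the quoted confidence level.

def model_rejection_factor(flux_upper_limit, predicted_flux):
    return flux_upper_limit / predicted_flux

# Hypothetical values in arbitrary (but identical) flux units:
mrf = model_rejection_factor(flux_upper_limit=2.0e-8, predicted_flux=8.0e-8)
status = "excluded" if mrf < 1.0 else "not constrained"
print(f"MRF = {mrf:.2f} ({status})")
```

Applied to Table \ref{table:model_rejection_factors}, an entry such as $0.35$ means the measured limit lies a factor of roughly 3 below that model's prediction.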
In the early models (before the year 2000) the neutrino flux per source is calculated as
being directly proportional to the $\gamma$-ray flux in the energy range $E_{\gamma}>100 \ \mathrm{MeV}$ \citep{Mannheim1995}
(A), $E_{\gamma}>1 \ \mathrm{MeV}$ \citep{Mannheim1995} (B),
$20 \ \mathrm{MeV} < E_{\gamma} < 30 \ \mathrm{GeV}$ \citep{Halzen1997}
and $E_{\gamma}>100 \ \mathrm{MeV}$ \citep{Protheroe1997}. The $\gamma$-weighting
scheme is therefore almost implicit in all these calculations, although the energy ranges vary slightly from the $100 \ \mathrm{MeV} - 100 \ \mathrm{GeV}$ band used for the $\gamma$-weighting.
From the newer models, only \citet{Padovani2015} uses a direct proportionality between neutrino and $\gamma$-ray flux (for $E_{\gamma}>10 \ \mathrm{GeV}$), where the proportionality factor encodes a possible leptonic contribution. In all other publications a direct correlation to $\gamma$-rays is not used for the neutrino flux calculation.
Since all these models assume that p/$\gamma$-interactions dominate the neutrino production, the resulting neutrino fluxes are calculated via the luminosity in the target photon fields. In \citet{Becker2005}
the neutrino flux is proportional to the target radio flux which in turn is connected to the disk luminosity
via the model from \citet{Falcke1995}. In \citet{Muecke2003} it is directly proportional to the radiation of the synchrotron peak. In \citet{Murase2014} the neutrino flux is
connected to the X-ray luminosity, which in turn is proportional to the luminosity in various target photon fields. In \citet{Tavecchio2014} the neutrino luminosity is calculated using target photon fields from the inner jet ``spine-layer''.
However, a correlation with the $\gamma$-ray flux may still exist in these latter models, even if leptonic $\gamma$-ray contributions dominate. This is pointed out in \citet{Murase2014}, which explicitly predicts the strongest $\gamma$-ray emitters to also be the strongest neutrino emitters, even though the model contains leptonically produced $\gamma$-ray emission. It should be noted that an independent IceCube analysis studying the all-flavor diffuse neutrino flux at PeV energies and beyond \citep{Aartsen2016b} recently also put strong constraints on some of the flux predictions discussed in this section.
\section{Summary and outlook}
\label{section:discussion}
In this paper, we have analyzed all 862 Fermi-LAT 2LAC blazars and 4 spectrally selected sub-populations
via an unbinned likelihood stacking approach for a cumulative neutrino excess from the given blazar directions. The study uses 3 years of IceCube data (2009--2012), amounting to a total of around $340\,000$ muon-track events.
Each of the 5 populations was analyzed with two weighting schemes, which encode the assumptions
about the relative neutrino flux from each source in a given population.
The first weighting scheme uses the energy flux observed in $\gamma$-rays as weights,
the second scheme gives each source the same weight. This resulted in a total of 10 statistical tests which were in turn analyzed in two different ways.
The first is an ``integral'' test, in which a power-law flux with a variable spectral
index is fitted over the full energy range that IceCube is sensitive to.
The second is a differential analysis, in which 14 energy segments between $10^2 \ \mathrm{GeV}$ and $10^9 \ \mathrm{GeV}$, each spanning half a decade
in energy, are fit independently with a constant spectral index of $-2$.
Nine of the ten integral tests show overfluctuations, but none of them are significant. The largest overfluctuation, with a p-value of $6\%$, is observed for all 862 2LAC blazars combined using the model-independent equal-weighting scheme. The differential test for all 2LAC blazars using equal source weighting reveals that the excess appears in the 5--10~TeV region with a local significance of $2.6 \sigma$. No correction for testing multiple hypotheses is applied, since even without a trial correction this excess cannot be considered significant.
Given the null results we then calculated flux upper limits.
The two most important results of this paper are:
\begin{enumerate}
\item{We calculated a flux upper limit for a power-law spectrum starting at $10 \ \mathrm{TeV}$ with a spectral index of $-2.5$ for all 2LAC blazars. We compared this upper limit to the diffuse
astrophysical neutrino flux observed by IceCube \citep{Aartsen2015a}. We found that the maximal contribution from all
2LAC blazars in the energy range between 10 TeV and 2 PeV is at most $27 \%$, including systematic effects and with minimal assumptions about the neutrino/$\gamma$-ray correlation in each source. Changing the spectral index of the tested flux to $-2.2$, a value allowed at about 3 standard deviations given the current global fit result \citep{Aartsen2015a}, weakens this constraint by about a factor of two.
If we assume for each source a similar proportionality between the $\gamma$-ray luminosity in the 2LAC energy range and the neutrino luminosity, we can extend the constraint to the parent population of all GeV blazars in the observable universe. The corresponding maximal contribution from all GeV blazars is then around $10 \%$, or $5-10 \%$ from the other blazar sub-populations. In each case we use the same power-law assumption as before in order to compare to the observed flux. For FSRQs our analysis allows for a $7 \%$ contribution to the diffuse flux, which is in general agreement with a result found by \citet{Wang2015} who independently estimated that FSRQs do not contribute more than $10 \%$ to the diffuse flux using our earlier small-sample stacking result for 33 FSRQs \citep{Aartsen2014}.
}
\item{
We calculated upper limits using the $\gamma$-weighting scheme for 15 models of the diffuse neutrino emission from blazar populations found in the literature. For most of these models, the upper limit constrains the model prediction, for some of them by more than an order of magnitude. The implicit assumption in all these upper limits is a proportionality between the source-by-source $\gamma$-ray luminosity in the 2LAC energy range and its corresponding neutrino luminosity. All models published before the year 2000, and the model by \citet{Padovani2015}, implicitly contain this assumption, although some of their energy ranges differ from the exact energy range of the 2LAC catalogue. Even for the other models the proportionality assumption may still hold, as indicated by \citet{Murase2014}.
}
\end{enumerate}
\citet{Kadler2016} recently claimed a $5 \%$ chance probability for a PeV IceCube event to have originated close to blazar \texttt{PKS B1424-418} during a high-fluence state. While $5 \%$ does not yet constitute statistical evidence, our results do not contradict such single PeV-event associations, especially since a dominant fraction of the sensitivity of our analysis comes from the sub-PeV energy range. The same authors also show that the measured all-sky PeV neutrino flux cannot be compatible with an origin in a pure FSRQ population that has a peaked spectrum around PeV energies, as it would over-predict the number of observed events. Instead, one has to invoke additional assumptions, for example a certain contribution from BL\,Lacs, leptonic contributions to the SED, or a spectral broadening of the arriving neutrino flux down to TeV energies due to Doppler shifts from the jets and the intrinsic redshift distribution of the blazars. Our results suggest that the last assumption, a spectral broadening down to TeV energies, only works if the resulting power-law spectral index is harder than around $-2.2$, as the flux is otherwise in tension with our $\gamma$-weighting upper limit. Interestingly, a hard PeV spectrum is also seen by a recent IceCube analysis \citep{Aartsen2016} that probes the PeV range with muon neutrinos. Regardless of these speculations, the existing sub-PeV data require an explanation beyond the 2LAC sample, from a yet unidentified galactic or extragalactic source class.
Our results do not provide a solution to explain the bulk emission of the astrophysical diffuse
neutrinos, but they provide robust constraints that might help to construct the global picture.
Recently, \citet{Murase2015} argued that current observations favor sources that are opaque to $\gamma$-rays, as would be expected, for example, in the cores of AGN. Our findings on the 2LAC blazars mostly probe the emission from relativistically beamed AGN jets and are in line with these expectations. We also do not constrain neutrinos from blazar classes that are not part of the 2LAC catalogue, for example extreme HSP objects. These sources might emit up to $30 \%$ of the diffuse flux \citep{Padovani2016}, and studies in this direction with other catalogues are in progress.
While the slight excess in the 5-10 TeV region is not yet significant, further observations by IceCube may clarify if we see an emerging soft signal or just a statistical fluctuation.
\acknowledgments
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin - Madison, the Open Science Grid (OSG) grid infrastructure; U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Villum Fonden, Danish National Research Foundation (DNRF), Denmark
\section{Introduction}
\begin{center}
\small {Ripple in still water}\\
\small {When there is no pebble tossed}\\
\small {Nor wind to blow}\\
\small {\it -The Grateful Dead}\\
\end{center}
Originally \cite{HarHaw83}, the No-Boundary Proposal was an
attempt to eliminate the initial and final singularities (which are
`boundaries' for spacetime in cosmological scenarios), by
considering the universe as a history in imaginary time. One is then
led to a picture of a universe which is finite in imaginary time and
`without boundary'. This picture involves both `real time'
(or Lorentzian) and `imaginary time' (or Euclidean) sections of the
full complexified spacetime. When one combines this picture with the
sum-over-histories prescription for calculating amplitudes, it is
natural to think of a Euclidean section (which matches smoothly to a
Lorentzian section across a spacelike three-surface of vanishing
extrinsic curvature) as an `instanton', which mediates the creation of
the Lorentzian section from `nothing'. In this way,
a new universe can appear from `nothingness', even though there was no
source around to precipitate such an event. One then speaks of
creating universes from nothing. Similarly, one can consider the time
reverse and speak of universes `annihilating' to
nothingness. One can calculate Euclidean actions of the relevant
instantons and obtain the rate at which various universes appear
from nothing. One can thus `predict' the most likely initial
state for the universe. The key point about this version of
the No-Boundary Proposal is that it tells you how to calculate the
probability that {\it something} appears from {\it nothing}.
Here, we are concerned with another variant of the
No-Boundary Proposal, which tells you how to calculate the rate at
which {\it something} decays into {\it something else}. This version
of the proposal is often applied to the study of black hole pair
creation. Throughout this paper we use units
in which $\hbar = c = G = 1$.
\section{Approaches to Gravitational Tunneling}
The pair creation of black holes, first discovered by Gibbons
\cite{gazzorigin}, has been studied enthusiastically for
a number of years. It corresponds to a non-perturbative, topological
fluctuation of the gravitational field. As such it is one of the few
effects of quantum gravity that one can hope to study quantitatively
in a semi-classical approximation. It has been used to
investigate the entropy of black
holes~\cite{GarGid94,DowGau94b,HawHor95}, and electric-magnetic
duality in quantum gravity~\cite{HawRos95b}. In the cosmological
context, it has clarified
the important role of the Hartle-Hawking No-Boundary Proposal in
quantum gravity~\cite{BouHaw95,BouHaw96}, and it may have profound
consequences for the global structure of the universe~\cite{Bou98}.
\subsection{Bounce Approach} \label{ssec-instapproach}
Black hole pair creation can be analyzed semi-classically by the use
of instanton methods. Typically, the nucleation process is described
by a single Euclidean solution of the Einstein equations, a bounce. It
interpolates between an initial spacelike section without black holes,
and a final spacelike section containing black holes, bouncing back to
the initial section in a time-symmetric fashion. An instanton is half
a bounce, i.e.\ a geometry connecting initial and final spacelike
sections, but not bouncing back. One calculates the bounce action,
$I_{\rm pc}$, which must be renormalized by subtracting off the action
$I_{\rm bg}$ of a Euclidean geometry containing only background
spacelike sections. Thus one obtains the pair creation rate $\Gamma$:
\begin{equation}
\Gamma =
\exp \left[ - \left( I_{\rm pc} - I_{\rm bg} \right) \right],
\label{eq-pcr-usual}
\end{equation}
where we neglect a prefactor. Note that both $I_{\rm pc}$ and $I_{\rm
bg}$ are typically infinite, but their difference can be made well
defined and finite in a suitable limit. This prescription has been
used very successfully by a number of authors
\cite{GarGid94,DowGau94a,DowGau94b,HawHor95,HawRos95b,Ros94,Ros95} for
the pair creation of black holes on various backgrounds. It is
motivated by analogies in quantum mechanics and quantum field
theory~\cite{Col77,CalCol77}, as well as by considerations of black hole
entropy~\cite{GarGid94,DowGau94b,HawHor95}.
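In code, the renormalized-action prescription of Eq.~(\ref{eq-pcr-usual}) is just an exponential of an action difference. A minimal sketch in Python, with made-up action values in Planck units:

```python
import math

def pair_creation_rate(I_pc, I_bg):
    """Semi-classical rate exp[-(I_pc - I_bg)], neglecting the prefactor.

    I_pc and I_bg are the (renormalized) Euclidean actions of the
    pair-creation bounce and of the background geometry.
    """
    return math.exp(-(I_pc - I_bg))

# Illustrative numbers only: a bounce action 5 Planck units above the
# background suppresses the rate by exp(-5).
print(pair_creation_rate(I_pc=105.0, I_bg=100.0))
```

Only the difference of the two (individually divergent) actions enters, which is why the subtraction must be performed in a common, suitably regulated limit.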
\subsection{Quantum Cosmological Approach} \label{ssec-qcapproach}
A positive cosmological constant, or a domain wall,
typically causes the universe to close. In these situations, there are
Lorentzian solutions with and without black holes, but there are no
known Euclidean solutions connecting their spacelike slices.
Instead, there are two separate, compact Euclidean geometries
corresponding to creation from nothing of a universe with, and
without a black hole pair. Thus the instanton technique outlined
above could not be used directly. Before describing how
virtual domain walls mend this problem, we review how the
problem was circumvented using concepts from quantum cosmology.
In quantum cosmology one works with the concept of the {\em wave
function of the universe}. The wave function takes different values
for a universe with, and without black holes.
The squared amplitude of the wave function yields a probability
measure. According to the Hartle-Hawking No-Boundary Proposal, the
wave function of the universe, evaluated for a specified
three-geometry, is given by a path integral over all closed,
smooth complex geometries that match the specified boundary conditions
on the spacelike section, and have no other boundary; the integrand is
the exponential of minus the Euclidean action of the geometry. In
the semi-classical approximation, the wave function is
approximated as
\begin{equation}
\Psi = e^{-I_{\rm inst}},
\end{equation}
where $I_{\rm inst}$ is the action of a saddlepoint solution which
satisfies the Einstein equations under the given boundary conditions.
If there is no such instanton, the wave function will be zero
semi-classically; if there are several, they have to be summed over.
The probability measure for a given universe is thus related to the
action of an instanton which describes the nucleation of the universe
from nothing:
\begin{equation}
P = \Psi^* \Psi = e^{-2 I^{\rm Re}_{\rm inst}}.
\label{eq-P}
\end{equation}
Clearly, the probability depends only on the real part of the
instanton action, $I^{\rm Re}_{\rm inst}$.
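That only the real part of a generally complex saddle-point action enters $P$ can be checked in a few lines (Python; the action value is an arbitrary stand-in):

```python
import cmath
import math

def nucleation_probability(I_inst):
    """P = |Psi|^2 = exp(-2 Re I) for a complex instanton action I."""
    psi = cmath.exp(-I_inst)
    return abs(psi) ** 2

# An imaginary part of the action is a pure phase and drops out of P:
I = complex(3.0, 1.7)  # hypothetical complex saddle-point action
assert math.isclose(nucleation_probability(I), math.exp(-2.0 * I.real))
print(nucleation_probability(I))
```

The assertion makes the point explicit: shifting the imaginary part of $I$ changes $\Psi$ only by a phase and leaves the probability measure untouched.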
We shall see in the next section that there are, indeed, two
instantons, each of which nucleates a universe from
nothing. One will lead to spacelike sections with black holes, the
other to an empty background universe. Because of the cosmological
term, the Euclidean geometry is compact, and the actions of both
instantons are finite. Thus probability measures can be assigned to a
universe with, and without black holes.
But how are these probability measures related to pair creation? It
seems that all one can do in quantum cosmology is to compare the
probability measure for a universe with {\em one} pair of black holes
to that of an empty universe. The black hole instanton would then be
without any cosmological relevance whatsoever -- it could only produce
a single black hole pair in an exponentially large universe. It is
possible, however, to propose the following approach \cite{BouHaw96}:
Consider an arbitrary Hubble volume in an inflating universe.
Typically, this volume will not contain black holes; it will be
similar to a Hubble volume of de~Sitter space. After one Hubble time,
its spatial volume will have increased by a factor of $e^3 \approx
$20$. By the de~Sitter no-hair theorem, one can regard each of these
$20$ Hubble volumes as having been nucleated
independently~\cite{GarLin94}, through either the empty, or the black
hole instanton. Thus, one allows for black hole
pair creation, since some of the new Hubble volumes may contain black
holes. We shall see later that the spacelike sections can be taken to
be three-spheres in the case of a universe without black holes, and $
S^1 \times S^2 $ for a universe with a single pair of black holes. We
will therefore compare the wave functions for spacelike slices with
these two topologies. Using the No-Boundary
Proposal~\cite{HarHaw83}, one can assign probability measures to both
instanton types. The ratio of the probability measures,
\begin{equation}
\Gamma = \frac{P_{\rm BH}}{P_{\rm no\, BH}},
\label{eq-pcr-qc}
\end{equation}
reflects the ratio of the number of Hubble volumes
containing black holes, to the number of empty Hubble
volumes. This is true as long as this ratio is very
small, so that the holes will be widely separated. But we shall
see that this condition is satisfied whenever the semi-classical
approximation is valid. Since this argument applies to every new
generation of Hubble volumes, the ratio $\Gamma$ is the number of
black hole pairs produced per Hubble volume, per Hubble time. In other
words, $\Gamma$ is the rate of black hole pair creation in inflation.
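The bookkeeping behind this argument, $e^3 \approx 20$ new Hubble volumes per Hubble time, each nucleated independently, can be sketched as follows (Python; the probability measures are placeholder numbers, not computed instanton actions):

```python
import math

def pair_creation_rate(P_bh, P_no_bh):
    """Black hole pairs created per Hubble volume, per Hubble time."""
    return P_bh / P_no_bh

# One Hubble volume inflates into e^3 ~ 20 independent volumes per Hubble time.
n_new = math.exp(3)
gamma = pair_creation_rate(P_bh=1.0e-12, P_no_bh=1.0)  # placeholder measures
print(f"{n_new:.1f} new Hubble volumes; "
      f"expected pairs per parent volume per Hubble time: {n_new * gamma:.1e}")
```

As the text notes, this counting is consistent only when $\Gamma \ll 1$, which is exactly the regime where the semi-classical approximation holds.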
\subsection{The Cosmological Pair Creation Instantons}
We now illustrate this approach by reviewing its implementation
for the case of neutral black holes created on a cosmological
background~\cite{GinPer83,BouHaw95,BouHaw96}. We begin with the
simpler of the two spacetimes, an inflationary universe without black
holes, where the spacelike sections are round three-spheres. In
the Euclidean de Sitter solution, the three-spheres begin at zero
radius, expand and then contract in Euclidean time. Thus they form a
four-sphere of radius $\sqrt{3/\Lambda}$. The analytic continuation
can be visualized (see Fig.~\ref{fig-tun}) as cutting the four-sphere
in half, and then joining to it half the Lorentzian de~Sitter
hyperboloid. The real part of the Euclidean action for this
geometry comes from the half-four-sphere only: $I^{\rm
Re}_{\rm no\, BH} = - 3\pi/2\Lambda$. Thus, the
probability measure for de~Sitter space is
\begin{equation}
P_{\rm no\, BH} = \exp \left( \frac{3\pi}{\Lambda} \right).
\end{equation}
\begin{figure}[htb]
\epsfxsize=\textwidth
\epsfbox{tun.eps}
\caption[]%
{\small\sl The creation of a de Sitter universe (left) can
be visualized as half of a Euclidean four-sphere joined to a
Lorentzian four-hyperboloid. The picture on the right shows the
corresponding nucleation process for a de Sitter universe containing
a pair of black holes. In this case the spacelike slices have
non-trivial topology.}
\label{fig-tun}
\end{figure}
Now we need to go through the same procedure with the
Schwarzschild-de~Sitter solution, which corresponds to a pair of black
holes immersed in de~Sitter space. Its Lorentzian metric is given by
\begin{equation}
ds^2 = -V(r) dt^2 + V(r)^{-1} dr^2 + r^2 d\Omega^2,
\end{equation}
where
\begin{equation}
V(r) = 1 - \frac{2\mu}{r} - \frac{\Lambda}{3} r^2.
\end{equation}
Here $\mu$ parameterizes the mass of the black hole, and for $\mu=0$
the metric reduces to de~Sitter space. The spacelike sections have
the topology $S^1 \times S^2$. This can be seen by the following
analogy: Empty Minkowski space has spacelike sections of topology
${\rm {\bf R}}^3$. Inserting a black hole changes the topology to $S^2
\times {\rm {\bf R}}$. Similarly, if we start with de~Sitter space
(topology $S^3$), inserting a black hole is like punching a hole
through the three-sphere, thus changing the topology to $S^1 \times
S^2$. In general, the radius of the $S^2$ varies along the $S^1$. In
the static slicing of Schwarzschild-de~Sitter, the maximum two-sphere
corresponds to the cosmological horizon, the minimum to the black hole
horizon. This is shown in Fig.~\ref{fig-space-secs}.
\begin{figure}[htb]
\epsfxsize=\textwidth
\epsfbox{space-secs.eps}
\caption[Spacelike sections of Schwarzschild-de~Sitter space]%
{\small\sl The spacelike slices of Schwarzschild-de~Sitter space have
the topology $S^1 \times S^2$. In general (left), the size of the
two-sphere varies along the one-sphere. If the black hole mass is
maximal, however, all the two-spheres have the same size (right).
Only in this case is a smooth Euclidean solution admitted.}
\label{fig-space-secs}
\end{figure}
What we need is a Euclidean solution that can be analytically
continued to contain this kind of spacelike slice. It turns out that
such a smooth instanton does not exist in general for the Lorentzian
Schwarzschild-de~Sitter spacetimes. The only exception is the
degenerate case, where the black hole has the maximum possible size,
and the radius of the two-spheres is constant along the $S^1$ (see
Fig.~\ref{fig-space-secs}). The corresponding Euclidean solution is
just the topological product of two round two-spheres, both of radius
$1/\sqrt{\Lambda}$~\cite{GinPer83}. It can be analytically continued
to the Lorentzian Schwarzschild-de~Sitter solution by cutting one of
the two-spheres in half, and joining to it the 2-dimensional
hyperboloid of $1+1$ dimensional Lorentzian de~Sitter space, as shown
in Fig.~\ref{fig-tun}. In the Lorentzian regime the $S^1$ expands
exponentially, while the two-sphere just retains its constant radius.
Thus, unless more sophisticated boundary conditions are
employed~\cite{BouHaw98}, the Euclidean approach predicts the
nucleation of black holes of the maximum size, $r_{\rm BH} =
\Lambda^{-1/2}$.
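The origin of this maximal size, which is left implicit above, is the degeneracy condition $V(r) = V'(r) = 0$, at which the black hole and cosmological horizons coincide. A short calculation from the metric function given earlier yields
\[
V'(r) = \frac{2\mu}{r^2} - \frac{2\Lambda}{3}\,r = 0
\;\;\Longrightarrow\;\;
\mu = \frac{\Lambda r^3}{3},
\qquad
V(r) = 1 - \Lambda r^2 = 0
\;\;\Longrightarrow\;\;
r_{\rm BH} = \Lambda^{-1/2},
\quad
\mu_{\rm max} = \frac{1}{3\sqrt{\Lambda}} .
\]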
The real part of the Euclidean action for this instanton is given by
$I^{\rm Re}_{\rm BH} = - \pi/\Lambda$, and the corresponding
probability measure is
\begin{equation}
P_{\rm BH} = \exp \left( \frac{2\pi}{\Lambda} \right).
\end{equation}
Now we can take the ratio of the two probability measures, and obtain
the pair creation rate:
\begin{equation}
\Gamma = \exp \left( -\frac{\pi}{\Lambda} \right).
\end{equation}
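Written out, the two probability measures combine as
\[
\Gamma = \frac{P_{\rm BH}}{P_{\rm no\, BH}}
= \exp\!\left( \frac{2\pi}{\Lambda} - \frac{3\pi}{\Lambda} \right)
= \exp\!\left( -\frac{\pi}{\Lambda} \right) ,
\]
which is exponentially small whenever $\Lambda \ll \pi$; the assumption made earlier, that the created black holes are widely separated, is therefore self-consistent throughout the semi-classical regime.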
This example illustrates the analogy between the standard prescription
for pair creation, Eq.~(\ref{eq-pcr-usual}), and the result obtained from
the No-Boundary Proposal: By Eqs.~(\ref{eq-pcr-qc})
and~(\ref{eq-P}),
\begin{equation}
\Gamma =
\frac{P_{\rm BH}}{P_{\rm no\, BH}} =
\exp \left[ - \left( 2I^{\rm Re}_{\rm BH} - 2I^{\rm Re}_{\rm no\, BH}
\right) \right],
\label{eq-pcr-unified}
\end{equation}
where $I^{\rm Re}$ denotes the real part of the Euclidean action of a
nucleation geometry. But we have seen that the only contribution to
$I^{\rm Re}$ comes from the action of the Euclidean sector of the
nucleation geometry, the instanton. This, in turn, is equal to half of
the action of the complete bounce solution, which is used in the usual
pair creation framework. Thus $I_{\rm pc} = 2I^{\rm Re}_{S^1 \times
S^2}$ and $I_{\rm bg} = 2I^{\rm Re}_{S^3}$, and we recover
Eq.~(\ref{eq-pcr-usual}).
The quantum cosmological approach to black hole pair creation outlined
above clearly differs from the usual bounce approach. In the latter, a
Euclidean time region smoothly interpolates between the two different
spacelike slices; in quantum cosmology, on the other hand, one can
think of the pair creation process as the annihilation of a Hubble
volume, and its subsequent recreation with a different spatial
topology. Thus it is, perhaps, less obvious to see the analogy to
quantum mechanical tunneling instantons that motivate the bounce
approach to pair creation. Some have therefore regarded cosmological
and domain wall pair creation scenarios with a high degree of
scepticism. While we have always found the probabilistic argument
given above convincing, and thought it justified to calculate a pair
creation rate from the No-Boundary Proposal, we hope to allay any
further worries by explicitly constructing an interpolating tunneling
path using virtual wormholes, or domain walls. It will be shown that
the wormhole contribution to the Euclidean action can be negligible.
The quantum cosmological formula for the pair creation rate,
Eq.~(\ref{eq-pcr-qc}), will thus be confirmed.
\subsection{The Patching Proposal}
As we have discussed above, when one uses the No-Boundary Proposal to
calculate a tunneling amplitude one does not actually construct an
imaginary time resonance connecting the initial and final states. In
fact, in many scenarios involving topology change, there simply {\it
does not exist} any globally regular solution of the relevant
Euclidean equations of motion which interpolates between ingoing and
outgoing data. There are thus two problems: First, if one takes the
point of view that disconnected geometries should not be allowed in
the path integral, one would have to exclude the solutions given in
the previous section. Then it seems that there would be no saddle
point, and the transition rate should vanish semi-classically. Second,
even if disconnected geometries are admitted, it is not immediately
obvious why the actions of the two disjoint instantons should be
subtracted, rather than added. Here we make a proposal that solves
both problems: the idea is to join the disconnected geometries by
small virtual domain walls.
In the path integral approach, one obtains an amplitude by summing
over {\it all} paths, whether they are solutions or not. Given a path
integral with one saddlepoint, it will be of interest to consider the
consequences of taking the integral over all paths {\em except} for
the saddlepoint and very small perturbations about it. If the removed
region is sufficiently small, there will still be a region of
stationary action, located around the excised area. There the
oscillations of the integrand will not be destructive and the
amplitudes will add up. Paths which are close to being solutions will
then dominate the sum.
To make this precise, let us assume that the saddlepoint action is
given by $I_0$, and consider perturbations of this solution
parameterized by $\delta$. Then the action near the saddlepoint will
be given by
\begin{equation}
I(\delta) = I_0 + \frac{1}{2} \rho \delta^2,
\end{equation}
where $\rho$ is the second derivative of the action evaluated at the
saddlepoint. Ignoring other perturbations, the path integral will be
given by
\begin{eqnarray}
\int_{-\infty}^{\infty} d\delta\, e^{-I}
& = & e^{-I_0} \int_{-\infty}^{\infty} d\delta\, e^{-\frac{1}{2} \rho
\delta^2 }
\label{eq-pathint} \\
& = & \sqrt{\frac{2\pi}{\rho}} e^{-I_0}
\label{eq-prefexp}
\end{eqnarray}
in the saddlepoint approximation.
Our idea is the following: We take the saddlepoint geometry to be the
combination of a half-four-sphere (which annihilates a de~Sitter
Hubble volume) with half of an $S^2 \times S^2$, which will create
Schwarzschild-de~Sitter space from nothing. One may connect these two
disjoint instantons by removing a small four-ball of radius $\delta$
on each and joining them together on the resulting boundaries. This
leads to a family of near-solutions, in which the instantons are
connected through a virtual domain wall of size $\delta$. These
geometries, which violate Einstein's equations (with positive
energy sources) in a small region, will
actually interpolate between the initial and final spacelike sections.
The disjoint saddlepoint solution is recovered in the limit where
$\delta \rightarrow 0$. The idea works just the same for the disjoint
instantons associated with black hole pair creation on (real) domain
walls, which will be discussed below.
One must assume that the virtual domain walls may not be smaller than
Planck size: $|\delta|>1$. Forbidding the disconnected geometries thus
corresponds to removing the region $|\delta|<1$ from the range of
integration in Eq.~(\ref{eq-pathint}). By calculating the action
contribution of the virtual domain wall, we will show that $\rho$ is
of order one. Thus we are excising a region of about one standard
deviation from the integral. This will reduce the prefactor of the
wave function to about a third of its value in Eq.~(\ref{eq-prefexp}).
The exponent, which typically is much more significant, will not
change at all.
(Since $\delta$ should be small compared to the size of the instanton,
there will strictly also be an upper bound. But the overwhelming
contribution to the integral comes from the first few standard deviations.
Therefore the error from integrating to infinity will be negligible except
for Planck scale instantons, when the semiclassical approach breaks down
anyway).
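The `about a third' estimate above is easy to verify numerically. The sketch below (assuming $\rho = 1$, as argued above, and working in Planck units) compares the full Gaussian prefactor with the one remaining after the region $|\delta| < 1$ is excised:

```python
from math import erf, pi, sqrt

rho = 1.0  # second derivative of the action at the saddlepoint, taken to be O(1)

# Full Gaussian prefactor: integral of exp(-rho * delta^2 / 2) over all delta.
full = sqrt(2 * pi / rho)

# Fraction of the integral lying in the excised region |delta| < 1.
# For a Gaussian, integral_{-1}^{1} exp(-rho x^2 / 2) dx = full * erf(sqrt(rho/2)).
excised_fraction = erf(sqrt(rho / 2))

remaining = full * (1 - excised_fraction)
print(remaining / full)  # ~0.317, i.e. the prefactor drops to about a third
```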
Therefore, connected geometries will dominate the path integral in the
absence of disconnected ones. This solves the first problem raised at
the beginning of this subsection. The second problem is resolved by
the change of orientation at the virtual domain wall, which will cause
the two instanton actions to enter the exponent with opposite sign.
The construction of the off-shell interpolating Euclidean paths will
be presented in detail in Sec.~\ref{sec-interpol}. In order to be
precise, we will explicitly go through our construction for the
scenario where black holes are pair produced in the background of a
Vilenkin-Ipser-Sikivie domain wall, as discussed in \cite{cald}. Our
motivation for treating the tunneling process of black hole pair
production in the presence of a domain wall is twofold. First of all,
we are going to have to introduce the notion of a domain wall, or
infinitely thin wormhole, anyway in order to implement our proposal.
Second of all, the qualitative features of black hole pair production
by a domain wall are identical to those of black hole pair production
in a de Sitter background; indeed, it will not be hard to see that
everything we will say here for the domain wall situation will go
through for the de Sitter situation, where one is interested in the
creation of primordial black holes in the early universe
\cite{BouHaw95,BouHaw96,BouHaw97b}. With all of this in mind, we now
present a remedial overview of domain walls.
\section{Black Hole Pair Creation on Domain Walls}
\subsection{Domain Walls: A Brief Introduction}
A vacuum domain wall is a two-dimensional topological defect which can
form whenever there is a breaking of a discrete symmetry. Commonly,
one thinks of the symmetry breaking in terms of some Higgs field
$\Phi$. If ${\cal M}_{0}$ denotes the `vacuum manifold' of $\Phi$
(i.e., the submanifold of the Higgs field configuration space on which
the Higgs acquires a vacuum expectation value, since this minimizes
the potential energy $V(\Phi)$), then a necessary condition for a
domain wall to exist is that ${\pi}_{0}({\cal M}_{0}) \not= 0$. In
other words, vacuum domain walls arise whenever the vacuum manifold is
not connected. The simplest example of a potential energy which gives
rise to vacuum domain walls is the classic `double well' potential,
which is discussed in detail (along with many related things) in
\cite{pod}.
(Note: In general, domain walls can arise as (D-2)-dimensional defects
(or extended objects) in D-dimensional spacetimes. In fact, domain
walls are a common feature in the menagerie of objects which can arise
in the low-energy limit of string theory, as has been discussed in
detail in \cite{paul} and \cite{cvet}.)
From what we have said so far, the Lagrangian density for the matter
field $\Phi$ is given as \cite{pod}
\begin{equation}
{\cal L}_{m} =
-{\frac{1}{2}}g^{{\alpha}{\beta}}{\partial}_{\alpha}
{\Phi}{\partial}_{\beta}{\Phi} -
V(\Phi).
\end{equation}
The exact form of $V(\Phi)$ is not terribly important. All that we
require in order for domain walls to be present is that $V(\Phi)$ has
a discrete set of degenerate minima, where the potential vanishes.
Given this matter content, the full (Lorentzian) Einstein-matter
action then reads:
\begin{equation}
S = {\int}_{\!\!M} d^4\! x \, \sqrt{-g}\,
\Big[ {R \over 16 \pi} + {\cal L}_{m} \Big] +
\frac{1}{8{\pi}}{\int}_{\!\!\partial M} d^{3}\!x\, \sqrt{h}K.
\end{equation}
Here, $M$ denotes the four-volume of the system, and $\partial M$
denotes the boundary of this region. One obtains the Euclidean
action, $I$, for the Euclidean section of this configuration by
analytically continuing the metric and fields and reversing the
overall sign. The `simplified' form of this Euclidean action in the
thin wall limit has been derived in a number of recent papers
(\cite{cald}, \cite{shawn1}) and so we will not reproduce the full
argument here. Basically, one first {\it assumes} that there is no
cosmological constant ($R = 0$) and then one uses the fact that the
fields appearing in the matter field Lagrangian depend only on the
coordinate `$z$' normal to the wall, and one integrates out this
$z$-dependence to obtain the expression
\begin{equation}
I = - \sum_{i=1}^{n}
{\frac{\sigma_{i}}{2}}{\int}_{\!\!D_{i}}d^{3}\!x \sqrt{h_{i}}.
\end{equation}
Here, $D_i$ denotes the $i$-th domain wall, ${\sigma}_i$ is the energy
density of the domain wall $D_i$, $h_i$ is the determinant of the
three-dimensional metric ${h^{ab}}_{(i)}$ induced on the domain wall
$D_i$ and $n$ is the total number of domain walls. Now, it is well
known that variation relative to ${h^{ab}}_{(i)}$ on each domain wall
will yield the Israel matching conditions. Since we will make use of
these conditions, we reproduce them here for the convenience of the
reader:
\begin{enumerate}
\item A domain wall hypersurface is totally umbilic, i.e., the second
fundamental form $K_{ij}$ is proportional to the induced metric
$h_{ij}$ on each domain wall world sheet.
\item The discontinuity in the second fundamental form on each domain
wall hypersurface is $[K_{ij}]_\pm = 4 \pi \sigma h_{ij}$.
\end{enumerate}
Thus, the energy density of a thin domain wall measures the jump in
the extrinsic curvature of surfaces parallel to the wall as one moves
through the wall. We will use these conditions to do quick
`cut-and-paste' constructions of virtual domain wall surfaces.
Now, the above discussion is a nice summary of the field
theoretical aspects of a generic vacuum domain wall, but what would a
gravitating domain wall actually look like?
\subsection{The VIS Domain Wall Spacetime}
Solutions for the gravitational field of a domain wall were found by
Vilenkin \cite{vil} (for an open wall) and Ipser and Sikivie \cite{ip}
(for closed walls). The global structure of these
Vilenkin-Ipser-Sikivie (or `VIS') domain walls has been extensively
discussed recently (\cite{cald}, \cite{shawn1}) so we will only
present a brief sketch here.
To start with, we are looking for a solution of the Einstein equations
where the source term is an energy momentum tensor describing a
distributional source located at $z = 0$:
\begin{equation}
T_{\mu \nu} = \sigma\, \delta(z)\, {\rm diag} (1,1,1,0).
\label{stressenergy}
\end{equation}
It is not possible to find a static solution of the Einstein equations
with this source term; indeed, the VIS solution is a time-dependent
solution describing a uniformly accelerating domain wall. In order to
understand the global causal structure of the VIS domain wall, it is
most useful to use coordinates $(t, x, y, z)$ so that the metric takes
the form
\begin{equation}
ds^{2} = \Big(1 - k|z|\Big)^2
dt^{2} - dz^{2} - \Big(1 - k|z|\Big)^2
e^{2kt} (dy^{2} + dx^{2}).
\label{vismetric}
\end{equation}
Here, $k = 2{\pi}{\sigma}$. The gravitational field of this solution
has unexpected properties. For example, in the Newtonian limit of the
Einstein equations for (\ref{vismetric}) one obtains the equation
\[
{\nabla}^{2}{\phi} = -2{\pi}{\sigma},
\]
\noindent where $\phi$ is the Newtonian gravitational
potential and $\sigma$ is the energy density of the wall. From this
equation it is clear that a wall with {\it positive} surface energy
density will have a repulsive gravitational field, whereas a wall with
negative energy density will have an attractive gravitational field.
An even simpler way to see that the (positive $\sigma$) VIS wall is
repulsive is to notice that the $t-z$ part of the metric is just the
Rindler metric.
Further information is recovered by noticing that the $z$=constant
hypersurfaces are all {\it isometric} to $2+1$ dimensional de Sitter
space:
\begin{equation}
ds^{2} = dt^{2} - e^{2kt}(dy^{2} + dx^{2}).
\end{equation}
Given that $2+1$ de Sitter has the topology ${\rm S}^{2} \times {\bf
R}$ it follows that the domain wall world sheet has this topology.
In other words, at each instant of time the domain wall is
topologically a two-dimensional sphere. Indeed, in the original
Ipser-Sikivie paper a coordinate transformation was found which takes
the $(t, x, y, z)$ coordinates to new coordinates $(T, X, Y, Z)$ such
that in the new coordinates the metric becomes (on each side of the
domain wall):
\begin{equation}
ds^{2} = dT^{2} - dX^{2} - dY^{2} - dZ^{2}.
\end{equation}
\noindent Furthermore the domain wall,
which in the old coordinates is a plane located at $z = 0$, is in the
new coordinates the hyperboloid
\begin{equation}
X^{2} + Y^{2} + Z^{2} = {1\over {k}^2} + T^{2}.
\label{dwhyper}
\end{equation}
Of course, the metric induced on a hyperboloid embedded in Minkowski
spacetime is just the de Sitter metric, and so this is consistent with
what we have already noted. This metric provides us with a useful way
of constructing the maximal extension of the domain wall spacetime:
\noindent First, take two
copies of Minkowski space, and in each copy consider the interior of
the hyperboloid determined by equation (\ref{dwhyper}). Then match
these solid hyperboloids to each other across their respective
boundaries; there will be a ridge of curvature (much like the edge
of a lens) along the matching surface, where the domain wall is
located. Thus, an inertial observer on one side of the wall will see
the domain wall as a sphere which accelerates towards the observer for
$T<0$, stops at $T=0$ at a radius ${k}^{-1}$, then accelerates away
for $T>0$. We illustrate this construction in Fig.~\ref{vis}, where
we include the acceleration horizons to emphasize the causal
structure. \vspace*{0.3cm}
\begin{figure}[htb]
\hspace*{\fill} \vbox{\epsfxsize=11cm
\rotate[r]{\epsfbox{wall.ps}}} \hspace*{\fill}
\caption[Causal structure of VIS domain wall spacetime.]
{\small\sl The causal structure of the VIS domain wall spacetime. The
acceleration horizons are included to emphasize the causal structure.}
\label{vis}
\end{figure}
Now, the repulsive effect of this vacuum domain wall is very similar
to the inflationary effect of a positive cosmological constant seen in
de Sitter space. As we noted above, inflation provides an energy
source for the pair creation of black hole pairs in the early
universe. Similarly, we would expect the repulsive gravitational
energy of the VIS domain wall to provide a mechanism for black hole
creation and indeed this was shown to be the case in \cite{cald}. We
will now discuss this process of black hole pair creation in some
detail, because it will provide a prototypical example for our general
construction.
\subsection{Black Hole Pair Creation on a Domain Wall Background}
In \cite{cald} the creation rates of charged and uncharged black hole
pairs on a VIS background were calculated using the No-Boundary
Proposal. Thus, amplitudes were calculated by first finding the
Euclidean action of the initial state (consisting of a single domain
wall with no black holes present), then finding the Euclidean action
of the final state (describing a domain wall with a black hole on each
side) and then applying equation (\ref{eq-pcr-qc}) to obtain the
correct rate. The actual black hole creation process which was
studied is illustrated in Fig.~\ref{bhpair}. \vspace*{0.2cm}
\begin{figure}[htb]
\hspace*{\fill} \vbox{\epsfxsize=11cm
\rotate[r]{\epsfbox{bhpair.ps}}}\hspace*{\fill}
\vspace*{0.3cm}
\caption[Pair creation of black holes by domain walls.]
{\small\sl A pair of black holes nucleated via the repulsive
gravitational energy of the VIS domain wall.}
\label{bhpair}
\end{figure}
\noindent (Actually, in \cite{cald} the
only creation process studied was that of black holes which are
spherically centered on the domain wall, i.e., each black hole was
required to sit exactly in the center of each side of the original
spherical domain wall. This is because the motion of a domain wall
which is spherically centered on a black hole is known in analytic
form. Of course, one could also study the pair creation of black
holes which are `off center', but it is likely that numerical methods
would be required to simulate the exact wall motion after the black
holes were created.)
The instanton, or Euclidean section, of the VIS solution is very
similar to the $S^4$ instanton which mediates the creation from
nothing of a de Sitter universe. This instanton allows us to
calculate the rate at which the initial state, with no black holes,
will be created from nothing. Since the Lorentzian section of the VIS
configuration is just two portions of flat Minkowski spacetime glued
together, a natural `guess' for the Euclidean section is to take two
flat Euclidean four-balls and glue them together along a common
($S^3$) boundary. When we do this we obtain the so-called `lens
instanton', which acquires its name from the fact that it looks rather
like a lens with a ridge of curvature running along the hemisphere
where the domain wall, $D$, is located, as illustrated in
Fig.~\ref{lens}. \vspace*{0.2cm}
\begin{figure}[htb]
\hspace*{\fill} \vbox{\epsfxsize=11cm
\rotate[r]{\epsfbox{lens.ps}}}\hspace*{\fill}
\vspace*{0.3cm}
\caption[The lens instanton.]
{\small\sl The lens instanton is obtained by gluing two flat
four-balls together along their respective $S^3$ boundaries.}
\label{lens}
\end{figure}
Using equation (3.3) above, one calculates \cite{cald} that the total
Euclidean action of this instanton is
\begin{equation}
I_{D} = -\frac{\sigma}{2}{\int}_{\!\!D}d^{3}\!x\, \sqrt{h}
= \frac{-1}{8{\pi}{\sigma}^2},
\end{equation}
\noindent where ${\int}_{\!D} d^{3}\!x \sqrt{h}$
is the volume of the domain wall worldsheet (the `ridge') on the
instanton, and we have used the fact that the radius, $r$, of each
four-ball is given in terms of the energy density as
\begin{equation}
r = \frac{1}{2{\pi}{\sigma}}.
\end{equation}
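As a consistency check (the intermediate step is left implicit above), the wall worldsheet on the lens instanton is a round three-sphere of radius $r$, with volume $2\pi^2 r^3$, so that
\[
I_{D} = -\frac{\sigma}{2}\,\bigl( 2\pi^2 r^3 \bigr)
= -\pi^2 \sigma \left( \frac{1}{2\pi\sigma} \right)^{\!3}
= -\frac{1}{8\pi\sigma^2} ,
\]
in agreement with the quoted result.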
\noindent Note that the energy density
${\sigma} = 1/(2{\pi}r)$ is manifestly positive here because of the
sign of the extrinsic curvature as one moves across the domain wall
worldsheet. More explicitly, the extrinsic curvature of a sphere of
radius $r$ in flat space is of course ${\pm}1/r$, where the sign is
determined by whether one is calculating relative to the outward or
inward normal to the surface. As one approaches the VIS domain wall
from one side, the 3-spheres are locally `expanding' and so the
extrinsic curvature is given as $K_{ij}^{+} = +(1/r)h_{ij}$.
Likewise, as one recedes from the domain wall on the other side the
3-spheres are locally contracting, and so the extrinsic curvature has
the sign $K_{ij}^{-} = -(1/r)h_{ij}$. Using condition (2) of the
Israel matching conditions described above we thus see that the energy
density satisfies $(1/r)h_{ij} - (-(1/r)h_{ij}) =
4{\pi}{\sigma}h_{ij}$, from which Eq.~(3.10) follows. The reason we
are emphasizing this point is that one can reverse the sign of the
energy density simply by considering domain wall worldsheets where the
extrinsic curvature has the opposite behavior as one moves through
the domain wall. For a domain wall of negative energy density, as one
approaches the wall from one side the spheres will be locally {\it
contracting}, and when one leaves the wall from the other side the
spheres will start expanding again. Thus, locally the Euclidean
section of such a domain wall would look rather like a `yo-yo', with
the domain wall worldsheet running along the groove for the string, as
illustrated in Fig.~\ref{yoyo}. \vspace*{0.2cm}
\begin{figure}[htb]
\hspace*{\fill} \vbox{\epsfxsize=8cm
\rotate[r]{\epsfbox{yoyo.ps}}}\hspace*{\fill}
\caption[A `yo-yo' instanton.]
{\small\sl Instanton (locally) for a
negative energy density domain wall.}
\label{yoyo}
\end{figure}
We now need to describe the Euclidean section of the `final state',
which consists of a pair of black holes moving relative to the domain
wall. As discussed in \cite{cald}, the motion of black holes relative
to a thin domain wall was worked out long ago by Berezin et
al.~\cite{ber} and Hiscock \cite{cock}, for the case where the black
holes are spherically centered in the middle of each side of the
domain wall. The exact equations were presented in \cite{cald} and
are not important for our analysis here. Here we simply sketch the
main physical properties of a black hole - domain wall configuration.
If the created black holes each carry a single $U(1)$ charge, then the
only physical parameters in the problem are the masses of the holes
(assumed to be equal), the charges of the holes (assumed to be
opposite and equal) and the energy density of the wall. Since the
domain wall is repulsive and the black holes attract each other, there
are basically three cases:
\noindent {\bf Case 1}: The repulsive energy density of the
wall overwhelms the attractive force between the holes and the black
holes continue to move apart after they have been created.
\noindent {\bf Case 2}: The repulsive energy of the wall exactly
counterbalances the attraction between the black holes and the final
configuration is in static equilibrium.
\noindent {\bf Case 3}: The attractive force
between the holes is greater than the repulsive force of the domain
wall and the holes eventually crash together.
While all of these possibilities are in principle allowed, we will
focus on Case 2 since in that situation the construction of the
instanton is much simpler. It will be clear, however, that everything
we say here will also go through for the other cases. As discussed in
\cite{cald}, a solution always exists for the motion of a static
domain wall relative to a black hole. If the black hole has mass $m$
and charge ${\pm q}$, then the (totally umbilic) domain wall
hypersurface lies at the constant radius $r_{\rm s}$ given by
\[
r_{\rm s} = {3 \over 2} m \Big[ 1 +
\sqrt{1 - {8 \over 9} {q^2 \over m^2}} \Big].
\]
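A quick numerical sanity check of this formula (a sketch; the mass value is an arbitrary illustrative choice) confirms the limiting cases: in the uncharged limit the static wall sits at $r_{\rm s} = 3m$, in the extremal limit $|q| = m$ it sits at $r_{\rm s} = 2m$, and in both cases the wall lies outside the outer horizon $r_{+} = m + \sqrt{m^2 - q^2}$, as required by the cut-and-paste construction described below:

```python
from math import sqrt

def r_static(m, q):
    # Static wall radius: r_s = (3/2) m [1 + sqrt(1 - (8/9) q^2 / m^2)]
    return 1.5 * m * (1 + sqrt(1 - (8.0 / 9.0) * q**2 / m**2))

def r_plus(m, q):
    # Outer Reissner-Nordstrom horizon radius
    return m + sqrt(m**2 - q**2)

m = 1.0
print(r_static(m, 0.0))  # 3.0 (uncharged limit: r_s = 3m)
print(r_static(m, m))    # approximately 2.0 (extremal limit: r_s = 2m)

# The static wall radius exceeds the outer horizon radius in both limits.
assert r_static(m, 0.0) > r_plus(m, 0.0)
assert r_static(m, m) > r_plus(m, m)
```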
\noindent Thus the final state,
consisting of two black holes of opposite charge separated by a static
spherical domain wall, is obtained by taking two copies of
Reissner-Nordstr{\"o}m, cutting each copy along a timelike cylinder at
$r = r_{\rm s}$, then gluing the two solid interiors of the cylinders
along the domain wall hypersurface. The Euclidean section for this
configuration is therefore obtained by taking two `cigar instantons'
(for Reissner-Nordstr{\"o}m spacetime), snipping each cigar along the
hypersurface $r = r_{\rm s}$ (taking care to keep the `tip' of each
cigar, where the black hole horizons are), then gluing the two ends of
the cigars together along this surface. The action of this instanton
is calculated to be
\begin{equation}
I_{\rm Dbh} = -2 \pi \sigma r^2 \beta_{\rm RN}
{\widetilde f}^{1/2} |_{r_{\rm s}}
+ q^2 \beta_{\rm RN} \Big( {1 \over r_{+}} - {1 \over r_{\rm s}} \Big),
\end{equation}
where ${\beta}_{\rm RN}$ is the period of the Reissner-Nordstr{\"o}m
cigar instanton, $r_{+}$ denotes the outer black hole horizon radius,
and $\widetilde f = 1 - 2m/r + q^2/r^2$.
Given this, we can apply the No-Boundary Proposal and
Eq.~(\ref{eq-pcr-qc}) to obtain the probability that static black hole
pairs (of mass $m$ and charge ${\pm q}$) will be nucleated by a VIS
domain wall:
\begin{equation}
P = \frac{P_{\rm BH}}{P_{\rm no\, BH}} =
\exp\Big[ -{1 \over 8 \pi \sigma^2}
+ 2 \pi \sigma r_{\rm s}^2 \beta_{\rm RN} \widetilde f^{1/2}
- q^2 \beta_{\rm RN} \Big( {1 \over r_{+}} - {1 \over r_{\rm s}} \Big)
\Big].
\end{equation}
Similar probabilities are obtained when the created holes are allowed
to accelerate relative to the domain wall (the only subtlety in the
non-static situation is that there are non-trivial matching conditions
which the Euclidean sections of the black hole - domain wall
configurations must satisfy in order to be well-defined instantons).
Clearly, the probability is heavily suppressed when the wall energy
density $\sigma$ is small, as would be expected.
Using the No-Boundary Proposal, we have described the calculation of a
(generic) probability that stationary black hole pairs will be created
in the presence of a VIS domain wall. However, this approach has not
told us how to construct an imaginary time path connecting the initial
and final states. We will now show how to construct such a path which
contains an off-shell wormhole fluctuation of arbitrarily large energy
density, but arbitrarily small action.
\section{How to Build Interpolating Paths}
\label{sec-interpol}
When one uses the No-Boundary Proposal to calculate a tunneling
amplitude in a cosmological scenario, what one does conceptually is
calculate the rate at which one universe will annihilate, and another
universe will be created, in its place. Here, we will describe how to
`patch up' this picture by performing surgery on the no-boundary
instantons to obtain a connected path which connects the ingoing and
outgoing universes.
The surgery is actually quite simple, and it is closely related to the
operation of taking the connected sum of two manifolds in topology.
The only subtlety here is that we want to take the connected sum in
such a way that the surface along which the manifolds are joined
satisfies the Israel matching conditions, so that we can interpret the
matching surface as a virtual domain wall. The resulting manifold
will then be an off-shell history which connects the ingoing state to
the outgoing state.
In order to have an explicit example where we can implement this
construction, let us return to the above scenario where black holes
are created on the VIS background. As we saw above, the instanton for
the initial state was half of the `lens instanton' ($M_L$), which was
obtained by gluing two flat four-balls together. Likewise, the
instanton for the final state was half the `baguette' instanton
($M_B$), obtained by gluing the ends of two cigar instantons together.
We now patch these two instantons together by first removing a (solid)
four-dimensional ball, $B^{4}(\delta)$, of radius $\delta$, from each
of the instantons. The boundary components left behind once we remove
these small balls are then totally umbilic three-spheres of equal
radius; we now join the two instantons together along these
three-spheres in such a way that the joining surface satisfies the
Israel matching conditions. This construction is illustrated in
Fig.~\ref{vis2}. \vspace*{0.1cm}
\begin{figure}[htb]
\hspace*{\fill} \vbox{\epsfxsize=11cm
\rotate[r]{\epsfbox{virtual.ps}}}\hspace*{\fill}
\caption[The patching procedure.]
{\small\sl Gluing the instantons together along the virtual domain
wall, or thin wormhole.}
\label{vis2}
\end{figure}
In this way we obtain an interpolating history which models the decay
of the VIS domain wall to a VIS - black hole system. We will show
below that the action for this path will be very similar to the action
difference of the two instantons used in the No-Boundary Proposal.
However, before turning to this calculation we should point out that
the energy density of the virtual wormhole must be {\em negative}.
This follows easily from the analysis presented above: As one approaches
the wall hypersurface from one side, neighboring umbilic three-spheres
are shrinking, and as one leaves the wall on the other side the
three-spheres are expanding. Thus, the three-sphere corresponding to
the domain wall worldsheet (which we interpret as a `virtual' domain
wall of topology $S^2$ which appears from nothing, expands briefly,
then annihilates) has negative energy density. Indeed, the energy
density, which we denote by ${\bar \sigma}$, is set by the scale of
the virtual domain wall; it is calculated to be
\begin{equation}
{\bar \sigma} = \frac{-1}{2{\pi}{\delta}}.
\label{eq-sigma-delta}
\end{equation}
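As a sketch (our reconstruction, using the usual thin-wall conventions; the overall signs depend on orientation choices) of how Eq.~(\ref{eq-sigma-delta}) arises from the Israel matching conditions:

```latex
% The joining surface is a totally umbilic three-sphere of radius \delta.
% Neighboring three-spheres shrink on one side and expand on the other, so
% the extrinsic curvature is K_{ab} = \mp (1/\delta)\, h_{ab} on the two
% sides. For a wall of energy density \bar\sigma, the Israel condition is
\begin{equation*}
[K_{ab}] \equiv K^{+}_{ab} - K^{-}_{ab} = -4\pi \bar\sigma \, h_{ab},
\end{equation*}
% while the shrink-to-expand jump across the wall gives
\begin{equation*}
[K_{ab}] = \frac{2}{\delta}\, h_{ab}
\qquad\Longrightarrow\qquad
\bar\sigma = -\frac{1}{2\pi\delta}\,,
\end{equation*}
% reproducing the negative energy density quoted above.
```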
Regions of negative energy are perfectly allowed for off-shell paths
such as the ones we are constructing here. There are two reasons why
one should not attempt a similar prescription using real domain walls
(on-shell paths): First, it would require assumptions about the matter
fields -- they would have to allow domain walls. This would make the
analysis less general. Second, there is no sensible classical field
theory which can give rise to domain walls of negative energy density,
because this would destroy vacuum stability. Of course, this may seem
confusing given that wormholes supported with large amounts of
negative energy density are considered every day by the `wormhole
engineers' \cite{visser}, who need the negative energy density in
order to make the wormholes traversable. However, these engineers use
quantum mechanical processes (such as the Casimir effect) to construct
regions with large negative energy density. We will not consider such
issues here.
Given this off-shell resonance, we now want to calculate the action
and get a tunneling amplitude. This calculation is fairly
straightforward once we notice several elementary points.
First of all, there are no volume contributions to the action in this
particular scenario, and so we don't have to worry about the fact that
we have removed four-balls from the original instantons (as expected,
the only contribution there will come from the virtual wormhole
itself). Actually, as we shall show in a moment, one never has to
worry about the volume contributions (even when there is a
cosmological constant, for example) since the volume terms from the
removed balls will always cancel each other.
Second, the virtual domain wall hypersurface is crucial because it
ensures that the initial and final instantons will have {\it opposite}
relative orientations. This means that the two actions will appear
with the correct relative signs.
In order to understand this orientation change, it is useful to
consider what happens when you try to extend a ($C^{0}$) tetrad field
through the surface of the domain wall. To be concrete, assume that
the tetrad fields on each of the little balls removed from the
instantons are comprised of the vectors naturally associated with
polar normal coordinates on each ball. That is to say, three vectors
of a tetrad are taken to be (angular) coordinates tangent to the
boundary of a ball, and the fourth vector is a (radial) vector normal
to the boundary of a ball. Now, if we want the tetrads on each ball
to `match' at the surface of the domain wall, it follows that we {\it
must} take the normal component of the tetrad to be inward pointing
on one ball and outward pointing on the other. Thus, as we move from
one instanton to the next we reflect about one leg of the tetrad,
i.e., we reverse the orientation.
Indeed, this is precisely why we don't have to worry about the removed
volume contributions. If the removed four-balls have equal but
opposite action, then the final difference (after they have been
removed) is equal to the original No-Boundary Proposal difference. Of
course, this cancellation actually only works when the four-balls have
{\it exactly} equal action contributions. One might imagine a
situation where, for example, the cosmological constant `jumped' to a
different value during the decay process (as you moved from the
initial instanton to the final instanton across the virtual domain
wall worldsheet). In such a scenario, the actions of the two removed
four-balls would not be equal and so there would be an extra
correction term to the no-boundary result. However, we will not
consider such complications in this paper.
Given these comments, it is now clear that the final action, $I_T$,
for our interpolating path is just the original no-boundary difference
plus a small correction term coming from the virtual domain wall:
\begin{equation}
I_{T} = \frac{1}{2} I_{\rm Dbh} - \frac{1}{2} I_{D}
+ \frac{1}{8{\pi}{\bar \sigma}^2}
\label{eq-halfaction}
\end{equation}
where $I_{\rm Dbh}$ and $I_D$ are given by (3.11) and (3.9)
respectively and the correction term involving ${\bar \sigma}$ appears
with a plus sign because the virtual domain wall has a negative energy
density.
Taking the point of view of the bounce approach (see
Sec.~\ref{ssec-instapproach}) one should complete the newly connected
half-instantons with their mirror image to obtain a bounce, in which
one starts with the initial spacelike section, goes through a final one
and back to the initial type at the other end of the Euclidean
geometry. This requires another surgery in Fig.~7, with a second
virtual domain wall allowing the transition from the baguette back to
the lens. The total action will obviously be twice that in
Eq.~(\ref{eq-halfaction}). Using Eq.~(\ref{eq-sigma-delta}), it may be
written in the form
\begin{equation}
I(\delta) = I_0 + \frac{1}{2} \rho \delta^2,
\end{equation}
where
\begin{equation}
I_0 = I_{\rm Dbh} - I_{\rm D},~~~~
\rho = 2\pi.
\label{eq-gauss}
\end{equation}
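As a consistency check, substituting Eq.~(\ref{eq-sigma-delta}) into the correction term of Eq.~(\ref{eq-halfaction}) indeed reproduces this quadratic form:

```latex
\begin{equation*}
\frac{1}{8\pi\bar\sigma^{2}}
= \frac{(2\pi\delta)^{2}}{8\pi}
= \frac{\pi\delta^{2}}{2},
\qquad\text{so}\qquad
I(\delta) - I_{0} = 2\cdot\frac{\pi\delta^{2}}{2}
= \frac{1}{2}\,(2\pi)\,\delta^{2},
\end{equation*}
% the factor of two accounting for the second virtual domain wall of the
% full bounce; this fixes \rho = 2\pi.
```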
The transition rate will be given by
\begin{eqnarray}
\Gamma & = & \int_{-\infty}^{\infty} d\delta\, e^{-I} \\
& = & e^{-I_0} \int_{-\infty}^{\infty} d\delta\,
e^{-\frac{1}{2} \rho \delta^2 }
\label{eq-pathint2} \\
& = & e^{-I_0}.
\label{eq-prefexp2}
\end{eqnarray}
We demand that the disconnected geometry be excluded from this path,
and we assume that the connecting virtual domain wall must have a
diameter of at least the Planck length. This will restrict the range
of integration in Eq.~(\ref{eq-pathint2}) to the regions of more than
one standard deviation, and will therefore reduce the value of the
prefactor from 1 to about $1/3$. Given the exponential suppression of
black hole pair creation, this change is negligible. Therefore, the
requirement for connected interpolating geometries will not alter the
no-boundary approach to cosmological pair creation significantly: The
exponent will be unchanged, and the prefactor, which is usually
neglected anyway, will still be of the same order of magnitude. Our
approximation breaks down only for Planck-scale background geometries,
when the semi-classical approach fails in any case.
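A quick numerical sanity check of the two claims above — a unit prefactor for the full Gaussian integral and a reduction to roughly $1/3$ when the region within one standard deviation is excluded — using only the Python standard library (reading the Planck-length cutoff as a one-standard-deviation cutoff is our interpretation of the argument):

```python
import math

# Transition rate: Gamma = exp(-I0) * Integral exp(-rho*delta^2/2) d(delta),
# with rho = 2*pi as fixed above.
rho = 2 * math.pi

# Full Gaussian integral gives prefactor sqrt(2*pi/rho) = 1 exactly.
full_prefactor = math.sqrt(2 * math.pi / rho)

# Excluding |delta| below one standard deviation (sigma = 1/sqrt(rho))
# keeps the two tails beyond one sigma: P(|Z| >= 1) = erfc(1/sqrt(2)).
restricted_fraction = math.erfc(1 / math.sqrt(2))

print(full_prefactor)       # 1.0
print(restricted_fraction)  # ~0.317, i.e. about 1/3
```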
Of course, one can also apply this construction to other gravitational
tunneling phenomena, such as black hole pair creation on a
cosmological background. There again, as long as the cosmological
constant is conserved in the decay process, the final answer will be
the original no-boundary result with a reduced prefactor.
One might object to the use of off-shell paths. However, they are not
only an essential part of the formal path integral formalism, but they
give a significant contribution to the saddlepoint approximation --
after all, the actual saddlepoint solution forms a set of measure zero
in the saddle point contour. Crucially, we demonstrated that the
interpolating off-shell paths we considered are in fact arbitrarily
small perturbations of the saddlepoint solution. Thus, if the
saddlepoint is excluded from the integral, these geometries will still
dominate. We therefore stress that, on the contrary, our proposal
offers a consistent way of including the effects of spacetime foam in
semi-classical calculations.
We should also point out here that our construction is in many
respects similar to the earlier work of Farhi, Guth and Guven
\cite{fgg}, who were studying the rate at which a new universe could
be created in the laboratory. In their approach, one constructs an
interpolating instanton by gluing the two instantons together to
obtain a `two-sheeted' pseudomanifold. The basic idea there is that
the ingoing state is on one sheet, and the outgoing state is on the
other sheet. This approach was recently employed by Kolitch and
Eardley \cite{shawn1,shawn2}, who studied the decay of vacuum domain walls. Interestingly,
they found that the rate calculated using the Farhi, Guth, Guven (FGG)
technique is identical to the rate calculated using the No-Boundary
Proposal. Thus, it would seem that attempts to construct some
interpolating geometry (in situations where no connected, on-shell
path exists) will always lead back to the no-boundary ansatz.
The current debate about the correct boundary conditions on the wave
function of the universe~\cite{argue} centers on the question of
whether the Hartle-Hawking No-Boundary Proposal, or the Tunneling
Proposal favored by Linde~\cite{Lin84b}, Vilenkin~\cite{Vil86} and
others, should be used to describe the creation of a universe from
{\it nothing}. We emphasize here that the use of the No-Boundary
Proposal for the tunneling processes on an {\em existing} background
is not called into doubt by either Hawking or Linde. Here we have
aimed to elucidate the reasons for the success of the No-Boundary
Proposal for such processes.
Finally, in light of recent work \cite{alex} concerning the possible
role of singular instantons in describing tunneling phenomena, we would
like to emphasize that the interpolating paths which we have constructed
here are in no way singular. Rather, these trajectories simply contain
off-shell fluctuations which may be regarded as distributional sources
of negative energy density.
\section{Summary}
In this paper we have aimed to justify the use of compact instantons
for the semiclassical description of tunneling processes on
cosmological backgrounds. We found connected off-shell interpolating
geometries which can be viewed as small perturbations of the
disconnected on-shell instantons. Therefore they can dominate the path
integral. Since the disjoint solutions are linked by a virtual domain
wall in our approach, the total action of the interpolating geometry
will be the {\em difference} between the instanton actions, and we
recover the result obtained from the No-Boundary approach.
Over the years, black hole pair creation has been investigated on a
variety of backgrounds. Usually, one finds a Euclidean bounce solution
which includes spacelike sections with and without black holes. The
difference between the bounce action and the action of the Euclidean
background solution is calculated. The pair creation rate is obtained
as the exponential of minus this action difference, as in
Eq.~(\ref{eq-pcr-usual}).
This method fails for the cosmological and domain wall backgrounds we
have considered, because there is no single Euclidean solution that
will interpolate between the initial and final spacelike sections.
Instead, one can construct two nucleation geometries (half-bounces,
which are continued in a Lorentzian direction rather than back to the
initial spacelike section), which both describe the nucleation of a
universe from nothing: one with black holes, the other without.
Therefore, the pair creation problem can be attacked within the
framework of quantum cosmology. This leads to a prescription for the
pair creation rate of cosmological black holes.
This `quantum cosmological' approach rests on the assumption that each
Hubble volume in an inflationary universe can be regarded as having
been nucleated independently. In quantum cosmology, one constructs a
wave function of the universe. The square of its amplitude gives a
probability measure. By taking the ratio of the probability measures
assigned to the two instanton types, one can calculate the relative
probability for a Hubble volume to nucleate with a black hole pair,
compared to an empty Hubble volume. Unless the cosmological constant
is of Planck order, this number is small, and can be interpreted as a
pair creation rate in the natural length and time scale, the Hubble
scale.
One then uses the Hartle-Hawking No-Boundary Proposal to determine the
wave function semi-classically. According to this proposal, the
probability measure will be given by the exponential of minus twice
the real part of the instanton action, which is equivalent to the full
bounce action of the usual pair creation treatment. The ratio of these
two exponentials is, of course, equivalent to a single exponential of
the action difference. Thus, one recovers the usual prescription for
the pair creation rate, as far as is possible given the fundamental
differences between the cosmological and the non-cosmological
situations. This is an important test for consistency, especially
since the instanton actions reflect the geometric entropy of the
nucleated spacetimes. Schwarzschild-de~Sitter space has a lower
geometric entropy than de~Sitter space. Thus the instanton actions
reflect the physical necessity that transitions in the direction of
lower entropy are suppressed.
While this application of the No-Boundary Proposal to decay processes
in quantum cosmology would seem to be intuitively justified by these
arguments, it seemed to rely on two disconnected instantons, instead
of describing the transition through a single Euclidean geometry
connecting the initial state to the final state.
In this paper, we showed that the exact saddlepoint solution is a
special disconnected geometry in a generic class of connected ones, in
which the instantons are patched together using virtual domain walls.
These off-shell geometries can be arbitrarily close to the saddlepoint
and contribute to its domination of the path integral. The exclusion
of disconnected geometries therefore has no fundamental effect on the
formalism. The exponential suppression of the pair creation process is
left unchanged, and we estimated that the prefactor would be
diminished by a factor of a third.
One could also use our method to study other decay processes which are
expected to occur in the early universe. A natural candidate process
would be the decay of vacuum domain walls by quantum tunneling
recently discussed by Kolitch and Eardley (\cite{shawn1},
\cite{shawn2}). This process is important because it provides another
decay mode capable of eliminating the unwanted gravitational effects
of domain walls in the early universe. It is also possible to generalize the
Kolitch-Eardley analysis to supergravity domain walls (such as the
D8-branes of the IIA theory) which arise in the low-energy limit of
string theory \cite{d8}. Research on these and related problems is
currently underway.
\mbox{}\\
{\noindent \bf Acknowledgments}\\
The authors would like to thank Doug Eardley, Gary Gibbons, Stephen
Hawking, Ted Jacobson, Shawn Kolitch, Andrei Linde, Robert Mann and
Don Marolf for useful conversations. R.B.\ was supported by
NATO/DAAD. A.C.\ was supported by Pembroke College, University of
Cambridge.
\section{Introduction}
Conic optimization problems arise frequently when modeling
parametric value-at-risk (VaR) minimization, portfolio optimization, and robust optimization with ellipsoidal objective uncertainty. Although convex versions of these models are solved efficiently by polynomial interior-point algorithms, their discrete counterparts are intractable. Branch-and-bound and branch-and-cut algorithms
require excessive computation time even for relatively small instances. The computational difficulty is exacerbated by the lack of effective warm-start procedures for conic optimization.
In this paper, we consider a reformulation of a conic quadratic discrete mean-risk minimization problem that lends itself to a successive quadratic optimization procedure benefiting from fast warm-starts and eliminating the need to solve conic optimization problems directly.
Let $u$ be an $n$-dimensional random vector and $x$ be an $n$-dimensional decision vector in a closed set
$X \subseteq \ensuremath{\mathbb{R}}^n$.
If $u$ is normally distributed with mean $c$ and covariance $Q$,
the minimum value-at-risk for $u'x$ at confidence level $1-\epsilon$, i.e.,
\begin{align*}
\begin{split}
\zeta(\epsilon) = \min \ & \bigg \{ z : \operatorname{\mathbf{Prob}} \left( u'x > z \right) \leq \epsilon, \ \ x \in X \bigg \} ,
\end{split}
\end{align*}
for $0 < \epsilon \le 0.5$, is computed by solving the mean-risk optimization problem
\begin{align*}
(\ensuremath{\text{MR}}) \ \ \ \min \ & \bigg \{ c' x + \Omega \displaystyle \sqrt{x'Q x} : \ x \in X \bigg \},
\end{align*}
where $\Omega = \Phi^{-1}(1-\epsilon)$ and $\Phi$ is the c.d.f. of the standard normal distribution \cite{Birge:SPbook}.
If $u$ is not normally distributed, but its mean and variance are known, (\ensuremath{\text{MR}}) yields a
robust version by letting $\Omega = \sqrt{(1-\epsilon)/\epsilon}$, which provides an upper bound on the worst-case VaR
\cite{bertsimas.popescu:05,ghaoui.etal:03}.
Alternatively, if $u_i$'s are independent and symmetric with support $[c_i - \sigma_i, c_i + \sigma_i]$, then letting $\Omega = \sqrt{\ln(1/\epsilon)}$ with $Q_{ii} = \sigma_i^2$ gives an upper bound on the worst-case VaR as well \cite{BN:robust-mp}. The reader is referred to
\citet{RO-book} for an in-depth treatment of robust models through conic optimization.
Hence, under various assumptions on the uncertainty of $u$, one arrives at different instances of the mean-risk model (\ensuremath{\text{MR}}) with a conic quadratic objective.
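For concreteness, the three choices of $\Omega$ above can be compared numerically at a fixed confidence level. The sketch below uses only Python's standard library; the value $\epsilon = 0.05$ is illustrative:

```python
import math
from statistics import NormalDist

def omega_normal(eps):
    # u normally distributed: Omega = Phi^{-1}(1 - eps)
    return NormalDist().inv_cdf(1 - eps)

def omega_robust(eps):
    # only mean and covariance of u known (worst-case VaR bound)
    return math.sqrt((1 - eps) / eps)

def omega_bounded(eps):
    # independent, symmetric u_i with bounded support
    return math.sqrt(math.log(1 / eps))

eps = 0.05
print(omega_normal(eps))   # ~1.645
print(omega_robust(eps))   # ~4.359
print(omega_bounded(eps))  # ~1.731
```

As expected, the distribution-free bound is the most conservative: it demands roughly 2.6 times the risk weight of the exact Gaussian case at this confidence level.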
Ahmed \cite{ahmed:06} studies the complexity and tractability of various stochastic objectives for mean-risk optimization. Maximization of the mean-risk objective is \NP-hard even for a diagonal covariance matrix \citep{AA:utility,AG:max-util}. If $X$ is a polyhedron,
(\ensuremath{\text{MR}}) is a special case of conic quadratic optimization \citep{Alizadeh2003,Lobo1998}, which can be solved by polynomial-time interior-point algorithms \citep{Alizadeh1995,Nesterov1998,BTN:ModernOptBook}.
\citet{AG:simplex-qp} give simplex QP-based algorithms for this case.
The interest of the current paper is in the discrete case of (\ensuremath{\text{MR}})
with integrality restrictions: $X \subseteq \ensuremath{\mathbb{Z}}^n$, which is \NP-hard.
\citet{AN:conicmir:ipco} describe mixed-integer rounding cuts, and
\citet{CI:cmip} give disjunctive cuts for conic mixed-integer programming.
The integral case is more predominantly addressed in the special case of independent random variables over binaries. In the absence of correlations, the covariance matrix reduces to a diagonal matrix $Q = diag(q)$, where $q$ is the vector of variances. In addition, when the decision variables are binary, (\ensuremath{\text{MR}}) reduces to
\begin{align*}
(\ensuremath{\text{DMR}}) \ \ \ \min \left\{c' x + \Omega \sqrt{q' x} : x \in X \subseteq \mathbb{B}^n \right\} \cdot
\end{align*}
Several approaches are available for (\ensuremath{\text{DMR}}) for specific constraint sets $X$.
\citet{Ishii1981} give an $O(n^6)$ algorithm over spanning trees; \citet{hassin1989maximizing} utilize parametric linear programming to solve it over matroids in polynomial time.
\citet{atamturk2008polymatroids} give a cutting plane algorithm utilizing the submodularity of the objective; \citet{AJ:lifted-polymatroid} extend it to the mixed 0-1 case with indicator variables.
\citet{Atamturk2009} give an $O(n^3)$ algorithm over a cardinality constraint.
\citet{shen2003joint} provide a greedy $O(n \log n)$
algorithm to solve the diagonal case over the unit hypercube.
\citet{nikolova2009strategic} gives
a fully polynomial-time approximation scheme (FPTAS) for an arbitrary set $X \subseteq \mathbb{B}^n$ provided the deterministic problem with $\Omega=0$ can be solved in polynomial time.
The reformulation we give in Section~\ref{sec:qp-ub} reduces the general discrete mean-risk problem (\ensuremath{\text{MR}}) to a sequence of discrete quadratic optimization problems, which is often more tractable than the conic quadratic case \cite{AG:m-matrix}. The uncorrelated case (\ensuremath{\text{DMR}}) reduces to a sequence of binary linear optimization problems. Therefore, one can utilize the simplex-based algorithms with fast warm-starts for general constraint sets. Moreover, the implementations can benefit significantly for structured constraint sets, such as spanning trees, matroids, graph cuts, shortest paths, for which efficient algorithms are known for non-negative linear objectives.
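To make the objective of (\ensuremath{\text{DMR}}) concrete, here is a small illustrative brute-force evaluation over a toy cardinality-constrained set. This is not the reformulation developed in Section~\ref{sec:qp-ub} — just a baseline against which faster algorithms can be checked; all data are made up:

```python
import itertools
import math

def dmr_brute_force(c, q, Omega, X):
    """Return (value, x) minimizing c'x + Omega*sqrt(q'x) over a small
    explicit feasible set X of binary vectors."""
    best = None
    for x in X:
        val = sum(ci * xi for ci, xi in zip(c, x)) \
              + Omega * math.sqrt(sum(qi * xi for qi, xi in zip(q, x)))
        if best is None or val < best[0]:
            best = (val, x)
    return best

# Toy instance (made-up data): choose exactly two of four items.
c = [1.00, 0.90, 1.10, 0.95]   # means
q = [0.00, 0.25, 0.05, 0.30]   # variances (diagonal Q)
X = [x for x in itertools.product((0, 1), repeat=4) if sum(x) == 2]

val, x = dmr_brute_force(c, q, Omega=2.0, X=X)
print(x, round(val, 4))  # (1, 0, 1, 0) 2.5472
```

Note that with $\Omega = 0$ the cheapest pair by mean, items 2 and 4, would be optimal; risk aversion shifts the solution to the low-variance items 1 and 3.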
\subsubsection*{A motivating application: Network interdiction with stochastic capacities}
\label{sec:PNI}
Our motivating problem for the paper is network interdiction with stochastic capacities; however, since the proposed approach is independent of the feasible set $X$, it can be applied to any problem with a mean-risk objective (\ensuremath{\text{MR}}).
The deterministic interdiction problem is a generalization of the classical min-cut problem, where an interdictor with a
limited budget minimizes the maximum flow on a network by stopping the flow on a subset of the arcs at a cost per interdicted arc. Consider a graph $G = (N,A)$ with nodes $N$ and arcs $A$.
Let $s$ be the source node and $t$ be the sink node. Let $\alpha_a$ be the cost of interdicting arc $a \in A$ and $\beta$ be the total budget for interdiction. Then, given a set $I$ of interdicted arcs, the maximum $s-t$
flow on the remaining arcs is the capacity of the minimum cut on the arcs $A \setminus I$. \citet{wood1993deterministic} shows
that the deterministic interdiction problem is \NP-hard and gives integer formulations for it.
\citet{royset2007solving} give algorithms for a bi-criteria interdiction problem and generate an
efficient frontier of maximum flow vs. interdiction cost.
\citet{cormican1998stochastic,janjarassuk2008reformulation} consider a stochastic version of the problem,
where interdiction success is probabilistic. \citet{HHW:stoc-interdiction} develop a decomposition approach
for interdiction when network topology is stochastic. Network interdiction
is a dual counterpart of survivable network design \cite{BM:surv-cuts,RA:review}, where one installs capacity to maximize the minimum flow against an adversary blocking the arcs. See \citet{Smith2013} for a review of network interdiction models and algorithms.
When the arc capacities are stochastic, we are interested in an optimal interdiction plan that minimizes the maximum flow-at-risk. Unlike the expectation criterion used in previous stochastic interdiction models, this approach provides a confidence level for the maximum flow on the arcs that are not interdicted.
Letting $c$ be the mean capacity vector and $Q$ the covariance matrix, the mean-risk network interdiction problem is modeled as
\begin{align*}
\min \quad & c' x + \Omega \sqrt{x'Q x} \\
\text{s.t.} \quad & B y \le x + z,\\
(\ensuremath{\text{MRNI}}) \ \ \ \ \ \quad \quad &\alpha^{\prime} z \leq \beta, \\
&y_s = 1, \ y_t =0, \\
&x \in \{0,1\}^A, y \in \{0,1\}^N, z \in \{0,1\}^A,
\end{align*}
\noindent where $B$ is the node-arc incidence matrix of $G$. Here, $z_a$ is one if arc $a$ is interdicted
at a cost of $\alpha_a$ and
zero otherwise; and $x_a$ is one if arc $a$ is in the minimum mean-risk cut and zero otherwise. The optimal value of (\ensuremath{\text{MRNI}}) is the ``flow-at-risk" for a given interdiction budget $\beta$.
Note that when $\Omega = 0$, (\ensuremath{\text{MRNI}}) reduces to the deterministic network interdiction model of \citet{wood1993deterministic}; and, in addition, if $z$ is a vector of zeros, it reduces to the standard $s-t$ min-cut problem. In a recent paper,
\citet{LSS:interdiction} give a scenario-based approach to stochastic network interdiction under the conditional value-at-risk measure.
The following example underlines the difference of interdiction solutions between deterministic and mean-risk models with stochastic capacities.
\begin{example}
Consider the simple network in Figure~\ref{fig:mrni-ex} with two arcs from $s$ to $t$. Arc 1 has mean capacity 1 and zero variance, whereas arc 2 has mean capacity 0.9 and variance $\sigma^2$. Suppose the budget allows a single-arc interdiction. Then, the deterministic model with $\Omega=0$ would interdict arc 1, which has the higher mean, and leave arc 2 with its high variance intact. Consequently, the maximum $s-t$ flow would exceed $0.9+0.5 \sigma$ with probability 0.3085 under the normal distribution. On the other hand, the mean-risk model with $\Omega > 0.2$ interdicts arc 2, which has the lower mean but high variance, ensuring that the maximum $s-t$ flow is no more than 1.
\end{example}
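The numbers in the example can be checked directly. The sketch below takes $\sigma = 0.5$, which is an assumption on our part consistent with the stated threshold $\Omega > 0.2 = 0.1/\sigma$:

```python
from statistics import NormalDist

mu1 = 1.0                 # arc 1: mean 1, zero variance
mu2, sigma2 = 0.9, 0.5    # arc 2: mean 0.9; sigma = 0.5 is our assumption

# Deterministic model (Omega = 0) interdicts arc 1; the remaining maximum
# flow is arc 2's random capacity, so
#   P(flow > mu2 + 0.5*sigma2) = P(Z > 0.5):
tail = 1 - NormalDist().cdf(0.5)
print(round(tail, 4))  # 0.3085

# Mean-risk cut value if an arc is left un-interdicted: mu + Omega*sigma.
omega = 0.25  # any Omega > 0.1/sigma2 = 0.2 flips the decision
leave_arc2 = mu2 + omega * sigma2  # 1.025
leave_arc1 = mu1                   # 1.000
print(leave_arc2 > leave_arc1)  # True: the mean-risk model interdicts arc 2
```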
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.32 \linewidth]{2nodes_sigma.png}
\caption{Mean-risk network interdiction.}
\label{fig:mrni-ex}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.65 \linewidth]{EF1.png}
\caption{Flow-at-risk vs. interdiction budget for risk aversion levels.}
\label{fig:efficientFrontier}
\end{center}
\end{figure}
The combinatorial aspect of network interdiction, coupled with correlations, makes it extremely challenging to determine the least-cost subset of arcs to interdict for a desired confidence level in the maximum flow, even for moderate-sized networks.
Yet, understanding the cost and benefit of an interdiction strategy is of critical interest for planning purposes. Toward this end, the proposed approach in the current paper allows one to quickly build efficient frontiers of flow-at-risk vs. interdiction cost, which would otherwise be impractical for realistic sizes. Figure \ref{fig:efficientFrontier} shows the flow-at-risk as a function of the interdiction cost for different confidence levels for the $15 \times 15$ grid graph shown in Figure~\ref{fig:structureG}. At 100\% budget the network is interdicted completely, allowing no flow.
At lower budget levels the flow-at-risk increases significantly with the higher confidence levels. The vertical axis is scaled so that the deterministic min-cut value
with $\Omega=0$ at 0\% budget (no interdiction) is 100. The green solid curve corresponding to the 95\% confidence level shows that, if no interdiction is performed, the flow on the network is more than 200\% of the deterministic maximum flow with probability 0.05. The same curve shows that with a 40\% interdiction budget the flow is higher than the deterministic maximum flow (100) with probability only 0.05.
\subsubsection*{Contributions and outline}
In Section~\ref{sec:qp-ub}, we give a non-convex upper-bounding function for (\ensuremath{\text{MR}}) that matches
the mean-risk objective value at its local minima. Then, we describe an
upper-bounding procedure that successively solves quadratic optimization problems instead of conic quadratic optimization. The rationale behind the approach is that algorithms for quadratic optimization with linear constraints scale better than interior point algorithms for conic quadratic optimization. Moreover, simplex algorithms for quadratic optimization can be effectively integrated into branch-and-bound algorithms and other iterative procedures as they allow fast warm-starts. In Section~\ref{sec:comp}, we test the effectiveness of the proposed approach on the network interdiction problem with stochastic capacities and compare it with exact algorithms.
We conclude in Section~\ref{sec:conclusion} with a few final remarks.
\section{A successive quadratic optimization approach} \label{sec:qp-ub}
In this section, we present a successive quadratic optimization procedure to obtain feasible solutions to (\ensuremath{\text{MR}}).
The procedure is based on a reformulation of (\ensuremath{\text{MR}}) using the perspective function of the convex quadratic term $q(x) = x'Q x$.
\citet{AG:simplex-qp} introduce
\tagsleft@true
\begin{align*}
(\ensuremath{\text{PO}}) \ \ \
\min
\left\{c^{\prime} x + \frac{\Omega}{2} h(x,t) + \frac{\Omega}{2} t: x \in X, \ t \ge 0 \right\},
\end{align*}
\tagsleft@false
where $\Omega$ is a positive scalar as before,
$h: \mathbb{R}^n \times \mathbb{R}_+ \rightarrow \mathbb{R} \cup \{\infty\}$ is the closure of the perspective function of $q$ and is defined as
\begin{align*}
h(x,t) :=
\begin{cases}
\frac{x' Q x}{t} & \quad \text{ if } t > 0, \\
0 & \quad \text{ if } t=0, \ x'Qx = 0, \\
+ \infty & \quad \text{ otherwise. }
\end{cases}
\end{align*}
As the perspective of a convex function is convex \citep{hiriart2013convex}, $h$ is convex.
\citet{AG:simplex-qp} show the equivalence of (\ensuremath{\text{MR}}) and (\ensuremath{\text{PO}}) for a polyhedral set $X$.
Since we are mainly interested in a discrete feasible region, we study (\ensuremath{\text{PO}}) for $X \subseteq \ensuremath{\mathbb{Z}}^n$.
For $t \ge 0$, it is convenient to define the optimal value function
\begin{align} \label{prob:qp}
f(t) :=\underset{x \in X}{\min }\left\{ g(x,t) := c^{\prime} x + \frac{\Omega}{2} h(x,t) + \frac{\Omega}{2} t \right \} \cdot
\end{align}
Given $t$, optimization problem \eqref{prob:qp} has a convex quadratic objective function. Let $x(t)$ be a minimizer of \eqref{prob:qp} with value $f(t)$. Note that $g(x,\cdot)$ is convex in $t$ for each choice of $x \in X$; hence $f$ is a point-wise minimum of convex functions and is, therefore, typically non-convex
(see Figure~\ref{fig:fT}). We show below
that, for any $t \ge 0$, $f(t)$ provides an upper bound on the mean-risk objective value for $x(t)$.
\begin{lemma} \label{lemma:grad}
$\sqrt{a} \le \frac{1}{2} (a/t + t)$ for all $a \ge 0$ and $t > 0$.
\end{lemma}
\begin{proof}
Since $\sqrt{a}$ is concave over $a \ge 0$, it is bounded above by its tangent line
\[
\sqrt{a} \le \sqrt{y} + \frac{1}{2\sqrt{y}} (a-y)
\]
at any point $y > 0$. Letting $t = \sqrt{y}$ and simplifying gives the result.
\end{proof}
\begin{proposition} \label{prop:ub}
For any $t \ge 0$, we have
\begin{align*}
\label{eq:propLocalMin}
c' x(t) + \Omega \sqrt{{x(t)}' Q x(t)}
\le f(t).
\end{align*}
\end{proposition}
\begin{proof}
Applying Lemma~\ref{lemma:grad} with $a = x' Q x \ (\ge 0 \text{ as $Q$ is positive semidefinite})$ for $t > 0$, and noting that the inequality extends to $t = 0$ by the definition of $h$, gives
\[
\sqrt{x'Q x} \le \frac{1}{2} h(x,t) + \frac{t}{2}, \ \forall x \in \ensuremath{\mathbb{R}}^n, \ \forall t \ge 0.
\]
First multiplying both sides by $\Omega \ge 0$ and then adding $c'x$ shows
\begin{align*}
c' x + \Omega \sqrt{x' Q x} &\leq c' x + \frac{\Omega}{2} h(x,t) + \frac{\Omega}{2} t , \ \forall x \in \ensuremath{\mathbb{R}}^n, \ \forall t \ge 0.
\end{align*}
The inequality holds, in particular, for $x(t)$ as well.
\end{proof}
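As a quick numerical illustration of Proposition~\ref{prop:ub}, the following sketch evaluates $f(t)$ by enumeration on a small instance and verifies that the mean-risk value of $x(t)$ never exceeds $f(t)$. All data here ($c$, the diagonal $Q$, $\Omega$, and the tiny feasible set $X$) are hypothetical; a real implementation would call a quadratic optimization solver instead of enumerating $X$.

```python
import itertools
import math

# Hypothetical toy data: costs c, diagonal Q = diag(q), weight Omega,
# and a small binary feasible set X (all vectors with exactly two ones).
c = [1.0, 2.0, 0.5]
q = [4.0, 1.0, 9.0]
Omega = 2.0
X = [x for x in itertools.product([0, 1], repeat=3) if sum(x) == 2]

def mean_risk(x):
    # c'x + Omega * sqrt(x'Qx)
    return sum(ci * xi for ci, xi in zip(c, x)) + Omega * math.sqrt(
        sum(qi * xi * xi for qi, xi in zip(q, x)))

def g(x, t):
    # c'x + (Omega/2) x'Qx / t + (Omega/2) t, for t > 0
    xQx = sum(qi * xi * xi for qi, xi in zip(q, x))
    return (sum(ci * xi for ci, xi in zip(c, x))
            + 0.5 * Omega * xQx / t + 0.5 * Omega * t)

def f(t):
    # the optimal value function: minimize g(., t) over X by enumeration
    return min((g(x, t), x) for x in X)

# Proposition: for any t > 0, the mean-risk value of x(t) is at most f(t).
for t in [0.5, 1.0, 2.0, 5.0]:
    val, xt = f(t)
    assert mean_risk(xt) <= val + 1e-9
```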
\begin{example}
Consider the mean-risk optimization problem
\[ \min \bigg \{ x_2 + \sqrt{10x_1^2 + 5 x_2^2} : x \in X = \{(0,1), (1,0)\} \subseteq \ensuremath{\mathbb{R}}^2 \bigg \}
\]
with two feasible points.
Figure \ref{fig:fT} illustrates the optimal value function $f$.
The curves in red and green show $g((1,0), t)$ and $g((0,1), t)$, respectively,
and $f(t) = \min \{g((1,0), t), g((0,1), t)\}$ is shown with a dotted line.
As the red and green curves intersect at $t = 2.5$, $x(t)$ is $(0,1)$ for $t \leq 2.5$, and $(1,0)$ for $t \ge 2.5$.
In this example, $f$ has two local minima: $1+\sqrt{5}$ attained at $t=\sqrt{5}$ and $\sqrt{10}$ at $t=\sqrt{10}$. Observe that the upper bound $f(t)$ matches the mean-risk objective at these local minima:
\[c' x({t}) + \Omega \sqrt{x({t})' Q x({t})} = f({t}), \ \ {t} \in \{\sqrt{5}, \sqrt{10}\}. \]
The black step function shows the mean-risk values for the two feasible solutions of $X$.
It turns out that the upper bound $f(t)$ is, in general, tight at any local minimum of $f$ (Proposition~\ref{prop:localMinima}).
In order to contrast the convex and discrete cases, we show with
solid blue curve the lower bound $\hat{f}$ of $f$, where $\hat f(t) = \min \{g(x,t): x \in \hat X \}$ and
$\hat{X} := \{(x_1, x_2) \in \ensuremath{\mathbb{R}}^2_+: x_1 + x_2 = 1 \}$
is the convex relaxation of $X$.
Let $\hat{x}(t)$ be the solution of this convex problem.
Then $\hat{f}(t)$ provides an upper bound on $c' \hat{x}(t)+ \Omega \sqrt{{\hat{x}(t)}' Q \hat{x}(t)} $ (graph shown in dotted blue curve)
at any $t \ge 0$, and the bound is tight at $t = \sqrt{25 / 7}$, where the minimum of $\hat{f}(t)$ is attained.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{Figure_fT2.png}
\caption{The value function $f$ with two discrete feasible points.}
\label{fig:fT}
\end{center}
\end{figure}
\end{example}
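The numbers in the example can be checked directly; the snippet below re-evaluates $g$ (for $c = (0,1)$, $Q = \operatorname{diag}(10,5)$, $\Omega = 1$) at the crossing point and at the two local minima stated above.

```python
import math

def g(x, t):
    # g(x, t) = c'x + (1/2) x'Qx / t + t/2 with c = (0, 1), Q = diag(10, 5)
    c = (0.0, 1.0)
    q = (10.0, 5.0)
    xQx = q[0] * x[0] ** 2 + q[1] * x[1] ** 2
    return c[0] * x[0] + c[1] * x[1] + 0.5 * xQx / t + 0.5 * t

# The two curves cross at t = 2.5.
assert abs(g((1, 0), 2.5) - g((0, 1), 2.5)) < 1e-9

# Local minima of f: value 1 + sqrt(5) at t = sqrt(5), attained by (0, 1),
# and sqrt(10) at t = sqrt(10), attained by (1, 0).
assert abs(g((0, 1), math.sqrt(5)) - (1 + math.sqrt(5))) < 1e-9
assert abs(g((1, 0), math.sqrt(10)) - math.sqrt(10)) < 1e-9
```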
Although, in general, $f(t)$ provides an upper bound, the next proposition shows that the mean-risk objective and $f$ match at local minima of $f$.
\begin{proposition} \label{prop:localMinima}
If $f$ has a local minimum at $\bar{t} > 0$, then we have
\begin{align}
\label{eq:propLocalMin}
c' x(\bar{t}) + \Omega \sqrt{{x(\bar{t})}' Q x(\bar{t})}
= f(\bar{t}).
\end{align}
\end{proposition}
\begin{proof}
Since $f$ is the point-wise minimum of differentiable convex functions, it is differentiable at its local minima in the interior of its domain ($t > 0$). Then, its vanishing derivative at $\bar t$,
\[
f'(\bar t) = \frac{\Omega}{2}\left(1 - \frac{x(\bar t)' Q x(\bar t)}{\bar t^{\,2}}\right) = 0,
\]
implies $\bar t = \sqrt{x(\bar t)' Q x(\bar t)}$. Plugging this expression into $f(\bar{t})$ gives the result.
\end{proof}
Finally, we show that problems (\ensuremath{\text{MR}}) and (\ensuremath{\text{PO}}) are equivalent. In other words, the best upper bound matches
the optimal value of the mean-risk problem, which provides an alternative way of solving (\ensuremath{\text{MR}}).
\begin{proposition} \label{prop:equiv}
Problems (\ensuremath{\text{MR}}) and (\ensuremath{\text{PO}}) are equivalent; that is,
\begin{align*}
{\min} \left\{c^{\prime} x + \Omega \sqrt{x^{\prime} Q x}: x \in X \right\}
= \min \{ f(t): t \ge 0 \} \cdot
\end{align*}
\end{proposition}
\begin{proof}
Let $t^*$ be optimal for $\min \{ f(t): t \ge 0 \}$. By Proposition~\ref{prop:ub}
\[
f(t^*) \ge c'x(t^*) + \Omega \sqrt{x(t^*)'Qx(t^*)} \ge {\min} \left\{c^{\prime} x + \Omega \sqrt{x^{\prime} Q x}: x \in X \right\}.
\]
The other direction follows from the observation
\begin{align*}
\min_{x \in X} \left\{c'x + \Omega \sqrt{x'Qx} \right\} & = \min_{x \in X, \; t \ge 0} \left\{ c'x + \frac{\Omega}{2} h(x,t) + \frac{\Omega}{2} t : t = \sqrt{x'Qx} \right\} \\
& \geq \min_{x \in X, \; t \ge 0} \left\{ c'x + \frac{\Omega}{2} h(x,t) + \frac{\Omega}{2} t \right\} = \min_{t \ge 0} \{ f(t) \} \cdot
\end{align*}
\end{proof}
The one-dimensional upper-bounding function $f$ above suggests a local search
algorithm that utilizes quadratic optimization to evaluate the function at any $t \ge 0$:
\[ f(t) = \min_{x \in X} \left \{ g(x,t) := c'x + \frac{\Omega}{2t} x'Qx + \frac{\Omega}{2} t \right \}\]
and avoids the solution of a conic quadratic optimization problem directly.
Algorithm~\ref{alg:bisection} describes a simple binary search method that halves the uncertainty interval $[t_{min}, t_{max}]$, initiated as $t_{min} = 0$ and $t_{max} = \sqrt{\bar x'Q \bar x}$, where $\bar x$ is an optimal solution to (\ensuremath{\text{MR}}) with $\Omega=0$. The algorithm terminates either when a local minimum of $f$ is reached or when the gap between the upper bound $f(t)$ and $c' x(t) + \Omega \sqrt{x(t)'Q x(t)}$ is small enough. For the computations in Section~\ref{sec:comp} we use a 1\% gap as the stopping condition.
\begin{algorithm}[h]
\caption{Binary local search.}
\label{alg:bisection}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require $X \subseteq \ensuremath{\mathbb{Z}}^n; Q\text{ p.s.d. matrix; }c\text{ cost vector; } \Omega>0$
\Ensure Local optimal solution $x$
\State \textbf{Initialize }$t_{\min}$ and $t_{\max}$
\State $\hat{z}\leftarrow \infty$ \Comment{best objective value found}
\Repeat
\State $t\leftarrow \frac{t_{\min}+t_{\max}}{2}$
\State $x(t)\leftarrow \operatorname*{arg\,min}\left\{c'x+\frac{\Omega}{2t}x'Qx+\frac{\Omega}{2}t: x \in X\right\}$ \ \
\If{$c'x(t)+\Omega\sqrt{x(t)'Qx(t)} < \hat{z}$}
\State $\hat{z}\leftarrow c'x(t)+\Omega\sqrt{x(t)'Qx(t)}$, \ $\hat{x}\leftarrow x(t)$ \Comment{update incumbent}
\EndIf
\If{$\frac{\partial g}{\partial t}(x(t), t) \leq -\epsilon$}\label{line:iBisection10}
\State $t_{\min}\leftarrow t$\label{line:iBisection11}
\ElsIf{ $\frac{\partial g}{\partial t}(x(t), t) \geq \epsilon$} \label{line:iBisection20}
\State $t_{\max}\leftarrow t$\label{line:iBisection21}
\Else \label{line:iBisection30}
\State \Return $\hat{x}$
\EndIf \label{line:iBisection3}
\Until stopping condition is met \label{line:stoppingCriterion2}
\State \Return $\hat{x}$
\end{algorithmic}
\end{algorithm}
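A minimal sketch of Algorithm~\ref{alg:bisection} follows, with the inner problem solved by brute-force enumeration over an explicit list $X$, a stand-in for the quadratic optimization solver used in practice. The usage data reproduces the two-point instance of Figure~\ref{fig:fT}.

```python
import math

def binary_local_search(c, q, Omega, X, eps=1e-6, max_iter=60):
    # Sketch of the binary local search; X is a small explicit list of
    # binary tuples and Q = diag(q). A real implementation would call a
    # QP/MILP solver for the inner argmin instead of enumerating X.
    def xQx(x):
        return sum(qi * xi * xi for qi, xi in zip(q, x))
    def mean_risk(x):
        return sum(ci * xi for ci, xi in zip(c, x)) + Omega * math.sqrt(xQx(x))
    # t_max from the risk-neutral (Omega = 0) solution
    x0 = min(X, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
    t_min, t_max = 0.0, max(math.sqrt(xQx(x0)), 1.0)
    z_best, x_best = math.inf, x0
    for _ in range(max_iter):
        t = 0.5 * (t_min + t_max)
        # argmin of g(., t); the additive constant (Omega/2) t is dropped
        xt = min(X, key=lambda x: sum(ci * xi for ci, xi in zip(c, x))
                                  + 0.5 * Omega * xQx(x) / t)
        if mean_risk(xt) < z_best:                   # update the incumbent
            z_best, x_best = mean_risk(xt), xt
        dg = 0.5 * Omega * (1.0 - xQx(xt) / t ** 2)  # dg/dt at (x(t), t)
        if dg <= -eps:
            t_min = t            # f decreasing: the minimum lies to the right
        elif dg >= eps:
            t_max = t            # f increasing: the minimum lies to the left
        else:
            break                # local minimum of f reached
    return x_best, z_best

# Two-point instance of the example: c = (0, 1), Q = diag(10, 5), Omega = 1.
x_star, z_star = binary_local_search([0.0, 1.0], [10.0, 5.0], 1.0,
                                     [(0, 1), (1, 0)])
```

On this instance the bisection converges to the local minimum $1+\sqrt{5}\approx 3.236$ attained by $x=(0,1)$, while the global minimum is $\sqrt{10}\approx 3.162$, illustrating that the procedure is a local search.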
\subsubsection*{The uncorrelated case over binaries}
The reformulation (\ensuremath{\text{PO}}) simplifies significantly for the special case of independent random variables over binary variables. In the absence of correlations, the covariance matrix reduces to a diagonal matrix $Q = \operatorname{diag}(q)$, where $q$ is the vector of variances. For
\begin{align*}
(\ensuremath{\text{DMR}}) \ \ \ \min \left\{c' x + \Omega \sqrt{q' x} : x \in X \subseteq \mathbb{B}^n \right\}
\end{align*}
the upper bounding problem simplifies to
\begin{align} \label{prob:ub-d}
f(t) = \min \left\{c' x + \frac{\Omega}{2t} q' x + \frac{\Omega}{2} t: x \in X \subseteq \mathbb{B}^n \right\},
\end{align}
which is a binary linear optimization problem for fixed $t$. Thus, $f$ can be evaluated fast for linear combinatorial optimization problems, such as the minimum spanning tree, shortest path, assignment, and minimum cut problems \cite{book:S:co}, for which there exist polynomial-time algorithms. Even when the evaluation problem
\eqref{prob:ub-d} is \NP-hard, simplex-based branch-and-bound algorithms equipped with warm-starts perform much faster than conic quadratic mean-risk minimization as demonstrated in the next section.
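For instance, a mean-risk shortest path can be handled this way: since $x_a^2 = x_a$ for binary $x$, the evaluation problem \eqref{prob:ub-d} for fixed $t$ is an ordinary shortest path with arc weights $c_a + \frac{\Omega}{2t} q_a$. The tiny acyclic graph, mean costs, and variances below are hypothetical data chosen only for illustration.

```python
import math

# Hypothetical DAG: arc -> (mean cost c_a, variance q_a)
edges = {('s', 'a'): (2.0, 9.0), ('s', 'b'): (4.0, 1.0),
         ('a', 't'): (2.0, 9.0), ('b', 't'): (1.0, 1.0)}
order = ['s', 'a', 'b', 't']   # topological order of the nodes
Omega = 3.0

def f(t):
    # For binary x, x'diag(q)x = q'x, so for fixed t the evaluation problem
    # is a shortest path with linear arc weights c_a + Omega*q_a/(2t).
    w = {arc: c + Omega * q / (2 * t) for arc, (c, q) in edges.items()}
    dist = {v: math.inf for v in order}
    dist['s'] = 0.0
    for u in order:                        # DP over the topological order
        for (a, b), wt in w.items():
            if a == u and dist[u] + wt < dist[b]:
                dist[b] = dist[u] + wt
    return dist['t'] + 0.5 * Omega * t

# A coarse search over t: the best value approaches the optimal mean-risk
# objective 5 + 3*sqrt(2), attained by the path s-b-t.
best = min(f(0.1 * i) for i in range(1, 100))
```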
\section{Computational Experiments}
\label{sec:comp}
In this section we report on computational experiments conducted to test the effectiveness of the
proposed successive quadratic optimization approach on the network interdiction problem with stochastic capacities.
We compare the solution quality and the computation time with exact algorithms.
All experiments are carried out using the CPLEX 12.6.2 solver on a workstation
with a 3.60 GHz Intel Xeon E5-1650 CPU and 32 GB of main memory, using a single thread.
Default CPLEX settings are used with a few exceptions: dynamic search and the presolver are disabled to utilize the user cut callback; the branch-and-bound nodes are solved using linear outer approximation for faster enumeration; and the time limit is set to one hour.
\subsubsection*{Problem instances}
\label{subsec:inst}
We generate instances of the mean-risk network interdiction problem (\ensuremath{\text{MRNI}})
on grid graphs similar to the ones used in
\citet{cormican1998stochastic,janjarassuk2008reformulation}.
Let \textit{ $p \times q$ grid} be the graph with $p$ columns and $q$ rows of grid nodes in addition to a source and a sink node (see Figure \ref{fig:structureG}).
The source and sink nodes are connected to all the nodes in the first and last column, respectively.
The arcs incident to source or sink have infinite capacity and are not interdictable.
The arcs between adjacent columns are always directed toward the sink,
and the arcs connecting two nodes within
the same column are directed either upward or downward with equal probability.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.55 \linewidth]{Figure1_Graph_Structure_cropped2.jpg}
\caption{$p \times q$ grid graph.}
\label{fig:structureG}
\end{center}
\end{figure}
We generate two types of data: uncorrelated and correlated.
For each arc $a \in A$, the mean capacity $c_a$
and its standard deviation $\sigma_a$ are independently drawn from the integral uniform $[1, 10]$,
and the interdiction cost $\alpha_a$ is drawn from the integral uniform $[1, 3]$.
For the correlated case, the covariance matrix is constructed via a factor model: $Q = \operatorname{diag}(\sigma_1^2,\cdots,\sigma^2_{|A|})+ E F E' $,
where $F$ is an $m \times m$ factor covariance matrix and $E$ is the exposure matrix of the arcs to the factors.
$F$ is computed as $F = HH'$, where each $H_{ij}$ is drawn from uniform $[-100/pq, 100/pq]$, and each $E_{ij}$
from uniform $[0, 0.1]$ with probability 0.2 and set to 0 with probability 0.8.
The interdiction budget $\beta$ is set to $\lceil \frac{Y}{2} \rceil$,
and the risk-aversion parameter $\Omega$ is set to $\Phi^{-1}(1-\epsilon)$, where $\Phi$ is the c.d.f.\ of the standard normal distribution.
Five instances are generated for each combination of grid size $p \times q \in \{10\times 10,\ 20 \times 20,\ 30 \times 30\}$
and confidence level $1-\epsilon \in \{0.9,\ 0.95,\ 0.975\}$. The data set is available
for download at \texttt{http://ieor.berkeley.edu/$\sim$atamturk/data/prob.interdiction}.
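The generator can be sketched as follows with the standard library only. The number of factors $m$ and the arc count are placeholders (the actual arc count depends on the grid, whose topology is omitted here for brevity), and the last line computes $\Omega = \Phi^{-1}(1-\epsilon)$ for the three confidence levels.

```python
import random
from statistics import NormalDist

random.seed(0)
p, q, m = 10, 10, 4        # grid size; m factors (value assumed here)
n_arcs = 40                # placeholder; the real count depends on the grid

c     = [random.randint(1, 10) for _ in range(n_arcs)]   # mean capacities
sigma = [random.randint(1, 10) for _ in range(n_arcs)]   # std deviations
alpha = [random.randint(1, 3)  for _ in range(n_arcs)]   # interdiction costs

# Factor model: F = H H' with H_ij ~ U[-100/pq, 100/pq];
# exposures E_ij ~ U[0, 0.1] with probability 0.2, and 0 otherwise.
H = [[random.uniform(-100 / (p * q), 100 / (p * q)) for _ in range(m)]
     for _ in range(m)]
F = [[sum(H[i][k] * H[j][k] for k in range(m)) for j in range(m)]
     for i in range(m)]
E = [[random.uniform(0, 0.1) if random.random() < 0.2 else 0.0
      for _ in range(m)] for _ in range(n_arcs)]

def Q(i, j):
    # Q = diag(sigma^2) + E F E'
    v = sum(E[i][a] * F[a][b] * E[j][b] for a in range(m) for b in range(m))
    return v + (sigma[i] ** 2 if i == j else 0.0)

# Risk-aversion parameter Omega for each confidence level 1 - eps
Omega = {eps: NormalDist().inv_cdf(1 - eps) for eps in (0.1, 0.05, 0.025)}
```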
For completeness, we state the corresponding perspective optimization for (\ensuremath{\text{MRNI}}):
\begin{align*}
\min \quad & c' x + \frac{\Omega}{2t}\, x'Q x + \frac{\Omega}{2} t \\
\text{s.t.} \quad & B y \le x + z,\\
(\text{PO}-\ensuremath{\text{MRNI}}) \ \ \ \ \ \quad \quad &\alpha^{\prime} z \leq \beta, \\
&y_s = 1, \ y_t =0, \\
&x \in \{0,1\}^A, y \in \{0,1\}^N, z \in \{0,1\}^A, t \in \ensuremath{\mathbb{R}}_+.
\end{align*}
\subsubsection*{Computations}
Table~\ref{tab:ws} summarizes the performance of the successive quadratic optimization approach on the network interdiction instances.
We present the number of iterations, the computation time in seconds, and the percentage optimality gap of the solutions, separately for the uncorrelated and correlated instances. Each row represents the average over five instances for varying grid sizes and confidence levels. One sees in the table that only a few iterations are required to obtain solutions within about 1\% of optimality for both the correlated and uncorrelated instances.
While the solution times for the correlated case are higher, even the largest instances are solved under 20 seconds on average.
The computation time increases with the size of the grids, but is not affected by the confidence level $1-\epsilon$.
\begin{table}[h]
\caption{Performance of the binary local search.}
\label{tab:ws}
\footnotesize
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{c|c|rrr|rrr}
\hline \hline
\multicolumn{2}{c|}{} & \multicolumn{3}{c|}{ Uncorrelated } & \multicolumn{3}{c}{Correlated} \\
\hline
$p \times q$ & $1-\epsilon$ & iter & time & gap & iter & time & gap \\
\hline
\multirow{3}{*} {$10 \times 10$}
& 0.9 & 2.8 & 0.04 & 1.47 & 3.0 & 0.10 & 0.68 \\
& 0.95 & 3.4 & 0.05 & 0.28 & 3.0 & 0.11 & 0.74 \\
& 0.975 & 2.8 & 0.05 & 0.00 & 3.0 & 0.09 & 0.73 \\
\hline
\multirow{3}{*} {$20 \times 20$}
& 0.9 & 3.0 & 0.49 & 1.54 & 4.0 & 2.74 & 0.35 \\
& 0.95 & 2.8 & 0.36 & 1.06 & 4.0 & 2.68 & 0.44 \\
& 0.975 & 2.8 & 0.45 & 1.07 & 4.0 & 3.21 & 5.86 \\
\hline
\multirow{3}{*}{$30 \times 30$}
& 0.9 & 3.0 & 2.24 & 1.67 & 5.0 & 16.00 & 0.46 \\
& 0.95 & 3.0 & 2.54 & 1.26 & 5.8 & 16.87 & 0.15 \\
& 0.975 & 3.0 & 2.58 & 1.19 & 5.8 & 18.57 & 0.20 \\
\hline
\multicolumn{2}{c|}{\textbf{avg} \rule{0pt}{10pt}}
& $\BD{2.96}$ & $\BD{0.98}$ & $\BD{1.06}$ & $\BD{4.18}$ & $\BD{6.71}$ & $\BD{1.07}$ \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}[t]
\footnotesize
\centering
\caption{Performance of b\&b and b\&c algorithms.}
\label{tb:diag5}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{c|c|rrrrr|rrrrrr}
\hline \hline
\multicolumn{2}{c}{} \rule{0pt}{10pt} & \multicolumn{11}{|c}{ Uncorrelated instances } \\ \hline
\multicolumn{2}{c}{} & \multicolumn{5}{|c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$p \times q$ & $1-\epsilon$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
\multirow{3}{*} {$10 \times 10$}
& 0.9 & 15.1 & 0 & 1 & 0.0\phantom{(0)} & 457 & 101 & 5.1 & 0 & 3 & 0.0\phantom{(0)} & 11 \\% & 4.98 & 4.56 & 0.0\phantom{(0)} & 12 & 0.3 & 103 \\
& 0.95 & 17.0 & 1 & 1 & 0.0\phantom{(0)} & 1,190 & 127 & 5.6 & 2 & 4 & 0.0\phantom{(0)} & 75 \\
& 0.975 & 17.9 & 1 & 2 & 0.0\phantom{(0)} & 1,194 & 137 & 6.0 & 3 & 4 & 0.0\phantom{(0)} & 73 \\
\hline
\multirow{3}{*} {$20 \times 20$}
& 0.9 & 17.9 & 66 & 169 & 0.0\phantom{(0)} & 23,093 & 463 & 10.2 & 23 & 44 & 0.0\phantom{(0)} & 602 \\
& 0.95 & 20.0 & 469 & 676 & 0.0\phantom{(0)} & 56,937 & 579 & 11.4 & 48 & 102 & 0.0\phantom{(0)} & 4,850 \\
& 0.975 & 21.6 & 404 & 1,365 & 0.5(1) & 91,786 & 621 & 12.5 & 79 & 262 & 0.0\phantom{(0)} & 16,883 \\
\hline
\multirow{3}{*}{$30 \times 30$}
& 0.9 & 19.1 & 2,338 & 3,258 & 4.6(4) & 65,475 & 680 & 12.6 & 666 & 838 & 0.0\phantom{(0)} & 11,171 \\
& 0.95 & 21.3 & 3,315 & 3,600 & 10.3(5) & 61,754 & 752 & 14.3 & 850 & 1,313 & 0.0\phantom{(0)} & 22,420 \\
& 0.975 & 23.1 & 3,535 & 3,600 & 15.3(5) & 67,951 & 767 & 15.9 & 1,973 & 2,315 & 1.6(2) & 35,407 \\
\hline
\multicolumn{2}{c|}{\textbf{avg} \rule{0pt}{10pt} }
& $\BD{19.2}$ & $\BD{1,125}$ & $\BD{1,408}$ & $\BD{3.4(15)}$ & $\BD{41,093}$ & $\BD{470}$ & $\BD{10.4}$ & $\BD{404}$ & $\BD{543}$ & $\BD{0.2(2)}$ & $\BD{10,166}$ \\
\hline
\multicolumn{2}{c}{} \rule{0pt}{10pt} & \multicolumn{11}{|c}{ Correlated instances } \\ \hline
\multicolumn{2}{c}{} & \multicolumn{5}{|c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$p \times q$ & $1-\epsilon$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
\multirow{3}{*} {$10 \times 10$}
& 0.9 & 10.5 & 2 & 4 & 0.0\phantom{(0)} & 268 & 114 & 5.8 & 4 & 7 & 0.0\phantom{(0)} & 14 \\
& 0.95 & 14.5 & 1 & 2 & 0.0\phantom{(0)} & 727 & 126 & 8.0 & 2 & 6 & 0.0\phantom{(0)} & 44 \\
& 0.975 & 16.2 & 2 & 2 & 0.0\phantom{(0)} & 1,105 & 120 & 10.3 & 2 & 5 & 0.0\phantom{(0)} & 67 \\
\hline
\multirow{3}{*} {$20 \times 20$}
& 0.9 & 15.0 & 49 & 92 & 0.0\phantom{(0)} & 11,783 & 341 & 12.0 & 26 & 31 & 0.0\phantom{(0)} & 1,199 \\
& 0.95 & 16.9 & 75 & 314 & 0.0\phantom{(0)} & 30,536 & 400 & 13.8 & 48 & 81 & 0.0\phantom{(0)} & 3,567 \\
& 0.975 & 18.2 & 802 & 615 & 0.0\phantom{(0)} & 66,759 & 420 & 15.1 & 66 & 129 & 0.0\phantom{(0)} & 6,911 \\
\hline
\multirow{3}{*}{$30 \times 30$}
& 0.9 & 12.1 & 427 & 873 & 0.0\phantom{(0)} & 21,748 & 343 & 9.3 & 130 & 246 & 0.0\phantom{(0)} & 4,325 \\
& 0.95 & 13.3 & 527 & 1,436 & 0.0\phantom{(0)} & 37,448 & 420 & 10.3 & 249 & 295 & 0.0\phantom{(0)} & 4,559 \\
& 0.975 & 13.8 & 1,776 & 2,465 & 0.4(1) & 59,202 & 529 & 10.8 & 673 & 810 & 0.0\phantom{(0)} & 12,093 \\
\hline
\multicolumn{2}{c|}{\textbf{avg} \rule{0pt}{10pt} }
&$\BD{14.5}$ & $\BD{386}$ & $\BD{666}$ & $\BD{0.1(1)}$ & $\BD{25,509}$ & $\BD{313}$ & $\BD{10.6}$ & $\BD{133}$ & $\BD{179}$ & $\BD{0.0\phantom{0)}}$ & $\BD{3,642}$ \\
\hline \hline
\end{tabular}
\end{table}
The optimal/best known objective values used for computing the optimality gaps in Table~\ref{tab:ws} are obtained with the CPLEX branch-and-bound algorithm.
To provide a comparison with the successive quadratic optimization procedure, we summarize the performance of the exact algorithm on the uncorrelated and correlated instances in Table~\ref{tb:diag5}.
In each column, we report the percentage integrality gap at the root node (rgap),
the time spent until the best feasible solution is obtained (stime),
the total solution time in CPU seconds (time),
the percentage gap between the best upper bound and the lower bound at termination (egap),
and the number of nodes explored (nodes).
If the time limit is reached before proving optimality, the number of instances unsolved (\#) is shown next to egap. Each row of the tables represents the average for five instances.
Observe that the solution times with the CPLEX branch-and-bound algorithm are much larger compared to the successive quadratic optimization approach: 1,408 secs. vs. 1 sec. for the uncorrelated instances and 666 secs. vs. 7 secs. for the correlated instances. The difference in the performance is especially striking for the 30 $\times$ 30 instances, of which half are not solved to optimality within the time limit. Many of these unsolved instances are terminated with large optimality gaps (egap).
In order to strengthen the convex relaxation of 0-1 problems with a mean-risk objective, one can utilize the polymatroid inequalities \cite{AG:poly}. Polymatroid inequalities exploit the submodularity of the mean-risk objective in the diagonal case. They are extended to the (non-diagonal) correlated case as well as to mixed 0-1 problems in \cite{atamturk2008polymatroids}.
To improve the performance of the exact algorithm, we also test it by adding the polymatroid cuts.
It is clear in Table~\ref{tb:diag5} that the polymatroid cuts have a very positive impact on the exact algorithm. The root gaps are reduced significantly with the addition of the polymatroid cuts.
Whereas 16 of the instances are unsolved within the time limit with default CPLEX, all but two instances are solved to optimality when adding the cuts. Nevertheless, the solution times even with the cutting planes are much larger compared to the successive quadratic optimization approach: 543 secs. vs. 1 sec. for the uncorrelated case
and 179 secs. vs. 7 secs. for the correlated case.
Branch-and-bound and branch-and-cut algorithms spend a significant amount of the solution time proving optimality rather than finding feasible solutions. Therefore, for a fairer comparison, it is also of interest to check the time to the best feasible solution, which is reported under the column stime in Table~\ref{tb:diag5}. The average time to the best solution is 1,125 and 386 seconds for the branch-and-bound algorithm, and 404 and 133 seconds for the branch-and-cut algorithm, for the uncorrelated and correlated cases, respectively.
Figure~\ref{fig:pp} presents the progress of the incumbent solution over time
for one of the $30 \times 30$ instances.
The vertical axis shows the distance to the optimal value (100\%). The binary search algorithm finds a solution within 3\% of the optimal in under 3 seconds. It takes 1,654 seconds for the default branch-and-bound algorithm and 338 seconds for the branch-and-cut algorithm to find a solution at least as good.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.65 \linewidth]{PP_165_4_steps.png}
\caption{Performance profile of the algorithms.}
\label{fig:pp}
\end{center}
\end{figure}
\ins{
The next set of experiments are done to test the impact of the budget constraint on the performance of the algorithms. For these experiments, the instances with $1-\epsilon = 0.95$ and grid size $20 \times 20$ are solved with varying levels of budgets. Specifically, the budget parameter
$\beta$ is set to $\frac{\bar{\alpha}Y}{\eta}$ for $\eta \in \{2, 4, 6, 8, 10, 20\}$,
where $\bar{\alpha}$ denotes the mean value of $\alpha_a$.
As before, each row of the Tables \ref{tb:ws_budget} -- \ref{tb:uncorr_budget} presents the averages for five instances.
Observe that the binary search algorithm is not as sensitive to the budget as the exact algorithms. For the exact algorithms, while the root gap decreases with larger budget values, the solution time tends to increase, especially for the uncorrelated instances.
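As a quick illustration of the budget parameterization above (our sketch, with placeholder values for $\bar{\alpha}$ and $Y$, which in the actual experiments depend on the generated instances):

```python
# Illustrative sketch of the budget levels beta = alpha_bar * Y / eta used in
# these experiments.  The values of alpha_bar and Y below are placeholders;
# in the actual instances they depend on the generated interdiction costs.
alpha_bar = 15.0                                   # hypothetical mean of alpha_a
Y = 40.0                                           # hypothetical scaling constant
etas = (2, 4, 6, 8, 10, 20)
budgets = [alpha_bar * Y / eta for eta in etas]
print(budgets)
```

Larger $\eta$ thus corresponds to a tighter interdiction budget.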
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary search for varying budgets.}
\label{tb:ws_budget}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{c|rrr|rrr}
\hline \hline
& \multicolumn{3}{c|}{ Uncorrelated } & \multicolumn{3}{c}{Correlated} \\
\hline
$\eta$ & iter & time & gap & iter & time & gap \\
\hline
2 & 3.0 & 0.30 & 0.00 & 4.0 & 1.33 & 0.27 \\
4 & 2.8 & 0.36 & 1.06 & 4.0 & 2.68 & 0.44 \\
6 & 2.8 & 0.38 & 1.02 & 4.0 & 1.70 & 0.54 \\
8 & 3.0 & 0.35 & 0.17 & 4.0 & 1.69 & 0.26 \\
10 & 3.0 & 0.36 & 0.03 & 4.0 & 1.48 & 1.50 \\
\hline
\textbf{avg}
&$\BD{2.92}$ & $\BD{0.35}$ & $\BD{0.46}$ & $\BD{4.0}$ & $\BD{1.78}$ & $\BD{0.60}$ \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}[t]
\footnotesize
\centering
\caption{Performance of b\&b and b\&c for varying budgets.}
\label{tb:uncorr_budget}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{c|rrrrr|rrrrrr}
\hline \hline
\rule{0pt}{10pt} & \multicolumn{11}{c}{ Uncorrelated instances } \\ \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$\eta$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
2 & 24.4 & 3 & 6 & 0.0\phantom{(0)} & 680 & 104 & 7.1 & 7 & 8 & 0.0\phantom{(0)} & 18 \\
4 & 20.0 & 469 & 676 & 0.0\phantom{(0)} & 56,937 & 579 & 11.4 & 48 & 102 & 0.0\phantom{(0)} & 4850 \\
6 & 18.7 & 888 & 1,412 & 0.1(1) & 123,930 & 626 & 11.5 & 94 & 124 & 0.0\phantom{(0)} & 6457 \\
8 & 18.6 & 343 & 1,618 & 0.0\phantom{(0)} & 121,157 & 563 & 12.5 & 167 & 327 & 0.0\phantom{(0)} & 25,062 \\
10 & 18.1 & 898 & 1,705 & 1.1(1) & 121,624 & 523 & 12.3 & 225 & 300 & 0.0\phantom{(0)} & 21,130 \\
\hline
{\textbf{avg} \rule{0pt}{10pt} } &
$\BD{20.0}$ & $\BD{520}$ & $\BD{1083}$ & $\BD{0.2(2)}$ & $\BD{84,865}$ & $\BD{479}$ & $\BD{10.9}$ & $\BD{108}$ & $\BD{172}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{11,503}$ \\
\hline
\rule{0pt}{10pt} & \multicolumn{11}{c}{ Correlated instances } \\ \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$\eta$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
2 & 25.4 & 6 & 11 & 0.0\phantom{(0)} & 1,325 & 128 & 19.8 & 3 & 9 & 0.0\phantom{(0)} & 125 \\
4 & 16.9 & 75 & 314 & 0.0\phantom{(0)} & 30,536 & 400 & 13.8 & 48 & 81 & 0.0\phantom{(0)} & 3,567 \\
6 & 13.4 & 86 & 246 & 0.0\phantom{(0)} & 32,990 & 408 & 10.6 & 43 & 94 & 0.0\phantom{(0)} & 7,574 \\
8 & 13.2 & 105 & 199 & 0.0\phantom{(0)} & 27,514 & 386 & 10.9 & 38 & 62 & 0.0\phantom{(0)} & 4,997 \\
10 & 12.3 & 37 & 129 & 0.0\phantom{(0)} & 20,880 & 330 & 9.8 & 33 & 35 & 0.0\phantom{(0)} & 1,870 \\
\hline
{\textbf{avg} \rule{0pt}{10pt} } &
$\BD{16.2}$ & $\BD{61}$ & $\BD{180}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{22,649}$ & $\BD{330}$ & $\BD{13.0}$ & $\BD{33}$ & $\BD{56}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{3,627}$ \\
\hline \hline
\end{tabular}
\end{table}
\ignore{
\begin{table}[t]
\footnotesize
\centering
\caption{Correlated case for varying budgets: $20 \times 20$.}
\label{tb:corr_budget}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{c|rrrrr|rrrrrr}
\hline \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$\eta$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
2 & 25.4 & 6 & 11 & 0.0\phantom{(0)} & 1,325 & 128 & 19.8 & 3 & 9 & 0.0\phantom{(0)} & 125 \\
4 & 16.9 & 75 & 314 & 0.0\phantom{(0)} & 30,536 & 400 & 13.8 & 48 & 81 & 0.0\phantom{(0)} & 3,567 \\
6 & 13.4 & 86 & 246 & 0.0\phantom{(0)} & 32,990 & 408 & 10.6 & 43 & 94 & 0.0\phantom{(0)} & 7,574 \\
8 & 13.2 & 105 & 199 & 0.0\phantom{(0)} & 27,514 & 386 & 10.9 & 38 & 62 & 0.0\phantom{(0)} & 4,997 \\
10 & 12.3 & 37 & 129 & 0.0\phantom{(0)} & 20,880 & 330 & 9.8 & 33 & 35 & 0.0\phantom{(0)} & 1,870 \\
\hline
{\textbf{avg} \rule{0pt}{10pt} } &
$\BD{16.2}$ & $\BD{61}$ & $\BD{180}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{22,649}$ & $\BD{330}$ & $\BD{13.0}$ & $\BD{33}$ & $\BD{56}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{3,627}$ \\
\hline \hline
\end{tabular}
\end{table}
}
Next, we present the experiments performed to test the effect of the interdiction cost parameter $\alpha$.
New instances with $1-\epsilon = 0.95$ and grid size $20 \times 20$ are generated with
varying $\alpha_a$ drawn from integral uniform $[r, 3r]$ for $r \in \{5, 10, 15, 20, 25\}$.
To keep the relative scales of the parameters consistent with the previous experiments,
the budget parameter $\beta$ is set to $\frac{\bar{\alpha} Y}{4}$.
Tables \ref{tb:ws_cost} -- \ref{tb:uncorr_cost} summarize the results. The optimality gaps for the binary search algorithm
are higher for these experiments with similar run times. Both the binary search and the exact algorithms appear to be insensitive to the changes in the interdiction cost in our experiments.
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary search for varying interdiction costs.}
\label{tb:ws_cost}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{c|rrr|rrr}
\hline \hline
& \multicolumn{3}{c|}{ Uncorrelated } & \multicolumn{3}{c}{Correlated} \\
\hline
$r$ & iter & time & gap & iter & time & gap \\
\hline
5 & 3.0 & 0.39 & 3.93 & 4.0 & 2.15 & 0.22 \\
10 & 2.8 & 0.42 & 1.94 & 4.0 & 2.05 & 4.29 \\
15 & 2.8 & 0.41 & 3.43 & 4.0 & 2.17 & 0.48 \\
20 & 2.8 & 0.41 & 1.20 & 4.0 & 2.24 & 0.80 \\
25 & 2.8 & 0.44 & 1.76 & 4.0 & 2.22 & 0.63 \\
\hline
\textbf{avg}
&$\BD{2.84}$ & $\BD{0.41}$ & $\BD{2.45}$ & $\BD{4.0}$ & $\BD{2.17}$ & $\BD{1.29}$ \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}[t]
\footnotesize
\centering
\caption{Performance of b\&b and b\&c for varying interdiction costs.}
\label{tb:uncorr_cost}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{c|rrrrr|rrrrrr}
\hline \hline
\rule{0pt}{10pt} & \multicolumn{11}{c}{ Uncorrelated instances } \\ \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$r$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
5 & 22.4 & 969 & 2,405 & 0.6(2) & 299,432 & 705 & 15.1 & 296 & 428 & 0.0\phantom{(0)} & 26,404 \\
10 & 22.5 & 1,299 & 2,552 & 1.8(2) & 282,257 & 725 & 15.4 & 802 & 944 & 0.0\phantom{(0)} & 56,725 \\
15 & 22.4 & 1,611 & 2,383 & 3.3(3) & 267,595 & 686 & 15.3 & 815 & 1,319 & 0.5(1) & 72,508 \\
20 & 22.2 & 1,436 & 2,905 & 4.4(4) & 279,349 & 704 & 14.9 & 336 & 775 & 0.0\phantom{(0)} & 44,972 \\
25 & 22.3 & 1,502 & 2,905 & 4.2(4) & 339,789 & 691 & 15.2 & 576 & 985 & 0.2(1) & 57,971 \\
\hline
\textbf{avg} \rule{0pt}{10pt} &
$\BD{22.4}$ & $\BD{1363}$ & $\BD{2630}$ & $\BD{2.8(15)}$ & $\BD{293,684}$ & $\BD{702}$ & $\BD{15.2}$ & $\BD{565}$ & $\BD{890}$ & $\BD{0.1(2)}$ & $\BD{51,716}$ \\ \hline
\rule{0pt}{10pt} & \multicolumn{11}{c}{ Correlated instances } \\ \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$r$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
5 & 16.7 & 302 & 887 & 0.0\phantom{(0)} & 96,671 & 449 & 13.9 & 72 & 135 & 0.0\phantom{(0)} & 10,406 \\
10 & 17.2 & 644 & 1,020 & 0.0\phantom{(0)} & 118,975 & 442 & 14.4 & 163 & 209 & 0.0\phantom{(0)} & 17,283 \\
15 & 17.0 & 175 & 1,359 & 0.5(1) & 124,748 & 434 & 14.2 & 91 & 157 & 0.0\phantom{(0)} & 13,486 \\
20 & 16.9 & 800 & 1,434 & 0.0\phantom{(0)} & 140,953 & 418 & 14.0 & 57 & 263 & 0.0\phantom{(0)} & 21,663 \\
25 & 16.7 & 379 & 1,276 & 0.0\phantom{(0)} & 140,621 & 440 & 13.8 & 108 & 200 & 0.0\phantom{(0)} & 16,675 \\
\hline
\textbf{avg} \rule{0pt}{10pt} &
$\BD{16.9}$ & $\BD{460}$ & $\BD{1195}$ & $\BD{0.1(1)}$ & $\BD{124,393}$ & $\BD{436}$ & $\BD{14.0}$ & $\BD{98}$ & $\BD{193}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{15,902}$ \\
\hline \hline
\end{tabular}
\end{table}
\ignore{
\begin{table}[t]
\footnotesize
\centering
\caption{Correlated case for varying interdiction costs.}
\label{tb:corr_cost}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{c|rrrrr|rrrrrr}
\hline \hline
& \multicolumn{5}{c|}{ Cplex } & \multicolumn{6}{c}{Cplex $+$ cuts} \\
\hline
$r$ & rgap & stime & time & egap (\#) & nodes & cuts & rgap & stime & time & egap (\#) & nodes \\
\hline
5 & 16.7 & 302 & 887 & 0.0\phantom{(0)} & 966,71 & 449 & 13.9 & 72 & 135 & 0.0\phantom{(0)} & 10,406 \\
10 & 17.2 & 644 & 1,020 & 0.0\phantom{(0)} & 118,975 & 442 & 14.4 & 163 & 209 & 0.0\phantom{(0)} & 17,283 \\
15 & 17.0 & 175 & 1,359 & 0.5(1) & 124,748 & 434 & 14.2 & 91 & 157 & 0.0\phantom{(0)} & 13,486 \\
20 & 16.9 & 800 & 1,434 & 0.0\phantom{(0)} & 140,953 & 418 & 14.0 & 57 & 263 & 0.0\phantom{(0)} & 21,663 \\
25 & 16.7 & 379 & 1,276 & 0.0\phantom{(0)} & 140,621 & 440 & 13.8 & 108 & 200 & 0.0\phantom{(0)} & 16,675 \\
\hline
\textbf{avg} \rule{0pt}{10pt} &
$\BD{16.9}$ & $\BD{460}$ & $\BD{1195}$ & $\BD{0.1(1)}$ & $\BD{124393}$ & $\BD{436}$ & $\BD{14.0}$ & $\BD{98}$ & $\BD{193}$ & $\BD{0.0\phantom{(0)}}$ & $\BD{15,902}$ \\
\hline \hline
\end{tabular}
\end{table}
}
Finally, we test the performance of the binary search algorithm for larger grid sizes up to $100 \times 100$
to see how it scales up. Five instances of each size are generated as in our original set of instances. The exact algorithms are not run for these large instances; therefore, the gap is computed against the convex relaxation of the problem and, hence, it provides an upper bound on the optimality gap.
\autoref{tb:ws_large} reports the number of iterations, the time spent for the algorithm, and the percentage integrality gap, that is the gap between the upper bound found by the algorithm and the lower bound from the convex relaxation.
Observe that the $100 \times 100$ instances have about 20,000 arcs. The correlated instances for this size could not be run due to the memory limit. For the $20 \times 20$ instances, the reported upper bounds 20.73\% and 17.31\% on the optimality gap should be compared with the actual optimality gaps 1.06\% and 0.44\% in Table~\ref{tab:ws}. The large difference between the exact gap in Table~\ref{tab:ws} and igap in Table~\ref{tb:ws_large} is indicative of poor lower bounds from the convex relaxations, rather than poor upper bounds. The binary search algorithm converges in a small number of iterations for these large instances as well; however, solving the quadratic 0-1 optimization problems at each iteration takes significantly longer.
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary local search for larger networks.}
\label{tb:ws_large}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{c|rrr|rrr}
\hline \hline
& \multicolumn{3}{c|}{ Uncorrelated } & \multicolumn{3}{c}{Correlated} \\
\hline
$p \times q$ & iter & time & igap & iter & time & igap \\
\hline
$20 \times 20$ & 2.8 & 0.36 & 20.73 & 4.0 & 2.68 & 17.31\\
$40 \times 40$ & 3.0 & 5.33 & 26.15 & 5.0 & 80.47 & 11.81\\
$60 \times 60$ & 3.2 & 35.66 & 27.09 & 6.0 & 502.14 & 10.12\\
$80 \times 80$ & 3.6 & 141.96 & 27.31 & 6.0 & 3,199.11 & 8.53\\
$100 \times 100$ & 10.2 & 4,991.3 & 31.08 & - & - & - \\
\hline
\textbf{avg}
& $\BD{4.6}$ & $\BD{1,034.92}$ & $\BD{26.47}$ & $\BD{5.3}$ & $\BD{945.97}$ & $\BD{11.94}$ \\
\hline \hline
\end{tabular}
\end{table}
}
\ignore{
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary local search for larger networks.}
\label{tb:ws_large}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|rrrr|rrrr}
\hline \hline
& \multicolumn{4}{c|}{ Uncorrelated } & \multicolumn{4}{c}{Correlated} \\
\hline
$p \times q$ & time & iter & algtime & igap & time & iter & algtime & igap \\
\hline
$20 \times 20$ & 1.51 & 2.8 & 0.36 & 20.73 & 1.9 & 4.0 & 2.16 & 17.31\\
$40 \times 40$ & 59.03 & 3.0 & 5.33 & 26.15 & 121.19 & 5.0 & 80.47 & 11.81\\
$60 \times 60$ & 671.88 & 3.2 & 35.66 & 27.09 & 1,322.45 & 6.0 & 502.14 & 10.12\\
$80 \times 80$ & 3,398.32 & 3.6 & 141.96 & 27.31 & 3,812.97 & 6.0 & 3,199.11 & 8.53\\
$100 \times 100$ & 12,899.06 & 10.2 & 4,991.3 & 31.08 & 14,284.36 & - & - & - \\
\hline
\textbf{avg}
& $\BD{3,405.96}$ & $\BD{4.6}$ & $\BD{1,034.92}$ & $\BD{26.47}$ & $\BD{1,314.62}$ & $\BD{5.3}$ & $\BD{945.97}$ & $\BD{11.94}$ \\
\hline \hline
\end{tabular}
\end{table}
}
\exclude{
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary local search for larger networks.}
\label{tb:ws_large}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|rr|rr}
\hline \hline
& \multicolumn{2}{c|}{ Uncorrelated } & \multicolumn{2}{c}{Correlated} \\
\hline
$Y$ & iter & time & iter & time \\
\hline
40 & 3.0 & 5.48 & 5.0 & 67.45 \\
50 & 3.0 & 15.43 & 5.2 & 216.50 \\
60 & 3.2 & 35.98 & 6.0 & 528.78 \\
70 & 3.2 & 65.75 & 6.0 & 1446.10 \\
\hline
\textbf{avg}
&$\BD{3.1}$ & $\BD{30.66}$ & $\BD{5.5}$ & $\BD{564.71}$ \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}
\footnotesize
\centering
\caption{Performance of the binary local search for larger networks.}
\label{tb:ws_large}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|rr|rr}
\hline \hline
& \multicolumn{2}{c|}{ Uncorrelated } & \multicolumn{2}{c}{Correlated} \\
\hline
$Y$ & iter & time & iter & time \\
\hline
40 & 3.0 & 5.48 & 5.0 & 67.45 \\
50 & 3.0 & 15.43 & 5.2 & 216.50 \\
60 & 3.2 & 35.98 & 6.0 & 528.78 \\
70 & 3.2 & 65.75 & 6.0 & 1446.10 \\
80 & 3.6 & 142.40 & - & - \\
\hline
\textbf{avg}
&$\BD{3.2}$ & $\BD{53.01}$ & $\BD{5.5}$ & $\BD{564.71}$ \\
\hline \hline
\end{tabular}
\end{table}
}
\section{Conclusion} \label{sec:conclusion}
In this paper we introduce a successive quadratic optimization procedure, embedded in a bisection search, for finding high-quality solutions to discrete mean-risk minimization problems with a conic quadratic objective. The search algorithm is applied to a non-convex upper-bounding function that provides tight values at local minima.
Computations with the network interdiction problem with stochastic capacities indicate that
the proposed method finds solutions within 1--\rep{4}{2}\% of optimality in a small fraction of the time required by exact branch-and-bound and branch-and-cut algorithms. Although we demonstrate the approach for the network interdiction problem with stochastic capacities, since the method is agnostic to the constraints of the problem, it can be applied to any \ins{0-1} optimization problem with a mean-risk objective.
\section*{Acknowledgement} This research is supported, in part, by grant FA9550-10-1-0168 from the Office
of the Assistant Secretary of Defense for Research and Engineering.
\bibliographystyle{plainnat}
\section{Introduction}
The classical Enestr\"om-Kakeya Theorem states the following
\begin{thma}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ be a polynomial with real coefficients satisfying $$0<a_0\leq a_1\leq a_2\leq a_3\leq\ldots\leq a_n.$$ Then all the zeros of $p(z)$ lie in $|z|\leq 1.$
\end{thma}
By putting a restriction on the coefficients of a polynomial similar to that of the Enestr\"om-Kakeya Theorem, Mohammad \cite{Moh} proved the following on the number of zeros that can be found in a specified disk.
\begin{thmb}\label{thm1.2}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ be a polynomial with real coefficients satisfying $0<a_0\leq a_1\leq a_2\leq a_3\leq\ldots\leq a_n.$ Then the number of zeros of $p$ in $|z|\leq \dfrac{1}{2}$ does not exceed $$1+\dfrac{1}{\log 2}\log\Big(\dfrac{a_n}{a_0}\Big).$$
\end{thmb}
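As an illustrative numerical sanity check of Theorem B (our addition, not part of the original argument), one can count the zeros of a sample polynomial with positive nondecreasing coefficients and compare against the bound:

```python
import numpy as np

# Sample polynomial with 0 < a_0 <= a_1 <= ... <= a_n.
a = np.array([1.0, 2.0, 3.0, 5.0, 8.0])      # a_0, ..., a_n
roots = np.roots(a[::-1])                     # np.roots expects highest degree first
n_inside = int(np.sum(np.abs(roots) <= 0.5))  # zeros in |z| <= 1/2
bound = 1 + np.log(a[-1] / a[0]) / np.log(2)  # Theorem B bound
assert n_inside <= bound
```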
In her dissertation work, Dewan \cite{Dew} weakened the hypotheses of Theorem B and proved the following two results for polynomials with complex coefficients.
\begin{thmc}\label{thm1.3}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ be a polynomial such that $|\arg a_j-\beta|\leq\alpha\leq\dfrac{\pi}{2}$ for $j\in \{0,1,2,\ldots,n\}$ and for some real numbers $\alpha$ and $\beta,$ and $$0<|a_0|\leq |a_1|\leq |a_2|\leq |a_3|\leq\ldots\leq |a_n|.$$ Then the number of zeros of $p$ in $|z|\leq 1/2$ does not exceed $$\dfrac{1}{\log 2}\log\dfrac{|a_n|(\cos\alpha+\sin\alpha+1)+2\sin\alpha\sum_{j=0}^{n-1}|a_j|}{|a_0|}.$$
\end{thmc}
\begin{thmd}\label{thm1.4}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ where $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j$ and $0<\alpha_0\leq \alpha_1\leq \alpha_2\leq\cdots\leq \alpha_n.$ Then the number of zeros of $p$ in $|z|\leq1/2$ does not exceed $$1+\dfrac{1}{\log 2}\log\dfrac{\alpha_n+\sum_{j=0}^{n}|\beta_j|}{|a_0|}.$$
\end{thmd}
Pukhta \cite{Puk} generalized Theorems C and D by finding the number of zeros in $|z|\leq\delta$ for $0<\delta<1.$ The next Theorem, due to Pukhta, deals with a monotonicity condition on the moduli of the coefficients.
\begin{thme}\label{thm1.5}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ be a polynomial such that $|\arg a_j-\beta|\leq\alpha\leq\dfrac{\pi}{2}$ for $j\in \{0,1,2,\ldots,n\}$ and for some real $\alpha$ and $\beta,$ and $$0<|a_0|\leq |a_1|\leq |a_2|\leq |a_3|\leq\ldots\leq |a_n|.$$ Then the number of zeros of $p$ in $|z|\leq \delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{|a_n|(\cos\alpha+\sin\alpha+1)+2\sin\alpha\sum_{j=0}^{n-1}|a_j|}{|a_0|}.$$
\end{thme}
Pukhta \cite{Puk} also gave a result which involved a monotonicity condition on the real part of the coefficients. Though the proof presented by Pukhta is correct, there was a slight typographical error in the statement of the result as it appeared in print. The correct statement of the theorem is as follows.
\begin{thmf}\label{thm1.6}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j}$ where $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j$ and $0<\alpha_0\leq \alpha_1\leq \alpha_2\leq\cdots\leq \alpha_n.$ Then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log2\Bigg[\dfrac{\alpha_n+\sum_{j=0}^{n}|\beta_j|}{|a_0|}\Bigg].$$
\end{thmf}
In this paper we generalize Theorem F and prove the following.
\begin{thm1}\label{thm1.7}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j},$ $a_0\neq 0,$ be a polynomial of degree $n$ with complex coefficients, $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j.$ If for some real number $t$ and some $\lambda \in\{0,1,2,\ldots,n\},$ $$ t+\alpha_{n}\leq \alpha_{n-1}\leq \ldots\leq\alpha_{\lambda}\geq\alpha_{\lambda - 1}\geq\alpha_{\lambda - 2}\geq \ldots\geq \alpha_{1}\geq \alpha_{0},$$
then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_1}{|a_0|},$$
where $$M_1=|\alpha_0|-\alpha_0+|\alpha_n|-\alpha_n+|t|-t+2\alpha_{\lambda}+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}.$$
\end{thm1}
For $t=0$ we get the following.
\begin{cor1}\label{cor1}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j},$ $a_0\neq 0,$ be a polynomial of degree $n$ with complex coefficients, $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j.$ If for some $\lambda \in\{0,1,2,\ldots,n\},$ $$\alpha_{n}\leq \alpha_{n-1}\leq \ldots\leq\alpha_{\lambda}\geq\alpha_{\lambda - 1}\geq\alpha_{\lambda - 2}\geq \ldots\geq \alpha_{1}\geq \alpha_{0},$$
then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_1}{|a_0|},$$
where $$M_1=|\alpha_0|-\alpha_0+|\alpha_n|-\alpha_n+2\alpha_{\lambda}+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}.$$
\end{cor1}
If $\lambda =0,$ then Corollary \ref{cor1} reduces to
\begin{cor2}\label{cor2}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j},$ $a_0\neq 0,$ be a polynomial of degree $n$ with complex coefficients, $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j.$ Suppose $$\alpha_{n}\leq \alpha_{n-1}\leq \ldots\leq\alpha_{0},$$
then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_1}{|a_0|},$$
where $$M_1=|\alpha_0|+\alpha_0+|\alpha_n|-\alpha_n+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}.$$
\end{cor2}
If instead $\lambda =n,$ then Corollary \ref{cor1} becomes
\begin{cor3}\label{cor3}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j},$ $a_0\neq 0,$ be a polynomial of degree $n$ with complex coefficients, $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j.$ Suppose $$\alpha_{n}\geq\alpha_{n - 1}\geq\alpha_{n - 2}\geq \ldots\geq \alpha_{1}\geq \alpha_{0},$$
then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_1}{|a_0|},$$
where $$M_1=|\alpha_0|-\alpha_0+|\alpha_n|+\alpha_n+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}.$$
\end{cor3}
If we additionally assume $\alpha_{0}>0,$ then Corollary 3 reduces to Theorem F. Instead of proving Theorem 1, we prove the following more general result.
\begin{thm2}
Let $p(z)=\displaystyle{\sum_{j=0}^{n}a_jz^j},$ $a_0\neq 0,$ be a polynomial of degree $n$ with complex coefficients, $Re(a_j)=\alpha_j$ and $Im(a_j)=\beta_j$ for all $j.$ If for some real numbers $t$ and $s,$ and some $\lambda \in\{0,1,2,\ldots,n\},$ $$ t+\alpha_{n}\leq \alpha_{n-1}\leq \ldots\leq\alpha_{\lambda}\geq\alpha_{\lambda - 1}\geq\alpha_{\lambda - 2}\geq \ldots\geq \alpha_{1}\geq \alpha_{0}-s,$$
then the number of zeros of $p$ in $|z|\leq\delta,$ $0<\delta<1,$ does not exceed $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_2}{|a_0|},$$
where $$M_2=|\alpha_0|-\alpha_0+|\alpha_n|-\alpha_n+|t|-t+|s|+s+2\alpha_{\lambda}+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}.$$
\end{thm2}
Clearly $M_2$ is nonnegative.
\section{Lemma}
For the proof of our result we shall make use of the following result (see page 171 of the second edition of \cite{Tit}).
\begin{lem}
Let $F(z)$ be analytic in $|z|\leq R.$ Let $|F(z)|\leq M$ in the disk $|z|\leq R$ and suppose $F(0)\neq 0.$ Then for $0<\delta<1$ the number of zeros of $F(z)$ in the disk $|z|\leq \delta R$ is less than $$\dfrac{1}{\log 1/\delta}\log\dfrac{M}{|F(0)|}.$$
\end{lem}
\section{Proof of the Theorem}
\begin{proof}
Consider the polynomial
\begin{align*}
g(z)&=(1-z)p(z)\\
&=-a_{n}z^{n+1}+\displaystyle{\sum_{j=1}^{n}(a_j-a_{j-1})z^j}+a_0
\end{align*}
For $|z|=1,$
\begin{align*}
|g(z)|& \leq |a_{n}|+\displaystyle{\sum_{j=1}^{n}|a_j-a_{j-1}|}+|a_0|\\
&\leq |\alpha_n|+|\beta_n|+\displaystyle{\sum_{j=1}^{n}|\alpha_j-\alpha_{j-1}|}+\displaystyle{\sum_{j=1}^{n}|\beta_j-\beta_{j-1}|}+|\alpha_0|+|\beta_0|\\
&\leq |\alpha_n|+|\alpha_0|+\displaystyle{\sum_{j=1}^{n}|\alpha_j-\alpha_{j-1}|}+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}\\
&\leq |\alpha_n|+|\alpha_0|+\displaystyle{\sum_{j=2}^{n-2}|\alpha_j-\alpha_{j-1}|}+|\alpha_{n-1}-\alpha_{n-2}|+|\alpha_{n}-\alpha_{n-1}|+|\alpha_{1}-\alpha_{0}|+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}\\
&\leq |\alpha_0|-\alpha_0+|\alpha_n|-\alpha_n+\alpha_{n-2}+\alpha_1+|t|-t+|s|+s+\displaystyle{\sum_{j=2}^{\lambda}|\alpha_j-\alpha_{j-1}|}+\displaystyle{\sum_{j=\lambda+1}^{n-2}|\alpha_j-\alpha_{j-1}|}\\
&+ 2\displaystyle{\sum_{j=0}^{n}|\beta_j|}\\
&= |\alpha_0|-\alpha_0+|\alpha_n|-\alpha_n+|t|-t+|s|+s+2\alpha_{\lambda}+2\displaystyle{\sum_{j=0}^{n}|\beta_j|}\\
&=M_2.
\end{align*}
Now $g(z)$ is analytic in $|z|\leq 1,$ and $|g(z)|\leq M_2$ for $|z|=1.$ So by the above lemma and the Maximum Modulus Principle, the number of zeros of $g$ (and hence of $p$) in $|z|\leq \delta$ is less than or equal to $$\dfrac{1}{\log 1/\delta}\log\dfrac{M_2}{|a_0|}.$$
Hence, the theorem follows.
\end{proof}
{\bf Acknowledgement:} The author is thankful to the anonymous referee for his/her valuable suggestions.
\newpage
\section{Introduction}
Phase transitions in the early Universe may give rise to topological defects \cite{Vilenkin:2000jqa},
which are predicted in a wide range of high-energy physics models of the early Universe \cite{Vilenkin:2000jqa,Jeannerot:2003qv,Sarangi:2002yt,Jones:2003da}.
Defects from symmetry breaking at the Grand Unified or inflation scales
induce gravitational fluctuations of sufficient amplitude to give a
characteristic signature in the cosmic microwave background (CMB).
The CMB power spectra of topological defects have been widely analysed.
In some approaches, full-sky maps \cite{Pen:1993nx,Allen:1997ag,Landriau:2002fx,Landriau:2003xf} are computed.
Most use the scaling properties of defect networks to derive the power spectra from the
Unequal Time Correlators (UETC) of the energy-momentum tensor of the classical field theory evolving in a cosmological background
\cite{Pen:1997ae}.
This method has been used to compute the CMB signatures of
global defects in the non-linear sigma-model (NLSM) approximation \cite{Durrer:1998rw,Bevis:2004wk},
cosmic strings in the Abelian Higgs model \cite{Bevis:2006mj,Urrestilla:2011gr,Bevis:2010gj},
and semilocal strings \cite{Urrestilla:2007sf}.
Gauge cosmic string CMB power spectra have also been computed in the Nambu-Goto approximation \cite{Lazanu:2014eya,Lazanu:2014xxa} and by modelling strings as randomly moving string segments \cite{Albrecht:1997nt,Albrecht:1997mz,Battye:1997hu,Pogosian:1999np,Pogosian:2006hg,Pogosian:2008am}.
Global strings have not been modelled in the Nambu-Goto approximation because their long-range interactions complicate the algorithm,
although an economical way of incorporating the interactions has recently been proposed \cite{Fleury:2016xrz}.
Using a similar approach, gravitational wave power spectra have also been calculated \cite{JonesSmith:2007ne,Fenu:2013tea}. However, the authors of \cite{Figueroa:2012kw}, comparing predictions for GWs obtained in the large-$N$ limit with those obtained from field theory simulations, showed that the large-$N$ NLSM prediction is significantly lower than the true value
for models with $N<4$.
In the last few years significant advances have been made in the study of the CMB power spectrum of Abelian Higgs strings using the UETC approach, using the largest simulation boxes to date and, among other improvements, studying the behaviour of the correlators across cosmological transitions \cite{Daverio:2015nva,Lizarraga:2016onn}.
However, the CMB signatures of the O($2$) and O($3$) models have never been studied using the linear $\sigma$-model,
which is required to capture the important contribution to the energy-momentum tensor from the cores of the topological defects.
Global string and monopole networks have been numerically analysed for other reasons: for example, in \cite{Moore:2001px,Yamaguchi:1999yp,Yamaguchi:1999dy,Hiramatsu:2010yu,Fleury:2015aca} the scaling properties of global string networks were studied.
Axion strings are global strings, with axions as the (pseudo-)Goldstone bosons, and the radiation from global string networks has
been examined in order to determine the dark matter axion density \cite{Hiramatsu:2010yu,Fleury:2015aca}.
Similarly in \cite{Lopez-Eiguren:2016jsy} the network velocities of global monopoles were studied in detail.
Therefore, the analysis of the CMB power spectrum generated by global strings and monopoles is of great interest. As mentioned above, the large-$N$ NLSM predictions for GWs do not agree with direct O($2$) and O($3$) calculations, so it is important to perform O($2$) and O($3$) field theory simulations to check whether they follow the CMB predictions of the O($N$) NLSM.
In this work we perform field theory simulations of the O($2$) and O($3$) linear $\sigma$ models,
measuring their scaling parameters to much greater accuracy than before, and determining the UETCs.
Armed with the UETCs, we calculate the CMB power spectra using the techniques detailed in \cite{Lizarraga:2016onn}. We then compare the power spectra with the predictions of the NLSM and with the spectrum obtained for the Abelian Higgs model. We also fit {\it Planck}\ data with these predictions and obtain constraints on the models using a Monte Carlo analysis.
The paper is structured as follows: In section~\ref{sec:model} we present the model and we give an overview of the UETC approach. Then, we describe the procedure to obtain the UETCs from the simulations in section~\ref{sec:uetcs} and the computation of the source functions in section~\ref{sec:source}. Once we have the source functions we present the power spectra in section~\ref{sec:spectra}. Finally, in section~\ref{sec:fits} we show the fits and constraints and we conclude in section~\ref{sec:conclusions}.
\section{Model and Method overview}
\label{sec:model}
The simplest field theory model that contains global topological defects is the global O($N$) theory, where the O($N$) global symmetry spontaneously breaks down to O($N-1$). A theory that gives rise to this kind of defect is the linear sigma model \cite{Goldstone:1961eq}, whose action is
\begin{equation}
\mathcal{S}=\int d^4 x \sqrt{-g} \Big( \frac{1}{2} \partial_{\mu} \Phi^i \partial^{\mu}\Phi^i- \frac{1}{4}\lambda(|\Phi|^2-\eta^2)^2 \Big),
\label{eq:ac}
\end{equation}
where $\Phi^i$, $i=1,\ldots,N$ are real fields, $|\Phi|\equiv \sqrt{\Phi^i \Phi^i}$ and $\lambda$ and $\eta$ are real constant parameters. In the symmetry breaking a massive particle with mass $m_s=\sqrt{2 \lambda}\eta$ arises, together with $N-1$ massless Goldstone bosons.
The energy of global defects diverges with radius, but in a cosmological situation this is not catastrophic, since the divergence is cut off by neighbouring defects.
Since our aim is to study the dynamics of a network of global defects in an expanding universe, we consider a flat Friedmann-Robertson-Walker space-time with comoving coordinates:
\begin{equation}
ds^2=a^2(\tau)(d\tau^2-dx^2-dy^2-dz^2),
\end{equation}
where $a(\tau)$ is the cosmic scale factor and $\tau$ is conformal time. The equations of motion derived from (\ref{eq:ac}) are
\begin{equation}
\ddot{\Phi}^i+2 \frac{\dot{a}}{a}\dot{\Phi}^i-\nabla^2 \Phi^i = -a^2 \lambda (\Phi^2-\eta^2)\Phi^i,
\label{eq:eom}
\end{equation}
and the dots represent derivatives with respect to the conformal time $\tau$.
The size of the defects is set by their inverse mass $(\delta \sim m_s^{-1})$, a fixed length in physical units, which means that in comoving coordinates the defect size rapidly decreases. Thus, in order to have a longer dynamical range one has to use the Press-Ryden-Spergel method \cite{Press:1989yh}.
This method makes the width of the defect controllable by turning the coupling constant into a time-dependent variable:
\begin{equation}
\lambda=\lambda_0 a^{-2(1-s)},
\end{equation}
where the parameter $s$ controls the defect size: when $s=0$ the defect size is fixed in comoving coordinates, and when $s=1$ we recover the true case in which the size of the defect is fixed in physical units.
This method and its extension for gauge theories has been widely checked
\cite{Daverio:2015nva,Moore:2001px,Bevis:2010gj,Lopez-Eiguren:2016jsy}, where the errors due to its use are shown to be typically smaller than the statistical errors, or the systematic errors inherent to the discretization procedure.
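The discretized evolution can be sketched as follows (a minimal illustration with our own choices of lattice size, parameters and a simple explicit time step, not the production code used for the simulations; a radiation-era scale factor $a\propto\tau$ is assumed):

```python
import numpy as np

N, L, dx, dt = 2, 32, 1.0, 0.2          # O(N) model on an L^3 lattice
eta, lam0, s = 1.0, 1.0, 0.0            # PRS coupling: lambda = lam0 * a^{-2(1-s)}
rng = np.random.default_rng(0)
phi = rng.normal(0.0, 0.1, size=(N, L, L, L))   # field components phi^i
pi = np.zeros_like(phi)                          # conjugate momenta dphi/dtau

def laplacian(f):
    """Nearest-neighbour Laplacian with periodic boundaries."""
    return (sum(np.roll(f, 1, ax) + np.roll(f, -1, ax) for ax in (1, 2, 3))
            - 6.0 * f) / dx**2

def step(phi, pi, tau):
    a = tau                                      # radiation era: a ~ tau
    lam = lam0 * a**(-2.0 * (1.0 - s))           # Press-Ryden-Spergel coupling
    mod2 = np.sum(phi**2, axis=0)
    force = laplacian(phi) - a**2 * lam * (mod2 - eta**2) * phi
    pi = pi + dt * (force - (2.0 / tau) * pi)    # Hubble damping 2(a'/a)phi'
    phi = phi + dt * pi
    return phi, pi

tau = 1.0
for _ in range(10):
    phi, pi = step(phi, pi, tau)
    tau += dt
```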
The evolution of a defect network perturbs the background space-time; those perturbations evolve and affect the contents of the universe, eventually creating CMB anisotropies. In contrast to inflationary perturbations, which were seeded primordially and then evolve ``passively'', defects induce perturbations actively during their whole existence. Those for Abelian Higgs cosmic strings are estimated to be roughly of order $G\mu$, where $G$ is Newton's constant and $\mu$ the string tension. Current bounds on $G\mu$ from CMB experiments constrain its value to be below $10^{-7}$ \cite{Ade:2013xla,Urrestilla:2011gr, Lizarraga:2014xza, Lazanu:2014xxa}.
In order to describe the perturbations induced by defects, energy-momentum correlations are the appropriate statistical tools \cite{Turok:1996wa,Pen:1997ae,Durrer:2001cg,Bevis:2006mj}. Indeed, the two-point unequal time correlators (UETCs) of the energy-momentum tensor are the only objects needed to derive the power spectrum of CMB anisotropies. UETCs are defined as follows:
\begin{equation}
U_{\lambda\kappa\mu\nu}(\mathbf{k},\tau,\tau')= \langle T_{\lambda\kappa}(\mathbf{k},\tau)T_{\mu\nu}(\mathbf{k},\tau')\rangle,
\label{eq:uetc}
\end{equation}
where $T_{\lambda\kappa}(\mathbf{k},\tau)$ is the Fourier transform of the energy-momentum tensor.
The UETCs give the power spectra of cosmological perturbations when convolved with the appropriate Green's functions. In practice, they are decomposed into a set of functions derived from the eigenvectors of the UETCs, which are used as sources for an Einstein-Boltzmann integrator. The power spectrum of interest is reconstructed as the sum of power spectra from each of the source functions.
A considerable simplification occurs when the times $\tau$ and $\tau'$ are both in epochs during which the scale factor grows with the same constant power of conformal time. In this case the correlation functions do not depend on $k$, $\tau$ and $\tau'$ separately, but only on $k\tau$ and $k\tau'$. This behaviour is called scaling, and scaling correlators can be written as
\begin{equation}
U_{ab}(\mathbf{k},\tau,\tau')=\frac{\eta^4}{\sqrt{\tau\tau'}}\frac{1}{V}\bar{C}_{ab}(k\sqrt{\tau\tau'},\tau'/\tau),
\label{scaUETC}
\end{equation}
where now the indices $a$ and $b$ correspond to projections of the energy momentum tensor; specifically to its independent components: two scalar (the longitudinal gauge potentials $\phi$ and $\psi$), one vector and one tensor. The overbar represents the scaling form of the UETC in a FLRW background. We will sometimes write $z=k \sqrt{\tau\tau'}$, $r=\tau'/\tau$. An alternative pair of scaling variables is $x, x'=k\tau,k\tau'$. A scaling UETC will have eigenvectors which depend on $k$ and $\tau$ only through the combination $x$.
Scaling is an essential feature for any kind of defect network to ensure its cosmological viability. Defect networks that exhibit scaling do not dominate the cosmological evolution over other species, but neither do they disappear. From the computational point of view it is also an immensely valuable property, since it allows one to extrapolate the numerical observables obtained from limited reproductions of cosmological scenarios to the required cosmological scales, which are well beyond current capabilities.
The UETCs are extracted from numerical simulations, where correlations between energy-momentum tensors at different stages of the evolution are calculated. After that, these functions, which are positive definite and symmetric, are diagonalized. The diagonalisation decomposes the UETCs into their eigenvalues and eigenfunctions,
\begin{equation}
\int_{\tau_{\rm i}}^{\tau_0} d\tau' C_{ab}(k,\tau,\tau') c_b^n(k,\tau') = \lambda_n(k)c_a^n(k,\tau),
\end{equation}
where $c_a^n$ are the eigenfunctions and $\lambda_n$ the eigenvalues associated with them.
Finally, in terms of these two ingredients, the source functions are defined in the following way:
\begin{equation}
s_a^n(k,\tau) = \sqrt{\lambda_n(k)}c_a^n(k,\tau) \, .
\label{eq:source}
\end{equation}
These source functions are then what we plug into the Einstein-Boltzmann solvers to calculate the CMB anisotropy power spectra; for further details on the process see \cite{Durrer:2001cg,Bevis:2006mj,Daverio:2015nva}.
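To make the decomposition concrete, the following NumPy sketch diagonalises a discretised UETC matrix and forms source functions of the kind in Eq.~(\ref{eq:source}); it is schematic only, ignoring the integration measure $d\tau'$ and the binning that a real pipeline must include:

```python
import numpy as np

def source_functions(C, n_keep=None):
    """Diagonalise a discretised, symmetric UETC matrix
    C[i, j] = C(k, tau_i, tau_j) and return the source functions
    s^n(tau_i) = sqrt(lambda_n) c^n(tau_i), largest eigenvalue first.
    Columns of the returned array are the individual sources."""
    lam, vec = np.linalg.eigh(C)           # ascending eigenvalues
    lam, vec = lam[::-1], vec[:, ::-1]     # reorder: largest first
    lam = np.clip(lam, 0.0, None)          # drop tiny negative noise
    if n_keep is not None:
        lam, vec = lam[:n_keep], vec[:, :n_keep]
    return np.sqrt(lam)[np.newaxis, :] * vec

# Sanity check: summing over all sources reconstructs the correlator,
# C(tau, tau') = sum_n s^n(tau) s^n(tau').
```

Keeping only the columns with the largest eigenvalues (`n_keep`) is how the sum over source functions is truncated in practice.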
However, it should be noted that scaling is broken near the cosmological transitions. That is, when the universe undergoes the transition from the radiation-dominated to the matter-dominated era, or from the matter-dominated to the dark-energy-dominated era, the UETCs also depend explicitly on $\tau_{\rm eq}$ and $\tau_{\Lambda}$, the times of equal radiation and matter density, and of equal matter and dark energy density.
\section{UETCs from the Simulations \label{sec:uetcs}}
In this section we present the details of the numerical simulations from which the scaling UETC data were collected. These scaling UETCs are the inputs for the next section, in which the eigenvector decomposition method is described.
\subsection{Simulation details}
In order to simulate the evolution of the global defects in a discrete box we discretise the action (\ref{eq:ac}),
deriving discretised equations of motion on
a Cartesian grid using a 3-point stencil for the spatial Laplacian and the leapfrog method for time evolution.
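As an illustration of such an update (not the actual LATfield2 implementation), the following sketch evolves an $N$-component field one leapfrog step, assuming a power-law scale factor $a\propto\tau^\alpha$ and a semi-implicit treatment of the Hubble damping term; it includes the PRS coupling $\lambda=\lambda_0 a^{-2(1-s)}$:

```python
import numpy as np

def laplacian(phi, dx):
    """3-point-stencil Laplacian per component on a periodic grid;
    phi has shape (N, L, L, L) for an N-component field."""
    lap = -6.0 * phi
    for axis in (-3, -2, -1):
        lap += np.roll(phi, 1, axis=axis) + np.roll(phi, -1, axis=axis)
    return lap / dx**2

def leapfrog_step(phi, pi, tau, dt, dx, lam0=2.0, eta=1.0, s=0.0, alpha=1.0):
    """One leapfrog step of the O(N) equations of motion in FLRW,
    assuming a ~ tau^alpha (alpha = 1: radiation era) and the PRS
    coupling lambda = lam0 * a**(-2 * (1 - s))."""
    a = tau**alpha
    hubble = alpha / tau                       # conformal Hubble rate a'/a
    lam = lam0 * a**(-2.0 * (1.0 - s))
    mod2 = np.sum(phi**2, axis=0)              # |Phi|^2 at each site
    force = laplacian(phi, dx) - a**2 * lam * (mod2 - eta**2) * phi
    delta = hubble * dt                        # semi-implicit Hubble damping
    pi = ((1.0 - delta) * pi + dt * force) / (1.0 + delta)
    phi = phi + dt * pi
    return phi, pi
```

A uniform field sitting in the vacuum manifold feels no force and stays put, which provides a quick consistency check of the discretisation.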
The equations are evolved in $1024^3$ lattices with periodic boundary conditions using the LATfield2 library for parallel field theory simulations \cite{David:2015eya}.
The periodic boundary conditions impose an upper limit on the time that the system can be evolved:
beyond half a light-crossing time, when Goldstone bosons moving in opposite directions in the box will start to re-encounter each other.
Our simulation lattice has a comoving spatial separation of $dx=0.5$ and time steps of $dt=0.1$, in units where $\eta=1$. The simulation volume therefore has comoving size $L=512$.
All simulations were run with $s=0$ and $m_s(\tau) a(\tau) = 2\eta$, with $a = 1$ at the end of the simulation.
We performed 5 individual runs in the pure radiation- and in the pure matter-dominated eras to determine the scaling form of the UETCs. We also performed runs across the radiation-matter cosmological transition using the same parameters and initial conditions.
We are interested in the scaling regime of the defect network, not in the details of the phase transition. Thus, the initial condition (at time $\tau_{\rm ini}$) used in the numerical simulation is not used to extract data; its only function is to drive the system to scaling in order to get as large a dynamical range as possible. We have found that for the present work a satisfactory initial field configuration is given by setting the scalar field velocities to zero and the scalar fields to be stationary Gaussian random fields with power spectrum
\begin{equation}
P_{\phi}(\mathbf{k})=\frac{A}{1+(k\ell_{\phi})^2},
\end{equation}
with $A$ chosen so that $\langle \Phi^2 \rangle=\eta^2$, and $\ell_{\phi}=5 \eta^{-1}$.
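One component of such an initial configuration can be generated as sketched below; normalising a posteriori so that $\langle \Phi^2 \rangle=\eta^2$ is an illustrative choice, and the details of the actual initialisation may differ:

```python
import numpy as np

def gaussian_ic(L=64, dx=0.5, ell=5.0, eta=1.0, seed=0):
    """Stationary Gaussian random field with spectrum
    P(k) ~ 1/(1 + (k*ell)^2), normalised a posteriori so that the
    lattice average of Phi^2 equals eta^2 (one field component)."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(L, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    amp = 1.0 / np.sqrt(1.0 + k2 * ell**2)     # sqrt of the power spectrum
    # colour white real-space noise in Fourier space
    noise = np.fft.fftn(rng.standard_normal((L, L, L)))
    phi = np.real(np.fft.ifftn(amp * noise))
    phi *= eta / np.sqrt(np.mean(phi**2))      # enforce <Phi^2> = eta^2
    return phi
```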
The UETCs cannot be calculated until after the defects are formed and reach their scaling configuration. These early phases contain a huge amount of excess energy induced by the random initial conditions; we therefore smooth the field distribution by applying a period of diffusive evolution, with the second derivatives removed from the equations of motion and with a time step of $1/30$ in units where $\eta=1$. The length of the diffusive phase ($\tau_{\rm diff}$) varies, since the evolution of the equations of motion is different in each case. In all cases $\tau_{\rm diff}$ is chosen so as to optimize the dynamical range of the scaling evolution of the defect network.
After the diffusion period, the system relaxes into scaling, and we start to collect data from $\tau_{\rm ref}$ until the end of the simulation $\tau_{\rm end}$. We measure the UETC by recording the mean value of $C_{ab}(k,\tau_{\rm ref},\tau)$ for wavevectors binned in the range $2\pi(n-1)/L < |\mathbf{k}| \le 2\pi n/L$ $(1\le n < N_{\rm b})$, with $N_{\rm b}=886$, at $n_{\rm out}$ logarithmically-spaced times between $\tau_{\rm ref}$ and $\tau_{\rm end}$. The wavenumber of the $n$th bin $k_n$ is set to the mean value of $|\mathbf{k}|$ in that bin. Table \ref{tab:sim-pro} shows the values of these parameters.
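The shell averaging over wavevector bins of width $2\pi/L$ can be sketched as follows; here the binned quantity is simply the power of a single Fourier-space field, a stand-in for the correlator data:

```python
import numpy as np

def bin_radially(field_k, L_box, n_bins):
    """Average |F(k)|^2-type data over spherical shells
    2*pi*(n-1)/L < |k| <= 2*pi*n/L, returning the mean |k| per bin
    (used as the bin wavenumber k_n) and the binned mean value."""
    N = field_k.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L_box / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    dk = 2.0 * np.pi / L_box
    idx = np.ceil(kmag / dk - 1e-12).astype(int)  # bin n: (n-1)dk < |k| <= n dk
    data = np.abs(field_k)**2
    k_n = np.zeros(n_bins)
    val = np.zeros(n_bins)
    for n in range(1, n_bins + 1):
        mask = idx == n
        if mask.any():
            k_n[n - 1] = kmag[mask].mean()
            val[n - 1] = data[mask].mean()
    return k_n, val
```

The zero mode ($n=0$) is excluded, and the first bin contains only the fundamental modes $|\mathbf{k}|=2\pi/L$.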
\begin{table*}
\begin{tabular}{|l | r | r|}
\hline
& O($2$) & O($3$) \\
\hline
$\tau_{\rm ini}$ & 50 & 0 \\
$\tau_{\rm diff}$ & 70 & 20 \\
$\tau_{\rm ref}$ & 150 & 60 \\
$\tau_{\rm end}$ & 300 & 250 \\
$n_{\rm out}$ & 50 & 60 \\
\hline
\end{tabular}
\caption{\label{tab:sim-pro} The values of the time-related parameters, given in units where $\eta=1$. The simulations start at time $\tau_{\rm ini}$ and there is a period of diffusion until $\tau_{\rm diff}$; the data are taken at $n_{\rm out}$ logarithmically-spaced times between $\tau_{\rm ref}$ and $\tau_{\rm end}$.}
\end{table*}
We also record the Equal Time Correlators (ETCs) at each time the UETC is evaluated, with which we can monitor the quality of scaling. Perfect scaling would mean that the ETCs collapse to a single line when plotted against $x=k\tau$.
\subsection{Scaling}
\label{s:Sca}
A length scale characterizing the network is a convenient quantity for tracking the state of the system and monitoring scaling. We will define two different length scales, one for each type of defect under study, \textit{i.e.\ } one for strings and another one for monopoles.
For the case of strings the comoving string separation $\xi^{\rm s}$ has been identified as a useful quantity to determine compatible simulation stages \cite{Daverio:2015nva}. The string separation is defined in terms of the mean comoving string length $\ell_s$ in the comoving
simulation volume $\mathcal{V}$ as
\begin{equation}
\label{e:xisDef}
\xi^{\rm s}=\sqrt{\frac{\mathcal{V}}{\ell_s}}.
\end{equation}
The mean string length, $\ell_s$, is derived from estimators of the comoving length of string.
One way of obtaining the length of strings is by summing the number of plaquettes pierced by
strings. Such plaquettes are identified by calculating the ``winding'' of the phase of the field around each plaquette of the lattice
\cite{Vachaspati:1984dz}.
We denote the string separation computed in this way as $\xi_{\rm w}^{\rm s}$.
An alternative way is to use local field theory estimators for the total string length \cite{Daverio:2015nva}.
In our case we use the total comoving energy weighted by the potential $V$,
\begin{equation}
E_{V} = \mathcal{V} \frac{\int d^3 x T_{00} V}{\int d^3 x V},
\end{equation}
and the energy per unit length, $\mu_{V}$, also weighted with the potential,
to define a string length estimator
\begin{equation}
\label{e:EVlen}
\ell_{\rm s}=\frac{E_{V}}{\mu_{V}}.
\end{equation}
In order to obtain the potential weighted energy per unit length of global strings, we have solved numerically the static field equations for a straight string lying on the $z$-axis \cite{Vilenkin:2000jqa}.
From the values of the profile functions we have calculated the weighted energy per unit length, which is $\mu_{V}=0.70\eta^2$.
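The resulting estimator chain, Eqs.~(\ref{e:xisDef}) and (\ref{e:EVlen}), amounts to a few lines; the lattice arrays and volume below are illustrative, and $\mu_V = 0.70\,\eta^2$ is the value quoted above:

```python
import numpy as np

def string_separation(T00, V, vol, mu_V=0.70):
    """Field-theory estimator of the comoving string separation:
    E_V = vol * <T00 * V> / <V>   (potential-weighted energy),
    ell = E_V / mu_V              (estimated string length),
    xi  = sqrt(vol / ell)         (mean string separation).
    T00 and V are the energy density and potential on the lattice;
    mu_V = 0.70 eta^2 is the potential-weighted energy per unit length."""
    E_V = vol * np.sum(T00 * V) / np.sum(V)
    ell = E_V / mu_V
    return np.sqrt(vol / ell)
```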
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{scaling.pdf}
\caption{In the left column: String separation $\xi^{\rm s}$ (\ref{e:xisDef}) from simulations in radiation era (top figure) and matter era (bottom figure), with $\xi_{\rm w}^{\rm s}$ obtained from the winding length measure and $\xi_{E}^{\rm s}$ from the string length measure defined in (\ref{e:EVlen}). In the right column: Monopole separation $\xi_{E}^{\rm m}$ (\ref{e:ximDef}) from simulations in radiation era (top figure) and matter era (bottom figure), obtained using
the monopole number estimate defined in (\ref{e:EVnum}).
}
\label{fig:scaling}
\end{figure}
Monopole networks can be characterized using the comoving monopole separation $\xi^{\rm m}$. The monopole separation is defined in terms of the monopole number in the simulation volume $\mathcal{V}$ as
\begin{equation}
\label{e:ximDef}
\xi^{\rm m}=\Big( \frac{\mathcal{V}}{\mathcal{N}}\Big)^{1/3},
\end{equation}
where $\mathcal{N}$ is the total monopole number\footnote{Monopoles and antimonopoles contribute equally to
the energy density, and are thus equivalent in the energy-momentum tensor. Therefore, $\mathcal{N}$ is the sum of the numbers of monopoles and antimonopoles.
The net monopole charge in the simulation is exactly zero, due to the periodic boundary conditions.}.
The monopole number can be computed by directly obtaining the topological charge in each lattice
cell of the simulation box \cite{Lopez-Eiguren:2016jsy,Antunes:2002ss}.
It can also be obtained from a local field estimate
\begin{equation}
\label{e:EVnum}
\mathcal{N}=\frac{E_{V}}{M_{V}},
\end{equation}
where $M_{V}$
is the energy of a monopole weighted with the potential $V$ and $E_{V}$ is the total energy weighted with the potential.
The weighted energy is computed in a similar way to the weighted energy per unit length of global strings. That is, we have solved the equations of motion for a static monopole \cite{Vilenkin:2000jqa} using a relaxation method and then, using the profile functions, we have calculated the weighted mass, which is $M_{V}=2.33\eta$.
The computational cost of the field estimator is considerably lower, as the energy densities are being computed anyway,
although it does slightly overestimate the monopole number,
by about $10$\%.
This does not affect the linearity of $\xi^{\rm m}$ in the
scaling regime, as can be seen in Table~\ref{tab:betas3}, where we have added the value of $\beta^{\rm m}_{\rm w}$ obtained from one simulation; we therefore restrict our separation measure to the one from the field estimator.
As found in previous works, the asymptotic behaviour of the separations for both types of defects is very close to linear,
\begin{equation}
\label{e:xiSca}
\xi \simeq \beta(\tau-\tau_{\rm off}),
\end{equation}
where $\tau$ is the conformal time in the simulations and
$\tau_{\rm off}$ is the time offset of the linear fit to the $\xi$ curve.\footnote{Without a time offset, plotting $\xi/\tau$ will produce
an apparent time dependence in $\beta$, and one might wrongly conclude that the string network is not scaling.}
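The fit of Eq.~(\ref{e:xiSca}), allowing for the time offset, is a straight-line fit; a minimal sketch:

```python
import numpy as np

def fit_scaling_slope(tau, xi):
    """Fit xi(tau) = beta * (tau - tau_off) and return (beta, tau_off).
    Allowing for the offset avoids a spurious time dependence when
    plotting xi/tau against tau."""
    beta, intercept = np.polyfit(tau, xi, 1)
    return beta, -intercept / beta
```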
For the global string case the two different ways of computing $\xi^{\rm s}$, namely $\xi_{\rm w}^{\rm s}$ and $\xi_{E}^{\rm s}$, give almost identical scaling behaviour, while in the monopole case we have used $\xi_{E}^{\rm m}$ (see Fig.~\ref{fig:scaling}).
We have managed to find a combination of $\tau_{\rm ini}$ and $\ell_\phi$
such that the time offset is almost zero in all realisations.
We define the mean slopes $\beta_{\rm w}^{\rm s}$, $\beta_{E}^{\rm s}$, and $\beta_{E}^{\rm m}$
as the average of all different slopes from different realisations.
Numerical values of the mean slopes obtained in the range $\tau \in [150 \; 300]$ for global strings can be found in Table \ref{tab:betas2} and in the range
$\tau \in [60 \; 250]$ for global monopoles can be found in Table \ref{tab:betas3}.
\begin{table*}
\begin{tabular}{|c|cccc|}
\hline
& \multicolumn{4}{c|}{O($2$)} \\
\hline
-- & $\beta_{\rm w}^{\rm s}$ & $\beta_{E}^{\rm s}$ & $\zeta$ & $\rho$ \\
\hline
R & 0.36 $\pm$ 0.01 & 0.38 $\pm$ 0.01 & 1.7 $\pm$ 0.1 & 562 $\pm$ 9 \\
M & 0.36 $\pm$ 0.02 & 0.39 $\pm$ 0.02 & 0.72 $\pm$ 0.08 & 135 $\pm$ 9 \\
\hline
\end{tabular}
\caption{\label{tab:betas2} Numerical values of the different scaling parameters for global string networks
in the radiation (R) and matter (M) eras,
obtained in the range $\tau \in [150 \; 300]$.
The parameters $\beta$ are mean defect separations in units of the horizon length:
$\beta_{\rm w}^{\rm s}$ is computed using the length of strings obtained from the number of windings,
and $\beta_{E}^{\rm s}$ is computed using the energy weighted with the potential, $E_{V}$.
The parameter $\zeta$ \cite{Moore:2001px} is a relative energy density,
defined in Eq.~(\ref{eq:zeta}), while
$\rho$ is a parameter defined in \cite{Pen:1993nx} for simulations in the NLSM.
}
\end{table*}
\begin{table*}
\begin{tabular}{|c|cccc|}
\hline
& \multicolumn{4}{c|}{O($3$)} \\
\hline
-- & $\beta^{\rm m}_{\rm w}$ & $\beta_{E}^{\rm m}$ & $\epsilon$ & $\rho$ \\
\hline
R & 0.64 & 0.63 $\pm$ 0.03 & 1.26 $\pm$ 0.03 & 75 $\pm$ 2 \\
M & 0.59 & 0.60 $\pm$ 0.02 & 1.80 $\pm$ 0.04 & 28 $\pm$ 2\\
\hline
\end{tabular}
\caption{\label{tab:betas3} Numerical values of the different scaling parameters for global monopole networks
in the radiation (R) and matter (M) eras,
obtained in the range $\tau \in [60 \; 250]$.
The parameters $\beta$ are mean defect separations in units of the horizon length:
$\beta^{\rm m}_{\rm w}$ is computed using the number of monopoles obtained from the topological charge (data from only one simulation), and
$\beta_{E}^{\rm m}$ is the mean separation of global monopoles, whose number density is computed using
the energy weighted with the potential, $E_{V}$.
The parameter
$\epsilon$ \cite{Martins:2008zz}
is the mean separation of monopoles in units of the physical time $t$, while
$\rho$ is a parameter defined in \cite{Pen:1993nx} for simulations in NLSM.
}
\end{table*}
We can translate the slopes of the mean separation to values of the parameter $\zeta$,
proportional to the relative energy density, used in \cite{Moore:2001px},
defined as
\begin{equation}
\zeta=\frac{E_{V}}{\mu_{V} a^2(t)} t^2,
\label{eq:zeta}
\end{equation}
where $t$ is the physical time. Our values of
$\zeta$ can be seen in Table~\ref{tab:betas2} and Table~\ref{tab:betas3}.
These values are compatible with the values given in \cite{Moore:2001px} in which
$\zeta=2.0\pm 0.5$ for the radiation era, and the uncertainty is greatly reduced by the greater volume of our simulations.\footnote{Not allowing for a time offset (see Eq.~\ref{e:xiSca}) when extracting the scaling value of $\zeta$ can produce an apparent time dependence of $\zeta$, which is the reason for the apparent logarithmic growth observed in \cite{Fleury:2016xrz}.}
The values for the slope, $\beta_{E}^{\rm m}$,
for the monopole case can also be seen in Table~\ref{tab:betas3}. These values are compatible with the values obtained in \cite{Lopez-Eiguren:2016jsy} where the authors found that $\beta_{\rm r}=0.72\pm 0.06$ in radiation era and $\beta_{\rm m}=0.65\pm0.04$ in the matter era. In order to compare with the results obtained in \cite{Martins:2008zz} we define a scaling parameter $\epsilon$ by:
\begin{equation}
\epsilon = \frac{a(t)}{t}\Big( \frac{\mathcal{V}}{\mathcal{N}}\Big)^{1/3},
\label{eq:epsilon}
\end{equation}
where $t$ is the physical time. After this translation, our values are shown in Table~\ref{tab:betas3}. These values are compatible with the values obtained in \cite{Martins:2008zz}, where two different sets of simulations were used. On the one hand, they used the simulations made in \cite{Yamaguchi:2001xn}, with results $\epsilon_{\rm r}=1.3\pm0.4$ and $\epsilon_{\rm m}=1.6\pm0.1$. On the other hand, they also used the simulations made in \cite{Bennett:1990xy}, where the values for $\epsilon$ are $\epsilon_{\rm r}=1.3\pm0.2$ and $\epsilon_{\rm m}=1.9\pm0.2$.
We can also compare the scaling energy density with the values obtained in simulations of the O(2) and O(3) non-linear sigma model (NLSM)
\cite{Pen:1993nx}, whose authors defined a parameter
\begin{equation}
\label{e:RhoDef}
\rho = \frac{E}{\mathcal{V}} \frac{\tau^2}{\eta^2},
\end{equation}
where $E = \int d^3 x \, T_{00}$.
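On the lattice, $E/\mathcal{V}$ is simply the mean of $T_{00}$, so the parameter of Eq.~(\ref{e:RhoDef}) is a one-liner; a sketch:

```python
import numpy as np

def rho_parameter(T00, tau, eta=1.0):
    """Scaling energy-density parameter rho = (E / V) * tau^2 / eta^2,
    with E the integral of T00 over the comoving volume V; on the
    lattice E / V is just the mean energy density."""
    return np.mean(T00) * tau**2 / eta**2
```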
We computed the scaling values of $\rho$ from the slopes of a linear fit of $\xi_E = \sqrt{\mu\mathcal{V}/E}$, and display the mean and standard deviations in Table~\ref{tab:betas2} and Table~\ref{tab:betas3}.
These are to be compared with the NLSM values of
$68 \pm 7$ for O(2) and $24$ (no errors given) for O(3), both in matter era \cite{Pen:1993nx}.
Note that we have multiplied the scaling value $\rho = 14.5 \pm 1.5$ given in
Ref.~\cite{Pen:1993nx}
by $\ln(\xi_{\rm ref} m_s)$, in order to scale their results to our simulation volume.
The reason for this logarithm is that the global string energy per unit length
should increase as $\mu\ln(\xi m_s)$ \cite{Vilenkin:2000jqa},
giving a logarithmic correction to scaling. In Ref.~\cite{Pen:1993nx}
this logarithm is divided into the energy density in order to improve the scaling.
The value of the logarithm does not change significantly over the time range from which data are taken in our simulations,
so we can extract scaling correlators without this compensation.
The difference in scaling densities for strings indicates that NLSM simulations are missing an important energy contribution from the string cores.
Monopole cores, on the other hand, make little difference to the energy density, as their contribution decreases
as $\tau^{-3}$, and so one expects the agreement between the NLSM and the linear $\sigma$-model to be good.
\subsection{Energy momentum correlators}
\label{subsec:uetcs}
The ETCs of the energy momentum tensor give a more detailed test of scaling. In Fig.~\ref{fig:etcs} we show the ETCs for global strings and global monopoles in the radiation era for the whole period of time recorded. We show the ETCs at $\tau_{\rm end}$ with shaded regions that represent the $1\sigma$ and $2\sigma$ levels obtained by averaging over 5 realizations, and the dashed lines correspond to intermediate times: 150, 185, 222, 261 and 296 for O($2$) and 60, 97, 130, 158 and 205 for O($3$) (in units of $\eta^{-1}$). The behaviour in the matter era is similar to that in the radiation era. The figures show that at small scales the ETCs collapse to a single line, though this behaviour is less clear at low $k\tau$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{ETCpaper_O2.pdf}
\includegraphics[width=0.49\textwidth]{ETCpaper_O3.pdf}
\caption{ETCs for global strings (left pane) and for global monopoles (right pane) in radiation era. Shaded regions correspond to $1\sigma$ and $2\sigma$ deviations calculated at $\tau_{\rm end}$. The 5 dashed lines in each case correspond to ETCs at intermediate times, in units of $\eta^{-1}$: 150, 185, 222, 261 and 296 for O($2$) strings and 60, 97, 130, 158 and 205 for O($3$) monopoles.
}
\label{fig:etcs}
\end{figure}
We have also studied the decay of the ETCs at small scales; Fig.~\ref{fig:etcsPL} shows the ETCs multiplied by $(k\tau)^2$ on a logarithmic scale. We observe that for global strings as well as for global monopoles the ETCs decay roughly as $k^{-2}$, which contrasts with the expected string-like $k^{-1}$ \cite{Vincent:1996qr} and point-like $k^0$ behaviour. Instead, it is consistent with the $k^{-2}$ short-distance power spectrum induced by randomly-placed disks. We have looked at various visualisations of the energy-momentum tensor, and we have been unable to find one which gave a good impression of the sheet-like structures suggested by the power spectrum. It would be interesting to explore the implied non-trivial correlations between the defects and the surrounding Goldstone boson cloud.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{ETC_Slopes_O2.pdf}
\includegraphics[width=0.49\textwidth]{ETC_Slopes_O3.pdf}
\caption{Logarithmic plots for all five equal time correlators at the end of radiation era simulations for global strings $\tau\simeq300$ (left pane) and global monopoles $\tau\simeq250$ (right pane). Note that the correlators are multiplied by $(k\tau)^2$ to demonstrate that the fall-off beyond the peak is approximately $k^{-2}$. The colour scheme in both cases is the following (from top to bottom): $C_{11}$ black, $C_{22}$ gray, $C_{12}$ black, $C_{\rm vv}$ black and $C_{\rm tt}$ gray.}
\label{fig:etcsPL}
\end{figure}
Since the offset is consistent with zero in our simulations,
it is straightforward to average the UETCs obtained from different realizations.
Figure~\ref{fig:uetc-nf2-rad2} shows the averaged matter-era UETCs for global strings, and Fig.~\ref{fig:uetc-nf2-rad3} shows the corresponding ones for global monopoles.
\begin{figure}[htbp]
\centering
\includegraphics[width=6.5cm]{UETCnf2-Mat-scalar11.pdf}
\includegraphics[width=6.5cm]{UETCnf2-Mat-vector.pdf}
\includegraphics[width=6.5cm]{UETCnf2-Mat-scalar22.pdf}
\includegraphics[width=6.5cm]{UETCnf2-Mat-tensor.pdf}
\includegraphics[width=6.5cm]{UETCnf2-Mat-scalar12.pdf}
\hbox to 60mm{}
\caption{Full set of scaling O($2$) UETCs for the matter era, calculated averaging over 5 runs. See Section~\ref{sec:model} for definition of UETCs.}
\label{fig:uetc-nf2-rad2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=6.5cm]{UETCnf3-Mat-scalar11.pdf}
\includegraphics[width=6.5cm]{UETCnf3-Mat-vector.pdf}
\includegraphics[width=6.5cm]{UETCnf3-Mat-scalar22.pdf}
\includegraphics[width=6.5cm]{UETCnf3-Mat-tensor.pdf}
\includegraphics[width=6.5cm]{UETCnf3-Mat-scalar12.pdf}
\hbox to 60mm{}
\caption{Full set of scaling O($3$) UETCs for the matter era, calculated averaging over 5 runs. See Section~\ref{sec:model} for definition of UETCs.}
\label{fig:uetc-nf2-rad3}
\end{figure}
Figures \ref{fig:uetc-nf2-rad2} and \ref{fig:uetc-nf2-rad3} show that the amplitudes of the correlators (in the matter era) of O($2$) strings are much bigger than the amplitudes of the O($3$) UETCs. Note that the UETCs in the radiation era, for both defect types under analysis, have the same shape but more power than in the matter era. Similarly, if we compare the global string correlators with the UETCs obtained from simulations of the Abelian Higgs model presented in \cite{Daverio:2015nva}, we observe that the general shape is similar, while the amplitude is slightly higher in the global case. In both cases we use units where the vacuum expectation value of the scalar field is 1. Note that the normalisation convention is different for complex and real scalar fields.
\section{Computation of source functions \label{sec:source} }
It has been established in the previous section that global strings and monopoles evolve in the scaling regime for most of the time covered by our simulations. As mentioned in Section~\ref{sec:model}, scaling can be used to extrapolate results derived from numerical simulations of different types of defects to the required cosmological scales. The universe undergoes two transitions during the times of interest, namely the transition from the radiation-dominated to the matter-dominated era and the transition from matter domination to $\Lambda$ domination. In this work we will not consider the latter, since its effect is rather small, as shown in \cite{Lizarraga:2016onn}. Perfect scaling is therefore not a feature of networks evolving in our universe, which is why the scales imposed by the transitions must also be considered.
The UETCs must also depend on the scales imposed by the transitions. This means that in general the correlators will depend explicitly on $\tau_{\rm eq}$; in other words, the true (non-scaling) UETCs are functions of three different dimensionless variables,
which can be chosen to be $k\tau$, $k\tau'$ and $\sqrt{\tau\tau'}/\tau_{\rm eq}$. One has to determine a method which captures the information about the transitions and includes it in the computation of the source functions. There are several proposals in the literature for performing this estimation \cite{Fenu:2013tea, Lizarraga:2016onn, Bevis:2010gj,Daverio:2015nva}, all of which were compared in \cite{Daverio:2015nva}.
In this work we will follow the fixed-$k$ interpolation method proposed in \cite{Daverio:2015nva}: the UETCs are thought of as symmetric functions of $\tau$ and $\tau'$ for a given $k$. This approach has several advantages: it preserves the orthogonality of the source functions during the defects' whole existence and better reproduces the UETCs at the cosmological transitions. Moreover, it also fits very well into the scheme used by Einstein-Boltzmann codes, which solve the perturbation equations with an outer loop over $k$ and an inner time integration for fixed values of $k$. For further details we refer the reader to \cite{Daverio:2015nva}.
The true UETCs $C_{ab}(k,\tau,\tau')$ are constructed from the mixture of the scaling matter and radiation correlators, extracted from our simulations, at each value of $k$. The relative mixture of matter and radiation UETCs is determined by $\tau/\tau_{\rm eq}$ and $\tau'/\tau_{\rm eq}$. An explicitly symmetric proposal for the UETCs which models this behaviour across the radiation-matter transition is the following \cite{Daverio:2015nva}:
\begin{equation}
C_{ab}(k\tau,k\tau',\sqrt{\tau\tau'}/\tau_{\rm eq})= f\left( \frac{\sqrt{\tau\tau'}}{\tau_{\rm eq}}\right) \bar{C}_{ab}^{\rm R}(k\tau,k\tau')+\left( 1-f\left( \frac{\sqrt{\tau\tau'}}{\tau_{\rm eq}}\right) \right) \bar{C}_{ab}^{\rm M}(k\tau,k\tau').
\label{eq:trans}
\end{equation}
It approximates the UETC in the entire region by a linear combination of the pure radiation and pure matter era scaling correlators, balancing the contribution of each with an interpolating function $f$. At extreme values of $\tau / \tau_{\rm eq}$ we recover the functions that correspond to matter ($\tau/\tau_{\rm eq}\gg 1$) and radiation ($\tau/\tau_{\rm eq}\ll1$) domination.
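Given the scaling radiation and matter correlators, the model of Eq.~(\ref{eq:trans}) can be sketched as follows; the $\gamma$ and $\kappa$ defaults are illustrative, and evaluating $f$ at the geometric mean $\sqrt{\tau\tau'}$ keeps the expression explicitly symmetric:

```python
import numpy as np

def interpolated_uetc(C_rad, C_mat, tau, tau_p, tau_eq, gamma=0.26, kappa=-1.15):
    """Model UETC across the radiation-matter transition:
    C = f * C_rad + (1 - f) * C_mat, with the interpolating function
    f = (1 + gamma * sqrt(tau * tau') / tau_eq)**kappa."""
    f = (1.0 + gamma * np.sqrt(tau * tau_p) / tau_eq) ** kappa
    return f * C_rad + (1.0 - f) * C_mat
```

In the limits $\tau \ll \tau_{\rm eq}$ and $\tau \gg \tau_{\rm eq}$ the function returns the pure radiation and pure matter correlators respectively.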
We note that the source functions for the EB integrators at a given $k$ are now just the eigenvectors of these model UETCs, multiplied by the square root of the associated eigenvalues, and so they are indeed orthogonal, see Eq.~(\ref{eq:source}).
In order to establish the form of the interpolating function, we perform numerical simulations of O($2$) and O($3$) defects at cosmological transitions. The interpolating function can be defined in the following way in terms of the equal-time correlators (ETC) $E_{ab}(k,\tau)=C_{ab}(k,\tau,\tau)$ \cite{Fenu:2013tea}:
\begin{equation}
f_{ab}(k,\tau)=\frac{E_{ab}^{\rm RM}(k,\tau)-\bar{E}_{ab}^{\rm M}(k\tau)}{\bar{E}_{ab}^{\rm R}(k\tau)-\bar{E}_{ab}^{\rm M}(k\tau)} \quad \forall k,
\label{eq:trans-fun}
\end{equation}
where $\bar{E}^{\rm R}(k\tau)$ and $\bar{E}^{\rm M}(k\tau)$ are the scaling ETCs in the radiation and matter eras respectively, and $E^{\rm RM}(k,\tau)$ is the true ETC measured from the simulations performed during the transition.
We extracted ETCs from the simulations with $\tau_{\rm eq}= 3,\ 10,\ 40,\ 150$ and $300$ (see Table~\ref{tab-runs}), and used Eq.~(\ref{eq:trans-fun}) to compute the function. Fig.~\ref{fig:trans-fun} shows the results obtained for the $E_{11}$ correlators for global strings (left panel) and global monopoles (right panel); the transition functions for the rest of the correlators are similar to those shown in the figure. The five grey shaded regions represent the raw transition functions obtained during the five transition periods simulated. The two grey levels indicate $1\sigma$ and $2\sigma$ deviations from the mean value calculated by averaging over a set of wavevectors: $1.5 < |\mathbf{k}| < 3.5$ and $3 < |\mathbf{k}| < 5$ respectively.
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c| }
\hline
$\tau_{\rm eq}$ & 300 & 150 & 40 & 10 & 3 \\
$\tau_{ \rm ref}/\tau_{\rm eq}$ & 0.5 & 1.0 & 3.75 & 15 & 50 \\
$\tau_{\rm end}/\tau_{\rm eq}$ & 1.00 & 2.0 & 7.5 & 50 & 100 \\
$\alpha(\tau_{\rm ref})$ & 1.09 & 1.17 & 1.44 & 1.76 & 1.91\\
$\alpha(\tau_{\rm end})$ & 1.17 & 1.29 & 1.60 & 1.86 & 1.95 \\
\hline
\end{tabular}
\caption{\label{tab-runs} Selected parameters for simulations across the radiation-matter transition. The parameters are the conformal time of matter-radiation equality, $\tau_{\rm eq}$, in units of $\eta^{-1}$, the ratio of the reference time $\tau_{\rm ref}$ for UETC data taking and the simulation end time $\tau_{\rm end}$ to $\tau_{\rm eq}$, and the expansion rate $\alpha=d \ln a/ d \ln \tau$ at $\tau_{\rm ref}$ and $\tau_{\rm end}$.}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=15cm]{transition.pdf}
\caption{UETC interpolation functions (thick grey lines) derived from simulations performed during the radiation-matter transition, corresponding to global strings (left panel) and global monopoles (right panel). The five patches correspond to simulations with $\tau_{\rm eq}=3,\ 10,\ 40,\ 150$ and $300$. The shaded regions represent the $1\sigma$ and $2\sigma$ deviations from the mean value of the function obtained from Eq.~(\ref{eq:trans-fun}), calculated by averaging over $k$, while the red line corresponds to the best fit given by the function expressed in Eq.~(\ref{eq:ftau}). In both panels the correlator used is $E_{11}$.
\label{fig:trans-fun}}
\end{figure}
The interpolating functions derived from our simulations confirm what previous analyses of the behaviour of the energy-momentum correlators at cosmological transitions showed: the scale independence of the interpolating function. The deviations from the mean value, represented by the two grey levels, though somewhat bigger for monopoles, are not significant in either case. This demonstrates that the interpolating functions depend only on time to a good approximation. The rest of the correlators (not shown) support the scale-independence statement, with the same function, which we shall write $f(\tau)$.
We fit the data using the same form used in \cite{Fenu:2013tea,Daverio:2015nva}, which is
\begin{equation}
f(\tau)= \left( 1+ \gamma \frac{\tau}{\tau_{\rm eq}} \right)^{\kappa},
\label{eq:ftau}
\end{equation}
where $\gamma$ and $\kappa$ are the parameters to be determined by the fitting process.
Table~\ref{tab:trans} shows the mean values and standard deviations for the parameters of Eq.~(\ref{eq:ftau}). The means and standard deviations are obtained by averaging over different realizations and over different correlators, since it has been observed that, to a good approximation, the interpolating function is the same for all correlators. The resulting best fit is also included in Fig.~\ref{fig:trans-fun}.
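To illustrate the fitting procedure, the sketch below recovers $\gamma$ and $\kappa$ with a standard least-squares fit. The data here are a synthetic, hypothetical stand-in for the measured interpolation functions (generated with the O($2$) best-fit values), with $\tau$ taken in units of $\tau_{\rm eq}$:

```python
import numpy as np
from scipy.optimize import curve_fit

def f_transition(tau, gamma, kappa, tau_eq=1.0):
    # Interpolating function of Eq. (eq:ftau): f(tau) = (1 + gamma*tau/tau_eq)^kappa
    return (1.0 + gamma * tau / tau_eq) ** kappa

# Synthetic stand-in for the measured interpolating function,
# generated with the O(2) best-fit values from Table tab:trans
tau = np.linspace(0.1, 50.0, 200)
f_data = f_transition(tau, 0.26, -1.15)

# Fit gamma and kappa only (tau_eq fixed at 1); bounds keep 1 + gamma*tau > 0
(gamma_fit, kappa_fit), _ = curve_fit(f_transition, tau, f_data,
                                      p0=[0.2, -1.0],
                                      bounds=([0.0, -5.0], [5.0, 0.0]))
print(gamma_fit, kappa_fit)
```

In the actual analysis, `f_data` would be the ratio of the measured transition-era UETC to the scaling UETCs, averaged over $k$.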
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|}
\hline
&$\gamma$ & $\kappa$ \\
\hline
O($2$) & 0.26 $\pm$ 0.03 & -1.15 $\pm$ 0.02 \\
O($3$) & 0.23 $\pm$ 0.05 & -1.4 $ \pm$ 0.2 \\
\hline
\end{tabular}
\caption{\label{tab:trans} Mean values together with the standard deviations for parameters $\gamma$ and $\kappa$ of Eq.~(\ref{eq:ftau}) needed to reproduce the radiation-matter transition.}
\end{table*}
In \cite{Fenu:2013tea} it was proposed that the interpolation function should be universal in all defect models, with $\gamma = 0.25$ and $\kappa = -2$. In \cite{Daverio:2015nva} it was found that for the Abelian Higgs model,
the interpolation function had the same form, but with $\gamma = 0.25$ and $\kappa = -1$, which is already a counterexample.
From our results we see that the interpolation function is not universal even within O($N$) defect models.
However, for bigger $N$ the magnitude of $\kappa$ is larger, which might be the sign of a trend. It would be interesting to test whether, on increasing the value of $N$, we eventually recover the value proposed in \cite{Fenu:2013tea}.
Finally, having determined how the transition has to be performed for the two defects analyzed in this paper, we diagonalise the true non-scaling UETCs of Eq.~(\ref{eq:trans}) and obtain the source functions of Eq.~(\ref{eq:source}) that will be used for the CMB power spectrum calculation, as we describe in the next section.
\section{Power Spectra \label{sec:spectra}}
In the previous section we defined the source functions for global strings and monopoles. Inserting these functions into a source-enabled Einstein-Boltzmann (EB) solver, we can compute the contributions to the CMB power spectra due to the presence of global defects. In our case the EB solver is the source-enabled version of CMBEASY \cite{Doran:2003sy}, \textit{i.e.\ } the code has been additionally modified to handle source functions of the kind explained in the previous section.
The cosmological parameters used for these calculations are the best-fit values obtained by the Planck collaboration \cite{Ade:2015xua}: $h=0.6726$, $\Omega_{\rm b}h^2=0.02225$, $\Omega_{\Lambda}=0.6844$ and reionization optical depth $\tau_{\rm re}=0.079$. After diagonalisation, the total contribution of the defects under analysis to the temperature and polarization anisotropies is calculated by summing the contributions of the individual source functions, with 130 source functions summed in each case.
Figs.~\ref{fig:cl-com} and \ref{fig:spc-o3}
show the temperature and polarization power spectra obtained for the two models as solid black lines,
normalised using the parameter
\begin{equation}
\mu=\pi \eta^2,
\label{eq:mu}
\end{equation}
where $\eta$ is the vacuum expectation value of the (real) scalar fields.
In each case $\mu$ has a different meaning. For the global string case it can be seen as the tension in the core of the string,
and for global monopoles the energy in the monopole core is approximately $\mu\delta$.
In Fig.~\ref{fig:cl-com} we have also plotted the power spectra obtained for Abelian Higgs strings in \cite{Lizarraga:2016onn} as red lines for comparison, for which $\mu$ is precisely the string tension.
The figure shows that the amplitude of the global string signal is almost a factor of two bigger, whereas the shapes of the two spectra around the peak are very similar. The global string power spectra fall off faster at high multipole, a consequence of the faster fall-off of the ETCs at high wavenumber.
Fig.~\ref{fig:spc-o3} in turn shows the temperature and all polarization power spectra obtained for global monopoles.
Comparing with the O($2$) case, we can see that the signal given by the O($2$) model is much bigger than that given by O($3$) monopoles. Furthermore, although the overall shape is similar in both cases, the O($3$) spectra are more oscillatory.
The power spectra for global strings and monopoles can be compared with those obtained in \cite{Fenu:2013tea} for O($N$) defects in the large-$N$ limit (red line in Fig.~\ref{fig:spc-o3}). It can be noted that all spectra share a similar overall shape.
The spectrum obtained from the large-$N$ limit shows clearer oscillations,
more closely resembling the global monopole curve,
but underestimates the amplitude.
To quantify the underestimate, we show in Table~\ref{t:largencom} the values of the power spectra for the two cases (obtained in our analysis and using large-$N$) at $l=10$ and at the peak of the power spectra.
Note also that the ratio is not the same at both points, showing that the large-$N$ limit does not capture the detailed shape of the power spectra well. In these values we see a behaviour similar to that shown in \cite{Figueroa:2012kw}; that is, for bigger values of $N$ the ratio between the measured value and the theoretical one seems to approach one.
\begin{figure}[htbp]
\centering
\includegraphics[width=15cm]{o2-cl-com.png}
\caption{Temperature and all polarization channels for the CMB of O($2$) (black line) and Abelian Higgs (red line). Solid lines
correspond to the mean spectra, while shaded regions represent $1\sigma$ and $2\sigma$ confidence limits obtained by bootstrapping 10 times over 5 radiation and 5 matter realizations for the UETCs (over 7 radiation and 7 matter realizations in the UETC merging process for AH) \cite{Lizarraga:2016onn}. Note that $\mu = \pi \eta^2$, where $\eta$ is the vacuum expectation value of the scalar fields.
}
\label{fig:cl-com}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=15cm]{o3-cl-com.png}
\caption{Temperature and all polarization channels for the CMB of O($3$) (black line) and the large-$N$ computation of \cite{Fenu:2013tea} (red line). Black lines correspond to the mean spectra, while grey regions represent $1\sigma$ and $2\sigma$ confidence limits obtained by bootstrapping 10 times over 5 radiation and 5 matter realizations for the UETCs.
Note that $\mu = \pi \eta^2$, where $\eta$ is the vacuum expectation value of the scalar fields.
}
\label{fig:spc-o3}
\end{figure}
Fig.~\ref{fig:tt-all} shows the contribution of scalars, vectors and tensors to the temperature channel for both the O($2$) and O($3$) cases. In this figure we can see that the scalar contribution is the dominant one, and that vectors and tensors contribute less to the temperature channel, with tensors contributing the least. The decomposition is almost the same in both models, O($2$) and O($3$).
\begin{figure}[htbp]
\centering
\includegraphics[width=7cm]{o2-tt-all.png}
\includegraphics[width=7cm]{o3-tt-all.png}
\caption{The CMB temperature power spectrum for O($2$) (left) and O($3$) (right). The plot shows the total (black region) plus the decomposition into scalar (red region), vector (blue region) and tensor (green region). In those regions the bright lines correspond to the mean spectra, while the pale regions represent $1\sigma$ and $2\sigma$ confidence limits obtained by bootstrapping 10 times over 5 radiation and 5 matter realizations for the UETCs. }
\label{fig:tt-all}
\end{figure}
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|ccc|ccc|}
\hline
& \multicolumn{3}{c|}{Peak} & \multicolumn{3}{c|}{$l=10$} \\
\hline
$N$ & LN & S & Ratio (S/LN) & LN & S & Ratio (S/LN) \\
\hline
2 & 41.0 & 799 & 19.5 & 32.0 & 374 & 11.7 \\
\hline
3 & 18.2 & 37.9 & 2.08 & 14.2 & 22.8 & 1.60 \\
\hline
\end{tabular} \\
\caption{\label{t:largencom} Values of $l(l+1)C_l^{TT}$ for $N=2$ and $N=3$ at the peak of the temperature power spectrum
and at multipole $l=10$.
S denotes values obtained from our simulations and LN values obtained in \cite{Fenu:2013tea} using the large-$N$ limit.}
\end{center}
\end{table}
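The quoted ratios can be checked directly from the tabulated values:

```python
# Values of l(l+1) C_l^TT from Table t:largencom, as (large-N, simulation) pairs
values = {
    (2, "peak"): (41.0, 799.0), (2, "l10"): (32.0, 374.0),
    (3, "peak"): (18.2, 37.9),  (3, "l10"): (14.2, 22.8),
}

# Ratio S/LN at the peak and at l = 10 for N = 2, 3
ratios = {key: sim / large_n for key, (large_n, sim) in values.items()}
print(ratios)
```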
We can also compare to the numerical simulations in the O(2) and O(3) NLSM \cite{Pen:1993nx,Pen:1997ae}.
Although these calculations were done in an Einstein-de Sitter universe with $h=0.5$, there is little difference
from the $\Lambda$CDM power spectrum at $l = 10$, where the Sachs-Wolfe effect is dominant.
The shapes of the power spectra
\cite{Pen:1997ae} are broadly similar, although the relative contribution of the scalar is much lower in the calculation of Ref.~\cite{Pen:1997ae}.
This has the effect of reducing the height of the peak relative to power at $l=10$, and moving the peak from $l \simeq 300$ to $l \simeq 100$.
This is likely to be an effect of the higher matter density \cite{Hu:1995kot}.
The smaller simulation volume, $400^3$, is unlikely to be a source of difference, as scaling is rapidly reached in NLSM simulations.
The matter-radiation transition was treated using a multistage eigenvector interpolation method, which gives very similar results to
our fixed-$k$ interpolation method \cite{Lizarraga:2016onn}.
In Table \ref{t:GmuCom} we compare the value of $G\mu$ required to normalise the temperature power spectrum to the COBE data at angular scales of $10^\circ$, as reported in Ref.~\cite{Pen:1997ae}, with the value required to normalise our power spectra to {\it Planck}\ at $l=10$; the two normalisations are approximately equivalent.
At these large angular scales, the difference in the background cosmologies should not have much effect, and we
conclude that there is a suggestion of a systematically higher temperature power spectrum for a given $G\mu$
in the NLSM. This would be interesting to check, but would require a separate campaign of simulations in the NLSM.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|}
\hline
$N$ & NLSM \cite{Pen:1993nx} & This work \\
\hline
2 & $ (2.6\pm1.3) \times 10^{-6}$ & $(1.34\pm0.02) \times 10^{-6}$ \\
\hline
3 & $ (2.3\pm0.4) \times 10^{-6}$ & $(5.44\pm0.02) \times 10^{-6}$ \\
\hline
\end{tabular}
\caption{\label{t:GmuCom} Comparison between $G\mu$ normalised to the COBE temperature fluctuations filtered at a $10^\circ$ scale
in the NLSM \cite{Pen:1993nx} and the approximately equivalent value obtained by normalising our power spectra to
{\it Planck}\ at $l = 10$.}
\end{center}
\end{table}
\section{Fits and constraints \label{sec:fits}}
The CMB anisotropy predictions obtained from field-theoretical numerical simulations of global strings and global monopoles are compared with the latest CMB data released by the {\it Planck}\ collaboration \cite{Ade:2015xua}, in order to put limits on the allowed fraction of those defects. We consider the whole {\it Planck}\ CMB dataset and analyze it using the publicly available likelihoods (TT, TE, EE + lowTEB) provided by the collaboration \cite{Aghanim:2015xee}. The Monte Carlo analysis has been performed using {\sc cosmomc} \cite{Lewis:2002ah}, which uses {\sc camb} \cite{Lewis:1999bs} as the Einstein-Boltzmann solver for the primordial perturbations.
The base model for the data consists firstly of the
6 parameters of the standard $\Lambda$CDM\ model:
$\omega_b$, the physical baryon density;
$\omega_c$, the physical cold dark matter density;
$\Theta_{MC}$, the approximate ratio of the sound horizon at recombination to the angular diameter distance;
$\tau$, the reionization optical depth to last scattering;
$n_{\rm s}$, the spectral index of the primordial scalar perturbations, and
$A_{\rm s}$, their amplitude.
There are also 27 nuisance parameters in the fit, relating to the experiments used in the analysis.
We call this the {\it Power-Law} model ($\mathcal{PL}$).
In order to construct models with defects, we add to the power spectrum of the basic $\mathcal{PL}$ model the possible contribution of one or other of the global defects, parametrised by $10^{12}(G\mu)^2$ (\ref{eq:mu}) or equivalently $f_{10}$, the fraction the defects contribute to the temperature power spectrum at $l=10$.
Shape variations of the defect power spectra for different background cosmologies are negligible,
as their contribution is expected to be of order of about $1\%$ of the temperature power spectrum.
Note that $f_{10} \propto (G\mu)^2$, and that a flat prior with upper bound $10^{12}(G\mu)^2<100$ was imposed.
We find that the addition of the defects to the $\mathcal{PL}$\ model does not improve the fit to the data. Even though O($3$) defects are able to improve the likelihood slightly, the improvement is not significant. In all cases the scenario with no defects, $G\mu=0$, is compatible with the measurements.
We therefore give the $95\%$ confidence level upper limits for the defect model parameters for O($2$) and O($3$) defects in Table~\ref{t:gmulimits2}. The base-model parameters are consistent with the $\Lambda$CDM\ {\it Planck}\ values.
The upper bound on $G\mu$ is derived without any extrapolation of the UETCs due to the expected logarithmic increase of the
contribution from the cores of strings discussed at the end of Section \ref{s:Sca},
as we are unable to confirm the increase in the limited range of our simulations. If one assumes
that the UETCs scale by a factor of the ratio of the logarithms in our simulations and at decoupling,
then the upper bound is reduced to
\begin{equation}
10^{12} (G\mu)^2 < 0.031 \left(\frac{\log(m_{\rm s} a \xi)_{\tau_{\rm ref}}}{\log(m_{\rm s} a \xi)_{\tau_{\rm dec}}} \right)^2,
\end{equation}
where $\tau_{\rm dec}$ is the conformal time at decoupling.
This gives
\begin{equation}
(G\mu)^2 < 7 \times 10^{-17}.
\end{equation}
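For orientation, comparing the two bounds gives the implied ratio of logarithms; this is simple arithmetic on the quoted numbers, not an independent result:

```python
import math

uncorrected = 0.031e-12   # upper bound on (G mu)^2 without the log correction
corrected = 7e-17         # quoted bound after rescaling the UETCs

# ratio log(m_s a xi)_ref / log(m_s a xi)_dec implied by the two bounds
log_ratio = math.sqrt(corrected / uncorrected)
print(log_ratio)
```

That is, the decoupling-era logarithm is taken to be roughly twenty times larger than the one reached in the simulations.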
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c||c|c||c|}
\hline
Dataset & \multicolumn{3}{c|}{{\it Planck}\ 2015 CMB} \\
\hline
Defect & O($2$) & O($3$) & \\ \hline
Model & {$\mathcal{PL}+G\mu$} & {$\mathcal{PL}+G\mu$} & $\mathcal{PL}$ \\ \hline
$f_{10}$ & $<0.017$ & $<0.024$ & $-$ \\
$10^{12}(G\mu)^2$ & $<0.031$ & $<0.73$ & $-$\\
\hline
$-\ln{\cal L}_\mathrm{max}$ & $6472$ & $6470$ & $6472$ \\\hline
\end{tabular} \\
\caption{\label{t:gmulimits2} 95\% upper limits for $(G\mu)^2$ and $f_{10}$ as well as best-fit likelihood values for different cosmological models for O($2$) global strings and O($3$) global monopoles, fitting for the {\it Planck}\ 2015 TT, TE, EE and low TEB data. }
\end{center}
\end{table}
Comparing the fits obtained here with those obtained in \cite{Lizarraga:2016onn}, where the Abelian Higgs case was analysed, we observe that global strings give a slightly bigger contribution at $l=10$, $f_{10}$, than Abelian Higgs strings with the same symmetry-breaking scale, while global monopoles give the biggest contribution. However, even though global monopoles slightly improve the global fit to the data, the fitting process shows no significant preference for models with defects.
\section{Discussion and conclusions \label{sec:conclusions}}
In this paper we have computed for the first time the CMB power spectra for global strings and monopoles using field theory simulations
of the linear sigma model. Previous numerical simulations used the non-linear sigma model, which does not capture the energy-momentum of the cores of the defects.
In order to obtain the power spectra, we computed the UETCs of the energy-momentum in our simulations,
and extracted source functions for Einstein-Boltzmann solvers, using a recently-introduced method which better
captures the effect of the transition from a radiation-dominated to a matter-dominated cosmology.
We compared our predictions with the latest {\it Planck}\ data \cite{Ade:2015xua} so as to put limits on models with O($2$) or O($3$) defects.
We investigated the scaling regime with simulations performed in pure radiation and matter eras,
giving various scaling parameters
in Table~\ref{tab:betas2} and Table~\ref{tab:betas3}.
They are compatible with, and more accurately determined than, those from
previous numerical simulations of global strings \cite{Moore:2001px,Yamaguchi:1999yp,Yamaguchi:1999dy}
and of global monopoles \cite{Martins:2008zz,Lopez-Eiguren:2016jsy}.
The UETCs obtained in the matter-dominated era for global strings and global monopoles can be seen in Figures \ref{fig:uetc-nf2-rad2} and \ref{fig:uetc-nf2-rad3}. The shapes are similar, but the amplitudes for global strings are significantly higher than those for global monopoles.
Global string UETCs are also somewhat higher than those for the Abelian Higgs strings.
Although there are fewer global strings per horizon volume, their effective string tension is higher by
the logarithmic enhancement discussed at the end of the last section, and the net result is
more energy per horizon volume in the global case.
We computed the UETC interpolation function for the radiation-matter transition,
using 5 different time ranges that covered most of the transition period.
The effects of matter-$\Lambda$ transition are rather small \cite{Lizarraga:2016onn}, and we do not consider them.
The form of the radiation-matter transition function (\ref{eq:ftau}) is consistent with other analyses and the value of $\gamma$ is compatible with the values found for AH strings \cite{Daverio:2015nva} and for global defects in the large-$N$ limit \cite{Fenu:2013tea}.
However, the value of the exponent $\kappa$ differs from model to model, which confirms that the transition function is not universal between defects. It would be interesting to determine whether $\kappa$ reaches the value proposed in \cite{Fenu:2013tea} when $N$ becomes large.
After obtaining the source functions that capture the radiation-matter transition we have computed the CMB power spectra. We have compared the power spectra with those computed in the non-linear sigma model both in numerical simulations at $N=2,3$ \cite{Pen:1993nx,Pen:1997ae} and
in the large-$N$ limit \cite{Fenu:2013tea}. We also compare global strings with
the gauge strings in the Abelian Higgs model \cite{Lizarraga:2016onn}.
The overall shape of the CMB power spectra for the cases under study is fairly similar, although the power spectra for global monopoles show a more oscillatory behaviour than the strings, more like the spectra obtained in the large-$N$ limit of the NLSM.
The NLSM in the large-$N$ limit underestimates the amplitude of the
power spectra of global defects by a significant factor, up to 20 in the case of strings.
The NLSM numerical simulations at $N=2,3$ are broadly compatible with our linear $\sigma$-model simulations,
although detailed comparison is complicated by the different background cosmologies.
There is some sign that the NLSM amplitude is larger for a given symmetry-breaking scale.
The signal coming from the global string case has a similar shape around the peak but a somewhat higher amplitude than the signal coming from the Abelian Higgs strings. The fall-off of the temperature power spectrum at high multipole is faster in the global string case, which is a consequence of the faster fall-off of the ETCs with wave number, $k^{-2}$. This faster fall-off is indicative of some non-trivial energy-momentum correlations which disguise the expected $k^{-1}$ of randomly placed string-like objects \cite{Vincent:1996qr}.
The monopole ETCs also fall as $k^{-2}$, contrasting with $k^0$ for randomly-placed point-like objects.
Finally, comparing the power spectra predictions with the latest CMB data released by the {\it Planck}\ collaboration \cite{Ade:2015xua} we put limits on the allowed fraction of those defects, given in Table \ref{t:gmulimits2}.
We have seen that global strings could give a slightly bigger contribution than the Abelian Higgs case, while global monopoles could give the biggest contribution among these three models, with a fractional contribution at $l=10$ of around 2.4\%.
The limits correspond to constraints on the symmetry-breaking scale of $\eta
< 2.9\times 10^{15}\;\textrm{GeV}$ for global strings
($6.3 \times 10^{14}\;\textrm{GeV}$ with the logarithmic correction to the scaling UETCs)
and $\eta < 6.4\times 10^{15}\;\textrm{GeV}$ for global monopoles.
The global string limit is relevant for the ultra-light axion scenario \cite{Marsh:2015xka},
provided that the strings are formed and not inflated away,
and also that the axion mass is less than the inverse horizon size at decoupling so that the strings
survive long enough to perturb the CMB.
The bound on the axion decay constant $f_a$ in this case is the same as the bound on $\eta$,
in the axion mass range $m_a \lesssim 10^{-28}$ eV.
The constraint applies even when ULAs do not comprise the dark matter.
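The quoted symmetry-breaking scales follow from the $(G\mu)^2$ limits via $\mu = \pi\eta^2$. A quick numerical cross-check, assuming natural units with $G = M_{\rm Pl}^{-2}$ and $M_{\rm Pl} \simeq 1.22\times 10^{19}$ GeV:

```python
import math

M_PL = 1.22e19  # Planck mass in GeV; G = 1 / M_PL**2 in natural units

def eta_bound(gmu_squared_limit):
    # mu = pi * eta^2 implies G*mu = pi * G * eta^2, so eta = M_PL * sqrt(G*mu / pi)
    gmu = math.sqrt(gmu_squared_limit)
    return M_PL * math.sqrt(gmu / math.pi)

eta_strings = eta_bound(0.031e-12)    # O(2) limit on (G mu)^2 from Table t:gmulimits2
eta_monopoles = eta_bound(0.73e-12)   # O(3) limit
print(eta_strings, eta_monopoles)
```

This reproduces the bounds $\eta \lesssim 2.9\times 10^{15}$ GeV and $\eta \lesssim 6.4\times 10^{15}$ GeV quoted above.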
Using these constraints we can estimate the maximum amplitude of the gravitational wave spectrum created by global strings and global monopoles following the calculations presented in \cite{Figueroa:2012kw}.
Assuming that the tensor UETC scales in the same way as the scalar and vector ones
with the logarithmic correction of the effective string tension, we can just insert
the uncorrected upper limit on $(G\mu)^2$ (see Table~\ref{t:gmulimits2}) into the
gravitational wave energy density \cite{Figueroa:2012kw}
\begin{equation}
\Omega_{\rm GW}=\frac{650}{N}\Omega_{\rm rad}\left(\frac{G\mu}{\pi}\right)^2 \frac{\Omega_\text{GW}^\text{num}}{\Omega_\text{GW}^\text{th}},
\end{equation}
where $\Omega_{\rm rad}=1.6\times 10^{-5}$ is the radiation-to-critical energy density ratio today, and $\Omega_\text{GW}^\text{num}/\Omega_\text{GW}^\text{th}$ is a numerically determined correction factor, equal to ($130,7.5$) for $N=(2,3)$ \cite{Figueroa:2012kw}.
We obtain that the upper limit for the amplitude of the GW spectrum is similar in both cases (global strings and global monopoles), at around $\Omega_{\rm GW} \lesssim 2\times 10^{-15}$. Comparing this value with the expected sensitivity curve of the gravitational wave observatory
LISA \cite{Bartolo:2016ami,Caprini:2015zlo} it seems that the gravitational wave backgrounds created by global strings and global monopoles lie below the sensitivity window.
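The quoted amplitude can be reproduced by inserting the limits of Table~\ref{t:gmulimits2} into the formula above:

```python
import math

OMEGA_RAD = 1.6e-5  # radiation-to-critical energy density ratio today

# (G mu)^2 upper limits from Table t:gmulimits2, and the numerical correction
# factors Omega_num / Omega_th of Figueroa et al. for N = 2, 3
limits = {2: 0.031e-12, 3: 0.73e-12}
corrections = {2: 130.0, 3: 7.5}

omega_gw = {N: (650.0 / N) * OMEGA_RAD * limits[N] / math.pi**2 * corrections[N]
            for N in (2, 3)}
print(omega_gw)
```

Both cases come out at roughly $2\times 10^{-15}$, as stated in the text.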
\acknowledgments
We thank Martin Kunz and Guy Moore for useful discussions. JL, AL-E and JU acknowledge support from the Basque Government (IT-979-16) and the Spanish Ministry MINECO (FPA2015-64041-C2-1P). AL-E is also supported by the Basque Government grant BFI-2012-228.
MH (ORCID ID 0000-0002-9307-437X) acknowledges support from the Science and Technology Facilities Council
(grant number ST/L000504/1).
AL-E would like to thank Kari Rummukainen and the Helsinki Institute of Physics where part of this work was performed.
Our simulations made use of facilities at the i2Basque academic network, the Finnish Centre for Scientific Computing CSC, and the COSMOS Consortium supercomputer (within the DiRAC Facility jointly funded by STFC and the Large Facilities Capital Fund of BIS).
Optimally allocating limited resources is a central problem in economics \citep{samuelson2010economics} and operations research \citep{ward1957optimal, everett1963generalized}. It is often complicated further by uncertainty inherent to the considered problem. On the one hand, future resource capacity may be limited and not known exactly in advance. On the other hand, the tasks that require resources might have uncertain payoff. This situation is commonly encountered in various real-world applications. For example, in credit card fraud detection, fraud analysts can only investigate a limited number of transactions each day. Similarly, in direct marketing, a company may only be able to target a subset of customers in a marketing campaign. The challenge is how to optimally allocate resources to maximize business pay-off, e.g., how to optimally allocate fraud investigators to suspicious transactions to minimize losses due to fraud. By learning from historical data, machine learning models can assist decision-makers by predicting the most relevant tasks based on their characteristics.
Prior work addresses the problem of uncertain task outcomes via classification. The most promising tasks can be identified by estimating the probability of success for each task. The problem of stochastic, limited capacity can then be addressed separately in a second stage, when assignment decisions are made by prioritizing tasks based on their estimated probability of a successful outcome. In this article, however, we argue and demonstrate that this approach based on classification models is suboptimal when resources are limited, because a classification model does not take capacity limitations into account. Hence, although only the most promising tasks can be executed, the model focuses equally on accurately predicting probabilities for tasks that are highly unlikely to be successful and, consequently, to be executed.
Therefore, we propose a novel approach based on learning to rank that simultaneously accounts for both resource and task uncertainty.
When resources are limited, we demonstrate that this approach is superior to allocation based on classification. First, we show how learning to rank can directly optimize the assignment's expected profit given limited, stochastic capacity. By considering the available capacity during optimization, the model focuses on correctly ranking the most promising tasks, proportional to their likelihood of being processed under limited capacity. Second, while instances are processed individually in classification, learning to rank explicitly considers a task's relevance in comparison to the other available tasks. The benefit of this approach is that we only care about relative positions in the ranking, corresponding to the need to prioritize tasks relative to each other.
Our contributions are threefold. First, we formalize the problem of allocating limited, stochastic resources to uncertain tasks by framing it as an assignment problem. Second, we propose a novel, integrated predict-and-optimize approach to solve this problem based on learning to rank. We contrast our approach with a two-stage predict-then-optimize framework that first uses a classification model to predict task outcomes and then solves the assignment problem using the predicted task probabilities. Third, we compare both methods empirically using various real life data sets from different application areas.
\section{Problem formulation}
\label{sec:problem formulation}
\begin{figure}[t]
\includegraphics[width=\textwidth, trim={0 0.2cm 0 0}]{LTR_for_Optimal_Resource_Allocation.pdf}
\caption{
\textbf{Problem overview.} \normalfont{We formulate our setting as a type of linear assignment problem where two sources of uncertainty must be considered: stochastic worker capacity and uncertain task outcomes. To account for stochastic capacity in the assignment problem, the capacity distribution is converted to workers with decreasing processing probabilities. Task outcomes are also uncertain and need to be predicted. The ultimate objective is to assign workers to tasks to maximize the resulting expected profit.}
}
\label{fig:problem_overview}
\end{figure}
We address the problem of optimally assigning limited and stochastic resources to tasks with uncertain outcomes to maximize the expected profit. We formalize it as a linear assignment problem where both workers and tasks are sources of uncertainty. The exact number of workers is uncertain at the time when resources need to be allocated, but we assume it is governed by a known probability distribution. In practice, this distribution can be estimated from historical data on the available resources or based on domain knowledge. Alternatively, a deterministic capacity can be considered. Second, task outcomes are also uncertain and need to be predicted using historical data on similar tasks. A graphical overview of the problem is shown in \cref{fig:problem_overview}. In the following, we introduce and formally define each element of the assignment problem.
\paragraph{Stochastic capacity.}
The available resources or number of workers $W$ is a discrete random variable described by a known probability distribution. In this work, we consider a common situation where the expected capacity $\mathbb{E}(W)$ is smaller than the number of available tasks $N$. The stochastic capacity can be converted to a sequence of $N$ workers with monotonically decreasing success rates. This rate $w_i$ equals the worker's probability of being available given $W$ and is described by the complementary cumulative probability distribution function: $w_i = P(W \geq i) = 1 - F_W(i)$. This yields a monotonically decreasing sequence of $N$ worker success rates $\mathbf{W} = \begin{pmatrix} w_1 & \dots & w_N \end{pmatrix} = \{1-F_W(i)\}_{i=1}^N$ with $w_1 \geq \dotso \geq w_N$.
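As a minimal sketch of this conversion (assuming, purely for illustration, Poisson-distributed capacity), the worker success rates can be computed from the survival function of the capacity distribution:

```python
import numpy as np
from scipy.stats import poisson

N = 10                # number of candidate tasks
MEAN_CAPACITY = 5.0   # hypothetical expected capacity: W ~ Poisson(5)

ranks = np.arange(1, N + 1)
# w_i = P(W >= i), the probability that at least i workers are available;
# scipy's survival function gives P(W > k), so we evaluate it at i - 1
w = poisson.sf(ranks - 1, MEAN_CAPACITY)
print(w)
```

The resulting sequence decreases monotonically with the rank $i$, as required.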
\paragraph{Uncertain tasks.}
There is also uncertainty regarding task outcomes. To address this uncertainty, we predict it using historical data on similar tasks. Let $\mathcal{T} = (\mathcal{X}, \mathcal{Y}, \mathcal{V})$ be the domain of all possible tasks $t_i = (\mathbf{x}_i, y_i, \mathbf{v}_i)$, where $\mathbf{x}_i \in \mathcal{X} \subset \mathbb{R}^d$ is a set of characteristics and $y_i \in \mathcal{Y} = \{0, 1\}$ is a binary label equal to 1 if the task is successful and 0 otherwise. Moreover, $\mathbf{v}_i = \{v^+_i, v^-_i\} \in \mathcal{V} \subset \mathbb{R}^2$ denotes the payoff if the task is executed, with $v^+_i$ if task $i$ was successful ($y_i = 1$) and $v^-_i$ otherwise. A task's reward is defined as $r_i = y_i v^+_i + (1-y_i)v^-_i$. We have $N$ available tasks to be allocated $\mathbf{T} = \{(\mathbf{x}_i, y_i, \mathbf{v}_i): i=1,\dots,N \}$, although $y_i$ is unknown when resources need to be allocated. Given historical data, a predictive model can estimate task outcomes $y_i$ resulting in $N$ predictions.
\paragraph{Matching workers and tasks.}
Workers and tasks can then be combined in an expected profit matrix $P = \begin{pmatrix} p_{ij} \end{pmatrix}$, where $p_{ij} = w_i r_j$ is the profit of assigning worker $i$ to task $j$ for $i,j=1,\dots,N$. Given $P$, the goal is to find the optimal assignment matrix $A = \begin{pmatrix} a_{ij} \end{pmatrix}$, where $a_{ij} = 1$ if worker $i$ is assigned to task $j$ and 0 otherwise, for $i,j=1,\dots,N$. This results in the following balanced linear assignment problem:
\begin{alignat}{3}
\text{maximize} & \sum_{i=1}^N \sum_{j=1}^N p_{ij} a_{ij} & \\
\text{subject to}
& \sum_{i=1}^N a_{ij} = 1 & j = 1, \dots, N; \label{eq:cond_task} \\
& \sum_{j=1}^N a_{ij} = 1 & i = 1, \dots, N; \label{eq:cond_worker} \\
& a_{ij} \in \{0,1\} & i,j = 1, \dots, N; \label{eq:integer}
\end{alignat}
where conditions \ref{eq:cond_task} and \ref{eq:cond_worker} specify that each task is assigned to exactly one worker and vice versa; condition \ref{eq:integer} imposes absolute assignments by restricting $a_{ij}$ to 0 or 1.
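The resulting matching can be computed with a standard Hungarian-algorithm solver; the sketch below uses a small hypothetical instance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

w = np.array([1.0, 0.8, 0.4, 0.1])   # worker success rates, w_1 >= ... >= w_N
r = np.array([5.0, -1.0, 3.0, 0.5])  # expected task rewards r_j

P = np.outer(w, r)                   # expected profits p_ij = w_i * r_j
rows, cols = linear_sum_assignment(P, maximize=True)
total_profit = P[rows, cols].sum()
print(total_profit)
```

Because the expected-profit matrix is the outer product of the worker rates and task rewards, and the worker rates decrease monotonically, the optimum simply pairs the most reliable workers with the highest-reward tasks; this is why a correct ranking of tasks suffices in this setting.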
\section{Related work}
The proposed solution in this paper relates to prior work on uncertainty in assignment problems, predict-and-optimize, classification and learning to rank. In this section, we briefly introduce each line of work and its relationship to our contribution.
\subsection{Uncertainty in assignment problems}
Optimal allocation of resources and decision-making under uncertainty are key problems in operations research \citep{ward1957optimal, everett1963generalized}. In this work, we consider an assignment problem. This is a general problem formulation in which the goal is to find an optimal matching of workers and tasks subject to certain constraints. This type of problem has been analyzed extensively \citep{burkard2012assignment} and applied to a diverse range of tasks \citep[e.g.,][]{alonso2017demand, bertsimas2019optimizing}. Moreover, various extensions consider different sources of uncertainty: uncertain worker capacity, uncertain task presence (i.e., outcomes), or uncertain task-worker profits \citep{toktas2006addressing, krokhmal2009random}. This work focuses on a specific type of linear assignment problem, in which we simultaneously address two sources of uncertainty: uncertain capacity and uncertain task success. However, instead of assuming that task success follows a probability distribution, we use a predictive model to estimate it.
\subsection{Predict-and-optimize}
The intersection of operations research and machine learning has increasingly drawn the attention of researchers from both fields \citep{lodi2017learning, bengio2021machine}. In particular, recent work on predict-and-optimize is relevant \citep{donti2017taskbased, wilder2019melding, elmachtoub2021smart}. The central aim in predict-and-optimize is to align a predictive model more closely with the downstream decision-making context \citep{mandi2020smart}. This is achieved by fusing the prediction and optimization phases and training the model in an end-to-end manner, with the aim of obtaining higher quality decisions \citep{kotary2021end}. Ranking specifically has been studied in this context: Demirovi\'{c} et al. use ranking methods for the knapsack problem \citep{demirovic2019investigation} and study ranking objectives of combinatorial problems in general \citep{demirovic2019predict+}, though both are limited to linear models. In contrast, our proposed approach is compatible with any type of learner that can be trained using gradient-based optimization, but it is applicable only to our specific, though commonly encountered, formulation of the assignment problem.
\subsection{Classification}
Classification is a task in machine learning where the goal is to predict the class of an instance given its characteristics. For instance, classifying a task as either successful or not is a binary classification problem. Existing work typically considers the applications in this paper as classification problems, e.g., fraud detection \citep{vanvlasselaer2017gotcha, cerioli2019newcomb}, credit scoring \citep{baesens2003benchmarking, lessmann2015benchmarking}, direct marketing \citep{baesens2002bayesian} and customer churn prediction \citep{verbeke2011building, verbeke2012new_churn}. Moreover, to align the models more closely with the decision-making context, cost-sensitive classification has been used \citep{bahnsen2014examplelogistic, petrides2020profit_credit, hoppner2020profit, hoppner2022instance}. Cost-sensitive methodologies incorporate the costs of different decisions into the optimization or use of predictive models \citep{elkan2001foundations, petrides2021csensemble}. Cost-sensitive variants have been proposed for different classification models, such as logistic regression and gradient boosting \citep{bahnsen2014examplelogistic, hoppner2022instance}. The output of a classification model is often used to rank instances, reflected by widely used evaluation metrics that analyze this ranking, such as the receiver operating characteristics curve and precision--recall curve \citep{davis2006relationship}. However, in contrast to our work, these approaches do not consider the available capacity during optimization of the models. Although limited capacity has been acknowledged in the literature (e.g., in fraud detection \citep{dal2017credit}, direct marketing \citep{bose2009quantitative} or churn prediction \citep{hadden2007computer}), no existing solution explicitly addresses this issue.
\subsection{Learning to rank}
In learning to rank, the goal is to predict the order of instances relative to each other, based on their characteristics.
Although learning to rank originated in the field of information retrieval, it is a general framework that has been applied to a variety of problems that have traditionally been solved with classification models, such as software defect prediction \citep{yang2014learning}, credit scoring \citep{coenen2020machine} and uplift modeling \citep{devriendt2020learning}. Moreover, similar to cost-sensitive classification, the learning to rank framework has been extended to incorporate costs of instances to align the optimization of the model more closely with the resulting decisions \citep{mcbride2019cost}.
However, our approach is the first to explicitly consider the available capacity during the optimization of the ranking model.
{\parfillskip=0pt \emergencystretch=.5\textwidth \par}
\section{Methodology}
We present two approaches for the problem presented in \cref{sec:problem formulation}. On the one hand, a two-stage predict-then-optimize framework can be used. In the first stage, we predict the task successes $\mathbf{\hat{Y}}$. Here, we show how different types of classification objectives can be used to predict task success. In the second stage, we optimize the assignment of tasks to workers to obtain an assignment matrix $A$. For this, we provide an analytical solution and prove its optimality. On the other hand, we present an integrated predict-and-optimize framework for prediction and optimization by leveraging learning to rank techniques.
\subsection{Two-stage predict-then-optimize}
This section presents a conventional two-stage approach for solving the problem. In the first stage, a classification model predicts each task's probability of success. Existing approaches in classification \citep{murphy2012machine, hoppner2022instance} can be used to optimize this model for either accuracy or profit. In the second stage, tasks are assigned to workers based on these predicted probabilities. We present a straightforward procedure for this assignment and prove its optimality.
\subsubsection{Predicting task outcomes using classification.}
To handle the uncertainty regarding task outcomes, we train a classification model to predict whether a task will be successful. Given historical data $\mathcal{D}_\text{Train}$, the goal is to predict $y_i$ using a classifier $f_\theta: \mathcal{X} \to [0, 1]: \mathbf{x} \mapsto f_\theta(\mathbf{x})$ defined by parameters $\theta \in \Theta$ that predicts the probability of a task being successful. Classifier training can be accomplished with different objective functions. We present two alternatives: one that focuses optimization on accuracy and one that optimizes the classification cost.
The conventional approach is to train the classifier with the aim of maximizing accuracy. This can be achieved using the maximum likelihood approach or, equivalently, by minimizing the cross-entropy loss function \citep{murphy2012machine}:
\begin{equation}
\mathcal{L}^\text{CE} =
-\Big( y_i \log f_\theta(\mathbf{x}_i) + (1-y_i)\log\big(1-f_\theta(\mathbf{x}_i)\big) \Big).
\label{eq:ce}
\end{equation}
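As a minimal illustration, this loss rewards well-calibrated probabilities: for a successful task, a high predicted probability yields a small loss and a low one a large loss (the probabilities below are arbitrary examples):

```python
import math

def cross_entropy(y, f):
    # Binary cross-entropy (negative log-likelihood) for a single task.
    return -(y * math.log(f) + (1 - y) * math.log(1 - f))

# For an actually successful task (y = 1), arbitrary example probabilities:
low = cross_entropy(1, 0.9)    # confident and correct: small loss
high = cross_entropy(1, 0.1)   # confident and wrong: large loss
```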
A drawback of this approach is that the solution ignores some of the problem specifications. Some tasks are more important to classify correctly than others, depending on their cost (or profit) when executed. Therefore, in cost-sensitive learning, these costs are incorporated into the training of a model. In classification, the cost of a decision depends on whether it was classified correctly and on the task itself. These costs are formalized with the concept of a cost matrix $\mathbf{c}_i$ \citep{elkan2001foundations}:
\begin{equation}
\begin{matrix}
& \text{\bf\small Actual class } $y_i$ \\[0.1em]
& \begin{matrix}
\hspace{2pt} 0 \hspace{10pt} & \hspace{10pt} 1
\end{matrix} \\[0.2em]
\begin{matrix}
\multirow{2}{*}{\text{\bf\small Predicted class } $\hat{y}_i$} & 0 \\[0.3em]
& 1 \\[0.3em]
\end{matrix}
&
\begin{pmatrix}
c^\text{TN}_i & c^\text{FN}_i \\[0.3em]
c^\text{FP}_i & c^\text{TP}_i \\[0.3em]
\end{pmatrix} \\
\end{matrix}
\label{eq:cost_matrix}
\end{equation}
This way, we can directly minimize the average expected cost of predictions, as an alternative to the cross-entropy loss \citep{bahnsen2014examplelogistic, hoppner2022instance}:
\begin{align}
\begin{split}
\mathcal{L}^\text{AEC} &=
y_i \Big( f_\theta(\mathbf{x}_i) c_i^\text{TP} + \big(1 - f_\theta(\mathbf{x}_i)\big) c_i^\text{FN} \Big)
\\
&+ (1-y_i) \Big( f_\theta(\mathbf{x}_i) c_i^\text{FP} + \big(1 - f_\theta(\mathbf{x}_i)\big) c_i^\text{TN} \Big).
\label{eq:aec}
\end{split}
\end{align}
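A direct transcription of $\mathcal{L}^\text{AEC}$ for a single task reads as follows; the cost values are hypothetical and merely echo the churn-style cost matrices used later in the experiments:

```python
def average_expected_cost(y, f, c_tp, c_fn, c_fp, c_tn):
    # Expected cost of one task: the predicted probability f weights the
    # entries of the task's cost matrix, split on the true label y.
    return (y * (f * c_tp + (1 - f) * c_fn)
            + (1 - y) * (f * c_fp + (1 - f) * c_tn))

# Hypothetical costs: a missed success costs 12, a false alarm costs 2.
cost_confident = average_expected_cost(1, 0.95, c_tp=0.0, c_fn=12.0, c_fp=2.0, c_tn=0.0)
cost_unsure = average_expected_cost(1, 0.50, c_tp=0.0, c_fn=12.0, c_fp=2.0, c_tn=0.0)
```

Unlike the cross-entropy, the penalty now scales with the task's stakes, so costly tasks dominate the training signal.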
$\mathcal{L}^\text{AEC}$ is a semidirect predict-and-optimize method: it incorporates some information about the downstream decision-making task, but learning remains separate from optimization \citep{demirovic2019predict+, demirovic2019investigation}.
\subsubsection{Optimizing worker--task assignments.}
Given task predictions $\mathbf{\hat{Y}}$, we can optimize the task--worker assignments. Although various general algorithms have been proposed to solve assignment problems, our formulation can be solved analytically. Here, we present this solution and prove its optimality.
\begin{theorem}
Let $\mathbf{W} = \{w_i\}_{i=1}^N$ be a monotonically decreasing sequence of worker success rates such that $w_1 \geq \dots \geq w_N$ with $w_i \in [0,1]$ for $i = 1, \dots, N$, and let $\mathbf{\hat{R}} = \begin{pmatrix} \hat{r}_1 & \dots & \hat{r}_N \end{pmatrix}$ be the predicted task rewards arranged in decreasing order such that $\hat{r}_1 \geq \dotso \geq \hat{r}_N$. For the resulting expected profit matrix $P = \begin{pmatrix} p_{ij} \end{pmatrix}$ with $p_{ij} = w_i \hat{r}_j$, the optimal assignment is $A^* = I_N$.
\end{theorem}
\begin{proof}
$A^* = I_N$ is a feasible solution: it is straightforward to verify that the identity matrix satisfies constraints \ref{eq:cond_task}, \ref{eq:cond_worker} and \ref{eq:integer} of the assignment problem.
Moreover, the solution is the result of a greedy strategy: at each step $m$, we assign worker $m$, with success rate $w_m$, to the remaining task with the highest predicted reward $\hat{r}_m$. To prove the optimality of this strategy, we show that it does not deviate from the optimal solution at any step until the final solution is obtained.
First, the best single worker--task assignment is selected: the highest profit $p_{ij}$ is $p_{11} = w_1 \hat{r}_1$; no other higher profit exists as no higher $w_i$ or $\hat{r}_j$ exist. Next, we continue this strategy of selecting the best remaining worker--task assignment until there are no tasks left. We can show that, at each step, no other assignment matrix leads to a larger profit than this one. At step $m$, the profit obtained given assignment matrix $A^*$ equals $p_{11} + p_{22} + \dotso + p_{mm} = w_1 \hat{r}_1 + w_2 \hat{r}_2 + \dotso + w_m \hat{r}_m$.
Deviating from $A^*$ at a certain step means that at least one worker must be assigned to another task. We prove that no alternative assignment leads to a higher profit. Consider switching the assignments of tasks $i$ and $j$ with $i < j$. In the case that task $j$ has already been assigned to a worker, we have:
{\parfillskip=0pt \emergencystretch=.5\textwidth \par}
\begin{alignat*}{4}
&&p_{ii} + p_{jj} &\geq p_{ij} + p_{ji} &\\ \nonumber
\iff \quad &&w_i \hat{r}_i + w_j \hat{r}_j &\geq w_i \hat{r}_j + w_j \hat{r}_i &\\ \nonumber
\iff \quad &&w_i (\hat{r}_i - \hat{r}_j) &\geq w_j (\hat{r}_i - \hat{r}_j) &\\ \nonumber
\Longleftarrow \quad &&w_i \geq w_j &\text{ and } \hat{r}_i - \hat{r}_j \geq 0. \nonumber
\end{alignat*}
In the case that task $j$ has not yet been assigned, we have:
\begin{alignat*}{4}
&&p_{ii} &\geq p_{ij} &\\ \nonumber
\iff \quad &&w_i \hat{r}_i &\geq w_i \hat{r}_j &\\ \nonumber
\Longleftarrow \quad &&w_i \geq 0 & \text{ and } \hat{r}_i \geq \hat{r}_j. \nonumber
\end{alignat*}
In both cases, the final statements follow from $\mathbf{W}$ and $\mathbf{\hat{R}}$ being monotonically decreasing and $i<j$, or from $w_i \in \left[0, 1\right]$.
\end{proof}
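The theorem can also be spot-checked numerically: for randomly drawn decreasing sequences, the profit of the diagonal assignment matches the optimum found by exhaustive search (a brute-force check, feasible only for small $N$):

```python
import random
from itertools import permutations

random.seed(0)

def diagonal_profit(w, r):
    # Profit of the identity assignment A* = I_N.
    return sum(wi * ri for wi, ri in zip(w, r))

def optimal_profit(w, r):
    # Exhaustive search over all feasible assignments (permutations).
    N = len(w)
    return max(sum(w[i] * r[pi[i]] for i in range(N))
               for pi in permutations(range(N)))

# Spot-check on random monotonically decreasing instances.
for _ in range(50):
    w = sorted((random.random() for _ in range(5)), reverse=True)
    r = sorted((random.uniform(0.0, 10.0) for _ in range(5)), reverse=True)
    assert abs(diagonal_profit(w, r) - optimal_profit(w, r)) < 1e-9
checked = True
```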
\subsection{Integrated predict-and-optimize using learning to rank}
In this section, we present a novel integrated approach for solving the assignment problem in \cref{sec:problem formulation}. Previously, we showed how the optimal assignment is $A^* = I_N$ if $\mathbf{W}$ and $\mathbf{\hat{R}}$ are arranged in decreasing order. Given that $\mathbf{W}$ is defined as a decreasing sequence, the challenge of optimizing the assignment can also be seen as correctly predicting the order of expected task rewards $\mathbf{\hat{R}}$. This formulation is equivalent to an alternative interpretation of the assignment problem as finding the optimal assignments by permuting the rows and columns of the profit matrix $P$ such that the resulting sum of the elements on the diagonal is maximized, or formally \citep{krokhmal2009random}:
\begin{equation}
\max_{\pi \in \Pi_N} \sum_{i=1}^N p_{i, \pi(i)}
\end{equation}
for $\pi \in \Pi_N$ with $\Pi_N$ the set of all permutations of the indices $\{1, \dots , N\}$, i.e., $\pi : \{1, \dots, N\} \to \{1, \dots, N\}$. In our case, we need to find the optimal permutation of available tasks $\pi(\mathbf{T})$.
{\parfillskip=0pt \emergencystretch=.5\textwidth \par}
In this formulation, the assignment problem can be seen as predicting the optimal permutation $\pi(\mathbf{T})$ based on characteristics of the available tasks. Formally, let $g_\theta: \mathcal{X} \to \mathbb{R}: \mathbf{x} \mapsto g_\theta(\mathbf{x})$ be a ranking model. The goal is to find parameters $\theta \in \Theta$ such that the ordering of the mapped tasks $g_\theta(\mathbf{x}_1) \geq \dotso \geq g_\theta(\mathbf{x}_N)$ corresponds to the ordering of their rewards $r_1 \geq \dotso \geq r_N$. A ranking based on $g_\theta$ can thus be seen as a permutation $\pi$ of the indices $\{1, \dots , N\}$.
The expected profit of a permutation $\pi(\mathbf{T})$ given a capacity $W$ can be optimized directly using learning to rank. The key insight is that, for a given permutation $\pi$ of tasks $\mathbf{T}$, the expected profit $\sum_{i=1}^N w_i \hat{r}_{\pi(i)}$ of a ranking is equivalent to its discounted cumulative gain (DCG), a commonly used class of metrics in learning to rank \citep{wang2013theoretical}. Typically, the DCG is defined with discount $\frac{1}{\text{log}_2(i+1)}$ and gain $2^{rel_i} - 1$, with $rel_i$ the relevance label at rank $i$, for $i \in \{1, \dots, N\}$. However, to match the expected profit, our formulation uses the discounts $\{w_i\}^N_{i=1}$ corresponding to the capacity distribution and a linear (identity) gain, so that each position contributes its relevance $\hat{r}_i$ directly. By dividing the DCG by its ideal value (IDCG), the normalized DCG (NDCG) is obtained: NDCG = $\frac{\text{DCG}}{\text{IDCG}}$ with NDCG $\in [0,1]$ \citep{murphy2012machine}.
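The correspondence between expected profit and a DCG with capacity-based discounts can be made concrete as follows (the discounts and rewards are illustrative values):

```python
def expected_profit(order, w, r_hat):
    # DCG with discount w_i and identity gain: sum_i w_i * r_hat[order[i]].
    return sum(wi * r_hat[j] for wi, j in zip(w, order))

def ndcg(order, w, r_hat):
    # Normalize by the ideal DCG, obtained by ranking tasks by decreasing reward.
    ideal = sorted(range(len(r_hat)), key=lambda j: -r_hat[j])
    return expected_profit(order, w, r_hat) / expected_profit(ideal, w, r_hat)

w = [0.8, 0.5, 0.2]       # illustrative capacity-based discounts
r_hat = [1.0, 4.0, 2.0]   # illustrative predicted task rewards

best = ndcg([1, 2, 0], w, r_hat)       # tasks in decreasing reward order
arbitrary = ndcg([0, 1, 2], w, r_hat)  # an unsorted ranking
```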
Optimizing the NDCG (or equivalently, the expected profit) directly is challenging as it depends on the predicted relative positions of instances instead of the model's outputs $g_\theta(\mathbf{x_i})$. Nevertheless, various algorithms have been proposed for this task in the literature on learning to rank (e.g., \citep{valizadegan2009learning}). In this work, we use LambdaMART \citep{wu2008ranking, burges2010ranknet}, which uses a combination of the LambdaRank loss \citep{burges2006learning} and gradient boosting of decision trees \citep{friedman2001greedy} to construct the ranking model. LambdaMART is a widely used approach that achieved the best performance in the Yahoo! Learning To Rank Challenge \citep{burges2010ranknet, chapelle2011yahoo, li2014learning}. In this way, we can train a ranking model $g_\theta$ to optimize the NDCG or expected profit of the assignments directly.
Finally, we need to specify each task's relevance, which serves as the label according to which the ranking would ideally be constructed. Because the ranking corresponds to the priority that should be given to tasks, it should respect the ordering in terms of both the outcome $y_i$ and the task payoffs. In other words, successful tasks should be more relevant than unsuccessful tasks, and a more profitable task should be more relevant. Therefore, we use a task's reward $r_i$ as a cost-sensitive relevance, as it combines an instance's class label $y_i$ and its cost matrix $\mathbf{c}_i$ (see \cref{eq:cost_matrix}). With this approach, a positive task's relevance is the profit (or equivalently, the negative cost) obtained by classifying it positively minus the profit obtained by classifying it negatively; vice versa for negative tasks. Thus, we obtain the relevance or reward $r_i$ as follows:
\begin{equation*}
r_i = y_i v^+_i + (1-y_i) v^-_i = y_i \left(c^\text{FN}_i -c^\text{TP}_i\right) + (1- y_i) \left(c^\text{TN}_i - c^\text{FP}_i\right)
.
\end{equation*}
Alternatively, if the goal is to optimize for accuracy rather than cost, we can use class label $y_i$ as the relevance of instance $i$.
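In code, the cost-sensitive relevance is a one-line function of the label and the cost matrix; the example costs below are hypothetical:

```python
def relevance(y, c_tn, c_fn, c_fp, c_tp):
    # r_i = y_i (c_FN - c_TP) + (1 - y_i)(c_TN - c_FP): the profit of
    # classifying the task positively rather than negatively.
    return y * (c_fn - c_tp) + (1 - y) * (c_tn - c_fp)

# Hypothetical costs: a missed success costs 12, a false alarm costs 2.
pos = relevance(1, c_tn=0.0, c_fn=12.0, c_fp=2.0, c_tp=0.0)   # 12.0
neg = relevance(0, c_tn=0.0, c_fn=12.0, c_fp=2.0, c_tp=0.0)   # -2.0
```

Positive tasks thus float to the top in proportion to the cost of missing them, while negative tasks are pushed down by the cost of acting on them.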
\section{Empirical results}
In this section, we empirically evaluate and compare the two-stage and the integrated approach on a variety of tasks. We use publicly available data from several application areas. For each application, the goal is to allocate resources optimally so as to minimize the expected cost given stochastic capacity.
All code for the experimental analysis will be made available online upon publication of this paper.
To compare the different approaches, we use gradient boosting to train the predictive models. Four different objectives are compared, depending on the task (classification or learning to rank) and on whether they aim to maximize precision or profit. xgboost denotes a conventional classification model using the cross-entropy loss $\mathcal{L}^\text{CE}$ (see \cref{eq:ce}), while csboost uses a cost-sensitive objective function $\mathcal{L}^\text{AEC}$ (see \cref{eq:aec}). LambdaMART uses the binary class label $y_i$, whereas csLambdaMART uses task payoffs $r_i$ as relevance. All models are implemented in Python using the \href{https://xgboost.readthedocs.io/en/latest/}{\texttt{xgboost}} package \citep{chen2015xgboost}. Gradient boosting is a popular methodology for both classification and ranking that has great predictive performance, as illustrated by recent benchmarking studies \citep{lessmann2015benchmarking, gunnarsson2021deep}.
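To give an idea of how a cost-sensitive objective plugs into gradient boosting, the sketch below derives the gradient of $\mathcal{L}^\text{AEC}$ with respect to the raw score $z$ (with $f = \sigma(z)$), in the grad/hess form a custom-objective interface typically expects. The positive Hessian surrogate is a common stabilization choice and an assumption on our part, not necessarily the implementation used in this paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def aec_grad_hess(z, y, c_tp, c_fn, c_fp, c_tn):
    # Chain rule through f = sigmoid(z); dL/df is constant in f for the AEC loss.
    f = sigmoid(z)
    dL_df = y * (c_tp - c_fn) + (1 - y) * (c_fp - c_tn)
    df_dz = f * (1.0 - f)
    grad = dL_df * df_dz
    hess = abs(dL_df) * df_dz  # positive surrogate Hessian (assumed choice)
    return grad, hess

# Hypothetical costly positive task at a neutral score z = 0:
g, h = aec_grad_hess(0.0, 1, c_tp=0.0, c_fn=12.0, c_fp=2.0, c_tn=0.0)
```

The negative gradient at $z = 0$ pushes the score of a costly positive task upward, and more strongly the larger its misclassification cost.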
\subsection{Data}
The data sets are listed in \cref{tab:data_overview} and stem from different application areas: customer churn prediction, credit scoring, direct marketing and fraud detection. They all concern binary classification, where tasks are either successful or unsuccessful.
Resources are limited and stochastic: we assume a lognormal capacity distribution $W \sim \mathcal{LN}(\mu = \text{log}(100), \sigma = 1)$.
The cost matrices are taken from earlier work on cost-sensitive classification (see \cref{tab:cost_matrices}).
In churn prediction, we have $c^\text{FP}_i$ and $c^\text{FN}_i$ as, respectively, 2 and 12 times the monthly amount $A_i$ for KTCC following \citep{petrides2021csensemble}; whereas we follow the cost matrix given with the data set for TSC \citep{bahnsen2015novel}.
For credit scoring, we calculate the instance-dependent costs $c^\text{FP}_i$ and $c^\text{FN}_i$ as a function of the loan amount $A_i$ following \citep{bahnsen2014examplelogistic}.
In direct marketing, a positive classification incurs a fixed cost $c_f = 1$, while missing a potential success incurs an instance-dependent cost equal to the expected interest given $A_i$, following \citep{bahnsen2015exampletree}.
Similarly, in fraud detection, a positive prediction leads to an investigation that entails a fixed cost $c_f$, and missing a fraudulent transaction leads to a cost equal to its amount $A_i$. We use $c_f = 10$, following \citep{hoppner2022instance}.
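One plausible way to turn the capacity distribution into per-rank weights, used here purely for illustration, is $w_i = \mathbb{P}(W \geq i)$: the probability that the $i$-th ranked task still falls within the drawn capacity. A Monte Carlo sketch:

```python
import math
import random

random.seed(42)

# Capacity W ~ LogNormal(mu = log(100), sigma = 1): exp of a Gaussian draw.
mu, sigma = math.log(100), 1.0
draws = [math.exp(mu + sigma * random.gauss(0.0, 1.0)) for _ in range(100_000)]

def w(i):
    # Assumed mapping (illustrative): w_i = P(W >= i).
    return sum(d >= i for d in draws) / len(draws)

w_1, w_100, w_1000 = w(1), w(100), w(1000)
```

By construction the weights decrease with rank: the median capacity is 100, so roughly half of the draws cover the 100th-ranked task, while only the heavy right tail reaches rank 1000.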
\bgroup
\begin{table}[t]
\setlength{\tabcolsep}{10pt}
\centering
\begin{tabular}{L{90pt}HL{40pt}R{45pt}HR{45pt}L{150pt}}
\toprule
\bf Application & \bf Name & \bf Abbr. & \bf $N$ & $\mathbb{E}(R)$ [\%] & \% Pos & \bf Reference \\ \midrule
\multirow{2}{*}{Churn prediction} & Kaggle Telco Customer Churn & {KTCC} & 7,032 & 2.34 & 26.58 & \citep{kaggle2017telco} \\
& TV Subscription Churn & {TSC} & 9,379 & 1.76 & 4.79 & \citep{bahnsen2015novel} \\ \midrule
\multirow{7}{*}{Credit scoring} & Home Equity & {HMEQ} & 1,986 & & 19.95 & \citep{baesens2016credit} \\
& BeNe1 Credit Scoring & {BN1} & 3,123 & & 33.33 & \citep{lessmann2015benchmarking} \\
& BeNe2 Credit Scoring & {BN2} & 7,190 & & 30.00 & \citep{lessmann2015benchmarking} \\
& VUB Credit Scoring & {VCS} & 18,917 & 0.87 & 16.95 & \citep{petrides2020cost} \\
& UK Credit Scoring & {UK} & 30,000 & & 4.00 & \citep{lessmann2015benchmarking} \\
& UCI Default of Credit Card Clients & {DCCC} & 30,000 & 0.55 & 22.12 & \citep{yeh2009comparisons} \\
& Give Me Some Credit & {GMSC} & 112,915 & & 6.74 & / \\ \midrule
\multirow{2}{*}{Direct marketing} & UCI Bank Marketing & {UBM} & 45,211 & 0.36 & 11.70 & \citep{moro2014data} \\
& KDD Cup 1998 & {KDD} & 191,779 & & 5.07 & / \\ \midrule
\multirow{3}{*}{Fraud detection} & Kaggle Credit Card Fraud & {KCCF} & 282,982 & & 0.16 & \citep{dal2015calibrating} \\
& Kaggle IEEE Fraud Detection & {KIFD} & 590,540 & & 3.50 & / \\
& APATE Credit Card Fraud & {ACCF} & 3,639,323 & & 0.65 & \citep{vanvlasselaer2015apate} \\
\bottomrule \\
\end{tabular}
\caption{\textbf{Data sets overview.} \normalfont{For each data set, we present the application area, abbreviation, number of instances ($N$), class imbalance in terms of proportion of positive instances (\% Pos), and corresponding reference.
}}
\label{tab:data_overview}
\end{table}
\egroup
\subsection{Results}
We present the results using various performance metrics to compare the different models. The main metric of interest is either the expected precision or the expected profit given the stochastic capacity distribution $W$, depending on whether accuracy or profit is the objective. Furthermore, we present several additional classification and ranking metrics to gain more insight into the differences between the methodologies. For each metric, we present the average over all data sets and test whether the best performance is significantly different from the others using a Friedman test on the rankings with Bonferroni--Dunn post hoc correction \citep{demvsar2006statistical, garcia2008extension, garcia2010advanced} (see \cref{tab:metrics_overview}).
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\centering
\tabcolsep=0.15cm
\begin{subtable}{0.24\textwidth}
\centering
\begin{tabular}{C{9pt}R{9pt}|C{27pt}C{27pt}}
\toprule
& & \multicolumn{2}{c}{$y_i$} \\
& & 0 & 1 \\
\cmidrule{1-4}
\multicolumn{1}{c}{\multirow{2}{*}{$\hat{y}_i$}} & 0 & 0 & 12$A_i$ \\
\textbf{} & 1 & 2$A_i$ & 0 \\
\bottomrule
\end{tabular}
\subcaption{\footnotesize Churn prediction}
\label{tab:cost_matrix_churn_prediction}
\end{subtable}%
\begin{subtable}{0.24\textwidth}
\centering
\begin{tabular}{C{9pt}R{9pt}|C{27pt}C{27pt}}
\toprule
& & \multicolumn{2}{c}{$y_i$} \\
& & 0 & 1 \\
\cmidrule{1-4}
\multicolumn{1}{c}{\multirow{2}{*}{$\hat{y}_i$}} & 0 & 0 & $c^\text{FN}_i$ \\%[-0.35ex]
\textbf{} & 1 & $c^\text{FP}_i$ & 0 \\%[-0.35ex]
\bottomrule
\end{tabular}
\subcaption{\footnotesize Credit scoring}
\label{tab:cost_matrix_credit_scoring}
\end{subtable}%
\begin{subtable}{0.24\textwidth}
\centering
\begin{tabular}{C{9pt}R{9pt}|C{27pt}C{27pt}}
\toprule
& & \multicolumn{2}{c}{$y_i$} \\
& & 0 & 1 \\
\cmidrule{1-4}
\multicolumn{1}{c}{\multirow{2}{*}{$\hat{y}_i$}} & 0 & 0 & $A_i$/$Int_i$ \\
\textbf{} & 1 & $c_f$ & $c_f$ \\
\bottomrule
\end{tabular}
\subcaption{\footnotesize Direct marketing}
\label{tab:cost_matrix_direct_marketing}
\end{subtable}%
\begin{subtable}{0.24\textwidth}
\centering
\begin{tabular}{C{9pt}R{9pt}|C{27pt}C{27pt}}
\toprule
& & \multicolumn{2}{c}{$y_i$} \\
& & 0 & 1 \\
\cmidrule{1-4}
\multicolumn{1}{c}{\multirow{2}{*}{$\hat{y}_i$}} & 0 & 0 & $A_i$ \\
\textbf{} & 1 & $c_f$ & $c_f$ \\
\bottomrule
\end{tabular}
\subcaption{\footnotesize Fraud detection}
\label{tab:cost_matrix_fraud_detection}
\end{subtable}%
\caption{
\textbf{Cost matrices for the different application areas.} \normalfont{For each application, we present the costs associated with the outcomes in terms of predicted ($\hat{y}$) and actual ($y$) labels. $A_i$, $c^\text{FN}_i$, $c^\text{FP}_i$ and $Int_i$ represent instance-dependent costs and $c_f$ is a fixed cost.}}
\label{tab:cost_matrices}
\end{table}
\egroup
\subsubsection{Expected precision and expected profit.} In terms of expected precision, LambdaMART is the best performing model. Two models optimize for accuracy: LambdaMART and xgboost. The ranking model, LambdaMART, outperforms the classification model, xgboost. In terms of expected profit, the cost-sensitive ranking model, csLambdaMART, performs best. Of the two models optimizing for accuracy, xgboost and LambdaMART, the ranking model again achieves better results, although this difference is not statistically significant. We compare the trade-off between profit and precision in \cref{fig:profit_vs_precision} by plotting the rankings for each data set. To get an idea of the densities for the different models, we estimate them using a Gaussian kernel and show them for probabilities greater than $0.5$. Although the densities overlap, the ranking models outperform their classifying counterparts in their respective categories.
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{L{70pt}|C{70pt}C{70pt}|C{70pt}C{70pt}C{70pt}}
\toprule
\rowcolor{gray!5}
& & & & & \\
\rowcolor{gray!5}
& \multirow{-2}{*}{\textbf{\shortstack{Expected\\ precision}}} & \multirow{-2}{*}{\textbf{\shortstack{Expected\\ profit}}} & \multirow{-2}{*}{\textbf{\shortstack{Average\\ precision}}} & \multirow{-2}{*}{\textbf{\shortstack{Spearman\\ correlation}}} & \multirow{-2}{*}{\textbf{AUCPC}} \\\midrule
\cellcolor{red!15} xgboost & 0.4956 {\scriptsize $\pm$ 0.28} & 0.2115 {\scriptsize $\pm$ 0.18} & \textbf{0.9423} {\scriptsize $\pm$ \textbf{0.05}} & $-$0.0382 {\scriptsize $\pm$ 0.11} & 0.5548 {\scriptsize $\pm$ 0.25} \\
\cellcolor{green!15} csboost & 0.5865 {\scriptsize $\pm$ 0.24} & \underline{0.2940 {\scriptsize $\pm$ 0.19}} & 0.9075 {\scriptsize $\pm$ 0.07} & \textcolor{white}{$+$}0.2258 {\scriptsize $\pm$ 0.27} & 0.5657 {\scriptsize $\pm$ 0.24} \\
\midrule
\cellcolor{blue!15} LambdaMART & \textbf{0.6555} {\scriptsize $\pm$ \textbf{0.26}} & 0.2471 {\scriptsize $\pm$ 0.16} & \underline{0.9366 {\scriptsize $\pm$ 0.05}} & $-$0.0302 {\scriptsize $\pm$ 0.15} & 0.5363 {\scriptsize $\pm$ 0.22} \\
\cellcolor{cyan!15} csLambdaMART & 0.6089 {\scriptsize $\pm$ 0.25} & \textbf{0.3587} {\scriptsize $\pm$ \textbf{0.17}} & 0.9336 {\scriptsize $\pm$ 0.05} & \textcolor{white}{$+$}\textbf{0.3829} {\scriptsize $\pm$ \textbf{0.28}} & \textbf{0.5999} {\scriptsize $\pm$ \textbf{0.23}} \\
\bottomrule
\end{tabular}
}
\caption{\textbf{Evaluation metrics overview.} {We present an overview of the evaluation metrics, showing the average and standard deviation over all data sets. The best result is denoted in \textbf{bold}. Results that are not significantly different from the best result are \underline{underlined} ($\alpha = 0.05$). This is based on a Friedman test on the rankings with Bonferroni--Dunn post hoc correction.
For both expected precision and profit, the ranking models perform best in their respective category. For the classification metric, average precision, the cost-insensitive classification model, xgboost, performs best. Conversely, for the ranking metrics, namely, Spearman correlation and the area under the cumulative profit curve, the ranking models outperform their classifying counterparts.}}
\label{tab:metrics_overview}
\vspace{-10pt}
\end{table*}
\subsubsection{Average precision, Spearman's $\rho$ and AUCPC.} These metrics weight all instances in the ranking equally as opposed to the previous metrics that weighted instances depending on their probability of being processed given the capacity distribution. On the one hand, we consider a classification metric: given the high degree of class imbalance for some data sets, we use the average precision \citep{davis2006relationship}. On the other hand, we consider two ranking metrics: the area under the cumulative profit curve and Spearman's rank correlation coefficient $\rho$.
First, we assess the quality of the model's predictions with a standard classification metric: average precision (AP). This metric summarizes the precision-recall curve and looks at the trade-off between precision and recall at different thresholds. The cost-insensitive classification model, xgboost, performs best. This result is expected as it is a classification model that optimizes for accuracy. However, this conventional classification metric has only weak correlation with the expected precision, suggesting that it is not a good indicator of performance.
We also adopt two ranking metrics. First, we use Spearman's rank correlation coefficient to quantify the correlation between the ranking of the predictions and the ranking of the task payoffs. csLambdaMART is the best performing model, outperforming csboost. Moreover, both cost-insensitive models have a correlation of approximately 0. This is as expected, as these models do not take payoffs into account in their optimization. Second, the cumulative profit curve plots the profit that is realized as a function of the number $k$ of first-ranked instances, with $k \in [1,N]$. We compare the area under this curve with the areas under a random ranking and the optimal ranking to obtain a value between 0 and 1. csLambdaMART performs best, though neither the difference with xgboost nor with csboost is statistically significant.
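A possible implementation of the rescaled area under the cumulative profit curve is sketched below; the exact normalization used in the experiments may differ, so treat the constants as illustrative (here a random ranking scores approximately 0 and the optimal ranking scores 1):

```python
def cumulative_profit_auc(ranking, profits):
    # Area under the cumulative profit curve for a given ranking.
    def area(order):
        total, running = 0.0, 0.0
        for j in order:
            running += profits[j]
            total += running
        return total
    achieved = area(ranking)
    best = area(sorted(range(len(profits)), key=lambda j: -profits[j]))
    # Expected area of a uniformly random ranking: position k contributes
    # k * mean(profits) on average, summing to (N + 1) / 2 * sum(profits).
    random_area = (len(profits) + 1) / 2 * sum(profits)
    return (achieved - random_area) / (best - random_area)

profits = [5.0, 1.0, 3.0]
optimal_score = cumulative_profit_auc([0, 2, 1], profits)
worst_score = cumulative_profit_auc([1, 2, 0], profits)
```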
These results indicate that metrics for evaluating the ranking, such as Spearman's $\rho$ or the AUCPC, are more suitable than classification metrics, such as the average precision, for evaluating a model's performance under limited capacity. These findings suggest that ranking as a solution more closely aligns with the problem of allocating limited resources to uncertain tasks than classification, which is also confirmed by the superior performance of ranking models compared to classification models in terms of expected precision and expected profit.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{Profit_vs_Precision.pdf}
\caption{\textbf{Comparing the methodologies in terms of expected precision and profit.} \normalfont{We plot the rankings in terms of expected profit and expected precision for each method on each data set. Each method's average ranking is shown with a star ($\medwhitestar$). Moreover, the ranking density is fitted with a Gaussian kernel; for visual clarity, only probabilities greater than $0.5$ are shown. On average, csLambdaMART performs best in terms of expected profit, while LambdaMART performs best in terms of expected precision.
}}
\label{fig:profit_vs_precision}
\end{figure}
\subsubsection{Top $k$ metrics.}
Finally, we also consider metrics focusing solely on the top of the ranking. Given limited capacity, these are the instances that will be prioritized. We can evaluate this critical part of the ranking by looking at the precision and profit of the ranking for the first $k$ instances for different values of $k$ (see \cref{fig:top_k_precision_profit}). The ranking model optimizing for accuracy, LambdaMART, performs best in terms of precision@$k$, while the ranking model optimizing for profit, csLambdaMART, has the best performance in terms of profit@$k$.
\begin{figure}[t]
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\textwidth]{TopPrecision.pdf}
\caption{\footnotesize \textbf{Precision@$k$}}
\label{fig:top_precision}
\end{subfigure}%
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\textwidth]{TopProfits.pdf}
\caption{\footnotesize \textbf{Profit@$k$}}
\label{fig:top_profit}
\end{subfigure}
\vspace{5pt}
\caption{\textbf{Evaluating the top $k$ ranked instances.} \normalfont{Precision \textbf{(a)} and profit \textbf{(b)} for the top $k$ instances in the ranking for the different models averaged over all data sets. The ranking models outperform the classifiers in the metric they optimize for: LambdaMART is the best in terms of precision; csLambdaMART has the best profit.}}
\label{fig:top_k_precision_profit}
\end{figure}
\section{Conclusion}
In this work, we formally introduced and defined a commonly encountered problem: how to optimally allocate limited, stochastic resource capacity to tasks with uncertain payoff to maximize the expected profit. Moreover, we contribute by proposing a novel integrated solution using learning to rank and empirically comparing it with a more conventional predict-then-optimize approach using a classification model.
Our findings illustrate the benefit of approaching this problem as a ranking problem, which allows us to consider the availability of limited and stochastic resources. Theoretically, we show how the expected profit for a given capacity distribution can be optimized directly using learning to rank with a specific formulation of the normalized discounted cumulative gain as the objective. Empirical results for a variety of applications show that ranking models achieve better performance in terms of expected profit or expected precision, depending on the objective. Moreover, good results in terms of ranking metrics are more indicative of good performance in terms of expected profit than conventional classification metrics are. This illustrates how ranking is more closely aligned with the problem at hand than classification. In summary, in the common scenario where decision-makers are constrained by limited resources, deciding upon resource allocation using classification models is inferior to using learning to rank. These findings have important implications for practitioners in a variety of application areas.
Our work opens several promising directions for future research. For example, it would be interesting to consider a temporal variant of the assignment problem with tasks arriving sequentially in time. Although this problem has been studied extensively for stochastic or random arrival rates \citep{derman1972sequential, albright1972asymptotic, albright1974optimal}, future work could consider the addition of a predictive ranking model to address uncertainty regarding task outcomes. Another possible extension would be to consider tasks that require varying degrees of resources. For example, in credit scoring, loans with a large principal require more resources. Finally, a technical limitation of LambdaMART is the $O(N^2)$ complexity due to the pairwise calculation of the gradient. To address this issue, future work could look at approaches that calculate the gradient in a listwise fashion by considering the entire ranking simultaneously \citep{cao2007learning, xia2008listwise, ravikumar2011ndcg} with several recently proposed, efficient candidates \citep[e.g.,][]{sculley2009large, lucchese2017xdart, cakir2019deep}.
\bibliographystyle{unsrtnat}
\section{Introduction}
It is quite clear that our understanding of the molecular complexity of interstellar and circumstellar environments is rapidly growing. It is also apparent that our understanding of interstellar molecular synthesis is presently incomplete; observations of new interstellar molecules are currently outpacing the model predictions as to how these interstellar species are formed in astronomical environments. In addition, many searches for interstellar species have focused on complex organic molecules of biological significance (e.g. \citet{Zaleski}, \citet{Loomis}, \citet{Belloche}). Since the detection of glycolaldehyde (HOCH$_2$CHO) there have been $\sim$60 new molecular species claimed in interstellar and circumstellar environments. Furthermore, a majority of these claimed detections have involved complex organic species including alcohols (vinyl alcohol (CH$_2$CHOH), \citet{Vinyl}; ethylene glycol (HOCH$_2$CH$_2$OH), \citet{Ethylene}); aldehydes (propenal (CH$_2$CHCHO), propanal (CH$_3$CH$_2$CHO), \citet{Propenal}); amino acids (glycine (NH$_2$CH$_2$COOH), \citet{Kuan}); sugars (dihydroxyacetone (CH$_2$OHCOCH$_2$OH), \citet{Widicus}); and ethers (C$_2$H$_5$OCH$_3$, hereafter tEME, \citet{FuchsSpace}).
Large organic molecules typically have high line strength (S$_{ij}\mu^2 \geq$ 50 D$^2$), low energy transitions ($\leq$ 50 K) that span the millimeter and submillimeter spectrum (e.g. \citet{Carroll}). Thus, it appears that the unambiguous identification of large molecules would be straightforward given the number of transitions available to search. Yet, the detection of new molecules becomes difficult at millimeter and submillimeter wavelengths due in large part to the line confusion of more well-known interstellar species, including isotopic variants. It has been estimated that in the 2 mm and 3 mm windows, there are approximately 10 lines per 100 MHz at sensitivity levels of 10 mK, toward high mass hot molecular cores (HMCs) \citep{Halfen1}. In the case of Sgr B2(N-LMH), perhaps the most well studied region to search for new interstellar species, the chance of finding a line at a particular LSR velocity ($\pm$ 2 km s$^{-1}$) of a measured spectral line frequency is $\sim$40\%, assuming simple Gaussian line profiles \citep{Halfen1}. Searching a less complicated source than Sgr B2(N-LMH) can partially mitigate this obstacle; however, the problem of coincident spectral features interfering with the detection of a new interstellar molecule still persists toward any chemically rich source.
The challenges in the identification of a new interstellar species have been reported by \cite{Snyder1}. The authors suggest ways to overcome these challenges by assigning a set of criteria that must be met before the identification of a new interstellar molecule is confirmed. These criteria can be summarized as follows: 1) The transition frequencies searched for must be accurate to within a few kHz. In addition, the most favorable transitions to search for are multiply degenerate (if possible), of high line strength, and of low energy. The criteria of high line strength and low energy depend on the molecule. 2) The LSR velocities between transitions must be consistent. 3) If possible, the transitions of a new molecular species must be separated from any interfering features by the Rayleigh criterion in order to claim that transition. 4) The relative intensities between transitions must be consistent with radiative transfer based on the physical conditions of the region. Finally, 5) if possible, connecting transitions at higher and lower quantum numbers to the claimed transition should be detected. These criteria were applied to the claimed detections of glycine (\citealt{Kuan}) and dihydroxyacetone (\citealt{Widicus}) and both of the claimed detections were rejected (\citealt{Snyder1} and \citealt{Apponi1}, respectively). Conversely, the criteria were utilized to confirm the detection of glycolaldehyde (\citealt{Halfen1}) toward Sgr B2(N-LMH) at the 99.8\% confidence level. As demonstrated by \citet{Snyder1} and \citet{Apponi1}, the detection of a large organic molecule based on even 10 to 20 transitions can be tenuous.
In 2005, an extensive survey was performed by Fuchs and colleagues to search for interstellar trans-ethyl methyl ether, C$_2$H$_5$OCH$_3$, toward several high mass HMCs \citep{FuchsSpace}. This work was motivated by the previously reported observation of a single tEME transition towards Orion KL and W51 e1/e2 (\citealt{Charnley}). As a result of their survey, a detection of tEME was claimed toward the high mass star forming region W51 e2.
This would make tEME the fourth largest molecule to be detected in the interstellar medium (ISM). The three molecules larger than tEME, HC$_{11}$N, C$_{60}$, and C$_{70}$, possess symmetry that greatly facilitates their detection. However, tEME lacks such symmetry. Determination of the tEME abundance therefore has important implications for the limits of chemical complexity and detection in the ISM. Additionally, tEME is believed to be produced by the same chemical reactions, summarized in Equation 1, that produce dimethyl ether, a molecule detected in numerous environments in the ISM\footnote{R = CH$_3$ for dimethyl ether formation and CH$_3$CH$_2$ for tEME}. Therefore, tEME is the next logical progression in ether synthesis from dimethyl ether. If confirmed, the detection of tEME would represent a significant advance in our understanding of complex molecule formation.
\begin{align*}
\text{R}\text{OH} + \text{H}_3^+ &\rightarrow \text{R}\text{OH}_2^+ +\text{H}_2 \\
\text{R}\text{OH}_2^+ + \text{CH}_3\text{OH} &\rightarrow \text{CH}_3\text{ORH}^+ +\text{H}_2\text{O} \tag{1} \\
\text{CH}_3\text{ORH}^+ + e^- &\rightarrow \text{CH}_3\text{OR} + \text{H}
\end{align*}
In this work, we attempted and failed to confirm the detection of tEME toward W51 e1/e2 using the 12 m Telescope of the Arizona Radio Observatory (ARO) in the 2 mm and 3 mm atmospheric windows, and further report on an extensive search for this species toward the high mass star forming region Sgr B2(N-LMH) with the GBT. We additionally reanalyzed the original detection in the context of the \citet{Snyder1} criteria and show that the reported column density and temperature of \citet{FuchsSpace} are not reproducible based on their observations. Furthermore, no transitions of tEME were conclusively observed toward either W51 e1/e2 or Sgr B2(N-LMH) in the present ARO and GBT data. Our work therefore calls into question the initial detection of tEME toward W51 e1/e2.
\section{Observations}
The observations using the ARO 12 m telescope, located on Kitt Peak, were conducted during the period of October 2006 to April 2007. The receivers used were dual-channel, cooled SIS mixers, operated in single-sideband mode with at least 20 dB of image rejection. The back ends used were (1) 256-channel filter banks with 500 kHz and 1 MHz resolution, and (2) a millimeter autocorrelator in the 390.5 kHz resolution mode. All spectrometers were configured in parallel mode to accommodate both receiver channels. The temperature scale, T$^*_R$, was determined by the chopper-wheel method, corrected for forward spillover losses. Conversion to radiation temperature T$_R$ is then T$_R$ = T$_R^*$/$\eta_c$, where $\eta_c$ is the corrected beam efficiency. Twelve new transitions of tEME covering the range 91 GHz to 168 GHz were studied; over this frequency range, the beam size was 73$\arcsec$ to 38$\arcsec$ and the beam efficiency varied from 0.9 to 0.7. A comparison of the present observations and those from \citet{FuchsSpace} is given in Figure \ref{tEME_Comparison}. A key concern is that the larger beam size of the ARO 12 m telescope may result in beam dilution of potential tEME flux. To assess this possibility, observational frequencies were chosen to partially overlap with those from \citet{FuchsSpace}. From Figure \ref{tEME_Comparison}, it is likely that both observations sample similar regions; however, the 12 m ARO beam weights larger spatial scales more heavily than does the IRAM 30 m. The fact that the feature at 150845 MHz attributed by \citet{FuchsSpace} to the 20$_{0,20}$ -- 19$_{1,19}$ transition of tEME is slightly stronger in the ARO data indicates a non-compact source size. The complete observations from the ARO 12 m are shown in Figures \ref{ARO_Full1} and \ref{ARO_Full2}.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{Figure1_New.eps}
\vspace{-1em}
\caption{A comparison of the previous IRAM 30 m data and the current ARO 12 m data. All tEME transition frequencies are noted as blue vertical lines of uniform height. The inset shows the tEME 20$_{0,20}$ -- 19$_{1,19}$ transition multiplet. ARO data is converted to T$_{mb}$ assuming the 5\arcsec\ source size of \citet{FuchsSpace} and $\eta_m$ = 0.75.}
\label{tEME_Comparison}
\end{figure}
The observations of Sgr B2 (N) were taken using the National Radio Astronomy Observatory (NRAO) Robert C. Byrd 100 m Green Bank Telescope as part of the \textbf{PR}ebiotic \textbf{I}nterstellar \textbf{MO}lecular \textbf{S}urvey (PRIMOS). Observations began in 2008, and are continually updated.\footnote{The PRIMOS data set is available at \textless {\url{http://www.cv.nrao.edu/~aremijan/PRIMOS/}}\textgreater} These observations provide nearly continuous high-sensitivity, high-resolution data from 1 GHz to 50 GHz of the Sgr B2(N-LMH) region ($\alpha$[J2000] = 17$^h$47$^m$19.8$^s$, $\delta$[J2000] = -28$^\circ$22$\arcmin$17$\arcsec$). A complete description of the PRIMOS observations can be found in \citet{Neill}. Two tEME transitions at 25.3 GHz and 30.5 GHz were fortuitously covered while searching for other molecules; the telescope beamwidths were $\sim$30$\arcsec$ and $\sim$25$\arcsec$, with corresponding beam efficiencies of 0.7 and 0.6, at those frequencies, respectively.
\begin{figure}[h!]
\centering
\includegraphics[scale=.90]{Full_ARO_Split1.eps}
\vspace{-1em}
\caption{The 2 mm spectral coverage of the ARO observations toward W51 e1/e2. Frequencies are given assuming an LSR velocity of 55 km/s. Molecular transitions are labeled for context. tEME transitions are marked by vertical blue lines of uniform height.}
\label{ARO_Full1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=.90]{Full_ARO_Split2.eps}
\vspace{-1em}
\caption{The 3 mm spectral coverage of the ARO observations toward W51 e1/e2. Frequencies are given assuming an LSR velocity of 55 km/s. Molecular transitions are labeled for context. tEME transitions are marked by vertical blue lines of uniform height.}
\label{ARO_Full2}
\end{figure}
\section{Results and Discussion}
\begin{figure}[h!]
\centering
\includegraphics[scale=.3]{Fuchs_Intensity.eps}
\vspace{-1em}
\caption{The computed LTE peak antenna temperature of individual tEME transitions based on Equation 2 assuming a column density of 2$\times$10$^{14}$ cm$^{-2}$, velocity width of 3 km s$^{-1}$, and an excitation temperature of 70 K (Blue) plotted with the reported peak antenna temperature from \citet{FuchsSpace} (Red). An enlarged view showing the weakest reported transition from \citet{FuchsSpace} is shown in the inset.}
\label{Fuchs_Int}
\end{figure}
\subsection{Analysis of Previous tEME Observations}
Following the criteria of \citet{Snyder1}, we begin by attempting to verify the previously-reported detection of tEME toward W51 e2. We first consider the possibility that tEME is not well described by an LTE model. While experimental collisional cross-sections are not available, a rough collisional cross section based on molecular geometry and Van der Waals radii gives critical densities of order $\sim$ 10$^3$ - 10$^4$ cm$^{-3}$. Reported densities toward W51 e2 are 10$^3$ - 10$^7$ cm$^{-3}$ \citep{W51Density, W51Density2}, suggesting that tEME transitions should be well described by an LTE approximation. This is supported by the observation that emission from many large species toward W51 is well described by LTE \citep{Kalenskii}. We therefore conclude that an LTE model is appropriate.
A simple first test is to compare the expected local thermodynamic equilibrium (LTE) antenna temperatures with the reported intensity of tEME transitions. From \citet{TonyFormulas}, the LTE antenna temperature is related to column density and temperature by Equation 2, where $E_U$ is the upper state energy of the transition (K), $Q_r$ is the rotational partition function, $\nu$ is the transition frequency (MHz), $S$ is the line strength, $\mu$ is the dipole moment of the molecule (Debye), $\Delta T_{mb}\Delta V$ is the peak observed intensity (mK) times the full width half max (FWHM) of the line (km s$^{-1}$), B is the beam dilution factor, $\Theta_S$ is the source size, $\Theta_B$ is the beam size, and $\eta_B$ is the beam efficiency of the telescope at $\nu$. That is,
\begin{equation*}
\label{TonyColumn}
N_T = (1.8\times 10^{14}) \frac{Q_r e^{\frac{E_u}{T_{ex}}}}{\text{B} \nu S\mu^2}\times\frac{\Delta T_{mb}\Delta V}{\eta_B \Bigg(1-\frac {e^{\frac{(4.8\times10^{-5})\nu}{T_{ex}}}-1}{e^{\frac{(4.8\times10^{-5})\nu}{T_{bg}}}-1} \Bigg)}, \hspace{0.1in} B = \frac{\Theta_S^2}{\Theta_B^2 + \Theta_S^2} \tag{2}\end{equation*}
The transition strengths, frequencies, upper state energies, as well as the rotational partition function (Q = 2027.617 $\times$ T$^{3/2}$) and dipole moment ($\mu_a$ = 0.146 D \& $\mu_b$ = 1.165 D) are taken from \cite{FuchsLab}. While the assumed velocity width is not explicitly given, from Figure 4 of \citet{FuchsSpace} a FWHM of 1.4 MHz at 150.8 GHz, or 2.7 km s$^{-1}$, may be inferred, in good agreement with previous observations toward W51 e2 \citep[e.g.][]{Remijan1}. Using the reported column density of 2$\times$10$^{14}$ cm$^{-2}$ and a rotational temperature of 70 K from \citet{FuchsSpace}, as well as a velocity width of 3 km s$^{-1}$, $\eta_B$ = 1, a beam filling factor of B = 1, and background temperature of T$_{bg}$ = 2.7 K, the peak intensity values are calculated in the T$_{mb}$ scale using Equation 2 and plotted (blue crosses) against their corresponding observed values (red circles) from \citet{FuchsSpace} in Figure \ref{Fuchs_Int}. The complete list of parameters used and calculated integrated intensities is given in Table \ref{parameters}.
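As a concrete check, Equation 2 can be inverted to give the integrated intensity implied by a given column density. The sketch below is our own implementation, not code from either paper, using the constants quoted above (partition function and dipole moments from \citet{FuchsLab}); for the 20$_{0,20}$ -- 19$_{1,19}$ transition it gives roughly 0.9 mK km s$^{-1}$ per spin component, and a five-component blended peak near 1.4 mK, consistent with Table \ref{parameters} to within the rounding of the quoted constants.

```python
import math

def integrated_intensity_mK(N_T, nu_MHz, S_mu2, E_u, T_ex,
                            T_bg=2.7, eta_B=1.0, B=1.0):
    """Invert Equation 2: integrated intensity (mK km/s) of a single
    spin component implied by a total column density N_T (cm^-2)."""
    Q = 2027.617 * T_ex**1.5                       # tEME partition function
    r = (math.expm1(4.8e-5 * nu_MHz / T_ex) /
         math.expm1(4.8e-5 * nu_MHz / T_bg))       # background correction term
    return (N_T * B * nu_MHz * S_mu2 * eta_B * (1.0 - r) /
            (1.8e14 * Q * math.exp(E_u / T_ex)))

# 20_{0,20} -- 19_{1,19} (b-type: S = 14.283, mu_b = 1.165 D, E_u = 80.4 K)
v = integrated_intensity_mK(2e14, 150845.28, 14.283 * 1.165**2, 80.4, 70.0)
peak_mK = 5 * v / (1.064 * 3.0)   # five blended components, 3 km/s Gaussian FWHM
print(v, peak_mK)                 # roughly 0.9 mK km/s and a peak near 1.4 mK
```

Running the same arithmetic for the other entries of Table \ref{parameters} reproduces the calculated column, which is what makes the mismatch with the reported intensities so stark.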
It is immediately apparent that the reported transitions from \citet{FuchsSpace} do not match their predicted values. Indeed, \textit{every reported transition should have a peak intensity at least an order of magnitude less than its reported value}. As shown in Table \ref{parameters}, the discrepancy is 2 - 4 orders of magnitude for most transitions. In order to be considered valid, the observed intensity of all transitions should match their predicted values, in accordance with criterion 3 from \citet{Snyder1}. Accounting for the possibility that the five tEME spin components are blended into a single peak, the greatest peak intensity observed at the column density and temperature reported by \citet{FuchsSpace} would be that of the 20$_{0,20}$ -- 19$_{1,19}$ transition, with a peak intensity of 1.4 mK, well below the previously reported intensity of \citet{FuchsSpace} as well as the RMS of both the present and previous observations. Reexamining the data as a whole, we performed an iterative least-squares fit of the data used in Fit II of \citet{FuchsSpace} to determine the reported column density. This yields a best fit column density of 6$\times$10$^{16}$ cm$^{-2}$. We therefore conclude that the column density of 2$\times$10$^{14}$ cm$^{-2}$ derived by \citet{FuchsSpace} is not valid.
A probable explanation for the reported transitions from \citet{FuchsSpace} is interference from coincident transitions of other species. W51 e1/e2 is a rich molecular source, and in the present observations of W51 e1/e2 there is, on average, a transition with peak intensity $\geq$ 25 mK every 6.3 MHz and a transition with peak intensity $\geq$ 15 mK every 3.2 MHz. For transitions near the noise level, this means that there is a strong probability that there will be a coincident transition within twice the FWHM that may be falsely attributed to the new molecule. \citet{FuchsSpace} note that of their observed transitions, only two are free of any interfering transitions. This, however, is based only on a comparison with previously detected species and does not account for the possibility of interference from previously unidentified transitions.
Examining the reported transition frequencies, the difference between the observed and laboratory frequencies varies from -2.0 MHz to 1.46 MHz with a root mean squared value of 926 kHz and a standard deviation of 896 kHz. As these values span a wide range of positive and negative velocity offsets, this cannot be attributed to a systematic difference in the velocity of a single carrier relative to the reported LSR velocity of W51 e2. The laboratory measurements from \citet{FuchsLab} have uncertainties on the order of tens of kHz; thus, this also cannot be attributed to uncertainty in the laboratory frequencies. A possible explanation is low spectral resolution. The previously reported astronomical observations have a resolution that varies from 0.3 MHz to 1.25 MHz. Many of the observed transitions have an observed minus calculated value at or below some or all of these spectral resolutions; however, because the resolution of the individual spectra used to calculate these values is not specified, it is impossible to evaluate this possibility for many transitions. It can, however, be noted that four ($\sim$ 21 \%) of the transitions have an observed frequency that differs from its laboratory measurement by $\geq$ 1.25 MHz and are therefore likely not associated with tEME emission.
\subsection{Analysis of ARO Observations}
An alternative approach is to examine all tEME transitions covered by the present ARO observations. As a starting point, we assume a column density of 1.3$\times$10$^{16}$ cm$^{-2}$ such that the emission at the 20$_{0,20}$ -- 19$_{1,19}$ tEME transition is reproduced for an excitation temperature of 70 K. A simulation can be made of the resulting tEME line intensities, as shown in Figure \ref{FuchsFail}. It is clear in this modeling that several transitions with predicted peak intensities well above the RMS of the observations are absent. To satisfy criterion 4 of \cite{Snyder1}, there should be no absent transitions. In fact, an excitation temperature of 70 K cannot satisfy this criterion unless the column density is sufficiently low that all observed transitions have peak intensities below the RMS of the observations. Considering other excitation temperatures (10 K - 300 K) and column densities (1$\times$10$^{12}$ cm$^{-2}$ - 1$\times$10$^{16}$ cm$^{-2}$) does not improve the situation. It becomes apparent that, in order not to have ``missing'' lines, the tEME column density must be sufficiently low that all observed transitions are below the RMS of the observations from \cite{FuchsSpace} as well as the present observations, and thus not detectable in either observation.
\begin{figure}[h!]
\centering
\includegraphics[scale=.75]{SimIntGrid.eps}
\vspace{-1em}
\caption{200 km s$^{-1}$ windows of the ARO observations of potential tEME transitions toward W51 e1/e2 (black). A simulation of tEME at 10 K (blue), 150 K (green), and 300 K (red), assuming $\Delta$V = 3 km s$^{-1}$, $\eta_B$ = 1, B = 1, and N$_T$ equal to the best-fit column density derived for each temperature. Clearly, the 3$_{2,1}$ -- 2$_{1,2}$ and 16$_{1,16}$ -- 15$_{0,15}$ transitions are inconsistent with the 20$_{0,20}$ -- 19$_{1,19}$ transition.}
\label{FuchsFail}
\end{figure}
An additional concern is the effect of beam dilution. Fuchs et al. assume a source size of 5$\arcsec$. At 145 GHz, the ARO beam is $\sim$ 43$\arcsec$, corresponding to a beam dilution factor 6.67 times higher than that of the IRAM 30 m at the same frequency. If this source size is correct, the present ARO observations would be up to a factor of 6.67 less sensitive. However, examining the only transition covered by both observatories, the 20$_{0,20}$ -- 19$_{1,19}$ tEME transition at 150845 MHz (Figure \ref{tEME_Comparison}), after comparing both ARO and IRAM 30 m observations in the T$_{mb}$ scale, assuming a 5\arcsec\ source size, it is clear that the flux observed at this frequency does not decrease when observed with a larger beam, indicating that it cannot arise from a compact source. Therefore, a beam dilution factor of 6.67 cannot apply. Furthermore, the column density from Fit II of \cite{FuchsSpace} would still produce transitions clearly visible in the ARO observations.
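The magnitude of this effect can be illustrated with the beam filling factor B defined in Equation 2. In the sketch below (our own calculation; the IRAM 30 m beam width at 145 GHz is an approximate value we assume), a 5$\arcsec$ source is diluted roughly six times more in the 43$\arcsec$ ARO beam than in the IRAM 30 m beam, comparable to the factor of 6.67 quoted above.

```python
def beam_dilution(theta_source, theta_beam):
    """Beam filling factor B = theta_s^2 / (theta_b^2 + theta_s^2)
    from Equation 2 (angles in arcsec)."""
    return theta_source**2 / (theta_beam**2 + theta_source**2)

theta_s = 5.0               # 5 arcsec source size assumed by Fuchs et al.
b_aro, b_iram = 43.0, 17.0  # approximate 145 GHz beams we assume for the two dishes
ratio = beam_dilution(theta_s, b_iram) / beam_dilution(theta_s, b_aro)
print(ratio)                # ~6: a compact source is several times more diluted at the ARO
```

That the 150845 MHz feature does not weaken by this factor in the larger beam is the basis for rejecting a compact (5$\arcsec$) source of the emission.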
With no reliably identified tEME transitions, we determine an upper limit to the column density towards W51 e1/e2 using the current observations. As dimethyl ether and tEME are thought to form from similar processes, it is plausible to assume that they should have similar excitation conditions in a source. From \citet{Kalenskii}, the derived rotational temperature of dimethyl ether towards W51 e1/e2 is 85 K. The strongest tEME transition in the current observations at 85 K that has no obvious interfering transitions is the 12$_{0,12}$ -- 11$_{0,11}$ transition. This transition is not detected, but the RMS at this frequency can be used to determine an upper limit. Using Equation 2 and a velocity width of 3 km s$^{-1}$, an upper limit of $\leq$ 1$\times$10$^{15}$ cm$^{-2}$ can be derived for tEME, assuming all five components are blended into a single transition.
Finally, we assess the possibility of detecting tEME in Sagittarius B2 (N-LMH). Using data from the PRIMOS project toward Sgr B2 (N-LMH), we have searched for possible tEME transitions. Several peaks coincident with tEME transitions were located; however, several tEME transitions of similar predicted intensity show no emission, indicating that these features are simply coincidental. We therefore use the RMS at the strongest predicted transition to set an upper limit. Molecules detected toward Sgr B2 (N-LMH) show a wide range of excitation temperatures. No tEME transitions are detected, making it impossible to determine an excitation temperature. We therefore compute the upper limit at 10 K and 85 K. At 10 K, the strongest transition with no interfering features is the 4$_{1,3}$ -- 4$_{0,4}$ transition, while at 85 K the strongest feature would be the 9$_{1,8}$ -- 9$_{0,9}$ transition. The RMS at each transition frequency in the PRIMOS data is 4.5 mK and 11 mK, respectively. Using Equation 2 and assuming a beam efficiency of 0.8, the molecular parameters given in Table \ref{parameters}, and a velocity width of 13 km s$^{-1}$, the upper limits for the column density of tEME towards Sgr B2 (N-LMH) are $\leq$ 2.1 $\times$ 10$^{15}$ cm$^{-2}$ and $\leq$ 1.7 $\times$ 10$^{16}$ cm$^{-2}$, respectively.
\clearpage
\begin{deluxetable}{c c c c c c c}
\tablecolumns{7}
\tabletypesize{\footnotesize}
\tablecaption{Observed and calculated intensity of tEME Transitions.\tablenotemark{a}}
\tablewidth{0pt}
\tablehead{
\colhead{Transition} & \colhead{$\nu$} & \colhead{S$_{ij}$} & \colhead{E$_u$} & \colhead{N$_{Lines}$} & \colhead{Observed $\int T_{mb}$d$v$}\tablenotemark{b} & \colhead{Calculated $\int T_{mb}$d$v$\tablenotemark{c}} \\
\colhead{$J_{K_a,K_c}^{\prime} - J_{K_a,K_c}^{\prime\prime}$} & \colhead{(MHz)} & \colhead{} & \colhead{(K)} & \colhead{} & \colhead{ K km s$^{-1}$} & \colhead{ K km s$^{-1}\times$10$^3$}
}
\startdata
11$_{2,10}$ -- 11$_{1,11}$ & 80881.71 - 80883.6 & 5.195 & 30.1 & 5 & 0.09 & 0.3695\\
24$_{1,23}$ -- 24$_{0,24}$ & 81198.23 - 81199.22 & 10.268 & 118.5 & 5 & 0.16 & 0.2073\\
35$_{3,32}$ -- 35$_{2,33}$ & 91439.39 - 91441.26 & 27.608 & 255.2 & 5 & 1.0 & 0.0891\\
34$_{2,32}$ -- 34$_{1,33}$ & 91630.26 - 91631.17 & 21.850 & 237.1 & 5 & 0.03 & 0.09149\\
37$_{3,34}$ -- 37$_{2,35}$ & 91811.70 - 91813.40 & 29.637 & 283.7 & 5 & 0.1 & 0.06386\\
29$_{3,26}$ -- 29$_{2,27}$ & 96390.20 - 96392.55 & 20.487 & 179.1 & 5 & 1.85 & 0.2067\\
3$_{2,1}$ -- 2$_{1,2}$ & 96463.73 - 96464.85 &1.545 - 1.639 & 6.9 & 5 & 0.002 & 0.1936\tablenotemark{f}\\
22$_{3,19}$ -- 22$_{2,20}$ & 107655.40 - 107658.06 & 13.199 & 108.3 & 5 & 0.05 & 0.4089\\
7$_{2,5}$ -- 6$_{1,6}$ & 131349.80 - 131351.62 & 2.306 & 15.4 & 5 & 0.02 & 0.3284\\
15$_{1,15}$ -- 14$_{0,14}$ & 131372.62 - 131373.11 & 9.858 & 46.7 & 5 & 0.02 & 0.8986\\
34$_{4,30}$ -- 34$_{3,31}$ & 150661.35 - 150664.55 & 20.064 & 248.7 & 5 & 0.12 & 0.1170\\
13$_{6,x}$ -- 14$_{5,y}$\tablenotemark{d} & 150793.24 - 150797.40 & 1.289 & 76.6 & 10 & 0.17 & 0.088\\
20$_{0,20}$ -- 19$_{1,19}$ & 150845.28 - 150845.58 & 14.283 & 80.4 & 5 & 0.20 & 0.9231\\
19$_{5,z}$ -- 19$_{4,15}$\tablenotemark{e} & 215324.99 - 215327.56 & 3.487 - 9.519 & 102.2 & 6 & 0.28 & 0.6436\tablenotemark{f}\\
28$_{0,28}$ -- 27$_{1,27}$ & 217940.65 - 217940.76 & 22.586 & 154.7 & 5 & 3.66 & 0.7299\\
16$_{3,14}$ -- 15$_{2,13}$ & 245103.55 - 245106.43 & 4.994 & 62.9 & 5 & 0.46 & 0.6735\\
31$_{1,31}$ -- 30$_{0,30}$ & 245274.09 - 245274.22 & 25.712 & 188.8 & 5 & 0.87 & 0.5748\\
17$_{3,15}$ -- 16$_{2,14}$ & 252188.29 - 252191.14 & 5.191 & 69.5 & 5 & 0.70 & 0.6556\\
28$_{2,27}$ -- 27$_{1,26}$ & 253307.71 - 253308.94 & 12.415 & 161.0 & 5 & 16.01 & 0.4264\\
\hline
4$_{1,3}$ -- 4$_{0,4}$\tablenotemark{g} & 25335.53 - 25336.17 & 4.386 & 5.1 & 5 & & \\
9$_{1,8}$ -- 9$_{0 ,9}$ & 30561.87 - 30562.54 & 8.335 & 18.2 & 5 & & \\
\enddata
\tablenotetext{a}{Computed integrated intensities are assuming T$_{ex}$ = 70 K, N$_T$ = 2$\times$10$^{14}$ cm$^{-2}$, and $\Delta$V = 3 km s$^{-1}$, B = 1}
\tablenotetext{b}{Observed integrated intensities are taken from \citet{FuchsSpace}}
\tablenotetext{c}{Computed integrated intensities are for a single transition. The maximum observable integrated intensity can be obtained by $\int T_{mb}$d$v_{max}$ = $\int T_{mb}$d$v\times$N$_{Lines}$}
\tablenotetext{d}{ \textit{x}-\textit{y} deontes 7-9, 7-10, 8-9, or 8-10.}
\tablenotetext{e}{\textit{z} denotes either 14 or 15.}
\tablenotetext{f}{Computed using the average value of S$_{ij}$}
\tablenotetext{g}{Parameters used to derive upper limits for PRIMOS data}
\label{parameters}
\end{deluxetable}
\pagebreak
\section{Conclusions}
The criteria for the detection of a new molecule outlined in \citet{Snyder1} have again been rigorously applied to a claimed detection. As in the cases of dihydroxyacetone \citep{Apponi1} and glycine \citep{Snyder1}, these criteria underline the need for a thorough analysis when evaluating the possible detection of new molecules. In the present case, analysis of the previously reported detection of tEME (\citealt{FuchsSpace}) calls into question the original detection. Both the LSR velocities and LTE intensities reported in \citet{FuchsSpace} are shown to be inconsistent with the reported column density and temperature, casting doubt on the claimed detection of tEME. Based on previous observations of W51 e1/e2, we instead derive an upper limit five times higher than the previously reported value. We also derive similar upper limits toward Sgr B2 (N-LMH).
P.B.C \& G.A.B. gratefully acknowledge funding from the NASA Astrophysics Research and Analysis and Herschel Guaranteed Time Observer programs. B.A.M. gratefully acknowledges funding by an NSF Graduate Research Fellowship. The Arizona Radio Observatory is operated by Steward Observatory, University of Arizona, with partial support through the NSF University Radio Observatories program (URO: AST-1140030). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\clearpage
\bibliographystyle{apj}
\section{2011 Nobel Prize Winners in Physics}
"The Nobel Prize in Physics 2011 was divided, one half awarded to
Saul Perlmutter, the other half jointly to Brian P. Schmidt and
Adam G. Riess for the discovery of the accelerating expansion of
the Universe through observations of distant supernovae,"
according to nobelprize.org.
\footnote{http://www.nobelprize.org/nobelprizes/physics/laureates/2011/ }
Two research teams, the Supernova Cosmology Project (SCP) led by
Saul Perlmutter and the High-z Supernova Search Team (HZT) headed by
Brian Schmidt,
raced to map the Universe by locating the most distant supernovae.
The two research teams found over 50 distant supernovae that
were dimmer than expected - a sign that the
expansion of the Universe was accelerating [1], [2].
Theoretically, this was not a new idea, since Albert Einstein
had introduced the idea of a cosmological constant (often denoted Lambda).
Einstein did so because he was guided by the paradigm of
the day that the Universe was static. When he heard of
Edwin Hubble's discovery that the Universe was actually expanding,
he declared that the inclusion of the cosmological constant
was his "biggest blunder".
Since there was no observational evidence for a cosmological constant
after that, most scientists assumed that Lambda was zero.
A series of papers published in Astrophysics and Space Science
in the early 1990s, years before the supernova publications,
calculated $\Omega_{\Lambda}$ from observed data.
\section{Earlier Publications About Positive Cosmological Constant}
Pa\'al et al. [3] used the so-called pencil beam survey
[4] to find out whether the regularity found in the
galaxy distribution is quasiperiodic or not.
It was found that $q_0$ was preferably negative [3]; therefore
a nonzero Lambda term was needed.
The preferred value was $\Omega_{\Lambda} = 2/3$.
This is very close to the value that was later observed by
the Nobel Prize winners.
\begin{figure}
{\includegraphics[angle=0,width=12.4cm]{horvathistvan_fig1.ps}}
\caption{This figure was published in Holba et al.
(1994) (Figure 8 in that article).
The red line represents the flat cosmological model.}
\end{figure}
In the second paper [5] a two-parameter fit was made.
A positive cosmological constant (negative $q_0$) was found to
still be needed. In the third paper [6] optical and radio quasars
were also used to find the preferred cosmological parameters.
Figure 8 in that paper [6] showed the results (see figure).
As stated there, the contour marks the 80\% confidence level.
The preferred region is similar to that which the supernova
analyses later suggested.
For comparison, please see this web page:
http://vizion.galileowebcast.hu/HOI/Comparation.jpg
Those earlier suggestions therefore also support
the Nobel Prize winning results.
\acknowledgments
The author thanks his supervisors, B. Luk\'acs and G. Pa\'al,
for including him in this research. Unfortunately, G. Pa\'al
died in 1992, which surely contributed to the fact
that these results went almost unrecognized.
\bigskip
{\bf REFERENCES}
[1] Perlmutter, S., et al. 1999, \apj, 517, 565

[2] Riess, A. G., et al. 1998, \aj, 116, 1009

[3] Pa\'al, G., Horv\'ath, I., \& Luk\'acs, B. 1992, \apss, 191, 107

[4] Broadhurst, T. J., et al. 1990, Nature, 343, 726

[5] Holba, A., Horv\'ath, I., Luk\'acs, B., \& Pa\'al, G. 1992, \apss, 198, 111

[6] Holba, A., Horv\'ath, I., Luk\'acs, B., \& Pa\'al, G. 1994, \apss, 222, 65
\end{document}
\section{Introduction}
\label{sec::introduction}
Inverse differential problems, where given a set of measurements one seeks a set of optimal parameters in a governing differential equation, arise in numerous scientific and technological domains. Some well-known applications include X-ray tomography \cite{epstein2007introduction,natterer2001mathematics}, ultrasound \cite{van2019deep}, MRI imaging \cite{jin2017deep}, and transport in porous media \cite{J.-P.-Fouque:2007aa}. Moreover, modeling and control of dynamic complex systems is a common problem in a broad range of scientific and engineering domains, with examples ranging from understanding the motion of bacteria colonies in low Reynolds number flows \cite{samsami2020stability}, to the control of spinning rotorcrafts in high speed flights \cite{hedayatpour2017unified,hedayatpour2019precision}. Other applications in medicine, navigation, manufacturing, \textit{etc.} need estimation of the unknown parameters in \textit{real time}, \textit{e.g.} in electroporation \cite{zupanic2012treatment,mistani2019parallel} the pulse optimizer has to be informed about tissue parameters within microseconds. On the other hand, high resolution datasets describing the spatiotemporal evolution of complex systems are becoming increasingly available from advanced multi-scale numerical simulations (see \emph{e.g.} \cite{mistani2019parallel, mistani2018island}). These advances have become possible partly due to recent developments in discretization techniques for nonlinear partial differential equations with sharp boundaries (see \emph{e.g.} the reviews \cite{Gibou:2019aa, Gibou:2018aa}). However, solving these inverse problems poses substantial computational and mathematical challenges that make it difficult to infer reliable parameters from limited data and in real time.
The problem can be mathematically formulated as follows. Let the values of $u=u(t, x_1, \ldots, x_n)$ be given by a set of measurements, which may include noise. Knowing that $u$ satisfies the partial differential equation:
\begin{eqnarray*}
\frac{\partial u}{\partial t} = f \left (t, x_1, \ldots, x_n; u, \frac{\partial u}{\partial x_1}, \ldots \frac{\partial u}{\partial x_n}; \frac{\partial^2 u}{\partial x_1 \partial x_1}, \ldots \frac{\partial^2 u}{\partial x_1 \partial x_n}; \ldots; \mathbf{c} \right),
\end{eqnarray*}
find the hidden fields stored in $\mathbf{c}$, where the hidden fields can be constant or variable coefficients (scalars, vectors or tensors).
Deep neural networks have, rather recently, attracted considerable attention for data modeling in a vast range of scientific domains, in part due to freely available modern deep learning libraries (in particular \texttt{TensorFlow} \cite{abadi2016tensorflow}). For example, deep neural networks have shown astonishing success in emulating sophisticated simulations \cite{he2019learning,zhang2019dark,zamudio2019higan,chandrasekaran2019solving,sinitskiy2018deep}, discovering governing differential equations from data \cite{Raissi:2017aa,berg2019data,long2018pde,schaeffer2017learning}, as well as potential applications to study and improve simulations of multiphase flows \cite{Gibou:2019aa}. We refer the reader to \cite{owhadi2019operator,owhadi2019statistical} for a comprehensive survey of interplays between numerical approximation, statistical inference and learning. However, these architectures require massive datasets and extensive computations to train numerous hidden weights and biases. Therefore, reducing complexity of deep neural network architectures for inverse problems poses a significant practical challenge for many applications in physical sciences, especially when the collection of large datasets is a prohibitive task \cite{Raissi:2018aa}. One remedy to reduce the network size is to embed the knowledge from existing mathematical models \cite{Stinis:2019aa} or known physical laws within a neural network architecture \cite{Ling:2016aa,Geng:2019aa}. Along these lines, semantic autoencoders were recently proposed by Aragon-Calvo \cite{aragon2019self}, where they replaced the decoder stage of an autoencoder architecture with a given physical law that can reproduce the provided input data given a physically meaningful set of parameters. The encoder is then constrained to discover optimal values for these parameters, which can be extracted from the bottleneck of the network after training. 
We shall emphasize that this approach reduces the size of the unknown model parameters, and that the encoder can be used independently to infer hidden parameters in real time, while adding interpretability to deep learning frameworks. Inspired by their work, we propose to blend traditional numerical solver algorithms with custom deep neural network architectures to solve inverse PDE problems more efficiently, and with higher accuracy.
\subsection{Existing works}
Recently, the most widely used approach for solving forward and inverse partial differential equations using neural networks has been the constrained optimization technique. These algorithms augment the cost function with terms that describe the PDE, its boundary and its initial conditions, while the neural network acts as a surrogate for the solution field. Depending on how the derivatives in the PDEs are computed, there may be two general classes of methods that we review in the next paragraph.
In the first class, spatial differentiations in the PDE are performed exclusively using automatic differentiation, while temporal differentiation may be handled using traditional Runge-Kutta schemes (called \textit{discrete time models}) or using automatic differentiation (called \textit{continuous time models}) \cite{raissi2017physics}. In these methods, automatic differentiation computes gradients of the output of a neural network with respect to its input variables. Hence, the input must always be the independent variables, \emph{i.e.} the input coordinates $\mathbf{x}$, time and the free parameters. In this regard, network optimization aims to calibrate the weights and biases such that the neural network outputs the closest approximation of the solution of a PDE; this is enforced through a regularized loss function, an old idea first proposed by Lagaris \textit{et al.\ } (1998) \cite{lagaris1998artificial}. In 2015, the general framework of solving differential equations as a learning problem was proposed by Owhadi \cite{owhadi2015bayesian,Owhadi2015,owhadi2017multigrid}, which revived interest in using neural networks for solving differential equations in recent years. Raissi \textit{et al.\ } (2017) \cite{raissi2017physics,Raissi2017PhysicsID} presented the regularized loss function framework under the name \textit{physics informed neural networks}, or PINNs, and applied it to time-dependent PDEs. Ever since, other authors have mostly adopted PINNs, see \emph{e.g.} \cite{Sirignano:2018aa,bar2019unsupervised}. The second class of constrained optimization methods was proposed by Xu and Darve \cite{xu2019neural}, who examined the possibility of directly using pre-existing finite discretization schemes within the loss function.
An alternative approach for solving PDE systems is through explicit embedding of the governing equations inside the architecture of deep neural networks via convolutional layers, activation functions or augmented neural networks. Below we review some of these methods:
\begin{itemize}
\item A famous approach is PDE-Net \cite{long2018pde,long2019pde} which relies on the idea of numerical approximation of differential operators by convolutions. Therefore, PDE-Nets use convolution layers with trainable and constrained kernels that mimic differential operators (such as $\rm U_x, U_y, U_{xx}, \cdots$) whose outputs are fed to a (symbolic) multilayer neural network that models the nonlinear response function in the PDE system, \textit{i.e.} the right hand side in $\rm U_t = F(U, U_x, U_y, U_{xx}, \cdots)$. Importantly, PDE-Nets can only support \textit{explicit} time integration methods, such as the forward Euler method \cite{long2018pde}. Moreover, because the differential operators are being learned from data samples, these methods have hundreds of thousands of trainable parameters that demand hundreds of data samples; \textit{e.g.} see section 3.1 in \cite{long2018pde} that uses $20$ $\delta t$-blocks with $17,000$ parameters in each block, and use $560$ data samples for training.
\item Berg and Nystr{\"o}m \cite{berg2017neural} (hereby BN17) proposed an augmented design by using neural networks to estimate PDE parameters whose output is fed into a forward finite element PDE solver, while the adjoint PDE problem is employed to compute gradients of the loss function with respect to the weights and biases of the network using automatic differentiation. Even though their loss function is a simple $\rm L_2$-norm functional, the physics is not localized in the structure of the neural network, as the adjoint PDE problem is also employed in the optimization process. It is important to recognize that in their approach the numerical solver is a computational object separate from the neural network; therefore, computing gradients of the error functional with respect to the network parameters has to be done explicitly through the adjoint PDE problem. Moreover, their design cannot naturally handle trainable parameters in the numerical discretization itself, a feature that is useful for some meshless numerical schemes. In contrast, \emph{in BiPDEs the numerical solver is a computational layer added in the neural network architecture and naturally supports trainable parameters in the numerical scheme.} For example, in the meshless method developed in section \ref{sec::meshfree} we leverage this unique feature of BiPDEs to also train for the shape parameters and interpolation seed locations of the numerical scheme besides the unknown diffusion coefficient.
\item Dal Santos \textit{et al.\ } \cite{dal2020data} proposed an embedding of a reduced basis solver as \textit{activation function} in the last layer of a neural network. Their architecture resembles an autoencoder in which the decoder is the reduced basis solver and the parameters at the bottleneck ``are the values of the physical parameters themselves or the affine decomposition coefficients of the differential operators'' \cite{dal2020data}.
\item Lu \textit{et al.\ } \cite{lu2020extracting} proposed an unsupervised learning technique using variational autoencoders to extract physical parameters (not inhomogeneous spatial fields) from noisy spatiotemporal data. Again the encoder extracts physical parameters and the decoder propagates an initial condition forward in time given the extracted parameters. These authors use convolutional layers both in the encoder, to extract features, and in the decoder with recurrent loops, to propagate solutions in time; \textit{i.e.} the decoder leverages the idea of estimating differential operators with convolutions. Similar to PDE-Nets, this architecture is also a ``PDE-integrator with explicit time stepping'', though they need as few as 10 samples in the case of the Kuramoto-Sivashinsky problem.
\end{itemize}
In these methods, a recurring idea is treating latent space variables of autoencoders as physical parameters passed to a physical model decoder. This basic idea pre-dates the literature on solving PDE problems and has been used in many different domains. Examples include Aragon-Calvo \cite{aragon2019self} who developed a galaxy model fitting algorithm using \textit{semantic autoencoders}, or Google Tensorflow Graphics \cite{tensorflowGraphics} which is a well-known application of this idea for scene reconstruction.
\subsection{Present work}
Basic criteria in developing numerical schemes for solving partial differential equations are \textit{consistency} and \textit{convergence} of the method, \textit{i.e.} increasing the resolution of the data should yield better results. Not only is there no guarantee that approximating differential operators through learned convolution kernels or automatic differentiation provides a consistent or even stable numerical method, but the learning of convolution kernels to approximate differential operators also requires more data and therefore yields less data-efficient methods. It therefore seems reasonable to explore the idea of blending classic numerical discretization methods into neural network architectures, hence informing the neural network about proper discretization methods. This is the focus of the present manuscript.
In the present work, we discard the framework of constrained optimization altogether and instead choose to explicitly blend fully traditional finite discretization schemes as the decoder layer in semantic autoencoder architectures. In our approach, the loss function is only composed of the difference between the actual data and the predictions of the solver layer, but contrary to BN17 \cite{berg2017neural} we do not consider the adjoint PDE problem to compute gradients of the error functional with respect to network parameters. This is due to the fact that in our design the numerical solver is a custom layer inside the neural network through which backpropagation occurs naturally. This is also in contrast to PINNs where the entire PDE, its boundary and its initial conditions are reproduced by the output of a neural network by adding them to the loss function. Importantly, the encoder learns an approximation of the inverse transform in a \emph{self-supervised} fashion that can be used to evaluate the hidden fields underlying unseen data without any further optimization. Moreover, the proposed framework is versatile as it allows for straightforward consideration of other domain-specific knowledge such as symmetries or constraints on the hidden field. In this work, we develop this idea for stationary and time-dependent PDEs on structured and unstructured grids and on noisy data using mesh-based and mesh-less numerical discretization methods.
\subsection{Novelties and features of BiPDEs}
A full PDE solver is implemented as a \textit{custom layer inside the architecture of semantic autoencoders} to solve inverse-PDE problems in a self-supervised fashion. Technically this is different from other works that implement a propagator decoder by manipulating activation functions or kernels/biases of convolutional layers, or those that feed the output of a neural network to a separate numerical solver, such as BN17, which requires considering the adjoint problem in order to compute partial derivatives. The novelties and features of this framework are summarized below:
\begin{enumerate}
\item \textbf{General discretizations.} We do not limit numerical discretization of differential equations to only finite differences that are emulated by convolution operations, our approach is more general and permits employing more sophisticated numerical schemes such as meshless discretizations. It is a more general framework that admits any existing discretization method directly in a decoder stage.
\item \textbf{Introducing solver layers.} All the information about the PDE system is \emph{only} localized in a solver layer; \textit{i.e.} we do not inform the optimizer or the loss function with the adjoint PDE problem, or engineer regularizers or impose extra constraints on the kernels of convolutions, or define exotic activation functions as reviewed above. In other words, PDE solvers are treated as custom layers similar to convolution operations that are implemented in convolutional layers. An important aspect is the ability to employ any of the usual loss functions used in deep learning; for example, we arbitrarily used the mean absolute error or the mean squared error in our examples.
\item \textbf{Blending meshless methods with trainable parameters.} Another unique proposal made in this work is the use of Radial Basis Function (RBF) based PDE solver layers as a natural choice to blend with deep neural networks. Contrary to other works, the neural network is not only used as an estimator for the unknown field but is also tasked to optimize the shape parameters and interpolation points of the RBF scheme. In fact, unlike the parameter-free decoders of the reviewed works, our meshless decoder has trainable parameters: shape parameters and seed locations are trainable parameters that define the RBF discretization, analogous to convolutional layers with trainable weights/biases used in the machine learning domain. This presents an example of neural networks complementing numerical discretization schemes. Choosing optimal shape parameters or seed locations is an open question in the field of RBF-based PDE solvers, and here we show neural networks can be used to optimally define these discretization parameters.
\item \textbf{Explicit/implicit schemes.} Most of the existing frameworks only accept explicit numerical discretizations in time; our design, however, naturally admits implicit methods as well. Using implicit methods allows taking bigger timesteps for stiff problems such as the diffusion problem, hence not only providing faster inverse-PDE solvers, but also more robust/stable ones.
\item \textbf{Data efficient.} Our design lowers the computational cost as a result of reusing classical numerical algorithms for PDEs during the learning process, which focuses the provided data on inferring the actual unknowns in the problem, \textit{i.e.} it reduces the burden of learning a discretization scheme from scratch.
\item \textbf{Physics informed.} Domain-specific knowledge about the unknown fields, such as symmetries or specialized basis functions, can be directly employed within our design.
\item \textbf{Inverse transform.} After training, the encoder can be used independently as a real-time estimator for unknown fields, \textit{i.e.} without further optimization. In other words, the network can be pre-trained and then used to infer unknown fields in real-time applications.
\end{enumerate}
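As a concrete illustration of the implicit-scheme point above, the following NumPy sketch (purely illustrative, and independent of the \texttt{TensorFlow} implementation used in this work; the function name and grid setup are our own) takes a single backward-Euler step of the one-dimensional heat equation with homogeneous Dirichlet boundaries. Because the step is unconditionally stable, a solver layer built on it tolerates timesteps far beyond the explicit stability limit.

```python
import numpy as np

def backward_euler_heat_step(u, D, dt, dx):
    """One implicit (backward Euler) step of u_t = D u_xx on a uniform 1D grid
    of interior nodes with homogeneous Dirichlet boundaries."""
    n = len(u)
    r = D * dt / dx**2
    # Tridiagonal system (I - dt * D * L_h) u_new = u
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = -r     # superdiagonal
    A[idx + 1, idx] = -r     # subdiagonal
    return np.linalg.solve(A, u)
```

In a BiPDE solver layer, the gradient of this output with respect to $D$ would be obtained by the deep learning framework's automatic differentiation through the linear solve, which is what lets the discretization act as an ordinary layer.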
\section{Blended inverse-PDE network (BiPDE-Net)}
The basic idea is to embed a numerical solver into a deep learning architecture to recover unknown functions in inverse-PDE problems, and all the information about the governing PDE system is only encoded inside the DNN architecture as a solver layer. In this section we describe our proposed architectures for inverse problems in one and two spatial dimensions.
\subsection{Deep neural networks (DNN)}
The simplest neural network is a single layer of perceptron that mathematically performs a linear operation followed by a nonlinear composition applied to its input space,
\begin{align}
\mathcal{N} =\sigma\big( \mathbf{W}\mathbf{x}+\mathbf{b}\big),
\end{align}
where $\sigma$ is called the \textit{activation function}. Deep neural networks are multiple layers stacked together within some architecture. The simplest example is a set of layers connected in series without any recurrent loops, known as feedforward neural networks (FNN). In a densely connected FNN, the action of the network is simply the successive compositions of previous layer outputs with the next layers, \textit{i.e.},
\begin{align}
\mathcal{N}_l =\sigma\big( \mathbf{W}_l\mathcal{N}_{l-1}(\mathbf{x})+\mathbf{b}_l\big),
\end{align}
where $l$ indicates the index of a layer. This compositional nature of NNs is the basis of their vast potential as universal function estimators of any arbitrary function on the input space $\mathbf{x}$, see e.g. \cite{tikhomirov1991representation, cybenko1989approximation, csaji2001approximation}. Another important feature of NNs is that they can effectively express certain high dimensional problems with only a few layers, for example Darbon \textit{et al.} \cite{darbon2020overcoming} have used NNs to overcome the curse of dimensionality for some Hamilton-Jacobi PDE problems (also see \cite{han2018solving,Sirignano:2018aa}).
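The successive compositions described above can be sketched in a few lines; the layer widths, random weights, and $\tanh$ activation below are arbitrary choices for illustration:

```python
import numpy as np

def fnn(x, weights, biases, sigma=np.tanh):
    """Feedforward network: N_l = sigma(W_l N_{l-1} + b_l), applied in series."""
    a = x
    for W, b in zip(weights, biases):
        a = sigma(W @ a + b)
    return a

rng = np.random.default_rng(0)
dims = [3, 8, 8, 2]   # input in R^3, two hidden layers of width 8, output in R^2
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
bs = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]
y = fnn(np.ones(3), Ws, bs)   # a point in R^2
```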
Most machine learning models are reducible to composition of simpler layers which allows for more abstract operations at a higher level. Common layers include dense layers as described above, convolutional layers in convolutional neural networks (CNNs) \cite{lecun1998gradient,krizhevsky2012imagenet}, Long-short term memory networks (LSTM) \cite{hochreiter1997long}, Dropout layers \cite{srivastava2014dropout} and many more. In the present work, we pay particular attention to CNNs owing to their ability to extract complicated spatial features from high dimensional input datasets. Furthermore, we define custom PDE solver layers as new member of the family of pre-existing layers by directly implementing numerical discretization schemes inside the architecture of deep neural networks.
\subsection{Custom solver layers}
A \textit{layer} is a high level abstraction that plays a central role in existing deep learning frameworks such as \texttt{TensorFlow}\footnote{For example see TensorFlow manual page at \href{https://www.tensorflow.org/guide/keras/custom_layers_and_models}{$\rm https://www.tensorflow.org/guide/keras/custom\_layers\_and\_models$} } \cite{abadi2016tensorflow}, \texttt{Keras} API \cite{chollet2015keras}, \texttt{PyTorch} \cite{paszke2017automatic}, \textit{etc.} Each Layer encapsulates a state, \textit{i.e.} trainable parameters such as weights/biases, and a transformation of inputs to outputs. States in a layer could also be non-trainable parameters in which case they will be excluded from backpropagation during training.
We implement different explicit or implicit numerical discretization methods as custom layers that transform an unknown field, initial data and boundary conditions to outputs in the solution space. Solver layers encapsulate numerical discretization schemes with trainable (\textit{e.g.} shape parameters and seeds in meshless methods) or non-trainable (\textit{e.g.} the finite difference methods) state parameters. Interestingly, solver layers with trainable parameters are new computational objects analogous to pre-existing convolutional layers with trainable kernel parameters.
An important aspect of layer objects is that they can be composed with other layers in any order. Particularly, this offers an interesting approach for solving inverse problems given by systems of partial differential equations with several unknown fields that can be modeled with neural layers. We will explore this avenue in future work. In the remainder of this manuscript we will only focus on different inverse-PDE examples given by a single PDE equation and one unknown field.
\subsection{Blended neural network architectures}
BiPDE is a two-stage architecture, with the first stage responsible for learning the unknown coefficients and the second stage performing numerical operations as in traditional numerical solvers (see figure \ref{fig::autoBiPDE}). To achieve higher performance, it is essential to use GPU parallelism. We leverage the capability provided by the publicly available library \texttt{TensorFlow} \cite{abadi2016tensorflow} by implementing our PDE solver as a \textit{custom layer} in our network using the \texttt{Keras} API \cite{chollet2015keras}. Details include vectorized operations to build the linear system associated with the PDE discretization.
We adopt a semantic autoencoder architecture, as proposed by Aragon-Calvo (2019) \cite{aragon2019self}, with the hidden parameters being represented at the bottleneck of the autoencoder. Figure \ref{fig::autoBiPDE} illustrates the architecture of the proposed semantic autoencoder. Depending on the static or time-dependent nature of the governing PDE, one may train this network over pairs of input-output solutions that are shifted $\rm p$ steps in time, such that for a static PDE we have $\rm p=0$ while dynamic PDEs correspond to $\rm p\ge 1$. We call this parameter the \textit{shift parameter}; it will control the accuracy of the method (\emph{cf.} section \ref{sec::meshfree}).
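Concretely, training pairs shifted by $\rm p$ timesteps can be assembled from an array of solution snapshots as in the following sketch (the array layout is a hypothetical example):

```python
import numpy as np

def shifted_pairs(u, p):
    """Given snapshots u of shape (T, nx), return (inputs, targets) shifted by
    p timesteps; p = 0 reproduces the static (input = output) case."""
    if p == 0:
        return u, u
    return u[:-p], u[p:]

u = np.arange(20.0).reshape(5, 4)     # 5 snapshots of a 4-point solution
x_in, y_out = shifted_pairs(u, p=2)   # yields 3 training pairs
```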
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/BiPDE.png}
\caption{Architecture of the BiPDE to infer unknown parameters of hidden fields. Here the loss function is the mean squared error between data and output of the autoencoder, however other choices for loss function may be used depending on the nature of data. }
\label{fig::autoBiPDE}
\end{figure}
An important aspect is that the input to BiPDE is the solution data itself. In other words the neural network in a BiPDE is learning the \textit{inverse transform},
\begin{align}
\rm \mathcal{NN}:~ \{u\}\rightarrow \textrm{hidden\ field},
\end{align}
where $\{u\}$ indicates an ensemble of solutions, \textit{e.g.} solutions obtained with different boundary conditions or with different hidden fields. Note that in other competing methods such as PINNs the input is required to be the coordinates in order for automatic differentiation to compute spatial and temporal derivatives; as a consequence, PINNs can only be viewed as \textit{surrogates} for the solution of the differential problem defined on the space of coordinates. However, we emphasize that semantic autoencoders are capable of approximating the inverse transformation from the space of solutions to the space of hidden fields, a feature that we exploit in section \ref{sec::inverse}.
Essentially different numerical schemes can be implemented in the decoder stage. We will blend examples of both mesh-based and mesh-less numerical discretizations and present numerical results and comparisons with PINNs. We will show how BiPDEs can handle data on unstructured grids and data with added noise. In section \ref{sec::meshbased}, we demonstrate performance of mesh-based BiPDEs on inverse problems in two spatial dimensions by using a finite difference discretization and Zernike expansion of the non-homogeneous hidden field, we will consider both stationary and dynamic PDE problems in this section. Then in section \ref{sec::meshfree}, we develop a mesh-less BiPDE and consider a dynamic nonlinear inverse partial differential problem.
\section{Mesh-based BiPDE: Finite Differences}\label{sec::meshbased}
We consider a variable coefficient Poisson problem in one and two spatial dimensions as well as the one dimensional nonlinear Burger's equation as an example of a nonlinear dynamic PDE problem with a scalar unknown parameter.
\subsection{Stationary Poisson problem}
We consider the governing equation for diffusion dominated processes in heterogeneous media:
\begin{align}
&\nabla\cdot \big(D(\mathbf{x}) \nabla u\big)=-f(\mathbf{x}), &\mathbf{x}\in \Omega \label{eq::Poisson}\\
&u(\mathbf{x})=u_0(\mathbf{x}), &\mathbf{x}\in\partial \Omega
\end{align}
Here we consider a rectangular domain with Dirichlet boundary conditions.
\textbf{Discretization.} In our architecture, we use the standard 5-point stencil finite difference discretization of the Poisson equation in the solver layer, \textit{i.e.}
\begin{align*}
&\frac{D_{i-1/2, j} u_{i-1, j} - (D_{i-1/2,j} + D_{i+1/2,j})u_{i,j} + D_{i+1/2,j}u_{i+1,j}}{\Delta x^2}+\\
&\frac{D_{i, j-1/2} u_{i, j-1} - (D_{i,j-1/2} + D_{i,j+1/2})u_{i,j} + D_{i,j+1/2}u_{i,j+1}}{\Delta y^2} + f_{i,j}=0,
\end{align*}
and we use the linear algebra solver implemented in \texttt{TensorFlow} to solve for the solution field, \textit{i.e.} we used \texttt{tf.linalg.solve} method that is a dense linear system solver. Of course, this can be improved by implementing a sparse linear solver.
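For reference, the assembly of this five-point variable-coefficient system and the dense solve can be sketched in NumPy as follows (an illustrative stand-in for the \texttt{tf.linalg.solve}-based layer; homogeneous Dirichlet boundary data and arithmetic averaging for the face values $D_{i\pm 1/2,j}$ are assumed here):

```python
import numpy as np

def poisson_solve_2d(D, f, h):
    """Solve div(D grad u) = -f on a uniform (n x n) node grid with the
    standard 5-point stencil, arithmetic-mean face coefficients, and
    homogeneous Dirichlet boundary conditions; returns interior values."""
    n = D.shape[0]         # nodes per side, including boundary
    m = n - 2              # interior nodes per side
    A = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    def k(i, j):           # flatten interior index (i, j in 1..n-2)
        return (i - 1) * m + (j - 1)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            De = 0.5 * (D[i, j] + D[i + 1, j])   # D_{i+1/2, j}
            Dw = 0.5 * (D[i, j] + D[i - 1, j])   # D_{i-1/2, j}
            Dn = 0.5 * (D[i, j] + D[i, j + 1])   # D_{i, j+1/2}
            Ds = 0.5 * (D[i, j] + D[i, j - 1])   # D_{i, j-1/2}
            A[k(i, j), k(i, j)] = -(De + Dw + Dn + Ds) / h**2
            if i + 1 < n - 1: A[k(i, j), k(i + 1, j)] = De / h**2
            if i - 1 > 0:     A[k(i, j), k(i - 1, j)] = Dw / h**2
            if j + 1 < n - 1: A[k(i, j), k(i, j + 1)] = Dn / h**2
            if j - 1 > 0:     A[k(i, j), k(i, j - 1)] = Ds / h**2
            b[k(i, j)] = -f[i, j]
    return np.linalg.solve(A, b).reshape(m, m)
```

On the unit square with $D\equiv 1$ and $f = 2\pi^2\sin(\pi x)\sin(\pi y)$, this reproduces $u=\sin(\pi x)\sin(\pi y)$ to second-order accuracy.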
\textbf{Hidden Model.} We decompose the hidden field into a finite number of eigenfunctions and search for their optimal coefficients. This is also advantageous from a physics point of view, because domain knowledge of hidden fields can be naturally formulated in terms of basis functions within this framework. One such family of series expansions is the moment-based methods that have been largely exploited in image reconstruction \cite{khotanzad1990invariant, belkasim1991pattern, prokop1992survey, bailey1996orthogonal}. In particular, Zernike moments \cite{von1934beugungstheorie} provide a linearly independent set of polynomials defined on the unit circle/sphere in two/three spatial dimensions. Zernike moments are well-suited for such a task and are commonly used for representing optical aberration in astronomy and atmospheric sciences \cite{ragazzoni2000adaptive}, for image reconstruction and for enhanced ultrasound focusing in biomedical imaging \cite{dong2008zernike,markelj2012review,kaye2012application}.
Zernike moments are advantageous over regular moments in that they intrinsically provide rotational invariance, higher accuracy for irregular patterns, and are orthogonal, which reduces information redundancy in the different coefficients. Zernike polynomials capture deviations from zero mean as a function of radius and azimuthal angle. Furthermore, the complete set of orthogonal bases provided by Zernike moments can be obtained with lower computational precision from input data, which enhances the robustness of the reconstruction procedure.
Odd and even Zernike polynomials are given as a function of the azimuthal angle $\theta$ and the radial distance $\rho$ between $0$ and $1$ measured from the center of image,
\begin{align*}
&\begin{bmatrix} Z_{nm}^o(\rho, \theta) \\ Z_{nm}^e(\rho, \theta) \end{bmatrix}=R_{nm}(\rho) \begin{bmatrix} \sin(m\theta)\\ \cos(m\theta)\end{bmatrix},
\end{align*}
with
\begin{align*}
R_{nm}(\rho)&=\begin{cases}
\sum_{l=0}^{(n-\vert m\vert )/2}\frac{(-1)^l (n-l)!}{l![(n+\vert m\vert )/2 -l]! [(n-\vert m\vert )/2 -l]!}\rho^{n-2l} & \textrm{for $n-m$ even,} \\
0 & \textrm{for $n-m$ odd,}
\end{cases}
\end{align*}
where $n$ and $m$ are integers with $n\ge \vert m\vert$. A list of radial components is given in table \ref{tab::I} (from \cite{weisstein2002zernike}). For an extensive list of Zernike polynomials in both two and three spatial dimensions, we refer the interested reader to \cite{mathar2008zernike}.
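The radial polynomials can be evaluated directly from the factorial formula above; a minimal sketch, which reproduces e.g. $R_{20}=2\rho^2-1$ and $R_{31}=3\rho^3-2\rho$ from table \ref{tab::I}:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_nm(rho), for integers n >= |m| >= 0."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return 0.0  # R_nm vanishes when n - m is odd
    return sum(
        (-1) ** l * factorial(n - l)
        / (factorial(l) * factorial((n + m) // 2 - l) * factorial((n - m) // 2 - l))
        * rho ** (n - 2 * l)
        for l in range((n - m) // 2 + 1)
    )
```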
\begin{table}
\centering
\resizebox{1 \textwidth}{!}{
\begin{tabular}{SSSSSS} \toprule
{$\textbf{n}$} & {$ \vert \textbf{m}\vert$} & {$\textbf{R}_{nm}$} & {$\textbf{Z}_{nm}^o$} & {$\textbf{Z}_{nm}^e$} & {$\rm \textbf{Aberration/Pattern}$}\\ \midrule \midrule
0 & 0 & 1 & {$0$} & 1 & {$\rm Piston$} \\ \midrule
1 & 1 & {$\rho$} & {$\rho\sin(\theta)$} & {$\rho\cos(\theta)$} & {$\rm Tilt $}\\ \midrule
2 & 0 & {$2\rho^2 - 1$} & {$0$} & {$2\rho^2 - 1$} & {$\rm Defocus $}\\
& 2 & {$\rho^2$} & {$\rho^2\sin(2\theta)$} & {$\rho^2\cos(2\theta)$} & {$\rm Oblique/Vertical\ Astigmatism $} \\ \midrule
3 & 1 & {$3\rho^3 - 2\rho$} & {$(3\rho^3 - 2\rho)\sin(\theta)$} & {$(3\rho^3 - 2\rho)\cos(\theta)$} & {$\rm Vertical/Horizontal\ Coma $}\\
& 3 & {$\rho^3$} & {$\rho^3\sin(3\theta)$} & {$\rho^3\cos(3\theta)$} & {$\rm Vertical/Oblique\ Trefoil $}\\ \midrule
4 & 0 & {$6\rho^4 - 6\rho^2 + 1 $} & {$0$} & {$6\rho^4 - 6\rho^2 + 1 $} & {$\rm Primary\ Spherical $} \\
& 2 & {$4\rho^4 - 3\rho^2$} & {$(4\rho^4 - 3\rho^2)\sin(2\theta)$} & {$(4\rho^4 - 3\rho^2)\cos(2\theta)$} & {$\rm Oblique/Vertical\ Secondary\ Astigmatism $} \\
& 4 & {$\rho^4$} & {$\rho^4\sin(4\theta)$} & {$\rho^4\cos(4\theta)$} & {$\rm Oblique/Vertical\ Quadrafoil $}\\ \bottomrule
\end{tabular}
}
\caption{First $15$ odd and even Zernike polynomials according to Noll's nomenclature. Here, the ordering is determined by ordering polynomial with lower radial order first, cf. \cite{wyant1992basic}.\label{tab::I}}
\end{table}
Furthermore, each Zernike moment is defined by projection of the hidden field $f(x,y)$ on the orthogonal basis,
\begin{align*}
& \begin{bmatrix} A_{nm}\\ B_{nm}\end{bmatrix}=\frac{n+1}{\epsilon_{mn}^2\pi}\int_x \int_y f(x,y) \begin{bmatrix} Z_{nm}^o(x,y) \\ Z_{nm}^e(x,y) \end{bmatrix} dx dy, \quad x^2 + y^2\le 1,
\end{align*}
where for $m=0,~ n\neq 0$ we defined $\epsilon_{0n}=1/\sqrt{2}$ and $\epsilon_{mn}=1$ otherwise. Finally, superposition of these moments expands the hidden field in terms of Zernike moments:
\begin{align}
\hat{f}(x,y)=\sum_{n=0}^{N_{max}}\sum_{\vert m\vert=0}^{n}\big[ A_{nm}Z_{nm}^o (r,\theta) + B_{nm}Z_{nm}^e (r,\theta) \big]. \label{eq::ZExp}
\end{align}
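For concreteness, the low-order terms of Table \ref{tab::I} can be evaluated and superposed as in \eqref{eq::ZExp} in a few lines of NumPy (an illustrative sketch of ours, separate from the network itself):

```python
import numpy as np

# Radial components R_nm from Table I (a low-order subset, for illustration)
RADIAL = {
    (0, 0): lambda r: np.ones_like(r),
    (1, 1): lambda r: r,
    (2, 0): lambda r: 2 * r**2 - 1,
    (2, 2): lambda r: r**2,
    (3, 1): lambda r: 3 * r**3 - 2 * r,
    (4, 0): lambda r: 6 * r**4 - 6 * r**2 + 1,
}

def zernike_superpose(moments, x, y):
    """Evaluate f(x,y) = sum_nm [ A_nm R_nm(rho) sin(m theta)
    + B_nm R_nm(rho) cos(m theta) ] on the unit disk.
    `moments` maps (n, m) to the pair (A_nm, B_nm)."""
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    f = np.zeros_like(rho)
    for (n, m), (A, B) in moments.items():
        R = RADIAL[(n, m)](rho)
        f += A * R * np.sin(m * theta) + B * R * np.cos(m * theta)
    return f
```

Note that for $m=0$ the odd (sine) contribution vanishes automatically, consistent with the $Z^o_{n0}=0$ entries in the table.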
In order to identify the coefficients in the Zernike expansion \eqref{eq::ZExp} for hidden fields, we use a semantic autoencoder architecture with Zernike moments being represented by the code at the bottleneck of the autoencoder. Figure \ref{fig::autoarch} illustrates the architecture for the proposed semantic autoencoder.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/autoencoder_Architecture.png}
\caption{Architecture of the semantic autoencoder to infer hidden fields. Zernike moments are discovered at the bottleneck of the architecture.}
\label{fig::autoarch}
\end{figure}
\textbf{Architecture.} Although a shallow neural network with as few neurons as the number of considered Zernike terms suffices to estimate the unknown Zernike moments in each of the problems considered in this section, we use a deep convolutional neural network (detailed below) in order to achieve our ultimate goal of approximating the inverse transform for the Poisson problem over a broad range of diffusion coefficient fields. We therefore design one deep neural network and apply it uniformly to the several problems in this section.
During the training of a CNN, the kernels at each layer are learned such that several feature maps are extracted from the input data. The CNN is composed of 3 convolutional blocks with $32,~ 64,~ 128$ channels respectively and kernel size $3\times 3$. Moreover, we use the \texttt{MaxPooling} filter with kernel size $(2,2)$ after each convolutional block to downsample the feature maps by calculating the maximum value of each patch within these maps. We use the \texttt{ReLU} activation function \cite{hahnloser2000digital}, \textit{i.e.} a piecewise linear function that outputs only nonnegative values: ${\rm ReLU}(x)=\max(0,x)$, in the convolutional layers, followed by a \texttt{Sigmoid} activation in the dense layers and a scaled \texttt{Sigmoid} activation at the final layer,
\begin{align}
\tilde{\sigma}(x)&=D_{\min} + (D_{\max} - D_{\min})\sigma (x)\label{eq::activ},
\end{align}
such that the actual values of the diffusion coefficient are within the range $(D_{\min}, D_{\max})$, known from domain specific knowledge. After each dense layer, we apply \texttt{Dropout} layers with a rate of $0.2$ to prevent overfitting \cite{hinton2012improving,srivastava2014dropout} (a feature that is most useful in estimating the inverse transform operator) and avoid low quality local minima during training.
\subsubsection{Test cases.} \label{sec::tests}
\textbf{Case I. A tilted plane.}
In the first example we consider a linear diffusion model given by
\begin{align*}
&D(x,y)=\sqrt{2} + 0.1(y-x)
\end{align*}
where the boundary condition function $u_{BC}$ and the source field $f$ are given by
\begin{align*}
&u_{BC}(x,y)=0.01\cos(\pi x)\cos(\pi y) \qquad \textrm{and} \qquad f(x,y)=\sin(\pi x)\cos(\pi y)
\end{align*}
\begin{figure}
\centering
\subfigure[Comparison of learned (left) versus true diffusion coefficient (right).]{\includegraphics[width=\linewidth]{./figures/2D_results/Linear/01.png} \label{subfig::b}}\quad\quad
\subfigure[Learned solution.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/Linear/05.png} }\quad\quad
\subfigure[True solution.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/Linear/06.png} }\quad\quad
\subfigure[Error in learned solution $u-\hat{u}$.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/Linear/04.png} }\quad\quad
\subfigure[Error in learned diffusion coefficient.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/Linear/02.png} }\quad\quad
\caption{Results for the two dimensional tilted plane (case I).}
\label{fig:example2}
\end{figure}
In this experiment we use only a \emph{single solution field} for training. Even though in our experiments the method succeeded in approximating the hidden field even with a \emph{single grid point} to compute the loss function, here we consider all the grid points in the domain to obtain improved accuracy. We trained the network for $\rm 30$ epochs using the \texttt{Adam} optimizer \cite{kingma2014adam}, which takes $170$ seconds on a Tesla T4 GPU available on a free Google Colaboratory account\footnote{\href{https://colab.research.google.com/}{https://colab.research.google.com/}}. Figure \ref{fig:example2} depicts the results obtained by the proposed scheme. The diffusion map is discovered with a maximum relative error of only $2\%$, while the error in the solution field is $1\%$. Note that the accuracy of the results in this architecture is influenced by the accuracy of the discretizations used in the solver layer. While we used a second-order accurate finite difference discretization, it is possible to improve these results by using higher-order discretizations instead. We leave such optimizations to future work.
\begin{figure}
\centering
\includegraphics[width=0.55\linewidth]{./figures/2D_results/Linear/loss_case_I_section_3-1-1.png}
\caption{Mean absolute error in solution vs. epochs for the two dimensional tilted plane (case I).}
\label{fig:example2loss}
\end{figure}
\begin{table}[ht]
\centering
\resizebox{1 \textwidth}{!}{
\begin{tabular}{| c | c | c | c || c | c || c | c ||| c| c | c |c|}\toprule \hline
T & \# params & \texttt{C(32)} & \texttt{C(32)} & \texttt{C(64)} & \texttt{C(64)} & \texttt{C(128)} & \texttt{C(128)} & \texttt{D(64)} & \texttt{D(32)} & $\rm MAE_D$ & $\rm L^\infty_D$ \\ \hline\hline
1 & $1,468,323$ & Y & Y & Y & Y & Y & Y & Y & Y & $0.0144207$ & $0.0294252$\\
2 & $1,459,075$ & Y & - & Y & Y & Y & Y & Y & Y & $0.0193128$ & $0.0267854$\\
3 & $1,422,147$ & Y & - & Y & - & Y & Y & Y & Y & $0.0226252$ & $0.0527432$\\
4 & $1,274,563$ & Y & - & Y & - & Y & - & Y & Y & $0.0199361$ & $0.0272122$ \\
5 & $682,627$ & Y & - & Y & - & Y & - & - & Y & $0.0141946$ & $0.0243868$ \\
6 & $313,859$ & Y & - & Y & - & - & - & - & Y & $0.0301841$ & $0.0544990$ \\
7 & $46,467$ & Y & - & Y & - & - & - & - & - & $0.0190432$ & $0.0264254$ \\
8 & $6,915$ & - & - & - & - & - & - & - & - & $0.0183808$ & $0.0267156$ \\
\hline\bottomrule
\end{tabular}
}
\caption{Influence of architecture of the encoder stage on mean absolute error $\rm MAE_D \equiv \rm \sum \vert D(\mathbf{x}) - \hat{D}(\mathbf{x})\vert/N$ and maximum error $\rm L^\infty_D$ in the discovered hidden field in case I. Double vertical lines correspond to \texttt{MaxPooling2D()} layers and triple vertical lines correspond to \texttt{Flatten()} layer. \texttt{C(o)} and \texttt{D(o)} stand for \texttt{conv2D(filters)} and \texttt{Dense(neurons)} layers respectively. There are $3$ neurons at the bottleneck not shown in the table. }
\label{tab::arch_field}
\end{table}
\textbf{Influence of architecture.} Table \ref{tab::arch_field} tabulates the mean absolute error in the discovered tilted-plane diffusion coefficient for different architectures of the encoder stage. No significant improvement is observed for deeper or shallower encoder networks in the example considered here.
\textbf{Case II. Superimposed Zernike polynomials.}
We consider a more complicated hidden diffusion field given by
\begin{align*}
&D(x,y)= 4 + a_0 + 2a_1x + 2 a_2 y + \sqrt{3}a_3 (2 x^2 + 2y^2 - 1).
\end{align*}
The boundary condition function $u_{BC}$ and the source field $f$ are given by
\begin{align*}
&u_{BC}(x,y)=\cos(\pi x)\cos(\pi y)\qquad \textrm{and} \qquad f(x,y)=x+ y.
\end{align*}
Figure \ref{fig:example3} illustrates the performance of the proposed Zernike-based network using a mean absolute error measure for the loss function. We trained the network for $\rm 100$ epochs using an \texttt{Adam} optimizer \cite{kingma2014adam}.
\begin{figure}
\centering
\subfigure[Learned diffusion.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/07.png} }\quad\quad
\subfigure[True diffusion.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/08.png} }\quad\quad
\subfigure[Learned solution.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/05.png} }\quad\quad
\subfigure[True solution.]{ \includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/06.png} }\quad\quad
\subfigure[Error in learned solution $u-\hat{u}$.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/04.png} }\quad\quad
\subfigure[Error in learned diffusion coefficient.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/02.png} }\quad\quad
\caption{Results in the two dimensional parabolic case.}
\label{fig:example3}
\end{figure}
\textbf{Resilience to noise.} We also assess the performance of our scheme on noisy datasets. We consider zero-mean Gaussian noise with standard deviation $0.025$ superimposed on the solution field. Figure \ref{fig:performance} depicts the solution learned from a noisy input image. The network succeeds in discovering the diffusion field with accuracy comparable to the noise-free case. Note that this architecture naturally removes the added noise from the learned solution, a feature similar to applying a low-pass filter to noisy images.
\begin{figure}
\centering
\subfigure[Learned diffusion.]{ \includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/D_learned_300epochs.png} }\quad\quad
\subfigure[True diffusion.]{ \includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/D_true_300epochs.png} }\quad\quad
\subfigure[Learned solution.]{ \includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/Sol_learned_300epochs.png} }\quad\quad
\subfigure[Noisy input solution.]{ \includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/Sol_True_300epochs.png} }\quad\quad
\subfigure[Error in learned solution $u-\hat{u}$.]{\includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/Sol_Err_300epochs.png} }\quad\quad
\subfigure[Error in learned diffusion coefficient.]{\includegraphics[width=0.45\linewidth]{./figures/withNoiseTest2D/D_Err_300epochs.png} }\quad\quad
\caption{Results in the two dimensional case with added noise. After 300 epochs the network discovers the hidden diffusion field with a maximum relative error of $5\%$. Interestingly the learned solution is resilient to added noise and the network approximates a noise-free solution.}
\label{fig:performance}
\end{figure}
\begin{figure}
\centering
\subfigure[$\rm L_1$ loss vs. epoch for case II without added noise.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/loss_case_II_section_3-1-1_noiseless.png} \label{subfig::losspara}}\quad\quad
\subfigure[$\rm L_2$ loss vs. epoch for case II with added noise.]{\includegraphics[width=0.45\linewidth]{./figures/2D_results/2D_Zernike_1/loss_case_II_section_3-1-1_noise.png} \label{subfig::lossparaNose}}\quad\quad
\subfigure[$\rm L_2$ loss vs. epoch for 1D inverse transform.]{\includegraphics[width=0.45\linewidth]{./figures/inverse_transform/MSE_1D.png} \label{subfig::lossinv1D}}\quad\quad
\subfigure[$\rm L_2$ loss vs. epoch for 2D inverse transform.]{\includegraphics[width=0.45\linewidth]{./figures/inverse_transform/MSE_2D.png} \label{subfig::lossinv2D}}\quad\quad
\caption{Mean absolute/square error vs. epochs for (top panel) the two dimensional parabolic experiment (case II) with and without added Gaussian noise of section \ref{sec::tests}, and (bottom panel) the inverse transform for 1D and 2D experiments of section \ref{sec::inverse}.}
\label{fig:example3loss}
\end{figure}
\subsubsection{Learning the inverse transform}\label{sec::inverse}
In the previous sections, we applied BiPDE to find the variable diffusion coefficient from a single input image. Another interesting feature of the proposed semantic autoencoder architecture is its ability to train neural networks to discover the inverse transform for the underlying hidden fields \textit{in a self-supervised fashion}. In this scenario, the trained encoder learns the inverse transform function that approximates the hidden parameters given a solution field as its input. Note that even though the same task could be accomplished by supervised learning of the hidden fields, \emph{i.e.} by explicitly defining the loss on the hidden fields without considering the governing equations, BiPDEs substitute the data labels with a governing PDE and offer comparable prediction accuracy. In this section we train BiPDEs over ensembles of solution fields to estimate the hidden Zernike moments of diffusion coefficients underlying unseen data.
\textbf{One dimensional inverse transform}
\begin{figure}
\centering
\subfigure[Regression quality is $\rm R^2=0.9906891$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/1D_a0_without_noise_gen.png} }
\subfigure[Regression quality is $\rm R^2=0.9953392$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/1D_a1_without_noise_gen.png} }
\subfigure[Regression quality is $\rm R^2=0.9796781$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/1D_a0_with_noise_gen.png} }
\subfigure[Regression quality is $\rm R^2=0.9834912$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/1D_a1_with_noise_gen.png} }
\caption{Top (bottom) panel shows the performance of BiPDE over $1000$ randomly chosen one-dimensional images with $\rm N_x=160$ grid points after $1000$ epochs, without (with) zero-mean Gaussian noise of standard deviation $0.025$ added to the test sample. The hidden diffusion coefficient is $D(x)=1 + a_0 + a_1 x$. In each case the $\rm R^2$ coefficient is reported for the blue data points, where the unknown parameters fall within the training range $[0.25,0.75]$. Red data points show predictions outside the training range. The network has $20,222$ trainable parameters, and training takes $\sim 2$ seconds per epoch on a Tesla T4 GPU available on a free Google Colaboratory account.}
\label{fig:1d_inverse}
\end{figure}
We build a one dimensional semantic autoencoder using 3 layers with $100,~ 40$, and $2$ neurons respectively. We used the \texttt{ReLU} activation function for the first two layers and a \texttt{Sigmoid} activation function for the last layer representing the hidden parameters. A linear solver is then stacked with this encoder; the solver uses the second-order accurate finite difference discretization, \textit{i.e.}
\begin{align*}
&\frac{D_{i-1/2} u_{i-1} - (D_{i-1/2} + D_{i+1/2})u_{i} + D_{i+1/2}u_{i+1} }{\Delta x^2} + f_{i}=0, & D_{i+1/2}=\frac{D_i + D_{i+1}}{2}
\end{align*}
Note that the diffusion map is internally reconstructed from the hidden parameters before the output of the encoder is fed to the solver. As a test problem, we consider the one-dimensional Poisson problem with a generic linear form for the diffusion coefficient,
\begin{align*}
D(x)=1 + a_0 + a_1 x.
\end{align*}
We consider identical left and right Dirichlet boundary conditions of 0.2 for all images and let the source term be $f(x)=\sin(\pi x)$. We consider random diffusion coefficients $a_0$ and $a_1$ with a uniform distribution in $[0.25, 0.75]$ and we generate $1000$ solutions over the domain $x\in [-1,1]$. We train BiPDE over $900$ images from this dataset and validate its performance over the remaining $100$ images using a mean squared error loss function for $1000$ epochs. Each image is generated on a uniform grid with $\rm N_x=160$ grid points. We used a batch size of $100$ in these experiments using the \texttt{Adam} optimizer. Figure \ref{subfig::lossinv1D} shows loss versus epochs in this experiment. Figure \ref{fig:1d_inverse} compares learned and true coefficients over two independent test samples containing $1000$ solutions, with and without a zero-mean Gaussian noise with standard deviation $0.025$, \emph{i.e.} amounting to $\sim 13\%$ added noise over the images.
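The forward solves that generate these training images follow directly from the discretization above; a simplified dense-matrix sketch (our own illustration; a banded solver would be used in practice) is:

```python
import numpy as np

def solve_poisson_1d(a0, a1, n=160, u_bc=0.2):
    """Generate one training image: solve (D u')' + f = 0 on [-1, 1]
    with D(x) = 1 + a0 + a1*x, f(x) = sin(pi*x), and Dirichlet boundary
    values u = u_bc, using the second-order scheme quoted in the text."""
    x = np.linspace(-1.0, 1.0, n)
    h = x[1] - x[0]
    D = 1.0 + a0 + a1 * x
    Dh = 0.5 * (D[:-1] + D[1:])      # face-centered values D_{i+1/2}
    f = np.sin(np.pi * x)
    A = np.zeros((n, n))
    b = -f * h**2                    # interior right-hand side
    A[0, 0] = A[-1, -1] = 1.0        # Dirichlet rows
    b[0] = b[-1] = u_bc
    for i in range(1, n - 1):
        A[i, i - 1] = Dh[i - 1]
        A[i, i] = -(Dh[i - 1] + Dh[i])
        A[i, i + 1] = Dh[i]
    return x, np.linalg.solve(A, b)
```

Sampling $a_0, a_1 \sim \mathcal{U}[0.25, 0.75]$ and calling this routine repeatedly produces the ensemble of solution fields used for training.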
In figure \ref{fig:1d_inverse}, we expanded the range of unknown parameters $a_0,a_1\in [0.15, 0.85]$ in our test sample to assess performance of trained encoder over unseen data that are outside the range of training data (as a measure of generalizability of BiPDEs). In this figure blue points correspond to new images whose true unknown parameters fall inside the training range, and red data points correspond to those outside the training range. We observe that the encoder is able to predict the unknown parameters even outside of its training range, although its accuracy gradually diminishes far away from the training range. Note that the predicted values for $a_0$ and $a_1$ exhibit a systematic error towards the lower and upper bounds of the \texttt{Sigmoid} activation function, indicative of the influence of the \texttt{Sigmoid} activation function used in the last layer. This emphasizes the significance of properly designing activation functions at the bottleneck.
Using the $\rm R^2$ statistical coefficient as a measure of accuracy for the trained encoder, we assess effects of sample size and grid points on the performance of BiPDEs and report the results in table \ref{tab::tab::R2coeff1D}.
\begin{enumerate}
\item \textit{Effect of sample size:} First, we fix the number of grid points and vary the sample size. We find that increasing the sample size improves the accuracy of the predictions in the case of clean data; in the case of noisy data, however, the accuracy does not improve significantly with larger samples.
\item \textit{Effect of grid points:} Second, we fix the sample size and gradually increase the number of grid points. We find that the accuracy of predictions on noisy data is strongly correlated with the number of grid points; this dependence is weaker for clean data.
\end{enumerate}
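For reference, the $\rm R^2$ measure used throughout the tables is the standard coefficient of determination:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 between true and predicted
    Zernike coefficients: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```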
\begin{table}
\centering
\begin{tabular}{| c | c | c | c | c |}\toprule \hline
1D inverse transform & \multicolumn{2}{c|}{Noiseless} &\multicolumn{2}{c|}{ Noisy ($13\%$ relative noise)}\\ \hline
Sample Size, $\rm N_x=100$ & $a_0$ & $a_1$ & $a_0$ & $a_1$\\ \hline
$\rm N_{data}=250$ & $0.9953634$ & $0.9977753$ & $0.9609166$ & $0.9570264$\\
$\rm N_{data}=500$ & $0.9979478$ & $0.9988417$ & $0.9644154$ & $0.9640230$\\
$\rm N_{data}=1000$ & $0.9990417$ & $0.9992921$ & $0.9600430$ & $0.9586783$\\
$\rm N_{data}=2000$ & $0.9995410$ & $0.9997107$ & $0.9599427$ & $0.9652383$\\
$\rm N_{data}=4000$ & $0.9994279$ & $0.9994974$ & $0.9603496$ & $0.9661519$\\
$\rm N_{data}=8000$ & $0.9998054$ & $0.9998115$ & $0.9614795$ & $0.9619859$\\ \hline
Grid Points, $\rm N_{data}=1000$ & $a_0$ & $a_1$ & $a_0$ & $a_1$\\ \hline
$\rm N_{x}=20$ & $0.9900532$ & $0.9987560$ & $0.8623348$ & $0.8822680$\\
$\rm N_{x}=40$ & $0.9932568$ & $0.9975166$ & $0.9161125$ & $0.9081806$\\
$\rm N_{x}=80$ & $0.9986574$ & $0.9993274$ & $0.9509870$ & $0.9511483$\\
$\rm N_{x}=160$ & $0.9991550$ & $0.9990234$ & $0.9747287$ & $0.9762977$\\
$\rm N_{x}=320$ & $0.9985649$ & $0.9987451$ & $0.9861375$ & $0.9860783$\\
$\rm N_{x}=640$ & $0.9991842$ & $0.9991606$ & $0.9920950$ & $0.9922520$\\
\hline\bottomrule
\end{tabular}
\caption{$\rm R^2 $ coefficient for predicted Zernike coefficients of the one dimensional Poisson problem at different training sample size and number of grid points.}
\label{tab::tab::R2coeff1D}
\end{table}
\textbf{Two dimensional inverse transform}
We consider an example of variable diffusion coefficients parameterized as $D(x,y)=4 + 2a_2 y + \sqrt{3}a_3(2x^2 + 2y^2 - 1)$, with unknown coefficients randomly chosen in the range $a_2,a_3\in [0.25,0.75]$. The equations are solved on a square domain $\Omega = [-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}]^2$ governed by the Poisson equation:
\begin{align*}
\nabla\cdot \big([4 + 2a_2 y + \sqrt{3}a_3(2x^2 + 2y^2 - 1)]\nabla u\big) + x + y=0, &\qquad (x,y)\in \Omega, \\
u_{BC}=\cos(\pi x) \cos(\pi y), &\qquad (x,y)\in\partial\Omega.
\end{align*}
The encoder is composed of two convolutional layers with $32$ and $64$ channels followed by a $2\times 2$ average pooling layer and a dense layer with $128$ neurons, at the bottleneck there are $2$ neurons representing the two unknowns. All activation functions are \texttt{ReLU} except at the bottleneck that has \texttt{Sigmoid} functions. An \texttt{Adam} optimizer is used on a mean squared error loss function.
We trained BiPDE over $900$ generated solutions with randomly chosen parameters $a_2,a_3$ and validated its performance on $100$ independent solution fields for $300$ epochs; the evolution of the loss function is shown in figure \ref{subfig::lossinv2D}. Then we tested the trained model over another set of $1000$ images with randomly generated diffusion maps independent of the training dataset. Furthermore, we repeated this exercise over $1000$ images with added zero-mean Gaussian noise with standard deviation $0.025$. In each case, the learned parameters are in good agreement with the true values, as illustrated in figure \ref{fig:2d_inverse}. Moreover, we performed a sensitivity analysis on the accuracy of the predictions with respect to sample size, measuring the quality of fit by the $\rm R^2$ statistical coefficient. Results are tabulated in table \ref{tab::tab::Two_Unknowns_inverseTran2D} and indicate that training over more samples leads to more accurate predictions on clean data, while noisy data do not show a strong improvement with increasing sample size.
\begin{figure}
\centering
\subfigure[Regression quality is $\rm R^2=0.9915683$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/2D_quad_a2_without_noise.png} } \quad\quad
\subfigure[Regression quality is $\rm R^2=0.9986852$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/2D_quad_a3_without_noise.png} }\quad\quad
\subfigure[Regression quality is $\rm R^2=0.9896654$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/2D_quad_a2_with_noise.png} } \quad\quad
\subfigure[Regression quality is $\rm R^2=0.9915149$.]{ \includegraphics[width=0.45\linewidth]{./figures/inverse_transform/2D_quad_a3_with_noise.png} }\quad\quad
\caption{Top row shows performance of BiPDE over $1000$ randomly chosen clean 2D images after 1000 epochs, and the bottom panel shows performance of the same network on noisy images given a zero-mean Gaussian noise with standard deviation $0.025$. Network has $1,852,000$ trainable parameters and training takes $\sim 11$ seconds on a Tesla T4 GPU available on a free Google Colaboratory account.}
\label{fig:2d_inverse}
\end{figure}
\begin{table}
\centering
\begin{tabular}{| c | c | c | c | c |}\toprule \hline
2D inverse transform & \multicolumn{2}{c|}{$\rm Noiseless$} &\multicolumn{2}{c|}{ Noisy ($13\%$ relative noise)}\\ \hline
Sample Size & $a_2$ & $a_3$ & $a_2$ & $a_3$\\ \hline
$\rm N_{data}=250$ & $0.9897018$ & $0.9958963$ & $0.9872887$ & $0.9892064$\\
$\rm N_{data}=500$ & $0.9917211$ & $0.9977917$ & $0.9910183$ & $0.9900091$\\
$\rm N_{data}=1000$ & $0.9915683$ & $0.9986852$ & $0.9896654$ & $0.9915149$\\
$\rm N_{data}=2000$ & $0.9940470$ & $0.9993891$ & $0.9909640$ & $0.9883151$\\
$\rm N_{data}=4000$ & $0.9938268$ & $0.9997119$ & $0.9919061$ & $0.9898697$\\
\hline\bottomrule
\end{tabular}
\caption{$\rm R^2 $ coefficient for predicted Zernike coefficients of the two dimensional Poisson problem by increasing training sample size. Number of grid points are fixed at $\rm 30\times 30$.}
\label{tab::tab::Two_Unknowns_inverseTran2D}
\end{table}
\subsection{Dynamic Burgers' problem}\label{sec::meshless}
In this section, we demonstrate the applicability of BiPDEs on time-dependent nonlinear partial differential equations, and we use those results to illustrate the consistency and accuracy of the proposed framework. Similar to previous works \cite{Raissi2017PhysicsID}, we consider the nonlinear Burgers' equation in one spatial dimension,
\begin{align}
&\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2} &x\in [-1,1],~~ t\in [0,1) \label{eq::Burgers}\\
&u(-1,t)=u(1,t)=0 &u(x,0)=-\sin(\pi x)
\end{align}
where $\nu=1/Re$ with $Re$ being the Reynolds number. Burgers' equation is of great practical significance for understanding nonlinear evolution equations: it has been used as a model equation for the Navier-Stokes equations and by itself can describe shallow water waves \cite{debnath2011nonlinear}, turbulence \cite{burgers1948mathematical}, traffic flow \cite{nagatani2000density}, and more.
\begin{itemize}
\item \textbf{Discretization.}
In our design we adopted the $6^{\rm th}$-order compact finite difference scheme proposed by Sari and Gurarslan (2009) \cite{sari2009sixth} for its simplicity of implementation and its high accuracy, and because it leads to a linear system with a narrow band, which ensures computational efficiency. This scheme combines a tridiagonal\footnote{Tridiagonal systems of equations can be solved in $\mathcal{O}(N)$ operations.} sixth-order compact finite difference scheme (CFD6) in space with a low-storage third-order accurate total variation diminishing Runge-Kutta scheme (TVD-RK3) \cite{shu1988efficient} for the time evolution. In particular, the high-order accuracy associated with this discretization provides highly accurate results on coarse grids. This is an important aspect of BiPDEs as data-efficient inverse solvers, which stems from their capacity to seamlessly blend highly accurate and sophisticated discretization methods with deep neural networks.
The first-order spatial derivatives are given at the interior points by
\begin{align}
&\alpha u'_{i-1} + u'_i + \alpha u'_{i+1} = b \frac{u_{i+2} - u_{i-2}}{4h} + a\frac{u_{i+1} - u_{i-1}}{2h}, &i=3,\cdots, N-2
\end{align}
where
\begin{align}
&a=\frac{2}{3}(\alpha+2), &b=\frac{1}{3}(4\alpha -1),
\end{align}
and $h=x_{i+1} - x_i$ is the mesh size, with grid points identified by the index $i=1,2,\cdots, N$. For $\alpha=1/3$ we obtain a sixth order accurate tridiagonal scheme. The boundary points (for non-periodic boundaries) are treated by using the formulas \cite{gaitonde1998high,sari2009sixth},
\begin{align*}
u'_1 + 5 u'_{2} &= \frac{1}{h} \bigg[ -\frac{197}{60}u_1 - \frac{5}{12}u_2 + 5u_3 - \frac{5}{3} u_4 + \frac{5}{12} u_5 -\frac{1}{20}u_6\bigg]\\
\frac{2}{11}u'_1 + u'_2 + \frac{2}{11}u'_3 &= \frac{1}{h}\bigg[ -\frac{20}{33} u_1 -\frac{35}{132}u_2 + \frac{34}{33}u_3 - \frac{7}{33}u_4 + \frac{2}{33}u_5 -\frac{1}{132}u_6\bigg] \\
\frac{2}{11}u'_{N-2} + u'_{N-1} + \frac{2}{11}u'_{N}&=\frac{1}{h}\bigg[ \frac{20}{33}u_N + \frac{35}{132}u_{N-1} - \frac{34}{33}u_{N-2} + \frac{7}{33}u_{N-3} - \frac{2}{33}u_{N-4} + \frac{1}{132}u_{N-5} \bigg]\\
5u'_{N-1} + u'_{N}&=\frac{1}{h}\bigg[ \frac{197}{60}u_N + \frac{5}{12} u_{N-1} - 5 u_{N-2} + \frac{5}{3}u_{N-3} -\frac{5}{12}u_{N-4} + \frac{1}{20} u_{N-5} \bigg]
\end{align*}
This can be easily cast in the matrix form
\begin{align}
BU'=AU
\end{align}
where $U=[u_1, u_2, \cdots, u_N]^T$ is the vector of solution values at the grid points. Furthermore, second-order derivatives are computed by applying the first-order derivative operator twice\footnote{From an implementation point of view this is a very useful feature of this scheme: because $A$ and $B$ are constant matrices that do not change during training, it is possible to pre-compute them using \texttt{numpy}'s \cite{numpy} basic data structures and then simply import the derivative operators into \texttt{TensorFlow}'s custom solver layer using the \texttt{tf.convert\_to\_tensor()} command. },
\begin{align}
BU'' = AU'
\end{align}
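As an illustrative sanity check of this first-derivative operator, the interior stencil can be assembled with a periodic closure instead of the boundary formulas above (a simplification of ours, for brevity):

```python
import numpy as np

def cfd6_derivative_periodic(u, h, alpha=1.0 / 3.0):
    """First derivative by the interior CFD6 stencil with periodic wrap:
    alpha*u'_{i-1} + u'_i + alpha*u'_{i+1}
        = b*(u_{i+2}-u_{i-2})/(4h) + a*(u_{i+1}-u_{i-1})/(2h),
    with a = (2/3)(alpha+2) and b = (1/3)(4*alpha-1)."""
    n = len(u)
    a = 2.0 * (alpha + 2.0) / 3.0
    b = (4.0 * alpha - 1.0) / 3.0
    # Tridiagonal left-hand-side matrix B with periodic corners
    B = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
    B[0, -1] = B[-1, 0] = alpha
    # Right-hand side A @ u, written with periodic shifts
    rhs = (b * (np.roll(u, -2) - np.roll(u, 2)) / (4 * h)
           + a * (np.roll(u, -1) - np.roll(u, 1)) / (2 * h))
    return np.linalg.solve(B, rhs)
```

For $\alpha=1/3$ this reproduces the sixth-order accuracy quoted in the text, as can be verified on a smooth periodic function.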
Burgers' equation is thus discretized as:
\begin{align}
&\frac{dU}{dt}=\mathcal{L}U, &\mathcal{L}U=\nu ~ U'' - U\tens U',
\end{align}
where $\tens$ is the element-wise multiplication operator and $\mathcal{L}$ is a \textit{nonlinear} operator. We use a low storage TVD-RK3 method to update the solution field from time-step $k$ to $k+1$,
\begin{align}
U^{(1)}&=U_k + \Delta t\mathcal{L}U_k\\
U^{(2)}&=\frac{3}{4}U_k + \frac{1}{4}U^{(1)} + \frac{1}{4}\Delta t \mathcal{L}U^{(1)}\\
U_{k+1}&=\frac{1}{3} U_k + \frac{2}{3}U^{(2)} + \frac{2}{3}\Delta t\mathcal{L}U^{(2)}
\end{align}
with a CFL coefficient of $1$. Note that this method only requires two storage units per grid point, which is useful for large scale scientific simulations in higher dimensions.
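This three-stage update can be written generically for any right-hand-side operator $\mathcal{L}$; a minimal sketch:

```python
import numpy as np

def tvd_rk3_step(L, U, dt):
    """One low-storage TVD-RK3 update U_k -> U_{k+1} for dU/dt = L(U),
    following the three stages quoted in the text."""
    U1 = U + dt * L(U)
    U2 = 0.75 * U + 0.25 * U1 + 0.25 * dt * L(U1)
    return U / 3.0 + (2.0 / 3.0) * U2 + (2.0 / 3.0) * dt * L(U2)
```

For Burgers' equation, $\mathcal{L}U$ is the nonlinear operator $\nu U'' - U\otimes U'$ built from the compact derivative matrices above.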
\item \textbf{Training protocol.} For training, we first solve Burgers' equation for $M$ time-steps, then we construct two shifted solution matrices that are separated by a single time-step, \emph{i.e.},
\begin{align}
&\mathcal{U}^{-1}=\begin{bmatrix}
U^1 & U^2 & \cdots & U^{M-1}
\end{bmatrix} &\mathcal{U}^{+1}=\begin{bmatrix}
U^{2} & U^{3} & \cdots & U^M
\end{bmatrix}
\end{align}
Basically, one step of TVD-RK3 maps a column of $\mathcal{U}^{-1}$ to its corresponding column in $\mathcal{U}^{+1}$ given an accurate prediction for the hidden parameter. Hence, a semantic BiPDE is trained with $\mathcal{U}^{-1}$ and $\mathcal{U}^{+1}$ as its input and output respectively. The unknown diffusion coefficient is discovered by the code at the bottleneck of the architecture.
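Constructing the shifted matrices from a stored solution history amounts to a pair of column slices, e.g.:

```python
import numpy as np

def make_training_pairs(U_hist):
    """Split a solution history of shape (N, M) -- M time steps of an
    N-point grid -- into input/target matrices shifted by one step."""
    U_minus = U_hist[:, :-1]   # columns U^1 ... U^{M-1}
    U_plus = U_hist[:, 1:]     # columns U^2 ... U^M
    return U_minus, U_plus
```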
\item \textbf{Numerical setup.} To enable direct comparison with PINNs, we also consider a second parameter $\gamma$ in Burgers' equation. In these experiments we train for the two unknown parameters $(\nu, \gamma)$ in Burgers' equation given by
\begin{align*}
&u_t + \gamma u u_x - \nu u_{xx}=0, & t\in [0,1], ~ ~~ x\in [-1,1]
\end{align*}
Similar to Raissi \emph{et al.} \cite{Raissi2017PhysicsID}, we consider $\nu=0.01/\pi$ and $\gamma=1.0$. For completeness we also recall the loss function used in PINN, which encodes Burgers' equation as a regularization term,
\begin{align*}
MSE=\frac{1}{N}\sum_{i=1}^N \bigg\vert u(t^i_u, x_u^i) - u^i \bigg\vert^2 +\frac{1}{N}\sum_{i=1}^N \bigg\vert u_t(t_u^i, x_u^i) + \gamma u(t_u^i, x_u^i) u_x(t_u^i, x_u^i) - \nu u_{xx}(t_u^i, x_u^i) \bigg\vert^2
\end{align*}
where $(t^i_u, x^i_u, u^i)$ constitute training data with $N=2000$ observation points in the spatio-temporal domain. In this experiment PINN is composed of $9$ layers with $20$ neurons per hidden layer. It is worth mentioning that we are reporting BiPDE results by considering solutions in a narrower time span $t\in [0,0.2]$.
\item \textbf{Architecture.} One can of course choose a single neuron to represent an unknown parameter $\nu$ or $\gamma$, and an approximate value can be achieved in a few iterations. However, our goal is to train a general-purpose encoder that is capable of predicting the unknown value from an input solution pair with arbitrary values of $\nu$ and $\gamma$, without training on new observations (\emph{cf.} part \ref{subsec::invD}). Therefore, we consider a BiPDE composed of a \texttt{conv1D} layer with $128$ filters, a kernel size of $10$, and a \texttt{tanh} activation function, followed by an \texttt{AveragePooling1D} layer with a pool size of $2$; after flattening, the result is fed to two \texttt{Dense} layers with $20$ and $2$ neurons respectively, each applying a \texttt{Sigmoid} activation function. We used the \texttt{Adam} optimizer to minimize a mean absolute error loss function.
\item \textbf{Accuracy test.} First, we train for the two unknowns in Burgers' equation, namely $\nu$ and $\gamma$. We perform a sensitivity analysis for $200$ epochs with different numbers of spatial grid points, as well as different time-steps. In each case, we measure the error between the learned values of $\nu$ and $\gamma$ and their true values $\nu_{\rm true}=0.01/\pi$ and $\gamma_{\rm true}=1.0$. Convergence results of this experiment are tabulated in table \ref{tab::Two_Unknowns_NU} and shown in figure \ref{fig:sensitivity}. We find that increasing the number of grid points (\emph{i.e.} the resolution) improves the accuracy up to almost $700$ grid points, before the accuracy in $\nu$ (but not $\gamma$) starts to deteriorate. Note that decreasing the time-step size does not have a significant effect on accuracy unless a large number of grid points ($N_x>160$) is considered, in which case smaller time-steps clearly improve results.
\begin{figure}[!h]
\centering
\subfigure[Error in $\nu$ - BiPDE with finite difference method.]{ \includegraphics[width=0.45\linewidth]{./figures/TWO_UNKNOWNS_NU.png} }\quad\quad
\subfigure[Error in $\gamma$ - BiPDE with finite difference method.]{ \includegraphics[width=0.45\linewidth]{./figures/TWO_UNKNOWNS_GAMMA.png} }\quad\quad
\subfigure[Error in $\nu$ - PINN.]{ \includegraphics[width=0.45\linewidth]{./figures/maziar_ii_tab_1.png}\label{subfig:c} }\quad\quad
\subfigure[Error in $\gamma$ - PINN.]{ \includegraphics[width=0.45\linewidth]{./figures/maziar_ii_tab_1_gamma.png} \label{subfig:d}}\quad\quad
\caption{Sensitivity analysis in training both parameters $\gamma$ and $\nu$ with BiPDE (a,b); results from table 1 of Raissi \textit{et al.\ } (2017) \cite{Raissi2017PhysicsID} are shown for comparison (c,d) - note that only the solid red line, for which no noise is considered on the solution data, may be compared to the BiPDE results. True values are $\nu_{\rm true}=0.01/\pi$ and $\gamma_{\rm true}=1.0$. In figure (a), the data points at the right end of the $N_x$ axis correspond to $N_x=700$ grid points, where the accuracy in the discovered $\nu$ value deteriorates.}
\label{fig:sensitivity}
\end{figure}
\begin{table}
\centering
\begin{tabular}{| c | c | c | c| c | c| c| }\toprule \hline
$\rm \#~ epochs=200$ & \multicolumn{2}{c}{$\rm \Delta t=0.001$} & \multicolumn{2}{c}{$\rm \Delta t = 0.0005$} & \multicolumn{2}{c}{$\rm \Delta t = 0.00025$} \\ \hline
grid size & $\nu$ & $\gamma$ & $\nu$ & $\gamma$ & $\nu$ & $\gamma$ \\ \hline
$\rm N_x=20$ & $0.0028751$ & $0.9500087$ & $0.0028731$ & $0.9500685$ & $0.0028828$ & $0.9499334$ \\
$\rm N_x=40$ & $0.0030294$ & $0.9750050$ & $0.0030341$ & $0.9750042$ & $0.0030391$ & $0.9750047$\\
$\rm N_x=80$ & $0.0031067$ & $0.9875077$ & $0.0031101$ & $0.9875285$ & $0.0031167$ & $0.9875455$\\
$\rm N_x=160$ & $0.0031455$ & $0.9937580$ & $0.0031443$ & $0.9937674$ & $0.0031519$ & $0.9937985$\\
$\rm N_x=320$ & $0.0031659$ & $0.9968843$ & $0.0031679$ & $0.9968919$ & $0.0031738$ & $0.9969027$\\
$\rm N_x=640$ & $0.0031775$ & $0.9984500$ & $0.0031797$ & $0.9984597$ & $0.0031841$ & $0.9984711$ \\
$\rm N_x=700$ & $0.0031773$ & $0.9985866$ & $0.0031779$ & $0.9985945$ & $0.0031865$ & $0.9986123$\\
\hline\bottomrule
\end{tabular}
\caption{Discovering the two unknown values $\nu$ and $\gamma$ in Burgers' equation. These values are plotted in figure \ref{fig:sensitivity}.}
\label{tab::Two_Unknowns_NU}
\end{table}
For comparison purposes, we report numerical results from table 1 of Raissi \textit{et al.\ } (2017) \cite{Raissi2017PhysicsID} in figures \ref{subfig:c}--\ref{subfig:d}. Here we only present noise-free results of BiPDE; therefore only the $0\%$ added-noise case of PINN is comparable, \emph{i.e.} the solid red line in figures \ref{subfig:c}--\ref{subfig:d}. Even though the two test cases have significant differences and much care should be taken in directly comparing the two methods, BiPDEs have a clear advantage in that they exhibit \emph{convergence} in the unknown values, \emph{i.e.} more data means better results.
\begin{table}
\centering
\begin{tabular}{| c | c | c | c|}\toprule \hline
$\rm <\hat{\nu}>$ & $\rm \Delta t=0.001$ & $\rm \Delta t = 0.0005$ & $\rm \Delta t = 0.00025$ \\ \hline\hline
$\rm N_x=20$ & $0.0064739$ & $0.0065189$ & $0.0065514$ \\
$\rm N_x=40$ & $0.0048452$ & $0.0048200$ & $0.0047086$ \\
$\rm N_x=80$ & $0.0040260$ & $0.0040324$ & $0.0039963$ \\
$\rm N_x=160$ & $0.0036042$ & $0.0036011$ & $0.0036310$ \\
$\rm N_x=320$ & $0.0033958$ & $0.0034144$ & $0.0033827$ \\
$\rm N_x=640$ & $0.0032919$ & $0.0032895$ & $0.0032916$ \\
$\rm N_x=700$ & $0.0032829$ & $0.0032816$ & $0.0032906$ \\
\hline\bottomrule
\end{tabular}
\caption{Discovering one unknown parameter in Burgers' equation: discovered values of $\nu$.}
\label{tab::One_Unknowns_NU}
\end{table}
In a second experiment, we fix the value of $\gamma=1.0$ and only train for the unknown diffusion coefficient $\nu$. As in the previous test, we trained the network for $200$ epochs; figure \ref{fig:sensitivity2} shows the error in the discovered value of $\nu$ at different levels of resolution. In this case, decreasing the time-step size does not seem to have a significant effect on accuracy. A curious observation is that the discovered value of $\nu$ is two orders of magnitude more accurate when the network is trained for both parameters $\nu$ and $\gamma$ than when tuning only for $\nu$. Again, convergence in the unknown parameter is retained in this experiment.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{./figures/ONE_UNKNOWN_NU.png}
\caption{Sensitivity analysis in training only one parameter $\nu$. True value of $\nu_{\rm true}=0.01/\pi$ is sought in Burgers' equation at different levels of resolution. Rightmost data points correspond to $N_x=700$ grid points.}
\label{fig:sensitivity2}
\end{figure}
\end{itemize}
\section{Mesh-less BiPDE: Multiquadric Radial Basis Functions}\label{sec::meshfree}
Not only is the direct computation of partial derivatives from noisy data extremely challenging, but in many real-world applications measurements can only be made on scattered point clouds. Tikhonov-regularization-type approaches have been devised to avoid the difficulties arising from the high sensitivity of differencing operations to noisy data \cite{cullum1971numerical,chartrand2011numerical,stickel2010data}; for neural network based approaches, see \cite{maas2012recurrent,shen2017denoising}. Recently, Trask \textit{et al.\ } \cite{trask2019gmls} proposed an efficient framework for learning from unstructured data that is based on the Generalized Moving Least Squares (GMLS) technique. They showed the performance of GMLS-Nets in identifying differential operators and in regressing quantities of interest from unstructured data fields. Another interesting approach was put forth in the late 1980s \cite{broomhead1988radial,poggio1990networks}: neural networks based on Radial Basis Functions (RBFs) designed to perform functional interpolation tasks. In these networks, the activation function is defined as the radial basis function of interest, and the training aims to discover the weights of this network, which interestingly coincide with the coefficients in the corresponding radial basis expansion.
Since the early 1970s, RBFs have been used for highly accurate interpolation from scattered data. In particular, Hardy \cite{hardy1971multiquadric} introduced a special kind of RBF called the \textit{multiquadric} series expansion, which provides superior performance in terms of accuracy, stability, efficiency, simplicity and memory usage \cite{franke1982scattered}. Kansa (1990) \cite{kansa1990multiquadricsI,kansa1990multiquadricsII} pioneered the use of radial basis functions for solving time-dependent partial differential equations by deriving a modified multiquadric scheme. In 1998, Hon and Mao \cite{hon1998efficient} applied multiquadrics as a spatial discretization method to the nonlinear Burgers' equation and solved it for a wide range of Reynolds numbers (from 0.1 to 10,000). Their scheme was later enhanced to second-order accuracy in time by Xie and Li (2013) \cite{xie2013meshless}, who introduced a compact second-order accurate time discretization. Interestingly, the accuracy of these mesh-free methods can be improved by fine-tuning the distribution of collocation points or their \textit{shape parameters}. For example, Hon and Mao devised an adaptive point that chases the peak of the shock wave, which improved their results. Fortunately, such fine-tuning of parameters can be automated using BiPDE networks; we demonstrate this in this section.
\begin{itemize}
\item \textbf{Discretization.} We chose to blend in the second-order accurate method of Xie and Li, briefly described next; we leave further details to their original paper \cite{xie2013meshless}. One can represent a distribution $u(\mathbf{x})$ as a linear combination of radial basis functions,
\begin{align}
&u(\mathbf{x})\approx \sum_{j=0}^{N_s} \lambda_j \phi_j(\mathbf{x}) + \psi(\mathbf{x}), &\mathbf{x}\in\Omega \subset \mathcal{R}^{dim},\label{eq::RBF}
\end{align}
where $\phi(\mathbf{x})$ is the radial basis function that we adopt,
\begin{align}
&\phi_j(\mathbf{x})=\sqrt{r_j^2 + c_j^2}, &r_j^2 = \vert\vert \mathbf{x} - \mathbf{x}_j\vert\vert_2^2,
\end{align}
and $c_j$ is the \textit{shape parameter}, which has been experimentally shown to perform well when following $c_j = Mj + b$ with $j=0, 1, \cdots, N_s$, where $N_s$ is the number of seed points and $M$ and $b$ are tuning parameters. In equation \eqref{eq::RBF}, $\psi(\mathbf{x})$ is a polynomial that ensures solvability of the resulting system when $\phi_j$ is only conditionally positive definite. To solve PDEs, one only needs to represent the solution field with an appropriate form of equation \eqref{eq::RBF}. In the case of Burgers' equation, the solution at any time-step $n$ can be represented by
\begin{align}
&u^n(x)\approx \sum_{j=0}^{N_s} \lambda_j^n \phi_j(x) + \lambda_{N_s+1}^n x + \lambda_{N_s+2}^n \label{eq::RBFXL}
\end{align}
over a set of reference points for the basis functions that are given by $x_j=j/N_s$, $j=0,1,\cdots, N_s$. Xie and Li derived the following compact second-order accurate system of equations
\begin{align}
\big[1 + \frac{\Delta t}{2} u_x^n(\hat{x}_j) \big] u^{n+1}(\hat{x}_j) + \frac{\Delta t}{2} u^n(\hat{x}_j) u^{n+1}_x(\hat{x}_j) - \frac{\nu \Delta t}{2}u_{xx}^{n+1}(\hat{x}_j) = u^n(\hat{x}_j) + \frac{\nu \Delta t}{2}u_{xx}^n(\hat{x}_j) \label{eq::XieLi}
\end{align}
over a set of $N_d+1$ distinct collocation points $\hat{x}_j=(1+j)/(N_d+2)$ with $j=0,1,\cdots, N_d$. Two more equations are obtained from the left and right boundary conditions $u^{n+1}(x_{L}) = u^{n+1}(x_{R})=0$. Note that spatial derivatives are computed directly by applying the derivative operator to equation \eqref{eq::RBFXL}. At every time-step, one solves for the $N_s+3$ coefficients $\lambda^n_0,\cdots, \lambda^n_{N_s+2}$, while the spatial components of the equations remain intact (as long as the points are not moving). Time-stepping starts from the initial condition given by $u^0(\hat{x}_j)$.
For implementation purposes, we represent the system of equations \eqref{eq::XieLi} in a matrix notation that is suitable for tensorial operations in \texttt{TensorFlow}. To this end, we first write equation \eqref{eq::RBFXL} as
\begin{align}
U^n(\hat{x})= A(\hat{x}) \Lambda^n, \label{eq::linear}
\end{align}
where
\begin{align}
&U^n_{(N_d+1)\times 1} = \begin{bmatrix}
u^n(\hat{x}_0) \\ u^n(\hat{x}_1) \\ \vdots \\ u^n(\hat{x}_{N_d})
\end{bmatrix}, & \Lambda^n_{(N_s+1)\times 1} = \begin{bmatrix}
\lambda_0^n \\ \lambda_1^n \\ \vdots \\ \lambda_{N_s}^n
\end{bmatrix},
\end{align}
\begin{align}
&\bigg[A_{ij}(\hat{\mathbf{x}})\bigg]_{(N_d+1) \times (N_s+1)}=\bigg[\phi_{j}(\hat{x}_i) - \phi_{j}(x_L) - \frac{ \phi_{j}(x_R) - \phi_j(x_L)}{x_R -x_L}(\hat{x}_i - x_L) \bigg],
\end{align}
with $i=0,1,\cdots, N_d$ and $j=0,1,\cdots, N_s$. Note that we already injected the homogeneous boundary conditions into equation \eqref{eq::linear}. Therefore, equation \eqref{eq::XieLi} can be written as,
\begin{align}
\bigg[ A + (\mathbf{g}_x~\mathbf{1}^T)\tens A + (\mathbf{g}~ \mathbf{1}^T)\tens A_x - \frac{\nu \Delta t}{2}A_{xx}\bigg] \Lambda^{n+1} = \bigg[A + \frac{\nu \Delta t}{2}A_{xx}\bigg] \Lambda^n, \label{eq::RBFequat}
\end{align}
where $\mathbf{1}^T=[1,~ 1, ~\cdots, ~1]_{1\times(N_s+1)}$, $\tens$ is component-wise multiplication, and
\begin{align}
&\mathbf{g}= \frac{\Delta t}{2}A \Lambda^n, &\mathbf{g}_x= \frac{\Delta t}{2}A_x \Lambda^n, \\
&(A_x)_{ij}=\phi'_{j}(\hat{x}_i) - \frac{ \phi_{j}(x_R) - \phi_j(x_L)}{x_R -x_L}, &(A_{xx})_{ij}=\phi''_{j}(\hat{x}_i).
\end{align}
Note that in the case of training for the two parameters $(\nu,\gamma)$, the expression for $\mathbf{g}$ in equation \eqref{eq::RBFequat} needs to be modified by letting $\mathbf{g}=\frac{\gamma \Delta t}{2}A\Lambda^n$.
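For concreteness, one time-step of the scheme in equation \eqref{eq::RBFequat} can be sketched in a few lines of NumPy. This is a simplified sketch: the seeds and collocation points are taken uniform on $[-1,1]$ and equal in number (so the system is square), the values of $M$ and $b$ in $c_j=Mj+b$ are placeholders, and a least-squares solve is used for robustness:

```python
import numpy as np

def mqrbf_matrices(xs, cs, xhat, xL=-1.0, xR=1.0):
    # Multiquadric basis phi_j(x) = sqrt((x - x_j)^2 + c_j^2) and the
    # boundary-corrected matrices A, A_x, A_xx of the Xie-Li scheme.
    r = xhat[:, None] - xs[None, :]
    phi = np.sqrt(r**2 + cs[None, :]**2)
    phiL = np.sqrt((xL - xs)**2 + cs**2)
    phiR = np.sqrt((xR - xs)**2 + cs**2)
    slope = (phiR - phiL) / (xR - xL)
    A = phi - phiL[None, :] - slope[None, :] * (xhat[:, None] - xL)
    Ax = r / phi - slope[None, :]          # phi'_j(x) = (x - x_j) / phi_j(x)
    Axx = cs[None, :]**2 / phi**3          # phi''_j(x) = c_j^2 / phi_j(x)^3
    return A, Ax, Axx

def mqrbf_step(lam, nu, dt, A, Ax, Axx):
    # Advance Lambda^n -> Lambda^{n+1} by solving the linear system above.
    g = 0.5 * dt * (A @ lam)               # (dt/2) u^n(xhat)
    gx = 0.5 * dt * (Ax @ lam)             # (dt/2) u_x^n(xhat)
    LHS = A + gx[:, None] * A + g[:, None] * Ax - 0.5 * nu * dt * Axx
    RHS = (A + 0.5 * nu * dt * Axx) @ lam
    return np.linalg.lstsq(LHS, RHS, rcond=None)[0]

Ns = 40
xs = np.linspace(-1.0, 1.0, Ns + 1)            # seed points (trainable in BiPDE)
cs = 0.001 * np.arange(Ns + 1) + 0.02          # shape parameters c_j = M*j + b
xhat = np.linspace(-1.0, 1.0, Ns + 3)[1:-1]    # interior collocation points
A, Ax, Axx = mqrbf_matrices(xs, cs, xhat)
u0 = -np.sin(np.pi * xhat)                     # initial condition
lam0 = np.linalg.lstsq(A, u0, rcond=None)[0]   # fit Lambda^0
lam1 = mqrbf_step(lam0, 0.01 / np.pi, 1e-3, A, Ax, Axx)
```

Note that, because each column of $A$ vanishes at $x_L$ and $x_R$ by construction, the homogeneous boundary conditions are satisfied automatically, as in the derivation above.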
\item \textbf{Architecture.} Note that both the collocation points and the interpolation seed points can be any random set of points within the domain, and not necessarily the uniform sets we chose above. In fact, \textit{during training we allow BiPDE to find a suitable set of interpolation points as well as the shape parameters on its own}. The input data is calculated using the aforementioned finite difference method over uniform grids and later interpolated on a random point cloud to produce another sample of solutions on unstructured grids for training. Thus, in our architecture the last layer of the encoder has $2N_s + 1$ neurons with \texttt{Sigmoid} activation functions, representing the $N_s$ shape parameters and $N_s$ seed points, as well as one more neuron for the unknown diffusion coefficient. Note that for seed points to the left of the origin, in the range $x\in [-1, 0]$, we simply multiply the outputs of $N_s$ activation functions by $-1$ within the solver layer (because the output of the \texttt{Sigmoid} function is always positive). We use the mean squared error between the data and the predicted solution at time-step $n+p$ as the loss function, which we minimize with the \texttt{Adam} optimizer.
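The mapping from the encoder's final layer to the solver parameters can be sketched as follows. This is a hypothetical layout: the text does not fix the ordering of the $2N_s+1$ outputs, so the split and the number of negated (left-half) seeds used here are illustrative:

```python
import numpy as np

def decode_params(y, Ns, n_left):
    # y: 2*Ns + 1 sigmoid outputs, all in (0, 1).
    seeds = y[:Ns].copy()
    seeds[:n_left] *= -1.0        # sigmoid outputs are positive, so seeds in
                                  # [-1, 0] are obtained by a sign flip
    shapes = y[Ns:2 * Ns]         # shape parameters c_j
    nu = y[2 * Ns]                # unknown diffusion coefficient
    return seeds, shapes, nu

y = np.random.default_rng(3).uniform(0.01, 0.99, 2 * 20 + 1)  # mock encoder output
seeds, shapes, nu = decode_params(y, Ns=20, n_left=10)
```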
\item \textbf{Training protocol.} As in the previous case, we apply successive steps of the MQ-RBF scheme to march the input data forward to a future time-step. Not surprisingly, we observed that taking a higher number of steps improves the results, because an erroneous guess of the diffusion coefficient leads to a more pronounced discrepancy after longer periods of time. Hence, we map the data $\mathcal{U}^{-p}$ to $\mathcal{U}^{+p}$, which is shifted $p$ time-steps forward in time,
\begin{align}
&\mathcal{U}^{-p}=\begin{bmatrix}
U^1, U^2, \cdots, U^{M-p}
\end{bmatrix} &\mathcal{U}^{+p}=\begin{bmatrix}
U^{1+p}, U^{2+p}, \cdots, U^{M}
\end{bmatrix}
\end{align}
In our experiments, a value of $p=10$ was sufficient to obtain satisfactory results in the absence of noise. However, in the presence of Gaussian noise, and for smaller values of the diffusion coefficient (such as $\nu_{\rm true}=0.01/\pi$), we had to increase the shift parameter to $p=100$.
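Constructing the shifted pairs amounts to a one-line slicing operation; a minimal sketch (the snapshot array and its sizes are placeholders):

```python
import numpy as np

def shifted_pairs(U, p):
    # U: snapshots of shape (M, Nx); returns (U^{-p}, U^{+p}),
    # i.e. inputs and targets separated by p time-steps.
    return U[:-p], U[p:]

U = np.random.default_rng(0).standard_normal((200, 80))  # M = 200 mock snapshots
U_in, U_out = shifted_pairs(U, p=10)
```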
\item \textbf{Numerical setup.} Once again, we let $\nu_{\rm true}=0.01/\pi\approx 0.00318$ and integrate Burgers' equation up to $t_f=0.2$ with a fixed time-step of $\Delta t=0.001$. We use the finite difference method of the previous section to generate the datasets. We then interpolate the solution on $80$ data points uniformly distributed in the range $(-1,1)$, with $20$ interpolation seed points. For this experiment, we set the batch size to $1$ and trained the network using the \texttt{Adam} optimizer. The results after $50$ epochs are given in figure \ref{fig:results_RBF_proc}.
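The dataset preparation, i.e. interpolating uniform-grid snapshots onto the observation points and optionally adding the Gaussian noise used in the experiments below, can be sketched as follows; the damped-sine snapshots here are stand-ins for the actual finite difference solution, and the "$1\%$" noise level is interpreted relative to the solution amplitude:

```python
import numpy as np

rng = np.random.default_rng(42)
x_fd = np.linspace(-1.0, 1.0, 201)                  # uniform finite difference grid
snapshots = np.array([-np.sin(np.pi * x_fd) * np.exp(-0.01 * k)
                      for k in range(200)])         # stand-ins for FD solutions

x_obs = np.linspace(-1.0, 1.0, 80)                  # 80 observation points; use
                                                    # rng.uniform for a random cloud
U = np.array([np.interp(x_obs, x_fd, u) for u in snapshots])
U_noisy = U + 0.01 * np.abs(U).max() * rng.standard_normal(U.shape)  # ~1% noise
```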
\begin{figure}
\centering
\subfigure[True solution generated by finite differences (input data).]{ \includegraphics[height=0.37\linewidth]{./figures/RBF_poc_trueSol.png} } \quad\quad
\subfigure[Learned solution generated by MQ-RBF BiPDE (output data).]{ \includegraphics[height=0.37\linewidth]{./figures/RBF_poc_learnedSol.png} }\quad\quad
\subfigure[Error in solution.]{ \includegraphics[height=0.4\linewidth]{./figures/RBF_poc_Error.png} }\quad\quad
\subfigure[Discovered seeds and shape parameters.]{ \includegraphics[height=0.4\linewidth]{./figures/RBF_poc_seeds_shapeParams.png} }\quad\quad
\subfigure[Distribution of diffusion coefficients.]{ \includegraphics[width=0.45\linewidth]{./figures/RBF_poc_learnedSigma.png} }\quad\quad
\subfigure[Evolution of mean squared error during training.]{ \includegraphics[height=0.45\linewidth]{./figures/RBF_proc_epochs.png} }\quad\quad
\caption{Results of applying the RBF-BiPDE to Burgers' equation with a true diffusion coefficient of $\nu_{\rm true}=0.003183$. The average value of the predicted diffusion coefficients is $\hat{\nu}=0.00320$.}
\label{fig:results_RBF_proc}
\end{figure}
Interestingly, for every input-output pair, the network discovers a distinct value of the diffusion coefficient, which provides a measure of uncertainty for the unknown value. We report the average value of all diffusion coefficients as well as the probability density of these values. We observe that, over all pairs of solutions, the predicted diffusion coefficient is distributed in the range $0.00305\le \hat{\nu} \le 0.00340$ with an average value of $<\hat{\nu}>=0.00320$, in good agreement with the true value, with a relative error of only $0.6\%$. Interestingly, we observe that the BiPDE network has learned to concentrate its interpolation seed points around the origin, where the solution field varies most rapidly. Furthermore, around $x=\pm 0.5$ the interpolation points are more sparse, in agreement with the smooth behavior of the solution field at these coordinates. Therefore, this network may be used as an automatic shock-tracing method to improve numerical solutions of hyperbolic problems with shocks and discontinuities, as was shown by Hon and Mao.
\item \textbf{Resilience to noise on unstructured grids.} We consider several cases to assess robustness to noise. In each case, we pick $80$ \textit{randomly} distributed points along the $x$-axis and linearly interpolate the solution field on this set of points. Then, we add Gaussian noise with a given standard deviation. This noisy and unstructured data field is then fed into the MQ-RBF based BiPDE of this section. We use a batch size of $10$, with $10\%$ of the samples reserved for validation during training. A summary of our results follows:
\begin{enumerate}
\item Let $\nu_{\rm true}=0.1/\pi$, $p=10$, $N_d=80$, $N_s=20$, $\Delta t=0.001$, and consider a Gaussian noise with a standard deviation of $1\%$. After $100$ epochs, we obtain the results in figure \ref{fig:results_RBF_proc2}.
\begin{figure}
\centering
\subfigure[True solution generated by finite differences and with added noise. Solution is interpolated on a random grid.]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_01_pi_1percent/true_solution.png} } \quad\quad
\subfigure[Learned solution generated by MQ-RBF BiPDE (output data).]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_01_pi_1percent/learned_solution.png} }\quad\quad
\subfigure[Error in solution.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_01_pi_1percent/error.png} }\quad\quad
\subfigure[Discovered seeds and shape parameters. Error bars indicate one standard deviation.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_01_pi_1percent/params_seeds.png} }\quad\quad
\subfigure[Probability density of diffusion coefficients.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_01_pi_1percent/histogram.png} }\quad\quad
\subfigure[Evolution of mean squared error versus number of epochs.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_01_pi_1percent/epochs.png} }\quad\quad
\caption{Results of applying the RBF-BiPDE to Burgers' equation with a true diffusion coefficient of $\nu_{\rm true}=0.03183$. The average value of the predicted diffusion coefficients is $\hat{\nu}=0.0331$. The data is provided on a scattered point cloud with added Gaussian noise with $1\%$ standard deviation.}
\label{fig:results_RBF_proc2}
\end{figure}
\item Let $\nu_{\rm true}=0.1/\pi$, $p=100$, $N_d=200$, $N_s=20$, $\Delta t=0.001$, and consider a Gaussian noise with a standard deviation of $5\%$. After $150$ epochs, we obtain the results in figure \ref{fig:results_RBF_proc3}.
\begin{figure}
\centering
\subfigure[True solution generated by finite differences and with added noise. Solution is interpolated on a random grid.]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_01_pi_5percent/true_solution.png} } \quad\quad
\subfigure[Learned solution generated by MQ-RBF BiPDE (output data).]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_01_pi_5percent/learned_solution.png} }\quad\quad
\subfigure[Error in solution.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_01_pi_5percent/error.png} }\quad\quad
\subfigure[Discovered seeds and shape parameters. Error bars indicate one standard deviation.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_01_pi_5percent/seeds_shape_params.png} }\quad\quad
\subfigure[Probability density of diffusion coefficients.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_01_pi_5percent/histogram.png} }\quad\quad
\subfigure[Evolution of mean squared error versus number of epochs.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_01_pi_5percent/epochs.png} }\quad\quad
\caption{Results of applying the RBF-BiPDE to Burgers' equation with a true diffusion coefficient of $\nu_{\rm true}=0.03183$. The average value of the predicted diffusion coefficients is $\hat{\nu}=0.03160$. The data is provided on a scattered point cloud with added Gaussian noise with $5\%$ standard deviation.}
\label{fig:results_RBF_proc3}
\end{figure}
\item Let $\nu_{\rm true}=0.01/\pi$, $p=100$, $N_d=80$, $N_s=20$, $\Delta t=0.001$, and consider a Gaussian noise with a standard deviation of $1\%$. After $200$ epochs, we obtain the results in figure \ref{fig:results_RBF_proc4}.
\begin{figure}
\centering
\subfigure[True solution generated by finite differences and with added noise. Solution is interpolated on a random grid.]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_001_pi_1percent/true_solution.png} } \quad\quad
\subfigure[Learned solution generated by MQ-RBF BiPDE (output data).]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_001_pi_1percent/learned_solution.png} }\quad\quad
\subfigure[Error in solution.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_001_pi_1percent/error.png} }\quad\quad
\subfigure[Discovered seeds and shape parameters. Error bars indicate one standard deviation.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_001_pi_1percent/params_seeds.png} }\quad\quad
\subfigure[Probability density of diffusion coefficients.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_001_pi_1percent/histogram.png} }\quad\quad
\subfigure[Evolution of mean squared error versus number of epochs.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_001_pi_1percent/epochs.png} }\quad\quad
\caption{Results of applying the RBF-BiPDE to Burgers' equation with a true diffusion coefficient of $\nu_{\rm true}=0.003183$. The average value of the predicted diffusion coefficients is $\hat{\nu}=0.00352$. The data is provided on a scattered point cloud with added Gaussian noise with $1\%$ standard deviation.}
\label{fig:results_RBF_proc4}
\end{figure}
\item Let $\nu_{\rm true}=0.01/\pi$, $p=100$, $N_d=200$, $N_s=20$, $\Delta t=0.001$, and consider a Gaussian noise with a standard deviation of $5\%$. After $150$ epochs, we obtain the results in figure \ref{fig:results_RBF_proc5}.
\begin{figure}
\centering
\subfigure[True solution generated by finite differences and with added noise. Solution is interpolated on a random grid.]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_001_pi_5percent/true_solution.png} } \quad\quad
\subfigure[Learned solution generated by MQ-RBF BiPDE (output data).]{ \includegraphics[height=0.35\linewidth]{./figures/noisy_dynamic_001_pi_5percent/learned_solution.png} }\quad\quad
\subfigure[Error in solution.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_001_pi_5percent/error.png} }\quad\quad
\subfigure[Discovered seeds and shape parameters. Error bars indicate one standard deviation.]{ \includegraphics[height=0.4\linewidth]{./figures/noisy_dynamic_001_pi_5percent/seeds_shape_params.png} }\quad\quad
\subfigure[Probability density of diffusion coefficients.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_001_pi_5percent/histograms.png} }\quad\quad
\subfigure[Evolution of mean squared error versus number of epochs.]{ \includegraphics[width=0.45\linewidth]{./figures/noisy_dynamic_001_pi_5percent/Epochs.png} }\quad\quad
\caption{Results of applying the RBF-BiPDE to Burgers' equation with a true diffusion coefficient of $\nu_{\rm true}=0.003183$. The average value of the predicted diffusion coefficients is $\hat{\nu}=0.003677$. The data is provided on a scattered point cloud with added Gaussian noise with $5\%$ standard deviation.}
\label{fig:results_RBF_proc5}
\end{figure}
\end{enumerate}
We observe that this architecture is generally robust to noise. However, higher noise levels require more tuning of the hyperparameters, as well as longer training.
\item \textbf{Accuracy tests.} We report the values of the discovered diffusion coefficients in Burgers' equation for different grid sizes and different time-steps. We use the same setting as detailed in the numerical setup part of this section. In particular, the interpolation seeds are determined by the network, and the training data is given on a uniformly distributed set of points computed by the finite difference method of the previous section. We consider three different time-steps, $\Delta t=0.001, ~0.0005, ~0.00025$, and two diffusion coefficients, $\nu_{\rm true}=0.01/\pi,~0.1/\pi$, over grids of size $N_x=80,~ 160$. At each time-step, and for all experiments with different grid sizes, we stop the training when the mean squared error in the solution field converges to a constant value and does not improve with more epochs; this roughly corresponds to $50$, $25$ and $12$ epochs for the three time-steps, respectively. This indicates that with smaller time-steps, fewer epochs are needed to obtain the same level of accuracy in the unknown parameter. Furthermore, we use an \texttt{Adam} optimizer with a learning rate of $0.001$.
The results of the accuracy tests are tabulated in tables \ref{tab::nu1}--\ref{tab::nu2}. We observe, for all experiments, that the discovered coefficient is in good agreement with the true values. Due to the adaptivity of the interpolation seed points and their shape parameters across experiments, the observed error values do not follow the convergence trend of the traditional finite difference methods depicted in previous sections. This could also be due to the lower order of accuracy of the MQ-RBF method, which is second-order accurate, compared to the higher-order accurate finite difference method used in the previous section.
\begin{table}
\centering
\begin{tabular}{| c | c | c | c|}\toprule \hline
$ <\hat{\nu}>$ & $\Delta t=0.001$ & $\Delta t = 0.0005$ & $\Delta t = 0.00025$ \\ \hline
$\#~ epochs$ & $50$ & $25$ & $12$\\ \hline
$N_x=80$ & $0.03173\pm 3.4\times 10^{-4}$ & $0.03196 \pm 4.2\times 10^{-4}$ & $0.03188 \pm 2.8\times 10^{-4}$ \\
$N_x=160$ & $0.03186 \pm 5.8\times 10^{-5}$ & $0.03191\pm 3.6\times 10^{-4}$ & $0.03137\pm 1.2\times 10^{-4}$ \\ \hline\bottomrule
\end{tabular}
\caption{Discovered values of the diffusion coefficient for $\nu_{\rm true}=0.03183$ at different time-steps and grid sizes. }
\label{tab::nu1}
\end{table}
\begin{table}
\centering
\begin{tabular}{| c | c | c | c|}\toprule \hline
$ <\hat{\nu}>$ & $\Delta t=0.001$ & $\Delta t = 0.0005$ & $\Delta t = 0.00025$ \\ \hline
$\#~ epochs$ & $50$ & $25$ & $12$\\ \hline
$N_x=80$ & $0.003326\pm 5.1\times 10^{-5}$ & $0.003162\pm 2.2\times 10^{-4}$ & $0.003155\pm 1.2 \times 10^{-4}$ \\
$N_x=160$ & $0.003264\pm 1.0\times 10^{-4}$ & $0.003151\pm 1.3\times 10^{-4}$ & $0.003192\pm 1.2\times 10^{-4}$ \\ \hline \bottomrule
\end{tabular}
\caption{Discovered values of the diffusion coefficient for $\nu_{\rm true}=0.003183$ obtained with different time-steps and grid sizes.}
\label{tab::nu2}
\end{table}
\end{itemize}
\subsection{Learning the inverse transform}\label{subsec::invD}
As we emphasized before, a feature of BiPDE is that it produces self-supervised pre-trained encoder models for inverse differential problems, which are useful in numerous applications where hidden values must be estimated in real time. We train an encoder over a range of values $\nu\in [0.1/\pi,~ 1/\pi]$ and assess the performance of the trained model on new data with arbitrarily chosen $\nu$ values. We choose $50$ diffusion coefficients distributed uniformly in this range, then integrate the corresponding Burgers' equation up to $t_f=0.2$ with a constant time-step of $\Delta t=0.0005$ on a grid with $N_x=100$ grid points using the aforementioned finite difference method. There are $400$ time-steps in each of the $50$ different realizations of Burgers' equation. For a fixed value of $p=20$, we draw $10$ solution pairs for each value of $\nu$ at uniformly distributed time instances and discard the first two instances to improve convergence of the network. Hence, the training data uniformly samples the space of solutions over an $8\times 50$ grid of $(t, \nu)$, as illustrated in figure \ref{fig::semanticBiPDE_dyn}. We use the resulting $400$ pairs to train a semantic BiPDE, with $320$ pairs used for training and $80$ pairs for validation.
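Enumerating this $(t,\nu)$ sampling grid is straightforward; with $t_f=0.2$ and $\Delta t=0.0005$ there are $400$ steps per realization, and the exact placement of the ten pair-start instances is an illustrative assumption:

```python
import numpy as np

nus = np.linspace(0.1 / np.pi, 1.0 / np.pi, 50)    # 50 diffusion coefficients
M, p = 400, 20                                      # time-steps per run, pair shift
# ten uniformly spaced pair-start indices; drop the first two (early transients)
starts = np.linspace(0, M - p - 1, 10, dtype=int)[2:]
pairs = [(nu, s, s + p) for nu in nus for s in starts]
print(len(pairs))  # 400 training pairs on the 8 x 50 (t, nu) grid
```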
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{./figures/RBF_data_space_dynamic_actual_data_v2.png}
\caption{Topology of data points for training and testing of the semantic BiPDE. Along the $\nu$ dimension, we depict $10$ (out of $50$) of the selected data points, while along the time dimension we illustrate the actual $8$ data points. Training pairs of $\mathcal{U}^{-p}$ and $\mathcal{U}^{+p}$ are color coded by black and orange dots, respectively; testing pairs are depicted by blue and red crosses. On the right panel, we illustrate the training data for three nominal values of the diffusion coefficient, highlighted by green shades. Green arrows indicate the direction of time.}
\label{fig::semanticBiPDE_dyn}
\end{figure}
\textbf{Architecture.} Given an arbitrary input, the signature of the hidden physical parameters is imprinted on the data as complex patterns spread in space and time. We use a CNN layer as a front-end unit to transform the input pixels into internal image representations. The CNN unit has $32$ filters with a kernel size of $5$. It is followed by max pooling with a pool size of $2$, which is then stacked with another CNN layer of $16$ filters and kernel size $5$, along with another max pooling layer. The CNN block is stacked with two dense layers with $100$ and $41$ neurons, respectively. The CNN and dense layers have \texttt{ReLU} and \texttt{Sigmoid} activation functions, respectively. Overall, there are $42,209$ trainable parameters in the network. Conceptually, the CNN extracts features from every snapshot that characterize the evolution of the solution field through time-steps with the proper physical parameter. This parameter is enforced to be the diffusion coefficient through the PDE-solver decoder stage. We train this network for $500$ epochs using an \texttt{Adam} optimizer.
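The layer dimensions can be checked with a quick parameter count. Assuming one-dimensional convolutions with \texttt{valid} padding on single-channel inputs of $N_x=100$ samples (an assumption on our part, but one that reproduces the quoted number of trainable parameters exactly):

```python
import numpy as np

layers = [
    ("conv1d", 5, 1, 32),     # kernel 5, 1 -> 32 channels
    ("maxpool", 2),
    ("conv1d", 5, 32, 16),    # kernel 5, 32 -> 16 channels
    ("maxpool", 2),
]
length, channels, params = 100, 1, 0      # Nx = 100 input samples, one channel
for layer in layers:
    if layer[0] == "conv1d":
        _, k, cin, cout = layer
        params += k * cin * cout + cout   # weights + biases
        length -= k - 1                   # 'valid' convolution shrinks the length
        channels = cout
    else:
        length //= layer[1]               # max pooling halves the length
flat = length * channels                  # 22 * 16 = 352 flattened features
params += (flat * 100 + 100) + (100 * 41 + 41)   # Dense(100) + Dense(41)
print(params)  # 42209
```

The final layer's $41 = 2N_s+1$ outputs (with $N_s=20$) correspond to the seed points, shape parameters, and the diffusion coefficient, as in the previous section.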
\textbf{Resilience to noise.} Even though the encoder is trained on ideal datasets, we demonstrate that a semantic BiPDE still provides accurate results on noisy datasets. In contrast to other methods, we pre-train the network in a self-supervised fashion on clean data and later apply the trained encoder to unseen noisy data\footnote{Note that the network could also be trained on noisy data, as we showed before; however, training would take longer in that case.}.
\begin{figure}
\centering
\subfigure[Performance of encoder on training data set.]{ \includegraphics[height=0.35\linewidth]{./figures/RBF_encoder/inverse_500epochs_final/new_run/training.png} } \quad\quad
\subfigure[Distribution of interpolation points and shape parameters discovered by the network.]{ \includegraphics[height=0.35\linewidth]{./figures/RBF_encoder/inverse_500epochs_final/new_run/seeds_shapes.png} } \quad\quad
\subfigure[Performance of the encoder on unseen data.]{ \includegraphics[height=0.35\linewidth]{./figures/RBF_encoder/inverse_500epochs_final/new_run/test_clean_data.png} } \quad\quad
\subfigure[Performance of the encoder on unseen data with Gaussian noise with standard deviation $0.01$.]{ \includegraphics[height=0.35\linewidth]{./figures/RBF_encoder/inverse_500epochs_final/new_run/noisy_data_1percent.png} } \quad\quad
\caption{Semantic autoencoder learns how to discover hidden variables from pairs of solutions. These results are obtained after $500$ epochs on $50$ data points along the $\nu$-axis.}
\label{fig:semanticBiPDE_dyn}
\end{figure}
In figure \ref{fig:semanticBiPDE_dyn}, we provide the performance of this network on training as well as on unseen clean/noisy data-sets. Furthermore, the network determines optimal parameters of the MQ-RBF method by evaluating interpolation seed points as well as their corresponding shape parameters to obtain the best approximation over \textit{all} input data.
\section{Conclusion}
We introduced BiPDE networks, a natural architecture to infer hidden parameters in partial differential equations given a limited number of observations. We showed that this approach is versatile as it can be easily applied to arbitrary static or nonlinear time-dependent inverse-PDE problems. We showed the performance of this design on multiple inverse Poisson problems in one and two spatial dimensions as well as on the non-linear time-dependent Burgers' equation in one spatial dimension. Moreover, our results indicate BiPDEs are robust to noise and can be adapted for data collected on unstructured grids by resorting to traditional mesh-free numerical methods for solving partial differential equations. We also showed the applicability of this framework to the discovery of inverse transforms for different inverse-PDE problems.
There are many directions for further research, such as considering diffusion maps with discontinuities across subdomains, using more sophisticated neural network architectures for more complex problems, using higher-order numerical solvers, and tackling more complicated governing PDEs with a larger number of unknown fields or in higher dimensions.
\section*{Acknowledgment}
This research was supported by ARO W911NF-16-1-0136 and ONR N00014-17-1-2676.
\newpage
\section*{References}
\bibliographystyle{abbrv}
\addcontentsline{toc}{section}{\refname}
\section{INTRODUCTION}
\label{sec:intro}
The \textit{Lite (Light) satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection} (LiteBIRD) mission\cite{Sugai:article} is the successor of the CMB space missions COBE\cite{COBE:ARTICLE}, WMAP\cite{WMAP:article}, and Planck\cite{Planck:article}, each of which has given landmark scientific discoveries. A detection of primordial gravitational waves with LiteBIRD (at a level $\delta r < 0.001$) would indicate that inflation occurred near the energy scale associated with grand unified theories and would provide additional evidence of an inherently quantum-gravitational process\cite{Grav_waves}.
Additionally, the energy scale of inflation has important implications for other aspects of fundamental physics, such as axions and neutrinos. LiteBIRD’s ability to measure the entire sky at the largest angular scales with 15 frequency bands is complementary to ground-based experiments\cite{LiteBIRD_PTEP}. Ground-based experiments can also improve LiteBIRD observations with high-resolution lensing data.
A key component of LiteBIRD is its polarization modulator unit (PMU), an essential feature to suppress the $1/f$ noise contribution (low-frequency drifts induced by thermal variations) and to mitigate systematic uncertainties induced by detector gain drifts.
The polarization modulation methodology based on a HWP is already used by a large number of experiments and can be divided into two families: a step-and-integrate strategy (SPIDER\cite{SPIDER:article}, QUBIC\cite{QUBIC_hwp, QUBIC:article}) and a continuously-rotating HWP (ABS\cite{ABS:article}, EBEX\cite{ebex_pol}, ACT-pol\cite{ACTpol:article}, POLARBEAR-2\cite{Polarbear:article, Polarbear2}, LSPE/SWIPE\cite{Lamagna:article, Columbro2020:article}).
Each of the 3 telescopes (high, mid and low frequency: HFT, MFT, LFT) is equipped with a cryogenic continuously-rotating half-wave plate (HWP) based on a superconducting magnetic bearing (SMB), an emerging technology with a low technology readiness level (TRL).
In this contribution we present the baseline design (Sec.~\ref{sec:baseline}) of the MHFT (mid- and high-frequency telescope) PMUs, which use the metal-mesh filter technology, while the LFT PMU uses an achromatic 9-layer sapphire HWP\cite{Sakurai_SPIE}. The MHFT design takes inspiration from the LSPE/SWIPE one, which is currently under test (Sec.~\ref{sec:mockup}). A room-temperature mockup was used to develop and validate the eddy current model and to develop the driver and readout electronics (Sec.~\ref{sec:electronic}).
The expected performance of the LiteBIRD PMUs is discussed in Sec.~\ref{sec:performance} and will be confirmed during the first test of the breadboard model.
\section{Baseline design}
\label{sec:baseline}
In this section, we present the baseline design of the LiteBIRD MHFT PMU. Since both modulators will be mounted on a space mission, the design is driven by stringent requirements on mass, dimensions, stiffness, power dissipation, and TRL for the levitation, driving, gripping, and position encoding mechanisms. The most important requirements for both PMUs are summarized in Tab.~\ref{tab:MHFT_requirements}.
\begin{table}[ht]
\caption{MHFT-PMU main requirements.}
\label{tab:MHFT_requirements}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Parameter} & \multicolumn{2}{c|}{\bf{Requirement}} \\
\hline
\rule[-1ex]{0pt}{3.5ex} & \bf{MFT} & \bf{HFT} \\
\hline
\rule[-1ex]{0pt}{3.5ex} Spin rate & \SI{39}{rpm} (\SI{0.65}{\hertz}) & \SI{61}{rpm} (\SI{1.02}{\hertz}) \\
\hline
\rule[-1ex]{0pt}{3.5ex} HWP diameter & \SI{320}{\milli\meter} & \SI{220}{\milli\meter} \\
\hline
\rule[-1ex]{0pt}{3.5ex} HWP temperature & \multicolumn{2}{c|}{$<\SI{20}{\kelvin}$} \\
\hline
\rule[-1ex]{0pt}{3.5ex} Load on the \SI{5}{\kelvin} stage & \multicolumn{2}{c|}{$<\SI{4}{\milli\watt}$} \\
\hline
\rule[-1ex]{0pt}{3.5ex} Angular accuracy & $<\SI{1}{\arcmin}$ & $<\SI{5}{\arcmin}$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} Total mass & \multicolumn{2}{c|}{$<\SI{20}{\kilogram}$} \\
\hline
\end{tabular}
\end{center}
\end{table}
The modulator is conceptually similar to the EBEX\cite{SMB:article, ebex_pol}, POLARBEAR-2\cite{Polarbear2} and LSPE/SWIPE\cite{LSPE:article} designs, but is more challenging and ambitious because of the space application.
The HWP diameters of MFT and HFT are \SI{320}{\milli\meter} and \SI{220}{\milli\meter}, respectively.
The concept of the design is shown in Fig.~\ref{fig:MHFT_design} and is the same for both modulators with a scaling of the components.
In contrast to the most common design of a superconducting magnetic bearing (SMB), we chose a different configuration: the magnet ring and the superconductor are not stacked vertically; instead, the inner rotating ring is magnetic and the outer stationary ring is superconducting, so that the two interact face-to-face along their sides, minimizing horizontal displacement.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7cm]{Images/View.png}
\includegraphics[height=7cm]{Images/Section.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:MHFT_design}
\textit{Left panel}: Overview of the polarization modulator unit design. The concept of the design is the same with a scaling of the geometry (\SI{320}{\milli\meter} and \SI{220}{\milli\meter} diameter HWPs are mounted in the center, for MFT and HFT, respectively). \textit{Right panel}: Section view of the modulator: a nearly frictionless bearing is obtained with the magnetic levitation of the rotor, composed of a permanent magnet ring (cream grey) sandwiched between an encoder ring and a groove ring. The stator ring hosts an array of superconducting bulks (black) and the electromagnetic motor, composed of 2 sets of 32 coils each, coupled with 8 small motor magnets hosted in the rotor.
}
\end{figure}
The selected superconductor, YBCO (Yttrium barium copper oxide), is the type-II superconductor with the highest
pinning force and critical current density ($\sim$ \SI{e5}{\ampere.\milli\meter^{-2}}), and was chosen because higher critical current means lower hysteresis losses\cite{Bean:article}.
The rotor is composed of three rings stacked along the optical axis, starting from the bottom:
\begin{itemize}
\item Groove ring: used to clamp the plate above the YBCO transition temperature.
\item Magnetic ring: composed of 2 Samarium-Cobalt magnetic rings sandwiched between 3 thin iron rings to produce a more uniform magnetic field.
\item Aluminum ring with three different functions: to align the HWP in the center, to measure the angular position of the rotor with the encoder and to hold the motor magnets used in the driver system.
\end{itemize}
The drive mechanism is conceptually similar to an electromagnetic motor. We use 8 SmCo magnets (\SI{2}{\milli\meter} thick, \SI{9}{\milli\meter} diameter) coupled with 2 rings of 32 coils each, on the top and on the bottom of the rotor to obtain a larger and more uniform force. The coils are connected in series (4 series of 16 coils each). The geometric parameters chosen are reported in Tab.~\ref{tab:MHFT_coils}, and the average force produced by the motor during operation (16 coils) is \SI{280}{\milli\newton\per\ampere}/\SI{414}{\milli\newton\per\ampere} for MFT/HFT.
\begin{table}[ht]
\caption{Coil parameters for the MFT and HFT. The diameter of the copper wire is \SI{0.2}{\milli\meter} and the reported resistance refers to \SI{300}{\kelvin}. }
\label{tab:MHFT_coils}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Parameter} & \bf{Unit} & \bf{HFT} & \bf{MFT} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Coil diameter} & \SI{}{\milli\meter} & \SI{6}{} & \SI{5}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Coil length} & \SI{}{\milli\meter} & \SI{10}{} & \SI{10}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Turn density} & \SI{}{\milli\meter^{-1}} & \SI{25}{} & \SI{25}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Resistance (16 coils)} & \SI{}{\ohm} & \SI{103}{} & \SI{87}{} \\
\hline
\end{tabular}
\end{center}
\end{table}
During the launch, the rotor is held above the stator at room temperature by 3 pin pullers\footnote{\url{https://www.ebad.com/tini-pin-puller/}}, radially oriented towards the center of the HWP ring. After the launch, the pin pullers are released and 3 linear actuators hold the rotor in position during the cooldown, until the YBCO is superconducting and the magnetic field is frozen in. Thereafter, the rotor is kept in place by flux pinning and the clamps are released.
We developed a frictionless electromagnetic clamp/release system \cite{Actuator:article} suitable for any experiment equipped with a large cryogenic HWP rotator based on a SMB.
The main features of this system are:
\begin{itemize}
\itemsep-0.3em
\item large rotor mass compliance ($\sim\SI{10}{\kilogram}$);
\item zero power dissipation while holding the rotor;
\item fast ($\sim\SI{40}{\milli\second}$) release with low power dissipation ($\sim\SI{30}{\joule}$) during each operation;
\item low cost and high reliability over hundreds of operation cycles.
\end{itemize}
This system is intended to be used only once, but if necessary it can re-clamp the rotor at any time throughout the flight.
The PMU is also equipped with custom capacitive sensors to measure the temperature and levitation height of the rotor\cite{PdB_levitation_measurement2020}.
The temperature sensor is a thermistor, physically mounted on the rotating device and biased with an AC current, which is transferred from the stationary electronics to the rotating device via capacitive coupling. The levitation height sensor is a network of capacitors, similar to the one used for the capacitive coupling of the thermistor. The system reaches an accuracy better than 3\% for the measurement of the thermistor resistance, and an accuracy of $\sim\SI{10}{\micro\meter}$ for the measurement of its levitation height.
\subsection{HWP design}
The baseline designs for the MFT and HFT HWPs are mesh-HWPs\cite{Pisano1, Pisano2}. These quasi-optical components are based on the mesh-filter technology\cite{Pisano3}, which has been adapted to mimic anisotropic behaviour. A mesh-HWP is based on two stacks of anisotropic metal grids embedded into polypropylene. The two stacks, one inductive and one capacitive, are designed in such a way that two electromagnetic waves passing through them, polarised in orthogonal directions, experience a \SI{180}{\degree} phase shift. Each stack has different grids, designed with specific geometries, located at optimised distances. The combination of all the grids, in our case 5 capacitive and 5 inductive, provides a differential phase shift around \SI{180}{\degree} across the frequency range of operation. The design and the manufacture of mesh-HWPs are described in detail elsewhere\cite{Pisano4}. The expected performance of the MFT and HFT mesh-HWPs is reported in Fig.~\ref{fig:hwp_band}. The transmission coefficients and the modulation efficiencies across the MFT and HFT bands are on average greater than 95\%.
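The \SI{180}{\degree} differential phase shift is what turns the rotating plate into a modulator: for an ideal HWP at angle $\theta$ in front of a polarization-sensitive detector, the sky $Q$ and $U$ appear modulated at $4\theta$, i.e. at four times the rotation frequency. A minimal numerical check of this textbook result (not specific to the mesh-HWP design):

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for Stokes (I, Q, U, V)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

M_hwp = np.diag([1.0, 1, -1, -1])  # ideal half-wave plate, fast axis at 0

def detected(I, Q, U, theta):
    """Power behind an x-oriented polarizer after a HWP at angle theta."""
    s_out = rot(-theta) @ M_hwp @ rot(theta) @ np.array([I, Q, U, 0.0])
    return 0.5 * (s_out[0] + s_out[1])

I, Q, U = 1.0, 0.3, 0.1
for theta in np.linspace(0, np.pi, 7):
    d = detected(I, Q, U, theta)
    # the signal is modulated at 4*theta, i.e. at 4 f_rot for a spinning plate
    assert np.isclose(d, 0.5 * (I + Q * np.cos(4 * theta) + U * np.sin(4 * theta)))
print("ideal HWP modulates Q, U at 4x the rotation angle")
```

This is why a continuously-rotating HWP moves the polarized sky signal to $4f_{\rm rot}$, well above the $1/f$ knee of the detectors.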
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.48\linewidth]{Images/MFT.pdf}
\includegraphics[width=0.48\linewidth]{Images/HFT.pdf}
\end{tabular}
\end{center}
\caption{\label{fig:hwp_band}
Expected performance of the preliminary MFT (\textit{left}) and HFT (\textit{right}) mesh-HWP designs as a function of frequency: transmission and absorption for the capacitive and inductive axes. Vertical dashed lines mark the central frequencies of the MHFT bands.
}
\end{figure}
\section{Room-temperature mockup}
\label{sec:mockup}
We developed a room-temperature mockup to validate the motor, the driver and readout electronics, the eddy current model, the effect of the main magnet inhomogeneities, and the spinning frequency stability.
The size of the mockup is similar to the LSPE/SWIPE polarization modulator (\SI{500}{\milli\meter} diameter), but the performance in terms of friction can be scaled thanks to the well-known diameter dependence (see Sec.~\ref{sec:losses}).
In place of the superconducting magnetic bearing, we used a low-friction ball bearing\footnote{\url{https://www.skf.com/it/index.html}}.
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.94\linewidth]{Images/Section_mockup_tilt.PNG} \\
\includegraphics[width=0.65\linewidth]{Images/Mockup.pdf}
\includegraphics[width=0.24\linewidth]{Images/Fiber_holder.jpeg}
\end{tabular}
\end{center}
\caption{\label{fig:mockup}
(\textit{Top}) CAD section of the room-temperature mockup, composed of 2 ball bearings (yellow) separated by an aluminum spacer, allowing an umbrella support to rotate while holding in position a lightened aluminum disk ($\sim\SI{2}{\kilo\gram}$). (\textit{Left}) Picture of the room-temperature mockup rotating. The rotation is driven by a set of 64 coils mounted on the upper aluminum ring, coupled with 8 small Neodymium motor magnets hosted in the rotating disk. The YBCO holder is removed to show the magnet ring (top right). (\textit{Right}) Detail of the Polyether ether ketone (PEEK) encoder holder coupled with 64 evenly spaced slits on the rotor.
\end{figure}
The \textit{top panel} of Fig.~\ref{fig:mockup} shows a CAD cross section of the mockup: in the center there are 2 ball bearings (yellow) separated by an aluminum spacer. The umbrella support in the center positions a dummy HWP, which is composed of a lightened aluminum disk ($\sim$\SI{2}{\kilo\gram}) and the magnet ring (red). The stator is mounted in its final position and its YBCO bulks are in the normal state. The set of 64 coils (the smaller ones, in blue) is positioned on top of the external part of the main disk and is coupled with 8 small Neodymium motor magnets (\SI{8}{\milli\meter} diameter) on the rotating disk. On the same diameter there are 64 evenly spaced slits for the encoder readout system.
The \textit{bottom left panel} of Fig.~\ref{fig:mockup} shows the assembled system without the stator, while the \textit{bottom right panel} of Fig.~\ref{fig:mockup} shows a detail of the Polyether ether ketone (PEEK) encoder holder, which is coupled with 64 evenly spaced slits on the rotor.
\subsection{Motor driver and position readout electronics}
\label{sec:electronic}
In the prototype implementation, the coils mounted on the stator are powered in 8 groups. Together with the small magnets on the rotor, they form an 8-phase low-torque motor, optimized to minimize the heat losses in the system. A sampled, smoothed trapezoidal-wave, stored in permanent memory, is used to drive eight independent multiplying DACs and current generators. These produce 8 suitably phased currents, flowing in the 8 groups of coils. The phasing is such that when the current through a given coil is at the positive maximum, the current in the next coil is at the negative minimum, so that the first coil pushes the magnet while the next one pulls it. The rotation is sensed by an optical encoder, consisting of 64 precision machined, equally spaced slits, in the periphery of the rotor. Their position is read by means of LED emitters, optical fibers, and photodiodes, in the same way as in the Pilot experiment cryogenic WP rotator \cite{Salatino:article}. The measured rotation speed is compared to the desired rotation speed to produce an error signal, which is PID-processed and used to modulate the reference of the multiplying DACs, and thus the amplitude of the driving currents. For synchronization with the rest of the instrument, each transit of a slit below a fixed reference position is time-stamped with the value of a wide-counter, driven by the 5 MHz master clock of the instrument. An additional single slit, placed on a larger radius in the periphery of the rotor, is read in the same way. Its transit below the reference position resets a position counter, updated by the transit of each of the 64 slits. The position counter is then output together with its master clock time-stamp.
\subsection{Friction tests}
We first mounted only the bearing and the encoder in order to quantify the friction of the bearing with and without the driver motor magnets. The friction is quantified in terms of power loss, measured by spinning the rotor up to $\sim\SI{1.6}{\hertz}$ and then letting it slow down freely, while reading its angular position versus time with the optical encoder.
The rotation of the system is described by the equation of motion:
\begin{equation}
\label{eq:motion}
\tau(i) - \tau_f(\omega) = I\frac{d\omega}{dt},
\end{equation}
where $\tau$ is the external torque applied to spin the rotor, $\tau_f$ is the torque of the friction forces, $I$ is the moment of inertia of the rotating system and $\omega$ is the measured angular velocity of the rotor. When the bearing is free to slow down (applied torque $\tau = 0$) we can convert Eq.~\ref{eq:motion} into an equation for the dissipated power:
\begin{equation}
\label{eq:torque_SMB}
\tau_f(\omega) = \frac{P_f(\omega)}{\omega} \, \, \rightarrow \, \, P_f(\omega) = -\omega I \frac{d\omega}{dt}.
\end{equation}
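In practice Eq.~\ref{eq:torque_SMB} is applied to the sampled encoder data by differentiating $\omega(t)$ numerically. A minimal sketch on synthetic spin-down data (the moment of inertia and friction coefficient below are illustrative values, not the mockup ones):

```python
import numpy as np

I_rot = 0.05   # moment of inertia [kg m^2] (illustrative)
k = 1.0e-4     # viscous friction torque coefficient [N m s] (illustrative)

# Synthetic spin down: I dw/dt = -k w  =>  w(t) = w0 exp(-k t / I)
t = np.linspace(0.0, 200.0, 2001)
omega = 2 * np.pi * 1.6 * np.exp(-k * t / I_rot)   # start at 1.6 Hz

# Dissipated power from the equation above: P_f(w) = -w I dw/dt
domega_dt = np.gradient(omega, t)
P_f = -omega * I_rot * domega_dt

# For viscous friction the recovered power must satisfy P_f = k w^2
assert np.allclose(P_f[1:-1], k * omega[1:-1] ** 2, rtol=1e-3)
print(f"P_f at 1 Hz: {k * (2 * np.pi) ** 2 * 1e3:.2f} mW")  # -> 3.95 mW
```

The same finite-difference pipeline, applied to the measured $\omega(t)$, produces the curves of Fig.~\ref{fig:power_loss}.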
Fig.~\ref{fig:power_loss} shows the spin-down tests performed for different configurations of the system in order to quantify the magnitude of each contribution to the total power budget.
The first configuration we tested consists of the rotor alone: all conductive materials and the motor magnets are removed in order to measure the friction produced by the bearings (${\rm P}_0$). Then the whole system was assembled except for the motor magnets. This configuration allows us to quantify the eddy currents produced by the inhomogeneities ($\sim3\%$) of the main magnet (${\rm P}_{mag}$)\cite{Columbro:2021}. This value sets only an upper limit for the eddy currents in the cryogenic environment, because most of this contribution comes from the aluminum holder of the YBCO, which will be largely shielded by the superconductors at cryogenic temperature.
Finally, we added the 8 motor magnets to determine their losses (${\rm P}_8$).
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=11cm]{Images/Comparison.pdf}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:power_loss}
Undersampled data of spin-down tests calculated according to Eq.~\ref{eq:torque_SMB}. The power loss from friction is measured as a function of frequency for different contributions: ball bearings (${\rm P}_0$ in the \textit{top panel}), main magnet (${\rm P}_{\rm mag}$ in the \textit{central panel}) and 8 motor magnets (${\rm P}_8$ in the \textit{bottom panel}). The main magnet and motor magnet contributions are measured differentially with respect to the ball-bearing contribution.
}
\end{figure}
\subsection{Angular accuracy}
The Proportional-Integral-Derivative (PID) feedback controls both the frequency of the pulses (used to spin up the rotor) and the magnitude of the current, 32 times per revolution, to stabilize the rotation once the target frequency is reached.
The user specifies the target frequency of the rotor, which can be changed during operation. Knowing the position of the 8 magnets (one every 8 slits), the relative phase of the current in each series of coils is determined. The maximum value of the current is reached when the magnets are halfway between two coils. Due to the inertia of the system, an additional small frequency-dependent phase is inserted to optimize the system performance. Fig.~\ref{fig:rotation} shows a sample test performed with a target frequency of \SI{0.7}{\hertz}.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{Images/PID070_8_d_PID.pdf}
\includegraphics[height=6cm]{Images/PID070_8_d_hist.pdf}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:rotation}
\textit{Left panel:} Relative encoder data acquired with a target frequency of \SI{0.7}{\hertz}. \textit{Right panel:} Histogram of the data during the stable rotation period ($\sim \SI{100}{\minute}$, $\sim$ 4200 rotations).
}
\end{figure}
The PMU must be capable of reconstructing the HWP angle with high fidelity. Encoder performance at room temperature is fully representative of the cryogenic performance of the angular encoder system, because it depends on properties which do not change with temperature: the inertia of the system, the warm readout electronics, and the rotational stability.
A rough estimate of the achievable angular accuracy in arcminute is calculated with the following relation:
\begin{equation}
\sigma_\theta[\SI{}{\arcmin}] = \bar{\sigma} \cdot 360 \cdot f \cdot 60,
\end{equation}
where $f$ is the mean frequency expressed in \SI{}{\hertz} and $\bar{\sigma}$ is the mean value, expressed in seconds, of the standard deviations of the Gaussian distributions of the intervals $\Delta t_i - \frac{\Delta t_0}{64}$, where $\Delta t_i$ and $\Delta t_0$ are the times in seconds read 64 times per revolution by the relative encoder and once per revolution by the absolute encoder, respectively. The \textit{right panel} of Fig.~\ref{fig:rotation} shows the histogram of the data taken in the second half of the test shown in the \textit{left panel} of Fig.~\ref{fig:rotation}.
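As a worked example of this relation (with an illustrative timing scatter, not a measured one):

```python
# Angular accuracy from timing jitter: sigma_theta ['] = sigma_t * 360 * f * 60
sigma_t = 20e-6   # mean timing scatter of slit transits [s] (illustrative)
f = 0.7           # rotation frequency [Hz]

sigma_theta = sigma_t * 360 * f * 60   # [arcmin]
print(f"{sigma_theta:.3f} arcmin")     # -> 0.302 arcmin
```

i.e. a \SI{20}{\micro\second} transit-time scatter at \SI{0.7}{\hertz} corresponds to an angular scatter of roughly \SI{0.3}{\arcmin}.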
Due to the high inertia of the system, all measurements of the position are correlated. We therefore introduce a Kalman filter, which uses the dynamic model, the physical properties of the system, and multiple sequential measurements to estimate the varying quantities better than any single measurement alone.
The input parameters of the filter are the error on the readout data retrieved with the previous estimation and the uncertainty on the acceleration of the system estimated using the rotor inertia and the variation of the coil torque.
The accuracy improvement obtained by means of the Kalman filter ranges from a factor 3 at \SI{0.3}{\hertz} to a factor 8 at \SI{0.9}{\hertz}. The results of the Kalman filter application are shown in Fig.~\ref{fig:kalman} and are compared with the accuracy corresponding to the electronic readout resolution (\SI{1}{\micro\second}).
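A minimal constant-velocity Kalman filter of the kind described, applied to noisy angle readings of a uniform rotation (noise levels and filter tuning are illustrative, not the flight values):

```python
import numpy as np

def kalman_angle(meas, dt, sigma_meas, sigma_acc):
    """Constant-velocity Kalman filter for the state (angle, angular rate)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # dynamic model
    H = np.array([[1.0, 0.0]])                      # we observe the angle only
    Q = sigma_acc**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                 [dt**3 / 2, dt**2]])   # process noise
    R = np.array([[sigma_meas**2]])                 # measurement noise
    x, P, out = np.array([meas[0], 0.0]), np.eye(2), []
    for z in meas:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
dt, omega = 1 / 64 / 0.7, 2 * np.pi * 0.7      # one reading per slit at 0.7 Hz
true = omega * np.arange(2000) * dt            # uniform rotation [rad]
meas = true + rng.normal(0, 1e-3, true.size)   # raw encoder scatter
est = kalman_angle(meas, dt, sigma_meas=1e-3, sigma_acc=1e-3)

raw_rms = np.std(meas[500:] - true[500:])      # skip the convergence transient
kf_rms = np.std(est[500:] - true[500:])
assert kf_rms < raw_rms / 2                    # filtered error well below raw
```

The filter exploits the rotor's inertia (encoded in the small process noise) to smooth the slit-transit readings, in the same spirit as the factor 3--8 improvement reported above.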
The current configuration is not limited by the electronic readout resolution but only by the stability of the rotation. This stability is limited by the current generator resolution which uses a 12-bit DAC. An improvement (from 12-bit to 16-bit) of the resolution is already planned but is not required for the SWIPE/LSPE modulator which has a requirement of \SI{0.05}{\arcmin} at \SI{0.5}{\hertz}.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7cm]{Images/Resolution_0707.pdf}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:kalman}
The raw accuracy of the encoder data (blue dots), the accuracy resulting from the use of a Kalman filter (red squares) and the accuracy corresponding to the electronic readout resolution (\SI{1}{\micro\second}).
}
\end{figure}
\newpage
\section{Expected performance}
\label{sec:performance}
\subsection{Losses}
\label{sec:losses}
The tests performed with the LSPE/SWIPE prototype show that, once the ball-bearing friction is removed, the most important friction contribution comes from the inhomogeneities of the main magnet. The uniformity of the main magnet field should be improved, with the help of the manufacturer, to the 1\% level. This is not trivial but seems feasible because the permanent magnets are smaller than the LSPE/SWIPE ones\cite{Columbro:2021}. A further solution for HFT (thanks to the small radius of the HFT magnet, $\sim\SI{200}{\milli\meter}$) consists in the use of a single magnet with a single magnetization, which will guarantee better uniformity.
By assuming RRR = 2.8 for aluminum 6061-T6\cite{Duthil:article} and the known dependence on frequency and magnetic dipole\cite{Reitz:article}, we can estimate the expected power loss produced by the rotor eddy currents: \SI{1.10}{\milli\watt} for MFT and \SI{1.45}{\milli\watt} for HFT\footnote{Starting from Fig.~\ref{fig:power_loss}, the power loss was estimated at the operating frequency of each telescope and by scaling with the diameter. For example, the HFT (equivalent radius \SI{150}{\milli\meter}) eddy currents were estimated as:
\begin{equation}
P_8^\textit{HFT} = P_8(\SI{1.02}{\hertz}) \times \frac{d_\textit{HFT}^2}{d_\textit{mock}^2} \times RRR = \SI{1.25}{\milli\watt} \times \frac{150^2}{300^2} \times 2.8 = \SI{0.88}{\milli\watt} .
\end{equation}
}.
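Written out numerically, the scaling used in the footnote (with the mockup value $P_8(\SI{1.02}{\hertz}) = \SI{1.25}{\milli\watt}$ read off Fig.~\ref{fig:power_loss}) gives:

```python
# Eddy-current scaling from the mockup to HFT, as in the footnote above:
# the power scales with the square of the magnet-ring size and with the RRR.
P8_mock = 1.25                  # mW, mockup motor-magnet loss at 1.02 Hz
r_hft, r_mock = 150.0, 300.0    # equivalent radii [mm]
RRR = 2.8                       # residual resistivity ratio of Al 6061-T6

P8_hft = P8_mock * (r_hft / r_mock) ** 2 * RRR
print(f"{P8_hft:.2f} mW")       # -> 0.88 mW, the value entering Tab. 4
```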
We expect hysteresis losses to be very small. This is due both to the absence of gravity, which keeps the rotor in position after the release, and to the high homogeneity of the magnetic field, which minimizes hysteresis in the superconductor. We estimate\cite{Davis:article} a contribution $\ll\SI{0.5}{\milli\watt}$, which needs to be confirmed during cryogenic tests.
From parameters reported in Tab.~\ref{tab:MHFT_coils}, the resulting mean force for MFT (HFT) 8-phase motor is \SI{280}{\milli\newton\per\ampere} (\SI{414}{\milli\newton\per\ampere}).
Assuming the same radius $R_*$ for all drag forces, we can find a rough estimate for the required force to spin the rotor:
\begin{equation}
F_{\textit{drag}} = \frac{P}{v} = \frac{P}{2\pi f R_{*}},
\end{equation}
which gives a required force of \SI{1.98}{\milli\newton} (\SI{2.26}{\milli\newton}), meaning that the current required is $\sim$\SI{7}{\milli\ampere} ($\sim$\SI{5}{\milli\ampere}) and the Joule loss is \SI{0.09}{\milli\watt} (\SI{0.05}{\milli\watt}).
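The current estimate follows directly from the quoted drag forces and motor force constants:

```python
# Required drive currents from the drag forces and motor force constants
# quoted in the text.
F_drag = {"MFT": 1.98e-3, "HFT": 2.26e-3}   # [N], required drag force
k_motor = {"MFT": 0.280, "HFT": 0.414}      # [N/A], mean force of 16 coils

for tel in ("MFT", "HFT"):
    I = F_drag[tel] / k_motor[tel]          # required drive current [A]
    print(f"{tel}: {I * 1e3:.1f} mA")
# -> MFT: 7.1 mA, HFT: 5.5 mA, consistent with the ~7 mA / ~5 mA above
```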
As for the harness, we decided to use manganin wires for the sensors and CuBe wires (\SI{0.25}{\milli\meter} diameter) for the motor and actuators, in order to minimize the total heat load produced by the harness (\SI{0.22}{\milli\watt} for each telescope).
\begin{table}[ht]
\caption{Contribution to the power budget. The total expected heat load is $<\SI{4.19}{\milli\watt}$.}
\label{tab:losses}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} & \bf{MFT} & \bf{HFT} \\
\hline
\rule[-1ex]{0pt}{3.5ex} & [\SI{}{\milli\watt}] & [\SI{}{\milli\watt}] \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{8 magnets} & \SI{0.59}{} & \SI{0.88}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Main magnet} & $<$\SI{0.41}{} & $<$\SI{0.57}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Hysteresis} & $<$\SI{0.50}{} & $<$\SI{0.50}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Joule} & \SI{0.09}{} & \SI{0.05}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Harness} & \SI{0.22}{} & \SI{0.22}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Rotor emission} & \SI{0.09}{} & \SI{0.07}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \textbf{Total} & $<\SI{1.90}{}$ & $<\SI{2.29}{}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Tab.~\ref{tab:losses} summarizes all contributions to the power budget. The total expected heat load is $<\SI{4.19}{\milli\watt}$, which is of the order of the total power budget for both PMUs (\SI{4}{\milli\watt}). Because this estimate uses the upper limits of both the main magnet and hysteresis contributions, there is margin to stay within the budget.
The main contribution to the motor eddy currents comes from the aluminum holder of the YBCO. Making the holder out of an electrical insulator such as G10 would remove these eddy currents, but would also thermally insulate the superconductor ring, possibly making its cooldown time too long. The possibility of using an electrical insulator for the upper part of the holder and a thermal conductor (aluminum) for the lower part is under study.
\subsection{HWP Temperature}
The temperature of both HWPs must be $< \SI{20}{\kelvin}$ to reduce the radiative loading on the detector and minimize the amplitude of spurious signals\cite{Columbro_systematics:article, Hileman:article, Salatino:article}.
We use Comsol Multiphysics to build a thermal model of the rotor surrounded by a \SI{5}{\kelvin} environment. The encoder and groove rings are made of aluminum, while the magnetic ring is modeled as iron; this choice is driven by the lack of measured SmCo properties at very low temperature (the physical properties of iron are very similar to those of SmCo up to \SI{77}{\kelvin}). The estimated main-band HWP emissivities are \SI{0.02}{} and \SI{0.03}{} for MFT and HFT (see Fig.~\ref{fig:hwp_band}), respectively, while the emissivity outside the instrument bands is \SI{0.03}{}. The assumed emissivity of the aluminum is \SI{0.5}{} for blackened surfaces. The expected dissipation on the rotor is $\sim\SI{0.1}{\milli\watt}$, mainly coming from the modulated current in the motor coils.
The heating propagated to the edge of the encoder ring (where the coils are located) is also analyzed for more pessimistic cases (\SI{0.2}{\milli\watt} and \SI{0.5}{\milli\watt}).
The real HWP is made of polypropylene and Cu meshes: the polypropylene has strong absorption features in the thermal IR (which help to cool the HWP more quickly in the initial stages) and high transparency at long wavelengths, while the inductive Cu meshes have high emissivity at low frequency. In this frequency range the most relevant heat sources for the HWP are the interplanetary dust (IPD) emission and the instrument emission.
While the instrument emission is taken into account in the simulation with a \SI{5}{\kelvin} environment, the IPD is not, and it varies across the sky with a smooth distribution. This results in a sine-like time profile with the same period as the satellite spin.
The total radiative load at all ecliptic latitudes is of the order of a few \SI{}{\micro\watt}, which is negligible with respect to the heating from the coils. Combined with a HWP thermal time constant of about \SI{10}{}-\SI{20}{\hour} (from simulation of Fig.~\ref{fig:thermal_model}) this produces a negligible sky-synchronous variation of the HWP temperature, resulting in a negligible loading variation on the detectors.
Fig.~\ref{fig:thermal_model} shows that the rotor (HFT solid lines, MFT dashed lines) reaches the equilibrium temperature of $<\SI{20}{\kelvin}$ within a few days under all scenarios that were modeled, minimizing the impact on the detector background and instrument sensitivity.
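The equilibrium temperature can be cross-checked with a simple radiative balance, $P_{\rm load} = \epsilon_{\rm eff} A \, \sigma (T^4 - T_{\rm env}^4)$; in the sketch below the effective emissivity-area product is an assumed illustrative value, not taken from the Comsol model:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
T_env = 5.0              # surrounding stage temperature [K]
P_load = 1.0e-4          # heating on the rotor [W] (the 0.1 mW case)
eA = 0.02                # effective emissivity-area product [m^2] (assumed)

# Solve P_load = eA * sigma * (T^4 - T_env^4) for the equilibrium temperature
T_eq = (P_load / (eA * SIGMA) + T_env**4) ** 0.25
print(f"T_eq = {T_eq:.1f} K")   # -> 17.3 K, below the 20 K requirement
```

An equilibrium value in this range is consistent with the detailed Comsol result of Fig.~\ref{fig:thermal_model}.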
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7cm]{Images/Thermal_model.pdf}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:thermal_model}
Temperature of the HWPs as a function of time for different values of power load on the rotor (\SI{0.1}{\milli\watt} is the most likely case). Dashed lines correspond to the MFT and solid lines to the HFT.
}
\end{figure}
\subsection{Angular accuracy}
The LiteBIRD total angular error budget corresponds to \SI{1}{\arcmin} for MFT and \SI{5}{\arcmin} for HFT and is equally split into three error contributions: angle reconstruction, positioning of the HWP reference, and margin.
As stated before, the angular accuracy of the system is related to the inertia of the rotor, to its speed and to the warm readout resolution. The main parameters and expected performance of LiteBIRD PMUs are summarized in Tab.~\ref{tab:accuracy}.
The MFT modulator is very similar to the tested prototype configuration, and similar performance is expected. The faster rotation and lower inertia of the HFT modulator reduce the rotational stabilization, resulting in a reduced angular accuracy. All the same, the raw HFT encoder accuracy is nearly sufficient to meet the requirement and can be readily improved to perform well below the requirement by use of a Kalman filter. The improvement can be achieved by increasing the resolution of the current generator, allowing finer control of the motor current in the PID feedback loop.
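As an illustration of how a Kalman filter can refine the raw encoder readout, the sketch below smooths noisy angle samples with a constant-angular-velocity state model. The noise levels, time step, and process noise are made-up values for illustration, not LiteBIRD encoder parameters.

```python
import random

def kalman_angle(measurements, dt, r_meas, q_proc):
    """Constant-angular-velocity Kalman filter applied to noisy encoder
    angle samples.  State: [angle, angular rate]; returns filtered angles."""
    theta, omega = measurements[0], 0.0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # loose initial covariance
    filtered = []
    for z in measurements:
        # Predict: theta <- theta + omega*dt,  P <- F P F^T + Q
        theta += omega * dt
        p00 += dt * (p01 + p10) + dt * dt * p11 + q_proc
        p01 += dt * p11
        p10 += dt * p11
        p11 += q_proc
        # Update with the angle measurement z (H = [1, 0])
        s = p00 + r_meas
        k0, k1 = p00 / s, p10 / s
        resid = z - theta
        theta += k0 * resid
        omega += k1 * resid
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
        filtered.append(theta)
    return filtered

# Made-up demo: 1 rad/s spin sampled at 100 Hz with 0.05 rad readout noise.
random.seed(1)
dt = 0.01
truth = [i * dt for i in range(500)]
meas = [t + random.gauss(0.0, 0.05) for t in truth]
est = kalman_angle(meas, dt, r_meas=0.05 ** 2, q_proc=1e-7)
```

After the initial transient, the filtered angle tracks the true angle with an error well below the raw readout noise, which is the mechanism behind the "Kalman accuracy" row of Tab.~\ref{tab:accuracy}.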
\begin{table}[ht]
\caption{Main parameters of LSPE/SWIPE, MFT and HFT configurations. The encoding accuracy of LiteBIRD modulators is estimated using the same configuration used in LSPE/SWIPE.}
\label{tab:accuracy}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} & & \bf{SWIPE} & \bf{MFT} & \bf{HFT} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{HWP diameter} & \SI{}{\milli\meter} & \SI{500}{} & \SI{320}{} & \SI{220}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Frequency} & rpm & \SI{30}{} & \SI{39}{} & \SI{71}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Encoder speed} & \SI{}{\meter\per\second} & \SI{1.0}{} & \SI{1.0}{} & \SI{1.3}{} \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Moment of inertia} & \SI{}{\kilogram\meter^2} & \SI{0.5}{} & $\sim\SI{0.2}{}$ & $\sim\SI{0.05}{}$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Encoding accuracy} & \SI{}{\arcmin} & \SI{0.09}{} & $\sim\SI{0.4}{}$ & $\sim\SI{5.7}{}$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} \bf{Kalman accuracy} & \SI{}{\arcmin} & \SI{0.02}{} & $\sim\SI{0.1}{}$ & $\sim\SI{0.7}{}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
We presented the baseline design of the LiteBIRD PMUs for the mid- and high-frequency telescopes. Both PMUs are located at \SI{5}{\kelvin} and are based on a continuously rotating transmissive HWP whose transmission across the bands is on average greater than 95\%.
We discussed the tests performed on a room-temperature rotating disk, used as a stand-in for the cryogenic HWP rotor, which helped confirm the models used for the LiteBIRD design. The expected total load on the \SI{5}{\kelvin} stage is $<\SI{4.19}{\milli\watt}$, which is close to the requirement of \SI{4}{\milli\watt}. The accuracy of the angle reconstruction is $\SI{0.4}{\arcmin}$ (\SI{5.7}{\arcmin}) for MFT (HFT). The introduction of a Kalman filter improves the accuracy of the angle reconstruction down to \SI{0.1}{\arcmin} and \SI{0.7}{\arcmin} for MFT and HFT, both values lower than the requirements of \SI{1}{\arcmin} and \SI{5}{\arcmin}, respectively.
Both HWP temperatures are expected to stay below \SI{20}{\kelvin}, the maximum value allowed to minimize the impact on the detector background and on the instrument sensitivity.
In conclusion, all values are close to the requirements; moreover, they represent worst-case estimates of the expected performance, giving us some design margin.
\newpage
\acknowledgments
This work is supported in \textbf{Japan} by ISAS/JAXA for Pre-Phase A2 studies, by the acceleration program of JAXA research and development directorate, by the World Premier International Research Center Initiative (WPI) of MEXT, by the JSPS Core-to-Core Program of A. Advanced Research Networks, and by JSPS KAKENHI Grant Numbers JP15H05891, JP17H01115, and JP17H01125. The \textbf{Italian} LiteBIRD phase A contribution is supported by the Italian Space Agency (ASI Grants No. 2020-9-HH.0 and 2016-24-H.1-2018), the National Institute for Nuclear Physics (INFN) and the National Institute for Astrophysics (INAF). The \textbf{French} LiteBIRD phase A contribution is supported by the Centre National d’Etudes Spatiale (CNES), by the Centre National de la Recherche Scientifique (CNRS), and by the Commissariat à l’Energie Atomique (CEA). The \textbf{Canadian} contribution is supported by the Canadian Space Agency. The \textbf{US} contribution is supported by NASA grant no. 80NSSC18K0132.
\textbf{Norwegian} participation in LiteBIRD is supported by the Research Council of Norway (Grant No. 263011). The \textbf{Spanish} LiteBIRD phase A contribution is supported by the Spanish Agencia Estatal de Investigación (AEI), project refs. PID2019-110610RB-C21 and AYA2017-84185-P. Funds that support the \textbf{Swedish} contributions come from the Swedish National Space Agency (SNSA/Rymdstyrelsen) and the Swedish Research Council (Reg. no. 2019-03959). The \textbf{German} participation in LiteBIRD is supported in part by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy (Grant No. EXC-2094 - 390783311). This research used resources of the Central Computing System owned and operated by the Computing Research Center at KEK, as well as resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy.
\section{Introduction}
Actuator shape is an important design variable for feedback synthesis in control of distributed parameter systems. Optimizing the actuator shape can improve the performance of the controller and significantly reduce the cost of control. Numerical simulations in \cite{kalise2017optimal} show significant improvement in the cost and performance of the control.
The optimal shape of actuators has only been studied in a few works. In (\cite{PTZ2013}), the optimal shape and position of an actuator for the wave equation in one spatial dimension are discussed. An actuator is placed on a subset $\omega\subset [0,\pi]$ with a constant Lebesgue measure $L\pi$ for some $L\in (0,1)$. The optimal actuator minimizes the norm of a Hilbert Uniqueness Method (HUM)-based control; such a control steers the system from a given initial state to the zero state in finite time. In (\cite{privat2017actuator}), optimal actuator shape and position for linear parabolic systems are discussed. This paper adopts the same approach as in (\cite{PTZ2013}) but with initial conditions that have randomized Fourier coefficients. The cost is defined as the average of the norm of HUM-based controls. In \cite{kalise2017optimal}, optimal actuator design for linear diffusion equations has been discussed. A quadratic cost function is considered, and the shape and topological derivatives of this function are derived. Optimal sensor design problems are in many ways similar to optimal actuator design problems. In (\cite{privat2015optimal}), optimal sensor shape design has been studied, where the observability is maximized over all admissible sensor shapes. Controllability-based approaches to actuator design were used in (\cite{munch2011optimal,munch2009optimal,munch2013numerical}).
Numerical techniques to optimize the actuator design concurrently with the controller are mostly limited to linear quadratic regulator problems and location of actuators, see for example \cite{allaire2010long,kumar1978optimal-a,kubrusly1985sensors,darivandi2013algorithm}. An $H_\infty$-approach was used in \cite{kasinathan2013h}.
The previous studies have only discussed optimal actuator shape for linear systems. Optimal actuator design problems for nonlinear distributed parameter systems have also been studied. In (\cite{edalatzadehSICON}), it is shown that under certain conditions on the nonlinearity and the cost function, an optimal input and actuator design exist, and optimality equations are derived. The results are applied to the nonlinear railway track model as well as to the semilinear wave model in two spatial dimensions. The existence of an optimal shape in a Banach space for nonlinear systems has been discussed in (\cite{edalatzadehTAC}). The optimality conditions in (\cite{edalatzadehTAC}) are derived for admissible actuator shapes in Hilbert spaces. The actuator shape space in this paper is an arbitrary Banach space. Optimality conditions for actuator shapes over a subset of a Banach space are obtained for nonlinear parabolic systems. A quadratic cost function of the state and input is to be minimized. The theory can be applied to various models; some applications are the nonlinear diffusion equation, the Kuramoto-Sivashinsky equation, and nonlinear beam models (\cite{edalatzadeh2016boundary,edalatzadeh2019stability,edalatzadehSICON,edalatzadehTAC}). In this paper, the application of the theory to optimal actuator shape design for the railway track model is considered.
\section{Notation and Definitions}
Let $\state$ be a Hilbert space. The notation $\state_1\hookrightarrow \state_2$ means that the space $\state_1$ is densely and continuously embedded in $\state_2$. Let $I$ be a set on the real line, and $m$ be a non-negative number. The Banach space $H^m(I;\state)$ is the space of all strongly measurable functions ${\bm{x}}:I\to \state$ for which $\norm{{\bm{x}}(t)}{\state}$ is in $H^m(I,\mathbb{R})$.
For simplicity of notation, when $I$ is an interval, the corresponding space will be indicated without the braces; for example $L^2([0,\tau];\state)$ will be indicated by $L^2(0,\tau;\state) . $
The Banach space ${\mathbb{W}}(0,\tau)$ is the set of all $ {\bm{x}} (\cdot ) \in H^1(0,\tau;\state)\cap L^2(0,\tau;D({\mathcal{A}}))$ with norm \cite[Section II.2]{bensoussan2015book}
\begin{equation}\notag
\norm{{\bm{x}}}{{\mathbb{W}}(0,\tau)}=\norm{\dot{{\bm{x}}}}{L^2(0,\tau;\state)}+\norm{{\mathcal{A}} {\bm{x}}}{L^2(0,\tau;\state)}.
\end{equation}
When there is no ambiguity, the norm on $\state$ will not be explicitly indicated.
For every $p\in [1,\infty]$ and $\alpha\in (0,1)$, the interpolation space $D_{{\mathcal{A}}}(\alpha,p)$ is defined as the set of all ${\bm{x}}_0 \in \ss$ such that the function
\begin{equation}
t \mapsto v(t)\coloneqq\normm{t^{1-\alpha-1/p}{\mathcal{A}} e^{t{\mathcal{A}}}{\bm{x}}_0}
\end{equation}
belongs to $L^p(0,1)$ \cite[Section 2.2.1]{lunardi2012analytic}. The norm on this space is
$$\norm{{\bm{x}}_0 }{D_{{\mathcal{A}}}(\alpha,p)}=\normm{{\bm{x}}_0 }+\norm{v}{L^p(0,1)}.$$
\section{Optimal Actuator Design}
Let ${\bm{x}}(t)$ and ${\bm{u}}(t)$ be the state and input taking values in Hilbert spaces $\state$ and $\cs$, respectively. Also, let ${\bm{r}}$ denote the actuator design parameter that takes value in a compact set $K_{ad}$ in a Banach space $\as$. Consider the following initial value problem (IVP):
\begin{equation}\label{eq-IVP}
\begin{cases}
\dot{{\bm{x}}}(t)=\mc{A}{\bm{x}}(t)+\mc{F}({\bm{x}}(t))+\mc{B}({\bm{r}}){\bm{u}}(t),\quad t>0,\\
{\bm{x}}(0)={\bm{x}}_0.
\end{cases}
\end{equation}
The nonlinear operator ${\mathcal{F}}(\cdot)$ maps a Hilbert space ${\mathbb{V}}$ to $\state$. It is assumed that $D_{{\mathcal{A}}}(1/2,2)\hookrightarrow {{\mathbb{V}}}\hookrightarrow \state.$
The linear operator ${\mathcal{A}}$ is associated with a sesquilinear form $a:{\mathbb{V}}\times {\mathbb{V}}\to \mathbb{C}$ (see \cite[Chapter 4]{lang2012real}). Let there be positive numbers $\alpha$ and $\beta$ such that
\begin{flalign*}
|a({\bm{x}}_1,{\bm{x}}_2)|&\le \alpha \norm{{\bm{x}}_1}{{\mathbb{V}}}\norm{{\bm{x}}_2}{{\mathbb{V}}}, &\forall&{\bm{x}}_1,{\bm{x}}_2\in {\mathbb{V}},\\
\text{Re} \; a({\bm{x}},{\bm{x}})&\ge \beta \norm{{\bm{x}}}{{\mathbb{V}}}^2, &\forall&{\bm{x}}\in {\mathbb{V}}.
\end{flalign*}
The operator ${\mathcal{A}}$ has an extension to $\bar{{\mathcal{A}}}\in\mc L({\mathbb{V}},{\mathbb{V}}^{^*})$ described by
\begin{equation}
\inn{\bar{{\mathcal{A}}} {\bm{v}}}{{\bm{w}}}_{{\mathbb{V}}^{^*},{\mathbb{V}}}=a({\bm{v}},{\bm{w}}), \quad \forall {\bm{v}}, {\bm{w}} \in {\mathbb{V}},
\end{equation}
where ${\mathbb{V}}^{^*}$ denotes the dual of ${\mathbb{V}}$ with respect to pivot space $\state$.
According to \cite{edalatzadehTAC}, there are positive numbers $\tau$, $R_1$, and $R_2$ such that \eqref{eq-IVP} admits a unique solution for any initial conditions ${\bm{x}}_0\in B_{{\mathbb{V}}}(R_2)$ and inputs ${\bm{u}}\in\Bu$ where
\begin{flalign}\label{ad sets}
B_{{\mathbb{V}}}(R_2)&=\left\{{\bm{x}}_0 \in {\mathbb{V}}: \norm{{\bm{x}}_0}{{\mathbb{V}}}\le R_2 \right\},\\
\Bu&=\left\{{\bm{u}}\in L^2(0,\tau;\cs): \norm{{\bm{u}}}{2}\le R_1 \right\}.
\end{flalign}
The mapping ${\bm{x}}=\mc S({\bm{u}},{\bm{r}};{\bm{x}}_0)$ maps input ${\bm{u}}\in L^2(0,\tau;\cs)$, actuator design parameter ${\bm{r}} \in \as$, and initial condition ${\bm{x}}_0\in \state$ to the corresponding solution ${\bm{x}}\in {\mathbb{W}}(0,\tau)$ of \eqref{eq-IVP}.
Consider the cost function
\begin{equation*}
J({\bm{x}},{\bm{u}})=\int_0^\tau\inn{\mc Q{\bm{x}}(t)}{{\bm{x}}(t)}+\inn{\mc R {\bm{u}}(t)}{{\bm{u}}(t)}_{\cs}dt,
\end{equation*}
where $\mc Q$ is a positive semi-definite, self-adjoint bounded linear operator on $\state$, and $\mc{R}$ is a coercive, self-adjoint linear bounded operator on $\cs$. Let $U_{ad}$ be a convex and closed set contained in the interior of $\Bu$.
For a fixed initial condition ${\bm{x}}_0\in B_{{\mathbb{V}}}(R_2)$, consider the following optimization problem over the admissible input set $U_{ad}$ and actuator design set $K_{ad}$
\begin{equation}
\left\{ \begin{array}{ll}
\min&J({\bm{x}},{\bm{u}})\\
\text{s.t.}& {\bm{x}}=\mc S({\bm{u}},{\bm{r}};{\bm{x}}_0),\\
&({\bm{u}},{\bm{r}}) \in U_{ad}\times K_{ad}.
\end{array} \right. \tag{P} \label{eq-optimal problem}
\end{equation}
The existence of an optimizer to this optimization problem is proven in (\cite{edalatzadehTAC}).
\chgs{\begin{defn}
The operator ${\mathcal{G}}:\state\to \Yb$ is said to be G\^ateaux differentiable at ${\bm{x}}\in \state$ in the direction $\tilde{{\bm{x}}}\in \state$, if the limit
\begin{equation}
{\mathcal{G}}'({\bm{x}};\tilde{{\bm{x}}})=\lim_{\epsilon\to 0}{\frac{{\mathcal{G}}({\bm{x}}+\epsilon\tilde{{\bm{x}}})-{\mathcal{G}}({\bm{x}})}{\epsilon}}
\end{equation}
exists in $\Yb$.
\end{defn}}
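For intuition, the definition can be checked numerically in finite dimensions: below, a central finite difference approximates the G\^ateaux derivative of the pointwise cubic map $w\mapsto -\alpha w^3$ (the railway-track nonlinearity used later in this paper) and compares it with the exact derivative $-3\alpha w^2 p$. The grid values and $\alpha$ are arbitrary illustrative data.

```python
def gateaux_fd(G, x, d, eps=1e-5):
    """Central-difference approximation of the Gateaux derivative of G
    at x in the direction d (vectors represented as plain lists)."""
    xp = [xi + eps * di for xi, di in zip(x, d)]
    xm = [xi - eps * di for xi, di in zip(x, d)]
    return [(a - b) / (2.0 * eps) for a, b in zip(G(xp), G(xm))]

ALPHA = 2.0  # illustrative value of the cubic-foundation coefficient
F = lambda w: [-ALPHA * wi ** 3 for wi in w]                          # F(w) = -alpha w^3
dF = lambda w, p: [-3.0 * ALPHA * wi ** 2 * pi for wi, pi in zip(w, p)]  # F'_w p

w = [0.1 * i for i in range(5)]        # arbitrary state samples on a grid
p = [1.0, -0.5, 0.2, 0.0, 0.3]         # arbitrary direction
numeric = gateaux_fd(F, w, p)
exact = dF(w, p)
```

The finite-difference and exact derivatives agree to truncation error, illustrating that the G\^ateaux derivative of this nonlinearity is the linear map claimed in assumption A1.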
The optimality conditions are derived next after assuming G\^ateaux differentiability of nonlinear operators ${\mathcal{F}}({\bm{x}})$ and ${\mathcal{B}}({\bm{r}})$.
\begin{enumerate}
\item[A1.] \label{as-diff F} The nonlinear operator ${\mathcal{F}}(\cdot)$ is G\^ateaux differentiable, and the derivative is linear. Indicate the G\^ateaux derivative of ${\mathcal{F}}(\cdot)$ at ${\bm{x}}$ in the direction ${\bm{p}}$ by ${{\mathcal{F}}}_{{\bm{x}}}^\prime{\bm{p}}$. Furthermore, the mapping ${\bm{x}}\mapsto {{\mathcal{F}}}_{{\bm{x}}}^\prime$ is bounded; that is, bounded sets in ${{\mathbb{V}}}$ are mapped to bounded sets in $\mc L({{\mathbb{V}}},\state)$.
\item[A2.] \chgs{\label{as:diff B} The control operator $\mc{B}({\bm{r}})$ is G\^ateaux differentiable with respect to ${\bm{r}}$ from $K_{ad}$ to $\mc{L}(\cs,\state)$. Indicate the G\^ateaux derivative of $\mc{B}({\bm{r}})$ at ${\bm{r}}^o$ in the direction $\tilde{{\bm{r}}}$ by $\mc{B}'({\bm{r}}^o;\tilde{{\bm{r}}})$. Furthermore, the mapping $\tilde{{\bm{r}}}\mapsto \mc{B}'({\bm{r}}^o;\tilde{{\bm{r}}})$ is bounded; that is, bounded sets in $\as$ are mapped to bounded sets in $\mc L(\cs,\state)$.}
\end{enumerate}
Using these assumptions, the G\^ateaux derivative of the solution map with respect to a trajectory ${\bm{x}}(t)=\mc S({\bm{u}}(t),{\bm{r}};{\bm{x}}_0)$ is calculated. The resulting map is described by a time-varying linear IVP. For $\bm g\in L^2(0,\tau;\state)$, consider the time-varying system
\begin{equation}\label{eq-time var}
\begin{cases}
\dot{{\bm{h}}}(t)=({\mathcal{A}}+{{\mathcal{F}}}_{{\bm{x}}(t)}^\prime) {\bm{h}}(t)+\bm g(t),\\
{\bm{h}}(0)=0.
\end{cases}
\end{equation}
\begin{lem}\label{lem-estimate}
For every $\bm g\in L^2(0,\tau;\state)$, there is a unique solution $\bm h(t)$ to \eqref{eq-time var} in ${\mathbb{W}}(0,\tau)$. Moreover, there is a positive number $c$ independent of $\bm g$ such that
\begin{equation}\label{eq-estimate}
\|{\bm{h}}\|_{{\mathbb{W}}(0,\tau)}\le c \norm{\bm g}{L^2(0,\tau;\state)}.
\end{equation}
\end{lem}
\begin{pf}
The proof follows immediately from \cite[Corollary 5.2]{dier2015}. Let $\mc P(\cdot):[0,\tau]\to \mc L ({\mathbb{V}},\state)$ be such that $\mc P(\cdot){\bm{x}}$ is weakly measurable for all ${\bm{x}}\in {\mathbb{V}}$, and there exists an integrable function $h:[0,\tau]\to [0,\infty)$ such that $\norm{\mc P(t)}{\mc L({\mathbb{V}},\state)}\le h(t)$ for all $t\in [0,\tau]$. Corollary 5.2 in (\cite{dier2015}) states that for every ${\bm{x}}_0\in {\mathbb{V}}$ and $\bm g\in L^2(0,\tau;\state)$, there exists a unique ${\bm{x}}$ in ${\mathbb{W}}(0,\tau)$ such that
\begin{equation}
\begin{cases}
\dot{{\bm{x}}}(t)=({\mathcal{A}} +\mc P(t)) {\bm{x}}(t)+\bm g(t),\\
{\bm{x}}(0)={\bm{x}}_0.
\end{cases}
\end{equation}
Moreover, there exists a constant $c>0$ independent of ${\bm{x}}_0$ and $\bm g(t)$ such that
\begin{equation}
\norm{{\bm{x}}}{{\mathbb{W}}(0,\tau)}^2\le c \left(\norm{\bm g}{L^2(0,\tau;\state)}^2+\norm{{\bm{x}}_0}{{\mathbb{V}}}^2\right).
\end{equation}
Since ${\mathbb{W}}(0,\tau)$ is embedded in $C(0,\tau;{\mathbb{V}})$, the state ${\bm{x}}(t)$ is bounded in ${\mathbb{V}}$ for all $t\in [0,\tau]$. This together with G\^ateaux differentiablity of ${\mathcal{F}}(\cdot)$ ensures that there is a positive number $M_{{\mathcal{F}}}$ such that
\begin{equation}
\sup_{t\in [0,\tau]}\norm{{{\mathcal{F}}}_{{\bm{x}}(t)}^\prime}{\mc L({\mathbb{V}},\state)}\le M_{{\mathcal{F}}}.
\end{equation}
Thus, replacing the operator $\mc P(t)$ with ${{\mathcal{F}}}_{{\bm{x}}(t)}^\prime$ and noting that
\begin{equation}\label{d1}
\norm{\mc P(t)}{\mc L({\mathbb{V}},\state)}\le M_{{\mathcal{F}}},
\end{equation}
proves the lemma.
\end{pf}
\begin{prop}\label{prop-diff}
\chgs{Let assumptions A1 and A2 hold. The solution map $\mc S({\bm{u}}(t),{\bm{r}};{\bm{x}}_0)$ is G\^ateaux differentiable with respect to each ${\bm{u}}(t)$ and ${\bm{r}}$ in $U_{ad}\times K_{ad}$. Let ${\bm{x}}(t)=\mc S({\bm{u}}(t),{\bm{r}};{\bm{x}}_0)$. The G\^ateaux derivative of $\mc S({\bm{u}}(t),{\bm{r}};{\bm{x}}_0)$ at ${\bm{r}}$ in the direction $\tilde{{\bm{r}}}$ is the mapping $\mc S'({\bm{u}}(t),{\bm{r}};{\bm{x}}_0,\tilde{{\bm{r}}}) :\as\to {\mathbb{W}}(0,\tau)$, $\tilde{{\bm{r}}}\mapsto {\bm{z}}(t)$, where ${\bm{z}}(t)$ is the strict solution to
\begin{equation}
\begin{cases}
\dot{{\bm{z}}}(t)=({\mathcal{A}}+{{\mathcal{F}}}_{{\bm{x}}(t)}^\prime) {\bm{z}}(t)+{\mathcal{B}}'({\bm{r}}^o;\tilde{{\bm{r}}}){\bm{u}}(t),\\
{\bm{z}}(0)=0.
\end{cases}
\end{equation}}
\end{prop}
\begin{thm}\label{thm-optimality}
Suppose assumptions A1 and A2 hold. For any initial condition ${\bm{x}}_0\in\state$, let the pair $({\bm{u}}^o,{\bm{r}}^o)\in U_{ad}\times K_{ad}$ be a local minimizer of the optimization problem \eqref{eq-optimal problem} with the optimal trajectory ${\bm{x}}^o=\mc{S}({\bm{u}}^o;{\bm{r}}^o,{\bm{x}}_0)$ and let
${\bm{p}}^o(t)$ indicate the strict solution in ${\mathbb{W}}(0,\tau)^*$ of the final value problem
\begin{equation}\label{adj}
\dot{{\bm{p}}}^o(t)=-(\mc{A}^*+{{\mathcal{F}}_{{\bm{x}}^o(t)}^\prime}^*){\bm{p}}^o(t)-\mc Q {\bm{x}}^o(t), \quad {\bm{p}}^o(\tau)=0.
\end{equation}
Then ${\bm{u}}^o(t)=-\mc R^{-1} {\mathcal{B}}^*({\bm{r}}^o){\bm{p}}^o(t)$ and
\begin{equation}\label{optim r}
\int_0^\tau \inn{{\bm{p}}^o(t)}{{\mathcal{B}}'({\bm{r}}^o;\tilde{{\bm{r}}}){\bm{u}}^o(t)}dt=0
\end{equation}
for all $\tilde{{\bm{r}}}\in K_{ad}$.
\end{thm}
{\bf Outline of Proof:}
Theorem 11 in (\cite{edalatzadehTAC}) ensures that ${\bm{u}}^o(t)=-\mc R^{-1} {\mathcal{B}}^*({\bm{r}}^o){\bm{p}}^o(t)$. To obtain \eqref{optim r}, the G\^ateaux derivative of $$\mc G({\bm{u}},{\bm{r}}):=J(\mc S({\bm{u}},{\bm{r}};{\bm{x}}_0),{\bm{u}})$$ at ${\bm{r}}^o$ in the direction $\tilde{{\bm{r}}}$ is taken. After some manipulation and integration by parts, the following G\^ateaux derivative is obtained
\begin{equation}
\mc G'({\bm{u}}^o,{\bm{r}}^o;\tilde{{\bm{r}}}) =\int_0^\tau \inn{{\bm{p}}^o(t)}{{\mathcal{B}}'({\bm{r}}^o;\tilde{{\bm{r}}}){\bm{u}}^o(t)}dt.
\end{equation}
The optimality condition now follows by setting the G\^ateaux derivative to zero.
\section{Railway Track Model}
Letting $[0, \tau]$ indicate the time interval of interest, the following semilinear PDE governs the motion of a railway track $w(x,t)$ with initial deflection $w_0(x)$ and rate of deflection $v_0(x)$ on $(x,t)\in [0,1]\times [0,\tau]$ (\cite{edalatzadehSICON}):
\begin{equation}\label{track pde}\notag
\begin{cases}
\partial_{tt}w+\partial_{xx}(\partial_{xx}w+C_d\partial_{xxt}w)+\mu \partial_t w+w+\alpha w^3\\[1mm]
\qquad=b(x,r)u(t),\\[1mm]
\allowdisplaybreaks w(x,0)=w_0(x), \quad \partial_t w(x,0)=v_0(x),\\[1mm]
\allowdisplaybreaks w(0,t)=w(1,t)=0,\\[1mm]
\allowdisplaybreaks \partial_{xx} w(0,t)+C_d\partial_{xxt}w(0,t)=0,\\[1mm]
\partial_{xx} w(1,t)+C_d\partial_{xxt} w(1,t)=0,
\end{cases}
\end{equation}
where $\partial_x$ denotes the partial derivative with respect to $x$; the derivative with respect to $t$ is indicated similarly. The nonlinear part of the foundation elasticity corresponds to the coefficient $\alpha$. The constant $\mu\ge 0$ is the viscous damping coefficient of the foundation, and $C_d\ge 0$ is the coefficient of Kelvin-Voigt damping in the beam.
The track deflection is controlled by a single external force $u(t)$.
The shape influence function $b(x,r)$ is a continuous function on $[0,1]$, parametrized by the parameter $r$ that describes its dependence on the actuator design. The function $b(x,r)$ is differentiable with respect to $r$.
Choose state ${\bm{x}}:=( w,v)$ where $v=\partial_t w$ and define the state space $\state:=H^2(0,1 )\cap H_0^1(0,1 )\times L^2(0,1 )$ with norm
\begin{equation}
\| (w,v) \|^2=\int_0^{1} (\partial_{xx} w)^2+w^2+ v^2 \, dx \label{Track-eq: norm}.
\end{equation}
Define the closed self-adjoint positive operator
\begin{flalign}
&{A}_0w:= \partial_{xxxx} w,\notag \\
&D({A}_0):=\left\lbrace w\in H^4(0,1 )| \, w(0)=w(1)=0,\right.\notag\\
&\qquad \qquad \left. \partial_{xx}w(0)=\partial_{xx}w(1)=0 \right\rbrace,
\end{flalign}
and also define
\begin{flalign}
{A}_{\scriptscriptstyle KV}(w,v)&:=\left(v,-{A}_0(w+C_dv)\right),\\
{K}(w,v)&:=(0,-(w+\mu v)),
\end{flalign}
with
\begin{flalign}
D({A}_{\scriptscriptstyle KV}):=&\left\lbrace(w,v)\in \state| \, v\in H^2(0,1)\cap H_0^1(0,1), \right.\notag \\
& \left. w+C_dv\in D({A}_0) \right\rbrace.
\end{flalign}
The state operator ${\mathcal{A}}$ is defined as
\begin{flalign}
\allowdisplaybreaks {\mathcal{A}} :={A}_{\scriptscriptstyle KV}+{K}, \text{ with } D ( {\mathcal{A}} ) = D({A}_{\scriptscriptstyle KV} ).
\end{flalign}
Let $\cs:=\mathbb{R}$, the input operator ${\mathcal{B}}(r):\cs\to \state$ is
\begin{equation}
{\mathcal{B}}(r)u:=(0,b(x,r)u).
\end{equation}
The nonlinear operator ${\mathcal{F}}(\cdot):\state\to \state$ is defined as
\begin{equation}
{\mathcal{F}}(w,v):=(0,-\alpha w^3).
\end{equation}
With these definitions and by setting the state ${\bm{x}}(t)=(w(\cdot,t),v(\cdot,t))$ and initial condition ${\bm{x}}_0=(w_0(\cdot),v_0(\cdot))$, the state space representation of the railway track model is
\begin{equation}\label{sys}
\begin{cases}
\dot{{\bm{x}}}(t)={\mathcal{A}} {\bm{x}}(t)+{\mathcal{F}}({\bm{x}}(t))+{\mathcal{B}}(r)u(t),\quad t\in (0,\tau],\\
{\bm{x}}(0)={\bm{x}}_0.
\end{cases}
\end{equation}
Well-posedness and stability of this model has been established (\cite{edalatzadeh2019stability}).
The set of admissible inputs, $U_{ad}$, is a convex and closed subset of $L^2(0,\tau)$. The set of admissible actuator designs is denoted by $K_{ad}$ and is compact in a Banach space $\as$.
The cost function is
\begin{equation}
J(u,r;{\bm{x}}_0):=\frac{1}{2}\int_0^\tau\normm{{\bm{x}}(t)}^2+\gamma \normm{u(t)}^2\; dt.
\label{eq-cost}
\end{equation}
For a fixed initial condition ${\bm{x}}_0\in \state$, consider the following optimization problem over the admissible input set $U_{ad}$ and actuator design set $K_{ad}$
\begin{equation}\label{optim}
\left\{ \begin{array}{ll}
\underset{r\in K_{ad}}{\min}\;\underset{u\in U_{ad}}{\min}&J(u,r;{\bm{x}}_0)\\
\text{such that}& {\bm{x}}(t) \text{ solves }\eqref{sys}.
\end{array} \right. \tag{P}
\end{equation}
Let ${\bm{p}}^o(t)$ indicate the strict solution of the final value problem
\begin{equation}\label{adj2}
\dot{{\bm{p}}}^o(t)=-({{\mathcal{A}}}^*+{{\mathcal{F}}_{{\bm{x}}^o(t)}^\prime}^*){\bm{p}}^o(t)-\frac{1}{2} {\bm{x}}^o(t), \quad {\bm{p}}^o(\tau)=0.
\end{equation}
Any solution $(u^o,r^o)$ to \eqref{optim} in the interior of $U_{ad}\times K_{ad}$ satisfies
\begin{subequations}\label{opt}
\begin{flalign}
\frac{\gamma}{2} u^o(t)+{\mathcal{B}}^*(r^o){\bm{p}}^o(t)&=0,\label{opt1}\\
\int_0^\tau \inn{{\bm{p}}^o(t)}{{\mathcal{B}}'(r^o;\tilde{r})u^o(t)}dt&=0 \quad \forall \tilde{r}\in K_{ad}.\label{opt2}
\end{flalign}
\end{subequations}
Write ${\bm{p}}^o(t)=(f^o(x,t),g^o(x,t))$, and let $b'(x,r^o;\tilde{r})$ be the G\^ateaux derivative of $b(x,r)$ at $r^o$ in the direction $\tilde{r}$. The optimality conditions can then be written as
\begin{flalign}
&u^o(t)=\frac{2}{\gamma}\int_0^1b(x,r^o)g^o(x,t)dx,\\
&\int_0^\tau \int_0^1 g^o(x,t)b'(x,r^o;\tilde{r})dxdt=0, \quad \forall \tilde{r}\in K_{ad}.
\end{flalign}
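As a toy illustration of the last optimality condition, suppose the shape influence function is a Gaussian bump $b(x,r)=\exp(-(x-r)^2/2s^2)$ whose center $r$ is the design parameter (a hypothetical choice, not specified in the text), and take made-up smooth functions for $g^o$ and $u^o$. The sketch below verifies that the Riemann-sum shape gradient $\int_0^\tau\int_0^1 g\,\partial_r b\,u\,dx\,dt$ matches a finite-difference derivative of the pairing with respect to $r$.

```python
import math

S = 0.1  # assumed width of a hypothetical Gaussian actuator profile

def b(x, r):
    """Hypothetical shape function b(x, r): Gaussian bump centered at r."""
    return math.exp(-((x - r) ** 2) / (2.0 * S * S))

def db_dr(x, r):
    """Derivative of b with respect to the center parameter r."""
    return (x - r) / (S * S) * b(x, r)

def pairing(r, xs, ts, g, u):
    """Riemann sum for  int_0^tau int_0^1 g(x,t) b(x,r) u(t) dx dt."""
    dx, dt = xs[1] - xs[0], ts[1] - ts[0]
    return sum(g(x, t) * b(x, r) * u(t) for x in xs for t in ts) * dx * dt

def pairing_grad(r, xs, ts, g, u):
    """Same double integral with b replaced by its r-derivative."""
    dx, dt = xs[1] - xs[0], ts[1] - ts[0]
    return sum(g(x, t) * db_dr(x, r) * u(t) for x in xs for t in ts) * dx * dt

# Made-up smooth adjoint component g and input u, for illustration only.
g = lambda x, t: math.sin(math.pi * x) * math.exp(-t)
u = lambda t: math.cos(t)
xs = [i / 100.0 for i in range(101)]
ts = [i / 100.0 for i in range(101)]

r0, h = 0.4, 1e-5
fd = (pairing(r0 + h, xs, ts, g, u) - pairing(r0 - h, xs, ts, g, u)) / (2 * h)
grad = pairing_grad(r0, xs, ts, g, u)
```

In a numerical scheme one would drive this gradient to zero (projected onto $K_{ad}$) while updating $u^o$ from the first optimality condition; the check above only illustrates the chain rule behind the shape condition.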
\section{Future Directions}
Future directions will focus on the application of the theory to various PDE models. An example of a nonlinear parabolic PDE model is the Kuramoto-Sivashinsky equation, which models the propagation of flames as well as the dynamics of thin-film fluids. Another example is nonlinear models of flexible beams with Kelvin-Voigt damping. Future research will also develop suitable numerical schemes for the computation of optimal actuator shapes.
\section{Introduction and main results}
\label{sec:intro}
Let $G$ be one of the classical Lie groups
\[
{\mathrm{GL}}_{n}(\mathbb R),\,{\mathrm{GL}}_{n}(\mathbb{C}),\,\operatorname{O}(p,q),\, \operatorname{O}_{n}(\mathbb{C}),\, \operatorname{U}(p,q),
\]
and let $G'$ be respectively the subgroup
\[
{\mathrm{GL}}_{n-1}(\mathbb R),\,{\mathrm{GL}}_{n-1}(\mathbb{C}),\,\operatorname{O}(p,q-1),\, \operatorname{O}_{n-1}(\mathbb{C}),\, \operatorname{U}(p,q-1),
\]
of $G$, where $p\geq 0$, $q,n\geq 1$. The subgroup $G'$ embeds in
$G$ in the usual way as follows. For general linear groups,
\[
{\mathrm{GL}}_{n-1}(\mathbb{K})=\left\{\,\left[\begin{array}{cc} a&0\\
0&1
\end{array}\right]\in {\mathrm{GL}}_{n}(\mathbb{K})\mid
a \in {\mathrm{GL}}_{n-1}(\mathbb{K}) \right\}\subset {\mathrm{GL}}_n(\mathbb{K}),
\]
where $\mathbb{K}$ stands for either $\mathbb R$ or $\mathbb{C}$. The real orthogonal
groups are realized as
\[
\operatorname{O}(p,q)=\left\{\,x\in {\mathrm{GL}}_{p+q}(\mathbb R)\mid {x}^t\,I_{p,q}\,x=I_{p,q} \right\},
\]
where $I_{p,q}$ is the diagonal matrix of size $p+q$ whose first
$p$ diagonal entries are $1$, and last $q$ diagonal entries are
$-1$. Then
\[
\operatorname{O}(p,q-1)={\mathrm{GL}}_{p+q-1}(\mathbb R)\cap \operatorname{O}(p,q)\subset \operatorname{O}(p,q).
\]
Likewise for the complex orthogonal groups and the unitary groups.
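For concreteness, the defining relation and the embedding can be verified numerically on a small example: a hyperbolic rotation lies in $\operatorname{O}(1,1)$, and padding it with a $1$ in the last diagonal entry lands in $\operatorname{O}(1,2)$, mirroring $\operatorname{O}(p,q-1)={\mathrm{GL}}_{p+q-1}(\mathbb R)\cap \operatorname{O}(p,q)$. The code below is a plain-Python sanity check of these facts.

```python
import math

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def I_pq(p, q):
    """Diagonal matrix with p entries equal to +1 followed by q entries -1."""
    return [[(1.0 if i < p else -1.0) if i == j else 0.0
             for j in range(p + q)] for i in range(p + q)]

def in_O_pq(x, p, q, tol=1e-12):
    """Check the defining relation x^t I_{p,q} x = I_{p,q}."""
    lhs = matmul(transpose(x), matmul(I_pq(p, q), x))
    ref = I_pq(p, q)
    n = p + q
    return all(abs(lhs[i][j] - ref[i][j]) < tol
               for i in range(n) for j in range(n))

# A hyperbolic rotation lies in O(1,1); embedding it as diag(a, 1)
# gives an element of O(1,2) fixing the last coordinate.
t = 0.7
a = [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]
embedded = [row + [0.0] for row in a] + [[0.0, 0.0, 1.0]]
```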
By a representation of $G$, we mean a continuous linear action of
$G$ on a complete, locally convex, Hausdorff, complex topological
vector space. We say that a representation $V$ of $G$ is a
Harish-Chandra smooth representation if it is Fr\'{e}chet, smooth,
of moderate growth, admissible and $\operatorname{Z}({\mathfrak g}_{\mathbb{C}})$-finite. Here and as
usual, $\operatorname{Z}({\mathfrak g}_{\mathbb{C}})$ is the center of the universal enveloping
algebra $\operatorname{U}({\mathfrak g}_{\mathbb{C}})$ of the complexified Lie algebra ${\mathfrak g}_{\mathbb{C}}$ of $G$.
The reader may consult \cite{Cass} and \cite[Chapter 11]{W2} for
more details about Harish-Chandra smooth representations.
The main purpose of this paper is to prove the following
\begin{introtheorem}
\label{thm:mainA}
Let $V$ and $V'$ be irreducible Harish-Chandra smooth
representations of $G$ and $G'$, respectively. Then the space of
$G'$-intertwining continuous linear maps from $V$ to $V'$ is at
most one dimensional, i.e.,
\[
\dim {\mathrm{Hom}}_{G'}(V,V')\leq 1.
\]
\end{introtheorem}
Theorem \ref{thm:mainA} and its p-adic analog have been expected
(first by Bernstein) since the 1980s. When $V'$ is the trivial
representation, Theorem \ref{thm:mainA} is proved in \cite{AGS1},
\cite{AGS2} and \cite{Dijk}, in the case of general linear,
orthogonal, and unitary groups, respectively. The p-adic analog of
Theorem \ref{thm:mainA} is proved in \cite{AGRS} in its full
generality.
{\vspace{0.2in}}
\begin{remark}
Denote by $K$ the maximal compact subgroup
\[
\operatorname{O}(n),\, \operatorname{U}(n),\, \operatorname{O}(p)\times \operatorname{O}(q),\, \operatorname{O}(n),\, \operatorname{U}(p)\times \operatorname{U}(q)
\]
of $G$, according to the five cases under consideration. Set
$K'=G'\cap K$, and denote by $\mathfrak g'_\mathbb{C}$ the complexified Lie algebra
of $G'$. Given Harish-Chandra smooth representations $V$ and $V'$
of $G$ and $G'$ (respectively), we expect that a certain form of
automatic continuity theorem will imply that
\[
{\mathrm{Hom}}_{(\mathfrak g'_\mathbb{C},K')}(V_{K},V'_{K'})={\mathrm{Hom}}_{G'}(V,V'),
\]
where $V_{K}$ is the underlying $(\mathfrak g_\mathbb{C},K)$-module of $V$,
similarly for $V'_{K'}$. Therefore, a $(\mathfrak g_\mathbb{C},K)$-module version
of Theorem \ref{thm:mainA} should hold. Consequently, we expect
the theorem to remain true whenever $V$ and $V'$ are irreducible
admissible representations.
\end{remark}
{\vspace{0.2in}} For any (smooth) manifold $M$, denote by $\textit{C}^{-\infty}(M)$
the space of generalized functions on $M$, which by definition
consists of continuous linear functionals on ${\mathrm {D}}_c^\infty(M)$,
the space of (complex) smooth densities on $M$ with compact
supports. The latter is equipped with the usual inductive smooth
topology.
By (a version of) the Gelfand-Kazhdan criterion, Theorem
\ref{thm:mainA} is a consequence of the following result. See
Proposition \ref{gkmo}.
\begin{introtheorem}
\label{thm:mainB} Let $f \in \textit{C}^{-\infty}(G)$ satisfy
\[
f(gxg^{-1})=f(x)
\]
for all $g\in G'$. Then we have
\[
f(x^\sigma)=f(x),
\]
where $\sigma $ is the anti-involution of $G$ given by
\[
x^\sigma =\left\{\begin{array}{ll}
x^t,&\quad\textrm{if $G$ is a general linear group,}\\
x^{-1},&\quad\textrm{if $G$ is an orthogonal group,}\\
\bar{x}^{-1},&\quad\textrm{if $G$ is a unitary
group.}\\
\end{array}
\right.
\]
\end{introtheorem}
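In the general linear case, $\sigma(x)=x^t$ is indeed an anti-involution: $(xy)^\sigma=y^\sigma x^\sigma$ and $(x^\sigma)^\sigma=x$. The small numerical check below verifies both identities on sample $2\times 2$ real matrices; the orthogonal and unitary cases, which involve inverses and complex conjugation, are analogous.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sigma(x):
    """The anti-involution on GL_n(R): x -> x^t."""
    return [list(col) for col in zip(*x)]

# Sample invertible 2x2 matrices (entries chosen so the check is exact).
x = [[1.0, 2.0], [3.0, 4.0]]
y = [[0.0, 1.0], [-1.0, 2.0]]
```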
We record another consequence of Theorem \ref{thm:mainB}, in the
case of general linear groups. As before, let $\mathbb{K}$ be either $\mathbb R$
or $\mathbb{C}$. Denote by $\mathrm{P}_{n}(\mathbb{K})$ the subgroup of
${\mathrm{GL}}_{n}(\mathbb{K})$ consisting of matrices whose last row is
$[0,0,\cdots,0,1]$. Since $\mathrm{P}_{n}(\mathbb{K})$ contains
${\mathrm{GL}}_{n-1}(\mathbb{K})$, and since $\mathrm{P}_{n}(\mathbb{K})$,
$\mathrm{P}_{n}(\mathbb{K})^t$ and the center $\mathbb{K}^\times$ generate the
group ${\mathrm{GL}}_{n}(\mathbb{K})$, we have the following
\begin{introcorollary}
\label{cor} Every generalized function on ${\mathrm{GL}}_{n}(\mathbb{K})$ which is
invariant under the adjoint action of $\mathrm{P}_{n}(\mathbb{K})$ is
invariant under the adjoint action of the whole group
${\mathrm{GL}}_{n}(\mathbb{K})$.
\end{introcorollary}
We remark that under the additional assumption that the
generalized function is an eigenvector of the algebra of
bi-invariant differential operators on ${\mathrm{GL}}_{n}(\mathbb{K})$, Corollary
\ref{cor} is the main result of \cite{B03} (Theorem 1.4). As
observed by Kirillov, this implies the validity of his famous
conjecture on ${\mathrm{GL}}_{n}(\mathbb{K})$, namely that every irreducible unitary
representation of ${\mathrm{GL}}_{n}(\mathbb{K})$ remains irreducible when
restricted to the subgroup $\mathrm{P}_{n}(\mathbb{K})$. We refer the
readers to \cite{B03} for details.
{\vspace{0.2in}}
Here are a few words on the approach, contents, and the
organization of this paper. In Section \ref{sec:rigid}, we examine
the space of tempered generalized functions with support
properties for both the functions and their Fourier transforms, as
a module for the Weyl algebra. A key result (Proposition
\ref{quad}) says that certain such modules are completely reducible
with expected irreducible factors. In Section
\ref{sec:eigenvalue}, we introduce the notion of a hermitian
$A$-module, where $A$ is a commutative involutive algebra over
$\mathbb R$. Then the group $G$ in this paper becomes the isometry group
$\operatorname{U}(E)$ of a hermitian $A$-module $E$, corresponding to one of
the five simple commutative involutive algebras $A$. We then prove
in this context (Proposition \ref{glk}) that a (weighted) Euler
vector field acts semisimply on a certain space of tempered
generalized functions on $E$, and all its eigenvalues are
nonnegative integers. Note that the proof of this positivity
result depends in a rather crucial way on the rigidity assertions of
Section \ref{sec:rigid}. In Section \ref{sec:nullcone}, we
introduce a group $\tilde{\operatorname{U}}(E)$ and an action of
$\tilde{\operatorname{U}}(E)$ on $\u(E)\times E$, where $\u(E)$ is the Lie
algebra of $\operatorname{U}(E)$. The group $\tilde{\operatorname{U}}(E)$ has a quadratic
character $\chi _E$ with kernel $\operatorname{U}(E)$, and the key object of
concern is then $\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$, the space
of $\chi _E$-equivariant tempered generalized functions on
$\u(E)\times E$. We prove in Proposition \ref{indn} a reduction
result for such generalized functions within the null cone, by
using the metrical properness of nondistinguished nilpotent orbits,
or by appealing to the eigenvalue estimate of Section
\ref{sec:eigenvalue} for distinguished nilpotent orbits. Sections
\ref{sec:rigid}, \ref{sec:eigenvalue}, and \ref{sec:nullcone} are
at the heart of our approach.
In Section \ref{sec:reduction}, we carry out the reduction to the
null cone (Proposition \ref{descent2}) by a form of Harish-Chandra
descent. We then see in Section \ref{sec:proofB} that results of
Sections \ref{sec:nullcone} and \ref{sec:reduction} allow us to
conclude the vanishing of $\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$.
This leads us to Theorem \ref{thm3}, which is a reformulation of
Theorem \ref{thm:mainB}. In Section \ref{sec:proofA}, we derive
Theorem \ref{thm:mainA} from Theorem \ref{thm:mainB} by using a
version of the Gelfand-Kazhdan criterion. Notwithstanding the fact
that the general lines of the concluding three sections are known
to the experts (see \cite{GK,Be,JR,AGRS} for related references),
the approaches taken by the current article, in terms of hermitian
$A$-modules, have some distinct advantages, at least for the
problem at hand.
\section{Rigidity of some generalized functions}
\label{sec:rigid}
Recall the space $\textit{C}^{-\infty}(M)$ of generalized functions on a
manifold $M$. For any locally closed subset $Z$ of $M$, denote by
\begin{equation} \label{dcmz}
\textit{C}^{-\infty}(M;Z) \subset \textit{C}^{-\infty}(U) \end{equation} the
subspace consisting of all $f$ which are supported in $Z$, where
$U$ is an open subset of $M$ containing $Z$ as a closed subset.
This definition is independent of $U$.
If $M$ is a Nash manifold, denote by
$\textit{C}^{-\xi}(M)\subset\textit{C}^{-\infty}(M)$ the space of tempered
generalized functions on $M$. We refer the interested reader to
\cite{Sh, AG1} on generalities of Nash manifolds and tempered
generalized functions. (For a short introduction, see \cite{JSZ}.)
We say that a subset $Z$ of a Nash manifold $M$ is locally Nash
closed if there is an open semialgebraic subset $U$ of $M$, which
contains $Z$ as a closed semialgebraic subset. In this case,
denote by $\textit{C}^{-\xi}(M;Z)$ the subspace of $\textit{C}^{-\xi}(U)$
consisting of all $f$ which are supported in $Z$. Again this is
independent of $U$.
Let $F$ be a finite dimensional real vector space, which is
canonically a Nash manifold. Denote by $\textit{W}[F]$ the space of all
(complex) polynomial coefficient differential operators on $F$,
called the Weyl algebra of $F$. It contains the algebra $\mathbb{C}[F]$ of
all polynomial functions, and the algebra $\textit{D}[F]$ of all constant
coefficient differential operators. Furthermore, the
multiplication map
\[
\textit{D}[F]\otimes \mathbb{C}[F]\rightarrow \textit{W}[F]
\]
is a vector space isomorphism.
The space $\textit{C}^{-\xi}(F)$ is a $\textit{W}[F]$-module in a natural way.
Here is an example of an irreducible $\textit{W}[F]$-submodule of
$\textit{C}^{-\xi}(F)$ with a simple structure: $\textit{C}^{-\xi}(F; \{0\})$.
It has a distinguished nonzero element $\delta_F$ (called the
Dirac function), which is characterized (up to a nonzero scalar)
by the equation
\[
\lambda \delta_{F}=0, \quad \lambda \in
F^*,
\]
where $F^*$ is the space of real valued linear functionals on $F$.
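For instance, when $F=\mathbb R$ with coordinate $x$, any tempered
generalized function supported at the origin is a finite linear
combination $f=\sum_{k\geq 0} c_k\, \delta_\mathbb R^{(k)}$ of derivatives
of the Dirac function, and since
\[
x\,\delta_\mathbb R^{(k)}=-k\,\delta_\mathbb R^{(k-1)}, \quad k\geq 1,
\]
the equation $xf=0$ forces $c_k=0$ for all $k\geq 1$; this verifies
the stated characterization in the simplest case.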
More generally, we define the following analog of $\textit{C}^{-\xi}(F;
\{0\})$ for each subspace $F'$ of $F$:
\begin{equation}\label{ffperp}
\textit{C}^{-\xi}(F;F')^{\partial F'}:=\{f\in \textit{C}^{-\xi}(F;F')\mid
\textrm{$\partial v$ is nilpotent on $f$},\, v\in F' \}.
\end{equation}
Here and henceforth, $\partial v:=\frac{\partial}{\partial v}$ is
the partial derivative along $v$, and we say that a linear
operator is nilpotent on a vector if some positive power of the
linear operator annihilates the vector.
\begin{lem}\label{lemf1}
If $F=F'\oplus F''$ is a direct sum decomposition, then
\[
\textit{C}^{-\xi}(F;F')^{\partial F'}=\mathbb{C}[F']\otimes \textit{C}^{-\xi}(F'';\{0\}),
\]
and consequently it is an irreducible $\textit{W}[F]$-module.
\end{lem}
\begin{proof} Note that every tempered generalized function has a
finite order. Hence by the well-known result of L. Schwartz about
local representation of a generalized function with support, we
have
\[
\textit{C}^{-\xi}(F;F')=\textit{C}^{-\xi}(F')\otimes \textit{C}^{-\xi}(F'';\{0\}).
\]
The lemma then follows easily.
\end{proof}
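As a concrete instance of Lemma \ref{lemf1}, take $F=\mathbb R^2$ with
coordinates $(x,y)$, with $F'$ the $x$-axis and $F''$ the $y$-axis.
Then $\textit{C}^{-\xi}(F;F')^{\partial F'}$ has basis
\[
x^a\otimes (\partial y)^b\,\delta_{F''}, \quad a,b\geq 0:
\]
the support condition makes every element a finite combination of
transverse derivatives of $\delta_{F''}$ with coefficients which are
tempered generalized functions of $x$, and the local nilpotency of
$\partial x$ forces these coefficients to be polynomials.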
The following lemma says that $\textit{C}^{-\xi}(F;F')^{\partial F'}$ is
typical within a certain category of $\textit{W}[F]$-modules. It may be
viewed as an algebraic version of the Stone-von Neumann theorem.
See \cite{W3}, Lemma 3 of Appendix 1.
\begin{lem}\label{lemf2}
\label{unique} Let ${\mathcal {V}}$ be a $\textit{W}[F]$-module such that every
$\lambda \in (F/F')^*$ and every $\partial v$ ($v\in F'$) act
locally nilpotently. Then ${\mathcal {V}}$ splits into a direct sum of
irreducible $\textit{W}[F]$-modules, each of which is isomorphic to
$\textit{C}^{-\xi}(F;F')^{\partial F'}$.
\end{lem}
{\vspace{0.2in}} From now on, we further assume that $F$
is a non-degenerate real quadratic space, i.e., it is equipped
with a non-degenerate symmetric bilinear form $\langle\,,\,\rangle_F$.
Recall that the Fourier transform is the topological linear
automorphism
\[
{\mathcal {F}}_F:{\mathcal {S}}(F)\rightarrow
{\mathcal {S}}(F)
\]
of the Schwartz function space ${\mathcal {S}}(F)$, given by
\[
({\mathcal {F}}_F f)(x)=\int_F f(y)e^{-2\pi \sqrt{-1}\,\langle x,y\rangle_F}\,dy.
\]
Here $dy$ is the Lebesgue measure on $F$, normalized so that the
volume of the cube
\[
\{t_1v_1+t_2v_2+\cdots +t_rv_r\mid 0\leq t_1,t_2,\cdots, t_r\leq 1\}
\]
is $1$, for any orthogonal basis $v_1,v_2,\cdots,v_r$ of $F$ such
that
\begin{equation*}
\langle v_i,v_i\rangle_F=\pm 1, \quad i=1,2,\cdots, r.
\end{equation*}
Recall also that as a topological vector space, $\textit{C}^{-\xi}(F)$
is the strong dual of the Fr\'{e}chet space of Schwartz densities
on $F$. It contains ${\mathcal {S}}(F)$ as a dense subspace, and the Fourier
transform extends continuously to a topological linear isomorphism
\[
{\mathcal {F}}_F: \textit{C}^{-\xi}(F)\rightarrow
\textit{C}^{-\xi}(F),
\]
which is still called the Fourier transform.
For any two closed semialgebraic subsets $Z_1$ and $Z_2$ of $F$,
denote by
\begin{equation}
\label{dfz1z2} \textit{C}^{-\xi}(F;Z_1,Z_2) \subset \textit{C}^{-\xi}(F)
\end{equation} the subspace consisting of all $f$ such that
\begin{itemize}
\item
$f$ is supported in $Z_1$, and
\item
${\mathcal {F}}_F (f)$ is supported in $Z_2$.
\end{itemize}
It is a $\textit{W}[F]$-submodule of $\textit{C}^{-\xi}(F)$. For the rest of
the section, we will be concerned with the structure of such
$\textit{W}[F]$-submodules.
For a subspace $F'$ of $F$, let $F'^\perp$ denote its
perpendicular space:
\[
F'^\perp:=\{v\in F\mid \langle v, v'\rangle_F=0, \,v'\in
F'\}.
\]
Note that the Fourier transform ${\mathcal {F}}_F (f)$ of
$f\in\textit{C}^{-\xi}(F)$ is supported in $F'^\perp$ if and only if for
all $v\in F'$, $\partial v$ is nilpotent on $f$. Therefore
\begin{equation}
\label{dsame} \textit{C}^{-\xi}(F;F')^{\partial F'} =
\textit{C}^{-\xi}(F;F',F'^\perp).
\end{equation}
For later use, we record the following
\begin{prpl}\label{lemf3}
If $F^0$ is a non-degenerate subspace of $F$, and
\[
(F^0)^\perp=F^+\oplus F^-
\]
is a decomposition into totally isotropic subspaces, then
\[
\textit{C}^{-\xi}(F;F^+\oplus F^0,F^+\oplus F^0)=\mathbb{C}[F^+]\otimes \textit{C}^{-\xi}(F^-;\{0\})
\otimes \textit{C}^{-\xi}(F^0).
\]
\end{prpl}
\begin{proof}
The proof is similar to that of Lemma \ref{lemf1}.
\end{proof}
Following \cite{JSZ}, we make the following
\begin{dfnl}
(a). A submanifold $Z$ of $F$ is said to be metrically proper if
for every $z\in Z$, the tangent space $\operatorname{T}_z(Z)$ is contained in a
proper non-degenerate subspace of $F$.
(b). A locally closed subset $Z$ of $F$ is said to be piecewise
metrically proper if there is a finite filtration
\[
Z=Z_0\supset Z_1\supset\cdots\supset Z_k=\emptyset
\]
of $Z$ by its closed subsets, so that every difference
$Z_i\setminus Z_{i+1}$ is a metrically proper submanifold of $F$,
$i=0,1,\cdots, k-1$.
\end{dfnl}
Denote by $\Delta_F$ the Laplacian operator on $F$. If
$v_1,v_2,\cdots,v_r$ is a basis of $F$, and $v_1',
v_2',\cdots,v_r'$ is the dual basis with respect to
$\langle\,,\,\rangle_F$, then
\begin{equation}
\label{dlaplace}
\Delta_F=\sum_{i=1}^r \partial v_i\, \partial v_i'.
\end{equation}
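For example, if $F$ is the hyperbolic plane, with a basis $u,v$ of
isotropic vectors satisfying $\langle u,v\rangle_F=1$, then the dual basis of
$u,v$ is $v,u$, and (\ref{dlaplace}) reads
\[
\Delta_F=\partial u\,\partial v+\partial v\,\partial u
=2\,\partial u\,\partial v,
\]
which is the form of the Laplacian that appears in the proof of
Lemma \ref{weyl} below.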
The following is a special case of Lemma 3.2 in \cite{JSZ}.
\begin{leml}
\label{lapl} Let $F$ be a finite dimensional non-degenerate real
quadratic space, and $Z$ be a piecewise metrically proper locally
closed subset of it. If $f\in \textit{C}^{-\infty}(F;Z)$ is annihilated
by some positive power of $\Delta_F$, then $f=0$.
\end{leml}
\noindent {\bf Remark}: A tempered generalized function $f$ on $F$
is annihilated by some positive power of $\Delta_F$ if and only if
its Fourier transform ${\mathcal {F}}_F(f)$ is supported in the null cone
\[
\Gamma _F:=\{v\in F\mid \langle v,v\rangle_F=0\}.
\]
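Indeed, the Fourier transform intertwines $\partial v$ with
multiplication by $2\pi\sqrt{-1}\,\langle \,\cdot\,,v\rangle_F$, so that
\[
{\mathcal {F}}_F(\Delta_F f)=-4\pi^2\,\langle x,x\rangle_F\;{\mathcal {F}}_F(f),
\]
and the assertion reduces to the fact that a tempered generalized
function is annihilated by some positive power of the polynomial
$\langle x,x\rangle_F$ if and only if it is supported in the zero set
$\Gamma_F$ of that polynomial.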
\begin{leml}\label{weyl}
Assume $\dim_\mathbb R F=2r$. Let $F^+$ be a totally isotropic subspace of
$F$ of dimension $r$. Let ${\mathcal {V}}$ be a $\textit{W}[F]$-module on which
$\Delta_F$ and every $\lambda\in (F/F^+)^*$ act locally
nilpotently. Then every $\partial v$ ($v\in F^+$) also acts
locally nilpotently on ${\mathcal {V}}$. Consequently, ${\mathcal {V}}$ is generated by
\[
\{f\in {\mathcal {V}}\mid \lambda f=(\partial v) f=0,
\,\lambda\in (F/F^+)^*,\, v\in F^+\}.
\]
\end{leml}
\begin{proof}
The second assertion follows from the first, in view of Lemma
\ref{lemf1} and Lemma \ref{lemf2}.
To prove the first assertion, take a totally isotropic subspace
$F^-$ of $F$ which is complementary to $F^+$. Note that
\[
\textit{W}[F]=\textit{W}[F^+]\otimes \textit{W}[F^-].
\]
Viewing ${\mathcal {V}}$ as a $\textit{W}[F^-]$-module and applying Lemma \ref{lemf2} to
it, we have
\begin{equation}\label{mathe}
{\mathcal {V}}={\mathcal {V}}'\otimes \textit{C}^{-\xi}(F^-;\{0\}),
\end{equation}
where
\[
{\mathcal {V}}'={\mathrm{Hom}}_{\textit{W}[F^-]}(\textit{C}^{-\xi}(F^-;\{0\}), {\mathcal {V}})
\]
is canonically a $\textit{W}[F^+]$-module, and (\ref{mathe}) is an
identification of $\textit{W}[F]$-modules.
Take a basis $u_1, u_2,\cdots, u_r$ of $F^+$, and the dual basis
$v_1, v_2,\cdots,v_r$ of $F^-$. Then
\[
\Delta_F=2\sum_{i=1}^r \partial u_i \,\partial v_i.
\]
Let $f'\in {\mathcal {V}}'$. Then for some positive integer $m$, we have
\[
\Delta_F^m (f'\otimes \delta_{F^-})=0,
\]
i.e.,
\begin{eqnarray*}
\sum_{\begin{array}{c}
i_1+i_2+\cdots+i_r=m,\\ i_1,i_2,\cdots,i_r\geq 0
\end{array}}
&\left(
\begin{array}{c}
m\\
i_1,i_2,\cdots,i_r
\end{array}
\right)
(\partial u_1)^{i_1}\,(\partial
u_2)^{i_2}\cdots (\partial u_r)^{i_r} \,f'\\
&\otimes \,(\partial v_1)^{i_1}\,(\partial v_2)^{i_2}\cdots
(\partial v_r)^{i_r} \,\delta_{F^-}=0.
\end{eqnarray*}
Since the generalized functions $(\partial v_1)^{i_1}\,(\partial
v_2)^{i_2}\cdots (\partial v_r)^{i_r} \,\delta_{F^-}$ are linearly
independent, it follows that
\[
(\partial u_1)^{i_1}\,(\partial u_2)^{i_2}\cdots (\partial
u_r)^{i_r} \,f'=0 \quad \textrm{whenever } i_1+i_2+\cdots+i_r=m,
\]
which proves that $\partial u_1,\partial u_2,\cdots , \partial
u_r$ act locally nilpotently on ${\mathcal {V}}'$, and the lemma follows.
\end{proof}
We are now ready to prove the following
\begin{prpl}\label{quad}
Assume $\dim_\mathbb R F=2r$. Let $F_1,F_2,\cdots, F_s$ be a set of
(distinct) totally isotropic subspaces of $F$, each of dimension
$r$. Then the $\textit{W}[F]$-module
\[
\textit{C}^{-\xi}(F;F_1\cup F_2\cup \cdots \cup F_s, F_1\cup F_2\cup \cdots \cup F_s)
\]
is completely reducible with finite length, and with each
irreducible factor isomorphic to some $\textit{C}^{-\xi}(F;F_i, F_i)$.
\end{prpl}
\noindent {\bf Remark}: We expect that
\[
\textit{C}^{-\xi}(F;F_1\cup F_2\cup \cdots \cup F_s, F_1\cup F_2\cup \cdots \cup F_s)
=\bigoplus_{i=1}^s \textit{C}^{-\xi}(F;F_i,
F_i).
\]
Proposition \ref{quad} is nevertheless sufficient for our purpose.
\begin{proof}
For any nonempty open connected semialgebraic subset $F^\circ$ of
a totally isotropic subspace $F^+$ of $F$, of dimension $r$, set
\begin{equation}
\label{dff} {\mathcal {V}}_{F,F^\circ}:=\{f\in \textit{C}^{-\xi}(F; F^\circ)\mid
\Delta_F \textrm{ is nilpotent on } f\}.
\end{equation}
Then we have the restriction map
\begin{equation}\label{isocon}
\textit{C}^{-\xi}(F; F^+,F^+)\rightarrow {\mathcal {V}}_{F,F^\circ}.
\end{equation}
We claim that it is a $\textit{W}[F]$-module isomorphism.
Clearly the map (\ref{isocon}) is a well-defined nonzero
$\textit{W}[F]$-module homomorphism. It is injective since $
\textit{C}^{-\xi}(F; F^+,F^+)$ is irreducible. The space
${\mathcal {V}}_{F,F^\circ}$ is also a $\textit{W}[F]$-module satisfying the
conditions of Lemma \ref{weyl}. Therefore, it is generated by
\begin{eqnarray*}
&&\quad\{f\in \textit{C}^{-\xi}(F; F^\circ)\mid \lambda f=(\partial v)f=0,
\,\lambda\in (F/F^+)^*, \,v\in F^+\} \smallskip\\
&&=\{\textrm{constant function on $F^\circ$}\}\otimes
\delta_{F^-},
\end{eqnarray*}
where $F^-$ is a totally isotropic subspace of $F$ which is
complementary to $F^+$. Consequently, the map (\ref{isocon}) is
surjective as well.
Set
\[
\tilde{Z}:=\bigcup F_i\quad\textrm{and}\quad Z:= \bigcup_{i\neq j} (F_i\cap F_j).
\]
Label the connected components of $\tilde{Z}\setminus Z$ by
$F_1^\circ, F_2^\circ, \cdots, F_N^\circ$. Clearly each of them is
contained in some $F_i$ as an open semialgebraic subset. Since
$\tilde{Z}$ is contained in the null cone, any $f\in
\textit{C}^{-\xi}(F;\tilde{Z},\tilde{Z})$ is annihilated by some
positive power of $\Delta _F$. Therefore the restrictions yield a
$\textit{W}[F]$-module homomorphism
\begin{equation}\label{embedf}
\textit{C}^{-\xi}(F;\tilde{Z},\tilde{Z})\rightarrow
\prod_{k=1}^N \, {\mathcal {V}}_{F,F_k^\circ},
\end{equation}
with kernel $\textit{C}^{-\xi}(F;Z,\tilde{Z})$.
Define a filtration
\[
Z=Z_0\supset Z_1\supset\cdots\supset Z_r=\emptyset
\]
of $Z$ by
\[
Z_k:=\bigcup_{\begin{array}{c}
\dim F_i\cap F_j\leq r-1-k,\\
i\neq j
\end{array}} (F_i\cap F_j).
\]
Since every subspace of $F$ of dimension $<r$ is metrically
proper, we see from the filtration that $Z$ is piecewise
metrically proper in $F$. Now Lemma \ref{lapl} implies that
\[
\textit{C}^{-\xi}(F;Z,\tilde{Z})=0.
\]
Therefore the map in (\ref{embedf}) is injective and we finish the
proof by using the isomorphism in (\ref{isocon}).
\end{proof}
\section{Eigenvalue estimate of an Euler vector field}
\label{sec:eigenvalue}
In this section, we first describe a general set-up in order to
work with all five series of classical groups in a uniform manner.
We then prove in this context an eigenvalue estimate of an Euler
vector field acting on a certain space of tempered generalized
functions.
Let $A$ be a finite dimensional semi-simple commutative algebra
over $\mathbb R$, which is thus a finite product of copies of $\mathbb R$ and
$\mathbb{C}$. Let $E$ be an $A$-module of finite dimension, i.e.,
\[
\dim_A(E):=\max \,\{\dim_{A_ 0}(A_0\otimes_A E)\mid A_0
\textrm{ is a quotient field of
$A$}\}< +\infty.
\]
Denote by $\mathfrak g \mathfrak l_A(E)$ the $A$-algebra of $A$-endomorphisms of $E$,
and by
\[
{\mathrm{tr}}_A: \mathfrak g \mathfrak l_A(E)\rightarrow A
\]
the trace map, which is specified by requiring that the diagram
\[
\begin{CD}
\mathfrak g \mathfrak l_A(E)@>{\mathrm{tr}}_A>> A\\
@V 1_{A_0}\otimes VV @VVV\\
\mathfrak g \mathfrak l_{A_0}(A_0\otimes_A E)@>{\mathrm{tr}}>> A_0
\end{CD}
\]
commutes for every quotient field $A_0$ of $A$, where the bottom
arrow is the usual trace map. Set
\[
\sl_A(E):=\{x\in \mathfrak g \mathfrak l_A(E)\mid {\mathrm{tr}}_A(x)=0\}.
\]
From now on, we assume that an $\mathbb R$-algebra involution $\tau$ on
$A$ is given. We call $(A,\tau)$ (or $A$ when $\tau$ is
understood) a commutative involutive algebra. The commutative
involutive algebra $A$ is said to be simple if it is nonzero, and
has no $\tau$-stable ideal except for $\{0\}$ and itself. Every
simple commutative involutive algebra is isomorphic to one of the
following:
\begin{equation}\label{five}
(\mathbb R, 1),\, \,(\mathbb{C},1),\,\, (\mathbb{C}, \bar{\phantom{a}}\,),\,\, (\mathbb R\times \mathbb R,\tau_\mathbb R),
\,\,(\mathbb{C}\times
\mathbb{C},\tau_\mathbb{C}),
\end{equation}
where $\tau_\mathbb R$ and $\tau_\mathbb{C}$ are the maps which interchange the
coordinates. The first three cases will be referred to as Type I,
and the last two cases as Type II.
An $\mathbb R$-bilinear map
\[
\langle\,,\,\rangle_E:E\times E\rightarrow A
\]
is called a hermitian form if it satisfies
\[
\langle u,v\rangle_E=(\langle v,u\rangle_E)^\tau, \quad \langle au,v\rangle_E=a\langle u,
v\rangle_E,\quad a\in A,\, u,v\in E.
\]
We will always assume that $E$ is a hermitian $A$-module, namely
it is equipped with a non-degenerate hermitian form
$\langle\,,\,\rangle_E$. Denote by $\operatorname{U}(E)$ the group of all $A$-module
automorphisms of $E$ which preserve the form $\langle\,,\,\rangle_E$, and
by $\u(E)$ its Lie algebra, which consists of all $x\in \mathfrak g \mathfrak l_A(E)$
such that
\[
\langle xu, v\rangle_E+\langle u, xv\rangle_E=0,\quad u,v\in E.
\]
Set
\[
\operatorname{U}(A):=\{a\in A^\times\mid a^\tau a=1\}.
\]
Through scalar multiplication, there is a homomorphism
\[
\operatorname{U}(A)\rightarrow \operatorname{U}(E),
\]
whose image, which coincides with the center of $\operatorname{U}(E)$, is
denoted by $\operatorname{Z}(E)$. Similarly, set
\[
\u(A):=\{a\in A\mid a^\tau+ a=0\},
\]
and denote by $\mathfrak z(E)$ the image of the map (again through scalar
multiplication)
\[
\u(A)\rightarrow \u(E).
\]
Then $\u(A)$ is the Lie algebra of $\operatorname{U}(A)$, and $\mathfrak z(E)$ is the
Lie algebra of $\operatorname{Z}(E)$. (Note that $\mathfrak z(E)$ may not coincide with
the center of $\u(E)$.) Set
\[
\mathfrak s \mathfrak u(E):=\u(E)\cap \sl_A(E).
\]
Then \begin{equation} \label{decomuez}
\u(E)=\mathfrak z(E)\oplus \mathfrak s \mathfrak u(E).
\end{equation}
{\vspace{0.2in}}
When $(A,\tau)$ is one of the five simple commutative involutive
algebras in (\ref{five}), then accordingly, every hermitian
$A$-module must be isomorphic to one of the following:
\begin{equation}\label{fivee}
\begin{array}{c}
(\mathbb R^{p+q},\,\langle\,,\,\rangle_{\operatorname{O}(p,q)}),\quad (\mathbb{C}^n,\,\langle\,,\,\rangle_{\operatorname{O}(n)}),\quad
(\mathbb{C}^{p+q},\,\langle\,,\,\rangle_{\operatorname{U}(p,q)}),\smallskip\\
(\mathbb R^n\oplus \mathbb R^n,\, \langle\,,\,\rangle_{\mathbb R,n}), \quad (\mathbb{C}^n\oplus \mathbb{C}^n,\,
\langle\,,\,\rangle_{\mathbb{C},n}),
\end{array}
\end{equation}
where $p,q,n\geq 0$, and all spaces involved are considered as
spaces of column vectors. The corresponding hermitian forms are
given as follows: $\langle\,,\,\rangle_{\operatorname{O}(p,q)}$ is the symmetric form
defined by the matrix $I_{p,q}$, $\langle\,,\,\rangle_{\operatorname{O}(n)}$ is the
standard symmetric form on $\mathbb{C}^n$, $\langle\,,\,\rangle_{\operatorname{U}(p,q)}$ is the
usual hermitian form defined by the matrix $I_{p,q}$,
$\langle\,,\,\rangle_{\mathbb R,n}$ and $\langle\,,\,\rangle_{\mathbb{C},n}$ are the maps given
by
\[
\left(\left[
\begin{array}{c}
u\\
v
\end{array}
\right],
\left[\begin{array}{c}
u'\\
v'
\end{array}
\right]\right)\mapsto ({v'}^t u, {u'}^t v).
\]
The group $\operatorname{U}(E)$ corresponding to (\ref{fivee}) is isomorphic to
one of the following:
\begin{equation}\label{fiveg}
\operatorname{O}(p,q),\, \operatorname{O}_n(\mathbb{C}),\,\operatorname{U}(p,q),\,{\mathrm{GL}}_{n}(\mathbb R),\,{\mathrm{GL}}_{n}(\mathbb{C}).
\end{equation}
{\vspace{0.2in}}
Assume in the rest of this section that $A$ is simple. Fix an
element $c_A$ in $\u(A)$ so that
\begin{equation}
\label{dca}
c_A^2=\left\{
\begin{array}{ll}
0,\quad&\textrm{if } (A,\tau)\cong (\mathbb{K}, 1),\smallskip\\
-1,\quad&\textrm{if }(A,\tau)\cong(\mathbb{C},\bar{\phantom{a}}),\smallskip\\
1 ,\quad&\textrm{if } (A,\tau)\cong(\mathbb{K}\times
\mathbb{K},\tau_\mathbb{K}),
\end{array}
\right.
\end{equation}
where $\mathbb{K} =\mathbb R$ or $\mathbb{C}$, as before. Note that such a $c_A$ is
unique up to a sign.
For any $v\in E$, write
\[
\phi_v(u):=\langle u,v\rangle_E\, v, \quad u\in E,
\]
then $\phi_v\in \mathfrak g \mathfrak l_A(E)$. Denote by $\phi'_v\in \sl_A(E)$ the
projection of $\phi_v$ to the second factor according to the
decomposition
\[
\mathfrak g \mathfrak l_A(E)=\{\textrm{scalar multiplication}\}\oplus \sl_A(E).
\]
For any $x\in \mathfrak s \mathfrak u(E)$, set
\begin{equation}\label{psixv}
\psi_{x,v}:=\left\{
\begin{array}{ll}
c_A\,\phi'_v, \quad&\textrm{if } c_A\neq 0,\smallskip\\
x\phi_v+\phi_v x,\quad&\textrm{if } c_A=0,\\
\end{array}
\right.
\end{equation}
which is checked to be in $\mathfrak s \mathfrak u(E)$. Following \cite{AGRS}, we
define
\begin{equation}\label{dee}
E(x):=\{\,v\in E\mid \psi_{x,v}\in [\mathfrak s \mathfrak u(E),x]\,\}.
\end{equation}
For any Lie subalgebra $\mathfrak h$ of $\mathfrak g \mathfrak l_A(E)$, denote by $\mathfrak h^x$ the
centralizer of $x$ in $\mathfrak h$. An element $x\in \mathfrak s \mathfrak u(E)$ is said to be
nilpotent if it is nilpotent as an $\mathbb R$-linear operator on $E$. The
following lemma gives a description of $E(x)$.
\begin{lem}\label{xue}
Let $x\in \mathfrak s \mathfrak u(E)$.
\begin{itemize}
\item[(a)] If $c_A\neq 0$, then
\[
E(x)=\{\,v\in E\mid \langle yv,v\rangle_E=0, \, y\in \sl_A(E)^x\,\}.
\]
\item[(b)] If $c_A=0$, then
\[
E(x)=\{\,v\in E\mid \langle xyv,v\rangle_E=0, \, y\in \mathfrak g \mathfrak l_A(E)^x\,\}.
\]
\item[(c)] In all cases, if $x$ is nilpotent, then
\[
\langle x^i v, v\rangle_E=0\quad \textrm{for all $v\in E(x)$ and $i>0$.}
\]
\end{itemize}
\end{lem}
\begin{proof}
We only prove Part (a). Part (b) is proved similarly, and Part (c)
follows obviously from (a) and (b). So we assume that $c_A\neq 0$
and let $v\in E$. For simplicity, we write
$\psi_{x,v}:=c_A\phi'_v$ as $\psi_v$.
If $v\in E(x)$, then $\psi_v=[z,x]$ for some $z\in\mathfrak s \mathfrak u(E)$.
Therefore, for all $y\in \sl_A(E)^x$,
\begin{eqnarray*}
&&\quad\langle yv,v\rangle_E={\mathrm{tr}}_A(\phi_v y)={\mathrm{tr}}_A(\phi'_v y)\\
&&=(c_A)^{-1}\,{\mathrm{tr}}_A(\psi_v \,y)=(c_A)^{-1}\,{\mathrm{tr}}_A(zxy-xzy)\\
&&=(c_A)^{-1}\,{\mathrm{tr}}_A(zyx-xzy)=0.
\end{eqnarray*}
On the other hand, assume that for all $y\in \sl_A(E)^x$, we have
\[
\langle yv,v\rangle_E=0, \quad \textrm{i.e.,} \quad {\mathrm{tr}}_A(\psi_v \,y)=0.
\]
In particular, we have
\[
\psi_v\in \{\,z\in \mathfrak s \mathfrak u(E)\mid {\mathrm{tr}}_A(zy)=0, \, y\in \mathfrak s \mathfrak u(E)^{x}\,\}.
\]
It is easy to see that the latter space is precisely $[\mathfrak s \mathfrak u(E), x]$
(cf. \cite[Page 14]{CM}). This finishes the proof.
\end{proof}
Denote by
\begin{equation}
\label{dnull}
\Gamma_E:=\{v\in E\mid \langle v,v\rangle_E=0\}
\end{equation}
the null cone of $E$. View $E$ as a real quadratic space by the
form
\[
\langle u, v\rangle_{E,\mathbb R}:={\mathrm{tr}}_{A/\mathbb R}(\langle u, v \rangle_E),
\]
where ${\mathrm{tr}}_{A/\mathbb R}: A\rightarrow \mathbb R$ is the usual trace map for
commutative algebras.
For any finite dimensional real vector space $F$ and any $x\in
{\mathrm{End}}_\mathbb R(F)$, denote by
\begin{equation}
\label{euler} \epsilon_{F,x}\in \textit{W}[F] \end{equation} the vector
field on $F$ whose tangent vector at $v\in F$ is $xv$. When $x=1$
is the identity operator, this is the usual Euler vector field
$\epsilon_F:=\epsilon_{F,1}$.
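To fix ideas, note that on $F=\mathbb R$ with coordinate $x$ one has
$\epsilon_F=x\,\partial x$, which acts on the monomial $x^a$ with
eigenvalue $a$, and on the derivatives of the Dirac function by
\[
\epsilon_F\;\delta_\mathbb R^{(b)}=-(b+1)\,\delta_\mathbb R^{(b)}, \quad b\geq 0,
\]
reflecting the fact that $\delta_\mathbb R^{(b)}$ is homogeneous of degree
$-b-1$. Computations of this elementary kind underlie the eigenvalue
statements that follow.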
For a nilpotent element $\mathbf{e}\in \mathfrak s \mathfrak u(E)$, define
\begin{equation} \label{dve}
{\mathcal {V}}_{E,\mathbf e}:=\textit{C}^{-\xi}(E; E(\mathbf e)\cap\Gamma_E, E(\mathbf e)\cap
\Gamma_E)^{\operatorname{Z}(E)},
\end{equation}
where, as usual, a superscript group indicates the group
invariants. Clearly
\begin{equation}\label{anihilateef}
\epsilon_{E,c_A}f=0,\quad f\in {\mathcal {V}}_{E, \mathbf e}.
\end{equation}
The space ${\mathcal {V}}_{E, \mathbf e}$ arises naturally when one carries out
the reduction within the null cone. See Lemma \ref{support}.
\begin{prpl}\label{v1field} Assume that $\dim_A(E)=1$.
\begin{itemize}
\item[(a)] If $A$ is of Type I, then
\[
\textit{C}^{-\xi}(E;\Gamma_E,\Gamma_E)=\{0\}.
\]
\item[(b)] If $A$ is of Type II, then for every $f\in
\textit{C}^{-\xi}(E;\Gamma_E,\Gamma_E)$,
\[
\epsilon_{E,c_A}f=0\quad\textrm{implies}\quad f=0.
\]
\end{itemize}
Consequently, in all cases, ${\mathcal {V}}_{E, \mathbf e}=\{0\}$ for the
only element $\mathbf e\in \mathfrak s \mathfrak u(E)=\{0\}$.
\end{prpl}
\begin{proof} In case (a), we have $\Gamma_E=\{0\}$, which is metrically proper in $E$.
Therefore the lemma follows from Lemma \ref{lapl}.
In case (b), we assume that
\[
(E,\langle \,,\,\rangle_E)=(\mathbb{K}\oplus \mathbb{K},\, \langle\,,\,\rangle_{\mathbb{K},1})
\]
as in (\ref{fivee}). Then
\[
\Gamma_E=F_0\cup F_1
\]
is the union of two totally isotropic subspaces $F_0$ and $F_1$,
where
\[
F_0:=\{0\}\oplus \mathbb{K}, \quad \textrm{and}\quad F_1:=\mathbb{K}\oplus
\{0\}.
\]
By Proposition \ref{quad}, it suffices to show that for every
$f\in\textit{C}^{-\xi}(E;F_i,F_i)$,
\[
\epsilon_{E,c_A}f=0\quad\textrm{implies}\quad f=0.
\]
To fix the sign, assume that $c_A=(1,-1)$. By Lemma \ref{lemf1},
\[
\textit{C}^{-\xi}(E;F_0,F_0)=\mathbb{C}[F_0]\otimes \textit{C}^{-\xi}(F_1;\{0\}),
\]
and therefore $\epsilon_{E,c_A}$ acts semisimply on it, and all
its eigenvalues are negative integers. Likewise,
$\epsilon_{E,c_A}$ acts semisimply on $\textit{C}^{-\xi}(E;F_1,F_1)$,
and all its eigenvalues are positive integers. This finishes the
proof.
\end{proof}
Recall that a nilpotent element $\mathbf{e}\in \mathfrak s \mathfrak u(E)$ is said to
be distinguished if it commutes with no nonzero semisimple element
in $\mathfrak s \mathfrak u(E)$ (cf. \cite[Section 8.2]{CM}).
The rest of this section is devoted to a proof of the following
\begin{prpl}\label{glk}Let $\mathbf h, \mathbf e, \mathbf f\in\mathfrak s \mathfrak u(E)$ be a standard
triple, i.e.,
\[
[\mathbf h, \mathbf e]=2\mathbf e, \quad
[\mathbf h,\mathbf f]=-2\mathbf f,\quad [\mathbf e, \mathbf f]=\mathbf h.
\]
Assume that $\dim_A(E)\geq 2$ and $\mathbf e$ is distinguished.
Then the vector field $\epsilon_{E,\mathbf h}$ acts semisimply on
${\mathcal {V}}_{E,\mathbf e}$, and all its eigenvalues are nonnegative
integers.
\end{prpl}
For every $\mathbf{h}\in \mathfrak s \mathfrak u(E)$ in a standard triple ${\mathbf h,
\mathbf e, \mathbf f}$, denote by $E_{\mathbf{h}}^i\subset E$ the
eigenspace of $\mathbf{h}$ with eigenvalue $i$, where $i\in \mathbb{Z}$.
Write
\[
E_{\mathbf{h}}^+:=\bigoplus_{i> 0} E_{\mathbf{h}}^i,\quad \textrm{and}\quad
E_{\mathbf{h}}^-:=\bigoplus_{i<0} E_{\mathbf{h}}^i.
\]
Next we prove (a stronger version of) Proposition \ref{glk} when
$A$ is of Type I.
\begin{leml} Assume that $A$ is of Type I and $\dim_A(E)\geq 2$.
Let $\mathbf h, \mathbf e, \mathbf f\in\mathfrak s \mathfrak u(E)$ be a standard
triple, where $\mathbf e$ is a distinguished nilpotent element in
$\mathfrak s \mathfrak u(E)$. Then
\begin{itemize}
\item[(a)] $E(\mathbf e)$ is contained in $E_{\mathbf
h}^{+}+E_{\mathbf h}^0$; \item[(b)] the vector field
$\epsilon_{E,\mathbf h}$ acts semisimply on $\textit{C}^{-\xi}(E;
E(\mathbf e), E(\mathbf e))$, and all its eigenvalues are
nonnegative integers.
\end{itemize}
\end{leml}
\begin{proof} As usual, view $E$ as a $\sl_2(\mathbb R)\otimes_\mathbb R A$-module via the
standard triple. Let
\begin{equation}\label{dece}
E=E_1\oplus E_2\oplus\cdots \oplus E_s
\end{equation}
be a decomposition of $E$ into irreducible $\sl_2(\mathbb R)\otimes_\mathbb R
A$-modules. By the classification of distinguished nilpotent
orbits (\cite[Theorem 8.2.14]{CM}), we know that $s=1$ if
$(A,\tau)=(\mathbb{C}, \bar{\phantom{a}})$. If $(A,\tau)=(\mathbb{K}, 1)$, then
(\ref{dece}) is an orthogonal decomposition, and $E_1, E_2,
\cdots, E_s$ have pairwise different odd dimensions.
Suppose that we are in the latter (orthogonal) case. Denote by
$\mathbf{e}_i\in \mathfrak s \mathfrak u(E_i)$ the restriction of $\mathbf e$ to
$E_i$. It is easy to see that (\cite[Lemma 5.3]{AGRS})
\[
E(\mathbf e)\subset E_1(\mathbf{e}_1)+ E_2(\mathbf{e}_2)+ \cdots + E_s(\mathbf{e}_s).
\]
To show Part (a), we may therefore assume that $E$ is irreducible
as a $\sl_2(\mathbb R)\otimes_\mathbb R A$-module. Let $r\geq 0$ be its highest
weight and
\[
\{v_i\mid i=-r, -r+2, \cdots, r\}
\]
be an $A$-basis of $E$ such that
\begin{itemize}
\item
$v_i$ is an eigenvector of $\mathbf h$ with eigenvalue $i$, and
\item
$\mathbf{e} v_i=v_{i+2}$, $\quad i<r$.
\end{itemize}
Assume that there is an element
\[
v=a_{-r}v_{-r}+a_{-r+2}v_{-r+2}+\cdots+ a_r v_r\in E(\mathbf e)\setminus (E_{\mathbf h}^{+}+E_{\mathbf h}^0).
\]
Denote by $j>0$ the largest number so that $a_{-j}\neq 0$. Then
\[
\langle \mathbf{e}^j v,v \rangle_E=a_{-j}\, a_{-j}^\tau \, \langle v_j, v_{-j}\rangle_E\neq 0,
\]
which contradicts Part (c) of Lemma \ref{xue}. This proves Part
(a).
By Part (a) and Proposition \ref{lemf3}, we have
\begin{eqnarray*}
&&\quad\textit{C}^{-\xi}(E; E(\mathbf e), E(\mathbf e))\\
&&\subset \textit{C}^{-\xi}(E; E_{\mathbf h}^{+}+E_{\mathbf h}^0,E_{\mathbf h}^{+}+E_{\mathbf h}^0)\\
&&=\mathbb{C}[E_{\mathbf h}^{+}]\otimes \textit{C}^{-\xi}(E_{\mathbf h}^{-};\{0\})\otimes \textit{C}^{-\xi}(E_{\mathbf h}^0).
\end{eqnarray*}
Part (b) therefore follows.
\end{proof}
We are now left with the task of proving Proposition \ref{glk}
when $A$ is of Type II. Thus let $(A,\tau)=(\mathbb{K}\times \mathbb{K},
\tau_\mathbb{K})$, and
\[
(E,\langle\,,\,\rangle_E)=(\mathbb{K}^n\oplus \mathbb{K}^n, \langle\,,\,\rangle_{\mathbb{K},n})
\]
be as in (\ref{fivee}), with $n\geq 2$. Then
\[
\operatorname{U}(E)=\left\{\left[\begin{array}{cc}
g&0\\
0&g^{-t}\\
\end{array}\right]\mid g\in
{\mathrm{GL}}_n(\mathbb{K})\right\}={\mathrm{GL}}_n(\mathbb{K}),
\]
and
\[
\u(E)=\left\{\left[\begin{array}{cc}
x&0\\
0&-x^t\\
\end{array}\right]\mid x\in
\mathfrak g \mathfrak l_n(\mathbb{K})\right\}=\mathfrak g \mathfrak l_n(\mathbb{K}).
\]
A distinguished nilpotent element $\mathbf e$ of $\mathfrak s \mathfrak u(E)$ is
principal, and we may assume without loss of generality that
\[
\mathbf e=\left[\begin{array}{cccccc} 0&1&0&\cdots &0&0\\
0&0&1&\cdots &0&0\\
&&\cdots&\cdots&&\\
0&0&0&\cdots &0&1\\
0&0&0&\cdots &0&0\\
\end{array}\right]
\]
and
\[
\mathbf h=\operatorname{diag}(n-1, n-3, \cdots, 3-n,1-n).
\]
Then it is easy to check that
\[
E(\mathbf e)\cap\Gamma_E=F_0\cup F_1\cup\cdots \cup F_n,
\]
where
\[
F_i=(\mathbb{K}^i\oplus\{0\}^{n-i})\oplus (\{0\}^i\oplus \mathbb{K}^{n-i}).
\]
In view of Proposition \ref{quad} and (\ref{anihilateef}), it
suffices to prove the following
\begin{leml}\label{egl}
With notation as above, the vector field $\epsilon_{E,\mathbf h}$
acts semisimply on
\begin{equation}\label{ce2fi}
\{f\in\textit{C}^{-\xi}(E;F_i,F_i)\mid \epsilon_{E,c_A} f=0\},
\end{equation}
with all its eigenvalues nonnegative integers.
\end{leml}
\begin{proof}
We prove the lemma for $\mathbb{K}=\mathbb R$. The complex case is proved in the
same way.
Denote by $x_1,x_2,\cdots,x_n, \,y_1,y_2,\cdots, y_n$ the standard
coordinates of $\mathbb R^n\oplus \mathbb R^n$, and write
\[
\partial_j=\frac{\partial}{\partial x_j}\quad\textrm{and} \quad d_j=\frac{\partial}{\partial
y_j},
\qquad j=1,2,\cdots, n.
\]
By Lemma \ref{lemf1}, the space $\textit{C}^{-\xi}(E;F_i,F_i)$ has a
basis consisting of generalized functions of the form
\begin{eqnarray*}
&&f=x_1^{a_1}x_2^{a_2}\cdots x_i^{a_i}\, y_{i+1}^{b_{i+1}} y_{i+2}^{b_{i+2}} \cdots y_n^{b_n} \\
&&\quad \otimes \partial_{i+1}^{a_{i+1}-1} \partial_{i+2}^{a_{i+2}-1} \cdots
\partial_n^{a_n-1}\, d_1^{b_1-1}d_2^{b_2-1}\cdots d_i^{b_i-1}
\delta_{F'_i},
\end{eqnarray*}
where $a_1,\ldots,a_i,b_{i+1},\ldots,b_n$ are nonnegative integers, and
the remaining $a$'s and $b$'s are positive integers. Here and as
before, $\delta_{F'_i}$ is a fixed Dirac function on the space
\[
F'_i:=(\{0\}^i\oplus \mathbb{K}^{n-i})\oplus (\mathbb{K}^i\oplus\{0\}^{n-i}).
\]
The generalized function $f$ as above is an eigenvector for both
$\epsilon_{E,c_A}$ and $\epsilon_{E,\mathbf h}$. The condition
\[
\epsilon_{E,c_A}f=0
\]
amounts to
\[
\sum_{j\leq i} (a_j+b_j)=\sum_{j> i} (a_j+b_j).
\]
Then the $\epsilon_{E,\mathbf h}$-eigenvalue of $f$ is
\begin{eqnarray*}
&&\phantom{=}(n-1)a_1+(n-3)a_2+\cdots+ (n-2i+1)a_i\\
&&-(n-2i-1)a_{i+1}-(n-2i-3)a_{i+2}-\cdots-(1-n) a_n\\
&&+(n-1)b_1+(n-3)b_2+\cdots+ (n-2i+1)b_i\\
&&-(n-2i-1)b_{i+1}-(n-2i-3)b_{i+2}-\cdots-(1-n) b_n\smallskip\\
&&\!\geq \!(n-2i)a_1+(n-2i)a_2+\cdots+ (n-2i)a_i\\
&&-(n-2i)a_{i+1}-(n-2i)a_{i+2}-\cdots-(n-2i)a_n\\
&&+(n-2i)b_1+(n-2i)b_2+\cdots+ (n-2i)b_i\\
&&-(n-2i)b_{i+1}-(n-2i)b_{i+2}-\cdots-(n-2i) b_n\\
&&=0.
\end{eqnarray*}
\end{proof}
Note that for $i=0$ or $i=n$, the space (\ref{ce2fi}) is zero.
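As a quick illustration of the eigenvalue computation in the proof of Lemma \ref{egl} (this worked example is not part of the original argument), specialize to $n=2$ and $i=1$. Then the condition $\epsilon_{E,c_A}f=0$ reads $a_1+b_1=a_2+b_2$, and the $\epsilon_{E,\mathbf h}$-eigenvalue of $f$ equals
\[
(n-1)a_1-(1-n)a_2+(n-1)b_1-(1-n)b_2=a_1+a_2+b_1+b_2,
\]
which is manifestly a nonnegative integer; in this small case the constraint $a_1+b_1=a_2+b_2$ is not even needed, since all coefficients are already positive.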
\section{Reduction within the null cone}
\label{sec:nullcone}
Recall that we are given a commutative involutive algebra
$(A,\tau)$ and a hermitian $A$-module $E$. Denote by
$\tilde{\operatorname{U}}(E)$ the subgroup of ${\mathrm{GL}}_\mathbb R(E)\times \{\pm 1\}$
consisting of pairs $(g,\delta)$ such that either
\[
\delta=1 \quad\textrm{and}\quad g\in \operatorname{U}(E),
\]
or
\begin{equation}
\label{dutilde}
\left\{
\begin{array}{ll}
\delta=-1,&\\
g(av)=a^\tau g(v),\quad & a\in A,\, v\in E,\quad \textrm{ and}\\
\langle gu,gv\rangle_E=\langle v,u\rangle_E,\quad & u,v\in E.
\end{array}
\right.
\end{equation}
Denote by
\begin{equation}
\label{dctilde}
\chi_E: \tilde{\operatorname{U}}(E)\rightarrow \{\pm 1\}
\end{equation}
the quadratic character of $\tilde{\operatorname{U}}(E)$ projecting to the second
factor, which is easily checked to be surjective. Therefore, we get
an exact sequence
\[
\{1\}\rightarrow \operatorname{U}(E)\rightarrow \tilde{\operatorname{U}}(E)\stackrel{\chi_E}{\rightarrow}\{\pm 1\}\rightarrow
\{1\}.
\]
Now let $\tilde{\operatorname{U}}(E)$ act on $\operatorname{U}(E)$ by
\begin{equation}\label{gtaction}
(g,\delta)x:=gx^\delta g^{-1},
\end{equation}
and on $E$ by
\[
(g,\delta)v:=\delta gv.\]
Let $\tilde{\operatorname{U}}(E)$ act on $\u(E)$ via the differential at
the identity of its action on $\operatorname{U}(E)$, i.e.,
\[
(g,\delta)x:=\delta gxg^{-1}.
\]
Let $\tilde{\operatorname{U}}(E)$ act on $\operatorname{U}(E)\times E$ and $\u(E)\times E$
diagonally.
We introduce the following general notation. If $H$ is a Lie group
acting smoothly on a manifold $M$, then for any character $\chi_H$
of $H$, denote by \begin{equation} \label{dchi}
\textit{C}^{-\infty}_{\chi_H}(M)\subset \textit{C}^{-\infty}(M) \end{equation}
the subspace consisting of all $f$ which are $\chi_H$-equivariant,
i.e.,
\[
f(hx)=\chi_H(h)f(x),\quad \textrm{for all } h\in
H.
\]
Similar notations (such as $\textit{C}^{-\xi}_{\chi_H}(M;Z)$) apply
without further explanation. We will be concerned with the space
$\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$.
Denote by ${\mathcal {N}}_E\subset \mathfrak s \mathfrak u(E)$ the null cone, which consists of
all nilpotent elements in $\mathfrak s \mathfrak u(E)$. Let
\[
{\mathcal {N}}_E={\mathcal {N}}_0\supset {\mathcal {N}}_1\supset \cdots \supset {\mathcal {N}}_k=\{0\}\supset
{\mathcal {N}}_{k+1}=\emptyset
\]
be a filtration of ${\mathcal {N}}_E$ by its closed subsets so that each
difference
\[
{\mathcal {O}}_i:={\mathcal {N}}_i\setminus {\mathcal {N}}_{i+1},\quad 0\leq i\leq k,
\]
is an $\operatorname{U}(E)$-adjoint orbit.
Our aim is to prove the following reduction result for
$\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$. Recall the null cone
$\Gamma _E$ of $E$.
\begin{prp}\label{indn}
Assume that $A$ is simple, $\dim_A(E)\geq 1$, and that every
element of $\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$ is supported in
$(\mathfrak z(E)+{\mathcal {N}}_i)\times \Gamma_E$, for some fixed $0\leq i\leq k$.
Then every element of $\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$ is
supported in $(\mathfrak z(E)+{\mathcal {N}}_{i+1})\times \Gamma_E$.
\end{prp}
Note that $\u(E)=\mathfrak z(E)\oplus \mathfrak s \mathfrak u(E)$ is a $\tilde{\operatorname{U}}(E)$-stable
decomposition, and $\tilde{\operatorname{U}}(E)$ acts on $\mathfrak z(E)$ trivially.
Therefore, by the localization principle (see \cite[Lemma
4.1]{JSZ}, for example), for any fixed $i$,
\begin{eqnarray*}
&&\qquad\,\,\,\textrm{every element of
$\textit{C}^{-\xi}_{\chi_E}(\u(E)\times
E)$ is supported in $(\mathfrak z(E)+{\mathcal {N}}_i)\times \Gamma_E$}\\
&&\Longleftrightarrow \textrm{ every element of
$\textit{C}^{-\xi}_{\chi_E}(\mathfrak s \mathfrak u(E)\times E)$ is supported in
${\mathcal {N}}_i\times \Gamma_E$}.
\end{eqnarray*}
Thus it suffices to prove the following equivalent statement.
\begin{prpp}\label{indn2}
Assume that $A$ is simple, $\dim_A(E)\geq 1$, and that every
element of $\textit{C}^{-\xi}_{\chi_E}(\mathfrak s \mathfrak u(E)\times E)$ is supported in
${\mathcal {N}}_i\times \Gamma_E$, for some fixed $0\leq i\leq k$. Then every
element of $\textit{C}^{-\xi}_{\chi_E}(\mathfrak s \mathfrak u(E)\times E)$ is supported in
${\mathcal {N}}_{i+1}\times \Gamma_E$.
\end{prpp}
For ease of notation, set
\[
\mathfrak s:=\mathfrak s \mathfrak u(E).
\]
We shall view $\mathfrak s$ as a non-degenerate real quadratic space via
the form
\[
\langle x,y\rangle_{\mathfrak s,\mathbb R}:={\mathrm{tr}}_{A/\mathbb R}({\mathrm{tr}}_A(xy)).
\]
Note that the null cone ${\mathcal {N}}_E$ is contained in the null cone of
$\mathfrak s$ as a real quadratic space.
\begin{lemp}\label{metricp}
Let ${\mathcal {O}}\subset {\mathcal {N}}_E$ be a nilpotent $\operatorname{U}(E)$-orbit which is not
distinguished. Then ${\mathcal {O}}$ is metrically proper in $\mathfrak s$.
\end{lemp}
\begin{proof}
Let $x\in {\mathcal {O}}$. By definition, it commutes with a nonzero
semisimple element $h\in \mathfrak s$. Denote by $\a_h$ the center of
$\mathfrak s^h$, which is a nonzero non-degenerate subspace of $\mathfrak s$.
Using the fact that every element of $\a_h$ commutes with $x$, we
see that the tangent space
\[
\operatorname{T}_x({\mathcal {O}})=[\u(E),x]
\]
is contained in the proper non-degenerate subspace
\[
(\a_h)^\perp:=\{y\in \mathfrak s\mid \langle
y,z\rangle_{\mathfrak s,\mathbb R}=0, \,\, z\in \a_h\}\subset \mathfrak s.
\]
Hence ${\mathcal {O}}$ is metrically proper in $\mathfrak s$.
\end{proof}
\begin{lemp}
Proposition \ref{indn2} holds when ${\mathcal {O}}_i$ is not distinguished.
\end{lemp}
\begin{proof}
Let $f\in\textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$. Then ${\mathcal {F}}_{\mathfrak s}(f)\in
\textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$, where ${\mathcal {F}}_{\mathfrak s}$ is the partial
Fourier transform (along $\mathfrak s$) specified by the commutative
diagram
\[
\begin{CD}
\textit{C}^{-\xi}(\mathfrak s\times E)@=
\textit{C}^{-\xi}(\mathfrak s)\widehat{\otimes}\textit{C}^{-\xi}(E)\\
@VV{\mathcal {F}}_{\mathfrak s}V @VV{\mathcal {F}}_{\mathfrak s}\otimes 1V\\
\textit{C}^{-\xi}(\mathfrak s\times E)@=
\textit{C}^{-\xi}(\mathfrak s)\widehat{\otimes} \textit{C}^{-\xi}(E)
\,.
\end{CD}
\]
By the assumption, the support of ${\mathcal {F}}_{\mathfrak s} (f)$ is contained in
\[
{\mathcal {N}}_i\times \Gamma_E \subset (\textrm{the null cone of the real
quadratic space $\mathfrak s$}) \times E.
\]
Therefore, some positive power of the partial Laplacian
$\Delta_{\mathfrak s}$ annihilates $f$. Now the lemma follows from Lemma
\ref{metricp} and (a variation of) Lemma \ref{lapl}.
\end{proof}
{\vspace{0.2in}} Before proceeding further, we introduce a version of the pull
back of generalized functions.
\begin{dfnp}\label{submersive}
Let $Z$ and $Z'$ be locally closed subsets of manifolds $M$ and
$M'$, respectively. A smooth map $\phi:M\rightarrow M'$ is said to
be submersive from $Z$ to $Z'$ if
\begin{itemize}
\item
$\phi$ is submersive at every point of $Z$, and
\item
for every $z\in Z$, there is an open neighborhood $U$ of $z$
in $M$ such that
\[
\phi^{-1}(Z')\cap U=Z\cap U.
\]
\end{itemize}
\end{dfnp}
The following lemma is elementary.
\begin{lemp}\label{lpullback}
If $\phi:M\rightarrow M'$ is submersive from $Z$ to $Z'$, as in
Definition \ref{submersive}, then there is a unique linear map
\begin{equation}\label{pullback}
\phi^*: \textit{C}^{-\infty}(M';Z')\rightarrow
\textit{C}^{-\infty}(M;Z),
\end{equation}
with the following property: for any open subset $U$ of $M$ and
$U'$ of $M'$, if
\begin{itemize}
\item
$\phi$ restricts to a submersive map $\phi_U: U\rightarrow U'$,
\item $Z'\cap U'$ is closed in $U'$, and \item
$
\phi_U^{-1}(Z'\cap U')=Z\cap U,
$
\end{itemize}
then the diagram
\[
\begin{CD}
\textit{C}^{-\infty}(M';Z')@>\phi^*>>\textit{C}^{-\infty}(M;Z)\\
@VVV @VVV\\
\textit{C}^{-\infty}(U') @> \phi_U^*>> \textit{C}^{-\infty}(U)
\end{CD}
\]
commutes, where the two vertical arrows are restrictions, and the
bottom arrow is the usual pull back map of generalized functions
via a submersion.
\end{lemp}
See \cite[Lemma 8.A.2.5]{W1} for the definition and properties of
the usual pull back map. Note that the vertical arrows are well
defined since $Z'\cap U'$ is closed in $U'$, and $Z\cap U$ is
closed in $U$. The map $\phi^*$ in (\ref{pullback}) is still
called the pull back. It is injective if $\phi(Z)=Z'$. In this
case, we say that $\phi$ is submersive from $Z$ onto $Z'$.
{\vspace{0.2in}}
We continue the proof of Proposition \ref{indn2}. Recall the
notations from Section \ref{sec:eigenvalue}.
\begin{lemp}\label{support}
Under the assumption of Proposition \ref{indn2}, the support of
every $f\in \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$ is contained in
\[
({\mathcal {N}}_{i+1}\times \Gamma_E) \cup (\bigsqcup_{\mathbf{e}\in {\mathcal {O}}_i}
\{\mathbf{e}\}\times (E(\mathbf{e})\cap \Gamma_E)).
\]
\end{lemp}
\begin{proof} We follow the method of \cite{AGRS}.
For every $t\in \mathbb R$, define a map
\[
\begin{array}{rcl}
\eta:=\eta_t: \mathfrak s\times E &\rightarrow &\mathfrak s\times E,\\
(x,v)&\mapsto& (x-t\psi_{x,v},v),
\end{array}
\]
which is checked to be submersive from $\mathfrak s\times \Gamma_E$ to
$\mathfrak s\times \Gamma_E$. Therefore, by Lemma \ref{lpullback}, it
yields a pull back map
\[
\begin{array}{rcl}
\eta^*: \textit{C}^{-\infty}(\mathfrak s\times E;\mathfrak s\times \Gamma_E) &\rightarrow
&\textit{C}^{-\infty}(\mathfrak s\times E;\mathfrak s\times \Gamma_E).\\
\end{array}
\]
Fix $f\in\textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$. By our assumption, its
support is contained in ${\mathcal {N}}_{i}\times \Gamma_E$, and so
\[f\in \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E;{\mathcal {N}}_{i}\times \Gamma_E)
\subset \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E;\mathfrak s\times \Gamma_E).
\]
Since the map $\eta$ is algebraic and
$\tilde{\operatorname{U}}(E)$-equivariant,
\[
\eta^* (f)\in \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E;\mathfrak s\times \Gamma_E).
\]
It is routine to check that $\eta$ restricts to a bijection from
$\mathfrak s\times \Gamma_E$ onto itself. Let $(\mathbf{e},v)\in
{\mathcal {O}}_{i}\times \Gamma_E$ be a point in the support of $f$. Denote
by
\[
\mathbf e':=\mathbf e'(\mathbf e,v,t)\in \mathfrak s
\]
the unique element so that
\[
\eta(\mathbf{e}',v)=(\mathbf{e},v).
\]
Then $(\mathbf e',v)$ is in the support of $\eta^*(f)$, and
therefore our assumption implies that
\begin{equation}\label{eprime}
\mathbf e'\in {\mathcal {N}}_i.
\end{equation}
An easy calculation shows that
\[
\mathbf{e}'=\left\{\begin{array}{ll}
\mathbf{e}+t\psi_{\mathbf{e},v},\quad&\textrm{if }c_A\neq 0,\smallskip\\
\mathbf{e}+t \psi_{\mathbf{e},v}+ t^2 \phi_v\mathbf{e} \phi_v,\quad&
\textrm{if }c_A= 0.
\end{array}
\right.
\]
Since ${\mathcal {O}}_i$ is open in ${\mathcal {N}}_i$, (\ref{eprime}) implies that
\[
\psi_{\mathbf{e},v}=\frac{d}{dt}|_{t=0}\, \mathbf{e}'(\mathbf e,v,t)\in
\operatorname{T}_{\mathbf{e}}({\mathcal {O}}_i)=[\u(E), \mathbf{e}]=[\mathfrak s \mathfrak u(E),
\mathbf{e}],
\]
i.e., $v\in E(\mathbf e)$, and the proof is now complete.
\end{proof}
{\vspace{0.2in}} For a nilpotent $\operatorname{U}(E)$-orbit ${\mathcal {O}}\subset {\mathcal {N}}_E$, denote by
\begin{equation}
\label{dcv}
{\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}}\subset \textit{C}^{-\xi}(\mathfrak s\times E; {\mathcal {O}}\times
E)^{\operatorname{U}(E)}
\end{equation}
the subspace consisting of all $f$ such that the supports of both
$f$ and its partial Fourier transform ${\mathcal {F}}_E(f)$ are contained in
$\bigsqcup_{\mathbf{e}\in {\mathcal {O}}} \{\mathbf{e}\}\times
(E(\mathbf{e})\cap \Gamma_E)$.
\begin{lemp}\label{negative}
Assume that $A$ is simple, $\dim_A(E)\geq 1$, and ${\mathcal {O}}$ is
distinguished. Then the Euler vector field $\epsilon_{\mathfrak s}$ acts
semisimply on ${\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}}$, and all its eigenvalues are
real numbers $<-\frac{1}{2} \dim_\mathbb R \mathfrak s$.
\end{lemp}
Let us prove the following
\begin{lemp}
Lemma \ref{negative} implies Proposition \ref{indn2} when ${\mathcal {O}}_i$
is distinguished.
\end{lemp}
\begin{proof}
Denote by $q_\mathfrak s$ the quadratic form on $\mathfrak s$, i.e.,
\[
q_{\mathfrak s}(x)=\langle x,x\rangle_{\mathfrak s,\mathbb R}= {\mathrm{tr}}_{A/\mathbb R}({\mathrm{tr}}_A(x^2)).
\]
The operators
\[
\epsilon_\mathfrak s+\frac{1}{2} \dim_\mathbb R
\mathfrak s,\quad -\frac{1}{2}q_\mathfrak s,\quad\frac{1}{2}\Delta_\mathfrak s
\]
form a standard triple in $W[\mathfrak s]$, and each of them leaves the
space ${\mathcal {V}}_{\mathfrak s\times E, {\mathcal {O}}_i}$ stable. Lemma \ref{negative} says
that $\epsilon_\mathfrak s+\frac{1}{2} \dim_\mathbb R \mathfrak s$ is semisimple and has
negative eigenvalues on ${\mathcal {V}}_{\mathfrak s\times E, {\mathcal {O}}_i}$, and so by
\cite[Lemma 8.A.5.1]{W1}, the map
\[
\Delta_\mathfrak s: {\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}_i}\rightarrow {\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}_i}
\]
is injective.
Let $f\in \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$. Applying Lemma
\ref{support} to $f$ and its partial Fourier transform ${\mathcal {F}}_E
(f)$, we conclude that under the restriction map
\[
r_{\mathfrak s\times E}: \textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)\rightarrow \textit{C}^{-\xi}(\mathfrak s\times E;
{\mathcal {O}}_i\times E),
\]
the image \[r_{\mathfrak s\times E}(f) \in {\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}_i}.\]
Since the partial Fourier transform ${\mathcal {F}}_{\mathfrak s}(f)$ is again in
$\textit{C}^{-\xi}_{\chi_E}(\mathfrak s\times E)$, we see that ${\mathcal {F}}_{\mathfrak s}(f)$ is
supported in
\[
{\mathcal {N}}_{i}\times \Gamma_E \subset \textrm{(the null cone of the real quadratic space $\mathfrak s$)}\times
E,
\]
which implies that $r_{\mathfrak s\times E}(f)$ is annihilated by some
positive power of $\Delta_\mathfrak s$. By the injectivity of $\Delta_\mathfrak s$
on ${\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}_i}$, we conclude that $r_{\mathfrak s\times
E}(f)=0$ and we are done.
\end{proof}
The remaining part of this section is devoted to the proof of
Lemma \ref{negative}.
For the moment, assume that ${\mathcal {O}}\subset {\mathcal {N}}_E$ is any nilpotent
$\operatorname{U}(E)$-orbit (not necessarily distinguished). Pick any element
$\mathbf e\in{\mathcal {O}}$ and extend it to a standard triple
$\mathbf{h},\mathbf{e}, \mathbf{f}\in \mathfrak s$. Then we have a vector
space decomposition
\[
\mathfrak s=[\mathfrak s,\mathbf{e}]\oplus \mathfrak s^{\mathbf{f}}.
\]
Let $\operatorname{U}(E)$ act on $\operatorname{U}(E)\times \mathfrak s^{\mathbf f} \times E$ via the
left translation on the first factor. Define a
$\operatorname{U}(E)$-equivariant map
\begin{equation}
\label{dftheta}
\begin{array}{rcl}
\theta: \operatorname{U}(E)\times \mathfrak s^{\mathbf f} \times E& \rightarrow & \mathfrak s\times E,\\
(g, x,v)&\mapsto & g(x+\mathbf{e}, v).
\end{array}
\end{equation}
\begin{lemp}
\label{thetar} The vector field
\begin{equation}\label{vector}
\iota_{\mathbf{h}/2}+ \epsilon_{\mathfrak s^{\mathbf{f}},
1-\operatorname{ad}(\mathbf{h}/2)}-\epsilon_{E,\mathbf{h}/2}
\end{equation}
on $\operatorname{U}(E)\times \mathfrak s^{\mathbf f} \times E$ is $\theta$-related to
the Euler vector field $\epsilon_{\mathfrak s}$ on $\mathfrak s\times E$, where
$\iota_{\mathbf{h}/2}$ is the left invariant vector field on
$\operatorname{U}(E)$ whose tangent vector at the identity is $\mathbf{h}/2$.
\end{lemp}
\begin{proof}
Since both vector fields under consideration are
$\operatorname{U}(E)$-invariant, it suffices to prove the $\theta$-relatedness
at a point of the form
\[
\mathbf{x}:=(1,x,v)\in \operatorname{U}(E)\times \mathfrak s^{\mathbf{f}}\times E.
\]
Applying the differential of $\theta$ at $\mathbf{x}$, we have
\[
\begin{array}{lcl}
\iota_{\mathbf{h}/2}|_{\mathbf{x}}=(\mathbf{h}/2,0,0)& \mapsto& ([\mathbf{h}/2, x+\mathbf{e}], (\mathbf{h}/2)v),\smallskip\\
\epsilon_{\mathfrak s^{\mathbf{f}},1-\operatorname{ad}(\mathbf{h}/2)}|_{\mathbf{x}}=(0,x-[\mathbf{h}/2,x],0) &\mapsto& (x-[\mathbf{h}/2,x],0),\smallskip\\
\epsilon_{E,\mathbf{h}/2}|_{\mathbf{x}}=(0,0,(\mathbf{h}/2)v)&\mapsto&
(0,(\mathbf{h}/2)v).
\end{array}
\]
This implies the lemma since
$\epsilon_{\mathfrak s}|_{\theta(\mathbf{x})}=(x+\mathbf{e},0)$.
\end{proof}
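For the reader's convenience, we spell out the final step of the above proof (this verification is implicit there). Summing the three image vectors and using $[\mathbf{h},\mathbf{e}]=2\mathbf{e}$ for the standard triple, the differential of $\theta$ sends the value of the vector field (\ref{vector}) at $\mathbf{x}$ to
\[
\big([\mathbf{h}/2, x+\mathbf{e}]+x-[\mathbf{h}/2,x],\ (\mathbf{h}/2)v-(\mathbf{h}/2)v\big)
=\big(x+[\mathbf{h}/2,\mathbf{e}],\,0\big)=(x+\mathbf{e},0)=\epsilon_{\mathfrak s}|_{\theta(\mathbf{x})}.
\]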
Let $\operatorname{Z}(E)$ act on $\mathfrak s^{\mathbf f} \times E$ and $\operatorname{U}(E)\times
\mathfrak s^{\mathbf f} \times E$ via its action on the factor $E$. Then
the map $\theta$ is $\operatorname{Z}(E)$-equivariant as well. Note that
$\theta$ is submersive from $\operatorname{U}(E)\times \{0\}\times E$ onto
${\mathcal {O}}\times E$ (cf.\ \cite[Page 299]{W1}). Therefore it yields an
injective pull back map
\begin{eqnarray*}
&& \phantom{\lhook\joinrel\xrightarrow{\theta^*}}
\textit{C}^{-\xi}( \mathfrak s\times E;{\mathcal {O}}\times E)^{\operatorname{U}(E)}\\
&&\lhook\joinrel\xrightarrow{\theta^*}
\textit{C}^{-\xi}(\operatorname{U}(E)\times \mathfrak s^{\mathbf f} \times E;\operatorname{U}(E)\times \{0\}\times E)^{\operatorname{U}(E)\times \operatorname{Z}(E)}.\\
\end{eqnarray*}
Denote by
\begin{eqnarray*}
&& \phantom{\lhook\joinrel\xrightarrow{r_{\mathfrak s^{\mathbf f}\times E}}}
\textit{C}^{-\xi}(\operatorname{U}(E)\times \mathfrak s^{\mathbf f} \times E;\operatorname{U}(E)\times \{0\}\times E)^{\operatorname{U}(E)\times\operatorname{Z}(E)}\\
&& \xrightarrow{r_{\mathfrak s^{\mathbf f}\times E}}
\textit{C}^{-\xi}(\mathfrak s^{\mathbf f} \times E;\{0\}\times E)^{\operatorname{Z}(E)}\\
\end{eqnarray*}
the linear isomorphism specified by the rule
\[
f=1\otimes r_{\mathfrak s^{\mathbf f}\times E}f.
\]
Write ${\mathcal {V}}_{\mathfrak s^{\mathbf f}\times E,\mathbf{e}}$ for the space
\[
\label{dcvsmall}
\textit{C}^{-\xi}(\mathfrak s^{\mathbf f} \times E; \{0\}\times (E(\mathbf e)\cap
\Gamma_E),\{0\}\times (E(\mathbf e)\cap \Gamma_E))^{\operatorname{Z}(E)}.
\]
In previous notations (see (\ref{dcmz}) and (\ref{dve})), we have
\begin{equation}
\label{connect}
{\mathcal {V}}_{\mathfrak s^{\mathbf f}\times E,\mathbf{e}}=\textit{C}^{-\xi}(\mathfrak s^{\mathbf f};\{0\})
\otimes {\mathcal {V}}_{E, \mathbf e}.
\end{equation}
\begin{lemp}\label{injective}
The composition map $r_{\mathfrak s^{\mathbf f}\times E}\circ \theta^*$
sends ${\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}}$ into ${\mathcal {V}}_{\mathfrak s^{\mathbf f}\times
E,\mathbf{e}}$, and the following diagram
\[
\begin{CD}
{\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}}\, &\lhook\joinrel\xrightarrow{r_{\mathfrak s^{\mathbf f}\times E}\circ\theta^*}
& \,{\mathcal {V}}_{\mathfrak s^{\mathbf f} \times E,\mathbf{e}}\\
@V\epsilon_\mathfrak s VV @VV\epsilon_{\mathfrak s^\mathbf{f},1-\operatorname{ad}(\mathbf{h}/2)}-\epsilon_{E,\mathbf{h}/2}V \\
{\mathcal {V}}_{\mathfrak s\times E,{\mathcal {O}}}\, &\lhook\joinrel\xrightarrow{r_{\mathfrak s^{\mathbf f}\times E}\circ\theta^*}
& \,{\mathcal {V}}_{\mathfrak s^{\mathbf f}\times E,\mathbf{e}}\\
\end{CD}
\]
commutes.
\end{lemp}
\begin{proof} The first assertion follows by noting that both $\theta^*$ and
$r_{\mathfrak s^{\mathbf f}\times E}$ commute with the partial Fourier transform
along $E$. The second assertion follows from Lemma \ref{thetar}.
\end{proof}
\begin{lemp}
\label{eulersmall} Assume that $\dim_A(E)\geq 2$. Then the vector
field $\epsilon_{\mathfrak s^\mathbf{f},1-\operatorname{ad}(\mathbf{h}/2)}$ acts
semisimply on $\textit{C}^{-\xi}(\mathfrak s^{\mathbf f};\{0\})$, and all its
eigenvalues are real numbers $<-\frac{1}{2}\dim_\mathbb R \mathfrak s$.
\end{lemp}
\begin{proof}
The condition $\dim_A(E)\geq 2$ implies that $\mathfrak s\neq \{0\}$.
We view $\mathfrak s$ as a $\sl_2(\mathbb R)$-module via the adjoint
representation and the standard triple $\{\mathbf{h},\mathbf{e},
\mathbf{f}\}$. We shall prove that the analog of Lemma
\ref{eulersmall} holds for any finite dimensional nonzero
$\sl_2(\mathbb R)$-module $F$. Without loss of generality, we may assume
that $F$ is irreducible of real dimension $r+1$. Then
\[
\epsilon_{F^\mathbf{f},1-\mathbf{h}/2}=(1+r/2)\epsilon_{F^\mathbf{f}},
\]
which clearly acts semisimply on $\textit{C}^{-\xi}(F^{\mathbf
f};\{0\})$, with all its eigenvalues real numbers $\leq -(1+r/2)
=-\frac{1}{2}\dim_\mathbb R F-\frac{1}{2}<-\frac{1}{2}\dim_\mathbb R F$.
\end{proof}
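To make the last inequality concrete (this example is not part of the original text), take $F$ irreducible with $r=2$, so that $\dim_\mathbb R F=3$ and $F^{\mathbf f}$ is the one-dimensional lowest weight space. The space $\textit{C}^{-\xi}(F^{\mathbf f};\{0\})$ is spanned by the derivatives $\delta^{(k)}$ ($k\geq 0$) of the Dirac function at $0$, on which the Euler vector field acts by $\epsilon_{F^{\mathbf f}}\,\delta^{(k)}=-(1+k)\,\delta^{(k)}$. Hence the eigenvalues of $(1+r/2)\epsilon_{F^{\mathbf f}}=2\epsilon_{F^{\mathbf f}}$ are
\[
-2-2k,\qquad k\geq 0,
\]
all of which are $\leq -2<-\tfrac{3}{2}=-\tfrac{1}{2}\dim_\mathbb R F$, as asserted.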
In view of (\ref{connect}), Lemma \ref{negative} follows from
Lemmas \ref{injective} and \ref{eulersmall}, together with
Propositions \ref{v1field} and \ref{glk}.
\section{Reduction to the null cone}
\label{sec:reduction}
We first recall the following elementary (and well-known) lemma.
\begin{lem}\label{vopen}
Let $H$ be a Lie group acting smoothly on a manifold $M$. Let
$\chi_H$ be a continuous character on $H$. If
$\textit{C}^{-\infty}_{\chi_H}(M)=0$, then
$\textit{C}^{-\infty}_{\chi_H}(M')=0$ for any open submanifold $M'$ of
the form $\phi^{-1}(N')$, where $\phi:M\rightarrow N$ is an
$H$-equivariant smooth map, $N$ is a manifold with trivial
$H$-action, and $N'$ is an open submanifold of $N$.
\end{lem}
Recall that $(A,\tau)$ is a commutative involutive algebra, and
$E$ is a hermitian $A$-module, as well as other notations from
Section \ref{sec:eigenvalue}. The following result may be
considered as a case of Harish-Chandra descent.
\begin{prpl}\label{descent1}
Assume that $A$ is simple, $\dim_A(E)\geq 1$, and for every
commutative involutive algebra $A'$ and every hermitian $A'$-module
$E'$,
\begin{equation}\label{vanishep1}
\dim_{A'}(E')<\dim_A(E) \quad\textrm{implies}\quad \textit{C}^{-\infty}_{\chi_{E'}}(\operatorname{U}(E')\times E')=0.
\end{equation}
Then every $f\in \textit{C}^{-\infty}_{\chi_{E}}(\operatorname{U}(E)\times E)$ is
supported in $(\operatorname{Z}(E){\mathcal {U}}_E)\times E$, where ${\mathcal {U}}_E$ is the set of
unipotent elements in $\operatorname{U}(E)$.
\end{prpl}
\begin{proof}
Extend the involution $\tau$ on $A$ to $\mathfrak g \mathfrak l_A(E)$ (still denoted
by $\tau$), by requiring that
\[
\langle x u,v\rangle_E=\langle u, x^\tau v\rangle_E,\quad x\in \mathfrak g \mathfrak l_A(E),\,u,v\in E.
\]
Now let $x$ be a semisimple element in $\operatorname{U}(E)\setminus \operatorname{Z}(E)$.
Let $A'$ be the $\mathbb R$-subalgebra of $\mathfrak g \mathfrak l_A(E)$ generated by $A$,
$x$ and $x^\tau$, which is a commutative involutive algebra. Put
$E'=E$, but viewed as an $A'$-module. Define a map
\[
\langle\,,\,\rangle_{E'}: E'\times E'\rightarrow A'
\]
by requiring that
\[
{\mathrm{tr}}_{A'/\mathbb R}(a \langle u,v\rangle_{E'})= {\mathrm{tr}}_{A/\mathbb R}(\langle au,v\rangle_{E}),\quad a\in A',\,u,v\in E.
\]
Then $E'$ becomes a hermitian $A'$-module, with
\[
\dim_{A'}(E')<\dim_A(E),
\]
and $\tilde{\operatorname{U}}(E')$ coincides with the subgroup of
$\tilde{\operatorname{U}}(E)$ consisting of all $(g,\delta)$ such that
\[
gxg^{-1}=\left\{
\begin{array}{ll}
x, \quad&\textrm{if }\delta=1,\\
x^\tau, \quad&\textrm{if }\delta=-1.\\
\end{array}
\right.
\]
For any $y\in \operatorname{U}(E')$, denote by $J(y)$ the determinant of the
$\mathbb R$-linear map
\[
1-{\mathrm{Ad}}_{y^{-1}}: \u(E)/\u(E')\rightarrow \u(E)/\u(E').
\]
Note that ${\mathrm{Ad}}_{y}$ preserves a non-degenerate real quadratic form
on $\u(E)/\u(E')$. This implies that $J$ is a
$\tilde{\operatorname{U}}(E')$-invariant function (under the action
(\ref{gtaction})). Put
\[
\operatorname{U}(E')^\circ:=\{y\in \operatorname{U}(E')\mid J(y)\neq 0\},
\]
which contains $x{\mathcal {U}}_{E'}$. The map
\[
\begin{array}{rcl}
\pi: \tilde{\operatorname{U}}(E)\times (\operatorname{U}(E')^\circ\times E')&\rightarrow& \operatorname{U}(E)\times E,\\
(\tilde{g},y,v) &\mapsto &\tilde{g}(y,v)
\end{array}
\]
is a submersion. Therefore we have a well defined restriction map
(\cite[Lemma 4.4]{JSZ})
\[
r_{E,E'}: \textit{C}^{-\infty}_{\chi_{E}}(\operatorname{U}(E)\times E)\rightarrow \textit{C}^{-\infty}_{\chi_{E'}}(\operatorname{U}(E')^\circ\times
E'),
\]
which is specified by the rule
\[
\pi^*(f)=\chi _E\otimes r_{E,E'}(f).
\]
The assumption (\ref{vanishep1}) and Lemma \ref{vopen}
imply that the latter space is zero. Thus every $f\in
\textit{C}^{-\infty}_{\chi_{E}}(\operatorname{U}(E)\times E)$ vanishes on the image
of $\pi$. As $x$ is arbitrary, the proposition follows.
\end{proof}
The remaining part of this section is devoted to the proof of the
following
\begin{prpl}\label{descent2} Assume that $A$ is simple,
$\dim_A(E)\geq 1$, and for every commutative involutive algebra $A'$
and every hermitian $A'$-module $E'$,
\begin{equation}
\label{vanishep2}
\dim_{A'}(E')<\dim_A(E)\quad\textrm{implies}\quad \textit{C}^{-\xi}_{\chi_{E'}}(\u(E')\times E')=0.
\end{equation}
Then every $f\in \textit{C}^{-\xi}_{\chi_{E}}(\u(E)\times E)$ is
supported in $(\mathfrak z(E)+{\mathcal {N}}_E)\times \Gamma_E$.
\end{prpl}
Arguing as in the proof of Proposition \ref{descent1}, one shows that
every $f\in \textit{C}^{-\xi}_{\chi_{E}}(\u(E)\times E)$ is supported in
$(\mathfrak z(E)+{\mathcal {N}}_E)\times E$. It remains to show that $f$ is also
supported in $\u(E)\times \Gamma_E$.
Fix
\[
t\in (A^{\times})^{\{1,\tau\}}:=\{a\in A^\times \mid a^\tau
=a\},
\]
and set
\[
E(t):=\{v\in E\mid \langle v,v\rangle_E=t\}.
\]
Fix $v_0\in E(t)$ (when it is nonempty), and put
\[
E':=\{v\in E\mid \langle v, v_0\rangle_E=0\}.
\]
Then
\[
E=E'\oplus Av_0
\]
is an orthogonal decomposition of hermitian $A$-modules. We
identify $\tilde{\operatorname{U}}(E')$ with a subgroup of $\tilde{\operatorname{U}}(E)$ via
the embedding
\begin{equation}\label{embu}
(g,\delta)\mapsto \left(\left[\begin{array}{cc}
\delta g&0\\
0&\tau_\delta\\
\end{array}\right], \delta\right),
\end{equation}
where $\tau_\delta: Av_0\rightarrow A v_0$ is the $\mathbb R$-linear map
given by
\[
\tau_\delta(a v_0)=\left\{
\begin{array}{ll}
a v_0,\quad&\textrm{if }\delta=1,\\
-a^\tau v_0,\quad&\textrm{if }\delta=-1.
\end{array}
\right.
\]
Then $\tilde{\operatorname{U}}(E')$ is precisely the stabilizer of $v_0$ in
$\tilde{\operatorname{U}}(E)$.
Let $\tilde{\operatorname{U}}(E')$ act on $\u(E)$ by way of the action of
$\tilde{\operatorname{U}}(E)$.
\begin{leml}\label{decomue}
With notations as above, we have
\[
\u(E)\cong \u(E')\times E'\times \u(Av_0)
\]
as $\mathbb R$-linear representations of $\tilde{\operatorname{U}}(E')$. Here
$\tilde{\operatorname{U}}(E')$ acts on $\u(Av_0)$ trivially.
\end{leml}
\begin{proof}
Denote by $p_{E'}$ the projection to the first factor according to
the orthogonal decomposition
\[
E=E'\oplus A v_0.
\]
For any
\[
x=\left[\begin{array}{cc}
x_{11}& x_{12}\\
x_{21}& x_{22}
\end{array}
\right]
\in \u(E),
\]
set
\[
\pi_1(x):=x_{11},\quad \pi_2(x):=p_{E'}(xv_0),\quad
\textrm{and }\,\,\pi_3(x):=x_{22}.
\]
It is routine to check that the map
\[
\pi_1\times \pi_2\times \pi_3 : \u(E)\rightarrow \u(E')\times E'\times \u(Av_0)
\]
is injective and $\tilde{\operatorname{U}}(E')$-intertwining. By comparing
dimensions, we see that the map is an isomorphism.
\end{proof}
We now finish the proof of Proposition \ref{descent2}. Define a
map
\[
\begin{array}{rcl}
\rho_t: (A^{\times})^{\{1,\tau\}}\times \u(E)\times E(t)&\rightarrow& \u(E)\times E,\\
(a,x,v)&\mapsto& (x,av).
\end{array}
\]
Let $\tilde{\operatorname{U}}(E)$ act on $(A^{\times})^{\{1,\tau\}}$ trivially.
Note that $\tilde{\operatorname{U}}(E)$ preserves $E(t)$, and is transitive on
$E(t)$ by Witt's lemma. The map $\rho_t$ is a
$\tilde{\operatorname{U}}(E)$-equivariant submersion, and hence defines a pull
back map
\begin{equation*}
\rho_t^*: \textit{C}^{-\xi}_{\chi_E}(\u(E)\times E) \rightarrow
\textit{C}^{-\xi}_{\chi_E}((A^{\times})^{\{1,\tau\}}\times \u(E)\times
E(t)).
\end{equation*}
We have
\begin{eqnarray*}
&&\qquad\textit{C}^{-\xi}_{\chi_E}((A^{\times})^{\{1,\tau\}}\times \u(E)\times
E(t))=0\\
&&\Longleftrightarrow
\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E(t))=0 \qquad \textrm{(by the localization principle)}\\
&&\Longleftrightarrow
\textit{C}^{-\xi}_{\chi_{E'}}(\u(E))=0 \qquad \textrm{(by Frobenius
reciprocity)}\\
&&\Longleftrightarrow
\textit{C}^{-\xi}_{\chi_{E'}}(\u(E')\times E'\times \u(Av_0))=0
\qquad \textrm{(by Lemma \ref{decomue})}\\
&&\Longleftrightarrow
\textit{C}^{-\xi}_{\chi_{E'}}(\u(E')\times E')=0 \qquad \textrm{(by the localization principle)}.
\end{eqnarray*}
For a proof of Frobenius reciprocity, see \cite[Theorem
C.3.3]{AG2} for example. As the last of these vanishing statements holds by assumption
(\ref{vanishep2}), so does the first. Consequently, every
$f\in \textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$ vanishes on the image
of $\rho_t$. As $t\in (A^{\times})^{\{1,\tau\}}$ is arbitrary,
we conclude that $f$ is supported in $\u(E)\times \Gamma_E$.
\section{Proof of Theorem \ref{thm:mainB}}
\label{sec:proofB}
Let $(A,\tau)$ be a commutative involutive algebra, and let $E$ be a
hermitian $A$-module, as in Section \ref{sec:eigenvalue}. Recall the
group $\tilde{\operatorname{U}}(E)$ and the quadratic character $\chi _E$ of
$\tilde{\operatorname{U}}(E)$, as in (\ref{dutilde}) and (\ref{dctilde}).
\begin{lem}\label{prod}
Write
\[
A=A_1\times A_2\times \cdots \times A_r
\]
as a product of simple commutative involutive algebras. Set
\[
E_j=A_j\otimes_A E,
\]
which is canonically a hermitian $A_j$-module.
\begin{itemize}
\item[(a)] If
\begin{equation}\label{coni0}
\textit{C}^{-\infty}_{\chi_{E_i}}(\operatorname{U}(E_i)\times E_i)=0\quad \textrm{for all
}i=1,2,\cdots,r,
\end{equation}
then
\[
\textit{C}^{-\infty}_{\chi_E}(\operatorname{U}(E)\times E)=0.
\]
\item[(b)]If
\begin{equation*}
\textit{C}^{-\xi}_{\chi_{E_i}}(\u(E_i)\times E_i)=0\quad \textrm{for all
}i=1,2,\cdots,r,
\end{equation*}
then
\[
\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)=0.
\]
\end{itemize}
\end{lem}
\begin{proof}
Let us prove Part (a). Part (b) is proved similarly. Note that
\[
E=E_1\times E_2\times \cdots \times E_r,
\]
and
\[
\operatorname{U}(E)=\operatorname{U}(E_1)\times\operatorname{U}(E_2)\times \cdots \times \operatorname{U}(E_r).
\]
Recall the following elementary fact (cf.\ \cite[Proposition
3.1.5]{AGS1}). Let $H_i$ be a Lie group acting smoothly on a
manifold $M_i$, and let $H_i'$ be a subgroup of $H_i$,
$i=1,2,\cdots,r$. If
\[
\textit{C}^{-\infty}(M_i)^{H_i'}=\textit{C}^{-\infty}(M_i)^{H_i}\quad \textrm{for all
}i=1,2,\cdots,r,
\]
then
\begin{eqnarray}\label{mg}
&&\quad \textit{C}^{-\infty}(M_1\times M_2\times \cdots\times M_r)^{H'_1\times H'_2\times \cdots \times
H'_r}\\
\nonumber
&& =\textit{C}^{-\infty}(M_1\times M_2\times \cdots\times M_r)^{H_1\times H_2\times \cdots \times
H_r}.
\end{eqnarray}
Note that (\ref{coni0}) is equivalent to
\[
\textit{C}^{-\infty}(\operatorname{U}(E_i)\times E_i)^{\operatorname{U}(E_i)}=\textit{C}^{-\infty}(\operatorname{U}(E_i)\times
E_i)^{\tilde{\operatorname{U}}(E_i)}.
\]
By (\ref{mg}), we have
\begin{eqnarray*}
&&\quad \textit{C}^{-\infty}(\operatorname{U}(E)\times E)^{\operatorname{U}(E_1)\times \operatorname{U}(E_2)\times \cdots \times \operatorname{U}(E_r)}\\
&& =\textit{C}^{-\infty}(\operatorname{U}(E)\times E)^{\tilde{\operatorname{U}}(E_1)\times \tilde{\operatorname{U}}(E_2)\times \cdots \times
\tilde{\operatorname{U}}(E_r)}.
\end{eqnarray*}
Now Part (a) of the lemma follows by noting that, as operators on
$\operatorname{U}(E)\times E$, the group $\tilde{\operatorname{U}}(E)$ coincides with the
subgroup of $\tilde{\operatorname{U}}(E_1)\times \tilde{\operatorname{U}}(E_2)\times \cdots
\times \tilde{\operatorname{U}}(E_r)$ consisting of elements of the form
\[
((g_1,\delta), (g_2,\delta),\cdots, (g_r,\delta)).
\]
\end{proof}
\begin{lem}\label{gversust}
Let $\mathbf{H}$ be a reductive linear algebraic group defined
over $\mathbb R$, with an algebraic action on a finite dimensional real
vector space $F$. Let $\chi_{\mathbf{H}}$ be a (continuous)
quadratic character of $\mathbf{H}(\mathbb R)$. Then
\[
\textit{C}^{-\infty}_{\chi_{\mathbf{H}}}(F)=0 \quad \textrm{if and only if}
\quad \textit{C}^{-\xi}_{\chi_{\mathbf{H}}}(F)=0.
\]
\end{lem}
See \cite[Theorem 4.0.8]{AG2} for a proof, which uses geometric
invariant theory.
\begin{prpl}\label{liealg}
One has that
\[
\textit{C}^{-\infty}_{\chi_E}(\u(E)\times E)=0.
\]
\end{prpl}
\begin{proof}
By Lemma \ref{gversust}, we only need to prove that
\begin{equation}\label{conxi}
\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)=0.
\end{equation}
We prove (\ref{conxi}) by induction on $\dim_A(E)$. When
$\dim_A(E)=0$, we have $\tilde{\operatorname{U}}(E)=\{\pm 1\}$ and so
(\ref{conxi}) is trivially true. So assume that $\dim_A(E)\geq 1$,
and that (\ref{conxi}) holds for all commutative involutive
algebras $A'$ and all hermitian $A'$-modules $E'$ with
$\dim_{A'}(E')<\dim_A(E)$.
By Lemma \ref{prod}, we may further assume that $A$ is simple. By
Proposition \ref{descent2}, we see that every $f\in
\textit{C}^{-\xi}_{\chi_E}(\u(E)\times E)$ is supported in
$(\mathfrak z(E)+{\mathcal {N}})\times \Gamma_E$. Proposition \ref{indn} then implies
that $f=0$.
\end{proof}
\begin{prpl}\label{group}
One has that
\[
\textit{C}^{-\infty}_{\chi_E}(\operatorname{U}(E)\times E)=0.
\]
\end{prpl}
\begin{proof}
Again, we argue by induction on $\dim_A(E)$. When $\dim_A(E)=0$,
the proposition is trivially true. So assume that $\dim_A(E)\geq
1$, and that the proposition holds for all commutative involutive
algebras $A'$ and all hermitian $A'$-modules $E'$ with
$\dim_{A'}(E')<\dim_A(E)$.
By Lemma \ref{prod}, we may further assume that $A$ is simple. By
Proposition \ref{descent1}, we see that every $f\in
\textit{C}^{-\infty}_{\chi_E}(\operatorname{U}(E)\times E)$ is supported in
$(\operatorname{Z}(E){\mathcal {U}}_E)\times E$.
Define a $\tilde{\operatorname{U}}(E)$-equivariant map
\[
\begin{array}{rcl}
\rho_E: \operatorname{Z}(E)\times \mathfrak s \mathfrak u(E)\times E&\rightarrow& \operatorname{U}(E)\times E,\\
(z,x,v)&\mapsto& (z\exp(x),v).
\end{array}
\]
As is well-known, $\rho_E$ is submersive from $\operatorname{Z}(E)\times
{\mathcal {N}}_E\times E$ onto $(\operatorname{Z}(E) {\mathcal {U}}_E)\times E$. Therefore it yields
an injective pull back map
\begin{eqnarray*}
&& \phantom{\lhook\joinrel\xrightarrow{\rho_E^*}}\,
\textit{C}_{\chi_E}^{-\infty}(\operatorname{U}(E)\times E; (\operatorname{Z}(E) {\mathcal {U}}_E)\times E)\\
&&\lhook\joinrel\xrightarrow{\rho_E^*}
\textit{C}_{\chi_E}^{-\infty}(\operatorname{Z}(E)\times \mathfrak s \mathfrak u(E)\times E; \operatorname{Z}(E)\times
{\mathcal {N}}_E\times E).\\
\end{eqnarray*}
To finish the proof, it suffices to show that
\[
\textit{C}_{\chi_E}^{-\infty}(\operatorname{Z}(E)\times \mathfrak s \mathfrak u(E)\times E)=0.
\]
Since $\tilde{\operatorname{U}}(E)$ acts on $\operatorname{Z}(E)$ trivially, by the
localization principle, this is equivalent to
\begin{equation*}
\textit{C}_{\chi_E}^{-\infty}(\mathfrak s \mathfrak u(E)\times E)=0.
\end{equation*}
Again, since $\tilde{\operatorname{U}}(E)$ acts on $\mathfrak z(E)$ trivially and
$\u(E)=\mathfrak z(E)\oplus \mathfrak s \mathfrak u(E)$, this is equivalent to
\[
\textit{C}_{\chi_E}^{-\infty}(\u(E)\times E)=0,
\]
which is asserted by Proposition \ref{liealg}.
\end{proof}
\begin{thml}\label{thm3}
Let $E$ be a hermitian $A$-module and assume that $A$ is simple.
Let $v_0\in E\setminus \Gamma_E$, and define
\[
E':=\{v\in E\mid \langle v,v_0\rangle_E=0\},
\]
which is a hermitian $A$-module. Identify $\tilde{\operatorname{U}}(E')$ with
the stabilizer of $v_0$ in $\tilde{\operatorname{U}}(E)$ via the embedding
(\ref{embu}), and let $\tilde{\operatorname{U}}(E')$ act on $\operatorname{U}(E)$ by way of
the action of $\tilde{\operatorname{U}}(E)$. Then
\[
\textit{C}^{-\infty}_{\chi_{E'}}(\operatorname{U}(E))=0.
\]
\end{thml}
\begin{proof} As in Section \ref{sec:reduction}, put
\[
t:=\langle v_0,v_0\rangle_E\in (A^\times)^{\{1,\tau\}},
\]
and
\[
E(t):=\{v\in E\mid \langle v,v\rangle_E=t\},
\]
which is a $\tilde{\operatorname{U}}(E)$-homogeneous space.
Fix a $\tilde{\operatorname{U}}(E)$-invariant positive measure
$\mu_{E(t)}$ on $E(t)$, and a Lebesgue measure $\mu_E$ on $E$.
Define a map
\[
J_t: \textit{C}^{-\infty}(E(t))\rightarrow \textit{C}^{-\infty}(E)
\]
by requiring that the diagram
\[
\begin{CD}
\textit{C}^{-\infty}(E(t))@>J_t>> \textit{C}^{-\infty}(E)\\
@VV\cdot \mu_{E(t)} V @VV\cdot \mu_{E} V\\
\textit{D}^{-\infty}(E(t))@>>> \textit{D}^{-\infty}(E)\\
\end{CD}
\]
commutes, where $\textit{D}^{-\infty}$ stands for the space of
distributions, the lower horizontal arrow is the push forward of
distributions via the closed embedding $E(t)\hookrightarrow E$,
and the vertical arrows are linear isomorphisms given by
multiplications of the indicated measures.
Then we have an injective continuous linear map
\[
1\otimes J_t: \textit{C}^{-\infty}_{\chi_{E}}(\operatorname{U}(E)\times E(t))\hookrightarrow \textit{C}^{-\infty}_{\chi_{E}}(\operatorname{U}(E)\times
E).
\]
The latter space vanishes by Proposition \ref{group}, and therefore
so does the former. We finish the proof by applying Frobenius
reciprocity.
\end{proof}
We now show that Theorem \ref{thm:mainB} is implied by Theorem
\ref{thm3}. Let $A$ be one of the five simple commutative
involutive algebras as in (\ref{five}), and $E$ be the hermitian
$A$-module as in (\ref{fivee}), with $n,q\geq 1$. Let $v_0$ be the
vector in $E\setminus \Gamma_E$ given by
\[
v_0:=\left\{\begin{array}{ll}
\! [0,0,\cdots,0,1]^t, \quad & \textrm{if $A$ is of type I,}
\smallskip\\ \!
[0,0,\cdots,0,1,0,0,\cdots,0,1]^t, \quad& \textrm{if $A$ is of type II.}\\
\end{array}
\right.
\]
Then $G'$ of Theorem \ref{thm:mainB} coincides with $\operatorname{U}(E')$ of
Theorem \ref{thm3}. Define $\sigma_0 :=(-x_0,-1)$, where $x_0\in
{\mathrm{GL}}_\mathbb R(E)$ is given by
\[
x_0:=\left\{
\begin{array}{ll}
1, \quad&\textrm{if $A=(\mathbb{K},1)$,} \smallskip\\
\bar{\phantom{a}}, \quad&\textrm{if $A=(\mathbb{C},\bar{\phantom{a}})$,} \\
\left[
\begin{array}{cc}
0& 1\\
1 & 0
\end{array}
\right],
\quad&\textrm{if $A=(\mathbb{K}\times \mathbb{K},\tau_\mathbb{K})$.} \smallskip\\
\end{array}
\right.
\]
Then $\sigma _0$ is an element of $\tilde{\operatorname{U}}(E)\setminus \operatorname{U}(E)$
fixing $v_0$, and so is in $\tilde{\operatorname{U}}(E')\setminus \operatorname{U}(E')$. See
(\ref{dutilde}) for the description of $\tilde{\operatorname{U}}(E)$ and
(\ref{embu}) for the explicit embedding of $\tilde{\operatorname{U}}(E')$ in
$\tilde{\operatorname{U}}(E)$. Theorem \ref{thm:mainB} follows from Theorem
\ref{thm3} by observing that $\sigma_0$ yields the anti-involution
$\sigma$ of $G=\operatorname{U}(E)$, as desired.
\section{Theorem \ref{thm:mainB} implies Theorem \ref{thm:mainA}}
\label{sec:proofA}
This section is devoted to a proof of the following proposition,
which says that Theorem \ref{thm:mainB} implies Theorem
\ref{thm:mainA}, in a general setting. The argument is standard.
\begin{prp}\label{gkmo}
Let $G$ be a real reductive group, with a reductive closed
subgroup $G'$. Let $\sigma$ be a continuous anti-automorphism of
$G$ which leaves $G'$ stable. Assume that for every generalized
function $f$ on $G$, the condition
\[
f(gxg^{-1})=f(x)\quad \textrm{ for all } g\in G'
\]
implies
\[
f(x^\sigma)=f(x).
\]
Then for all irreducible Harish-Chandra smooth representations $V$
and $V'$ of $G$ and $G'$, respectively, we have
\[
\dim {\mathrm{Hom}}_{G'}(V,V')\leq 1.
\]
\end{prp}
We will use the following form of the Gelfand--Kazhdan criterion. A more
general version is proved in \cite[Theorem 2.2]{SZ}.
\begin{lemp}\label{gelfand}
Let $S$ be a closed subgroup of a real reductive group $H$, and
let $\sigma$ be a continuous anti-automorphism of $H$. Assume that
every bi $S$-invariant generalized function on $H$ is
$\sigma$-invariant. Then for any two irreducible Harish-Chandra
smooth representations $U_H$ and $V_H$ of $H$ which are
contragredient to each other, one has that
\begin{equation*}
\dim {\mathrm{Hom}}_{S}(U_H, \mathbb{C}) \, \dim {\mathrm{Hom}}_{S}
(V_H,\mathbb{C})\leq 1.
\end{equation*}
\end{lemp}
We return to the proof of Proposition \ref{gkmo}. Set
\[
H:=G\times G',
\]
which contains $G$ as a subgroup. Denote by $S\subset H$ the group
$G'$ diagonally embedded in $H$. For any $x=(g,g')\in H$, set
\[
x^\sigma:=(g^\sigma,{g'}^\sigma).
\]
\begin{lemp}\label{tauinv}
Under the assumption of Proposition \ref{gkmo}, if $f$ is a bi
$S$-invariant generalized function on $H$, then
\[
f(x^\sigma)=f(x).
\]
\end{lemp}
\begin{proof}
The multiplication map
\[
\begin{array}{rcl}
m_H: S\times G\times S&\rightarrow &H\\
(s_1, g, s_2)&\mapsto& s_1 g s_2
\end{array}
\]
is a surjective submersion. Let $f$ be a bi $S$-invariant
generalized function on $H$. Then its pull back has the form
\[
m_H^*(f)=1\otimes f_G\otimes 1, \quad \textrm{with} \ f_G\in
\textit{C}^{-\infty}(G).
\]
By considering the commutative diagram
\begin{equation*}
\begin{CD}
S\times G\times S@>>m_H>H\\
@V{\mathrm{Ad}}_s\times {\mathrm{Ad}}_s\times {\mathrm{Ad}}_s VV @V{\mathrm{Ad}}_sVV \\
S\times G\times S@>>m_H>H\,,
\end{CD}
\end{equation*}
for all $s\in S$, we conclude that $f_G$ is invariant under the
adjoint action of $G'$. By the assumption of Proposition
\ref{gkmo}, we conclude that $f_G$ is $\sigma$-invariant.
Set
\[
(s_1,g,s_2)^\sigma:=(s_2^\sigma, g^\sigma, s_1^\sigma), \quad
(s_1,g,s_2)\in S\times G\times S.
\]
Then $1\otimes f_G\otimes 1\in \textit{C}^{-\infty}( S\times G\times S)$
is also $\sigma$-invariant. We conclude that $f$ is
$\sigma$-invariant by appealing to the commutative diagram
\begin{equation*}
\begin{CD}
S\times G\times S@>>m_H>H\\
@V\sigma VV @V\sigma VV \\
S\times G\times S@>>m_H>H\,.
\end{CD}
\end{equation*}
\end{proof}
\begin{lemp}\label{dualrep}
We are under the assumption of Proposition \ref{gkmo}. Let $(V_H,
\rho)$ be an irreducible Harish-Chandra smooth representation of
$H$.
\begin{itemize}
\item[(a)] Set
\[
\rho_{-\sigma}(h):=\rho(h^{-\sigma}).
\]
Then $(V_H,\rho_{-\sigma})$ is an irreducible Harish-Chandra
smooth representation of $H$ which is contragredient to $(V_H,
\rho)$. \item[(b)] We have \[
\dim {\mathrm{Hom}}_S (V_H,\mathbb{C})\leq 1.
\]
\end{itemize}
\end{lemp}
\begin{proof}
Denote by
\[
\chi_{\rho}\in \textit{C}^{-\infty}(H)
\]
the character of $(V_H, \rho)$. Then its contragredient
representation has character $\chi_{\rho}(h^{-1})$.
It is clear that $(V_H,\rho_{-\sigma})$ is an irreducible
Harish-Chandra smooth representation, with character
$\chi_{\rho}(h^{-\sigma})$. Note that the assumption of
Proposition \ref{gkmo} easily implies that every generalized
function on $H$ is $\sigma$-invariant provided it is invariant
under the adjoint action of $H$. Since a character is always
invariant under the adjoint action, we conclude that
\[
\chi_{\rho}(h^{-1})=\chi_{\rho}(h^{-\sigma}).
\]
Part (a) then follows from the well-known fact that an irreducible
Harish-Chandra smooth representation is determined by its
character.
For Part (b), denote by $U_H$ the irreducible Harish-Chandra
smooth representation which is contragredient to $V_H$. Lemma
\ref{gelfand} and Lemma \ref{tauinv} imply that
\[
\dim {\mathrm{Hom}}_S(U_H,\mathbb{C})\, \dim {\mathrm{Hom}}_S (V_H,\mathbb{C})\leq 1.
\]
Now Part (a) clearly implies that
\[
\dim {\mathrm{Hom}}_S (U_H,\mathbb{C})=\dim {\mathrm{Hom}}_S (V_H,\mathbb{C}).
\]
We therefore conclude that $\dim {\mathrm{Hom}}_S (V_H,\mathbb{C})\leq 1$.
\end{proof}
{\vspace{0.2in}}
We now finish the proof of Proposition \ref{gkmo}. Denote by $U'$
the irreducible Harish-Chandra smooth representation of $G'$ which
is contragredient to $V'$. Set
\[
V_H:=V\widehat{\otimes} U',
\]
which is an irreducible Harish-Chandra smooth representation of
$H$. As usual, we have an obvious linear embedding
\[
{\mathrm{Hom}}_{G'}(V,V')\hookrightarrow {\mathrm{Hom}}_S(V_H,\mathbb{C}).
\]
The latter space is at most one-dimensional by Lemma \ref{dualrep},
and so is the former.
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\newcommand{\comma}{{},\,}
\newcommand{\col}{\colon}
\newcommand{\bbZ}{\mathbb Z}
\newcommand\Z{\bbZ}
\newcommand{\bbR}{\mathbb R}
\newcommand{\mcA}{\mathcal A}
\newcommand{\mcB}{\mathcal B}
\newcommand{\mcC}{\mathcal C}
\newcommand{\mcD}{\mathcal D}
\newcommand{\mcI}{\mathcal I}
\newcommand{\mcJ}{\mathcal J}
\newcommand{\mcS}{\mathcal S}
\newcommand{\mcX}{\mathcal X}
\newcommand{\sfP}{\mathsfsl{P}}
\newcommand{\sfQ}{\mathsfsl{Q}}
\DeclareMathAlphabet{\mathsfsl}{OT1}{cmss}{m}{sl}
\newcommand{\Map}{\mathsfsl{Map}}
\newcommand{\Pair}{{\mathsfsl{Pair}}}
\newcommand{\SSet}{\mathsfsl{SSet}}
\newcommand{\Alg}{\mathsfsl{Alg}}
\newcommand{\MPS}[1]{\mathsfsl{MPS}_{#1}}
\newcommand{\EMPS}[1]{\mathsfsl{EMPS}_{#1}}
\newcommand{\PMPS}[2][]{\ifthenelse{\equal{#1}{}}{\mathsfsl{PMPS}_{#2}}{\mathsfsl{PMPS}_{#2,#1}}}
\newcommand{\HMPS}[2][]{\ifthenelse{\equal{#1}{}}{\mathsfsl{HMPS}_{#2}}{\mathsfsl{HMPS}_{#2,#1}}}
\newcommand{\xra}[1]{\xrightarrow{#1}}
\newcommand{\xla}[1]{\xleftarrow{#1}}
\newcommand{\ra}{\rightarrow}
\newcommand{\la}{\leftarrow}
\newcommand{\xlra}[1]{\xrightarrow{\ #1\ }}
\newcommand{\xlla}[1]{\xleftarrow{\ #1\ }}
\newcommand{\lra}{\longrightarrow}
\newcommand{\lla}{\longleftarrow}
\newcommand{\Ra}{\xRightarrow{\ \ }}
\newcommand{\La}{\xLeftarrow{\ \ }}
\newcommand{\LRa}{\xLeftrightarrow{\ \ \ }}
\newdir{c}{{}*!/-5pt/@^{(}}
\newdir{d}{{}*!/-5pt/@_{(}}
\newdir{ >}{{}*!/-5pt/@{>}}
\newdir{s}{{}*!/+10pt/@{}}
\newdir{|>}{{}*!/2pt/@{|}*@{>}}
\newcommand{\pbsize}{15pt}
\newcommand{\pboffset}{.5}
\newcommand{\xycorner}[3]{\save #2="a";#1;"a"**{}?(\pboffset);"a"**\dir{-};#3;"a"**{}?(\pboffset);"a"**\dir{-}\restore}
\newcommand{\pb}{\xycorner{[]+<\pbsize,0pt>}{[]+<\pbsize,-\pbsize>}{[]+<0pt,-\pbsize>}}
\newcommand{\po}{\xycorner{[]+<-\pbsize,0pt>}{[]+<-\pbsize,\pbsize>}{[]+<0pt,\pbsize>}}
\newcommand{\xymatrixc}[1]{\xy *!C\xybox{\xymatrix{#1}}\endxy}
\newcommand{\cof}[1][]{\mathbin{\:\!\!\xymatrix@1@C=15pt{{}\ar@{ >->}[r]^{#1} & {}}}}
\newcommand{\fib}[1][]{\mathbin{\:\!\!\xymatrix@1@C=15pt{{}\ar@{->>}[r]^{#1} & {}}}}
\newcommand{\family}[1][]{\mathbin{\:\!\!\xymatrix@1@C=15pt{{}\ar@{~>}[r]^{#1} & {}}}}
\newcommand{\coker}{\mathop\mathrm{coker}\nolimits}
\newcommand{\cone}{\mathop\mathrm{cone}\nolimits}
\newcommand{\cyl}{\mathop\mathrm{cyl}\nolimits}
\newcommand{\defeq}{\stackrel{\mathrm{def}}{=}}
\newcommand{\ef}{\mathrm{ef}}
\newcommand{\ev}{\mathop\mathrm{ev}\nolimits}
\newcommand{\fibre}{\mathop\mathrm{fib}\nolimits}
\newcommand{\Gr}{\mathop\mathrm{Gr}\nolimits}
\newcommand{\hofibre}{\mathop\mathrm{hofib}\nolimits}
\newcommand{\hocolim}{\mathop\mathrm{hocolim}}
\newcommand{\id}{\mathop\mathrm{id}\nolimits}
\newcommand{\Id}{\mathop\mathrm{Id}\nolimits}
\newcommand{\im}{\mathop\mathrm{im}\nolimits}
\newcommand{\map}{\mathop\mathrm{map}\nolimits}
\newcommand{\op}{\mathrm{op}}
\newcommand{\pr}{\mathop\mathrm{pr}\nolimits}
\newcommand{\size}{\mathop\mathrm{size}\nolimits}
\newcommand{\sk}{\mathop\mathrm{sk}\nolimits}
\newlength{\hlp}
\newcommand{\leftbox}[2]{\settowidth{\hlp}{$#1$}\makebox[\hlp][l]{${#1}{#2}$}}
\newcommand{\rightbox}[2]{\settowidth{\hlp}{$#2$}\makebox[\hlp][r]{${#1}{#2}$}}
\newcommand{\tdcong}{\rotatebox{-90}{$\cong$}}
\entrymodifiers={+!!<0pt,\the\fontdimen22\textfont2>}
\newcommand{\Hopf}{H-}
\newcommand{\bdry}{d}
\newcommand{\ZG}{{\bbZ G}}
\newcommand{\hnabla}{\mathop{\widehat\nabla}}
\newcommand{\hDelta}{\mathop{\widehat\Delta}}
\newcommand{\htimes}{\mathbin{\widehat\times}}
\newcommand{\hvee}{\mathbin{\widehat\vee}}
\newcommand{\add}{\mathop\mathrm{add}\nolimits}
\newcommand{\inv}{\mathop\mathrm{inv}\nolimits}
\newcommand{\hadd}{\mathop\mathrm{add'}\nolimits}
\newcommand{\hplus}{\mathbin{+'}}
\newcommand{\theconn}{{d}}
\newcommand{\thedim}{{n}}
\newcommand{\theotherdim}{{m}}
\newcommand{\thedimm}{{i}}
\newcommand{\thedimmm}{{j}}
\newcommand{\stdsimp}[1]{\Delta^{#1}}
\newcommand{\horn}[2]{%
\mbox{$\xy
<0pt,-\the\fontdimen22\textfont2>;p+<.1em,0em>:
{\ar@{-}(0,0.1);(3,7)},
{\ar@{-}(3,7);(6,0.1)},
{\ar@{-}(3.2,7);(6.2,0.1)},
{\ar@{-}(3.4,7);(6.4,0.1)}
\endxy\;\!{}^{#1}_{#2}$}}
\newcommand{\vertex}[1]{#1}
\newcommand{\Pnew}{{P_\thedim}}
\newcommand{\Pold}{{P_{\thedim-1}}}
\newcommand{\Pm}{{P_\theotherdim}}
\newcommand{\tPnew}{{\widetilde P_\thedim}}
\newcommand{\pin}{{\pi_\thedim}}
\newcommand{\onew}{{o}}
\newcommand{\oold}{{o}}
\newcommand{\oi}{{o}}
\newcommand{\ommo}{{o_{\theotherdim-1}}}
\newcommand{\om}{{o_\theotherdim}}
\newcommand{\ompo}{{o_{\theotherdim+1}}}
\newcommand{\Kn}{{K_{\thedim+1}}}
\newcommand{\En}{{E_\thedim}}
\newcommand{\Ln}{{L_\thedim}}
\newcommand{\Li}{{L_\thedimm}}
\newcommand{\kn}{{k_\thedim}}
\newcommand{\ki}{{k_\thedimm}}
\newcommand{\konep}{{k_1'}}
\newcommand{\knp}{{k_\thedim'}}
\newcommand{\kip}{{k_\thedimm'}}
\newcommand{\knst}{{k_{\thedim*}}}
\newcommand{\kist}{{k_{\thedimm*}}}
\newcommand{\kmst}{{k_{\theotherdim*}}}
\newcommand{\kmpost}{{k_{(\theotherdim+1)*}}}
\newcommand{\konepef}{{\kappa_1^\ef}}
\newcommand{\knpef}{{\kappa_\thedim^\ef}}
\newcommand{\kipef}{{\kappa_\thedimm^\ef}}
\newcommand{\pn}{{p_\thedim}}
\newcommand{\pnst}{{p_{\thedim*}}}
\newcommand{\qn}{{q_\thedim}}
\newcommand{\qnst}{{q_{\thedim*}}}
\newcommand{\jn}{{j}}
\newcommand{\jnst}{{j_*}}
\newcommand{\kno}{{\kn\oold}}
\newcommand{\kio}{{\ki\oi}}
\newcommand{\kmo}{{k_\theotherdim\ommo}}
\newcommand{\kmpoo}{{k_{\theotherdim+1}\om}}
\newcommand{\qno}{{\qn\onew}}
\newcommand{\fn}{{f_\thedim}}
\newcommand{\varphin}{{\varphi_\thedim}}
\newcommand{\varphinp}{{\varphi_\thedim'}}
\newcommand{\varphinst}{{\varphi_{\thedim*}}}
\newcommand{\varphinpst}{{\varphi_{\thedim*}'}}
\newcommand{\tvarphin}{{\widetilde\varphi_\thedim}}
\newcommand{\psin}{{\psi_\thedim}}
\newcommand{\psiold}{{\psi_{\thedim-1}}}
\newcommand{\tpsin}{{\widetilde\psi_\thedim}}
\usepackage{titlesec}
\titleformat{\section}[block]
{\normalfont\Large\filcenter\bfseries}{\thesection.}{.33em}{}
\titlespacing*{\section}
{0pt}{3.5ex plus 1ex minus .2ex}{2.3ex plus .2ex}
\titleformat{\subsection}[runin]
{\normalfont\normalsize\bfseries}{\thesubsection.}{.33em}{}[.]
\titlespacing*{\subsection}
{0pt}{3.25ex plus 1ex minus .2ex}{.5em}
\def\immediateFigure#1{%
\smallskip\begin{center}#1\end{center}\smallskip }
\newcommand{\immfig}[1]
{\immediateFigure{\mbox{\includegraphics{#1}}}}
\title
{Algorithmic solvability of the lifting-extension problem\thanks{%
The research of M.\ \v{C}.\ was supported by the project CZ.1.07/2.3.00/20.0003 of the Operational Programme Education for Competitiveness of the Ministry of Education, Youth and Sports of the Czech Republic. The research of M.\ K.\ was supported by the Center of Excellence -- Inst.\ for Theor.\ Comput.\ Sci., Prague (project P202/12/G061 of GA~\v{C}R) and by the Project LL1201 ERCCZ CORES. The research of L.\ V.\ was supported by the Center of Excellence -- Eduard \v{C}ech Institute (project P201/12/G028 of GA~\v{C}R).
\vskip .5ex \noindent
\emph{2010 Mathematics Subject Classification}. Primary 55Q05; Secondary 55S91.
\vskip .5ex \noindent
\emph{Key words and phrases}. Homotopy classes, equivariant, fibrewise, lifting-extension problem, algorithmic computation, embeddability, Moore--Postnikov tower.
}}
\author
{Martin \v{C}adek \and Marek Kr\v{c}\'al \and Luk\'a\v{s} Vok\v{r}\'{\i}nek}
\begin{document}
\maketitle
\begin{abstract}
Let $X$ and $Y$ be finite simplicial sets (e.g.\ finite simplicial complexes), both equipped with a free simplicial action of a finite group $G$. Assuming that $Y$ is $d$-connected and $\dim X\le 2d$, for some $d\ge 1$, we provide an algorithm that computes the set of all equivariant homotopy classes of equivariant continuous maps $|X|\to|Y|$; the existence of such a map can be decided even for $\dim X\leq 2d+1$.
\ifpoly
For fixed $G$ and $\theconn$, the algorithm runs in polynomial time.
\fi
This yields the first algorithm for deciding topological embeddability of a $k$-dimensional finite simplicial complex into $\bbR^n$ under the condition $k\leq\frac 23 n-1$.
More generally, we present an algorithm that, given a lifting-extension problem satisfying an appropriate stability assumption, computes the set of all homotopy classes of solutions. This result is new even in the non-equivariant situation.
\end{abstract}
\section{Introduction}
Our original goal for this paper was to design an algorithm that decides the existence of an equivariant map between given spaces under a certain ``stability'' assumption. To explain our solution, however, it is more natural to deal with a more general lifting-extension problem. At the same time, lifting-extension problems play a fundamental role in algebraic topology since many problems can be expressed as their instances. We start by explaining our original problem and its concrete applications and then proceed to the main object of our study in this paper -- the lifting-extension problem.
\subsection*{Equivariant maps}
Consider the following
algorithmic problem: given a finite group $G$ and
two free $G$-spaces $X$ and $Y$, decide the existence of
an equivariant map $f\col X\to Y$.
In the particular case $G=\Z_2$ and $Y=S^{n-1}$ equipped with the antipodal $\Z_2$-action, this problem has various applications in geometry and combinatorics.
Concretely, it is well-known that if a simplicial complex $K$ embeds into $\bbR^{\thedim}$ then there exists a $\Z_2$-equivariant map $(K \times K) \smallsetminus \Delta_K \to S^{n-1}$; the converse holds in the so-called \emph{metastable range} $\dim K \leq \tfrac{2}{3} \thedim-1$ by \cite{Weber:PlongementsPolyedresDomaineMetastable-1967}. Algorithmic aspects of the problem of embeddability of $K$ into $\bbR^{\thedim}$ were studied in \cite{MatousekTancerWagner:HardnessEmbeddings-2011}, and the metastable range was essentially the only remaining case left open.
\ifpoly
Theorem~\ref{t:emb-metast} below shows that, for fixed $n$, it is solvable in polynomial time.
\else
Theorem~\ref{t:emb-metast} below shows that it is solvable.
\fi
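To make the combinatorics concrete, note that the deleted product above has a standard finite simplicial model: one cell $\sigma\times\tau$ for each ordered pair of disjoint simplices $\sigma,\tau$ of $K$, with $\Z_2$ swapping the two factors. The following sketch enumerates these cells for a small complex; the function name and input format are ours, purely for illustration, and this is not the algorithm behind Theorem~\ref{t:emb-metast}.

```python
from itertools import product

def deleted_product_cells(simplices):
    """Ordered pairs (sigma, tau) of disjoint simplices of K; these
    index the cells sigma x tau of the simplicial deleted product,
    on which Z/2 acts by swapping the two factors."""
    return [(s, t) for s, t in product(simplices, repeat=2)
            if not set(s) & set(t)]

# K = boundary of a triangle: three vertices and three edges
K = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
cells = deleted_product_cells(K)

# the swap (sigma, tau) -> (tau, sigma) fixes no cell: a fixed cell
# would need sigma = tau, contradicting disjointness
assert all((t, s) in cells and s != t for (s, t) in cells)
print(len(cells))  # 12: six vertex-vertex and six vertex-edge pairs
```

Disjointness forces $\sigma\neq\tau$, which is exactly why the swap action on this model is free.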
Equivariant maps also provide interesting applications of topology to combinatorics. For example, the celebrated result of Lov\'asz on Kneser's conjecture states that for a graph $G$, the absence of a $\Z_2$-equivariant map $B(G) \to S^{n-1}$ imposes a lower bound $\chi(G) \geq n+2$ on the chromatic number of $G$, where $B(G)$ is a certain simplicial complex constructed from $G$, see \cite{Mat-top}.
Building on the work of Brown \cite{Brown}, which is not applicable for $Y=S^{\thedim-1}$, we investigated in the papers \cite{CKMSVW11,polypost} the
simpler, non-equivariant situation, where $X$ and $Y$ were topological
spaces and we were interested in $[X,Y]$, the set of all homotopy
classes of continuous maps $X\to Y$. Employing methods of \emph{effective homology}
developed by Sergeraert et al.\ (see e.g.\ \cite{SergerGenova}),
we showed that for any fixed $d\ge 1$, $[X,Y]$ is polynomial-time computable
if $Y$ is $d$-connected and $\dim X\le 2d$.
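For instance, for $n\geq 2$ the sphere $Y=S^n$ is $(n-1)$-connected
and $X=S^n$ satisfies $\dim X=n\leq 2(n-1)$; in this simplest case
the algorithm recovers the classical computation
\[
[S^n,S^n]\cong\bbZ, \qquad \ell\mapsto \deg\ell,
\]
given by the degree.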
In contrast, \cite{ext-hard} shows that the problem of computing $[X,Y]$ is \#P-hard when the dimension restriction on $X$ is dropped. More strikingly, a related problem of the existence of a continuous extension of a given map $A \to Y$, defined on a subspace $A$ of $X$, is \emph{undecidable} as soon as $\dim X \geq 2d+2$.
Here we obtain an extension of the above computability result for free $G$-spaces and equivariant maps. The input $G$-spaces $X$ and $Y$ can be given as finite \emph{simplicial sets} (generalizations of finite simplicial complexes, see \cite{Friedm08}), and the free action of $G$ is supposed to be simplicial. The simplicial sets and the $G$-actions on them are described by a finite table.
\begin{theorem}\label{theorem:equivariant}
Let $G$ be a finite group. There is an algorithm that, given finite simplicial sets $X$ and $Y$ with free simplicial actions of $G$, such that $Y$ is $d$-connected, $d\geq 1$, and $\dim X\leq 2d+1$, decides the existence of a continuous equivariant map $X\to Y$.
If such a map exists and $\dim X\leq 2d$, then the set $[X,Y]$ of all equivariant homotopy classes of equivariant continuous maps can be equipped with the structure of a finitely generated abelian group, and the algorithm outputs the isomorphism type of this group.
\ifpoly
For fixed $G$ and $\theconn$, this algorithm runs in polynomial time.
\fi
\end{theorem}
Here the isomorphism type is output as an abstract abelian group given by a (finite) number of generators and relations. Furthermore, there is an algorithm that, given an equivariant simplicial map $\ell\col X\to Y$, computes the element of this group that $\ell$ represents.
As a consequence, we also have an algorithm that, given two equivariant simplicial maps $X\to Y$,
tests whether they are equivariantly homotopic under the above dimension restrictions on $X$.
Building on the methods of the present paper, \cite{Filakovsky:suspension} removes the
dimension restriction for the latter question: it provides a homotopy-testing algorithm
assuming only that $Y$ is simply connected.
\subsection*{The lifting-extension problem}
We obtain Theorem~\ref{theorem:equivariant} by an inductive approach that works more generally and more naturally in the setting of the \emph{(equivariant) lifting-extension problem}, summarized in the following diagram:
\begin{equation}\label{eq:basic_square}
\xy *!C\xybox{\xymatrix@C=40pt{
A \ar[r]^-{f} \ar@{ >->}[d]_-\iota & Y \ar@{->>}[d]^-\varphi \\
X \ar[r]_-{g} \ar@{-->}[ru]^-{\ell
} & B
}}\endxy
\end{equation}
The input objects for this problem are the solid part of the diagram and we require that:
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=$\bullet$,itemsep=0pt,parsep=0pt,topsep=5pt]
\item $A$, $X$, $Y$, $B$ are free $G$-spaces;
\item $f\col A\to Y$ and $g\col X\to B$ are equivariant maps;
\item $\iota\col A\cof X$ is an equivariant inclusion map;
\item $\varphi\col Y\fib B$ is an equivariant \emph{fibration} (which we mark by the double-head arrow
\footnote{
In the algorithmic context, we will work with a simplicial version called a Kan fibration, see \cite{May:SimplicialObjects-1992}.
}); and
\item the square commutes (i.e.\ $g\iota=\varphi f$).
\end{enumerate}
The lifting-extension problem asks whether there exists a \emph{diagonal} in the square, i.e.\ an (equivariant) map
$\ell\col X\to Y$, marked by the dashed arrow, that makes both
triangles commute. We call such an $\ell$ a
\emph{solution} of the lifting-extension
problem~\eqref{eq:basic_square}.
Moreover, if such an $\ell$ exists,
we would like to compute the structure of the set
$[X,Y]^A_B$ of all solutions, up to (equivariant) \emph{fibrewise} homotopy
\emph{relative to $A$}.\footnote{Here a homotopy
$h\col [0,1]\times X\to Y$ is \emph{fibrewise} if
$\varphi(h(t,x))=g(x)$ for all $t\in[0,1]$ and $x\in X$.
It is \emph{relative to $A$} if, for $a\in A$, $h(t,a)$ is independent of $t$, i.e.\ $h(t,a)=f(a)$ for all $t\in [0,1]$ and $a\in A$.}
More concretely, in the cases covered by our algorithmic results,
we will be able to equip $[X,Y]^A_B$ with a structure of an abelian
group, and the algorithm computes the isomorphism type
of this group.
\subsection*{The generalized lifting-extension problem}
Spaces appearing in a fibration $\varphi \col Y\fib B$ must typically be represented
by infinite simplicial sets\footnote{If $\varphi$ is a Kan
fibration between finite $1$-connected simplicial sets then its
fibre is a finite Kan complex and it is easy to see that it then
must be discrete. Consequently, $\varphi$ is a covering map
between $1$-connected spaces and thus an isomorphism.}, and their
representation
as inputs to an algorithm can be problematic. For this reason,
we will consider a \emph{generalized lifting-extension problem},
where, compared to the above, $\varphi\col Y\to B$ can be an arbitrary equivariant map,
\emph{not} necessarily a fibration.
In this case, it makes no sense from the homotopy point of view to define a solution as a map $X\to Y$ making both triangles commutative. A homotopically correct way of defining a solution is to consider pairs $(\ell,h)$, where $\ell\col X\to Y$ is a map for which the upper triangle commutes strictly and the lower one commutes up to the specified homotopy $h\col [0,1]\times X\to B$ relative to $A$. We will not pursue this approach any further (in particular, we will not define the right notion of homotopy of such pairs) and choose an equivalent, technically less demanding alternative, which consists in replacing the map $\varphi$ by a homotopy equivalent fibration.
To this end, we factor $\varphi\col Y\to B$ as
a weak homotopy equivalence $j\col Y\xra\sim Y'$ followed by
a fibration $\varphi'\col Y'\fib B$ (in the simplicial setup, see Lemma~\ref{l:fibrant_replacement}).
We define a \emph{solution} of the
considered generalized lifting-extension problem to be a solution
$\ell\col X\to Y'$ of the lifting-extension problem
\[\xymatrixc{
A \ar[r]^-{f} \ar@{ >->}[d] & Y \ar[r]^-j & Y' \ar@{->>}[d]^-{\varphi'} \\
X \ar[rr]_-{g} \ar@{-->}[rru]^-\ell & & B
}\]
If $\varphi$ is a fibration to begin with, we naturally take $Y=Y'$ and
$j=\id$, and then the two notions of a solution coincide.
With some abuse of notation, we write $[X,Y]_B^A$ for the set
$[X,Y']_B^A$ of all homotopy classes of solutions of the above lifting-extension problem.
We remark that $Y'$ is used merely as a theoretical tool -- for actual computations, we use a different approximation of $Y$,
namely a suitable finite stage of a Moore--Postnikov tower
for $\varphi\col Y\to B$; see Section~\ref{s:main_proofs}. Moreover, $Y'$
is not determined uniquely, and thus neither are the solutions of the generalized lifting-extension problem.
However, rather standard considerations show that the \emph{existence} of a solution and the \emph{isomorphism type}
of $[X,Y']_B^A$ as an abelian group are independent of the choice of $Y'$.
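For orientation, we recall the classical topological model of such a
factorization (the simplicial construction of
Lemma~\ref{l:fibrant_replacement} differs in details): the mapping
path space
\[
Y':=\{(y,\gamma)\in Y\times B^{[0,1]} \mid \gamma(0)=\varphi(y)\},
\qquad
j(y):=(y,\mathrm{const}_{\varphi(y)}),
\qquad
\varphi'(y,\gamma):=\gamma(1),
\]
for which $j$ is a homotopy equivalence, $\varphi'$ is a fibration,
and $\varphi=\varphi'\circ j$.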
\subsection*{Examples of lifting-extension problems}
In order to understand
the meaning of the (generalized) lifting-extension problem,
it is instructive to consider some special
cases.
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=(\roman*)]
\item (Classification of extensions.)
First, let us consider $G=\{e\}$ trivial (thus, the equivariance conditions
are vacuous) and $B$ a point (which makes the lower triangle in the
lifting-extension problem superfluous).
Then we have an \emph{extension problem},
asking for the existence of a map $\ell\col X\to Y$ extending a given $f\col A\to Y$. Moreover, $[X,Y]$ is the set of appropriate homotopy classes of such extensions.%
\footnote{
This problem (under our usual condition on the dimension of $X$) was considered in \cite{polypost}, but with a different equivalence relation on the set of all extensions: \cite{polypost} dealt with the (slightly unnatural) \emph{coarse classification}, where two extensions $\ell_0$ and $\ell_1$ are considered equivalent if they are homotopic as maps $X\to Y$, whereas here we deal with the \emph{fine classification}, where the equivalence of $\ell_0$ and $\ell_1$ means that they are homotopic relative to~$A$.
}
We recall that this problem is undecidable when $\dim X$ is not bounded, according to~\cite{ext-hard}.
\item (Equivariant maps.) Here $G$ is nontrivial, $A=\emptyset$, and $B=EG$,
a contractible free $G$-space (it is unique up to equivariant homotopy equivalence).
For every free $G$-space $Z$, there is an equivariant map $c_Z\col Z\to EG$,
unique up to equivariant homotopy.
If we set $g=c_X$ and $\varphi=c_Y$
in the generalized lifting-extension problem, then it can be proved
that $[X,Y]_{EG}^\emptyset$ is in bijective correspondence with the set of all
equivariant maps $X\to Y$ up to equivariant homotopy.
This is how we obtain
Theorem~\ref{theorem:equivariant}.\footnote{Note that we cannot simply
take $B$ to be a point in the lifting-extension problem with a nontrivial $G$,
since there is no free action of $G$ on a point. Actually, $EG$
serves as an equivariant analogue of a point among free $G$-spaces.}
\item (Extending sections in a vector bundle.)
Let $G=\{e\}$, and let $\varphi\col Y\to B$ be the inclusion
${BSO}(\thedim-k)\ra{BSO}(\thedim)$,
where $BSO(\thedim)$ is the classifying space of the special orthogonal group $SO(\thedim)$.
Then the commutative square in the
generalized lifting-extension
problem is essentially an oriented vector bundle of dimension $\thedim$ over $X$
together with $k$ linearly independent vector fields over $A$.
The existence of a solution is then equivalent to the existence of
linearly independent continuations of these vector fields to the whole of~$X$.
We remark that in order to apply our theorem to this situation,
a finite simplicial model of the classifying space $BSO(\thedim)$
would have to be constructed.
As far as we know, this has not been carried out yet.
We briefly remark that for non-oriented bundles, it is possible to pass to certain two-fold ``orientation'' coverings and reduce the problem to the one for oriented bundles but with a further $\bbZ_2$-equivariance constraint.
\end{enumerate}
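To make case (ii) concrete: for $G=\bbZ_2$ one can take $E\bbZ_2=S^\infty$, the infinite-dimensional sphere with the antipodal action, which is a contractible free $\bbZ_2$-space. With $Y=S^{\thedim-1}$, again equipped with the antipodal action, the set $[X,S^{\thedim-1}]^\emptyset_{E\bbZ_2}$ is then in bijection with the set of $\bbZ_2$-equivariant maps $X\to S^{\thedim-1}$ up to equivariant homotopy -- exactly the set appearing in the embeddability application below.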
\subsection*{The main theorem} Now we are ready to state the main
result of this paper.
\begin{theorem} \label{thm:main_theorem}
Let $G$ be a finite group and let an instance of the generalized lifting-extension problem be input as follows: $A$, $X$, $Y$, $B$ are finite simplicial sets with free simplicial actions of $G$, $A$ is an (equivariant) simplicial subset of $X$, and $f$, $g$, $\varphi$ are equivariant simplicial maps. Furthermore, both $B$ and $Y$ are supposed to be $1$-connected,
and the \emph{homotopy fibre}\footnote{The homotopy fibre of
$\varphi$ is the fibre of $\varphi'$, where $\varphi$
is factored through $Y'$ as above. It is unique up to homotopy
equivalence, and so the connectivity is well defined.}
of $\varphi\col Y\to B$ is supposed to be $d$-connected for some $d\geq 1$.
There is an algorithm that, for $\dim X\leq 2d+1$,
decides the existence of a solution.
Moreover, if $\dim X\leq 2d$ and a solution exists, then the set $[X,Y]_B^A$
can be equipped with the structure of an abelian group,
and the algorithm computes its isomorphism type.
\ifpoly
The running time of this algorithm is polynomial when $G$ and $\theconn$ are fixed.
\fi
\end{theorem}
As in Theorem~\ref{theorem:equivariant}, the isomorphism type means an abstract abelian group (given by generators and relations) isomorphic to $[X,Y]^A_B$.
Given an arbitrary diagonal $\ell\col X\to Y$ in the considered square, one can compute the element of this group
that the solution $j\ell\col X\to Y'$ represents.
Constructing the abelian group structure on $[X,Y]_B^A$
will be one of our main objectives.
In the case of all \emph{continuous} maps $X\to Y$ up to
homotopy, with no equivariance condition imposed, as in \cite{CKMSVW11},
the abelian group structure on $[X,Y]$ is canonical.
In contrast, in the setting of the
lifting-extension problem, the structure is canonical only up to
a choice of a zero element; in other words, $[X,Y]_B^A$ really has an
``affine'' nature (in very much the same way as an affine space
is naturally a vector space up to a choice of its origin).
This non-canonicality of zero is one of the phenomena making
the equivariant problem (and the lifting-extension problem)
substantially different from the non-equivariant case treated
in \cite{CKMSVW11}. We will have to devote much effort to dealing
with the choice of zero, and working with ``zero sections'' in the
considered fibrations.
\subsection*{Embeddability and equivariant maps}
Theorem~\ref{theorem:equivariant}
has the following consequence
for embeddability of simplicial complexes:
\begin{theorem}\label{t:emb-metast}
\ifpoly
Let $n$ be a fixed integer.
\else
Let $n$ be an integer.
\fi
There is an algorithm that, given a finite simplicial complex $K$ of dimension $k \leq \frac23 n-1$, decides the existence of an embedding of $K$ into $\bbR^n$\ifpoly{} in polynomial time\fi.
\end{theorem}
The algorithmic problem of testing embeddability of a given $k$-dimensional
simplicial complex into $\bbR^n$, which is a natural generalization
of graph planarity,
was studied in \cite{MatousekTancerWagner:HardnessEmbeddings-2011}.
Theorem~\ref{t:emb-metast} clarifies the decidability of this problem
for $k\le \frac23 n-1$; this is
the so-called \emph{metastable range} of dimensions,
which was left open in \cite{MatousekTancerWagner:HardnessEmbeddings-2011}.
Briefly, in the metastable range, the classical theorem of Weber (see \cite{Weber:PlongementsPolyedresDomaineMetastable-1967})
asserts that embeddability is equivalent to the existence of a $\Z_2$-equivariant map $(K\times K)\smallsetminus \Delta_K \to S^{\thedim-1}$ whose domain is equivariantly homotopy equivalent to a finite simplicial complex%
\footnote{%
The complex is (the canonical triangulation of) the union of all products $\sigma \times \tau$ of disjoint simplices $\sigma,\,\tau \in K$, $\sigma \cap \tau = \emptyset$.}
with a free simplicial action of $\Z_2$. Thus, Theorem~\ref{t:emb-metast}
follows immediately from Theorem~\ref{theorem:equivariant};
we refer to \cite{MatousekTancerWagner:HardnessEmbeddings-2011} for
details.
\ifpoly\else
\subsection*{Polynomial running times}
We remark that for fixed $G$ and $\theconn$, the algorithms of Theorems~\ref{theorem:equivariant} and~\ref{thm:main_theorem} run in polynomial time. The algorithm of Theorem~\ref{t:emb-metast} also runs in polynomial time when the dimension $n$ is fixed. These claims are proved in an extended version of this paper, \cite{aslep-long}.
\fi
\subsection*{Outline of the proof}
In the rest of this section we sketch the main ideas and tools needed for the algorithm of Theorem~\ref{thm:main_theorem}. Even though the computation is very similar in its nature to that of \cite{CKMSVW11}, there are several new ingredients which we had to develop in order to make the computation possible. We will briefly describe these after the outline of the proof.
Our first tool is a Moore--Postnikov tower $\Pnew$ for $\varphi \col Y\ra B$ within the framework of (equivariant) effective algebraic topology (essentially, this means that all objects are representable in a computer); it is enough to construct the first $\dim X$ stages. It can be shown that $[X,Y]^A_B\cong[X,\Pnew]^A_B$ for $\thedim \geq \dim X$, and so it suffices to compute inductively $[X,\Pnew]^A_B$ from $[X,\Pold]^A_B$ for $\thedim \leq \dim X$. This is the kind of problem considered in obstruction theory. Namely, there is a natural map $[X,\Pnew]^A_B\to[X,\Pold]^A_B$, and it is possible to describe all preimages of any given homotopy class $[\ell] \in [X,\Pold]^A_B$ using, in addition, an inductive computation of $[\stdsimp{1}\times X,\Pold]^{(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)}_B$. In general, however, $[X,\Pold]^A_B$ is infinite, and it is thus impossible to compute $[X,\Pnew]^A_B$ as a union of preimages of all possible homotopy classes $[\ell]$ (on the other hand, if these sets are finite, the above description provides an algorithm, albeit probably not a very efficient one; see \cite{Brown}).
For this reason, we make essential use of our second tool, an abelian group structure on the set $[X,\Pnew]^A_B$ of homotopy classes of diagonals, which exists only in the ``stable'' range $\thedim\leq 2\theconn$ and, of course, only if this set is non-empty. It comes from an ``up to homotopy'' (fibrewise) abelian group structure on Moore--Postnikov stages (of a certain pullback of $Y'\ra B$) which we construct algorithmically -- this is the heart of the present paper. We remark that the abelian group structure on $[X,\Pnew]^A_B$ was already observed in \cite{McClendon}; however, that paper did not deal with algorithmic aspects.
In the stable part of the Moore--Postnikov tower, the natural map $[X,\Pnew]^A_B\to[X,\Pold]^A_B$ is a group homomorphism, and the above-mentioned computation of the preimages of a given homotopy class $[\ell]$ may be reduced to a \emph{finite} set of generators of the image; the computation is conveniently summarized in a long exact sequence \eqref{eq:les}. This finishes the rough description of our inductive computation.
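For orientation, we recall the shape such a computation takes in the classical non-equivariant, absolute case ($G$ trivial, $B$ a point, $A=\emptyset$), where the fibre of $\Pnew\ra\Pold$ is an Eilenberg--MacLane space $K(\pi,\thedim)$: in the stable range, there is an exact sequence of abelian groups
\[H^\thedim(X;\pi)\ra[X,\Pnew]\ra[X,\Pold]\ra H^{\thedim+1}(X;\pi),\]
with the last map given by the obstruction to lifting a homotopy class through $\Pnew\ra\Pold$; the sequence \eqref{eq:les} is an equivariant, fibrewise and relative generalization of this shape.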
\subsection*{New tools}
In the process of building the Moore--Postnikov tower and also later, it is important to work with infinite simplicial sets, such as the Moore--Postnikov stages $P_n$, in an algorithmic way. This is handled by the so-called \emph{equivariant} effective algebraic topology and effective homological algebra. The relevant non-equivariant results are described in \cite{SergerGenova,polypost}. In many cases, only minor and/or straightforward modifications are needed. One exception is the equivariant effective homology of Moore--Postnikov stages, for which we rely on a separate paper \cite{Vokrinek}.
Compared to our previous work \cite{CKMSVW11}, the main new ingredient is a weakening of the \Hopf space structure that exists on Moore--Postnikov stages. This is needed in order to carry out the whole computation algorithmically. Accordingly, the construction of this structure is much more abstract. In \cite{CKMSVW11} we had $B=*$, and Postnikov stages carried a unique basepoint. In the case of a nontrivial $B$, the basepoints are replaced by sections, and Moore--Postnikov stages may not admit a section at all -- this is related to the possibility of $[X,Y]^A_B$ being empty. Even if a section exists, it might be non-unique, and consequently the addition on $\Pnew$ will not preserve it in general. This is the origin of the above weakening. A further complication emerging from the non-uniqueness of the section is that we might choose a section of the stage $\Pold$ that does not lift to $\Pnew$. In that case, we need to change the section of $\Pold$ and compute everything again from scratch.
\subsection*{Plan of the paper}
In the second section, we give an overview of equivariant effective homological algebra that we use in the rest of the paper.
The third section is devoted to the algorithmic construction of an equivariant Moore--Postnikov tower.
The proofs of Theorems~\ref{theorem:equivariant} and~\ref{thm:main_theorem}%
\ifpoly
, without their polynomial time claims,
\else
{}
\fi
are given in the following section, although the proofs of two important ingredients are postponed to Sections~$5$ and~$6$.
In the fifth section, we construct a certain weakening of an (equivariant and fibrewise) \Hopf space structure on stable stages of Moore--Postnikov towers with sections.
In the sixth section, we show how this structure enables one to endow the sets of homotopy classes with addition in an algorithmic way. Finally, we derive an exact sequence relating $[X,\Pnew]^A_B$ to $[X,\Pold]^A_B$ and thus enabling an inductive computation.
In the seventh section, we provide proofs that would not fit into the previous sections.
\ifpoly
In the last section, we prove polynomial bounds for the running time of our algorithms.
\fi
\section{Equivariant effective homological algebra} \label{sec:equi_eff_hlgy_alg}
\heading{Basic setup}
For a simplicial set, the face operators are denoted by $d_\thedimm$, and the degeneracy operators by $s_\thedimm$. The standard $\theotherdim$-simplex $\stdsimp\theotherdim$ is a simplicial set with a unique non-degenerate $\theotherdim$-simplex and no relations among its faces. The simplicial subset generated by the $\thedimm$-th face of $\stdsimp\theotherdim$ will be denoted by $\bdry_\thedimm\stdsimp\theotherdim$. The boundary $\partial\stdsimp\theotherdim$ is the union of all the faces, and the $\thedimm$-th horn $\horn\theotherdim\thedimm$ is generated by all faces $\bdry_\thedimmm\stdsimp\theotherdim$, $\thedimmm\neq\thedimm$. Finally, we denote the vertices of $\stdsimp\theotherdim$ by $0,\ldots,\theotherdim$.
Sergeraert et al.\ (see \cite{SergerGenova})
have developed an ``effective version'' of homological algebra, in which
a central notion is an object (simplicial set or chain complex) with
effective homology. Here we will discuss analogous notions in the equivariant
setting, as well as some other extensions. For a key result,
we rely on a separate paper \cite{Vokrinek} which shows, roughly speaking,
that if the considered action is free, equivariant effective homology can be obtained from the non-equivariant
one.
We begin with a description of the basic computational objects,
sometimes called \emph{locally effective} objects.
The underlying idea is that in every definition one replaces sets
by computable sets and mappings by computable mappings.
A \emph{computable set} is a set whose elements have a finite encoding
by bit strings, so that they can be represented in a computer,
and an algorithm is available that, given a bit string,
decides whether it represents an element of the set.%
\footnote{Strictly speaking, we only need a membership test for \emph{subsets}, i.e.\ when $A\subseteq X$ is a subset, we need an algorithm that tests whether an element of $X$ belongs to $A$ (while we \emph{do not} need to test whether a bit string represents an element of $X$).}
On the other hand, it may happen that
no ``global'' information about the set is available; e.g.\
it is algorithmically undecidable in general whether a given
computable set is nonempty.
A \emph{mapping} between computable sets is \emph{computable} if there is
an algorithm computing its values.
We will need two particular cases of this principle -- simplicial sets and chain complexes.
\subsection*{Simplicial sets}
A \emph{locally effective simplicial set}
is a simplicial set $X$ whose simplices have a specified finite encoding
and whose face and degeneracy operators are specified by algorithms.
Our simplicial sets will be equipped with a simplicial \emph{action} of
a finite group $G$ that is also computed by an algorithm (whose input is an element of $G$ and a simplex of $X$).
We will assume that this action is \emph{free} and that a distinguished set of representatives
of orbits is specified -- we will call such $X$ \emph{cellular}. In the locally effective context,
we require that there is an algorithm that expresses each simplex $x\in X$
(in a unique way) as $x=ay$ where $a\in G$ and $y\in X$ is a distinguished simplex.
\begin{remark}
We will not put any further restrictions on the representation of simplicial sets in a computer -- the above algorithms will be sufficient. On the other hand, it is important that such representations exist. We will describe one possibility for finite simplicial sets and complexes.
Let $X$ be a finite simplicial set with a free action of $G$. Let us choose arbitrarily one simplex from each orbit of the non-degenerate simplices; these simplices together with all of their degeneracies are the distinguished ones. Then every simplex $x\in X$ can be represented uniquely as $x=as_Iy$, where $a\in G$, $s_I$ is an iterated degeneracy operator (i.e.\ a composition $s_{\thedimm_\theotherdim}\cdots s_{\thedimm_1}$ with $\thedimm_1<\cdots<\thedimm_\theotherdim$), and $y$ is a non-degenerate distinguished simplex. With this representation, it is possible to compute the action of $G$ and the degeneracy operators easily, while face operators are computed using the relations among the face and degeneracy operators and a table of faces of non-degenerate distinguished simplices. This table is finite and it can be provided on the input.
A special case is that of a finite simplicial complex. Here, one can prescribe a simplex (degenerate or not) uniquely by a finite sequence of its vertices.
\end{remark}
\subsection*{Chain complexes}
For our computations, we will work with nonnegatively graded chain complexes $C_\ast$ of abelian groups on which $G$ acts by chain maps; denoting by $\ZG$ the integral group ring of $G$, one might equivalently say that $C_*$ is a chain complex of $\ZG$-modules. We will adopt this terminology from now on. We will also assume that these chain complexes are \emph{cellular}, i.e.\ equipped with a distinguished $\ZG$-basis; this means that for each $\thedim\geq 0$ there is a collection of distinguished elements of $C_\thedim$ such that the elements of the form $ay$, with $a\in G$ and $y$ distinguished, are all distinct and form a $\Z$-basis of $C_\thedim$.
In the locally effective version, we assume that the elements of the chain complex have a finite encoding, and there is an algorithm expressing arbitrary elements as (unique) $\ZG$-linear combinations of the elements of the distinguished bases.
We require that the operations of zero, addition, inverse, multiplication by elements of $\ZG$, and differentials are computable.\footnote{These requirements (with the exception of the differentials) are automatically satisfied when the elements of the chain complex are represented directly as $\ZG$-linear combinations of the distinguished bases.}
A basic example on which these assumptions are modelled is that of the normalized chain complex $C_*X$ of a simplicial set $X$. For each $\thedim\geq 0$, a $\Z$-basis of $C_\thedim X$ is given by the set of nondegenerate $\thedim$-dimensional simplices of $X$. If $X$ is equipped with a free simplicial action of $G$, then this induces an action of $G$ on $C_*X$ by chain maps, and a $\ZG$-basis for each $C_\thedim X$ is given by a collection of nondegenerate distinguished $\thedim$-dimensional simplices of $X$, one from each $G$-orbit.
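As a small concrete example, let $X$ be the boundary of a square, i.e.\ the simplicial circle with vertices $0,1,2,3$ and the free simplicial $\bbZ_2$-action generated by $a\col i\mapsto i+2\pmod 4$. Choosing $v_0=0$, $v_1=1$ and $e_0=[0,1]$, $e_1=[1,2]$ as the distinguished representatives of the orbits, the normalized chain complex $C_*X$ is the complex of free $\ZG$-modules
\[C_1X=\ZG e_0\oplus\ZG e_1\xra{\partial}C_0X=\ZG v_0\oplus\ZG v_1,\qquad \partial e_0=v_1-v_0,\quad \partial e_1=a v_0-v_1,\]
with $\ZG=\bbZ[\bbZ_2]$; the remaining simplices, e.g.\ the edge $[2,3]=a e_0$, contribute no new generators.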
If $X$ is locally effective as defined above, then so is $C_*X$ (for evaluating the differential, we observe that a simplex $x$
is degenerate if and only if $x=s_\thedimm d_\thedimm x$ for some $\thedimm$, and this can be checked algorithmically).
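To illustrate the degeneracy test just mentioned, here is a short standalone Python sketch (not part of the paper's algorithms), representing simplices of a simplicial complex by their vertex sequences, as in the preceding remark:

```python
def d(i, x):
    # i-th face operator: delete the i-th vertex
    return x[:i] + x[i + 1:]

def s(i, x):
    # i-th degeneracy operator: repeat the i-th vertex
    return x[:i + 1] + x[i:i + 1] + x[i + 1:]

def is_degenerate(x):
    # A simplex x is degenerate iff x = s_i d_i x for some i.
    return any(s(i, d(i, x)) == x for i in range(len(x) - 1))

# [0, 1, 1, 2] = s_1 [0, 1, 2] is degenerate; [0, 1, 2] is not.
assert is_degenerate([0, 1, 1, 2])
assert not is_degenerate([0, 1, 2])
```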
\begin{convention}\label{con:G_action_loc_eff}
We fix a finite group $G$. All simplicial sets are
locally effective, equipped with a free action of $G$ and cellular in the locally effective sense.
All chain complexes are non-negatively graded locally effective chain complexes
of free $\ZG$-modules that are moreover cellular in the locally effective sense.
All simplicial maps, chain maps, chain homotopies,
etc.\ are equivariant and computable.
\end{convention}
Later, Convention~\ref{con:fibrewise} will introduce additional standing assumptions.
\begin{definition}
An \emph{effective} chain complex is a (locally effective) chain complex
equipped with an
algorithm that generates a list of elements
of the distinguished basis in any given dimension
(in particular, the distinguished bases are finite in each dimension).
\end{definition}
For example, if a simplicial set $X$ admits an algorithm generating a (finite) list of its non-degenerate distinguished simplices in any given dimension\ifpoly{} (we call it \emph{effective} in Section~\ref{sec:polynomiality})\fi, then its normalized chain complex $C_*X$ is effective.
\heading{Reductions, strong equivalences}\label{s:effReduction}
We recall that a \emph{reduction} (also called \emph{contraction} or \emph{strong deformation retraction}) $C_*\Ra C'_*$ between two chain complexes
is a triple $(\alpha,\beta,\eta)$ such that $\alpha\col C_*\ra C'_*$
and $\beta\col C'_*\ra C_*$ are equivariant chain maps with $\alpha\beta=\id$ and $\eta$ is an equivariant chain homotopy on $C_*$ with $\partial\eta+\eta\partial=\id-\beta\alpha$; moreover, we require that $\eta\beta=0$, $\alpha\eta=0$ and $\eta\eta=0$. The following diagram illustrates this definition:
\[(\alpha,\beta,\eta)\col C_*\Ra C'_*\quad\equiv\quad\xymatrix@C=30pt{
C_* \ar@(ul,dl)[]_{\eta} \ar@/^/[r]^\alpha & C'_* \ar@/^/[l]^{\beta}
}\]
Reductions are used to solve homological problems in $C_*$ by translating them to $C_*'$ and vice versa, see \cite{SergerGenova}; a particular example appears at the end of the proof of Lemma~\ref{l:lift_ext_one_stage}. While chain homotopy equivalences would suffice for this translation principle, they are not sufficient for the so-called perturbation lemmas (introduced below), in which the real strength of reductions lies.
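As a minimal (non-equivariant, i.e.\ $G$ trivial) example, let $C_*$ be the complex $\bbZ\xra{\id}\bbZ$ concentrated in degrees $1$ and $0$, and let $C'_*=0$. Then $\alpha=0$, $\beta=0$ and $\eta\col C_0\xra{\id}C_1$ (with $\eta=0$ in all other degrees) form a reduction $C_*\Ra 0$: on $C_0$ we have $\partial\eta+\eta\partial=\partial\eta=\id$, on $C_1$ we have $\partial\eta+\eta\partial=\eta\partial=\id$, so $\partial\eta+\eta\partial=\id=\id-\beta\alpha$, and the conditions $\eta\beta=0$, $\alpha\eta=0$, $\eta\eta=0$ hold trivially.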
For the following definition, we consider pairs $(C_*,D_*)$, where
$C_*$ is a chain complex and $D_*$ is a subcomplex of $C_*$.
Such pairs are always understood in the \emph{cellular} sense; i.e.\
the distinguished basis of each $D_\thedim$ is a subset
of the distinguished basis of $C_\thedim$.
\begin{definition}
A \emph{reduction} $(C_*,D_*)\Ra(C'_*,D'_*)$ \emph{of (cellular) pairs} is a reduction $C_*\Ra C'_*$ that restricts to a reduction $D_*\Ra D'_*$, i.e.\ such that $\alpha(D_*)\subseteq D'_*$, $\beta(D'_*)\subseteq D_*$, and $\eta(D_*)\subseteq D_*$.
\end{definition}
From this reduction, we get an induced reduction $C_*/D_*\Ra C'_*/D'_*$ of the quotients.
We will need to work with a notion more
general than reductions, namely strong equivalences.
A \emph{strong equivalence} $C_*\LRa C'_*$ is a pair of reductions
$C_*\La\widehat C_*\Ra C'_*$, where $\widehat C_*$ is some chain complex.
Similarly, a strong equivalence $(C_*,D_*)\LRa(C'_*,D'_*)$
is a pair of reductions $(C_*,D_*)\La(\widehat C_*,\widehat D_*)\Ra(C'_*,D'_*)$.
Strong equivalences can be (algorithmically) composed:
if $C_*\LRa C'_*$ and $C'_*\LRa C''_*$, then one obtains $C_*\LRa C''_*$
(see e.g.\ \cite[Lemma~2.7]{polypost}).
\begin{definition}
Let $C_*$ be a chain complex. We say that $C_*$ is equipped with \emph{effective homology} if a strong equivalence $C_*\LRa C_*^\ef$ with some effective chain complex $C_*^\ef$ is specified. Effective homology for pairs $(C_*,D_*)$ of chain complexes is introduced similarly, using strong equivalences of pairs. A simplicial set $X$ is equipped with \emph{effective homology} if $C_*X$ is. Finally, a pair $(X,A)$ of simplicial sets is equipped with \emph{effective homology} if $(C_*X,C_*A)$ is.%
\footnote{
We could equally well work with pairs $(X,A)$ of simplicial sets for which the relative chain complex $C_*(X,A)$ is equipped with effective homology (as observed, this is a weaker condition).
}
\end{definition}
\begin{remark}
In what follows, we will only assume $(X,A)$, $Y$, $B$ to be equipped with effective homology. Consequently, it can be seen that Theorems~\ref{theorem:equivariant} and~\ref{thm:main_theorem} also hold under these weaker assumptions. The dimension restriction on $X$ can be weakened to the requirement that the equivariant cohomology groups of $(X,A)$, defined in Section~\ref{sec:equiv_cohomology}, vanish above dimension $2\theconn$.%
\footnote{
By passing to the mapping cylinder $X'=(\stdsimp1\times A)\cup X$, we may even relax the condition on the pair $(X,A)$ to each of $A$, $X$ being equipped with effective homology separately since then the pair $(X',A)$ has effective homology (this is very similar to but easier than Proposition~\ref{p:effective_homotopy_colimits}) and the resulting generalized lifting-extension problem is equivalent to the original one.
}
\end{remark}
The following theorem shows that, in order to equip a chain complex
with effective homology, it suffices to have it equipped
with effective homology in the non-equivariant sense.
\begin{theorem}[\cite{Vokrinek}]\label{t:vokrinek}
Let $C_*$ be a chain complex (of free $\ZG$-modules). Suppose that as a chain complex of abelian groups $C_*$ can be equipped with effective homology (i.e.\ in the non-equivariant sense). Then it is possible to equip $C_*$ with effective homology in the equivariant sense. This procedure is algorithmic.
\end{theorem}
The original strong equivalence $C_*\LRa C_*^\ef$ gets replaced by an equivariant one $C_*\LRa BC_*^\ef$, where $BC_*^\ef$ is a bar construction of some sort; see \cite{Vokrinek} for details.
Thus, although non-equivariant effective homology is not the same as equivariant effective homology, it is possible to construct one from the other. In this paper, effective homology will always be understood in the equivariant sense.
We recall that the \emph{Eilenberg--Zilber reduction} is a particular reduction $C_*(X\times Y)\Ra C_*X\otimes C_*Y$; see e.g.\ \cite{EML54,polypost,SergerGenova}.
It is known to be functorial (see e.g.\ \cite[Theorem~2.1a]{EML54}), and hence it is equivariant. We extend it to pairs.
\begin{proposition}[Product of pairs]\label{prop:relative_product}
If pairs $(X,A)$ and $(Y,B)$ of simplicial sets
are equipped with effective homology, then it is also possible to equip the pair
\[(X,A)\times(Y,B)\defeq\big(X\times Y,(A\times Y)\cup(X\times B)\big)\]
with effective homology.
\end{proposition}
\begin{proof}
The Eilenberg--Zilber reduction $C_*(X\times Y)\Ra C_*X\otimes C_*Y$ is functorial, which implies that it restricts to a reduction
\[C_*\big((A\times Y)\cup(X\times B)\big)\Ra(C_*A\otimes C_*Y)+(C_*X\otimes C_*B)\defeq D_*.\]
The strong equivalences $C_*X\LRa C_*^\ef X$ and $C_*Y\LRa C_*^\ef Y$ induce a strong equivalence (by \cite[Proposition~61]{SergerGenova}, whose construction is functorial, and hence applicable to the equivariant setting)
\[C_*X\otimes C_*Y\LRa C_*^\ef X\otimes C_*^\ef Y\]
that, again, restricts to a strong equivalence of the subcomplex $D_*$ above with its obvious effective version $D_*^\ef$. The composition of these two strong equivalences finally yields a strong equivalence $C_*((X,A)\times(Y,B))\LRa(C_*^\ef X\otimes C_*^\ef Y,D_*^\ef)$.
\end{proof}
Important tools allowing us to work efficiently with reductions are
two \emph{perturbation lemmas}.
Given a reduction $C_*\Ra C'_*$, they provide a way of obtaining
a new reduction, in which the differentials of the complexes
$C_*$, $C'_*$ are perturbed.
Again, we will need versions for pairs.
\begin{definition} Let $C_*$ be a chain complex with a differential $\partial$. A collection of morphisms $\delta\col C_\thedim\to C_{\thedim-1}$ is called a \emph{perturbation} of the differential $\partial$ if the sum $\partial+\delta$ is also a differential.
\end{definition}
Since there will be many differentials around, we will emphasize them in the notation.
\begin{proposition}[Easy perturbation lemma]\label{p:epl}
Let $(\alpha,\beta,\eta)\col(C_*,D_*,\partial)\Ra(C'_*,D'_*,\partial')$ be a reduction
and let $\delta'$ be a perturbation of the differential $\partial'$ on $C'_*$ satisfying $\delta'(D_*')\subseteq D_*'$. Then $(\alpha,\beta,\eta)$ also constitutes a reduction $(C_*,D_*, \partial+\beta\delta'\alpha)\Ra(C'_*,D'_*, \partial'+\delta')$.
\end{proposition}
\begin{proposition}[Basic perturbation lemma]\label{p:bpl}
Let $(\alpha,\beta,\eta)\col(C_*,D_*,\partial)\Ra(C'_*,D'_*,\partial')$ be a reduction
and let $\delta$ be a perturbation of the differential $\partial$ on $C_*$ satisfying $\delta(D_*)\subseteq D_*$. Assume that for every $c\in C_*$ there is a $\nu\in\mathbb N$ such that $(\eta\delta)^{\nu}(c)=0$. Then it is possible to compute a perturbation $\delta'$ of the differential $\partial'$ on $C'_*$ that preserves $D_*'$ and a reduction $(\alpha',\beta',\eta')\col(C_*,D_*,\partial+\delta)\Ra(C'_*,D'_*, \partial'+\delta')$.
\end{proposition}
The absolute versions (i.e.\ versions where all considered subcomplexes are zero) of the perturbation lemmas
are due to \cite{Shih}. Explicit formulas for $\delta'$ etc.\ are provided there (see also \cite{SergerGenova}); they show that the resulting reductions are equivariant, since all the involved maps are equivariant. Similarly, these formulas show that, in the presence of subcomplexes $D_*$ and $D_*'$, all the maps in the new reductions preserve them, since all the maps entering the formulas do.
The following proposition is used for the construction of the Moore--Postnikov tower in Section~\ref{sec:Moore_Postnikov}. Here $Z_{\thedim+1}(C_*)$ denotes
the group of all cycles in $C_{\thedim+1}$.
\begin{proposition} \label{prop:projectivity}
Let $C_*$ be an effective chain complex such that $H_\thedimm(C_*)=0$ for $\thedimm\le\thedim$. Then there is a (computable)
retraction $C_{\thedim+1}\ra Z_{\thedim+1}(C_*)$, i.e.\
a homomorphism that restricts to the identity on $Z_{\thedim+1}(C_*)$.
\end{proposition}
\begin{proof}
We construct a contraction%
\footnote{We recall that a contraction is a map $\sigma$ of degree $1$ satisfying $\partial\sigma+\sigma\partial=\id$.}
$\sigma$ of $C_*$ by induction on the dimension,
and use it for splitting $Z_{\thedim+1}(C_*)$ off
$C_{\thedim+1}$. It suffices to define $\sigma$ on
the distinguished bases.
Since every basis element $x\in C_0$ is a cycle,
it must be a boundary. We compute some $y\in C_1$ for which $x=\partial y$,
and we set $\sigma(x)=y$; since $G$ is finite, we may treat $\partial\col C_1\ra C_0$ as a $\bbZ$-linear map between finitely generated free $\bbZ$-modules and solve for $y$ using Smith normal form.
Now assume that $\sigma$ has been constructed up to dimension $\thedimm-1$
in such a way that $\partial\sigma+\sigma\partial=\id$, and we want to
define $\sigma(x)$ for a basis element $x\in C_\thedimm$.
Since $x-\sigma(\partial x)$ is a cycle, we can compute
some $y$ with $x-\sigma(\partial x)=\partial y$, and set
$\sigma(x)=y$.
This finishes the inductive construction of~$\sigma$.
The desired retraction $C_{\thedim+1}\ra Z_{\thedim+1}(C_*)$ is
given by $\id-\sigma\partial$.
\end{proof}
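The step ``compute some $y$ with $x=\partial y$'' can be made concrete. An actual implementation would use Smith normal form, as indicated in the proof; the following self-contained Python sketch instead uses a bounded brute-force search on a toy boundary matrix (the matrix and all names below are illustrative only, and the search is feasible only for very small examples):

```python
from itertools import product

# Boundary matrix of the simplicial circle with vertices v0, v1, v2:
# columns are the edges [v0,v1], [v1,v2], [v2,v0]; rows are v0, v1, v2.
D = [
    [-1,  0,  1],
    [ 1, -1,  0],
    [ 0,  1, -1],
]

def apply_boundary(D, y):
    # Matrix-vector product over the integers.
    return [sum(D[i][j] * y[j] for j in range(len(y))) for i in range(len(D))]

def preimage(D, x, bound=2):
    # Search a box of integer vectors for y with D y = x; returns None
    # if x is not a boundary (within the searched box).
    n = len(D[0])
    for y in product(range(-bound, bound + 1), repeat=n):
        y = list(y)
        if apply_boundary(D, y) == x:
            return y
    return None

# x = v1 - v0 is the boundary of the edge [v0,v1]:
y = preimage(D, [-1, 1, 0])
assert y is not None and apply_boundary(D, y) == [-1, 1, 0]
```

Since every column of $D$ sums to zero, a $0$-chain with nonzero coordinate sum, such as $v_0$, is not a boundary, and `preimage` correctly returns `None` for it.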
\heading{Eilenberg--MacLane spaces and fibrations}\label{sec:equiv_cohomology}
For an abelian group $\pi$, the simplicial abelian group $K(\pi,\thedim+1)_\theotherdim=Z^{\thedim+1}(\stdsimp{\theotherdim},\pi)$ of normalized cocycles and the simplicial abelian group $E(\pi,\thedim)_\theotherdim=C^{\thedim}(\stdsimp{\theotherdim},\pi)$ of normalized cochains are standard models for the Eilenberg--MacLane space and its path space. The coboundary operator $\delta\col E(\pi,\thedim)\to K(\pi,\thedim+1)$ is a fibration with fibre $K(\pi,\thedim)$.
The Eilenberg--MacLane spaces are useful for their relation to cohomology. Here we only summarize the relevant results, details may be found in \cite[Section~24]{May:SimplicialObjects-1992} or \cite[Section~3.7]{polypost} (both in the non-equivariant setup though).
When $\pi$ is a $\ZG$-module, there is an induced action of $G$ on both $K(\pi,\thedim)$ and $E(\pi,\thedim)$. We note that, in contrast to our general assumption, this action is \emph{not free} and consequently, these spaces may not possess effective homology. This will not matter since they will not enter our constructions on their own but as certain principal twisted cartesian products, see \cite{May:SimplicialObjects-1992} for the definition. Firstly, $K(\pi,\thedim)$ possesses non-equivariant effective homology by \cite[Theorem~3.16]{polypost}. The principal twisted cartesian product $P = Q \times_\tau K(\pi,\thedim)$ has a free $G$-action whenever $Q$ does and \cite[Corollary~12]{Filakovsky} constructs the non-equivariant effective homology of $P$ from that of $Q$ and $K(\pi,\thedim)$. Theorem~\ref{t:vokrinek} then provides (equivariant) effective homology for $P$.
It is easy to see that the addition in the simplicial abelian groups $K(\pi,\thedim)$, $E(\pi,\thedim)$ and the homomorphism $\delta$ between them are equivariant. Moreover, for every simplicial set $X$, there is a natural isomorphism
\[\map(X,E(\pi,\thedim))\cong C^\thedim(X;\pi)^G\]
between equivariant simplicial maps and equivariant cochains, that sends $f\col X\ra E(\pi,\thedim)$ to $f^*(\ev)$, where $\ev \in C^\thedim(E(\pi,\thedim);\pi)^G$ is the canonical cochain that assigns to each $\thedim$-simplex of $E(\pi,\thedim)_\thedim$, i.e.\ an $\thedim$-cochain on $\stdsimp\thedim$, its value on the unique non-degenerate $\thedim$-simplex of $\stdsimp{\thedim}$.
The set $\map(X,K(\pi,\thedim))$ is naturally an abelian group, with addition inherited from that on $K(\pi,\thedim)$, and the above isomorphism is an isomorphism of groups.
When $X$ is finite, this isomorphism is computable (objects on both sides are given by a finite amount of data). When $X$ is merely locally effective, then
an algorithm that computes a simplicial map
$X\ra E(\pi,\thedim)$ can be converted into an algorithm
that evaluates the corresponding cochain in $C^\thedim(X;\pi)^G$,
and vice versa.
The above isomorphism restricts to an isomorphism
\[\map(X,K(\pi,\thedim))\cong Z^\thedim(X;\pi)^G.\]
We will denote the cohomology groups of $C^*(X;\pi)^G$ by $H^*_G(X;\pi)$.\footnote{Our groups $H_G^*(X;\pi)$ are the equivariant cohomology groups of $X$ with coefficients in a certain system associated with $\pi$ (see the remark in \cite[Section~I.9]{Bredon:EquivariantCohomology-1967}) or, alternatively, they are the cohomology groups of $X/G$ with local coefficients specified by $\pi$.}
We have an induced isomorphism
\[[X,K(\pi,\thedim)]\cong H^\thedim_G(X;\pi)\]
between homotopy classes of equivariant maps and these cohomology groups. By the naturality of these isomorphisms, the maps which are zero on $A$ correspond precisely to relative cocycles and consequently
\[[(X,A),(K(\pi,\thedim),0)]\cong H^\thedim_G(X,A;\pi).\]
\heading{Constructing diagonals for Eilenberg--MacLane fibrations}
When solving the generalized lifting-extension problem,
we will replace $\varphi\col Y\to B$ by a fibration built inductively from
Eilenberg--MacLane fibrations $\delta\col E(\pi,\thedim)\to K(\pi,\thedim+1)$.
The following lemma will
serve as an inductive step in the computation of $[X,Y]^A_B$.
It also demonstrates how effective homology of pairs enters the game.
\begin{lemma} \label{l:lift_ext_one_stage}
There is an algorithm that, given a commutative square
\[\xymatrix@C=30pt{
A \ar[r]^-c \ar@{ >->}[d] & E(\pi,\thedim) \ar@{->>}[d]^\delta \\
X \ar[r]_-z \ar@{-->}[ru] & K(\pi,\thedim+1)
}\]
where the pair $(X,A)$ is equipped with effective homology,
decides whether a diagonal exists. If it does, it computes one.
If $H^{\thedim+1}_G(X,A;\pi)=0$, then a diagonal exists for
every $c$ and $z$.
\end{lemma}
Let us remark that although our main result, Theorem~\ref{thm:main_theorem},
assumes $X$ finite, we will need to use the lemma
for infinite simplicial sets~$X$, and then the effective homology
assumption for $(X,A)$ is important.
\begin{proof}
Thinking of $c$ as a cochain in $C^\thedim(A;\pi)^G$, we extend it to a cochain on $X$ by mapping all $\thedim$-simplices not in $A$ to zero. This prescribes a map $\widetilde c \colon X\ra E(\pi,\thedim)$ that is a solution of the lifting-extension problem from the statement with $z$ replaced by $\delta\widetilde c$. Since lifting-extension problems and their solutions are additive, one may subtract this solution from the previous problem
and obtain an equivalent lifting-extension problem
\[\xymatrix@C=30pt{
A \ar[r]^-{0} \ar@{ >->}[d] & E(\pi,\thedim) \ar@{->>}[d]^\delta \\
X \ar[r]_-{z-\delta\widetilde c} \ar@{-->}[ru]^-{c_0} & K(\pi,\thedim+1)
}\]
A solution of this problem is an (equivariant) relative cochain $c_0$ whose coboundary is $z_0=z-\delta\widetilde c$ (this $c_0$ yields a solution $\widetilde c+c_0$ of the original problem). If $C_*(X,A)$ is effective, then such a $c_0$ is computable whenever it exists (and it always exists in the case $H^{\thedim+1}_G(X,A;\pi)=0$).
However, $C_*(X,A)$ itself is not effective in general;
it is only strongly equivalent to an effective complex.
Thus, we need to check that the computability of a preimage under $\delta$
is preserved under reductions in both directions.
Let $(\alpha,\beta,\eta)\col C_*\Ra C'_*$ be a reduction. First,
let us suppose that $z_0'\col C'_*\ra\pi$ is a cocycle
with $z_0'\alpha=\delta c_0$. Then
\[z_0'=z_0'\alpha\beta=(\delta c_0)\beta=\delta(c_0\beta),\]
and we may set $c_0'=c_0\beta$. Next, suppose that $z_0\col C_*\ra\pi$ is
a cocycle with $z_0\beta=\delta c_0'$. Then
\[z_0=z_0(\partial\eta+\eta\partial+\beta\alpha)=z_0\eta\partial+\delta c_0'\alpha=\delta(z_0\eta+c_0'\alpha),\]
and we may set $c_0=z_0\eta+c_0'\alpha$.
\end{proof}
\section{Constructing a Moore--Postnikov tower}\label{sec:Moore_Postnikov}
Recall that we defined $Y'$ by factoring $\varphi$ as a composition $Y\cof Y'\fib[\varphi']B$ of a weak homotopy equivalence followed by a fibration; such a factorization exists by Lemma~\ref{l:fibrant_replacement}. Using this approximation, $[X,Y]^A_B$ was defined as the set of homotopy classes $[X,Y']^A_B$. In order to compute this set, we approximate $Y'$ by the Moore--Postnikov tower of $Y$ over $B$. The computation will then proceed by induction over the stages of this tower, as explained in Section~\ref{s:main_proofs}. For now, we give the definition of an equivariant Moore--Postnikov tower of a simplicial map $\varphi\col Y\to B$ and review some statements of the previous section in the context of this tower. The actual construction of the tower, when both simplicial sets $Y$ and $B$ are equipped with effective homology, will be carried out later in Section~\ref{sec:MP_tower_proof}.
\heading{Definition}
Let $\varphi\col Y\to B$ be a map. A (simplicial)
\emph{Moore--Postnikov tower} for $\varphi$ is a commutative diagram
\[\xymatrix{
& & {} \ar@{.}[d] \\
& & \Pnew \ar[d]^-{\pn} \\
& & \Pold \ar@{.}[d] \\
Y \ar[uurr]^-{\varphin} \ar[urr]_-{\varphi_{\thedim-1}} \ar[rr]_-{\varphi_1} \ar[drr]_-{\rightbox{\scriptstyle\varphi={}}{\scriptstyle\varphi_0}}
& & P_{1} \ar[d]^-{p_1} \\
& & \leftbox{P_{0}}{{}=B}
}\]
satisfying the following conditions:
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=\arabic*.]
\renewcommand{\theenumi}{\arabic{enumi}}
\item\label{MP1}
The induced map $\varphinst\col \pi_\thedimm(Y)\to \pi_{\thedimm}(\Pnew)$ is an isomorphism for $0\leq\thedimm\leq\thedim$ and an epimorphism for $\thedimm=\thedim+1$.
\item\label{MP2}
The map $\Pnew\to B$ induces an isomorphism $\pi_\thedimm(\Pnew)\to\pi_\thedimm(B)$ for $\thedimm\ge\thedim+2$ and a monomorphism for $\thedimm=\thedim+1$.
\item\label{MP3}
The map $\pn\col\Pnew\to\Pold$ is a Kan fibration induced by a map
\[\knp\col\Pold\to K(\pin,\thedim+1)\]
for some $\ZG$-module $\pin$, i.e.\ there exists a pullback square
\[\xymatrix@C=30pt{
\Pnew \pb \ar[r] \ar[d]_-\pn & E(\pin,\thedim) \ar[d]^{\delta} \\
\Pold \ar[r]_-\knp & K(\pin,\thedim+1)
}\]
identifying $\Pnew$ with the pullback $\Pold\times_{K(\pin,\thedim+1)}E(\pin,\thedim)$. Alternatively, one may identify $\Pnew$ as the principal twisted cartesian product $\Pold\times_\tau K(\pin,\thedim)$ -- this will be used to equip $\Pnew$ with effective homology.
\end{enumerate}
We remark that the axioms imply $\pin\cong\pin F$, where $F$ is the homotopy fibre of $Y\ra B$, i.e.\ the fibre of $Y'\to B$.
\newcommand{\MPt}
{There is an algorithm that, given a map $\varphi\col Y\to B$ between simply connected simplicial sets with effective homology and an integer $\thedim$, constructs the first $\thedim$ stages of a Moore--Postnikov tower for $\varphi$. The stages $P_\thedimm$ are constructed as simplicial sets with effective homology, and $\varphi_\thedimm$, $\kip$, $p_\thedimm$ as computable maps.}
\begin{theorem}\label{t:MP_tower}
\MPt
\end{theorem}
The proof of the theorem is postponed to Section~\ref{sec:MP_tower_proof}. From the tower, we obtain a new lifting-extension problem with $\fn=\varphin f$:
\[\xymatrix@C=40pt{
A \ar[r]^-{\fn} \ar@{ >->}[d] & \Pnew \ar@{->>}[d]^-{\psin} \\
X \ar[r]_-g \ar@{-->}[ru] & B
}\]
The following theorem explains the role of the Moore--Postnikov tower in our algorithm.
\newcommand{\nequiv}
{There exists a map $\varphinp\col Y'\to\Pnew$ inducing a bijection $\varphinpst\col[X,Y']^A_B\to [X,\Pnew]^A_B$ for every $\thedim$-dimensional simplicial set $X$ with a free action of $G$.}
\begin{theorem}\label{t:n_equivalence}
\nequiv
\end{theorem}
The theorem ought to be known, but we could not find an equivariant fibrewise version in the literature. For this reason, we include a proof in Section~\ref{sec:n_equivalence_proof}.
From the point of view of Theorem~\ref{thm:main_theorem}, we have reduced the computation of $[X,Y]^A_B=[X,Y']^A_B$ to that of $[X,\Pnew]^A_B$, where $\thedim=\dim X$. Before going into details of this computation, we present a couple of results that are directly related to the Moore--Postnikov tower. They will be essential tools in the proof of Theorem~\ref{thm:main_theorem}.
\heading{Inductive construction of diagonals}
We will slightly reformulate Lemma~\ref{l:lift_ext_one_stage}
in terms of the Moore--Postnikov tower.
\begin{proposition} \label{prop:lift_ext_one_stage}
There is an algorithm that, given a diagram
\[\xymatrix@C=40pt{
A \ar[r]^-f \ar@{ >->}[d] & \Pnew \ar@{->>}[d]^-{\pn} \\
X \ar[r]_-g \ar@{-->}[ru] & \Pold
}\]
where the pair $(X,A)$ is equipped with
effective homology, decides whether a diagonal exists.
If it does, it computes one.
When $H^{\thedim+1}_G(X,A;\pin)=0$, a diagonal exists for every
$f$ and~$g$.
\end{proposition}
\begin{proof}
We will use property (3) of Moore--Postnikov towers, which expresses $\pn$ as a pullback:
\[\xymatrix@C=30pt{
A \ar[r]^-f \ar@{ >->}[d] & \Pnew \pb \ar[r] \ar[d]_-{\pn} & E(\pin,\thedim) \ar@{->>}[d]^\delta \\
X \ar[r]_-g \ar@{-->}[ru]^{\ell} & \Pold \ar[r]_-{\knp} & K(\pin,\thedim+1)
}\]
Thus, diagonals $\ell$ are exactly of the form
$(g,c)\col X\ra \Pold\times_{K(\pin,\thedim+1)}E(\pin,\thedim)$,
where $c\col X\to E(\pin,\thedim)$
is an arbitrary diagonal in the composite square
and thus computable by Lemma~\ref{l:lift_ext_one_stage}.
\end{proof}
We obtain two important consequences as special cases. The first one is an algorithmic version of lifting homotopies across $\Pnew\fib P_\theotherdim$.
\begin{proposition}[homotopy lifting] \label{prop:homotopy_lifting}
Given a diagram
\[\xymatrix{
(\vertex\thedimm\times X)\cup(\stdsimp{1}\times A) \ar[r] \ar@{ >->}[d]_-\sim & \Pnew \ar@{->>}[d] \\
\stdsimp{1}\times X \ar[r] \ar@{-->}[ru] & P_\theotherdim
}\]
where $\thedimm\in\{0,1\}$ and $(X,A)$ is equipped with
effective homology, it is possible to compute a diagonal.
In other words, one may lift homotopies in Moore--Postnikov towers
algorithmically.
\end{proposition}
\begin{proof}
It is possible to equip $(\stdsimp{1}\times X,
(\vertex\thedimm\times X)\cup(\stdsimp{1}\times A))$
with effective homology by Proposition~\ref{prop:relative_product}.
Moreover, this pair has zero cohomology since there exists
a (continuous) equivariant deformation of $\stdsimp{1}\times X$
onto the considered subspace. Thus a diagonal can be
constructed by a successive use of Proposition~\ref{prop:lift_ext_one_stage}.
\end{proof}
The second result concerns algorithmic concatenation of homotopies.
Let $\horn21$ denote the first horn in the standard $2$-simplex $\stdsimp2$, i.e.\ the simplicial subset of $\stdsimp2$
spanned by the faces $\bdry_2\stdsimp2$ and $\bdry_0\stdsimp2$.
Given two homotopies $h_2,h_0\col\stdsimp1\times X\to Y$
that are compatible, in the sense that $h_2$ is a homotopy from $\ell_0$ to $\ell_1$ and $h_0$ is a homotopy from $\ell_1$ to $\ell_2$,
one may prescribe a map $\horn21\times X\to Y$ as $h_2$ on
$\bdry_2\stdsimp2\times X$ and as $h_0$ on $\bdry_0\stdsimp2\times X$.
If this map has an extension
$H\col \stdsimp2\times X\to Y$, then the restriction of $H$ to
$\bdry_1\stdsimp2\times X$ gives a homotopy from $\ell_0$ to $\ell_2$,
which can be thought of as a \emph{concatenation} of $h_2$ and $h_0$.
We will need the following effective,
relative and fibrewise version; the proof is entirely analogous
to that of the previous proposition and we omit it.
\begin{proposition}[homotopy concatenation] \label{prop:homotopy_concatenation}
Given a diagram
\[\xymatrix{
(\horn{2}{1}\times X)\cup(\stdsimp{2}\times A) \ar[r] \ar@{ >->}[d]_-\sim & \Pnew \ar@{->>}[d] \\
\stdsimp{2}\times X \ar[r] \ar@{-->}[ru] & P_\theotherdim
}\]
where $(X,A)$ is equipped with effective homology, it is possible to
compute a diagonal. In other words, one may concatenate homotopies
in Moore--Postnikov towers algorithmically.
\end{proposition}
\section{Computing homotopy classes of maps}\label{s:main_proofs}
In this section, we prove Theorems~\ref{theorem:equivariant} and~\ref{thm:main_theorem}. First, we will explain our computational model for abelian groups, since these are one of our main computational objects and also form the output of our algorithms.
There are two levels of these computational models:
\emph{semi-effective} and \emph{fully effective} abelian groups.
They are roughly analogous to locally effective chain complexes and effective ones. There is, however, one significant difference: while an element
of a chain complex is assumed to have a unique computer representation,
a single element of a semi-effective abelian group may have many
different representatives. We can perform the group operations
in terms of the representatives, but in general, we cannot decide whether
two representatives represent the same group element.
This setting is natural
when working with elements of $[X,\Pnew]^A_B$, i.e.\ with homotopy classes of diagonals. The representatives are simplicial maps
$X\to\Pnew$, and at first, we will not be able to decide whether
given two such maps are homotopic.
Given a semi-effective abelian group, it is not possible to compute its isomorphism type (even when it is finitely generated); for this we need additional information, summarized in the notion of a fully effective abelian group.
A semi-effective abelian group can be made fully effective provided
that it is a part of a suitable exact sequence, additionally provided
with set-theoretic sections; this is described in Section~\ref{s:abelops}.
This suggests a computation of $[X,\Pnew]^A_B$ in two steps. First, we will endow it with a structure of a semi-effective abelian group, whose addition comes from the weak \Hopf space structure on $\Pnew$ constructed in Section~\ref{sec:H_space_constr}. Next, we will promote it to a fully effective abelian group by relating it to $[X,\Pold]^A_B$ through a long exact sequence.
We note that the long proofs of Theorems~\ref{t:semi_eff}~and~\ref{thm:exact_sequence} are postponed to later sections. This enables us to complete the proof of the main Theorem~\ref{thm:main_theorem} in the present section.
\heading{Operations with abelian groups}\label{s:abelops}
This subsection is a short summary of a detailed discussion found in \cite{CKMSVW11}.
In our setting, an abelian group $A$ is represented by a set $\mcA$, whose elements are called \emph{representatives}; we also assume that the representatives can be stored in a computer.\footnote{We do not assume that $\mcA$ is computable -- this will not be the case e.g.\ for $[X,\Pnew]$ when $X$ is infinite.} For $\alpha\in\mcA$, let $[\alpha]$ denote the element of $A$ represented by $\alpha$. The representation is generally non-unique; we may have $[\alpha]=[\beta]$ for $\alpha\ne\beta$.
We call $A$ represented in this way \emph{semi-effective}, if algorithms for the following three tasks are available: provide an element $o\in\mcA$ with $[o]=0$ (the neutral element); given $\alpha\comma\beta\in\mcA$, compute $\gamma\in\mcA$ with $[\gamma]=[\alpha]+[\beta]$; given $\alpha\in\mcA$, compute $\beta \in\mcA$ with $[\beta]=-[\alpha]$.
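For illustration, the cohomology group $H^\thedim_G(X;\pi)$ may be represented semi-effectively with $\mcA=Z^\thedim(X;\pi)^G$ as the set of representatives: when $X$ is locally effective, the zero cocycle, sums and inverses are computable on the level of representatives, while $[\alpha]=[\beta]$ holds if and only if $\alpha-\beta$ is a coboundary, which we are in general unable to decide when $X$ is infinite.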
For semi-effective abelian groups $A$, $B$, with sets $\mcA$, $\mcB$ of representatives, respectively, we call a mapping $f\col A\to B$ \emph{computable} if there is a computable mapping $\varphi\col\mcA\to\mcB$ such that $f([\alpha])=[\varphi(\alpha)]$ for all $\alpha\in \mcA$.
We call a semi-effective abelian group $A$ \emph{fully effective} if there is given an isomorphism $A\cong\bbZ/q_1\oplus\cdots\oplus\bbZ/q_r$, computable together with its inverse. In detail, this consists of \begin{itemize}
\item
a finite list of generators $a_1,\ldots,a_r$ of $A$ (given by representatives) and their orders $q_1,\ldots,q_r\in\{2,3,\ldots\}\cup\{0\}$ (where $q_i=0$ gives $\bbZ/q_i=\bbZ$),
\item
an algorithm that, given $\alpha\in\mcA$, computes integers $z_1,\ldots,z_r$ so that $[\alpha]=\sum_{i=1}^rz_i a_i$; each coefficient $z_i$ is unique within $\bbZ/q_i$.
\end{itemize}
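As a simple illustration, the group
\[A=\bbZ/6\oplus\bbZ\]
is fully effective with generators $a_1=(1,0)$, $a_2=(0,1)$ of orders $q_1=6$, $q_2=0$: for a representative of an element $(x,y)$, the required algorithm outputs the coefficients $z_1=x\bmod 6$ and $z_2=y$.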
The proofs of the following lemmas are not difficult; they are given in \cite{CKMSVW11}.
\begin{lemma}[kernel and cokernel]\label{l:ker_coker}
Let $f\col A\to B$ be a computable homomorphism
of fully effective abelian groups.
Then both $\ker(f)$ and $\coker(f)$ can be represented as fully effective abelian groups.
\end{lemma}
\begin{lemma}[short exact sequence]\label{l:ses}
Let $A$, $B$, $C$ be abelian groups, with $A$, $C$ fully effective and $B$ semi-effective, and let
\[\xymatrix{0 \ar[r]& A \ar[r]^{f} & B \ar[r]^{g} & C \ar[r]&0}\]
be a short exact sequence with $f$, $g$ computable. Assume, moreover, that the following computable maps are given:
\begin{itemize}
\item
$r\col\im f\to A$ such that $f(r(b))=b$ for every $b\in\im f$,
\item
$\sigma\colon \mcC\to \mcB$ (where $\mcB$, $\mcC$ are the sets of representatives for $B$, $C$, respectively) that behaves like
a set-theoretic section for $g$, i.e.\ such that $g([\sigma(\gamma)])=[\gamma]$ for all $\gamma \in \mcC$.
\end{itemize}
Then we can obtain a fully effective representation of~$B$.
\end{lemma}
We remark that we \emph{do not} require $[\sigma(\gamma)]$ to depend only on $[\gamma]$ -- it might well depend on the representative $\gamma$.
\heading{Semi-effectivity of $\boldsymbol{[X,\Pnew]^A_B}$ for pointed stable stages $\boldsymbol{\Pnew}$} \label{s:semi_eff_str}
We call a Moore--Postnikov stage $\Pnew$ \emph{stable} if $\thedim\leq 2\theconn$, where $\theconn$ is the connectivity of the homotopy fibre of $\varphi\col Y\to B$ as in the introduction. The significance of this restriction lies in the existence of an abelian group structure on $[X,\Pnew]^A_B$. The construction of this structure is (together with the construction of the Moore--Postnikov tower) the technically most demanding part of the paper and we postpone it to later sections. For its existence, we will have to assume further that $\Pnew$ is \emph{pointed} (i.e.\ provided with a section); this equips $[X,\Pnew]^A_B$ with a particular choice of zero. In detail, a lifting-extension problem \emph{for a pointed fibration} is a lifting-extension problem
\[\xymatrix@C=40pt{
A \ar[r]^-\fn \ar@{ >->}[d]_-\iota & \Pnew \ar@{->>}[d]_-\psin \\
X \ar[r]_-{g} \ar@{-->}[ru]^-{\ell} & B \ar@{-->}@/_10pt/[u]_-o
}\]
where $\fn=\varphin f$ as before, in which a section $o\col B\to\Pnew$ is given, referred to as the \emph{zero section}, such that $\fn=og\iota$, i.e.\ such that $\fn$ takes values on this zero section. In this case, $\fn$ is uniquely determined from the rest of the data and we may thus specify such a lifting-extension problem equivalently by a pair $(X,A)$ of simplicial sets over $B$ and a pointed fibration $\psin\col\Pnew\to B$. A solution is then a map $\ell\col X\to\Pnew$ over $B$ that sends $A$ to the zero section. In particular, $\ell=og$ (the zero map) is a solution, and we use it as a basepoint in $[X,\Pnew]^A_B$, making it into a pointed set.
\newcommand{\se}
{Suppose that $\thedim\leq 2\theconn$, that $\Pnew\to B$ is given a zero section $\onew$, that $\fn$ takes values on this zero section, and that $(X,A)$ is equipped with effective homology. Then the (pointed) set $[X,\Pnew]^A_B$ admits a structure of a semi-effective abelian group, whose elements
are represented by algorithms that compute diagonals $X\to \Pnew$.}
\begin{theorem}\label{t:semi_eff}
\se
\end{theorem}
The proof of the theorem occupies a significant part of the paper. First, we construct a ``weak \Hopf space structure'' on $\Pnew$ in Section~\ref{s:weak_H_spaces} and then show how this structure gives rise to addition on the homotopy classes of diagonals in Section~\ref{sec:weak_H_space}.
A special case of a pointed stable Moore--Postnikov stage is $\Ln=B\times K(\pin,\thedim)$ (the first non-trivial stage, with the corresponding Postnikov invariant $\knp$ equal to zero). It is equipped with the zero section $b\mapsto(b,0)$, which we denote by~$0$. The weak \Hopf space structure on $\Ln$ is very simple to describe: for two elements $z=(b,z')$ and $w=(b,w')$ of $\Ln$ lying over the same $b\in B$, we define $z+w\defeq(b,z'+w')$. This addition even makes $\Ln$ into a fibrewise simplicial abelian group. In this case, we immediately get a stronger result.
\begin{lemma}\label{l:fully_eff_cohlgy}
Let $(X,A)$ be equipped with effective homology. Then it is possible to equip $[X,\Ln]^A_B$ with a structure of a fully effective abelian group; the elements are represented by algorithms that compute (equivariant and fibrewise) simplicial maps $X\to\Ln$ that map $A$ to the zero section.
\end{lemma}
\begin{proof}
We start with isomorphisms
\[[X,\Ln]^A_B\cong[(X,A),(K(\pin,\thedim),0)]\cong H^\thedim_G(X,A;\pin)\cong H^\thedim_G(X,A;\pin)^\ef\]
where the group on the right is the cohomology group of the complex of equivariant cochains $C_*^\ef(X,A)\to\pin$ on the effective chain complex of $(X,A)$ and the last isomorphism comes from the effective homology of $(X,A)$.
It is possible to represent elements of these groups by algorithms that compute the respective (equivariant) simplicial maps or equivariant cocycles -- since it is possible to transform one such representing algorithm into another, this follows from the above isomorphisms and the case of $H^\thedim_G(X,A;\pin)^\ef$, where any cocycle is computable. Thus, the set of homotopy classes can be computed as the cohomology group of
\[C^*_\ef(X,A;\pin)^G=\operatorname{Hom}_\ZG(C_*^\ef(X,A),\pin)\]
using a Smith normal form algorithm -- it is a cochain complex of finitely generated abelian groups. The algorithm also gives the required generators. Any homotopy class is expressed as a combination of the chosen generators by translating to the right-hand side of the above isomorphism and using the output of the Smith normal form algorithm.
\end{proof}
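For illustration of the last step, suppose that the relevant part of the cochain complex is $\bbZ^a\xra{\delta^{\thedim-1}}\bbZ^b\xra{\delta^\thedim}\bbZ^c$ and that the Smith normal form of $\delta^{\thedim-1}$ has non-zero diagonal entries $d_1,\ldots,d_k$. Then
\[H^\thedim\cong\bbZ/d_1\oplus\cdots\oplus\bbZ/d_k\oplus\bbZ^{r-k},\quad r=\operatorname{rank}\ker\delta^\thedim,\]
and the base change matrices computed by the Smith normal form algorithm express any given cocycle as a combination of the chosen generators.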
\heading{An exact sequence relating consecutive pointed stable stages} \label{sec:exact_sequence}
To promote the semi-effective structure on $[X,\Pnew]^A_B$ to a fully effective one, we will apply Lemma~\ref{l:ses} to a certain exact sequence relating two consecutive stable stages of the Moore--Postnikov tower. Thus, again, we assume that $\thedim\leq 2\theconn$, that $\Pnew\to B$ has a zero section $\onew$, and that $\fn$ takes values on this zero section. Defining the zero section of $\Pold$ to be $B\xra{\onew}\Pnew\xra{\pn}\Pold$, and denoting it for simplicity also by $\oold$, these assumptions will apply also to $\Pold$.
The description of $\Pnew$ in the definition of a Moore--Postnikov tower as a pullback is classical and is also useful for the actual construction of the tower. For the upcoming computations, it has a major disadvantage, though -- the spaces appearing in the pullback square are not spaces over $B$. This is easily corrected by replacing the Eilenberg--MacLane space by the product $\Kn=B\times K(\pin,\thedim+1)$ and the ``path space'' by $\En=B\times E(\pin,\thedim)$. Denoting by $\kn$ the fibrewise Postnikov invariant, i.e.\ the map whose first component is the projection $\psiold\col\Pold\to B$ and the second component is the original (non-fibrewise) Postnikov invariant $\knp$, we obtain another pullback square
\[\xymatrix@C=40pt{
\Pnew \pb \ar[r]^-{\qn} \ar[d]_-{\pn} & E_\thedim \ar[d]^-\delta \\
\Pold \ar[r]_-\kn & K_{\thedim+1}
}\]
Let $\fibre_\oold\pn$ denote the \emph{(fibrewise) fibre} of $\pn$ over $\oold$, i.e.\ the subset of $\Pnew$ lying over the zero section of $\Pold$. We will need that $\fibre_\oold\pn$ is isomorphic to $\Ln=B\times K(\pin,\thedim)$. For the description of this isomorphism, it is useful to recall that $\pn$ is a principal fibration. Thinking of $\Pnew$ as a subset of $\Pold\times\En$, $z\in\Ln$ acts on $(x,c)\in\Pnew$ by $(x,c)+z\defeq(x,c+z)$ where the sum $c+z$ is taken within the fibrewise simplicial group $\En$. Then the isomorphism $\Ln\to \fibre_\oold\pn$ is the restriction of the map $\jn\col\Ln\to\Pnew$, $z\mapsto\onew+z$.
Giving both $\Kn$ and $\Ln$ the zero section $b\mapsto(b,0)$, the previous subsection endows both $[X,\Ln]^A_B$ and $[X,\Kn]^A_B$ with a structure of a fully effective abelian group.
\newcounter{les}
\newcommand{\exseq}{
Suppose that $\thedim\leq 2\theconn$ and that $\Pnew$ is equipped with a zero section $\onew$ on which $\fn$ takes values. Then there is an exact sequence
\ifthenelse{\equal{\theles}{0}}{
\begin{equation}\label{eq:les}
[\stdsimp 1 \times X,\Pold]^{(\partial\stdsimp 1 \times X) \cup (\stdsimp 1 \times A)}_B\xlra{\partial}[X,\Ln]^A_B\xlra{\jnst}[X,\Pnew]^A_B\xlra{\pnst}[X,\Pold]^A_B\xlra{\knst}[X,\Kn]^A_B
\end{equation}\setcounter{les}{1}}{
\begin{equation*}
[\stdsimp 1 \times X,\Pold]^{(\partial\stdsimp 1 \times X) \cup (\stdsimp 1 \times A)}_B\xlra{\partial}[X,\Ln]^A_B\xlra{\jnst}[X,\Pnew]^A_B\xlra{\pnst}[X,\Pold]^A_B\xlra{\knst}[X,\Kn]^A_B
\end{equation*}
of semi-effective abelian groups and computable homomorphisms.}
\begin{theorem} \label{thm:exact_sequence}
\exseq
\end{theorem}
The exactness itself ought to be well known and is essentially \cite[Proposition~II.2.7]{Crabb_James}.
The proof is postponed to Section~\ref{sec:leftover_proofs}; until then, we only describe the maps in this sequence -- these will be important in what follows. The middle homomorphisms are induced by the maps $\jn\col\Ln\ra\Pnew$ and $\pn\col\Pnew\ra\Pold$. The last homomorphism $\knst$ sends $[\ell]\in[X,\Pold]^A_B$ to $[\kn\ell-\kno]\in[X,\Kn]^A_B$. We remark that without subtracting $\kno$, the result would not be an element of $[X,\Kn]^A_B$, since $\kn\ell$ is in general not zero on $A$.
It remains to describe the connecting homomorphism $\partial$. Starting with a homotopy $h\col\stdsimp{1}\times X\ra\Pold$, we lift it to a homotopy $\widetilde h\col\stdsimp{1}\times X\ra\Pnew$ in such a way that $(\vertex0\times X)\cup(\stdsimp{1}\times A)$ is mapped to the zero section. This can be computed using Proposition~\ref{prop:homotopy_lifting}. The restriction of $\widetilde h$ to $\vertex1\times X$ then represents $\partial[h]$.
\heading{Proof of Theorem~\ref{thm:main_theorem} for pointed fibrations} \label{sec:description}
In addition to the assumptions of Theorem~\ref{thm:main_theorem}, we assume that $\Pnew\to B$ admits a zero section $\onew$ on which $\fn$ takes values.
Let us review the reductions made so far. By Theorem~\ref{t:n_equivalence}, it is enough to compute $[X,\Pnew]^A_B$ for $\thedim=\dim X\leq 2\theconn$. By Theorem~\ref{t:semi_eff}, it is a semi-effective abelian group. According to Theorem~\ref{thm:exact_sequence}, this group fits into an exact sequence with all the remaining terms fully effective either by Lemma~\ref{l:fully_eff_cohlgy} or by induction, since they concern diagonals into $\Pold$. By Lemma~\ref{l:ker_coker}, the resulting short exact sequence
\[\xymatrix{
0 \ar[r] & \coker\partial \ar[r]_-{\jnst} & [X,\Pnew]^A_B \ar[r]_-{\pnst} \POS[l]+R*{\vphantom{|}}="a";[]+L*{\vphantom{|}} \ar@<-5pt>@/_2pt/"a"_-r & \ker\knst \ar[r] \POS[l]+R*{\vphantom{|}}="a";[]+L*{\vphantom{|}} \ar@<-5pt>@/_2pt/"a"_-\sigma & 0
}\]
has both $\coker\partial$ and $\ker\knst$ fully effective as well. In order to apply Lemma~\ref{l:ses}, it thus suffices to construct the indicated set-theoretic sections $r$ and $\sigma$.
In the following, we will denote, for every space over $B$, the unique (fibrewise) map that takes values on the zero section by $o$ and call it the \emph{zero map}. In particular, the zero of $[X,\Pnew]^A_B$ is the homotopy class of the zero map $o=\onew g$.
\subsection*{Computing sections}
The section $\sigma$ is defined on the level of representatives (on which it depends) by mapping a partial diagonal $\ell\col X\ra\Pold$ to an arbitrary lift $\widetilde\ell\col X\ra\Pnew$ of $\ell$, zero on $A$. The computation of $\widetilde\ell$ is taken care of by Proposition~\ref{prop:lift_ext_one_stage}; a lift exists by $\ker\knst=\im\pnst$.
For the construction of the partial inverse $r$ on $\im\jnst=\ker\pnst$, let $\ell\col X\ra\Pnew$ be a diagonal such that its composition with $\pn\col\Pnew\ra\Pold$ is homotopic to $\oold$. Suppose for the moment that we have computed such a nullhomotopy $h_{\thedim-1}$, i.e.\ a homotopy from the zero diagonal $\oold$ to $\pn\ell$. Using Proposition~\ref{prop:homotopy_lifting}, we lift it along $\pn$ to a homotopy from some $\ell'$ to $\ell$. Since $\pn\ell'=\oold$, the diagonal $\ell'$ in fact lands in $\fibre_\oold\pn\cong\Ln$ and we may set $r([\ell])=[\ell']$.
\subsection*{Computing the nullhomotopy}
Thus, it remains to compute the nullhomotopy $h_{\thedim-1}$ by induction on the height of the Moore--Postnikov tower. Let $\ell_\thedimm$ denote the projection of $\ell$ onto the $\thedimm$-th stage $P_\thedimm$ of the Moore--Postnikov tower. Suppose that we have computed a nullhomotopy $h_{\thedimm-1}$ of $\ell_{\thedimm-1}$; we lift it by Proposition~\ref{prop:homotopy_lifting} to a homotopy $\widetilde h_{\thedimm-1}\col\ell'_\thedimm\sim\ell_\thedimm$ from some map $\ell'_\thedimm$, whose image necessarily lies in $\fibre_\oold p_\thedimm$.
Since Proposition~\ref{prop:homotopy_concatenation} provides algorithmic means for concatenating homotopies, it remains to construct a nullhomotopy $h'_\thedimm$ of $\ell'_\thedimm$. Consider the connecting homomorphism in \eqref{eq:les} for stages $P_{\thedimm-1}$ and $P_\thedimm$, i.e.
\[\partial\col[\stdsimp{1}\times X,P_{\thedimm-1}]^{(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)}_B\lra[X,\Li]_B^A.\]
From the exactness of \eqref{eq:les} and from $\ell'_\thedimm\sim\ell_\thedimm\sim o$, it follows that $[\ell'_\thedimm]$ lies in the image of $\partial$. By computing generators of the domain of $\partial$ and their images, we obtain some $h_{\thedimm-1}^2$ with $\partial[h_{\thedimm-1}^2]=[\ell'_\thedimm]$.
We claim that $h'_\thedimm$ can then be computed as a diagonal in the following lifting-extension problem with the top map zero on $(\vertex0\times X)\cup(\stdsimp{1}\times A)$ and equal to $\ell'_\thedimm$ on $\vertex1\times X$:
\[\xymatrix{
(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A) \ar[r] \ar[d] & P_\thedimm \ar[d] \\
\stdsimp{1}\times X \ar[r]_-{h_{\thedimm-1}^2} \ar@{-->}[ru]_-{h'_\thedimm} & P_{\thedimm-1}
}\]
Once we show the existence of such a lift, Proposition~\ref{prop:lift_ext_one_stage} provides an algorithm for its computation and the proof is finished.
\subsection*{The existence of $\boldsymbol{h_\thedimm'}$}
Let $\widetilde h_{\thedimm-1}^2$ be an arbitrary lift of $h_{\thedimm-1}^2$ that is zero on $(\vertex0\times X)\cup(\stdsimp1\times A)$. It ends at a diagonal $\ell''_\thedimm\col X\to P_\thedimm$, which is homotopic in $\Li$ to $\ell'_\thedimm$, by the definition of the connecting homomorphism and by $\partial[h_{\thedimm-1}^2]=[\ell'_\thedimm]$. Let $h_\thedimm^0$ be this homotopy, say from $\ell''_\thedimm$ to $\ell'_\thedimm$. The homotopy $h_\thedimm'$ is obtained as a concatenation of $\widetilde h_{\thedimm-1}^2$ and $h_\thedimm^0$, using Proposition~\ref{prop:homotopy_concatenation}.
\begin{figure}[H]
\centering
\includegraphics{lift}
\end{figure}\noindent
It remains to specify the bottom map in the corresponding lifting-extension problem. Our choice is illustrated in the above picture (for $(X,A)=(*,\emptyset)$) -- the projection of the hatched triangle is a degeneracy of $h_{\thedimm-1}^2$, namely the composition\footnote{Recall that $s^1$ is the simplicial map sending the unique non-degenerate $2$-simplex of $\stdsimp2$ to the $s_1$-degeneracy of the unique non-degenerate $1$-simplex of $\stdsimp1$.}
\[\stdsimp2\times X\xra{s^1\times\id}\stdsimp1\times X\xra{h_{\thedimm-1}^2}P_{\thedimm-1}.\]
It follows easily that $h_\thedimm'$ is indeed a lift of $h_{\thedimm-1}^2$.
\heading{Proof of Theorem~\ref{thm:main_theorem}}\label{s:main_proof}
Again, we need to compute $[X,\Pnew]^A_B$ for $\thedim=\dim X\leq 2\theconn$. This time, however, we do not assume the existence of a zero section $\onew$. The proof naturally splits into two parts that also form an actual algorithm: finding an element of $[X,\Pnew]^A_B$ if it exists at all, and computing this set once an element of it is specified. The second step is simpler since it quickly reduces to the previously considered pointed case.
\subsection*{Non-empty case}
First suppose that we have computed a diagonal $\ell_0\col X\to\Pnew$. Consider the following lifting-extension problem, where $\tpsin$ is obtained as a pullback of $\psin$ along $g$ (as in the square on the right).
\[\xymatrix@C=30pt{
A \ar[r]^-{(\iota,\fn)} \ar@{ >->}[d]_-\iota & X\times_B\Pnew \ar@{->>}[d]^-{\tpsin} & & X\times_B\Pnew \pb \ar@{->>}[d]_-{\tpsin} \ar[r] & \Pnew \ar@{->>}[d]^-{\psin} \\
X \ar[r]_-\id \ar@{-->}[ru] & X & & X \ar[r]_-g & B
}\]
Its diagonals are of the form $(\id,\ell)$ with $\ell\col X\to\Pnew$ an arbitrary diagonal in the original problem and thus these problems are equivalent, $[X,\Pnew]_B^A\cong[X,X\times_B\Pnew]_X^A$.
Since the bottom map in the new problem is the identity, its solutions are \emph{sections} of $\tpsin\col X\times_B\Pnew\ra X$ that extend a given partial section $(\iota,\fn)$ defined on $A$. The solution $(\id,\ell_0)$, whose existence we assume, provides $\tpsin$ with a zero section on which $(\iota,\fn)$ takes values.
In order to employ the results of Section~\ref{sec:description}, we need to show that $\tpsin\col X\times_B\Pnew\to X$ is a stage of a Moore--Postnikov tower composed of simplicial sets with effective homology. This follows from the argument of the proof of Theorem~\ref{t:MP_tower}, since $X\times_B\Pnew$ is obtained as an iterated pullback of Eilenberg--MacLane fibrations in exactly the same way as $\Pnew$. Thus, we obtain a fully effective abelian group structure on $[X,\Pnew]_B^A$ whose zero is $[\ell_0]$.
\subsection*{Finding an element}
It remains to decide whether a diagonal $\ell_0$ exists at all and, if this is the case, find one. This is also done by induction. Therefore, we assume that a diagonal $X\to\Pold$ has been computed. By Section~\ref{sec:description}, we are able to compute $[X,\Pold]^A_B$ and we would like to decide whether there exists a partial diagonal $\ell\col X\to\Pold$ that admits a lift further to $\Pnew$, as indicated in the square on the left.
\[\xymatrix@C=40pt{
A \ar[r]^-{\fn} \ar@{ >->}[d] & \Pnew \pb \ar[r]^-{\qn} \ar[d]_-\pn & \En \ar[d]^\delta \\
X \ar@{-->}[ru] \ar[r]_-\ell & \Pold \ar[r]_-\kn & \Kn
}\]
This lift exists precisely when there exists a diagonal in the composite square
\[\xymatrix@C=40pt{
A \ar[r]^-{\qn \fn} \ar@{ >->}[d] & \En \ar[d]^{\delta} \\
X \ar@{-->}[ru] \ar[r]_-{\kn\ell} & \Kn
}\]
Let $\widetilde c\col X\ra\En$ be any extension of $\qn \fn$. According to the proof of Lemma~\ref{l:lift_ext_one_stage}, a diagonal exists if and only if the class
\[[\kn\ell-\delta\widetilde c]\in[X,\Kn]_B^A\cong H^{\thedim+1}_G(X,A;\pin)\]
is zero. By Theorem~\ref{thm:exact_sequence}, the induced map
\[\knst\col [X,\Pold]^A_B\ra[X,\Kn]^A_B,\ [\ell]\mapsto [\kn\ell-\kno]\]
is a group homomorphism. Clearly $[\kn\ell-\delta\widetilde c]$ is zero if and only if $[\kn\ell-\kno]=[\delta\widetilde c-\kno]$. Therefore, a diagonal $\ell$ liftable to $\Pnew$ exists if and only if the image of $\knst$ contains $[\delta\widetilde c-\kno]\in[X,\Kn]^A_B$, a class independent of $\ell$. We are able to determine this image completely from our knowledge of (generators of) $[X,\Pold]^A_B$. Moreover, if $[\delta\widetilde c-\kno]$ lies in this image, we are able to compute an $\ell$ with $[\kn\ell-\kno]=[\delta\widetilde c-\kno]$ and use it for the construction of a diagonal: Let $\kn\ell-\delta\widetilde c=\delta c_0$, where $c_0$ is zero on $A$. The corresponding diagonal is then given by $(\ell,\widetilde c+c_0)\col X\to\Pnew\subseteq\Pold\times\En$.
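To spell out why $(\ell,\widetilde c+c_0)$ is indeed a diagonal: using that $\delta$ is additive, the two components are compatible,
\[\delta(\widetilde c+c_0)=\delta\widetilde c+\delta c_0=\delta\widetilde c+(\kn\ell-\delta\widetilde c)=\kn\ell,\]
so the pair takes values in the pullback $\Pnew=\Pold\times_\Kn\En$; moreover, its restriction to $A$ equals $(\pn\fn,\qn\fn)=\fn$, since $\ell$ restricts on $A$ to $\pn\fn$, $\widetilde c$ extends $\qn\fn$ and $c_0$ is zero on $A$.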
\subsection*{Existence}
When $\thedim=2\theconn+1$, Lemma~\ref{l:lift_ext_one_stage} guarantees the existence of a diagonal $X\to\Pnew$ for \emph{any} partial diagonal $X\to\Pold$. Thus, it is enough to decide whether the stable $[X,\Pold]^A_B$ is non-empty.\qed
\heading{Proof of Theorem~\ref{theorem:equivariant}}
We describe how the set of equivariant homotopy classes of maps $[X,Y]$ between two simplicial sets can be computed as a particular stable instance of the lifting-extension problem, namely $[X,Y]^\emptyset_{EG}$, so that Theorem~\ref{thm:main_theorem} applies.
This instance is obtained by setting $B=EG$ where $EG$ is a non-commutative version of $E(\pi,0)$. It has as $\thedim$-simplices sequences $(a_0,\ldots,a_\thedim)$ of elements $a_\thedimm\in G$, and its face and degeneracy operators are the maps
\begin{align*}
d_\thedimm(a_0,\ldots,a_\thedim) & =(a_0,\ldots,a_{\thedimm-1},a_{\thedimm+1},\ldots,a_\thedim) \\
s_\thedimm(a_0,\ldots,a_\thedim) & =(a_0,\ldots,a_{\thedimm-1},a_\thedimm,a_\thedimm,a_{\thedimm+1},\ldots,a_\thedim).
\end{align*}
There is an obvious diagonal action of $G$ which is clearly free.
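For illustration, the simplicial identities can be checked directly from these formulas; for instance, on $2$-simplices,
\[d_0d_1(a_0,a_1,a_2)=d_0(a_0,a_2)=(a_2)=d_0(a_1,a_2)=d_0d_0(a_0,a_1,a_2),\]
verifying the identity $d_0d_1=d_0d_0$. The diagonal action is given by $g\cdot(a_0,\ldots,a_\thedim)=(ga_0,\ldots,ga_\thedim)$, and freeness is clear already from the action on the first coordinate.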
As every $k$-simplex of $EG$ is uniquely determined by its (ordered) collection of vertices, it is clear that a simplicial map $g\col X\ra EG$ is uniquely determined by the mapping $g_0\col X_0\ra G$ of vertices and $g$ is equivariant if and only if $g_0$ is. A particular choice of a map $X\to EG$ is thus uniquely specified by sending the distinguished vertices of $X$ to $(e)$; it is clearly computable. Moreover, any two equivariant maps $X\to EG$ are (uniquely) equivariantly homotopic (vertices of $\stdsimp1\times X$ are those of $\vertex0\times X$ and $\vertex1\times X$).
Factoring $Y\to EG$ as $Y\cof[\sim]Y'\fib EG$ using Lemma~\ref{l:fibrant_replacement}, the geometric realization of $Y'$ equivariantly deforms onto that of $Y$. This shows that the first map in
\[[X,Y]\xra\cong[X,Y']\la[X,Y']^\emptyset_{EG}\]
is a bijection and it remains to study the second map. As observed above, for every simplicial map $X\to Y'$, the lower triangle in
\[\xymatrix@C=40pt{
\emptyset \ar[r] \ar[d] & Y' \ar@{->>}[d] \\
X \ar[r] \ar@{-->}[ur]^-\ell & EG
}\]
commutes up to homotopy. Since $Y'\fib EG$ is a fibration, one may replace $\ell$ by a homotopic map for which it commutes strictly, showing surjectivity of $[X,Y']^\emptyset_{EG}\to[X,Y']$. The injectivity is implied by uniqueness of homotopies -- every homotopy of maps $X\to Y'$ that are diagonals is automatically vertical.
It remains to show how to identify a given equivariant map $\ell\col X\to Y$ as an element of the computed group $[X,\Pnew]^\emptyset_{EG}$. Since this group is fully effective, it is enough to find the corresponding diagonal $X\to\Pnew$. As above, compute a homotopy $h$ from $\varphi\ell$ to $g\col X\to EG$; then, using Proposition~\ref{prop:homotopy_lifting}, compute a lift of $h$ that fits into
\[\xymatrix{
\vertex0\times X \ar[r]^-{\ell} \ar@{ >->}[d] & Y \ar[r]^-{\varphin} & \Pnew \ar@{->>}[d] \\
\stdsimp1\times X \ar[rr]_-h \ar@{-->}[rru]^{\widetilde h} & & EG
}\]
The restriction of $\widetilde h$ to $\vertex1\times X$ is the required diagonal $X\to\Pnew$.
\qed
\section{Weak \Hopf spaces}\label{s:weak_H_spaces}
In this section, we construct a weak fibrewise \Hopf space structure on stable Moore--Postnikov stages with sections. This will culminate in the proof of Theorem~\ref{t:semi_eff}.
First, we explain a simple approach to constructing
a \emph{strict} fibrewise \Hopf space structure, which we cannot make
algorithmic, though.
However, it introduces the ideas employed in the actual
proof of~Theorem~\ref{t:semi_eff}, and it also
shows why a weakening of the \Hopf space structure is needed.
We start with additional running assumptions.
\begin{convention}\label{con:fibrewise}
In addition to Convention~\ref{con:G_action_loc_eff}, all simplicial sets will be equipped with a map to $B$ and all maps, homotopies, etc.~will be fibrewise, i.e.\ they will commute with the specified maps to $B$. In the case of homotopies, this means that they remain in one fibre the whole time or in other words that they are vertical.
\end{convention}
\heading{Fibrewise \Hopf spaces}
Let $\psi\col P\fib B$ be a given fibration with a section $o\col B\ra P$. Recall that the pullback $P\times_BP$ consists of pairs $(x,y)$ with $\psi(x)=\psi(y)$. Associating to $(x,y)$ this common value makes $P\times_BP$ into a space over $B$. Recall that a (fibrewise) \emph{\Hopf space structure} on $P$ is a fibrewise map
\[\add\col P\times_BP\ra P,\]
where we write $\add(x,y)=x+y$, that satisfies a single condition -- the zero section $o$ should act as a zero for this addition, i.e.\ for $x\in P$ lying over $b=\psi(x)$ we have $o(b)+x=x=x+o(b)$. In what follows, we will slightly abuse notation and write $o$ for any value of $o$, so that the zero axiom reads $o+x=x=x+o$. After all, there is a single value of $o$ for which this makes sense. It will be convenient to organize this structure into a commutative diagram
\[\xymatrix@C=40pt{
P\vee_BP \ar[rd]^-\nabla \ar[d]_\vartheta \\
P\times_BP \ar[r]_-\add & P
}\]
with $P\vee_BP$ the fibrewise wedge sum, $P\vee_BP=(B\times_BP)\cup(P\times_BB)$ (where $B\subseteq P$ is the image of the zero section), and with $\nabla$ denoting the fold map given by $(o,x)\mapsto x$ and $(x,o)\mapsto x$. As explained, all maps are fibrewise over $B$. Under this agreement, the above diagram is a \emph{definition} of a (fibrewise) \Hopf space structure.
We say that the \Hopf space structure is \emph{homotopy associative} if there exists a homotopy $(x+y)+z\sim x+(y+z)$ (i.e.\ formally a homotopy of maps $P\times_BP\times_BP\to P$) that is constant when restricted to $x=y=z=o$. \emph{Homotopy commutativity} is defined similarly. Finally, it has a (right) \emph{homotopy inverse} if there exists a map $\inv\col P\to P$, denoted $x\mapsto -x$, such that $-o=o$ and such that there exists a homotopy $x+(-x)\sim o$, constant when restricted to $x=o$.
We have already met an example of an \Hopf space, namely $\Ln=B\times K(\pin,\thedim)$ with the addition $(b,z)+(b,w)=(b,z+w)$. Not only does it play an important role in inductive considerations, it is also the simplest case of an \Hopf space structure on pointed stable Moore--Postnikov stages. In general, we have the following crucial theorem. The proof can be found in Section~\ref{sec:existence_of_H_space_structures_proof}.
\newcommand{\eoHss}
{Let $\varphi\col Y\to B$ be a map whose homotopy fibre is $d$-connected, and let
$n\leq 2d$. Then there exists a fibrewise \Hopf space structure on the Moore--Postnikov stage $\Pnew$. Any such structure is homotopy associative, homotopy commutative and has a homotopy inverse.}
\begin{theorem}\label{t:existence_of_H_space_structures}
\eoHss
\end{theorem}
The importance of this result lies not in the existence of an \Hopf space structure but in its properties. After all, we will need to construct this structure ourselves, and in this respect the above existential result is not sufficient.
\heading{\Hopf space structures on pullbacks}\label{sec:non_constr_H_space}
We describe a general method for introducing \Hopf space structures on pullbacks since $\Pnew$ is defined in this way. Let us start with a general description of our situation. We are given a pullback square
\[\xymatrix@C=40pt{
P \ar[r] \ar[d] \pb & R \ar@{->>}[d]^-\psi \\
Q \ar[r]_-\chi & S
}\]
with $\psi$ a fibration. We assume that all of $Q$, $R$ and $S$ are fibrewise \Hopf spaces over $B$, and that $R$ and $S$ are strictly associative with strict inverses. If both $\psi$ and $\chi$ preserved the addition strictly, we could define addition on $P\subseteq Q\times R$ componentwise. In our situation,
though, $\chi$ preserves the addition only up to homotopy and accordingly, the addition on $P$ will have to be perturbed to
\begin{equation}\label{eq:addition_on_pullback}
(x,y)+(x',y')=(x+x',y+y'+M(x,x')).
\end{equation}
There are two conditions that need to be satisfied in order for this formula to be correct: $\psi M(x,x')=\chi(x+x')-(\chi(x)+\chi(x'))$ (so that the result lies in the pullback) and $M(x,o)=o=M(o,x)$ (to get an \Hopf space). Both are summed up in the following lifting-extension problem
\[\xymatrix@C=30pt{
P\vee_BP \ar[r]^-o \ar[d]_-\vartheta & R \ar@{->>}[d]^-\psi \\
P\times_BP \ar[r]_-m \ar@{-->}[ru]^-M & S
}\]
with $m(x,x')=\chi(x+x')-(\chi(x)+\chi(x'))$. In our situation, $\psi$ is $\delta\col\En\to\Kn$. Thus, Lemma~\ref{l:lift_ext_one_stage} would give us a solution if the pair $(P\times_BP,P\vee_BP)$ had effective homology.
However, we have not been able to prove this, and consequently, we cannot construct the addition on the pullback. In the computational world, we are thus forced to replace this pair by a certain homotopy version $(P\htimes_BP,P\hvee_BP)$ of it that admits effective homology. This transition corresponds, as explained later, to the passage from \Hopf spaces to a weakened notion, where the zero section serves as a zero for the addition only up to homotopy.
After this rather lengthy introduction, the plan for the rest of the section is to introduce weak \Hopf spaces and then to describe the inductive construction of the weak \Hopf space structure on pointed stable stages of Moore--Postnikov towers. To understand the weak version, it helps to keep in mind the above formula for the addition on $P$.
Assuming that a right inverse exists in $Q$, $R$ and $S$, a right inverse in $P$ is given by the formula
\begin{equation}\label{eq:inverse_on_pullback}
-(x,y)=(-x,-y-M(x,-x)).
\end{equation}
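As a sanity check, assuming additionally that the inverse in $R$ is two-sided, the zero axiom and the right inverse can be verified componentwise from $M(x,o)=o=M(o,x)$ and strict associativity in $R$:
\[(x,y)+(o,o)=\bigl(x+o,\;y+o+M(x,o)\bigr)=(x,y),\]
\[(x,y)+\bigl(-(x,y)\bigr)=\bigl(x+(-x),\;y+\bigl(-y-M(x,-x)\bigr)+M(x,-x)\bigr)=(o,o).\]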
\heading{Weak \Hopf spaces} \label{s:hocolim}
We will need a weak version of an \Hopf space. Roughly speaking, this is defined to be a fibrewise addition $x+y$ together with left zero and right zero homotopies $\lambda\col x\sim o+x$ and $\rho\col x\sim x+o$ that become homotopic as homotopies $o\sim o+o$. In simplicial sets, a homotopy between homotopies can be defined in various ways. Here we will interpret it as a map $\eta\col\stdsimp{2}\times B\ra P$ that is a constant homotopy on $\bdry_2\stdsimp{2}\times B$ and restricts to the two zero homotopies on $\bdry_1\stdsimp{2}\times B$ and $\bdry_0\stdsimp{2}\times B$, respectively.
\[\xymatrix{
& o+o \POS[];[d]**{}?(.6)*{//{\scriptstyle\eta}//} \\
o \ar[rr]_{s_0o} \ar[ru]^{\lambda(o)} & & o \ar[lu]_{\rho(o)}
}\]
We will organize this data into a map $\add\col P\htimes_BP\to P$ with similar properties to the strict \Hopf space structure. The space $P\htimes_BP$ will be a special case of the following construction which works for any commutative square (of spaces over $B$)
\[\mcS={}\quad\xymatrixc{
Z \ar[r]^-{u_0} \ar[d]_-{u_1} & Z_0 \ar[d]^-{v_0} \\
Z_1 \ar[r]_-{v_1} & Z_2
}\]
that we denote for simplicity by $\mcS$. We define
\[|\mcS|=\big(\stdsimp{2}\times Z\big)\cup\big(\bdry_1\stdsimp{2}\times Z_0\big)\cup\big(\bdry_0\stdsimp{2}\times Z_1\big)\cup\big(2\times Z_2\big)\]
together with the subspace
\[\bdry_2|\mcS|=\big(\bdry_2\stdsimp{2}\times Z\big)\cup\big(0\times Z_0\big)\cup\big(1\times Z_1\big),\]
where we assume for simplicity that all the maps in $\mcS$ are inclusions; otherwise, the union has to be replaced by a certain (obvious) colimit. In this case, $|\mcS|$ is naturally a subspace of $\stdsimp2\times Z_2$ and as such admits an obvious map to $\stdsimp2\times B$ whose fibres are equal to those of $Z$, $Z_0$, $Z_1$ or $Z_2$ (over $B$), depending on the point of $\stdsimp2$. In the picture below, $B=\{*\}$ and $|\mcS|$ is thus depicted as a space over $\stdsimp2$; here, $Z_2$ is a $3$-simplex, $Z_0$ and $Z_1$ its edges and $Z$ their common vertex.
\begin{figure}[H]
\centering
\includegraphics{hocolim}
\end{figure}\noindent
\begin{remark}
The construction $|\mcS|$ is a small model of a homotopy colimit of the square $\mcS$ with $\bdry_2|\mcS|$ the corresponding small model for a homotopy pushout of $Z_0$ and $Z_1$ along $Z$.
\end{remark}
The construction $|\mcS|$ possesses the following universal property: to give a map $f\col |\mcS|\ra Y$ is the same as to give maps $f_\thedimm\col Z_\thedimm\ra Y$ (for $\thedimm=0\comma 1\comma 2$), homotopies $h_\thedimm\col f_\thedimm\sim f_2v_\thedimm$ (for $\thedimm=0\comma 1$) and a ``second order homotopy'' $H\col \stdsimp{2}\times Z\ra Y$ whose restriction to $\bdry_\thedimm\stdsimp{2}\times Z$ equals $h_\thedimm u_\thedimm$ (for $\thedimm=0\comma 1$). Similarly, a map $\bdry_2|\mcS|\ra Y$ is specified by $f_0\col Z_0\to Y$ and $f_1\col Z_1\to Y$ as above and a homotopy $\bdry_2\stdsimp{2}\times Z\to Y$ from $f_0u_0$ to $f_1u_1$.
In order to apply this definition to weak \Hopf spaces, we consider the square
\[\mcS_P={}\qquad\xymatrixc{
B\times_BB \ar@{ >->}[r] & \leftbox{B\times_BP}{{}\defeq P_\mathrm{right}} \ar@{ >->}[d] \\
\rightbox{P_\mathrm{left}\defeq{}}{P\times_BB} \ar@{ >->}[r] \POS[]*+!!<0pt,\the\fontdimen22\textfont2>{\phantom{P\times_BB}}="a";[u]\ar@{ >->}"a" & P\times_BP
}\]
(the subspaces consist of pairs where one of the two components, or both, lie on the zero section). We will denote $B\times_BB$ for simplicity by $B$, to which it is canonically isomorphic.
\begin{definition}
Let $P\to B$ be a Kan fibration. We define simplicial sets
\[P\hvee_BP\stackrel{\mathrm{def}}{=}\bdry_2|\mcS_P|,\qquad P\htimes_BP\stackrel{\mathrm{def}}{=}|\mcS_P|.\]
We denote the inclusion by $\vartheta\col P\hvee_BP\to P\htimes_BP$.
Furthermore, we define a ``fold map'' $\hnabla\col P\hvee_BP\ra P$, prescribed as the identity map on $0\times P_\mathrm{right}$ and $1\times P_\mathrm{left}$ and as the constant homotopy at $o$ on $\bdry_2\stdsimp{2}\times B$.
\end{definition}
We remark that $P\hvee_BP$ and $P\htimes_BP$ are weakly homotopy equivalent to $P\vee_BP$ and $P\times_BP$, respectively; this is proved in Lemma~\ref{l:whe_weak_strict}. Now, we are ready to define weak \Hopf spaces.
\begin{definition}
A \emph{weak \Hopf space structure} on $P$ is a (fibrewise) map $\add\col P\htimes_BP\to P$ that fits into a commutative diagram
\[\xymatrix@C=40pt{
P\hvee_BP \ar[rd]^-\hnabla \ar[d]_\vartheta \\
P\htimes_BP \ar[r]_-\add & P
}\]
We denote the part of $\add$ corresponding to $2\times(P\times_BP)$ by $x+y=\add(2,x,y)$, the part corresponding to $\bdry_1\stdsimp{2}\times P_\mathrm{right}$, i.e.\ the left zero homotopy, by $\lambda$, and the part corresponding to $\bdry_0\stdsimp{2}\times P_\mathrm{left}$, i.e.\ the right zero homotopy, by $\rho$.
Finally, we define a ``diagonal'' $\hDelta\col P\ra P\htimes_BP$ by $x\mapsto(2,x,x)$. All these associations are natural, making $P\htimes_BP$ etc.~into functors and $\hnabla$ etc.~into natural transformations.
\end{definition}
\newcommand{\ehc}
{Assume that all the spaces in the square $\mcS$ have effective homology. Then so does the pair $(|\mcS|,\bdry_2|\mcS|)$.}
\begin{proposition}\label{p:effective_homotopy_colimits}
\ehc
\end{proposition}
The proof is given in Section~\ref{sec:effective_homotopy_colimits}. The following special case will be crucial in constructing a weak \Hopf space structure on stable stages of Moore--Postnikov towers with sections.
\newcommand{\wHseh}
{Let $\Pnew$ be a Moore--Postnikov stage of a map $\varphi\col Y\to B$ between simplicial sets with effective homology. Then it is possible to equip the pair $(\Pnew\htimes_B\Pnew,\Pnew\hvee_B\Pnew)$ with effective homology.}
\begin{corollary} \label{c:weak_H_space_eff_hlgy}
\wHseh
\end{corollary}
\begin{proof}
As in Section~\ref{s:main_proof}, it is possible to equip $\Pnew\times_B\Pnew$ with effective homology as a pullback of a Moore--Postnikov stage. Thus, the result follows from the previous proposition.
\end{proof}
\begin{remark}
Alternatively, we may compute $\Pnew\times_B\Pnew$ as a stage of the Moore--Postnikov tower for $Y'\times_BY'\ra B$ (at the same time as we build the tower for $Y\ra B$) with all the Eilenberg--MacLane spaces and Postnikov classes ``squared''.
\end{remark}
The following proposition will be used in Section~\ref{sec:H_space_constr} as a certificate for the existence of a weak \Hopf space structure on $\Pnew$; namely, it will guarantee that all relevant obstructions vanish.
\newcommand{\wHsc}{
If the homotopy fibre of $\varphi\col Y\ra B$ is $\theconn$-connected, then for any Moore--Postnikov stage $\Pnew$ the pair $(\Pnew\htimes_B\Pnew,\Pnew\hvee_B\Pnew)$ is $(2\theconn+1)$-connected.
In particular, the cohomology groups $H^*_G(\Pnew\htimes_B\Pnew,\Pnew\hvee_B\Pnew;\pi)$ of this pair vanish up to dimension $2\theconn+1$.}
\begin{proposition} \label{prop:weak_H_space_connect}
\wHsc
\end{proposition}
The proof can be found in Section~\ref{sec:leftover_proofs}.
\heading{Constructing weak \Hopf spaces}\label{sec:H_space_constr}
Prime examples of weak \Hopf spaces are the strict ones, and in particular every fibrewise simplicial group is a weak \Hopf space. In what follows, we will make use of the trivial bundles $\Kn=B\times K(\pin,\thedim+1)$ and $\En=B\times E(\pin,\thedim)$. Since $\Kn$ is a fibrewise simplicial group, we have a whole family of weak \Hopf space structures on $\Kn$, one for each choice of a zero section $o\col B\ra\Kn$. Namely, we define addition $z+_{o}w=z-o+w$ (the inverse then becomes $-_oz=-z+2o$). A similar formula defines an \Hopf space structure on $\En$ for every choice of its zero section. Unless stated otherwise, we use the actual zero section $0$.
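To illustrate, the zero axiom and the right inverse for $+_o$ follow by direct computation, using that $K(\pin,\thedim+1)$ is a simplicial abelian group:
\[o+_ow=o-o+w=w,\qquad z+_oo=z-o+o=z,\qquad z+_o(-_oz)=z-o+(-z+2o)=o.\]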
We are now ready to prove the following crucial proposition.
\begin{proposition}\label{prop:weak_H_space_structure}
If $\thedim\leq 2\theconn$ and $\Pnew$ is given a zero section $\onew$, it is possible to construct a structure of a weak \Hopf space on $\Pnew$ with a strict right inverse.
\end{proposition}
\begin{proof}
Let $\Pold$ be a Moore--Postnikov stage and $\kn\col\Pold\ra\Kn$ the respective (fibrewise) Postnikov invariant. There is a pullback square
\begin{equation} \label{eq:consecutive_stages}
\xy *!C\xybox{\xymatrix@C=40pt{
\Pnew \ar[r]^{\qn} \ar[d]_{\pn} \pb & \En \ar[d]^\delta \\
\Pold \ar[r]^\kn & \Kn
}}\endxy
\end{equation}
of spaces over $B$. We denote the images of the zero section $\onew\col B\to\Pnew$ by $\oold=\pn\onew$ in $\Pold$ and $\qn\onew$ in $\En$ and use these sections as basepoints in the fibres over $B$.
In this way $\Kn$ is equipped with two sections, the zero section $0$ and
the composition $\kno$. We will see that the fact that these do not
coincide in general causes some technical problems.
Assume inductively that there is given a structure of a weak \Hopf space on $\Pold$.
\[\xymatrix@C=40pt{
\Pold\hvee_B\Pold \ar[rd]^-\hnabla \ar[d]_\vartheta \\
\Pold\htimes_B\Pold \ar[r]_-\add & \Pold
}\]
We form the ``non-additivity'' map $m\col \Pold\htimes_B \Pold\ra \Kn$ as the difference of the following two compositions
\[\xymatrix@=15pt{
& \Pold\htimes_B\Pold \ar[ld]_{\add} \ar[rd]^{\kn\htimes \kn} & \\
\Pold \ar[rd]_\kn & - & \leftbox{\Kn}{{}\htimes_B\Kn} \ar[ld]^{\add_{\kno}} \\
& \Kn
}\]
where $\add_{\kno}$ is the \Hopf space structure on $\Kn$ whose zero section is $\kno$. Recall that it is given by $z+_{\kno}w=z-\kno+w$.
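Concretely, on the part of $\Pold\htimes_B\Pold$ corresponding to $2\times(\Pold\times_B\Pold)$, this difference reads
\[m(2,x,x')=\kn(x+x')-\bigl(\kn(x)-\kno+\kn(x')\bigr),\]
the weak analogue of the non-additivity map from Section~\ref{sec:non_constr_H_space}.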
We will now construct a weak \Hopf space structure on $\Pnew=\Pold\times_\Kn\En$ under our stability assumption $\thedim\leq 2\theconn$. The zero of this structure will be $\onew$. We compute a diagonal in
\begin{equation}\label{eq:lift_homotopy_additivity}
\xy *!C\xybox{\xymatrix@C=30pt{
\Pold\hvee_B\Pold \ar[r]^-0 \ar[d]_-\vartheta & \En \ar[d]^-\delta \\
\Pold\htimes_B\Pold \ar[r]_-m \ar@{-->}[ru]^-M & \Kn
}}\endxy
\end{equation}
by Lemma~\ref{l:lift_ext_one_stage}, whose hypotheses are satisfied according to Corollary~\ref{c:weak_H_space_eff_hlgy} and Proposition~\ref{prop:weak_H_space_connect}. The existence of $M$ says roughly that $\kn$ is additive up to homotopy. We define
\[\mathop{\add}\col \Pnew\htimes_B\Pnew\lra \Pnew=\Pold\times_\Kn\En\]
by its two components. The first is uniquely specified by the requirement that $\pn\col \Pnew\ra \Pold$ is a homomorphism, i.e.\ by the commutativity of the square
\[\xymatrix@C=30pt{
\Pnew\htimes_B\Pnew \ar[r]^-\add \ar[d]_-{\pn\htimes\pn} & \Pnew \ar[d]^-{\pn} \\
\Pold\htimes_B\Pold \ar[r]_-\add & \Pold
}\]
The second component $\qn\mathop{\add}$ is given as a sum
\[\xymatrix@=15pt{
& \Pnew\htimes_B\Pnew \ar[ld]_{\qn\htimes\qn} \ar[rd]^{\pn\htimes\pn} & \\
\rightbox{\En\htimes_B{}}{\En} \ar[rd]_{\add_{\qn\onew}} & + & \leftbox{\Pold}{{}\htimes_B\Pold} \ar[ld]^M \\
& \En \\
}\]
The last two diagrams are a ``weak'' version of the formula \eqref{eq:addition_on_pullback}. A simple diagram chase shows that the two components are compatible and satisfy the condition of a weak \Hopf space; details can be found in Lemma~\ref{lem:calculations}.
We will also need a right inverse, which we denote by $\mathop{\inv}\col \Pnew\ra \Pnew$, $x\mapsto -x$. We assume inductively that $\inv$ has been constructed on $\Pold$ in such a way that $x+(-x)=\oold$. Then we define a right inverse on $\Pnew$ by the formula
\[-(x,c)=(-x,-c+2\qn\onew-M(2,x,-x)).\]
Again, $\inv$ is well defined and a right inverse for $\add$; details can be found in Lemma~\ref{lem:calculations}.
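Although the details are deferred to Lemma~\ref{lem:calculations}, the key computation is short: the $\En$-component of $(x,c)+\bigl(-(x,c)\bigr)$ is
\[c-\qn\onew+\bigl(-c+2\qn\onew-M(2,x,-x)\bigr)+M(2,x,-x)=\qn\onew,\]
while the $\Pold$-component is $x+(-x)$, which lies on the zero section by the inductive assumption; hence the sum equals $\onew$.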
\end{proof}
\section{Structures induced by weak \Hopf spaces}\label{sec:weak_H_space}
In the case of a strict \Hopf space $P\to B$, it is easy to define addition on $[X,P]^A_B$: simply put $[\ell_0]+[\ell_1]=[\ell_0+\ell_1]$. In particular, this defines addition on $[X,\Ln]^A_B$ which, under the identification of $[X,\Ln]^A_B$ with $H^\thedim_G(X,A;\pin)$, corresponds to the addition in the cohomology group.
It is technically much harder to equip $[X,P]^A_B$ with addition when the \Hopf space structure on $P$ is weak. In this case, $\ell_0+\ell_1$ is not zero on $A$ and thus does not represent an element of $[X,P]^A_B$. This problem is solved in Section~\ref{sec:strictness}. Next, we will discuss the appropriate notion of a map between weak \Hopf spaces and show how they induce group homomorphisms on homotopy classes. Finally, we will derive the exact sequence relating diagonals into two subsequent pointed stable stages of a Moore--Postnikov tower.
\heading{Strictification and addition of homotopy classes} \label{sec:strictness}
The point of this subsection is to describe a perturbation of a weak \Hopf space structure to one for which the zero is strict. We will then apply this to the construction of addition on $[X,\Pnew]^A_B$. Assume thus that we have a weak \Hopf space structure
\[\xymatrix@C=40pt{
P\hvee_BP \ar[d]_-\vartheta \ar[rd]^-{\hnabla} \\
P\htimes_BP \ar[r]_-\add & P
}\]
Form the following lifting-extension problem where the top map is $\add$ on $P\htimes_BP$ and $\nabla\pr_2$ on $\bdry_2\stdsimp{2}\times(P\vee_BP)$. Lemma~\ref{l:whe_weak_strict} shows that the map on the left is a weak homotopy equivalence and thus a diagonal exists (but in general not as a computable map).
\[\xymatrix@C=50pt{
(P\htimes_BP)\cup\big(\bdry_2\stdsimp{2}\times(P\vee_BP)\big) \ar@{ >->}[d]_-\sim \ar[r]^-{[\add,\nabla\pr_2]} & P \ar@{->>}[d] \\
\stdsimp{2}\times(P\times_BP) \ar[r] \ar@{-->}[ur] & B
}\]
The restriction of the diagonal to $0\times(P\times_BP)$ is then an \Hopf space structure which we denote $\hadd$ with the corresponding addition $\hplus$. The restriction to $\bdry_1\stdsimp2\times(P\times_BP)$ is a homotopy $x_0\hplus x_1\sim x_0+x_1$.
\begin{definition}
Let $\varphi\col P\to B$ be a weak \Hopf space with addition $\add$. Let $\hadd$ be its perturbation to a strict \Hopf space structure as above. We define the addition in $[X,P]^A_B$ by $[\ell_0]+[\ell_1]=[\ell_0\hplus\ell_1]$. Below, we prove that it is independent of the choice of a perturbation.
\end{definition}
Composing the above homotopy with $\ell_0$ and $\ell_1$, we obtain $\ell_0\hplus\ell_1\sim\ell_0+\ell_1$ whose restriction to $A$ is the left zero homotopy $\lambda(o)\col o=o\hplus o\sim o+o$; again, we will be using $o$ to denote the unique map with values on the zero section. We will use this observation as a basis for the computation of the homotopy class of $\ell_0\hplus\ell_1$, since we do not see a way of computing $\hadd$ directly -- it seems to require certain pairs to have effective homology and we think that this might not be the case in general.
Restricting to the case $P=\Pnew$ of Moore--Postnikov stages, the addition in $[X,\Pnew]^A_B$ is computed in the following algorithmic way. Let $\ell_0,\ell_1\col X\to\Pnew$ and consider $\ell_0+\ell_1$, which is not zero on $A$ but rather takes values $o+o$. Extend the left zero homotopy $\lambda(o)\col o\sim o+o$ on $A$ to a homotopy $\sigma\col\ell\sim\ell_0+\ell_1$ on $X$. It is quite easy to see that the resulting map $\ell$ is unique up to homotopy relative to $A$.\footnote{Given two such homotopies, one may form out of them a map $(\horn22\times X)\cup(\stdsimp2\times A)\to\Pnew$, whose extension to $\stdsimp2\times X$ (suitably fibrewise) gives on $\bdry_2\stdsimp2\times X$ the required homotopy.} Since $\ell_0\hplus\ell_1$ is also obtained in this way, this procedure correctly computes $[\ell]=[\ell_0]+[\ell_1]\in[X,\Pnew]^A_B$. From the algorithmic point of view, this is well behaved -- if $(X,A)$ is equipped with effective homology, we may extend homotopies by Proposition~\ref{prop:homotopy_lifting}. This proves the first half of the following proposition.
\begin{proposition}\label{prop:addition_on_homotopy_classes}
If $(X,A)$ is equipped with effective homology and $\Pnew\ra B$ is given a weak fibrewise \Hopf space structure with zero section $\onew$, then there exists an algorithm that computes, for any two maps $\ell_0\comma\ell_1\col X\ra\Pnew$, zero on $A$, a representative of $[\ell_0]+[\ell_1]$. If the weak \Hopf space structure has a strict right inverse, then the computable $o+(-\ell)$ is a representative of $-[\ell]$.
\end{proposition}
\begin{proof}
The formula $\ell\mapsto\onew+(-\ell)$ prescribes a mapping $[X,\Pnew]^A_B\ra[X,\Pnew]^A_B$ since $\onew+(-\onew)=\onew$. It is slightly more complicated to show that it is an inverse for our perturbed version of the addition. To this end, we have to exhibit a homotopy
\[\onew\sim\ell+(\onew+(-\ell))\]
that on $A$ agrees with the left zero homotopy $\lambda(\onew)$. We start with the left zero homotopy $\lambda(-\ell)\col-\ell\sim\onew+(-\ell)$ and add $\ell$ to it on the left to obtain $\ell+\lambda(-\ell)\col\onew\sim\ell+(\onew+(-\ell))$. By Lemma~\ref{lem:commuting_left_zero_homotopies}, its restriction $\onew+\lambda(-\onew)$ to $A$ is homotopic to the left zero homotopy $\lambda(\onew+(-\onew))=\lambda(\onew)$. Extending this second order homotopy from $A$ to $X$, we obtain a new homotopy $\onew\sim \ell+(\onew+(-\ell))$ that agrees with the left zero homotopy on $A$ as desired.
\end{proof}
\begin{lemma}\label{lem:commuting_left_zero_homotopies}
The homotopies $\lambda(\onew+x),\onew+\lambda(x)\col\onew+x\sim \onew+(\onew+x)$ are homotopic relative to $\partial\stdsimp{1}\times\Pnew$.
\end{lemma}
\begin{proof}
We concatenate the two homotopies from the statement with the left zero homotopy $\lambda(x)\col x\sim \onew+x$. It is then enough to show that the two concatenations are homotopic. The homotopy between them is simply $\stdsimp{1}\times\stdsimp{1}\times\Pnew\ra\Pnew$, $(s,t,x)\mapsto\lambda(s,\lambda(t,x))$.
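Explicitly, writing $H(s,t,x)=\lambda(s,\lambda(t,x))$ and adopting the convention $\lambda(0,x)=x$, $\lambda(1,x)=\onew+x$, the four edges of this square are
\[H(s,0,x)=\lambda(s,x),\quad H(s,1,x)=\lambda(s,\onew+x),\quad H(0,t,x)=\lambda(t,x),\quad H(1,t,x)=\onew+\lambda(t,x),\]
which are exactly the edges appearing in the two concatenated homotopies from the statement.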
\end{proof}
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{t:semi_eff} (restatement)}
\begin{theorem}
\se
\end{theorem}}
\begin{proof}
This is a corollary of a collection of results obtained so far.
By Proposition~\ref{prop:weak_H_space_structure}, it is possible to construct on $\Pnew$ the structure of a weak \Hopf space. By results of this subsection, it is possible to strictify this structure which makes $\Pnew$ into an \Hopf space. According to Theorem~\ref{t:existence_of_H_space_structures}, it is homotopy associative, homotopy commutative and with a right homotopy inverse; consequently, $[X,\Pnew]^A_B$ is an abelian group. By Proposition~\ref{prop:addition_on_homotopy_classes}, it is possible to compute the addition and the inverse in this group on the level of representatives, making it finally into a semi-effective abelian group.
\end{proof}
\heading{\Hopf maps}\label{sec:H-maps}
In this subsection, we assume that $\psi\col P\to Q$ is a map between two weak \Hopf spaces that preserves the zero section, i.e.\ $\psi(o)=o$.
\footnote{It is also possible to define \Hopf maps with sections $\psi(o)$ and $o$ merely homotopic. In this paper, such generality is not needed.}
We say that $\psi$ is a \emph{homomorphism} if it preserves the whole structure strictly, i.e.\ when
\[\xymatrix@C=30pt{
P\htimes_BP \ar[r]^-\add \ar[d]_-{\psi\htimes\psi} & P \ar[d]^-\psi \\
Q\htimes_BQ \ar[r]_-\add & Q
}\]
commutes. This notion is too strict for our purposes, but still useful.
We say that $\psi$ is an \emph{\Hopf map} if there is a specified homotopy
\[\alpha\col \psi(x+y)\sim\psi(x)+\psi(y)\]
together with a second order homotopy
\begin{equation}\label{eq:second_order_homotopy}
\xymatrixc{
\psi(o+o) \ar[rr]^-\alpha & & \psi(o)+\psi(o) \\
& \psi(o) \ar[lu]^-{\psi\lambda(o)} \ar[ru]_-{\lambda(o)} \POS[];[u]**{}?(.6)*{/////}
}\end{equation}
where $\lambda$ stands for the left zero homotopies. In this situation, the induced map $\psi_*\col [X,P]^A_B\ra[X,Q]^A_B$ is a group homomorphism for every pair $(X,A)$ over $B$. This is because the composition of homotopies
\[\xymatrix{
\psi(x\hplus y) \ar[r]^-{\psi\sigma} & \psi(x+y) \ar[r]^-\alpha & \psi(x)+\psi(y) & \psi(x)\hplus\psi(y) \ar[l]_-\sigma,
}\]
with $\sigma$ denoting the homotopies from the strictifications, restricts on $A$ to a homotopy that is homotopic to a constant homotopy via the above second order homotopy. Using the homotopy extension property, $\psi(x\hplus y)\sim \psi(x)\hplus\psi(y)$ by a homotopy relative to $A$.
There is an obvious variation with $\alpha$ pointing the other way, i.e.\ $\alpha^{-1}\col\psi(x)+\psi(y)\sim\psi(x+y)$. In the case that $Q\to B$ is a Kan fibration, this notion is equivalent to the previous one.
\heading{Proof of Theorem~\ref{thm:exact_sequence}}
First, we restate the theorem.
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{thm:exact_sequence} (restatement)}
\begin{theorem}
\exseq
\end{theorem}}
\begin{proof}
To start with, we have a fibration sequence of spaces over $B$ with sections,
\[(\Ln,0)\xlra{\jn}(\Pnew,\onew)\xlra{\pn}(\Pold,\oold)\xlra{\kn}(\Kn,\kno)\]
where the last two maps are those from the diagram~\eqref{eq:consecutive_stages} and $\Ln=B\times K(\pin,\thedim)$ is identified with the (fibrewise) fibre of $\pn$,
\[\jn\col\Ln\xlra\cong\fibre_\oold\pn\subseteq\Pnew,\ z\mapsto\onew+z.\]
There is an isomorphism $(\Kn,\kno)\cong(\Kn,0)$ of fibrewise simplicial groups, given by $z\mapsto z-\kno$.
This fibration sequence induces, for each pair $(X,A)$ over $B$, an exact sequence of pointed sets of homotopy classes of maps over $B$,
\[[X,\Ln]^A_B\ra[X,\Pnew]^A_B\ra[X,\Pold]^A_B\ra[X,\Kn]^{A,\kno}_B\cong[X,\Kn]^A_B,\]
where the decoration ``$\kno$'' in $[X,\Kn]^{A,\kno}_B$ is used to distinguish it from $[X,\Kn]^A_B$, where the zero section $0$ is used instead of $\kno$. The base points of all terms are given by maps taking values on the respective zero sections.
It is possible to extend the sequence to the left by $[\stdsimp{1}\times X,\Pold]^{(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)}_B$, where $\stdsimp{1}\times X$ is thought of as an object over $B$ via $\stdsimp{1}\times X\xra{\pr}X\ra B$. First, we describe
\[[\stdsimp{1}\times X,\Pold]^{(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)}_B\ra[X,\Ln]^A_B.\]
Let $h\col \stdsimp{1}\times X\ra\Pold$ be zero on the prescribed subspace. We choose a lift $\widetilde h$ of the homotopy $h$ along $\pn\col\Pnew\ra\Pold$ which starts at the zero map and is zero on $A$ for the whole time (this can be carried out in an algorithmic way by Proposition~\ref{prop:homotopy_lifting}). Restricting to the end of the homotopy prescribes a map $\widetilde h_\mathrm{end}\col X\ra\Pnew$ that lies over the zero section and may thus be viewed as a map $X\ra\Ln$.
The homotopy $\widetilde h$ shows that $\widetilde h_\mathrm{end}$, thought of as a map $X\ra\Pnew$, is in fact nullhomotopic. The exactness is also easy -- a nullhomotopy of $\ell\col X\ra\Ln$ in $\Pnew$ projects down to $\Pold$ to a map $\stdsimp{1}\times X\ra\Pold$ that is zero on $(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)$. To summarize, what we have so far is an exact sequence of pointed sets
\[[\stdsimp1\times X,\Pold]^{(\partial\stdsimp{1}\times X)\cup(\stdsimp{1}\times A)}_B\ra[X,\Ln]^A_B\ra[X,\Pnew]^A_B\ra[X,\Pold]^A_B\ra[X,\Kn]^A_B.\]
Our next aim will be to show that the maps in this sequence are homomorphisms of groups with respect to the group structures described in Section~\ref{sec:strictness}. According to Section~\ref{sec:H-maps}, in order to show that $\jnst$, $\pnst$ and $\knst$ are homomorphisms of groups, it is enough to exhibit, in each case, an additivity homotopy $\alpha$ together with a second order homotopy as in \eqref{eq:second_order_homotopy} -- in our case, the second order homotopy will always be degenerate.
The simplest case is $\pn\col\Pnew\to\Pold$, which is a homomorphism, so that $\alpha$ may be taken constant.
For $\jn\col\Ln\ra\Pnew$, the additivity homotopy is simply the left zero homotopy since, according to the definition
of addition on $\Pnew$, we have
\[\onew+z+w\sim\onew+(\onew+z+w)=(\onew+z)+(\onew+w).\]
For $\kn\col\Pold\ra\Kn$, start with the map $M$ from the diagram~\eqref{eq:lift_homotopy_additivity} of Section~\ref{sec:H_space_constr} and restrict it to the part over $\bdry_1\stdsimp2$ to obtain
\[(\bdry_1\stdsimp{2}\times(\Pold)_\mathrm{right})\cup(2\times(\Pold\times_B\Pold))\xlra{M'}\En.\]
Next, extend it to $\bdry_1\stdsimp{2}\times(\Pold\times_B\Pold)$ arbitrarily so that $0\times(\Pold\times_B\Pold)$ gets mapped to $0$. Then the additivity homotopy $\alpha^{-1}\col\kn x+_{\kno}\kn y\sim\kn(x+y)$ is given by $(t,x,y)\mapsto\delta M'(t,x,y)+\kn x+_{\kno}\kn y$.
It remains to treat the connecting homomorphism $\partial$. If $h_0,h_1\col \stdsimp{1}\times X\ra\Pold$ represent two elements of the domain, then the lift of $h_0\hplus h_1$ may be chosen to be the sum $\widetilde h_0\hplus\widetilde h_1$ of the two lifts. Its restriction to the end of the homotopy is the sum of the two restrictions, $(\widetilde h_0\hplus\widetilde h_1)_\mathrm{end}=(\widetilde h_0)_\mathrm{end}\hplus(\widetilde h_1)_\mathrm{end}$.
\end{proof}
\section{Leftover proofs} \label{sec:leftover_proofs}
The purpose of this section is to prove statements that were used in the main part but whose proofs would disturb the flow of the paper.
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{t:MP_tower} (restatement)}
\begin{theorem}
\MPt
\end{theorem}}\label{sec:MP_tower_proof}
The proof will be presented in two parts. First, we describe the construction of the objects and then prove that they indeed constitute a Moore--Postnikov tower.
The construction itself follows ideas by E.~H.~Brown for non-equivariant simplicial sets in \cite{Brown} and by C.~A.~Robinson for topological spaces with free actions of a group in \cite{Robinson}.
We described the construction in the non-equivariant non-fibrewise case $G=1$ and $B=*$ in detail in \cite{polypost}. Here, we will give a brief overview with the emphasis on the necessary changes for $G$ and $B$ non-trivial.
\subsection*{Construction}
The first step of the construction is easy. Put $P_0=B$ and $\varphi_0=\varphi$. To proceed by induction, suppose that we have constructed $\Pold$ and a map $\varphi_{\thedim-1}\col Y\to \Pold$ with properties~\ref{MP1} and~\ref{MP2} from the definition of the Moore--Postnikov tower. Moreover, assume that $\Pold$ is equipped
with effective homology.
Viewing $\cone\varphi_{(\thedim-1)*}$ as a perturbation of $C_*\Pold\oplus C_*Y$, we obtain from strong equivalences $C_*\Pold\LRa C_*^\ef\Pold$ and $C_*Y\LRa C_*^\ef Y$ a strong equivalence $\cone\varphi_{(\thedim-1)*}\LRa C_*^\ef$ with $C_*^\ef$ effective (for details, see \cite[Proposition~3.8]{polypost}). Let us consider the composition
\[C_{\thedim+1}^\ef\ra Z_{\thedim+1}(C_*^\ef)\ra H_{\thedim+1}(C_*^\ef)\defeq\pin,\]
where the first map is an (equivariant) retraction of $Z_{\thedim+1}(C_*^\ef)\subseteq C_{\thedim+1}^\ef$, computed by the algorithm of Proposition~\ref{prop:projectivity}. The second map is simply the projection onto the homology group. The homology group itself is computed from $C_*^\ef$ -- by forgetting the action of $G$, it is a chain complex of finitely generated abelian groups and Smith normal form is available. The $G$-action on $\pin$ is easily computed from the $G$-action on $C_*^\ef$. Composing with the chain map $\cone\varphi_{(\thedim-1)*}\ra C_*^\ef$ coming from the strong equivalence, we obtain
\[\kappa+\lambda\col C_{\thedim+1}\Pold\oplus C_\thedim Y=\big(\cone\varphi_{(\thedim-1)*}\big)_{\thedim+1}\ra C_{\thedim+1}^\ef\ra\pin\]
whose components are denoted $\kappa$ and $\lambda$. They correspond, respectively, to maps
\[\knp\col \Pold\ra K(\pin,\thedim+1),\ l_\thedim\col Y\ra E(\pin,\thedim)\]
that fit into a square
\begin{equation}\label{eq:postnikov_square}
\xymatrixc{
Y \ar[r]^-{l_\thedim} \ar[d]_-{\varphi_{\thedim-1}} & E(\pin,\thedim) \ar[d]^-\delta \\
\Pold \ar[r]_-{\knp} & K(\pin,\thedim+1)
}\end{equation}
which commutes by the argument of \cite[Section~4.3]{polypost}.
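The computation of the homology group $\pin$ via Smith normal form, used in the construction above, can be sketched as follows. This is a minimal non-equivariant Python sketch with ad hoc names (`smith_diagonal`, `homology`); it illustrates only the underlying linear algebra over $\mathbb{Z}$ and is not the representation used by the actual algorithm.

```python
def smith_diagonal(A):
    """Diagonal entries d1 | d2 | ... of the Smith normal form of an integer
    matrix (list of rows), computed by elementary row/column operations."""
    A = [row[:] for row in A]
    m, n = len(A), (len(A[0]) if A else 0)
    diag, t = [], 0
    while t < min(m, n):
        # pick a nonzero entry of minimal absolute value in the remaining block
        piv = min(((abs(A[i][j]), i, j) for i in range(t, m)
                   for j in range(t, n) if A[i][j]), default=None)
        if piv is None:
            break
        _, i, j = piv
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        # clear the pivot's column and row; any leftover remainders are strictly
        # smaller than the pivot, so repeating the outer loop terminates
        dirty = False
        for i in range(t + 1, m):
            q = A[i][t] // A[t][t]
            for k in range(t, n):
                A[i][k] -= q * A[t][k]
            dirty = dirty or A[i][t] != 0
        for j in range(t + 1, n):
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            dirty = dirty or A[t][j] != 0
        if dirty:
            continue
        d = A[t][t]
        # enforce the divisibility d_t | d_{t+1} by mixing in an offending row
        bad = next((i for i in range(t + 1, m)
                    for j in range(t + 1, n) if A[i][j] % d), None)
        if bad is not None:
            for k in range(n):
                A[t][k] += A[bad][k]
            continue
        diag.append(abs(d))
        t += 1
    return diag

def homology(dim_n, d_n, d_np1):
    """H_n of ... -> C_{n+1} --d_{n+1}--> C_n --d_n--> C_{n-1} -> ... as a pair
    (free rank, torsion coefficients); pass None for a zero boundary map."""
    rank_dn = len(smith_diagonal(d_n)) if d_n else 0
    snf = smith_diagonal(d_np1) if d_np1 else []
    return dim_n - rank_dn - len(snf), [d for d in snf if d > 1]
```

For instance, for a triangulated circle (three vertices, three edges) the sketch returns $H_0\cong\mathbb{Z}$ and $H_1\cong\mathbb{Z}$, and for the complex $0\to\mathbb{Z}\xrightarrow{2}\mathbb{Z}\to0$ it returns the torsion group $\mathbb{Z}/2$ in degree~$0$.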
Now we can take $\Pnew=\Pold\times_{K(\pin,\thedim+1)}E(\pin,\thedim)$ to be the pullback as in part~\ref{MP3} of the definition of the tower. By the commutativity of the square \eqref{eq:postnikov_square}, we obtain a map $\varphin=(\varphi_{\thedim-1},l_\thedim)\col Y\ra \Pnew$ as in
\[\xymatrix@=10pt{
Y \ar@/^10pt/[drrr]^{l_\thedim} \ar@/_10pt/[dddr]_{\varphi_{\thedim-1}} \ar@{-->}[dr]^-{\varphin} \\
& \Pnew \ar[rr] \ar[dd]^{\pn} & & E(\pin,\thedim) \ar[dd]^{\delta} \\
\\
& \Pold \ar[rr]_-{\knp} & & K(\pin,\thedim+1)
}\]
which we will prove to satisfy the remaining conditions for the $\thedim$-th stage of a Moore--Postnikov tower.
First, however, we will equip $\Pnew$ with effective homology. To this end, observe that $\Pnew$ is isomorphic to the twisted cartesian product $\Pold\times_{\tau}K(\pin,\thedim)$, see \cite[Proposition~18.7]{May:SimplicialObjects-1992}. Since $\Pold$ is equipped with effective homology by induction, and $K(\pin,\thedim)$ admits effective homology non-equivariantly by \cite[Theorem~3.16]{polypost}, it follows from \cite[Corollary~12]{Filakovsky} (or \cite[Proposition~3.10]{polypost}) that $\Pnew$ can also be equipped with effective homology non-equivariantly. Since the $G$-action on $\Pnew$ is clearly free (any fixed point would get mapped by $\psin$ to a fixed point in $B$), Theorem~\ref{t:vokrinek} provides (equivariant) effective homology for $\Pnew$ (distinguished simplices of $\Pnew$ are pairs with the component in $\Pold$ distinguished).
\subsection*{Correctness}
From the exact sequence of homotopy groups associated with the fibration sequence
\[\Pnew\ra \Pold\ra K(\pin,\thedim+1)\]
and the properties~\ref{MP1} and~\ref{MP2} for $\Pold$, we easily get that $\Pnew$ satisfies the condition~\ref{MP2} and that $\varphinst\col \pi_\thedimm(Y)\to\pi_\thedimm(\Pnew)$ is an isomorphism for $0\le\thedimm\le\thedim-1$.
The rest of the proof is derived, as in \cite[Section~4.3]{polypost}, from the morphism of long exact sequences of homotopy groups
\[\xymatrix@R=15pt@C=10pt{
& \pi_{\thedim+1}(Y)\ar[r] \ar[d]^{\varphinst}& \pi_{\thedim+1}(\cyl\varphi_{\thedim-1})\ar[r] \ar[d]^{\cong} & \pi_{\thedim+1}(\cyl\varphi_{\thedim-1},Y) \ar[d]^{\cong} \ar[r] & \pin(Y) \ar[d]^{\varphinst} \ar[r] & \pin(\cyl\varphi_{\thedim-1}) \ar[d]^{\cong} \ar[r] & 0 \\
0 \ar[r] & \pi_{\thedim+1}(\Pnew)\ar[r] & \pi_{\thedim+1}(\cyl \pn) \ar[r] & \pi_{\thedim+1}(\cyl \pn,\Pnew) \ar[r] & \pin(\Pnew) \ar[r] & \pin(\cyl \pn)
}\]
associated with pairs $(\cyl\varphi_{\thedim-1},Y)$ and $(\cyl \pn,\Pnew)$. The arrow in the middle is an isomorphism by \cite[Lemma~4.5]{polypost}, while the remaining two isomorphisms are consequences of the fact that both cylinders deform onto the same base $\Pold$. The zero on the left follows from the fact that the fibre of $\pn$ is $K(\pin,\thedim)$ and the zero on the right comes from the condition~\ref{MP1} for $\Pold$. By the five lemma, $\varphinst$ is an isomorphism on $\pin$ and an epimorphism on $\pi_{\thedim+1}$ which completes the proof of condition~\ref{MP1}.\qed
\vskip\topsep
For the next proof, we will use the following observation.
\begin{lemma}\label{l:fibrant_replacement}
Every map $\psi\col Q\to P$ can be factored as $\psi\col Q\cof[j]Q'\fib[\psi']P$, where $j$ is a weak homotopy equivalence and $\psi'$ is a Kan fibration.
\end{lemma}
By a \emph{weak homotopy equivalence}, we will understand a map whose geometric realization is a $G$-homotopy equivalence.
\begin{proof}
This is the small object argument (see e.g.\ \cite[Section~10.5]{Hirschhorn} or~\cite[Section~7.12]{DwyerSpalinski}) applied to the collection $\mcJ$ of ``$G$-free horn inclusions'' $G\times\horn\thedim\thedimm\to G\times\stdsimp\thedim$, $\thedim\geq1$, $0\leq\thedimm\leq\thedim$. Using the terminology of \cite{Hirschhorn}, the $\mcJ$-injectives are exactly those maps that have non-equivariantly the right lifting property with respect to $\horn\thedim\thedimm\to \stdsimp\thedim$ (this follows from the equivalence \eqref{eq:horn-injectivity} from the next proof), i.e.\ Kan fibrations. The geometric realization of every relative $\mcJ$-cell complex is a $G$-homotopy equivalence since the geometric realization of $G\times\stdsimp\thedim$ clearly deforms onto that of $G\times\horn\thedim\thedimm$.
\end{proof}
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{t:n_equivalence} (restatement)}
\begin{theorem}
\nequiv
\end{theorem}}\label{sec:n_equivalence_proof}
\begin{proof}
By construction, $\varphin\col Y\to \Pnew$ is an $(\thedim+1)$-equivalence. By the proof of Lemma~\ref{l:fibrant_replacement}, we may assume $Y\to Y'$ to be a relative $\mcJ$-cell complex. We will show that $\varphin$ factors through $Y'$. Given that this is true for $\Pold$, we form the square
\[\xymatrix@C=40pt{
Y \ar[r]^-{\varphin} \ar@{ >->}[d]_-\sim & \Pnew \ar@{->>}[d]^\pn \\
Y' \ar[r]_-{\varphi_{\thedim-1}'} \ar@{-->}[ur]^-{\varphinp} & \Pold
}\]
in which a diagonal exists by the fact that $Y\to Y'$ is a relative $\mcJ$-cell complex and such maps have the left lifting property with respect to Kan fibrations.
The map $\varphinp$ is also an $(\thedim+1)$-equivalence. We will prove more generally that
\[\psi_*\col[X,Q]^A_B\to[X,P]^A_B\]
is an isomorphism for any $(\thedim+1)$-equivalence $\psi\col Q\to P$.
The basic idea is that $X$ is built from $A$ by consecutively attaching ``cells with a free action of $G$'', namely $X=\cup X_\thedimm$ and in each step $X_\thedimm=X_{\thedimm-1}\cup_{G\times\partial\stdsimp{\theotherdim_\thedimm}}G\times\stdsimp{\theotherdim_\thedimm}$ with $\theotherdim_\thedimm\leq\thedim$.\footnote{Thus, the action need only be free away from $A$, and the same generalization applies to the dimension.}
First, we prove that $\psi_*$ is surjective under the assumption that $\psi$ is an $\thedim$-equivalence. For convenience, we replace $\psi$ by a $G$-homotopy equivalent Kan fibration using Lemma~\ref{l:fibrant_replacement}. Suppose that $\psi_*$ is surjective with $X$ replaced by $X_{\thedimm-1}$ and we prove the same for $X_\thedimm$. This is clearly implied by the solvability of the following lifting-extension problem
\[\xymatrix@C=40pt{
X_{\thedimm-1} \ar[r] \ar@{ >->}[d] & Q \ar@{->>}[d]^-\psi \\
X_\thedimm \ar[r]_-{\ell} \ar@{-->}[ru] & P
}\]
(to find a preimage of $[\ell]$ at the bottom, we find the top map by the inductive hypothesis; if the lift exists, it gives a preimage of $[\ell]$ as required). As $X_\thedimm$ is obtained from $X_{\thedimm-1}$ by attaching a single cell, the problem is equivalent to
\begin{equation}\label{eq:horn-injectivity}
\xy *!C\xybox{\xymatrix@C=30pt{
G\times\partial\stdsimp{\theotherdim_\thedimm} \ar[r] \ar@{ >->}[d] & Q \ar@{->>}[d]^-\psi \ar@{}[drrr]|-{\parbox{\widthof{that is further}}{that is further equivalent to}} & & & \partial\stdsimp{\theotherdim_\thedimm} \ar[r] \ar@{ >->}[d] & Q \ar@{->>}[d]^-\psi \\
G\times\stdsimp{\theotherdim_\thedimm} \ar[r] \ar@{-->}[ru] & P & & & \stdsimp{\theotherdim_\thedimm} \ar[r] \ar@{-->}[ru] & P
}}\endxy
\end{equation}
where the problem on the right is obtained from the left by restricting to $e\times\stdsimp{\theotherdim_\thedimm}$ and is non-equivariant. Its solution is guaranteed by $\psi$ being an $\theotherdim_\thedimm$-equivalence.
To prove the injectivity of $\psi_*$, we put back the assumption of $\psi$ being an $(\thedim+1)$-equivalence. We study the preimages of $[\ell]\in[X,P]^A_B$ under $\psi_*$; these clearly form $[X,Q]^A_P$. By the surjectivity part, this set is non-empty. By pulling back along $\ell$, we thus obtain a fibration $\ell^*Q\to X$ with a section $X\to\ell^*Q$ which is an $\thedim$-equivalence.\footnote{The fibres of $\psi$ are $\thedim$-connected and isomorphic to those of $\ell^*Q\to X$. From the long exact sequence of homotopy groups of this fibration, it follows that $\ell^*Q\to X$ is also an $(\thedim+1)$-equivalence and its section must then be an $\thedim$-equivalence.} Thus,
\[[X,Q]^A_P\cong[X,\ell^*Q]^A_X\xlla\cong[X,X]^A_X=*\]
by the surjectivity part (any surjection from a one-element set is a bijection).
\end{proof}
Next, we need the following lemma.
\begin{lemma}\label{l:whe_weak_strict}
The natural maps $P\hvee_BP\ra P\vee_BP$ and $P\htimes_BP\ra P\times_BP$ are weak homotopy equivalences.
The inclusion $(P\htimes_BP)\cup\big(\bdry_2\stdsimp{2}\times(P\vee_BP)\big)\cof\stdsimp2\times(P\times_BP)$ is a weak homotopy equivalence.
\end{lemma}
\begin{proof}
The space $P\hvee_BP$ is naturally a subspace of $\bdry_2\stdsimp2\times(P\vee_BP)$ and it is enough to show that it is in fact a deformation retract. A continuous deformation is obtained from a deformation of $\bdry_2\stdsimp2\times P_\mathrm{right}$ onto $(0\times P_\mathrm{right})\cup(\bdry_2\stdsimp2\times B)$ and a symmetric deformation of $\bdry_2\stdsimp2\times P_\mathrm{left}$ onto $(1\times P_\mathrm{left})\cup(\bdry_2\stdsimp2\times B)$.
To prove the remaining claims, consider the deformation of $\stdsimp2\times(P\times_BP)$ onto $2\times(P\times_BP)$, given by deforming $\stdsimp2$ linearly onto $2$ and by a constant homotopy on the second component $P\times_BP$. By an easy inspection, it restricts to a deformation of $P\htimes_BP$ onto $2\times(P\times_BP)$, giving the second claim.
Since both $\stdsimp2\times(P\times_BP)$, $P\htimes_BP$ deform onto the same $2\times(P\times_BP)$, it is enough for the last claim to find a deformation of
\[(P\htimes_BP)\cup\big(\bdry_2\stdsimp{2}\times(P\vee_BP)\big)\]
onto $P\htimes_BP$. This is provided by the deformation of $\bdry_2\stdsimp2\times(P\vee_BP)$ onto $P\hvee_BP$ (the intersection of the two spaces in the union above) from the first paragraph.
\end{proof}
Now we are ready to prove the following proposition.
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{prop:weak_H_space_connect} (restatement)}
\begin{proposition}
\wHsc
\end{proposition}}
\begin{proof}
By the first part of the previous lemma, we may replace the pair in the statement by $(\Pnew\times_B\Pnew,\Pnew\vee_B\Pnew)$.
First, we recall that $\Pnew\ra B$ is a minimal fibration (each $\delta\col E(\pi_\thedimm,\thedimm)\ra K(\pi_\thedimm,\thedimm+1)$ is one and the class of minimal fibrations is closed under pullbacks and compositions, see \cite{May:SimplicialObjects-1992}). It is well known that over each simplex $\sigma\col \stdsimp{\thedimm}\ra B$ any minimal fibration is trivial and it is easy to modify this to an isomorphism $\sigma^*\Pnew\cong\stdsimp{\thedimm}\times F$ of fibrations with sections\footnote{Start with an inclusion $(\stdsimp{\thedimm}\times*)\cup(0\times F)\ra\sigma^*\Pnew$ given by the zero section on the first summand and by the inclusion on the second. Extend this to a fibrewise map $\stdsimp{\thedimm}\times F\ra\sigma^*\Pnew$ which is a fibrewise homotopy equivalence, hence an isomorphism by the minimality of $\Pnew\ra B$.}. Here, $F$ denotes the fibre of $\Pnew\ra B$ and is $\theconn$-connected by the assumptions. Consequently, $\Pnew\vee_B\Pnew$ is a fibre bundle with fibre $F\vee F$. Thus, we have a map of fibre sequences
\[\xymatrix{
F\vee F \ar[r] \ar[d] & \Pnew\vee_B\Pnew \ar[r] \ar[d] & B \ar@{=}[d] \\
F\times F \ar[r] & \Pnew\times_B\Pnew \ar[r] & B
}\]
The left map is $(2\theconn+1)$-connected. By the five lemma applied to the long exact sequences of homotopy groups, the middle map $\Pnew\vee_B\Pnew\ra \Pnew\times_B\Pnew$ is also $(2\theconn+1)$-connected.
To show that the equivariant cohomology group vanishes, we will make use of a contraction of $C_*(\Pnew\htimes_B\Pnew)$ onto $C_*(\Pnew\hvee_B\Pnew)$ in dimensions $\leq2\theconn+1$; its existence follows from the proof of Proposition~\ref{prop:projectivity}. By the additivity of $\operatorname{Hom}_{\ZG}(-,\pi)$, there is an induced contraction of $C^*_G(\Pnew\htimes_B\Pnew;\pi)$ onto $C^*_G(\Pnew\hvee_B\Pnew;\pi)$ and thus the relative cochain complex is acyclic.
\end{proof}
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{t:existence_of_H_space_structures} (restatement)}
\begin{theorem}
\eoHss
\end{theorem}}\label{sec:existence_of_H_space_structures_proof}
\begin{proof}
By the previous proposition, the left vertical map in
\[\xymatrix@C=40pt{
\Pnew\vee_B\Pnew \ar[r]^-\nabla \ar@{ >->}[d]_\vartheta & \Pnew \ar@{->>}[d]^-{\psin} \\
\Pnew\times_B\Pnew \ar[r] \ar@{-->}[ru]_-\add & B
}\]
is $(2\theconn+1)$-connected. Since the homotopy groups of the fibre of $\psin$ are concentrated in dimensions $\theconn\leq\thedimm\leq\thedim$, the relevant obstructions (they can be extracted from the proof of Proposition~\ref{prop:lift_ext_one_stage}) for the existence of the diagonal lie in
\[H^{\thedimm+1}_G(\Pnew\times_B\Pnew,\Pnew\vee_B\Pnew)=0\]
(since $\thedimm+1\leq\thedim+1\leq 2\theconn+1$). The diagonal is unique up to homotopy by the very same computation. Thus, in particular, replacing $\add$ by the opposite addition $\add^\op\col(x,y)\mapsto y+x$ yields a homotopic map, proving homotopy commutativity. Similarly, homotopy associativity follows from the uniqueness of a diagonal in
\[\xymatrix@C=40pt{
(B\times_B\Pnew\times_B\Pnew)\cup(\Pnew\times_B\Pnew\times_BB) \ar[r] \ar@{ >->}[d] & \Pnew \ar@{->>}[d]^-{\psin} \\
\Pnew\times_B\Pnew\times_B\Pnew \ar[r] \ar@{-->}[ru] & B
}\]
(the pair on the left is again $(2\theconn+1)$-connected) with two diagonals specified by mapping $(x,y,z)$ to $(x+y)+z$ and $x+(y+z)$.
The existence of a homotopy inverse is a fibrewise and equivariant version of \cite[Theorem~3.4]{Stasheff}; the proof applies without any complications when the action of $G$ is free. We will not provide more details since we construct the inverse directly in Section~\ref{sec:H_space_constr}.
\end{proof}
For the next proof, we will use a general lemma about filtered chain complexes. Let $C_*$ be a chain complex equipped with a filtration
\[0 = F_{-1}C_* \subseteq F_0C_* \subseteq F_1C_* \subseteq \cdots\]
such that $C_* = \bigcup_i F_iC_* $. As usual, we assume that each $F_iC_*$ is a cellular subcomplex, i.e.\ generated by a subset of the given basis of $C_*$. We assume that this filtration is \emph{locally finite}, i.e.\ for each $n$, we have $C_n = F_iC_n$ for some $i \geq 0$. For the relative version, let $D_*$ be a (cellular) subcomplex of $C_*$ and define $F_iD_* = D_* \cap F_iC_*$.
\begin{lemma}\label{l:filt}
Under the above assumptions, if each filtration quotient $G_iC_* = F_iC_*/F_{i-1}C_*$ has effective homology then so does $C_*$. More generally, if each $(G_iC_*,G_iD_*)$ has effective homology then so does $(C_*,D_*)$.
\end{lemma}
\begin{proof}
We define $G_* = \bigoplus_{i \geq 0} G_iC_*$, the associated graded chain complex. Then $C_*$ is obtained from $G_*$ via a perturbation that decreases the filtration degree $i$. Taking a direct sum of the given strong equivalences $G_iC_* \La \widehat G_iC_* \Ra G_i^\ef C_*$, we obtain a strong equivalence $G_* \La \widehat G_* \Ra G_*^\ef$ with all the involved chain complexes equipped with a ``filtration'' degree. Since the perturbation on $G_*$ decreases this degree, while the homotopy operator preserves it, we may apply the perturbation lemmas, Propositions~\ref{p:epl} and~\ref{p:bpl}, to obtain a strong equivalence $C_* \La \widehat C_* \Ra C_*^\ef$.
\end{proof}
\addtocounter{equation}{-1}
{\renewcommand{\theequation}{\ref{p:effective_homotopy_colimits} (restatement)}
\begin{proposition}
\ehc
\end{proposition}}\label{sec:effective_homotopy_colimits}
We continue the notation of Section~\ref{s:hocolim}.
\begin{proof}
We apply Lemma~\ref{l:filt} to the natural filtration $F_iC_*|\mcS| = C_*\sk_i|\mcS|$, where $\sk_i|\mcS|$ is the preimage of the $i$-skeleton $\sk_i\stdsimp 2$ under the natural projection $|\mcS| \to \stdsimp 2$. The Eilenberg--Zilber reduction applies to the quotient
\[C_*\sk_2|\mcS|/C_*\sk_1|\mcS| \cong C_*(\stdsimp 2 \times Z, \partial\stdsimp 2 \times Z) \Ra C_*(\stdsimp 2,\partial\stdsimp 2) \otimes C_*Z \cong s^2C_*Z\]
where $s$ denotes the suspension. The effective homology of $Z$ provides a further strong equivalence with $s^2C_*^\ef Z$. Similarly, $C_*\sk_1|\mcS|/C_*\sk_0|\mcS|$ is isomorphic to
\[C_*((\bdry_2\stdsimp 2,\partial\bdry_2\stdsimp 2) \times Z) \oplus C_*((\bdry_1\stdsimp 2,\partial\bdry_1\stdsimp 2) \times Z_0) \oplus C_*((\bdry_0\stdsimp 2,\partial\bdry_0\stdsimp 2) \times Z_1)\]
and thus strongly equivalent to $sC_*^\ef Z \oplus sC_*^\ef Z_0 \oplus sC_*^\ef Z_1$. Finally, $C_*\sk_0|\mcS|$ is strongly equivalent to $C_*^\ef Z_0 \oplus C_*^\ef Z_1 \oplus C_*^\ef Z_2$.
The subcomplexes corresponding to $\bdry_2|\mcS|$ are formed by some of the direct summands above and are thus preserved by all the involved strong equivalences. This finishes the verification of the assumptions of Lemma~\ref{l:filt}.
\end{proof}
\begin{lemma}\label{lem:calculations}
The two components $\qn\mathop{\add}$ and $\pn\mathop{\add}$ defined in Section~\ref{sec:H_space_constr} determine a map $\mathop{\add}\col\Pnew\htimes_B\Pnew\ra\Pnew$ and this map is a weak \Hopf space structure.
The two components $\qn\mathop{\inv}$ and $\pn\mathop{\inv}$ defined in Section~\ref{sec:H_space_constr} determine a map $\mathop{\inv}\col\Pnew\ra\Pnew$ and this map is a right inverse for $\add$.
\end{lemma}
\begin{proof}
The compatibility for $\add$:
\begin{align*}
\delta\qn\mathop{\add} & =\delta\big(\mathop{\add_{\qn\onew}}(\qn\htimes\qn)+M(\pn\htimes\pn)\big)=\mathop{\add_{\kno}}(\delta\htimes\delta)(\qn\htimes\qn)+m(\pn\htimes\pn) \\
& =\mathop{\add_{\kno}}(\kn\htimes \kn)(\pn\htimes\pn)+\big(\kn\mathop{\add}-\mathop{\add_{\kno}}(\kn\htimes \kn)\big)(\pn\htimes\pn) \\
& =\kn\mathop{\add}(\pn\htimes\pn)=\kn\pn\mathop{\add}
\end{align*}
The weak \Hopf space condition $\mathop{\add}\vartheta=\hnabla$ on $\Pnew$ verified for its two components:
\begin{align*}
\pn\mathop{\add}\vartheta & =\mathop{\add}(\pn\htimes\pn)\vartheta=\mathop{\add}\vartheta(\pn\hvee\pn)=\hnabla(\pn\hvee\pn)=\pn\hnabla \\
\qn\mathop{\add}\vartheta & =\big(\mathop{\add_{\qn\onew}}(\qn\htimes\qn)+M(\pn\htimes\pn)\big)\vartheta=\mathop{\add_{\qn\onew}}\vartheta(\qn\hvee\qn)+\underbrace{M\vartheta}_0(\pn\hvee\pn) \\
& =\hnabla(\qn\hvee\qn)=\qn\hnabla
\end{align*}
The compatibility for $\inv$:
\begin{align*}
\delta(-c+2\qn\onew&-M(2,x,-x))=-\delta c+2\delta\qn\onew-m(2,x,-x) \\
& =-\kn x+2\kno-(\kn(\underbrace{x+(-x)}_{\oold})-\kn x+\kno-\kn(-x))=\kn(-x)
\end{align*}
The condition $\mathop{\add}(\id\htimes\mathop{\inv})\hDelta=\onew$ of being a right inverse:
\begin{align*}
(x,c)+(-(x,c)) & =(x,c)+(-x,-c+2\qn\onew-M(2,x,-x)) \\
& =(x+(-x),c-\qn\onew+(-c+2\qn\onew-M(2,x,-x))+M(2,x,-x)) \\
& =(\oold,\qn\onew)=\onew\qedhere
\end{align*}
\end{proof}
\ifpoly
\section{Polynomiality}\label{sec:polynomiality}
\subsection*{Basic notions}
The algorithm of Theorem~\ref{thm:main_theorem} was described for a single generalized lifting-extension problem. To prove that its running time is polynomial, we will have to deal with the class of all generalized lifting-extension problems and also certain related classes, e.g.\ the class of Moore--Postnikov stages. We will base our analysis on the notion of a locally polynomial-time simplicial set, described in \cite{polypost}. Here, we will call it a polynomial-time family of simplicial sets.
Since we assume $\theconn$ to be fixed and our algorithms only access information up to dimension $2\theconn+2$, we make the following standing assumption.
\begin{convention}
In this section, when speaking about the running time of algorithms, it is understood that inputs are limited to dimension at most $2\theconn+2$.
\end{convention}
\begin{definition}
A \emph{family of (locally effective) simplicial sets} is a collection $(X(p))_{p\in\sfP}$ of simplicial sets such that the elements of the \emph{parameter set} $\sfP$ and the simplices of each $X(p)$ have representations in a computer, and such that algorithms are provided, all taking as input a pair $(p,x)$ with $p\in\sfP$, that perform the following tasks:
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=$\bullet$,itemsep=0pt,parsep=0pt,topsep=5pt]
\item
decide whether $x$ is a simplex of the simplicial set $X(p)$,
\item
compute all faces of $x$,
\item
compute all degeneracies of $x$,
\item
compute the action of each $a\in G$ on $x$,
\item
compute the expression of $x$ as $x=ay$ with $a\in G$ and $y$ distinguished.
\end{enumerate}
We say that this family is \emph{polynomial-time} if all these algorithms have their running time bounded by $g(\size(p)+\size(x))$, where $g$ is some polynomial function and $\size(p)$, $\size(x)$ are the encoding sizes of $p$ and $x$ (recall the assumption $\dim x\leq 2\theconn+2$).
A \emph{family of effective simplicial sets} possesses, in addition, an algorithm that, given $p\in\sfP$, outputs the list of all non-degenerate distinguished simplices of $X(p)$.
\end{definition}
\begin{example}
In Section~\ref{sec:equi_eff_hlgy_alg}, we described a way of encoding finite simplicial sets. Such encodings comprise the parameter set $\SSet$; it then supports an obvious polynomial-time family of effective simplicial sets.
\end{example}
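To illustrate the shape of the definition, here is a minimal Python sketch of a family of this kind: the family of standard simplices $\Delta^n$, parametrized by $n$, with simplices encoded as nondecreasing vertex tuples. The class and method names are hypothetical, the group $G$ is taken to be trivial, and no claim is made about the actual implementation.

```python
from itertools import combinations

class StandardSimplices:
    """Toy family X(n) = Delta^n, parametrized by n; a k-simplex of Delta^n
    is encoded as a nondecreasing (k+1)-tuple of vertices from {0, ..., n}.
    With G trivial, the two group-action operations of the definition are
    trivial and omitted."""

    def is_simplex(self, n, x):
        # membership test for the simplicial set X(n)
        return (len(x) > 0 and all(0 <= v <= n for v in x)
                and all(x[k] <= x[k + 1] for k in range(len(x) - 1)))

    def faces(self, n, x):
        # d_k deletes the k-th vertex (defined in positive dimensions)
        return [x[:k] + x[k + 1:] for k in range(len(x))]

    def degeneracies(self, n, x):
        # s_k repeats the k-th vertex
        return [x[:k + 1] + x[k:] for k in range(len(x))]

    def nondegenerate_distinguished(self, n):
        # effective version: list all nondegenerate simplices, i.e. the
        # strictly increasing vertex tuples
        return [c for k in range(n + 1)
                for c in combinations(range(n + 1), k + 1)]
```

In this toy example the locally effective operations run in time polynomial in the encoding sizes of $n$ and $x$, matching the definition of a polynomial-time family; the last method belongs to the effective version, on which no running-time bound is imposed.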
The notion of a family can be similarly defined for pairs of (effective) simplicial sets, (effective) chain complexes, strong equivalences, simplicial sets with effective homology etc. Each such class $\mcC$ is described by a collection of algorithms that are required to specify its object. Families of objects of $\mcC$ are then obvious parametrized versions of such collections of algorithms; we will denote them as $\sfP\family\mcC$.
Since for each class $\mcC$ the number of the required algorithms is finite, we may consider the parameter set $\Alg(\mcC)$, whose elements are such collections of algorithms (non-parametrized, i.e.\ describing a single object). Further, we denote by $\Alg_g(\mcC)$ the collections of algorithms that run in time bounded by the polynomial $g$. When membership algorithms are present, as in the case of simplicial sets, $\Alg(\mcC)$ supports an obvious family $\Alg(\mcC)\family\mcC$ that assigns to each collection of algorithms the object they represent (the parametrized version of each algorithm simply runs the appropriate algorithm contained in the parameter). It restricts to a polynomial-time family parametrized by $\Alg_g(\mcC)$. We will use this observation as a motivation for the following.
In constructing new families of objects, it is important that these are polynomial-time whenever the old ones are. We will encapsulate this situation in the notion of a polynomial-time construction. A \emph{construction} $F\col\mcC\to\mcD$ is simply a mapping; we use a different name to emphasize that it operates on the level of objects (i.e.\ mathematical structures of some sort). In general, we will not require it to be single-valued, having in mind an example of associating to an equation its solution -- no solution needs exist and if it does, there might be many.
Assuming the presence of membership algorithms in the class $\mcC$, we say that the construction $F$ is \emph{computable}, if the collection $\Alg(\mcC)\family\mcC\to\mcD$ is made into a family. It is said to be \emph{polynomial-time} if, in addition, this family restricts to a polynomial-time family $\Alg_g(\mcC)\family\mcD$ for each polynomial $g$. Thus, computability is an extra structure on a construction, given by a collection of algorithms required to set up a family in $\mcD$; these algorithms may use as subroutines the algorithms that specify objects of $\mcC$. We remark that this description does not depend on the presence of membership algorithms and we may thus define polynomial-time constructions between general classes.
\begin{example}
The use of algorithms from the class $\mcC$ to set up those for $\mcD$ is a typical situation -- for a pair of simplicial sets $X$, $Y$, we may form their product $X\times Y$; then the $\thedimm$-th face $d_\thedimm(x,y)=(d_\thedimm x,d_\thedimm y)$ may be computed from the respective faces in $X$ and $Y$. Since this computation is clearly polynomial-time, we have a polynomial-time construction
\[\xymatrix{
\{\text{pairs of locally effective simplicial sets}\} \ar[r] & \{\text{locally effective simplicial sets}\}.
}\]
\end{example}
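In code, the product construction of the example above can be sketched as follows. The encoding of simplices as pairs and the face operators \texttt{d\_X}, \texttt{d\_Y} are illustrative placeholders, not a full implementation of locally effective simplicial sets.

```python
def product_face_operator(d_X, d_Y):
    """Face operator of the product X x Y of two locally effective
    simplicial sets, built from the face operators of the factors.

    d_X, d_Y: callables (i, simplex) -> i-th face of the simplex.
    The returned operator is polynomial-time whenever d_X and d_Y
    are, since it makes exactly one call to each.
    """
    def d(i, simplex):
        x, y = simplex
        return (d_X(i, x), d_Y(i, y))
    return d
```

For instance, encoding simplices of a simplicial set built from vertex tuples, with the $i$-th face deleting the $i$-th vertex, the product face operator acts coordinate-wise on pairs.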
The usefulness of polynomial-time constructions $F\col\mcC\to\mcD$ lies in the following simple observation: when a polynomial-time family $C\colon\sfP\family\mcC$ is given, the composition with $F$ yields a collection $\sfP\family[C]\mcC\xra{F}\mcD$ which gets a structure of a polynomial-time family from that of $F$ and $C$, i.e.\ $F$ ``preserves'' polynomial-time families. The dual situation is called a reparametrization: when $\Phi\col\sfQ\to\sfP$ is a polynomial-time function and $\sfP$ supports a polynomial-time family $C\col\sfP\family\mcC$, then $\sfQ\xra{\Phi}\sfP\family[C]\mcC$ is another polynomial-time family.
At the same time, the polynomiality of a construction does not depend on the actual encodings but only on the classes involved. The main result of this section is the following.
\begin{theorem}\label{thm:main_poly}
For each fixed $\theconn\geq 1$, the algorithm of Theorem~\ref{thm:main_theorem} describes a polynomial-time construction
\setlength{\hlp}{\widthof{${}\cup\{\emptyset\},$}*\real{0.5}}
\[\xymatrix@R=15pt{
\left\{\parbox{\widthof{\upshape $\theconn$-stable generalized lifting-extension problems}}{\upshape $\theconn$-stable generalized lifting-extension problems composed of effective simplicial sets}\right\} \ar[r] & *+!!<-\the\hlp,\the\fontdimen22\textfont2>{\left\{\parbox{\widthof{\upshape abelian groups}}{\upshape fully effective abelian groups}\right\}\cup\{\emptyset\},} \\
{} \POS*!<0pt,-13pt>\xybox{\xymatrix@=15pt{
\scriptstyle A \ar[r] \ar@{ >->}[d] & \scriptstyle Y \ar[d] \\
\scriptstyle X \ar[r] & \scriptstyle B
}} \ar@{|->}[r] & [X,Y]^A_B,
}\]
where the $\theconn$-stability of a generalized lifting-extension problem means that $\dim X\leq 2\theconn$, both $B$ and $Y$ are $1$-connected and the homotopy fibre of $\varphi\col Y\to B$ is $\theconn$-connected.
\end{theorem}
From the definition, we are required to set up a polynomial-time family indexed by $\Alg(\mcC)$ where $\mcC$ is the class of generalized lifting-extension problems in question. We will make use of restricted parameter sets $\Map$ and $\Pair$ that describe $\varphi$ and $(X,A)$ respectively.
The whole computation is summarized in the following chains of computable functions between parameter sets that describe various partial stages of the computation; we will explain all the involved parameter sets later. The functions
\[\xymatrix@R=2pt{
\rightbox{\Map={}}{\EMPS0} \ar@{.>}[rr] \POS[];[rr]**\dir{}?<>(.33)**\dir{-}*\dir{>},[];[rr];**\dir{}?<>(.33)**\dir{-} & & \EMPS\thedim & \Map\times_\SSet\MPS\thedim \ar[r] & \MPS\thedim,
}\]
for $\thedim=\dim X$, describe the computation of the Moore--Postnikov system over $B$ and its pullback to $X$,
\[\xymatrix@R=2pt{
\Pair\times_\SSet\HMPS[\theotherdim-1]\thedim \ar[r] & \PMPS[\theotherdim]\thedim\cup\{\bot\} & \PMPS[\theotherdim]\thedim \ar[r] & \HMPS[\theotherdim]\thedim,
}\]
for $\theotherdim\leq\thedim$, describe the computation of the weak \Hopf space structure on the stable part of the pullback (when it admits a section at all) and
\[\xymatrix@R=2pt{
\Gamma_{\thedim}\col\Pair\times_\SSet\HMPS[\thedim]\thedim \ar@{~>}[r] & \{\text{fully effective abelian groups}\}
}\]
describes a polynomial-time family, given by the homotopy classes of sections of the final $\thedim$-th stage that are zero on $A$.
\subsection*{Moore--Postnikov systems}
The elements of the parameter set $\EMPS\thedim$ encode ``extended'' Moore--Postnikov systems and are composed of the following data
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=$\bullet$,itemsep=0pt,parsep=0pt,topsep=5pt]
\item
finite $1$-connected simplicial sets $Y$, $B$;
\item
finitely generated abelian groups $\pi_1,\ldots,\pi_\thedim$;
\item
effective Postnikov invariants $\konepef,\ldots,\knpef$ (to be explained below);
\item
a simplicial map $\varphi_\thedim\col Y\to P_\thedim$;
\end{enumerate}
where we set, by induction, $P_0=B$ and $P_\thedimm=P_{\thedimm-1}\times_{K(\pi_\thedimm,\thedimm+1)}E(\pi_\thedimm,\thedimm)$, a pullback taken with respect to the Postnikov invariant $\kip\col P_{\thedimm-1}\to K(\pi_\thedimm,\thedimm+1)$ that corresponds to the equivariant cocycle
\[\xymatrix{
C_{\thedimm+1}P_{\thedimm-1} \ar[r] & C_{\thedimm+1}^\ef P_{\thedimm-1} \ar[r]^-{\kipef} & \pi_\thedimm
}\]
with the first map the canonical map coming from the effective homology of $P_{\thedimm-1}$. Thus, $\kipef$ is required to be an equivariant cocycle as indicated.\footnote{When $Y$ is not finite, $\varphi_\thedim$ has to be replaced by a certain collection of effective cochains on $Y$; details are explained in \cite{polypost}.}
The above simplicial sets provide a number of families
\[\xymatrix{
B,P_1,\ldots,P_\thedim,Y\col\EMPS\thedim \ar@{~>}[r] & \left\{\parbox{\widthof{simplicial sets equipped}}{simplicial sets equipped with effective homology}\right\}
}\]
and also a number of families of simplicial maps $p_\thedimm$, $\kip$, $\varphi_\thedimm$ etc.\ between these. They are polynomial-time essentially by the results of \cite{polypost} -- there is only one significant difference, namely the (equivariant) polynomial-time homology of Moore--Postnikov stages. For that we need the following observation: the functor $B$ of Theorem~\ref{t:vokrinek} is a polynomial-time construction defined on
\[\left\{\parbox{\widthof{strong equivalences $C\LRa C^\ef$ with $C$ locally}}{strong equivalences $C\LRa C^\ef$ with $C$ locally effective over $\ZG$, $C^\ef$ effective over $\bbZ$}\right\}\]
and taking values in a similar collection with everything $\ZG$-linear. Its polynomiality is guaranteed by the explicit nature of this functor, see~\cite{Vokrinek}.
Polynomiality of functions $\EMPS{\thedimm-1}\to\EMPS\thedimm$ is proved in the same way as in \cite{polypost} with the exception of the use of Proposition~\ref{prop:projectivity} that describes a polynomial-time construction
\[\xymatrix@R=2pt{
\left\{\parbox{\widthof{$n$-connected effective}}{$n$-connected effective chain complexes}\right\} \ar[r] & \left\{\parbox{\widthof{homomorphisms of effective}}{homomorphisms of effective abelian groups}\right\}, \\
C \ar@{|->}[r] & (C_{n+1}\to Z(C_{n+1})).
}\]
Parameters for a Moore--Postnikov system comprise the same data with the exception of $Y$ and $\varphi_\thedim$; we denote their collection by $\MPS\thedim$. The parameters for the pullback $g^*S$ of a Moore--Postnikov system $S$ of $\varphi\col Y\to B$ along $g\col X\to B$ are: the base is $X$, the homotopy groups remain the same and the Postnikov invariants are pulled back along $X\times_BP_\thedimm\to P_\thedimm$. Thus, the pullback function $\Map\times_\SSet\EMPS\thedim\to\MPS\thedim$, $(g,S)\mapsto g^*S$ is polynomial-time (it is defined whenever the target of $g$ agrees with the base of $S$).
\subsection*{Stable Moore--Postnikov systems}
For the subsequent development, the most important ingredient is Lemma~\ref{l:lift_ext_one_stage}. It is easy to see that it is a polynomial-time construction
\[\xymatrix{
\left\{\parbox{\widthof{$(X,A)$ equipped with effective homology, $\pi$ fully}}{$(X,A)$ equipped with effective homology, $\pi$ fully effective abelian group, $z\col X\to K(\pi,\thedim+1)$, $c\col A\to E(\pi,\thedim)$ computable such that $\delta c=z|_A$}\right\} \ar[r] & \left\{\parbox{\widthof{$X\to E(\pi,\thedim)$}}{$X\to E(\pi,\thedim)$ computable}\right\}\cup\{\bot\}.
}\]
It will be useful to split this construction into two steps: finding an ``effective'' cochain $c_0^\ef\col C_\thedim^\ef(X,A)\to\pi$ and computing from it the solution $\widetilde c+c_0$. The advantage of this splitting lies in the possibility of storing the effective cochain as a parameter.
We enhance the parameter set $\MPS\thedim$ to $\PMPS[\theotherdim]\thedim$ by including the parameter
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=$\bullet$,itemsep=0pt,parsep=0pt,topsep=5pt]
\item
a simplicial map $\onew\col B\to P_\theotherdim$;
\end{enumerate}
and to $\HMPS[\theotherdim]\thedim$ by including in addition the parameters
\begin{enumerate}[labelindent=.5em,leftmargin=*,label=$\bullet$,itemsep=0pt,parsep=0pt,topsep=5pt]
\item
equivariant effective cochains $M_\thedimm^\ef\col C_\thedimm^\ef(P_{\thedimm-1}\htimes_B P_{\thedimm-1},P_{\thedimm-1}\hvee_B P_{\thedimm-1})\to\pi_\thedimm$, $1\leq\thedimm\leq\theotherdim$;
\end{enumerate}
which give the zero section and the addition in the Moore--Postnikov stages; for the latter, we use the observation above.
There are polynomial-time functions
\[\xymatrix{
\PMPS[\theotherdim]\thedim \ar[r] & \HMPS[\theotherdim]\thedim
}\]
which compute inductively the equivariant cochains $M_\thedimm$, $1\leq\thedimm\leq\theotherdim$, using Lemma~\ref{l:lift_ext_one_stage}.
\subsection*{Computing diagonals}
We describe a number of polynomial-time families supported by $\HMPS{}$ and its relatives. We restrict our attention to the pullback Moore--Postnikov system $\widetilde S$ over $X$ whose stages will be denoted $\tPnew$. The use of $\add_\theotherdim$ and Proposition~\ref{prop:homotopy_lifting} gives a polynomial-time family
\[\xymatrix@R=3pt{
\Gamma_{\thedim,\theotherdim}\col\Pair\times_\SSet\HMPS[\theotherdim]\thedim \ar@{~>}[r] & \{\text{semi-effective abelian groups}\} \\
((X,A),\widetilde S) \ar@{|->}[r] & [X,\widetilde P_\theotherdim]^A_X
}\]
(defined whenever the bigger space $X$ of the pair $(X,A)$ agrees with the base of $\widetilde S$) which is then extended to a polynomial-time family of exact sequences from Theorem~\ref{thm:exact_sequence}. More precisely, it is a family of exact sequences $A\to B\xra{f}C\xra{g}D\to E$ of semi-effective abelian groups together with computable sections of the middle homomorphisms $f$, $g$ (as in Lemma~\ref{l:ses}); we will call them semi-effective exact sequences for short. We assume, by induction, that $\Gamma_{\thedim,\theotherdim-1}$ has already been promoted to a polynomial-time family of fully effective abelian groups. The ``five lemma'' for fully effective structures (Lemmas~\ref{l:ker_coker} and~\ref{l:ses}) provides a polynomial-time construction
\[\xymatrix{
\left\{\parbox{\widthof{semi-effective exact sequences}}{semi-effective exact sequences $A\to B\to C\to D\to E$ with $A$, $B$, $D$, $E$ fully effective}\right\} \ar[r] & \{\text{fully effective abelian groups}\}
}\]
sending each exact sequence to its middle term $C$. Thus, $\Gamma_{\thedim,\theotherdim}$ is enhanced to a polynomial-time family of fully effective abelian groups.
\subsection*{Computing zero sections}
It remains to analyze the function
\[\xymatrix{
\Pair\times_\SSet\HMPS[\theotherdim-1]\thedim \ar[r] & \PMPS[\theotherdim]\thedim\cup\{\bot\}.
}\]
By the above, we obtain a polynomial-time family of homomorphisms
\[\xymatrix{
\kmst\col[X,\widetilde P_{\theotherdim-1}]^A_X \ar[r] & H^{\theotherdim+1}_G(X,A;\pi_\theotherdim)
}\]
of fully effective abelian groups, parametrized by $\Pair\times_\SSet\HMPS[\theotherdim-1]\thedim$, and consequently a family of generators of the image of $\kmst$. We need to compute a preimage of $[\delta\widetilde c-\kmo]$, then transform it into a section of $\widetilde P_{\theotherdim-1}\to X$ and thus enhance $\widetilde S\in\HMPS[\theotherdim-1]\thedim$ to an element of $\PMPS[\theotherdim]\thedim$, all in polynomial time. For this purpose, we employ a polynomial-time construction
\[\xymatrix{
\left\{(A,a_1,\ldots,a_\theotherdim,b)\ \left|\ \parbox{\widthof{$A$ a fully effective abelian}}{$A$ a fully effective abelian group, $a_1,\ldots,a_\theotherdim,b\in A$}\right.\right\} \ar[r] & \{\text{lists of integers}\}\cup\{\bot\},
}\]
testing whether $b$ belongs to the subgroup generated by $a_1,\ldots,a_\theotherdim$; if it does, it computes a list of coefficients $(z_1,\ldots,z_\theotherdim)$ such that $b=z_1a_1+\cdots+z_\theotherdim a_\theotherdim$. (The class on the right only contains an algorithm that prints the list.)
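For intuition, this subgroup-membership construction can be illustrated in the simplest fully effective abelian group $\bbZ$, where it reduces to the extended Euclidean algorithm; the general finitely generated case reduces to a Smith normal form computation and is not shown. The function names below are illustrative.

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with g = gcd(a, b) = x*a + y*b."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y


def in_subgroup(gens, b):
    """Toy version of the membership construction over the group Z:
    returns integers (z_1, ..., z_n) with b = z_1*a_1 + ... + z_n*a_n,
    or None when b is not in the subgroup generated by gens."""
    g, coeffs = 0, []
    for a in gens:
        g2, u, v = ext_gcd(g, a)   # g2 = u*g + v*a
        coeffs = [u * c for c in coeffs] + [v]
        g = g2
    if g == 0:
        return [0] * len(gens) if b == 0 else None
    if b % g != 0:
        return None
    k = b // g
    return [k * c for c in coeffs]
```

Membership fails exactly when $\gcd(a_1,\ldots,a_n)$ does not divide $b$, matching the $\{\bot\}$ branch of the construction.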
\fi
\subsection*{Acknowledgement}
We are grateful to Ji\v{r}\'{i} Matou\v{s}ek and Uli Wagner for many useful discussions, comments and suggestions that improved this paper a great deal. Moreover, this paper could hardly exist without our long-term collaboration, partly summarized in \cite{CKMSVW11} and \cite{polypost}.
\bibliographystyle{plain}
1,108,101,566,477 | arxiv | \subsection{System partition}
\indent Fluctuations in the power flow distribution can be attributed to fluctuations of power injections. Hence, in this paper, fluctuation sources are modeled as injections. For fluctuation source $i$, we know its connecting node $k_{i}$ and the standard errors $\Delta P_{i}$ and $\Delta Q_{i}$ of its active and reactive power injection fluctuations. The values of $\Delta P_{i}$ and $\Delta Q_{i}$ can be assigned from operating experience; in particular, for wind farms they can be set according to the variance of the real-time wind power prediction. Subsequently, the fluctuation level of the power flow on branch $l$, denoted by $\Delta P_{l}$ and $\Delta Q_{l}$, can be computed by:
\begin{equation}
\begin{array}{lll}
\Delta P_{l}&=&\sum \limits_{i=1}^{n}|S_{P}(k_{i},l)\Delta P_{i}|,\\
\Delta Q_{l}&=&\sum \limits_{i=1}^{n}|S_{Q}(k_{i},l)\Delta Q_{i}|.
\end{array}\label{eq:line_delta}
\end{equation}
where $S_{P}(k_{i},l)$ is the entry corresponding to node $k_{i}$ and branch $l$ in active power shift factor matrix, and $S_{Q}(k_{i},l)$ is the same entry in reactive power shift factor matrix. Scalar $n$ denotes the number of fluctuation sources.\\
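A direct transcription of (\ref{eq:line_delta}) is sketched below; the shift-factor entries are supplied as plain dictionaries, which is an illustrative encoding rather than the format of any particular tool.

```python
def branch_fluctuations(S_P, S_Q, sources, branches):
    """Eq. (1): fluctuation levels of branch power flows.

    S_P, S_Q: dicts mapping (node, branch) -> shift-factor entry.
    sources:  list of (node k_i, dP_i, dQ_i), the standard errors of
              the injection fluctuations at each source.
    Returns dicts branch -> Delta P_l and branch -> Delta Q_l.
    """
    dP = {l: sum(abs(S_P.get((k, l), 0.0) * dPi) for k, dPi, _ in sources)
          for l in branches}
    dQ = {l: sum(abs(S_Q.get((k, l), 0.0) * dQi) for k, _, dQi in sources)
          for l in branches}
    return dP, dQ
```

Note the absolute values are taken per source, so fluctuations from different sources accumulate rather than cancel, as in (\ref{eq:line_delta}).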
\indent The fluctuation level of node $j$ is $\Delta_{j}$ defined by:
\begin{equation}
\begin{array}{lll}
\Delta P_{j}&=&\max \limits_{l\in \mathcal{A}_{j}}\Delta P_{l},\\
\Delta Q_{j}&=&\max \limits_{l\in \mathcal{A}_{j}}\Delta Q_{l},\\
\Delta_{j}&=&\sqrt{\Delta P_{j}^{2}+\Delta Q_{j}^{2}}.
\end{array}\label{eq:node_delta}
\end{equation}
where $\mathcal{A}_{j}$ denotes the adjacent branch set for node $j$.\\
\indent With the fluctuation level of each node, the system node set $\mathcal{N}$ is partitioned into steady area node set $\mathcal{N_{S}}$, quasi-steady area node set $\mathcal{N_{Q}}$ and fluctuant area node set $\mathcal{N_{F}}$:
\begin{equation}
\begin{array}{lll}
\mathcal{N_{S}}&=&\{j\in\mathcal{N}|\Delta_{j}\leq\epsilon_{q}\},\\
\mathcal{N_{Q}}&=&\{j\in\mathcal{N}|\epsilon_{q}<\Delta_{j}\leq\epsilon_{f}\},\\
\mathcal{N_{F}}&=&\{j\in\mathcal{N}|\Delta_{j}>\epsilon_{f}\}.
\end{array}\label{eq:sys_partition}
\end{equation}
where $\epsilon_{q}$ and $\epsilon_{f}$ are artificial thresholds.\\
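Equations (\ref{eq:node_delta})--(\ref{eq:sys_partition}) then amount to a per-node maximum over adjacent branches followed by a two-threshold split. A minimal sketch, with an illustrative adjacency encoding:

```python
import math


def partition_nodes(dP, dQ, adjacency, eps_q, eps_f):
    """Eqs. (2)-(3): node fluctuation levels and the three-way partition.

    dP, dQ:    dicts branch -> Delta P_l, Delta Q_l (e.g. from Eq. (1)).
    adjacency: dict node -> iterable of adjacent branches A_j.
    Returns (N_S, N_Q, N_F): steady, quasi-steady and fluctuant node sets.
    """
    N_S, N_Q, N_F = set(), set(), set()
    for j, branches in adjacency.items():
        delta = math.hypot(max(dP[l] for l in branches),
                           max(dQ[l] for l in branches))
        if delta <= eps_q:
            N_S.add(j)
        elif delta <= eps_f:
            N_Q.add(j)
        else:
            N_F.add(j)
    return N_S, N_Q, N_F
```

Rerunning this function after each scheduled operation implements the partition update mentioned below.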
\indent Similarly, the system measurement set $\mathcal{M}$ is also partitioned into the steady area measurement set $\mathcal{M_{S}}$, the quasi-steady area measurement set $\mathcal{M_{Q}}$ and the fluctuant area measurement set $\mathcal{M_{F}}$. The steady area measurement set $\mathcal{M_{S}}$ includes measurements for nodes in $\mathcal{N_{S}}$ and measurements at the head (tail) side of branches whose head (tail) nodes are in $\mathcal{N_{S}}$.\\
\indent Note that the system partition can change with system operations; hence, it is necessary to update the partition after system operations are scheduled.
\subsection{State estimation method}
\indent Take the steady area as an example: its states are denoted by the vector $x_{S}$, and the states for nodes outside but adjacent to the steady area are denoted by $x_{BS}$. Then the measurement functions for the steady area are:
\begin{equation}
z_{S}=h_{S}(x_{S},x_{BS})+e_{S}.\label{eq:meas_steadyarea}
\end{equation}
where $z_{S}$, $h_{S}$ and $e_{S}$ denote, respectively, measurement values, measurement functions and errors for measurement set $\mathcal{M_{S}}$.\\
\indent The state estimates for the steady area are obtained by solving the following weighted least squares problem:
\begin{equation}
\begin{array}{ll}
\min \limits_{x_{S}}& (z_{S}-h_{S}(x_{S},x_{BS}))^{\dagger}W_{S}(z_{S}-h_{S}(x_{S},x_{BS})).
\end{array}\label{eq:semodel_steadyarea}
\end{equation}
where $W_{S}$ is the weight matrix for measurement set $\mathcal{M_{S}}$.
\indent Model (\ref{eq:semodel_steadyarea}) estimates the states in the steady area with the states in the quasi-steady and fluctuant areas fixed. Similarly, the state estimation in the quasi-steady and fluctuant areas only updates their own states. At any time moment $t$, the state estimate for the entire power system is given by:
\begin{equation}
\hat{x}=[\hat{x}_{S},\hat{x}_{Q},\hat{x}_{F}].\label{eq:se_fusion}
\end{equation}
where $\hat{x}_{S}$, $\hat{x}_{Q}$ and $\hat{x}_{F}$ are the latest state estimates for the steady, quasi-steady and fluctuant areas, respectively.
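For a linear measurement model $z = Hx + e$, the estimator (\ref{eq:semodel_steadyarea}) has the closed form $\hat{x} = (H^{\dagger}WH)^{-1}H^{\dagger}Wz$. The two-state sketch below solves the normal equations by Cramer's rule; it is meant only to illustrate the per-area update (the nonlinear case wraps a Gauss--Newton iteration around this step), and all names are illustrative.

```python
def wls_two_state(H, W, z):
    """Weighted least squares argmin_x (z - Hx)^T W (z - Hx) for a
    two-state linear model.

    H: list of m rows (h0, h1); W: list of m weights; z: m measurements.
    """
    # Assemble the normal equations (H^T W H) x = H^T W z entrywise.
    a = sum(w * h[0] * h[0] for h, w in zip(H, W))
    b = sum(w * h[0] * h[1] for h, w in zip(H, W))
    c = sum(w * h[1] * h[1] for h, w in zip(H, W))
    r0 = sum(w * h[0] * zi for h, w, zi in zip(H, W, z))
    r1 = sum(w * h[1] * zi for h, w, zi in zip(H, W, z))
    det = a * c - b * b  # assumes observability (det != 0)
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)
```

Running this independently per area, with the boundary states $x_{BS}$ folded into the measurement model as constants, mirrors the fusion in (\ref{eq:se_fusion}).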
\indent There are several effective hybrid PSSE methods; in this paper, we use the method in \cite{Zhou_PWRS2006} for hybrid PSSE. The PSSE method with pure PMU measurements is given in section \ref{sec:concept}.
\subsection{Motivation}
Large interconnected power systems are often operated by independent system operators (ISOs), each has its own operating area within which internal resources are used economically. The operating areas are connected physically by tie-lines that allow one area to import from or export to neighboring areas for better utilization of overall system resources.
Existing approaches to tie-line scheduling rely on trades across borders at proxy buses by market participants. The ad hoc uses of proxy buses and the imperfect information used by market participants result in substantial economic loss, estimated at the level of \$784 million annually for the New York and New England customers \cite{White&Pike:11WP}.
Ideally, the optimal utilization of tie-lines is determined by the {\em joint economic dispatch} (JED) that treats interconnected operating areas as one. Because each operating area is controlled by an ISO, joint optimality needs to be achieved in a decentralized fashion, possibly involving a coordinator. Typically, each ISO optimizes its internal dispatch and exchanges intermediate solutions with its neighbors or the coordinator. This process iterates until convergence. One of the major challenges of implementing decentralized (but jointly optimal) economic dispatch is to limit the number of iterations without requiring each area to disclose its private information \cite{PJMNY_Agreement}.
\subsection{Related Works}
Multi-area economic dispatch (MAED) has been studied extensively, dating back to \cite{Lin&Viviani:1984TPAS} in the 1980s. Existing techniques can be classified based on the methodology used in decomposing decision variables.
The primal decomposition methods partition the decision variables of the overall problem into local and coupling variables. The dual decomposition techniques, on the other hand, solve a relaxed local problem and use the dual variables to coordinate local optimizations.
Among the dual decomposition methods, the most classic kind of approach is based on Lagrangian relaxation
\cite{Kim&Baldick:97TPS,ConejoAguado98TPS,Chen&Thorp&Mount:03HICCS,Binetti&Etal:14TPS,Erseghe:15TPS,Baldick99APPOPF,Wang&Song&Lu:01IEE,Lai&Xie:15TPS}. These techniques typically require updates on the Lagrange multipliers, which, depending on the parameter setting, often require a large number of iterations and substantial computation and communication costs.
There is also a body of works based on primal decompositions where coupling variables are first fixed in the subproblems and then solved iteratively as part of the master problem \cite{Nogales&Prieto&Conejo:03OR,Bakirtzis&Biskas:03TPS,Min&Abur:06PSCE,Zhao&LitvinovZheng:14TPS,Chatterjee&Baldic:14COR,Li&Wu&Zhang&Wang::15TPS}.
The key step is to define the coupling variables that need to be solved in the master problem.
Among the primal decomposition algorithms, the recent work of Zhao, Litvinov, and Zheng \cite{Zhao&LitvinovZheng:14TPS} has the special property that the algorithm converges in {\em a finite number of steps}, which is especially attractive for the MAED problem. The key idea in \cite{Zhao&LitvinovZheng:14TPS} is the so-called marginal equivalent decomposition (MED) of variables involving the set of active constraints and ``free variables'' of the local solutions. By communicating these ``marginal variables'', the algorithm implicitly exploits the finiteness of the structure of active constraint set.
\subsection{Summary of contributions}
In this paper, we propose a MAED method referred to as critical region projection (CRP). As a primal decomposition method, CRP defines for each area a sub-problem using the internal generation as decision variables and its boundary phase angles as coupling variables. The proposed approach is based on a key property in multi-parametric quadratic programming: the optimal generation in each area is a piecewise affine function of boundary state, and its associated optimal cost is a piecewise quadratic function of boundary state. This implies that the space of the boundary state can be partitioned into critical regions, within each region the optimal generation and the optimal cost can be characterized succinctly by the affine and quadratic functions.
CRP iterates between the coordinator and regional operators: Given a boundary state, each area solves its sub-problem, derives its optimal cost as a quadratic function of boundary state, and defines the critical region that contains the given boundary state. The coordinator solves the master problem and projects the point of boundary state to a new critical region with strictly lower cost for the next iteration.
CRP shares some of the important features of the MED approach \cite{Zhao&LitvinovZheng:14TPS}, most important being the finite-step convergence. Our approach does not require any exchange of system information such as shift factors, status of generations, and capacities of internal generators and branches.
Because the number of boundary buses is relatively small, the parameterization proposed in our approach results in a reduced amount of data exchange. CRP does require a coordinator, which may complicate practical implementation.
The remainder of this paper is organized as follows. In section II, we present the JED model and decompose it into local sub-problems and the coordinator's problem. The outline of CRP and the solutions to the local sub-problems and the master problem are elaborated in section III. In section IV, we establish the optimality and finite-step convergence of CRP and review its computation/communication costs. In section V, CRP is applied to various test systems and its performance is compared with JED and with approaches based on Lagrangian relaxation and MED.
\subsection{Feasibility Check}\label{sec:FC}
\subsection{Critical Region Determination}\label{sec:CR}
\indent With given $\theta_B$, each area derives the explicit definition of current critical region containing the point of $\theta_B$ and the optimal cost function defined in current critical region.\\
\indent The Lagrange function for the local ED model (\ref{eq:local_MPQP}) is
\begin{equation}
\begin{array}{ll}
\!\!\!\!\!\!L(g_{i},\lambda_{i},\mu_{i})\!\!=\!\!&\!\!\!\!c_{i}\!+\!\lambda_{i}^{\dagger}(M_ig_{i}\!+\!M_B\theta_{B}\!+M_C)\\
\!\!\!\!&\!\!\!\!+\mu_{i}^{\dagger}(N_ig_i+N_B\theta_B+N_C).
\end{array}\label{eq:16}
\end{equation}
where $\lambda_{i}$ and $\mu_{i}$ denote multipliers for the equality and inequality constraints in the local ED model (\ref{eq:local_MPQP}), respectively. The KKT conditions for model (\ref{eq:local_MPQP}) are
\begin{eqnarray}
\!\!\left[\!\!\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.01pt}
2A_{i}\!\!\!\!&\!\!\!\!M_i^{\dagger}\!\!\!&\!\!\!\{N_i\}_{\mathcal{A}}^{\dagger}\\
M_i\!\!\!\!&\!\!\!\!\mbox{}\!\!\!&\!\!\!\mbox{}\\
\{N_i\}_{\mathcal{A}}\!\!\!\!&\!\!\!\!\mbox{}\!\!\!&\!\!\!\mbox{}
\end{array}
\!\!\!\!\right]\!\!\left[\!\!\!\!
\begin{array}{ccc}
g_{i}\\
\lambda_{i}\\
\{\mu_{i}\}_{\mathcal{A}}
\end{array}\!\!\!\!\right]
\!\!=\!\!\left[\!\!\!
\begin{array}{ccc}
-B_{i}\\
-M_B\theta_B-M_C\\
-\{N_{B}\theta_B\!\!+\!\!N_{C}\}_{\mathcal{A}}
\end{array}
\!\!\!\right]\!\!,
\label{eq:17}\\
\{\mu_{i}\}_{\mathcal{A}}\geq0,\{N_{i}g_{i}+N_{B}\theta_{B}+N_{C}\}_{\mathcal{A}}=0,\nonumber\\
\{\mu_{i}\}_{\mathcal{I}}=0,\{N_{i}g_{i}+N_{B}\theta_{B}+N_{C}\}_{\mathcal{I}}\leq0,\nonumber
\end{eqnarray}
where $\{\centerdot\}_{\mathcal{I}}$ denotes variables for inactive constraints.\\
\indent Assuming model (\ref{eq:local_MPQP}) is non-degenerate, (\ref{eq:17}) can be solved by inverting its coefficient matrix:
\begin{equation}
\left[\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.1pt}
g_{i}\\
\lambda_{i}\\
\{\mu_{i}\}_{\mathcal{A}}
\end{array}\!\!\right]\!\!\!=\!\!\!
\left[\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.1pt}
K_{11}\!\!&\!\!K_{12}\!\!&\!\!K_{13}\\
K_{12}^{\dagger}\!\!&\!\!K_{22}\!\!&\!\!K_{23}\\
K_{13}^{\dagger}\!\!&\!\!K_{23}^{\dagger}\!\!&\!\!K_{33}
\end{array}
\!\!\right]\!\!\!
\left[\!\!\!
\begin{array}{ccc}
-B_{i}\\
-M_B\theta_B-M_C\\
-\{N_{B}\theta_B\!\!+\!\!N_{C}\}_{\mathcal{A}}
\end{array}
\!\!\!\right]\!\!.\label{eq:18}
\end{equation}
\indent For the active constraints, the multipliers $\{\mu_{i}\}_{\mathcal{A}}$ are affine functions of $\theta_{B}$ and satisfy
\begin{equation}
\begin{array}{ll}
\{\mu_{i}\}_{\mathcal{A}}=&\hspace{-0.25cm}-(K_{23}^{\dagger}M_B+K_{33}\{N_B\}_{\mathcal{A}})\theta_{B}\\
&\hspace{-0.25cm}-(K_{13}^{\dagger}B_i+K_{23}^{\dagger}M_C+K_{33}\{N_C\}_{\mathcal{A}})\geq0.
\end{array}
\label{eq:21}
\end{equation}
\indent The optimizer $g^{*}_{i}$ is also an affine function of $\theta_{B}$ as
\begin{equation}
\begin{array}{l}
g^{*}_{i}=R_B\theta_B+R_C,\\
R_B=-K_{12}M_B-K_{13}\{N_{B}\}_{\mathcal{A}},\\
R_C=-K_{11}B_{i}-K_{12}M_C-K_{13}\{N_C\}_{\mathcal{A}}.
\end{array}
\label{eq:22}
\end{equation}
\indent Then, for the inactive constraints, their values satisfy
\begin{equation}\label{eq:23}
\begin{array}{l}
(\{N_i\}_{\mathcal{I}}R_B+\{N_B\}_{\mathcal{I}})\theta_B\hspace{-0.05cm}+\hspace{-0.05cm}\{N_i\}_{\mathcal{I}}R_C\hspace{-0.05cm}+\hspace{-0.05cm}\{N_C\}_{\mathcal{I}}\leq 0.
\end{array}
\end{equation}
\indent By combining (\ref{eq:21}) and (\ref{eq:23}), the explicit definition of the current critical region $CR_{i,k}$ in area $i$ is given by
\begin{equation}
\begin{array}{l}
CR_{i,k}=\{\theta_B|S_{i}\theta_{B}+T_{i}\leq0\},\\
S_{i}=\left[
\begin{array}{l}
K_{23}^{\dagger}M_B+K_{33}\{N_B\}_{\mathcal{A}}\\
\{N_i\}_{\mathcal{I}}R_B+\{N_B\}_{\mathcal{I}}
\end{array}
\right],\vspace{0.1cm}\\T_i=\left[
\begin{array}{l}
K_{13}^{\dagger}B_i+K_{23}^{\dagger}M_C+K_{33}\{N_C\}_{\mathcal{A}}\\
\{N_i\}_{\mathcal{I}}R_C\hspace{-0.05cm}+\hspace{-0.05cm}\{N_C\}_{\mathcal{I}}
\end{array}
\right].
\end{array}
\label{eq:24}
\end{equation}
\indent Inequality (\ref{eq:21}) characterizes the active constraints via their multipliers, and inequality (\ref{eq:23}) characterizes the inactive constraints via their values. Their intersection (\ref{eq:24}) defines the current partition of active and inactive constraints, which in turn defines the current critical region. The critical region defined by (\ref{eq:24}) is a polyhedron, which is consistent with Theorem 1. Redundant inequalities should be removed from (\ref{eq:24}) using the method in \cite{INEQ}.\\
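To make the piecewise structure concrete, consider a hand-sized instance of the sub-problem (\ref{eq:local_MPQP}) with two unit-cost generators, a scalar boundary parameter and one capacity bound: $\min g_1^2+g_2^2$ subject to $g_1+g_2=\theta_B$, $g_1\leq 1$. Solving the KKT system (\ref{eq:17}) by hand for each active set yields two critical regions; the sketch below returns, for a given $\theta_B$, the optimizer (affine in $\theta_B$, as in (\ref{eq:22})) and the region's optimal-cost coefficients (quadratic in $\theta_B$). The instance and all names are illustrative, not part of the paper's algorithm.

```python
def local_solution(theta_B):
    """Toy critical-region decomposition of
    min g1^2 + g2^2  s.t.  g1 + g2 = theta_B,  g1 <= 1.

    Returns (g1, g2), a region label, and (At, Bt, Ct) such that
    J*(theta) = At*theta^2 + Bt*theta + Ct on that region.
    """
    if theta_B <= 2.0:
        # Capacity bound inactive: equal split, g1 = g2 = theta_B / 2.
        return ((theta_B / 2.0, theta_B / 2.0),
                "CR1: theta_B <= 2", (0.5, 0.0, 0.0))
    # Capacity bound active: g1 pinned at 1, g2 absorbs the rest.
    # Cost 1 + (theta - 1)^2 = theta^2 - 2*theta + 2.
    return ((1.0, theta_B - 1.0),
            "CR2: theta_B >= 2", (1.0, -2.0, 2.0))
```

Note that the two quadratic cost pieces agree at the region boundary $\theta_B=2$ (both equal $2$), reflecting the continuity of the optimal cost across critical regions.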
\subsection{Optimal Cost Function}
\indent Within the current critical region defined by (\ref{eq:24}), the expression of the optimal cost function $J_{i}^{*}(\theta_{B})$ is obtained by substituting (\ref{eq:22}) into the cost function (\ref{eq:edi_cost}):
\begin{equation}
\begin{array}{l}
J_{i}^{*}(\theta_{B})\hspace{-0.1cm}=\hspace{-0.1cm}c_i(g_i^{*}(\theta_B))\hspace{-0.1cm}=\hspace{-0.1cm}(R_B\theta_B\hspace{-0.1cm}+\hspace{-0.1cm}R_C)^{\dagger}A_i(R_B\theta_B\hspace{-0.1cm}+\hspace{-0.1cm}R_C)\\
+B_i^{\dagger}(R_B\theta_B+R_C)=\theta_B^{\dagger}\tilde{A}_i\theta_B+\tilde{B}_i^{\dagger}\theta_B+\tilde{C}_i,\\
\tilde{A}_i=R_B^\dagger A_iR_B,\tilde{B}_i=2R_B^{\dagger}A_iR_C+R_B^{\dagger}B_i,\tilde{C}_i=R_C^{\dagger}A_iR_C+B_i^{\dagger}R_C.
\end{array}
\label{eq:28}
\end{equation}
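In the scalar case (one generator, one boundary angle), the coefficient computation in (\ref{eq:28}) reduces to a few products; the sketch below is a direct transcription under that simplification, with illustrative names.

```python
def cost_coeffs(R_B, R_C, A, B):
    """Scalar form of Eq. (28): with the optimizer g*(theta) = R_B*theta + R_C
    and cost c(g) = A*g^2 + B*g, return (At, Bt, Ct) such that
    J*(theta) = At*theta^2 + Bt*theta + Ct."""
    A_t = R_B * A * R_B
    B_t = 2.0 * R_B * A * R_C + R_B * B
    C_t = R_C * A * R_C + B * R_C
    return A_t, B_t, C_t
```

By construction, evaluating the quadratic at any $\theta_B$ reproduces $c_i(g_i^*(\theta_B))$ exactly.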
\end{document}
\subsection{Joint Economic Dispatch Model}
For simplicity, the MAED model is illustrated via the two-area system in Fig.\ref{fig:1}. A similar method can be derived for systems with more than two areas.
The system state variables are partitioned into four subsets: internal phase angles $\theta_i$ in area $i$ and boundary phase angles $\bar{\theta}_i$ in area $i,(i=1,2)$.
Without loss of generality, we make the following assumptions:
A1) There is no power generation on boundary buses;
A2) Each internal bus is connected to one unit and one load, and each boundary bus is connected to one load.
Under assumption A1, fictitious boundary buses can be introduced outside the physical ones when boundary generators are present. With assumption A2, units, loads, and buses share the same indices; a similar approach can be derived for other indexing schemes.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Fig1.eps}
\caption{\small An illustration for multi-area power systems}\label{fig:1}
\end{figure}
The JED is to solve the following centralized optimization:
\begin{align}
&\min \limits_{\{g_{i},\theta_i,\bar{\theta}_i\}}c(g)=\sum_{i=1}^{2}c_{i}(g_{i})=\sum_{i=1}^{2}(g_{i}^T A_{i}g_{i}+b_{i}^T g_{i}),\label{eq:ged_obj}\\
&\textrm{subject}\hspace{0.1cm}\textrm{to}\hspace{0.1cm}H_{i}\theta_{i}+H_{\bar{i}}\bar{\theta}_{i}\leq f_{i},i=1,2,\label{eq:ged_internalline}\\
&\hspace{1.5cm}\bar{H}_{\bar{1}}\bar{\theta}_{1}+\bar{H}_{\bar{2}}\bar{\theta}_{2}\leq \bar{f},\label{eq:ged_tieline}\\
&\hspace{1.5cm}\check{g}_{i}\leq g_{i}\leq \hat{g}_{i},i=1,2,\label{eq:ged_ramping}\\
&\left[
\begin{array}{cccc}
Y_{11}&Y_{1\bar{1}}&\mbox{}&\mbox{}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}&\mbox{}\\
\mbox{}&Y_{\bar{2}\bar{1}}&Y_{\bar{2}\bar{2}}&Y_{\bar{2}2}\\
\mbox{}&\mbox{}&Y_{2\bar{2}}&Y_{22}
\end{array}
\right]\hspace{-0.15cm}\left[
\begin{array}{c}
\theta_{1}\\
\bar{\theta}_{1}\\
\bar{\theta}_{2}\\
\theta_{2}
\end{array}
\right]=\left[
\begin{array}{c}
g_{1}-d_{1}\\
-\bar{d}_{1}\\
-\bar{d}_{2}\\
g_{2}-d_{2}
\end{array}\right],
\label{eq:ged_dclf}
\end{align}
where, as shown in Fig.\ref{fig:1}, the vectors $g_i$ and $d_i$ are internal generations and loads in area $i$ and $\bar{d}_i$ is the vector of boundary load power. The cost functions in (\ref{eq:ged_obj}) are quadratic with coefficients $A_{i}$ and $b_{i}$. The superscript $T$ denotes transpose.
Inequality (\ref{eq:ged_internalline}) represents power flow limits for internal branches of area $i$. Here $H_{i}$ is the branch-bus admittance matrix between internal branches of area $i$ and $\theta_i$, $H_{\bar{i}}$ is the branch-bus admittance matrix between internal branches of area $i$ and $\bar{\theta}_i$, and $f_i$ the power flow limits of internal branches of area $i$. Inequality (\ref{eq:ged_tieline}) describes constraints on boundary power flows, with $\bar{H}_{\bar{i}}$ the branch-bus admittance matrix between tie-lines and boundary state $\bar{\theta}_i$ and $\bar{f}$ the boundary power flow limits. Inequality (\ref{eq:ged_ramping}) restricts the power generations $g_i$ between the lower bound $\check{g}_i$ and upper bound $\hat{g}_i$. Equation (\ref{eq:ged_dclf}) represents the DC load flow equations in which $Y$ is the bus admittance matrix.
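As a sanity check of the model (\ref{eq:ged_obj})--(\ref{eq:ged_dclf}) on the smallest possible instance (one internal generator per area, a single tie-line, no binding line limits), the balance constraint reduces JED to $\min\, a_1g_1^2+a_2g_2^2$ subject to $g_1+g_2=d$, whose optimum splits the load inversely to the cost coefficients. The sketch below is this hand-derived special case, not a general JED solver.

```python
def toy_jed(a1, a2, d):
    """Two-area JED with costs a1*g1^2, a2*g2^2, total load d, and no
    binding network limits. The stationarity condition equates marginal
    costs, 2*a1*g1 = 2*a2*g2, which with g1 + g2 = d gives a closed form."""
    g1 = d * a2 / (a1 + a2)
    g2 = d * a1 / (a1 + a2)
    return g1, g2
```

At the optimum the two areas' marginal costs coincide, which is exactly the coupling that the decentralized schemes of section III must recover through iteration.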
In the following subsections, we decompose the JED model into local optimizations and the coordinator's optimization.
\subsection{Local optimization}
The sub-problem of area $i$ is an economic dispatch (ED) problem with fixed boundary state defined by:
\begin{align}
\min \limits_{\{g_{i},\theta_i\}}&\hspace{0.25cm}c_{i}(g_{i})=g^T_i A_{i}g_{i}+b^T_{i}g_{i},\label{eq:edi_cost}\\
\textrm{subject}\hspace{0.1cm}\textrm{to}&\hspace{0.25cm}H_{i}\theta_{i}+H_{\bar{i}}\bar{\theta}_{i}\leq f_{i},\label{eq:edi_line}\\
\mbox{}&\hspace{0.25cm}\check{g}_i\leq g_{i}\leq \hat{g}_i,\vspace{0.05cm}\label{eq:edi_ramping}\\
\mbox{}&\hspace{-1.0cm}\left[
\begin{array}{cc}
Y_{ii}\hspace{-0.15cm}&\hspace{-0.15cm}Y_{i\bar{i}}\\
Y_{\bar{i}i}\hspace{-0.15cm}&\hspace{-0.15cm}Y_{\bar{i}\bar{i}}
\end{array}\right]\hspace{-0.15cm}\left[\hspace{-0.15cm}
\begin{array}{c}
\theta_{i}\\
\bar{\theta}_{i}
\end{array}\hspace{-0.15cm}\right]\hspace{-0.1cm}=\hspace{-0.1cm}\left[\hspace{-0.1cm}
\begin{array}{c}
g_{i}-d_{i}\\
-\bar{d}_{i}-Y_{\bar{i}\bar{j}}\bar{\theta}_{j}
\end{array}\hspace{-0.1cm}\right].\label{eq:edi_dclf}
\end{align}
By eliminating $\theta_i$ and collecting all boundary phase angles into $\bar{\theta}=[\bar{\theta}_{i};\bar{\theta}_{j}]$, we write the local sub-problem of area $i$ as
\begin{equation}
\begin{array}{cc}
\min \limits_{g_{i}}&\hspace{0.25cm}c_{i}(g_{i})=g^T_{i}A_{i}g_{i}+b^T_{i}g_{i},\\
\textrm{subject}\hspace{0.1cm}\textrm{to}&\hspace{0.25cm}M_{i}g_{i}+\bar{M}_{i}\bar{\theta}+\tilde{m}_{i}=0,\\
\mbox{}&\hspace{0.25cm}N_{i}g_{i}+\bar{N}_i\bar{\theta}+\tilde{n}_{i}\leq 0,
\end{array}\label{eq:local_MPQP}
\end{equation}
where
\begin{equation}
\hspace{-0.25cm}
\begin{array}{l}
M_{i}=Y_{\bar{i}i}Y_{ii}^{-1},\bar{M}_{i}\hspace{-0.1cm}=\hspace{-0.1cm}[Y_{\bar{i}\hspace{0.02cm}\bar{i}}-Y_{\bar{i}i}Y_{ii}^{-1}Y_{i\bar{i}},Y_{\bar{i}\bar{j}}],\\
\tilde{m}_{i}=\bar{d}_{i}-Y_{\bar{i}i}Y_{ii}^{-1}d_{i},
N_{i}=\left[
\begin{array}{c}
H_{i}Y_{ii}^{-1}\\
I\\
-I\end{array}\right],\\
\bar{N}_{i}=\left[\hspace{-0.2cm}
\begin{array}{cc}
-H_{i}Y_{ii}^{-1}Y_{i\bar{i}}\hspace{-0.1cm}+\hspace{-0.1cm}H_{\bar{i}}&\hspace{-0.2cm}0\\
0&\hspace{-0.2cm}0\\
0&\hspace{-0.2cm}0
\end{array}\hspace{-0.2cm}\right],\tilde{n}_{i}\hspace{-0.1cm}=\hspace{-0.1cm}\left[\hspace{-0.2cm}
\begin{array}{c}
-H_{i}Y_{ii}^{-1}d_{i}\hspace{-0.1cm}-\hspace{-0.1cm}f_{i}\\
-\hat{g}_{i}\\
\check{g}_{i}
\end{array}\hspace{-0.2cm}\right].
\end{array}
\end{equation}
Specifically, the equality constraints in (\ref{eq:local_MPQP}) are in the second row of (\ref{eq:edi_dclf}). The inequality constraints in (\ref{eq:local_MPQP}) are arranged in the order of branch power flow limits (\ref{eq:edi_line}) and upper and lower generation limits (\ref{eq:edi_ramping}).
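As a sanity check on this reduction, the numpy sketch below (with an arbitrary toy partition of $Y$, not any real network) verifies that $M_i g_i + \bar{M}_i\bar{\theta} + \tilde{m}_i$ reproduces the residual of the second row of the DC load flow once $\theta_i$ is eliminated via the first row.

```python
import numpy as np

# Toy partition of the bus admittance matrix for area i: 3 internal buses,
# 2 boundary buses, and 2 boundary buses in the neighbouring area j.
# All numbers are illustrative stand-ins, not a real network.
rng = np.random.default_rng(0)
Yii = np.diag([4.0, 5.0, 6.0]) - 0.5   # diagonally dominant, hence invertible
Yib = rng.normal(size=(3, 2))          # Y_{i, ibar}
Ybi = Yib.T                            # Y_{ibar, i} (DC admittance is symmetric)
Ybb = np.diag([3.0, 3.0])              # Y_{ibar, ibar}
Ybj = rng.normal(size=(2, 2))          # Y_{ibar, jbar}
d_i = np.array([0.4, 0.2, 0.1])        # internal loads
db_i = np.array([0.3, 0.2])            # boundary loads

# Reduced coefficients obtained by eliminating theta_i:
Yii_inv = np.linalg.inv(Yii)
M_i = Ybi @ Yii_inv
Mbar_i = np.hstack([Ybb - Ybi @ Yii_inv @ Yib, Ybj])
m_i = db_i - Ybi @ Yii_inv @ d_i

# For any g_i and boundary angles, recover theta_i from the first DC-flow row
# and check that M_i g_i + Mbar_i theta_bar + m_i equals the second-row residual.
g_i = rng.normal(size=3)
tb_i, tb_j = rng.normal(size=2), rng.normal(size=2)
theta_i = Yii_inv @ (g_i - d_i - Yib @ tb_i)
second_row_residual = Ybi @ theta_i + Ybb @ tb_i + Ybj @ tb_j + db_i
reduced_residual = M_i @ g_i + Mbar_i @ np.concatenate([tb_i, tb_j]) + m_i
assert np.allclose(second_row_residual, reduced_residual)
```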
The local sub-problem (\ref{eq:local_MPQP}) has the standard form of multi-parametric quadratic program (MPQP) with boundary phase angles $\bar{\theta}$ as parameters and internal generations $g_i$ as decision variables.
In MPQP, it is of interest to represent the optimal decision variables $g_i^*$ and the value of optimization $c_i(g_i^*)$ as functions of parameters $\bar{\theta}$. Here we give the following theorem that describes the basic properties of the MPQP (\ref{eq:local_MPQP}):
\textit{\textbf{Theorem}} 1 \cite{borrelli2003constrained}: Consider the multi-parametric quadratic program (\ref{eq:local_MPQP}) and assume the region $\Theta$ from which the parameters $\bar{\theta}$ take values is convex. Then:
i) The optimizer $g_i^{*}(\bar{\theta})$ is continuous and piecewise affine in $\Theta$;
ii) The value function $J_i^{*}(\bar{\theta})\triangleq c_i(g_i^{*}(\bar{\theta}))$ is continuous, convex, and piecewise quadratic in $\Theta$;
iii) If model (\ref{eq:local_MPQP}) is non-degenerate in $\Theta$, \textit{i.e.}, the rows of the matrix $[M_i;\{N_i\}_\mathcal{A}]$ are linearly independent, where $\{N_i\}_\mathcal{A}$ is the sub-matrix of $N_i$ associated with active constraints, then $J_i^{*}(\bar{\theta})$ is differentiable in $\Theta$.
The key implication of Theorem 1 is that, for the sub-problem of area $i$, the region $\Theta$ is partitioned into critical regions. Each critical region corresponds to a particular partition of active and inactive constraints; it is a polyhedron within which $g_i^*(\bar{\theta})$ is an affine function and $J_i^*(\bar{\theta})$ is a quadratic function. Typically, critical regions are half-open sets. In this paper, to achieve a successive iteration process, we use the closure of critical region $k$ in operator $i$'s sub-problem, denoted by $\Theta_{i,(k)}$. For convenience, we omit the word ``closure'' in the rest of this paper.
\subsection{Coordinator's Optimization}
The main task of the coordinator is to optimize the boundary state $\bar{\theta}$ so as to minimize the overall cost across all areas subject to boundary constraints:
\begin{equation}
\begin{array}{ll}
\min \limits_{\bar{\theta}}&J^{*}(\bar{\theta})=\sum \limits_{i=1}^{2}J_{i}^{*}(\bar{\theta}),\\
\textrm{subject}\hspace{0.1cm}\textrm{to}&\bar{H}\bar{\theta}+\tilde{h}\leq 0.
\end{array}\label{eq:11}
\end{equation}
In (\ref{eq:11}) the boundary power flow constraints are written in the same form as local sub-problems in (\ref{eq:local_MPQP}).
The challenge, however, is that the coordinator does not have the exact functional form of $J_i^*$. Thus (\ref{eq:11}) cannot be solved directly by the coordinator. The main idea of CRP, as we describe in the next section, is to obtain a partial description of $J_i^*$ from the solution to local sub-problem $i$ and to update the boundary state in an iterative fashion.
\subsection{Architecture and General Approach}
We first describe, at a high level, the architecture and the general approach. As illustrated in Fig.\ref{fig:arch}, the proposed approach involves a coordinator interacting with local area dispatch centers.
Given an intermediate boundary state, each local operator constructs the critical region that contains this boundary state, together with the parameters of its optimal cost function. The coordinator then computes a new boundary state that guarantees a reduced cost for the next iteration.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Figarch.eps}
\caption{\small The architecture and data flow of CRP}\label{fig:arch}
\end{figure}
The detailed constructions of critical regions and projections are described in Sections III.B-C. Here we illustrate the key steps of CRP using a two-dimensional example in Fig.\ref{fig:2}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Fig3.eps}
\caption{\small Illustration for key steps of CRP}\label{fig:2}
\end{figure}
Initially, the coordinator has the region $\Theta$ from which the boundary state takes values and an initial point $\bar{\theta}^{(0)}\in\Theta$. It communicates $\bar{\theta}^{(0)}$ to areas 1 and 2, which derive the critical regions containing $\bar{\theta}^{(0)}$, denoted respectively by $\Theta_{1,(1)}$ and $\Theta_{2,(1)}$, together with the quadratic optimal cost functions $J_1^*(\bar{\theta})$ and $J_2^*(\bar{\theta})$. The region $\Theta_{(1)}=\Theta_{1,(1)}\cap\Theta_{2,(1)}$ is the critical region of the coordinator's problem, within which $J^*(\bar{\theta})$ is quadratic. Hence the coordinator can obtain the optimal point $\bar{\theta}_{(1)}^{*}\in \Theta_{(1)}$ by solving a quadratic program (QP).
Note that model (\ref{eq:11}) is a convex program with a unique optimal point. Unless $\bar{\theta}_{(1)}^{*}$ happens to be globally optimal, it resides on the boundary of $\Theta_{(1)}$.
The coordinator then projects the boundary state to a new critical region with strictly lower cost by moving along the anti-gradient direction; see $\bar{\theta}^{(1)}$ in Fig.\ref{fig:2}. Note that the coordinator does not need the exact form of the new critical region $\Theta_{(2)}$.
In the following iterations, the coordinator sequentially obtains $\bar{\theta}^{(2)}$ and $\bar{\theta}^{*}$ along the convergence trajectory shown by the arrows. During the iterative process, only the critical regions through which the convergence trajectory passes (shown shaded) are constructed. Since there are only a finite number of critical regions, the iterative process stops after a finite number of steps.
The following subsections elaborate on the solution of the local sub-problems and on the method by which the coordinator updates the boundary state.
\subsection{Local Sub-problems}\label{sec:CR}
Before presenting the solution to the local sub-problems, we make the following additional assumptions in CRP:
A3) Given any boundary state that satisfies (\ref{eq:11}), all local sub-problems have feasible solutions;
A4) The JED (\ref{eq:ged_obj})-(\ref{eq:ged_dclf}) has a unique optimal solution;
A5) The local sub-problem (\ref{eq:local_MPQP}) is always non-degenerate.
For assumption A3, the boundary constraints in (\ref{eq:11}) include not only the thermal constraints of tie-lines but also other constraints imposed by system operators (such as limits on maximum export/import power) that guarantee that the local sub-problems have feasible solutions. Accordingly, the region $\Theta$ from which the boundary state takes values is defined by
\begin{equation}
\Theta=\{\bar{\theta}|\bar{H}\bar{\theta}+\tilde{h}\leq 0\}. \label{eq:Theta}
\end{equation}
For assumption A5, if model (\ref{eq:local_MPQP}) is degenerate, it can be converted to a non-degenerate one by ordering all inequality constraints in a fixed sequence, selecting along this sequence as many linearly independent active constraints as possible, and treating the remaining constraints as inactive.
The Lagrangian for the local sub-problem (\ref{eq:local_MPQP}) is
\begin{equation}
\begin{array}{ll}
\!\!\!\!L(g_{i},\lambda_{i},\mu_{i})\!=\!&\!\!\!\!c_{i}(g_i)\!+\!\lambda_{i}^T(M_ig_{i}\!+\!\bar{M}_i\bar{\theta}\!+\tilde{m}_i)\\
\!\!\!\!&\!\!\!+\mu_{i}^{T}(N_ig_i+\bar{N}_i\bar{\theta}+\tilde{n}_i),
\end{array}\label{eq:16}
\end{equation}
where $\lambda_{i}$ and $\mu_{i}$ are the multipliers for the equality and inequality constraints, respectively.
The KKT conditions are
\begin{eqnarray}
\!\!\left[\!\!\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.01pt}
2A_{i}\!\!\!\!&\!\!\!\!M_i^{T}\!\!\!&\!\!\!\{N_i\}_{\mathcal{A}}^{T}\\
M_i\!\!\!\!&\!\!\!\!\mbox{}\!\!\!&\!\!\!\mbox{}\\
\{N_i\}_{\mathcal{A}}\!\!\!\!&\!\!\!\!\mbox{}\!\!\!&\!\!\!\mbox{}
\end{array}
\!\!\!\!\right]\!\!\left[\!\!\!\!
\begin{array}{ccc}
g_{i}\\
\lambda_{i}\\
\{\mu_{i}\}_{\mathcal{A}}
\end{array}\!\!\!\!\right]
\!\!=\!\!\left[\!\!\!
\begin{array}{ccc}
-b_{i}\\
-\bar{M}_i\bar{\theta}-\tilde{m}_i\\
-\{\bar{N}_i\bar{\theta}+\!\tilde{n}_i\}_{\mathcal{A}}
\end{array}
\!\!\!\right]\!\!,
\label{eq:17}\\
\{\mu_{i}\}_{\mathcal{A}}\geq0,\{N_{i}g_{i}+\bar{N}_{i}\bar{\theta}+\tilde{n}_i\}_{\mathcal{A}}=0,\nonumber\\
\{\mu_{i}\}_{\mathcal{I}}=0,\{N_{i}g_{i}+\bar{N}_{i}\bar{\theta}+\tilde{n}_i\}_{\mathcal{I}}\leq0,\nonumber
\end{eqnarray}
where $\{\centerdot\}_{\mathcal{A}}$ and $\{\centerdot\}_{\mathcal{I}}$ denote, respectively, variables associated with active and inactive constraints.
The solution of (\ref{eq:17}) has the form:
\begin{equation}
\left[\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.1pt}
g_{i}\\
\lambda_{i}\\
\{\mu_{i}\}_{\mathcal{A}}
\end{array}\!\!\right]\!\!\!=\!\!\!
\left[\!\!
\begin{array}{ccc}
\setlength{\arraycolsep}{0.1pt}
K_{11}\!\!&\!\!K_{12}\!\!&\!\!K_{13}\\
K_{21}\!\!&\!\!K_{22}\!\!&\!\!K_{23}\\
K_{31}\!\!&\!\!K_{32}\!\!&\!\!K_{33}
\end{array}
\!\!\right]\!\!\!
\left[\!\!\!
\begin{array}{ccc}
-b_{i}\\
-\bar{M}_i\bar{\theta}-\tilde{m}_i\\
-\{\bar{N}_{i}\bar{\theta}+\tilde{n}_i\}_{\mathcal{A}}
\end{array}
\!\!\!\right]\!\!.\label{eq:18}
\end{equation}
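For intuition, the fixed-active-set solve in (\ref{eq:17})-(\ref{eq:18}) can be reproduced numerically. The sketch below uses toy data with an empty active inequality set, so only the blocks corresponding to $K_{11}$ and $K_{12}$ appear: it builds the KKT matrix, solves it, and checks that the blocks of its inverse map the right-hand side to the optimizer exactly as in (\ref{eq:18}).

```python
import numpy as np

# Toy equality-constrained case (empty active inequality set): the KKT
# system reduces to [[2A, M^T], [M, 0]] [g; lam] = [-b; -Mbar theta - m].
A = np.diag([2.0, 1.0])              # positive definite cost Hessian
b = np.array([1.0, 0.5])
M = np.array([[1.0, 1.0]])           # one equality constraint row
Mbar = np.array([[0.5]])
m = np.array([0.2])
theta_bar = np.array([0.1])

KKT = np.block([[2 * A, M.T],
                [M, np.zeros((1, 1))]])
rhs = np.concatenate([-b, -Mbar @ theta_bar - m])
sol = np.linalg.solve(KKT, rhs)
g, lam = sol[:2], sol[2:]

# KKT residuals: stationarity and primal feasibility
assert np.allclose(2 * A @ g + b + M.T @ lam, 0)
assert np.allclose(M @ g + Mbar @ theta_bar + m, 0)

# The blocks of KKT^{-1} play the role of K_11 ... K_33 in (18):
# g depends affinely on theta_bar through the K_12 block.
Kinv = np.linalg.inv(KKT)
K11, K12 = Kinv[:2, :2], Kinv[:2, 2:]
assert np.allclose(g, K11 @ (-b) + K12 @ (-Mbar @ theta_bar - m))
```

Since the KKT matrix depends only on the active set, one factorization serves all $\bar{\theta}$ within a critical region.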
\indent For active constraints, their multipliers $\{\mu_{i}\}_{\mathcal{A}}$ are affine functions of $\bar{\theta}$:
\begin{equation}
\begin{array}{ll}
\{\mu_{i}\}_{\mathcal{A}}=&\hspace{-0.25cm}-(K_{32}\bar{M}_i+K_{33}\{\bar{N}_i\}_{\mathcal{A}})\bar{\theta}\\
&\hspace{-0.25cm}-(K_{31}b_i+K_{32}\tilde{m}_i+K_{33}\{\tilde{n}_i\}_{\mathcal{A}})\geq0.
\end{array}
\label{eq:21}
\end{equation}
\indent The optimal generations $g^{*}_{i}$ are also affine functions of $\bar{\theta}$:
\begin{equation}
\begin{array}{l}
g^{*}_{i}=\bar{R}_i\bar{\theta}+\tilde{r}_i,\\
\bar{R}_i=-K_{12}\bar{M}_i-K_{13}\{\bar{N}_{i}\}_{\mathcal{A}},\\
\tilde{r}_i=-K_{11}b_{i}-K_{12}\tilde{m}_i-K_{13}\{\tilde{n}_i\}_{\mathcal{A}}.
\end{array}
\label{eq:22}
\end{equation}
By substituting (\ref{eq:22}) into the inactive constraints, we have
\begin{equation}\label{eq:23}
\begin{array}{l}
(\{N_i\}_{\mathcal{I}}\bar{R}_i+\{\bar{N}_i\}_{\mathcal{I}})\bar{\theta}\hspace{-0.05cm}+\hspace{-0.05cm}\{N_i\}_{\mathcal{I}}\tilde{r}_i\hspace{-0.05cm}+\hspace{-0.05cm}\{\tilde{n}_i\}_{\mathcal{I}}\leq 0.
\end{array}
\end{equation}
Given the point $\bar{\theta}^{(t)}$ and the corresponding $g^*_i(\bar{\theta}^{(t)})$, inequality (\ref{eq:21}) identifies the active constraints via their multipliers, and inequality (\ref{eq:23}) identifies the inactive constraints via their values.
The intersection of (\ref{eq:21}) and (\ref{eq:23}) defines the \emph{current} critical region $k$, which contains $\bar{\theta}^{(t)}$:
\begin{equation}
\begin{array}{l}
\Theta_{i,(k)}=\{\bar{\theta}|\bar{S}_{i,(k)}\bar{\theta}+\tilde{s}_{i,(k)}\leq0\},\\
\bar{S}_{i,(k)}=\left[
\begin{array}{l}
K_{32}\bar{M}_i+K_{33}\{\bar{N}_i\}_{\mathcal{A}}\\
\{N_i\}_{\mathcal{I}}\bar{R}_i+\{\bar{N}_i\}_{\mathcal{I}}
\end{array}
\right],\vspace{0.1cm}\\
\tilde{s}_{i,(k)}=\left[
\begin{array}{l}
K_{31}b_i+K_{32}\tilde{m}_i+K_{33}\{\tilde{n}_i\}_{\mathcal{A}}\\
\{N_i\}_{\mathcal{I}}\tilde{r}_i\hspace{-0.05cm}+\hspace{-0.05cm}\{\tilde{n}_i\}_{\mathcal{I}}
\end{array}
\right].
\end{array}
\label{eq:24}
\end{equation}
The critical region defined by (\ref{eq:24}) is a polyhedron. Redundant inequalities should be removed from (\ref{eq:24}); see \cite{INEQ}.
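A standard way to detect redundancy, along the lines of \cite{INEQ}, is one LP test per inequality: row $j$ is redundant if maximizing its left-hand side subject to the remaining rows cannot violate it. A minimal sketch on a hypothetical polyhedron, using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def remove_redundant(S, s):
    """Return indices of non-redundant rows of {x : S x + s <= 0}.
    Row j is redundant if max S_j x over the remaining rows is <= -s_j."""
    keep = []
    rows = list(range(S.shape[0]))
    for j in rows:
        others = [r for r in rows if r != j]
        res = linprog(-S[j], A_ub=S[others], b_ub=-s[others],
                      bounds=[(None, None)] * S.shape[1])
        # res.fun = -max S_j x; an unbounded LP means the row is needed
        if res.status != 0 or -res.fun > -s[j] + 1e-9:
            keep.append(j)
    return keep

# The unit square [0,1]^2 plus one redundant cut x1 + x2 <= 3
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
s = np.array([-1.0, 0.0, -1.0, 0.0, -3.0])   # S x + s <= 0
print(remove_redundant(S, s))                 # the last row is dropped
```

The function name and the toy polyhedron are illustrative; a production implementation would reuse LP warm starts across the rows.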
Within the current critical region defined by (\ref{eq:24}), the optimal cost function $J_{i}^{*}(\bar{\theta})$ can be obtained by substituting (\ref{eq:22}) into the cost function (\ref{eq:edi_cost}):
\begin{equation}
J_{i}^{*}(\bar{\theta})=c_i(g_i^{*}(\bar{\theta}))=\bar{\theta}^{T}\bar{A}_{i,(k)}\bar{\theta}+\bar{b}_{i,(k)}^{T}\bar{\theta}+\bar{c}_i,
\label{eq:28}
\end{equation}
where
\begin{equation}
\bar{A}_{i,(k)}=\bar{R}_i^T A_i\bar{R}_i,\bar{b}_{i,(k)}=2\bar{R}_i^{T}A_i\tilde{r}_i+\bar{R}_i^{T}b_i.
\end{equation}
The coordinator knows beforehand that each critical region is a polyhedron and the optimal cost function is quadratic. Therefore, the local system operator only needs to communicate the coefficients $\bar{S}_{i,(k)}$ and $\tilde{s}_{i,(k)}$ in (\ref{eq:24}) and $\bar{A}_{i,(k)}$ and $\bar{b}_{i,(k)}$ in (\ref{eq:28}) to the coordinator.
\subsection{The Coordinator's Problem}
In each iteration, the coordinator searches for the optimal boundary state $\bar{\theta}$ within the intersection of the current critical regions reported by the local operators:
\begin{align}
\min \limits_{\bar{\theta}}&\hspace{0.2cm}J^{*}(\bar{\theta})=\bar{\theta}^{T}\bar{A}_{\Sigma,(k)}\bar{\theta}+\bar{b}_{\Sigma,(k)}^{T}\bar{\theta}, \label{eq:coobj}\\
\textrm{subject}\hspace{0.1cm}\textrm{to}& \hspace{0.2cm}\bar{S}_{i,(k)}\bar{\theta}+\tilde{s}_{i,(k)}\leq0, i=1,2,\label{eq:cocr}\\
\mbox{}& \hspace{0.2cm}\bar{H}\bar{\theta}+\tilde{h}\leq0,\label{eq:cobnd}
\end{align}
where
\begin{equation}
\bar{A}_{\Sigma,(k)}=\sum \limits_{i} \bar{A}_{i,(k)}, \bar{b}_{\Sigma,(k)}=\sum \limits_{i} \bar{b}_{i,(k)}.
\end{equation}
The master problem (\ref{eq:coobj})-(\ref{eq:cobnd}) is a standard QP. CRP converges to the global optimal point $\bar{\theta}^*$ if all constraints associated with critical regions (\ref{eq:cocr}) are inactive. In practice we introduce a stopping tolerance $\epsilon$ on the multipliers $\bar{\mu}$ associated with the critical region constraints:
\begin{equation}
\|\bar{\mu}\|_2^2<\epsilon. \label{eq:stop}
\end{equation}
If (\ref{eq:stop}) does not hold, then there are active constraints in (\ref{eq:cocr}) and the optimal point in the current critical region $k$, denoted by $\bar{\theta}_{(k)}^{*}$, resides on its boundary. According to Theorem 1, the objective function $J^{*}(\bar{\theta})$ is differentiable in $\Theta$. Therefore, the coordinator projects the boundary state to a new critical region by moving along the anti-gradient direction:
\begin{equation}
\bar{\theta}^{(t+1)}=\bar{\theta}_{(k)}^{*}-\alpha(P\nabla_{\bar{\theta}}J^{*}),\label{eq:39}
\end{equation}
where $\alpha$ is a small positive constant. The matrix $P$ is the projection matrix that incorporates possible active boundary constraints (\ref{eq:cobnd}), which can be computed by \cite{Haftka_StructOpt}
\begin{equation}
P=I-\{\bar{H}\}_{\mathcal{A}}^T(\{\bar{H}\}_{\mathcal{A}} \{\bar{H}\}_{\mathcal{A}}^T)^{-1} \{\bar{H}\}_{\mathcal{A}}.
\end{equation}
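The projected anti-gradient step (\ref{eq:39}) can be illustrated in a few lines. The sketch below uses toy data with one active boundary row; it forms the projector onto the null space of the active constraint normals (the row form of the construction in \cite{Haftka_StructOpt}) and checks its two defining properties: idempotence, and that the projected step stays on the active face.

```python
import numpy as np

# One active boundary constraint; its row is the constraint normal.
Ha = np.array([[1.0, 1.0]])
I = np.eye(2)
# Projector onto the null space of the active rows (row form):
P = I - Ha.T @ np.linalg.inv(Ha @ Ha.T) @ Ha

grad = np.array([2.0, -1.0])            # gradient of J* at theta*_(k) (toy value)
alpha = 1e-2                            # small positive step size
theta = np.array([0.5, 0.5])
theta_next = theta - alpha * (P @ grad) # projected anti-gradient step

assert np.allclose(P @ P, P)            # the projector is idempotent
assert np.allclose(Ha @ (P @ grad), 0)  # the step stays on the active face
```

Idempotence of $P$ is exactly the property used in the proof of Theorem 2 to reduce the descent condition to $\alpha < 2/M$.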
The schematic of CRP is given in Fig.\ref{fig:scheme}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Fig/FigScheme.eps}
\caption{\small The schematic of CRP}\label{fig:scheme}
\end{figure}
\section{Introduction}
\input intro_v1
\section{Problem Decomposition}
\input Problem_modeling
\section{Proposed Method}
\input Proposedmethod
\section{Performance analysis}
\input Theoretical_new
\section{Numerical Tests}
\input Tests_new
\section{Conclusion}
\indent A coordinated multi-area economic dispatch method based on critical region projection is proposed in this paper. Given a boundary state, each area solves its local dispatch problem, determines its current critical region, and derives its optimal cost function. The coordinator minimizes the overall cost within the current critical region and then projects the boundary state to a new critical region with a reduced cost. The iterative process between the local sub-problems and the coordinator converges to the globally optimal solution within a finite number of iterations.
{
\bibliographystyle{IEEEtran}
\subsection{2-area 6-bus system test}
CRP was tested on various test beds and compared with the following three approaches:
i) Direct solution to the JED (\ref{eq:ged_obj})-(\ref{eq:ged_dclf});
ii) The Lagrangian relaxation method (LR) \cite{ConejoAguado98TPS}, where the multipliers associated with boundary constraints were initialized as zero and the artificial parameters were tuned to achieve relatively fast convergence;
iii) The marginal equivalence decomposition based method (MED) \cite{Zhao&LitvinovZheng:14TPS}, where the binding constraint set was initialized as empty and the quadratic cost functions were approximated by piecewise linear functions with 20 equal-size blocks.
In all tests, the initial boundary phase angles of CRP were set to zero. The values of $\epsilon$ and $\epsilon_1$ were set to $10^{-6}$ and the step size $\alpha$ was set to $10^{-4}$.
We first compared the four methods on a simple 6-bus system whose configuration, branch reactances, and cost functions are given in Fig.\ref{fig:6bussys}. The overall costs, iteration counts, and computation and communication costs of the four approaches are compared in TABLE \ref{table:6bussysresults}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Fig6sys.eps}
\caption{\small Configuration and parameters of 6-bus system}
\label{fig:6bussys}
\end{figure}
\vspace{-0.7cm}
\begin{table}[H]
\centering
\caption{\small Performances comparison for 6-bus system test}
\begin{tabular}{ccccc}
\hline
Method&Iteration&Overall costs&CPU time&Float data\\
\mbox{}&\mbox{}&(\$/hr)&costs (ms)&exchanged\\
\hline
JED&-&2375.00&84.28&-\\
LR&12&2376.10&340.92&48\\
MED&2&2375.00&149.27&80*\\
CRP&1&2375.00&113.38&38\\
\hline
\end{tabular}\\
\vspace{0.05cm}
*Shift matrices were not counted; the same holds for the other tests
\label{table:6bussysresults}
\end{table}
\vspace{-0.2cm}
LR converged in 12 iterations; its cost was slightly higher than that of JED due to the convergence tolerance, and its CPU time was about four times that of JED. MED needed two iterations to converge to the optimal block of its piecewise linear cost functions; its result was optimal in this test and its computation time was much lower than LR's, while its communication cost was higher.
On the other hand, as no constraint was considered in this test, there was only one critical region, which covered the entire boundary state space. Accordingly, CRP achieved the optimal solution within a single iteration, with satisfactory computation and communication efficiency.
\vspace{-0.2cm}
\subsection{2-area 44-bus system test}
A similar test was performed on a two-area system composed of the IEEE 14-bus (area 1) and 30-bus (area 2) systems. Two tie-lines were added between the two areas: the first connected bus 9 in area 1 and bus 15 in area 2 with reactance 0.15~p.u., and the second connected bus 9 in area 1 and bus 28 in area 2 with reactance 0.25~p.u. The configuration of the test system is illustrated in Fig.\ref{fig:sys}.
There were three boundary buses; with bus 9 in area 1 set as the phase angle reference, the boundary state space had dimension two. The boundary constraints (\ref{eq:ged_tieline}) were
\begin{equation}
\begin{array}{ll}
-50\,\textrm{MW}\leq P_{9-15}, P_{9-28}\leq 80\,\textrm{MW},&\\
-80\,\textrm{MW}\leq P_{9-15}+P_{9-28}\leq 80\,\textrm{MW}.&
\end{array}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=0.41\textwidth]{Fig/Figtestsys.eps}
\caption{\small Configuration of 14- and 30-bus system}
\label{fig:sys}
\end{figure}
Note that the IEEE 14- and 30-bus systems are originally independent systems and their cost coefficients are very different. Hence two scenarios were designed for this test:
i) The cost coefficients in the IEEE 30-bus system were increased to ten times their default values;
ii) The default cost coefficients were used.
For both scenarios, the performances of the four approaches were compared in TABLE \ref{table:IEEEresults2}.
In the first scenario, the prices in the two areas were comparable. Accordingly, the optimal boundary state resided inside $\Theta$ with zero gradient. CRP needed two iterations to converge, with one projection between critical regions. The critical region partition of the boundary state space at the coordinator and the convergence trajectory are plotted in Fig.\ref{fig:add}. For comparison, LR needed 127 iterations to converge, with prohibitive computation and communication costs. MED converged in three iterations; its result in this test was sub-optimal due to the piecewise linearization of the cost functions, its CPU time was about three times that of JED, and its communication cost was lower than LR's. CRP was the only one of the three distributed approaches that achieved the same result as JED; it also needed the fewest iterations and the lowest computation/communication costs.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Fig_add.eps}
\caption{\small The convergence trajectory of CRP in scenario 1}
\label{fig:add}
\end{figure}
\begin{table}[h]
\centering
\caption{\small Performances comparison for the IEEE system test}
\begin{tabular}{ccccc}
\hline
Method&Iteration&Overall costs&CPU time&Float data\\
\mbox{}&\mbox{}&(\$/hr)&costs (ms)&exchanged\\
\hline
&&Scenario 1&\\
\hline
JED&-&14597.54&124.34&-\\
LR&127&14598.11&8933.6&1016\\
MED&3&14599.73&399.43&876\\
CRP&2&14597.54&177.63&188\\
\hline
&&Scenario 2&\\
\hline
JED&-&6095.31&142.74&-\\
LR&270&6095.88&12033.5&2160\\
MED&Infeasible&-&-&-\\
CRP&2&6095.31&183.12&188\\
\hline
\end{tabular}\label{table:IEEEresults2}
\end{table}
In the second scenario, the prices in area 2 were much lower than those in area 1 and the optimal boundary state resided on the boundary of $\Theta$. The critical region partition and the convergence trajectory of CRP are given in Fig.\ref{fig:3a}. CRP needed two iterations to obtain the same result as JED with reasonable computation and communication costs. For comparison, LR needed more iterations than in the first scenario. In MED, the sub-problem of area 2 became infeasible during the iterative process.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Fig33.eps}
\caption{\small The convergence trajectory in scenario 2}
\label{fig:3a}
\end{figure}
\subsection{3-area 448-bus system test}
The four MAED approaches were also compared on a 3-area system composed of the IEEE 30-bus, 118-bus, and 300-bus systems. Their interconnections are illustrated in Fig.\ref{fig:3area}. The power limits of all tie-lines were set to 40~MW. The performances of the four approaches are compared in TABLE \ref{table:3areasysresults}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Fig/Figbigsystest.eps}
\caption{\small Configuration of the 3-area system}
\label{fig:3area}
\end{figure}
\begin{table}[h]
\centering
\caption{\small Performances comparison for 3-area 448-bus system test}
\begin{tabular}{ccccc}
\hline
Method&Iteration&Overall costs&CPU time&Float data\\
\mbox{}&\mbox{}&(\$/hr)&costs (ms)&exchanged\\
\hline
JED&-&$8.31\times10^5$&918.87&-\\
LR&Did not converge&-&-&-\\
MED&5&$8.40\times10^5$&9158.90&3630\\
CRP&5&$8.31\times10^5$&5185.98&1618\\
\hline
\end{tabular}\label{table:3areasysresults}
\end{table}
LR did not converge in this scenario. Both MED and CRP needed five iterations to converge. The overall cost of MED was slightly higher than that of JED due to the linearization, while CRP attained the same cost as JED with lower computation and communication costs than MED.
In particular, for the first iteration of CRP, we compared the number of rows in the matrices $\bar{S}_{i,(k)}$ before and after the removal of redundant inequalities in TABLE \ref{table:rcremoval}:
\begin{table}[h]
\centering
\caption{\small The number of inequalities describing current critical regions before and after the redundant removal}
\begin{tabular}{ccc}
\hline
Area&Before&After\\
\hline
30-bus&98&11\\
118-bus&486&19\\
300-bus&966&17\\
\hline
\end{tabular}\label{table:rcremoval}
\end{table}
From TABLE \ref{table:rcremoval} we find that most constraints were redundant. Although CRP might require substantial communication in the worst case, it showed satisfactory communication efficiency in all our simulations. Intuitively, this is because a low-dimensional polyhedron (the dimension being the number of boundary buses) usually has a limited number of non-redundant constraints.
\subsection{Discussions}
Among the benchmark techniques compared, both CRP and MED require the fewest iterations among local operators. This is an important feature, as the local optimizations are large and the cost of each solve is substantial. In this respect, the LR technique is at a disadvantage.

Both LR and CRP require minimal information exchange per iteration, which is also very important in practice. The MED technique, however, requires local operators to share system parameters and configurations. CRP, on the other hand, exchanges only the intermediate boundary state, critical regions, and optimal cost functions, which tend to be low-dimensional and do not reveal information about the internal parts of the subareas.

The per-iteration computation cost of LR is quite low (although more iterations are needed). MED and CRP have comparable computation costs, with MED solving larger-scale local problems and CRP computing critical regions and optimal cost functions.

In summary, our numerical experiments suggest that CRP is competitive in its overall performance in accuracy and cost.
\subsection{Finite-step Convergence and Optimality}\label{sec:opt}
\textit{\textbf{Theorem 2}}: Setting the stopping criterion as (\ref{eq:stop}), we have the following properties on the convergence and optimality of CRP:
i) For any step size $\alpha$ satisfying
\begin{equation}
\alpha < \min \limits_{k}\{\min\{\frac{2}{M_k},l_{k}\}\},\label{eq:upbndalpha}
\end{equation}
where $M_k$ is the maximum eigenvalue of $\bar{A}_{\Sigma,(k)}$ and $l_{k}$ is the distance between $\bar{\theta}_{(k)}^*$ and the boundary of $\Theta$ along the anti-gradient direction at $\bar{\theta}_{(k)}^*$, CRP converges within a finite number of steps, \textit{i.e.}, there exists a constant $K$ such that the iteration of CRP terminates at $t_{\epsilon} <K$;
ii) Assume that the QP solver for the master problem (\ref{eq:coobj})-(\ref{eq:cobnd}) converges to $\epsilon_1$-suboptimality \cite{Boyd_CVXPROG}, \textit{i.e.}, the gap between the objective functions of the primal and dual problems is bounded by
\begin{equation}
J(\bar{\theta}^*_{(k)})-D(\bar{\mu}^*_{(k)},\bar{\nu}^*_{(k)})<\epsilon_1, \label{eq:qpgap}
\end{equation}
where $D$ is the objective function of the dual problem for (\ref{eq:coobj})-(\ref{eq:cobnd}) and $\bar{\nu}$ denotes the multipliers associated with boundary constraints, then the overall cost and generations obtained by CRP converge to the optimal values when $\epsilon$ and $\epsilon_1$ both approach zero, \textit{i.e.},
\begin{equation}
\lim_{\epsilon,\epsilon_1\rightarrow 0} c(g^*(\bar{\theta}^{(t_\epsilon)}))=c(g^*(\bar{\theta}^*)), \label{eq:proof6}
\end{equation}
and
\begin{equation}
\lim_{\epsilon,\epsilon_1\rightarrow 0} g^*(\bar{\theta}^{(t_\epsilon)})=g^*(\bar{\theta}^*). \label{eq:proof15}
\end{equation}
\textit{Proof:} i) Since $J^*$ is convex and piecewise quadratic, over the entire region $\Theta$ we have
\begin{equation}
\nabla_{\bar{\theta}}^2 J^* \preceq MI, M=\max \limits_{k} M_k.\label{eq:proof0}
\end{equation}
For (\ref{eq:39}), the values $J^*(\bar{\theta}_{(k)}^{*})$ and $J^*(\bar{\theta}^{(t+1)})$ satisfy
\begin{equation}
\begin{array}{ll}
J^*(\bar{\theta}^{(t+1)})=&\hspace{-0.2cm}J^*(\bar{\theta}_{(k)}^{*})+\nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*})^T(-\alpha P\nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*}))\\
&\hspace{-0.8cm}+\frac{1}{2}(\alpha P\nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*}))^T \nabla_{\bar{\theta}}^2 J^*(z)(\alpha P\nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*})).
\end{array}
\label{eq:proof1}
\end{equation}
where $z$ is a point on the line segment between $\bar{\theta}_{(k)}^{*}$ and $\bar{\theta}^{(t+1)}$. To make $J^*(\bar{\theta}^{(t+1)})$ smaller than $J^*(\bar{\theta}_{(k)}^{*})$, the step size $\alpha$ should satisfy
\begin{equation}
-\alpha \nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*})^T P\nabla_{\bar{\theta}} J^*(\bar{\theta}_{(k)}^{*}) + \alpha^2 \frac{M}{2} \|P\nabla J^*(\bar{\theta}_{(k)}^{*})\|_2^2 < 0. \label{eq:proof11}
\end{equation}
Note that matrix $P$ is idempotent. The solution to (\ref{eq:proof11}) is
\begin{equation}
\alpha < \frac{2}{M}. \label{eq:proof2}
\end{equation}
Furthermore, the point $\bar{\theta}^{(t+1)}$ should remain in $\Theta$. Hence the upper bound on the step size $\alpha$ is given by (\ref{eq:upbndalpha}). This upper bound does not change with iterations.
For any iteration $t$, setting $\alpha$ below its upper bound, we always have
\begin{equation}
J^*(\bar{\theta}^{(t+1)})<J^*(\bar{\theta}_{(k)}^{*})\leq J^*(\bar{\theta}^{(t)}), \label{eq:proof3}
\end{equation}
which means the objective function strictly decreases over the iterations. Furthermore, there are a finite number of critical regions and $\bar{\theta}^{(t+1)}$ lies in a different critical region from $\bar{\theta}^{(t)}$. If there are $K$ critical regions, then CRP terminates within a finite number of iterations, $t_{\epsilon}<K$.
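The descent condition behind the step-size bound can be checked numerically. The sketch below uses an arbitrary toy quadratic; here $M$ is taken as the largest eigenvalue of the Hessian $\nabla^2 J = 2\bar{A}$, matching (\ref{eq:proof0}), and $P=I$ (no active boundary constraints). It confirms that any $\alpha < 2/M$ yields monotone descent and convergence to the minimizer.

```python
import numpy as np

# J(x) = x^T A x + b^T x with positive definite A; gradient steps with
# alpha < 2/M, M the largest eigenvalue of the Hessian 2A, must descend.
A = np.diag([3.0, 1.0, 0.5])
b = np.array([1.0, -2.0, 0.3])
J = lambda x: x @ A @ x + b @ x
grad = lambda x: 2 * A @ x + b

M = max(np.linalg.eigvalsh(2 * A))    # Lipschitz constant of the gradient
alpha = 1.9 / M                       # any alpha < 2/M works
x = np.array([1.0, -1.0, 2.0])
for _ in range(300):
    x_next = x - alpha * grad(x)
    assert J(x_next) <= J(x) + 1e-12  # descent at every step (up to round-off)
    x = x_next

# x approaches the unconstrained minimizer, i.e. the solution of 2A x = -b
assert np.allclose(x, np.linalg.solve(2 * A, -b), atol=1e-8)
```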
ii) The dual problem of the master problem (\ref{eq:coobj})-(\ref{eq:cobnd}) is
\begin{equation}
\begin{array}{l}
\max \limits_{\{\bar{\mu},\bar{\nu}\}} D(\bar{\mu},\bar{\nu})=-\frac{1}{4}([
\bar{H}^T\hspace{0.1cm}\bar{S}_{i,(k)}^T
]\left[
\begin{array}{c}
\bar{\nu}\\
\bar{\mu}
\end{array}\right]+\bar{b}_{\Sigma,(k)})^T\bar{A}_{\Sigma,(k)}^{-1}\\
([
\bar{H}^T\hspace{0.1cm}\bar{S}_{i,(k)}^T
]\left[
\begin{array}{c}
\bar{\nu}\\
\bar{\mu}
\end{array}\right]
+\bar{b}_{\Sigma,(k)})+[
\tilde{h}^T\hspace{0.1cm}\tilde{s}_{i,(k)}^T
]\left[
\begin{array}{c}
\bar{\nu}\\
\bar{\mu}\end{array}\right],\\
\textrm{subject}\hspace{0.1cm}\textrm{to}\hspace{0.1cm}\bar{\nu}\geq0, \bar{\mu}\geq0.
\end{array}\label{eq:dual}
\end{equation}
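By weak duality, $D(\bar{\mu},\bar{\nu})$ never exceeds the primal objective at any feasible point; this standard fact underlies the duality-gap argument below. It can be checked numerically on a toy QP. The snippet is our own illustration with synthetic data, not the paper's system model.

```python
import numpy as np

# Toy weak-duality check (illustrative data only): for the convex QP
#     min_theta  theta^T A theta + b^T theta   s.t.  H theta + h <= 0,
# the Lagrange dual value at any nu >= 0 is at most the primal value at
# any feasible theta.
rng = np.random.default_rng(1)
n, m = 3, 2
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
H = rng.standard_normal((m, n))
h = -np.ones(m)                      # theta = 0 is strictly feasible

def primal(theta):
    return theta @ A @ theta + b @ theta

def dual(nu):
    # The Lagrangian is minimized at theta = -A^{-1}(b + H^T nu)/2, giving
    # D(nu) = -(1/4)(b + H^T nu)^T A^{-1} (b + H^T nu) + nu^T h.
    c = b + H.T @ nu
    return -0.25 * c @ np.linalg.solve(A, c) + nu @ h

theta_feas = np.zeros(n)
assert np.all(H @ theta_feas + h <= 0)
gap_ok = all(dual(np.abs(rng.standard_normal(m))) <= primal(theta_feas) + 1e-9
             for _ in range(100))
print(gap_ok)
```

The closed-form minimization inside `dual` mirrors the structure of the quadratic-plus-linear dual objective above.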
By substituting (\ref{eq:dual}) into (\ref{eq:qpgap}) we obtain
\begin{equation}
\begin{array}{l}
J^*(\bar{\theta}^{(t_\epsilon)})-\frac{1}{4}(\bar{\nu}^{(t_\epsilon)})^T\bar{H}\bar{A}_{\Sigma,(k)}^{-1}\bar{H}^T\bar{\nu}^{(t_\epsilon)}\\
-\frac{1}{2}(\bar{\nu}^{(t_\epsilon)})^T\bar{H}\bar{A}_{\Sigma,(k)}^{-1}\bar{S}_{i,(k)}^T\bar{\mu}^{(t_\epsilon)}\\
-\frac{1}{4}(\bar{\mu}^{(t_\epsilon)})^T\bar{S}_{i,(k)}\bar{A}_{\Sigma,(k)}^{-1}\bar{S}_{i,(k)}^T\bar{\mu}^{(t_\epsilon)}\\
-\frac{1}{2}\bar{b}_{\Sigma,(k)}^T\bar{A}_{\Sigma,(k)}^{-1}\bar{H}^T\bar{\nu}^{(t_\epsilon)}
-\frac{1}{2}\bar{b}_{\Sigma,(k)}^T\bar{A}_{\Sigma,(k)}^{-1}\bar{S}_{i,(k)}^T\bar{\mu}^{(t_\epsilon)}\\
-\tilde{h}^T\bar{\nu}^{(t_\epsilon)}-\tilde{s}_{i,(k)}^T\bar{\mu}^{(t_\epsilon)}<\epsilon_1.
\end{array}\label{eq:proof4}
\end{equation}
When CRP terminates at $\bar{\theta}^{(t_\epsilon)}$, substituting (\ref{eq:stop}) into (\ref{eq:proof4}) and dropping the quadratic term in $\epsilon$ yields
\begin{equation}
\begin{array}{l}
J^*(\bar{\theta}^{(t_\epsilon)})-\frac{1}{4}(\bar{\nu}^{(t_\epsilon)})^T\bar{H}\bar{A}_{\Sigma,(k)}^{-1}\bar{H}^T\bar{\nu}^{(t_\epsilon)}\\
-\frac{1}{2}\bar{b}_{\Sigma,(k)}^T\bar{A}_{\Sigma,(k)}^{-1}\bar{H}^T\bar{\nu}^{(t_\epsilon)}-\tilde{h}^T\bar{\nu}^{(t_\epsilon)}
<\epsilon_1+\gamma\epsilon,
\end{array}\label{eq:proof5}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
\gamma=&\sup \limits_{\bar{\theta}\in\Theta} (\|\frac{1}{2}(\bar{\nu}^{(t_\epsilon)})^T\bar{H}\bar{A}_{\Sigma,(k)}^{-1}\bar{S}_{i,(k)}^T\\
&+\frac{1}{2}\bar{b}_{\Sigma,(k)}^T\bar{A}_{\Sigma,(k)}^{-1}\bar{S}_{i,(k)}^T+\tilde{s}_{i,(k)}^T\|).
\end{array} \label{eq:gamma}
\end{equation}
Note that inequality (\ref{eq:proof5}) actually bounds the sub-optimality level of the following problem:
\begin{equation}
\begin{array}{ll}
\min \limits_{\bar{\theta}}& J^*(\bar{\theta})=\bar{\theta}^T \bar{A}_{\Sigma,(k)} \bar{\theta}+ \bar{b}_{\Sigma,(k)}^T \bar{\theta}\\
\textrm{subject}\hspace{0.1cm}\textrm{to}& \bar{H}\bar{\theta}+\tilde{h}\leq0.
\end{array}\label{eq:proof9}
\end{equation}
Problem (\ref{eq:proof9}) minimizes the overall cost over $\Theta$ under the assumption that the quadratic function of critical region $k$ holds over the entire region $\Theta$. Let $J'$ be the optimal value of (\ref{eq:proof9}); then from (\ref{eq:proof5}) we have
\begin{equation}
J^*(\bar{\theta}^{(t_\epsilon)})-J' < \epsilon_1+\gamma\epsilon. \label{eq:proof10}
\end{equation}
By the convexity of $J^*(\bar{\theta})$, we have $J'\leq J^*(\bar{\theta}^*)$. Hence the difference between $J^*(\bar{\theta}^{(t_\epsilon)})$ and $J^*(\bar{\theta}^*)$ is bounded by
\begin{equation}
J^*(\bar{\theta}^{(t_\epsilon)})-J^*(\bar{\theta}^*) < \epsilon_1+\gamma\epsilon. \label{eq:proof8}
\end{equation}
As $\epsilon$ and $\epsilon_1$ both approach zero, the right-hand side of (\ref{eq:proof8}) tends to zero. Therefore we have
\begin{equation}
\lim_{\epsilon,\epsilon_1\rightarrow 0} [J^*(\bar{\theta}^{(t_\epsilon)})-J^*(\bar{\theta}^*)]=0. \label{eq:proof13}
\end{equation}
By the definition of $J^*$, (\ref{eq:proof6}) follows. Consequently, (\ref{eq:proof15}) also holds by the convexity of $c(g)$.
\QEDB
Theorem 2 theoretically establishes the convergence and optimality of CRP. In practice, however, we choose $\alpha$ as a small constant based on experience; we do not actually compute the upper bound in (\ref{eq:upbndalpha}) or the constant $\gamma$ in (\ref{eq:gamma}).
\subsection{Computation/Communication Costs}
The computation cost of CRP mainly includes the following two parts:
i) \emph{Local sub-problem solution and critical region determination in each area}. The local sub-problems are standard QPs, and the definitions of the current critical regions are obtained directly via (\ref{eq:21})-(\ref{eq:24});
ii) \emph{The solution of the master problem (\ref{eq:coobj})-(\ref{eq:cobnd}) at the coordinator}. The master problem is also a standard QP. Because its dimension equals the size of the boundary state vector, the computation cost of this step is expected to be small.
Regarding communication cost, as shown in Fig.~\ref{fig:arch}, the data exchange in CRP consists of the following two parts:
i) \emph{Communications from local areas to the coordinator}. Each area communicates the coefficients $\bar{S}_{i,(k)}$ and $\tilde{s}_{i,(k)}$ in (\ref{eq:24}) and $\bar{A}_{i,(k)}$ and $\bar{b}_{i,(k)}$ in (\ref{eq:28}) to the coordinator. The number of columns of $\bar{S}_{i,(k)}$ is small, but the numbers of rows of $\bar{S}_{i,(k)}$ and $\tilde{s}_{i,(k)}$ may be large. In our experience, however, a large portion of the inequalities in (\ref{eq:24}) are redundant and can be eliminated. The sizes of $\bar{A}_{i,(k)}$ and $\bar{b}_{i,(k)}$ are determined by the number of boundary buses. Notably, these coefficients do not reveal any specific information about the physical system.
ii) \emph{Communications from coordinator to local areas.} The coordinator sends the newest boundary state $\bar{\theta}$ to corresponding areas. This step only involves vector communication.
Furthermore, the finite-step convergence of CRP also guarantees its computation and communication efficiency.
\section{Introduction}
\label{sec:intro}
Computer systems are distressingly insecure.
Visiting a website, opening an email, or serving a client request is often
all it takes to be subjected to a control-hijacking attack.
These devastating low-level attacks typically exploit memory-safety
vulnerabilities such as buffer overflows, use-after-frees, or double
frees, which are abundant in large software systems.
Various techniques have been
proposed for guaranteeing memory safety~\cite{NagarakatteZMZ09, NagarakatteZMZ10, DeviettiBMZ08,
Nagarakatte2013, NagarakatteMZ14, NagarakatteMZ15,
interlocks_ahns2012, micropolicies2015, LowFat2013}, but
the challenges of efficiency~\cite{NagarakatteZMZ09, NagarakatteZMZ10},
precision~\cite{m7}, scalability~\cite{ZitserLL04}, backwards
compatibility~\cite{cheri_asplos2015}, and effective
deployment~\cite{DeviettiBMZ08, Nagarakatte2013, NagarakatteMZ14,
NagarakatteMZ15, interlocks_ahns2012, micropolicies2015, LowFat2013,
pump_asplos2015}
have hampered their widespread adoption.
Meanwhile, new mitigation techniques have been proposed to deal with the most
onerous consequences of memory unsafety---for instance, techniques
aimed at preventing
control-flow hijacking even in unsafe
settings~\cite{Abadi2005, AbadiBEL09, Erlingsson07, TiceRCCELP14, BurowCBPNLF16}.
Unfortunately, these defenses often underestimate the power of the attackers
they may face~\cite{Erlingsson07, SnowMDDLS13, outofcontrol_ieeesp2014,
DaviSLM14, EvansLOSROS15, EvansFGOTSSRO15}---if, indeed, they have any
clear model at all of what they are protecting against.
Clarifying the precise security properties and
attacker models of practical mitigation techniques is thus an important
research problem---and a challenging one, since a good model has to capture
not only the defense mechanism itself but also the essential features of the
complex world in which low-level attacks occur.
In this paper we focus on the use of
{\em compartmentalization}~\cite{GudkaWACDLMNR15, cheri_oakland2015,
wedge_nsdi2008} as a strong, practical defense mechanism
against low-level attacks exploiting memory unsafety.
The key idea is to break up a large software system into
mutually distrustful components that run with minimal
privileges and can interact only via well-defined interfaces.
This is not only good software engineering; it also gives strong security
benefits. In particular, control-hijacking attacks can compromise only
specific components with exploitable vulnerabilities, and thus only give the
attacker direct control over the privileges held by these components.
Also, because compartmentalization can be enforced by more coarse-grained
mechanisms, acceptable efficiency and backwards compatibility are generally
easier to achieve than for techniques enforcing full-blown memory safety.
When used as a defense mechanism against memory unsafety,
compartmentalization is often achieved via cooperation
between a compiler and a low-level compartmentalization
mechanism~\cite{KrollSA14, ZengTE13, JuglaretHAPST15, GudkaWACDLMNR15,
PatrignaniDP16, cheri_oakland2015, cheri_asplos2015}.
In this paper we use {\em compartmentalizing compilation} to refer to
cooperative implementations of this sort.
The compiler might, for instance, insert dynamic checks and cleanup
code when switching components and provide information about
components and their interfaces to the low-level compartmentalizing
mechanism, which generally provides at least basic
isolation.
Two such low-level compartmentalization technologies are already widely
deployed: process-level privilege separation~\cite{Kilpatrick03,
GudkaWACDLMNR15, wedge_nsdi2008} (used, \EG by OpenSSH~\cite{ProvosFH03}
and for sandboxing plugins and tabs in modern web browsers~\cite{ReisG09})
and software fault isolation~\cite{sfi_sosp1993} (provided, \EG by
Google Native Client~\cite{YeeSDCMOONF10}); many more
are on the drawing boards~\cite{micropolicies2015, sgx, PatrignaniDP16,
cheri_oakland2015, cheri_asplos2015}.
So what security guarantees does compartmentalizing compilation provide,
and what, exactly, is its attacker model?
A good starting point for addressing these questions is the familiar notion
of {\em fully abstract compilation}~\cite{abadi_protection98,
PatrignaniASJCP15, AgtenSJP12, abadi_aslr12, JagadeesanPRR11,
FournetSCDSL13, AbadiPP13, AbadiP13, AhmedB11, AhmedB08, NewBA16}.
A fully abstract compiler toolchain (compiler, linker, loader, and
underlying architecture with its security mechanisms) protects the
interactions between a compiled program and its low-level environment,
allowing programmers to reason
soundly about the behavior of their code when it is placed in an arbitrary
target-language context, by considering only its behavior in arbitrary
source-language contexts.
In particular, if we link the code produced by such a compiler against
arbitrary low-level libraries---perhaps compiled from an unsafe language or
even written directly in assembly---%
the resulting execution will not be any less
secure than if we had restricted ourselves to library code written in the
same high-level language as the calling program.
(Why is it useful to
restrict attention to attackers written in a high-level language? First,
because reasoning about what attackers might do---in particular, what
privileges they might exercise---is easier in a high-level language. And
second, because by phrasing the property in terms of low- and
high-level programs rather than directly in terms of attacker behaviors,
specific notions of privilege, etc., we can re-use the same property for
many different languages.)
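As a rough executable caricature of this definition (entirely our own construction, far coarser than any real compiler model), full abstraction demands that programs equivalent under all source contexts remain indistinguishable under all target contexts:

```python
# Toy rendering of full abstraction: programs expose an observable result
# "ret" and hold an internal "secret"; source contexts see only "ret",
# while target contexts may probe the whole compiled artifact.

p1 = {"ret": 0, "secret": 42}
p2 = {"ret": 0, "secret": 7}

def src_equiv(p, q, src_contexts):
    # Source contexts observe only the returned value.
    return all(ctx(p["ret"]) == ctx(q["ret"]) for ctx in src_contexts)

def tgt_equiv(p, q, tgt_contexts):
    return all(ctx(p) == ctx(q) for ctx in tgt_contexts)

def leaky_compile(p):
    # Bad "compiler": leaves the secret readable in the target program.
    return {"ret": p["ret"], "secret": p["secret"]}

def erasing_compile(p):
    # Good "compiler": only observable behavior survives compilation.
    return {"ret": p["ret"]}

src_contexts = [lambda r: r, lambda r: r + 1]
tgt_contexts = [lambda t: t["ret"], lambda t: t.get("secret", 0)]

assert src_equiv(p1, p2, src_contexts)          # equivalent at the source
# The leaky compiler breaks full abstraction: a target context tells
# the compiled programs apart.
assert not tgt_equiv(leaky_compile(p1), leaky_compile(p2), tgt_contexts)
# The erasing compiler preserves the equivalence.
assert tgt_equiv(erasing_compile(p1), erasing_compile(p2), tgt_contexts)
```

The "leaky" compiler here plays the role of a toolchain whose low-level environment can observe more than any source-level context could.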
Since full abstraction works by partitioning the world into a program and
its context, one might
expect it to apply to compartmentalized programs as well:
some set of components that are assumed to be subject to control-hijacking
attacks could be
grouped into the ``low-level context,'' while some others that are assumed
to be immune to
such attacks would constitute the ``high-level program.'' Full
abstraction would then allow us to reason about the possible behaviors
of the whole system using the simplifying assumption that the
attacker's injected behavior for the compromised components can be
expressed in the same high-level language as the good components.
Sadly, this intuition does not withstand closer examination. Full
abstraction, as previously formulated in the literature, suffers
from three important limitations that make it unsuitable for characterizing
the security guarantees of compartmentalizing compilation.
First, fully abstract compilation assumes that the source language itself is
secure,
so that it makes sense to define target-level security with respect to the
semantics of the source language.
However, compartmentalization is often applied to languages like C and C++,
which do {\em not} have a secure semantics---the C and C++ standards leave
most of the security burden to the programmer by calling out a large number
of {\em undefined behaviors}, including memory-safety violations, that are
assumed never to occur.
Valid compilers for these languages are allowed to generate code that does
literally {\em anything}---in particular, anything a remote
attacker may want---when applied to inputs that lead to undefined behavior.
There is no way to tell, statically, whether or not a program may have
undefined behavior, and compilers do not check for this situation. (Indeed,
not only do they not check: they aggressively exploit the assumption of no
undefined behaviors to produce the fastest possible code for well-defined
programs, often leading to easily exploitable behaviors when this assumption
is broken.)
The point of compartmentalizing compilation
is to ensure that the potential effects of undefined
behavior are limited to the compromise of the component in which it occurs:
other components can only be influenced by compromised
ones via controlled interactions respecting specified interfaces.
To characterize the security of compartmentalizing
compilation, we therefore need a formal property that can meaningfully
accommodate source languages in which components can be compromised
via undefined behavior.
Full abstraction as conventionally formulated does not fit the bill,
because, in order to preserve equivalences of programs with undefined
behavior, compilers must abandon the aggressive optimizations that are
the reason for allowing undefined behaviors in the first place.
To see this, consider C expressions {\tt buf[42]} and {\tt
buf[43]} that read at different positions {\em outside} the bounds of a
buffer {\tt buf}.
These two programs are equivalent at the source level: they both
lead to arbitrary behavior.
However, a real C compiler would never compile these expressions to
equivalent code, since this would require runtime checks that many C
programmers would deem too expensive.
Second, fully abstract compilation makes an {\em open world} assumption
about the attacker context. While the context is normally required to
be compatible with the protected program, for instance by respecting
the program's typed interface, the structure and privilege of the context are
unrestricted (the full abstraction definition quantifies over {\em
arbitrary} low-level contexts).
This comes in direct contradiction with the idea of least
privilege,
which is crucial to compartmentalization, and which relies on the fact
that even if a component is compromised, it does not immediately get
more privilege.
Compromised components cannot change the basic rules of the
compartmentalization game.
For instance, in this paper we consider a static compartmentalization
setting, in which the breakup of the application into components is
fixed in advance, as are the privileges of each component.
A security property suitable for this setting needs to be restricted
to contexts that conform to a fixed breakup into components with
static privileges.%
\footnote{%
In a setting where new components can be dynamically created and
privileges can be exchanged dynamically between components, the
details of this story will be more complicated; still, we expect any
secure compartmentalizing compilation property to limit the ability
of low-level attacker contexts to ``forge'' the privileges of
existing components.}
Third, because the definition of full abstraction involves applying the
compiler only to a program and not to the untrusted context in which it
runs, a fully abstract compiler may choose to achieve its protection goals
by introducing just a single barrier around the trusted part to protect it
from the untrusted part~\cite{PatrignaniASJCP15, AgtenSJP12, LarmuseauPC15,
PatrignaniCP13, patrignani_thesis}.
Such compilation schemes force the programmer to commit in advance to a
single compromise scenario, \IE to a single static split of their
application into a ``good'' trusted program and an ``evil'' untrusted
context from which this \iffull trusted\fi program has to be protected.
This is not realistic in the setting of compartmentalizing compilation,
where we generally cannot predict which components may be vulnerable to
compromise by control hijacking attacks, and instead must simultaneously
guard against multiple compromise scenarios.
Compartmentalizing compilers allow us to build more secure applications
that go beyond the blunt trusted/untrusted distinction made by some fully
abstract compilers.
To describe their guarantees accurately, we thus need a new property that
captures the protection obtained by breaking up applications into multiple
mutually distrustful components, each running with least privilege, and that
permits reasoning about multiple scenarios in which different subsets of
these components are compromised.
Our main contribution is the definition of such a property,
which we call {\em secure compartmentalizing compilation (SCC)}
(\autoref{sec:sc}).
While similar in many respects to full abstraction, our property
escapes the three limitations discussed above.
First, it
applies to unsafe source languages with
undefined behaviors by introducing a new notion of {\em fully defined}
sets of components.
While undefined behavior is a property of whole programs, full definedness
is compositional.
Intuitively, a set of components is fully defined if they cannot be
{\em blamed}~\cite{FindlerF02prime} for undefined behavior
in any context satisfying fixed interfaces.
Second, SCC makes a {\em closed-world} assumption about compromised
components, enforcing the basic rules of the compartmentalization game
like the fixed division into components and the fixed privileges of
each component, including for instance with which other components it
is allowed to interact.
Third, SCC ensures protection for
multiple, mutually distrustful components; it does not assume we know in
advance which components are going to be compromised (i.e., in the C
setting, which components may contain exploitable undefined behaviors), but
instead explicitly quantifies over all possible compromise scenarios.
Our second contribution is relating SCC
to standard formulations of full abstraction both intuitively and formally
(\autoref{sec:fa-not-enough}).
We start from full abstraction and show how the three
limitations that make it unsuitable in our setting can be lifted
one by one.
This results in two properties we call {\em structured full abstraction} and
{\em separate compilation}, which can be combined and instantiated to obtain
SCC.
While our property directly captures the intuition of our
attacker model, reducing it to structured full abstraction is a useful
technical step, since the latter is easier to establish for specific
examples using a variant of existing proof techniques.
Moreover, arriving at the same property by two different paths
increases our confidence that we found the right property.
Our third contribution is establishing the SCC property for a simple
unsafe imperative language with components interacting via procedure
calls and returns, compiling to an abstract machine with protected
compartments (\autoref{sec:instance}).
Despite the simplicity of the setting, this result gives useful
insights.
First, the source language and compilation strategy enable interesting
attacks on components with potential buffer overflows, similar to those
found in C.
Second, we illustrate how SCC can be
achieved by the cooperation of a compiler (cleaning and restoring
registers) and a low-level protection mechanism (totally isolating
compartments and providing a secure interaction mechanism using calls
and returns).
Third, our SCC proof adapts a standard technique called {\em trace
semantics}~\cite{JeffreyR05, PatrignaniC15}, via the reduction to
structured full abstraction.
The closed-world assumption about the context made by structured full
abstraction requires some nontrivial changes to the trace
semantics proof technique.
The remainder of the paper describes each of our three contributions in
detail (\autoref{sec:sc}--\autoref{sec:instance}) and closes by discussing
related work (\autoref{sec:related}) and future directions
(\autoref{sec:conclusion}).
The supplemental materials associated with this paper include:
(a) a Coq proof for \autoref{thm:sfa-to-sc};
(b)~technical details and proofs for the SCC instance from
\autoref{sec:instance} (while most of these proofs are done only on
paper, the main structured full abstraction result,
\autoref{thm:sfa-instance}, is also proved in Coq); and
(c) a trace mapping algorithm in OCaml using property-based
testing to support \autoref{assumption:definability}.
These materials can be found at:
\url{https://github.com/secure-compilation/beyond-good-and-evil}
\section{Secure Compartmentalizing Compilation}
\label{sec:sc}
We start with an intuitive explanation of compartmentalizing compilation,
its attacker model, and its security benefits, and then introduce {\em
secure compartmentalizing compilation (SCC)}.
We consider compartmentalization mechanisms provided by the
compiler and runtime system for an unsafe programming language with some
notion of components.%
\footnote{We use the term ``runtime system'' loosely to include operating
system mechanisms~\cite{Kilpatrick03, GudkaWACDLMNR15, wedge_nsdi2008,
ProvosFH03, ReisG09} and/or hardware
protections~\cite{micropolicies2015,sgx,PatrignaniDP16,cheri_oakland2015}
that may be used by the compiler.}
In \autoref{sec:instance} we will present a simple example in detail,
but for the present discussion it suffices to think informally of C or
C++ enriched with some compartmentalization mechanism.
This mechanism allows security-conscious developers to break large
applications into mutually distrustful components running with
least privilege and interacting only via well-defined interfaces.
We assume that the interface of each component also gives a precise
description of its privilege.
Our notion of interface here is quite generic: interfaces might include any
information that can be dynamically enforced on components, including module
signatures, lists of allowed system calls, or more detailed access control
specifications describing legal parameters to inter-component calls (\EG
ACLs for files).
We assume that the division of the application into components and the
interfaces of those components are statically determined and fixed
throughout execution.
In \autoref{sec:instance}, we instantiate this picture with a rather simple
and rigid notion of components and interfaces, where components don't
directly share any state and where the only thing one component can do to
another one is to call the procedures allowed by the interfaces of both
components.
We do not fix a specific compartmentalizing compilation mechanism; we
just assume that whatever mechanism is chosen can guarantee that, even
if one component is compromised (e.g., by a control-hijacking attack),
it will still be forced to adhere to its specified interface in its
interactions with other components.
What a compromised component {\em can} do in this model is use its access
to other components, as allowed by its interface, to trick them into
misusing their own privileges (confused deputy attacks) and/or attempt to
mount further control-hijacking attacks on other
components by communicating with them via defined
interfaces.
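To make this concrete, here is a small C-style sketch of such a component (the names and structure are ours, purely illustrative; the mechanism that enforces the interface is left abstract). The only privilege the component grants to others is its single exported procedure, and since callers may be compromised, that procedure validates its argument rather than trusting it:

```c
#include <assert.h>

/* Illustrative sketch (hypothetical names): a component whose interface
   exports exactly one procedure.  Its state and helpers are private, so
   even a compromised caller can interact with it only via parse_field,
   with arguments of the caller's choosing -- hence the check below. */

static int field_table[4] = {7, 11, 13, 17};   /* private state */

/* The single entry point listed in this component's interface. */
int parse_field(int field_id) {
  /* Do not trust other components: reject ill-formed requests instead
     of letting them trick us into misusing our own privileges. */
  if (field_id < 0 || field_id >= 4) return -1;
  return field_table[field_id];
}
```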
We do not assume we know in advance which components will be
compromised: the compartmentalizing compilation mechanism has to
protect each component from all the others.
This allows developers to reason informally about various compromise
scenarios and their impact on the security of the whole
application~\cite{GudkaWACDLMNR15}, relying on conditional
reasoning of the form: ``If {\em these} components get taken over and {\em
these} do not, then {\em this} might happen (while {\em that} cannot),
whereas if these other components get taken over, then this other thing
might happen...''
If the practical consequences of some plausible compromise scenario are too
serious\iffull to ignore\fi,
developers can further reduce or separate privilege by
narrowing interfaces or splitting components,
or they can make components more defensive by dynamically validating the
inputs they receive from other components.
For instance, developers of a compartmentalized web browser~\cite{ReisG09}
might reason about situations in which some subset of plugins and tabs gets
compromised and how this might impact the browser kernel and the remaining
plugins and tabs.
A possible outcome of this exercise might be noticing that, if the browser
kernel itself is compromised, then all bets are off for all the components
and the application as a whole, so the developers should put extra energy into
defending the kernel against attacks from compromised plugins or tabs. On
the other hand, if interfaces {\em between} tabs and plugins are
appropriately limited, then compromise of one should not disrupt the rest.
Our goal is to articulate a security property that supports reasoning about
multiple compromise scenarios and clarifies the associated attacker model.
At the same time, our property is intended to serve as a benchmark for
developers of compartmentalizing compilation mechanisms who want to argue
formally that their mechanisms are secure.
In the rest of this section we explain the technical ideas behind the
SCC property and then give its formal definition.
An {\em application} is a set $\ii{Cs}$ of {\em components}, with
corresponding {\em interfaces} $\ii{CIs}$.
These components are separately compiled (individually compiling each
component in the set \ii{Cs} is written $\comp{\ii{Cs}}$) and linked
together (written $\link{\comp{\ii{Cs}}}$) to form an executable
binary for the application.
SCC quantifies over all {\em compromise
scenarios}---i.e., over all ways of partitioning the components
into a set of compromised ones and a set of uncompromised ones.
In order to ensure that the set
of compromised components doesn't expand during evaluation,
we require that the uncompromised components be {\em fully defined}
with respect to the interfaces of the compromised components.
That is, the uncompromised components must not exhibit undefined
behaviors
even if we replace the compromised components with arbitrary code (obeying
the same interfaces).
The full definedness condition is a necessary part of
the {\em static compromise model} considered in this paper.
Intuitively, if an uncompromised component can be tricked into an
undefined behavior by interface-respecting communication with other
components, then we need to conservatively assume that the already
compromised components will succeed in compromising this component
dynamically, so it belongs in the set of compromised components from
the start.
This static model is much simpler to reason about than a model of
dynamic compromise, in which one could perhaps provide guarantees to
not-fully-defined components up to the point at which they exhibit
undefined behavior; such a model, however, could invalidate standard
compiler optimizations that involve code motion.
Moreover, it seems highly nontrivial to define our property for this
more complex model.
\autoref{fig:compromise-scenario} illustrates one way to partition
five components $C_1,\dots,C_5$ with interfaces $i_1,\ldots,i_5$,
representing the scenario where $C_2$, $C_4$, and $C_5$ are
compromised and $C_1$ and $C_3$ are not.
In order for this compromise scenario to be considered by our property,
$C_1$ and $C_3$ need to be fully defined with respect to interfaces
$i_2$, $i_4$, and $i_5$, which means $C_1$ and $C_3$ cannot cause
undefined behaviors when linked with any components $B_2, B_4, B_5$
satisfying interfaces $i_2, i_4, i_5$.
Formally, full definedness is a language-specific parameter to our
definition of SCC, just as the program equivalence relations are
language-specific parameters to both SCC and vanilla full abstraction.
For instance, in the simple imperative language in
\autoref{sec:instance}, we will say that components $\ii{Cs}$
are fully defined with respect to a set of adversary interfaces
$\ii{BIs}$ if, for all components $\ii{Bs}$
satisfying $\ii{BIs}$, the complete program
$\link{\comp{\ii{Cs}} \mathrel{\cup} \comp{\ii{Bs}}}$ cannot reduce
to a stuck non-final state (corresponding to undefined behavior) where
the currently executing component is one of the ones in
$\ii{Cs}$ (\IE no component in $\ii{Cs}$ can be
``blamed''~\cite{FindlerF02prime} for undefined behavior).
Full definedness might well be defined differently for another
language; for instance, in a concurrent language undefined behaviors
cannot be as easily modeled by stuckness since normally other threads
can proceed even if one of the threads is stuck.
One last thing to note is that full definedness of a set of components is
generally a much weaker property than the full definedness of each
individual component in the set. Since the interfaces of
the adversary components \ii{BIs} can (and in \autoref{sec:instance} do)
restrict not only the operations they export but also the operations they
import from \ii{Cs}, the components in the set can export dangerous
operations just to other components in the set; the components actually in
the set might then all use these operations properly, whereas arbitrary
components with the same interfaces could abuse them to trigger undefined
behaviors.
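A minimal C sketch of this situation (hypothetical names; interfaces are not represented explicitly): component $C_1$ exports an unchecked operation intended only for its sibling $C_3$, which always uses it in bounds. The pair is fully defined as a set, yet $C_1$ alone is not, since an arbitrary component with $C_3$'s interface could pass an out-of-range index and trigger undefined behavior:

```c
#include <assert.h>

/* C1: exports raw_read, intended (per the interfaces) only for its
   sibling C3.  An out-of-range i is undefined behavior, so C1 is not
   fully defined on its own. */
static int table[8] = {0, 1, 2, 3, 4, 5, 6, 7};
int raw_read(int i) { return table[i]; }      /* no bounds check */

/* C3: the only importer of raw_read in this set; it validates n
   first, so within this set raw_read is never misused. */
int sum_first(int n) {
  if (n < 0 || n > 8) return -1;
  int s = 0;
  for (int i = 0; i < n; i++) s += raw_read(i);
  return s;
}
```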
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{img/compromise-scenario-cropped-14.pdf}
\caption{Compromise scenarios}
\label{fig:compromise-scenario}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{img/secure-compartmentalization-more-formal-cropped-14.pdf}
\caption{SCC distinguishability game, for one of the
  compromise scenarios}
\label{fig:secure-compartmentalization-more-formal}
\end{figure}
SCC states that, \emph{in all such
compromise scenarios}, the compiled compromised components must not be
able to cause more harm to the compiled uncompromised components via
low-level attacks than can be caused by some high-level components written
in the source language.
Basically this means that any low-level attack can
be mapped back to a high-level attack by compromised components satisfying
the given interfaces.
The property additionally ensures that the high-level components
produced by this ``mapping back'' are fully defined with respect to
the interfaces of the uncompromised components.
So with SCC,
instead of having to reason about the low-level consequences of
undefined behavior in the compromised components, we can
reason in the source language and simply replace the compromised
components by equivalent ones that are guaranteed to
cause no undefined behavior.
Formally, SCC is stated by quantifying over multiple
distinguishability games, one for each
compromise scenario, where the individual games are reminiscent of full
abstraction.
The goal of the attacker in each game is to distinguish between two
variants of the uncompromised components.
\autoref{fig:secure-compartmentalization-more-formal} illustrates
these two variants as $C_1, C_3$ and $D_1, D_3$, where we use $\neqh$ and
$\neql$ to indicate that the behaviors of two (high- or
low-level) complete programs are distinguishable, \IE they produce
different observable outcomes when executed.
For this compromise scenario, SCC specifies that,
if compiled compromised components $\comp{C_2}$, $\comp{C_4}$,
$\comp{C_5}$ can distinguish the $\comp{C_1},\comp{C_3}$ and
$\comp{D_1},\comp{D_3}$ variants at the low
level, then
there must exist some (fully defined) components $A_2, A_4, A_5$
that distinguish ${C_1},{C_3}$ and ${D_1},{D_3}$ at
the high level.
With all this in mind, the SCC property is
formally
expressed as follows:
\begin{defn}[SCC]~
\begin{itemize}
\item For any complete compartmentalized program and
for all ways of {\em partitioning} this program into
a set of {\em uncompromised} components $\ii{Cs}$
and their interfaces $\ii{CIs}$,
and a set of {\em compromised} components $\ii{Bs}$
and their interfaces $\ii{BIs}$, so that
$\ii{Cs}\text{ is {fully defined} with respect to }\ii{BIs}$, and
\item for all ways of replacing the uncompromised components with
components $\ii{Ds}$ that satisfy the same interfaces $\ii{CIs}$
and are {fully defined} with respect to $\ii{BIs}$,
\item if $\link{\comp{\ii{Cs}} \mathrel{\cup} \comp{\ii{Bs}}}
\neql \link{\comp{\ii{Ds}} \mathrel{\cup} \comp{\ii{Bs}}}$,
\item then there exist components $\ii{As}$ satisfying interfaces $\ii{BIs}$
and {fully defined} with respect to $\ii{CIs}$ such that\\
$\link{\ii{Cs} \mathrel{\cup} \ii{As}}
\neqh \link{\ii{Ds} \mathrel{\cup} \ii{As}}$.
\end{itemize}
\end{defn}
As suggested before, our property applies to any {\em fully defined}
sets of components \ii{Cs} and \ii{Ds} (which cannot be dynamically
compromised by some components with interfaces \ii{BIs}).
We conjecture that this full definedness precondition is strictly
required in the static compromise model we are assuming.
It is worth noting that we are not proposing any method for proving that
programs are fully
defined; this comes with the territory when dealing with
C-like languages.
What we are after is bringing formal foundations
to {\em conditional} reasoning of the form ``if these \ii{Cs} are fully
defined and the remaining components \ii{Bs} get compromised,
then...''
Note that the $\ii{Bs}$ in our SCC definition need not be fully defined---\IE the
property allows the compromised components to contain undefined
behaviors (this may well be why they are compromised!) and promises
that, even if they do, we can find some other components $\ii{As}$
that are able to distinguish between $\ii{Cs}$ and $\ii{Ds}$ in
the source language without causing any undefined behaviors.
Indeed, for those compromise scenarios in which $\ii{Bs}$ are already
fully defined, our SCC property trivially follows from correct
compilation (\autoref{assumption:compiler-correctness}) since in that case
we can always pick $\ii{As} = \ii{Bs}$.
This generic property is parameterized over
a source and a target language with a notion of component for each,
source- and target-level notions of linking sets of components ($\bowtie$),
source- and target-level notions of distinguishability ($\not\sim$),
a compiler mapping source components to target components ($\downarrow$),
a source-level notion of interface and an interface satisfaction relation
(lifted
to sets of components and interfaces), and
a notion of a set of components $\ii{Cs}$ being
fully defined with respect to a set of adversary interfaces $\ii{BIs}$.
\section{From Full Abstraction to SCC}
\label{sec:fa-not-enough}
\autoref{sec:sc} presented SCC by directly
characterizing the attacker model against which it defends.
In this section we step back and show how SCC can instead be obtained by
starting from the well-established notion of full abstraction and
removing each of the three limitations that make it unsuitable
in our setting.
This results in two properties, {\em structured full abstraction} and {\em
separate compilation}, which we then combine and instantiate to obtain SCC.
This reduction is
not only theoretically interesting, but also practically useful,
since structured full abstraction can more easily be proved by adapting
existing proof techniques, as we will see in \autoref{sec:instance}.
\ifsooner
\ch{It might be easier to first remove each of the limitations
independently, and only then to remove all three at once. The
problem is that things get rather complex when combining all 3
fixes, so it might be better to explain the parts separately
before getting there.}
\ch{Just noticed that the MFA idea only makes sense with undefined
behaviors, so the only parallel split we can do is for SFA and LFA,
but MFA needs to build on top of LFA}
\ch{If we go for this we would also need a good name for the
combination of LFA, SFA, and MFA ... LSMFA? Alternatively, we could
call the complete combination MFA, and call LFA + separate
compilation something else; like CFA (for Compositional FA) or
CLFA (for Compositional Low-Level FA}
\fi
\SUBSECTION{Full abstraction}
\label{sec:fa}
A {\em fully abstract} compiler protects compiled programs from their interaction
with unsafe low-level code and thus allows sound reasoning about security
(and other aspects of program behavior) in terms of the source language.
Fully abstract compilation~\cite{abadi_protection98}
intuitively states that no
low-level attacker can do more harm to a compiled program than some
program in the source language already could.
This strong property requires enforcing all high-level language
abstractions against arbitrary low-level attackers.
Formally, full abstraction is phrased as a distinguishability game
requiring that low-level attackers have no more distinguishing
power than high-level ones.
\begin{defn}\label{defn:fa-simple}
We call a compilation function (written $\downarrow$) {\em fully abstract}
if, for all $P$ and $Q$,
\[
(\forall A.~ A[P] \eqh A[Q])
\Rightarrow (\forall a.~ a[\comp{P}] \eql a[\comp{Q}]).
\]
\end{defn}
\noindent Here, $P$ and $Q$ are partial programs, $A$ is a high-level
context whose job is to try to distinguish $P$ from $Q$, and $a$ is a
low-level ``attacker context'' that tries to distinguish $\comp{P}$ from
$\comp{Q}$.
The relations $\eql$ and $\eqh$ are parameters to the definition,
representing behavioral equivalence at the two levels.
To be useful, they should allow the context to produce an observable action
every time it has control, letting it convert its knowledge into
observable behaviors.
For instance, a common choice for behavioral equivalence is based on
termination: two deterministic programs are behaviorally equivalent if
they both terminate or both diverge.
When stated this way (as an implication rather than an equivalence),
full abstraction is largely orthogonal to compiler
correctness~\cite{leroy09:compcert, KumarMNO14}.
While compiler correctness is about preserving behaviors when
compiling from the source to the target, proving full abstraction
requires some way to map each distinguishing target-level context to a
source-level one, which goes in the opposite direction.
This is easiest to see by looking at the contrapositive:
\[
\forall a.~ a[\comp{P}] \neql a[\comp{Q}] \Rightarrow
\exists A.~ A[P] \neqh A[Q]
\]
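As a concrete (and standard) illustration, consider the following C sketch of two partial programs that are equivalent at the source level, since no source-level context can observe their private constants, yet whose compilations could be distinguished by a low-level context that scans memory, unless the compiler somehow protects them. The example is ours, not taken from any particular compiler:

```c
#include <assert.h>

/* P and Q answer identically to every source-level context, so they
   are behaviorally equivalent in the source language.  A low-level
   context with unrestricted memory access could nevertheless read the
   differing private constants and distinguish the compiled programs. */
static const int secret_p = 42;   /* private to P */
static const int secret_q = 7;    /* private to Q */

int query_p(void) { (void)secret_p; return 0; }
int query_q(void) { (void)secret_q; return 0; }
```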
\SUBSECTION{Problem 1: Undefined behavior}
\label{sec:prob1}
The first limitation of full abstraction is that it cannot realistically be
applied to compiling from an unsafe language with undefined behaviors.
Undefined behaviors are (arbitrarily!) nondeterministic, and no
realistic compiler can preserve this nondeterminism in the target
as required by full abstraction. (Removing it from the source language
would negate the performance and optimization benefits that are the
reason for allowing undefined behaviors in the first place.)
To adapt full abstraction to a source language with undefined behaviors, we
need to restrict attention only to {\em defined}
complete programs in the source language.
And even with this restriction, defining full abstraction still
requires a little care. For instance, the following variant is
wrong (formally, {\em defined} is another parameter to this property):%
\[
\begin{array}{rl}
(\forall A.~ A[P]\text{ and }A[Q]\textit{ defined }\Rightarrow A[P] \eqh A[Q])
&\Rightarrow\\
(\forall a.~ a[\comp{P}] \eql a[\comp{Q}])
\end{array}
\]
Any $P$ and $Q$ that both trigger undefined behavior as soon
as they get control would be considered equivalent in the high-level language
because there is no context that can make these programs defined
while observing some difference between them.
All such programs would thus need to be equivalent at the low level,
which is clearly not the case (since their nondeterminism can be resolved
in different ways by the compiler).
The problem here is that, if $P$ and $Q$ trigger undefined behavior,
the context often cannot compensate by making the program defined, and
so cannot produce an observation that distinguishes $P$ and $Q$.
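For instance (an illustration of ours, not tied to a particular compiler), the following two C procedures trigger an out-of-bounds read as soon as they receive control. Under the flawed definition above they would count as equivalent, since no context can make them defined, yet their compilations need not be indistinguishable:

```c
#include <assert.h>

/* Each procedure performs an out-of-bounds read immediately upon
   entry, i.e., undefined behavior as soon as it gets control.  No
   context can prevent this, so the flawed definition above would deem
   p and q equivalent despite their clearly different code. */
int p(void) { int a[1] = {0}; return a[3];     }  /* UB */
int q(void) { int a[1] = {0}; return a[4] + 1; }  /* UB, different code */
```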
\SUBSECTION{Solution 1: Full abstraction for unsafe languages}
\label{sec:lfa}
The responsibility of keeping $A[P]$ defined should thus be shared
between $A$ and $P$.
For this we assume a compositional notion of {\em fully defined}
behavior for programs and contexts as two parameters to
\autoref{defn:lfa} below.
We require that these parameters satisfy the following properties: (1)
a program is fully defined if it does not cause undefined behavior in
any fully defined context, and (2) a context is fully defined if it
does not cause undefined behavior when we plug any fully defined
program into it.
Note that properties (1) and (2) are circular and therefore cannot be
used as the definition of full definedness.
For specific languages (\EG the one in \autoref{sec:instance}) we can
break this circularity and define full definedness using {\em
blame}~\cite{FindlerF02prime}: intuitively we call a partial program
{\em fully defined} when it cannot be blamed for undefined behavior in
any context whatsoever.
Similarly, we call a context fully defined when it cannot be blamed
for undefined behavior for any program that we plug into it.
We expect such a blame-based definition to satisfy the properties (1)
and (2) above.
Full definedness allows us to introduce a new variant of
full abstraction that applies to unsafe source languages with
undefined behavior:
\begin{defn}[Full abstraction for unsafe languages]\label{defn:lfa}~\\
We call a compiler $\downarrow$ for an unsafe language {\em fully
abstract} if, for all {\em fully defined} partial programs $P$ and
$Q$,%
\[
\begin{array}{rl}
(\forall A.~ A~\textit{fully defined} \Rightarrow A[P] \eqh A[Q])&\Rightarrow\\
(\forall a.~ a[\comp{P}] \eql a[\comp{Q}]).
\end{array}
\]
\end{defn}
Requiring that $P$, $Q$, and $A$ are fully defined means that we can safely
apply $\eqh$ to $A[P]$ and $A[Q]$, because neither the programs nor
the context can cause undefined behavior.
This property is incomparable with the original definition of full
abstraction.
Looking at the contrapositive,
\[
\begin{array}{r}
\forall P, Q\textit{ fully defined}.\qquad
(\exists a.~ a[\comp{P}] \neql a[\comp{Q}]) \\
\Rightarrow(\exists A.~ A~\textit{fully defined} \wedge A[P] \neqh A[Q]),
\end{array}
\]
the $P, Q\textit{ fully defined}$ pre-condition makes this weaker than
full abstraction, while the $A~\textit{fully defined}$ post-condition makes it stronger.
The post-condition greatly simplifies reasoning about programs
by allowing us to replace reasoning about low-level contexts with
reasoning about high-level contexts {\em that cannot cause undefined
behavior}.
One might wonder whether the
$P, Q\textit{ fully defined}$ pre-condition is too restrictive,
since full definedness is a rather strong property, requiring each
component to be very defensive about validating inputs it receives from
others.
In the static compromise model inherent to full abstraction and
without additional restrictions on the program's context, we must
be conservative and assume that, if any context can cause
undefined behavior in a program, it can compromise that program in
such a way that the compiler can provide no guarantees for it.
The structured full abstraction definition below will in fact restrict
the context and thus use a weaker notion of full definedness.
Moreover, separate compilation will allow us to quantify over all splits of a
program into a fully defined partial program and a compromised
context, which also makes the presence of the full definedness
pre-condition more palatable.
\SUBSECTION{Problem 2: Open-world assumption about contexts}
\label{sec:prob2}
While full abstraction normally requires the contexts
to be compatible with the partial program, for instance by respecting
the partial program's typed interface\iffull (see \autoref{app:fa-detail})\fi,
these restrictions are minimal
and do not restrict the shape, size,
exported interface, or privilege
of the contexts in any way.
This {\em open world} assumption about contexts does not fit with our
compartmentalization setting, in which the breakup of the
application into components is fixed in advance, as are the
interfaces (and thus privileges) of all the components.
In our setting, the definition of full abstraction needs to be refined
to track and respect such structural constraints; otherwise a
low-level context with 2 components might be mapped back to a
high-level context with, say, 3 components that have completely different
interfaces, and thus privileges.
In particular, the high-level components' interfaces could give them
more privileges than the low-level components had,
increasing their distinguishing power.
\SUBSECTION{Solution 2: Structured full abstraction}
\label{sec:sfa}
We therefore introduce a structured variant of full abstraction, in which
partial programs (indicated by $\bullet$ below) and contexts ($\circ$) are
assigned dual parts of predefined complete program {\em shapes}.
A shape might be anything, from a division into components with their
interfaces (as in \autoref{thm:sfa-to-sc} below), to, \EG the maximum
size of a component's code after compilation (which might expose component
sizes in a setting where it's too costly to hide them by padding to a fixed
maximum size~\cite{PatrignaniDP16}).
\begin{defn}[Structured full abstraction]\label{defn:sfa}~\\
We say that a compiler $\downarrow$ for an unsafe language
satisfies {\em structured full abstraction} if, for all
{\em program shapes} $s$ and partial programs
$P \hasshape{\bullet} s$ and $Q \hasshape{\bullet} s$ such that
$P$ and $Q$ are {\em fully defined} with respect to contexts of shape
${\circ}s$,
\[
\begin{array}{r}
\left(\begin{array}{r}
\forall A \hasshape{\circ} s.\quad
A~\textit{fully defined}\text{ wrt. programs of shape }{\bullet}s\\
\qquad\Rightarrow A[P] \eqh A[Q]
\end{array}\right)\\[1em]
\Rightarrow(\forall a \hasshape{\circ} s.~ a[\comp{P}] \eql a[\comp{Q}])
.
\end{array}
\]
\end{defn}
\noindent This property universally quantifies over any
complete program shape $s$ and requires that $P \hasshape{\bullet} s$
(read ``program $P$ has shape $s$''), $Q \hasshape{\bullet} s$, and
$A \hasshape{\circ} s$ (``context $A$ matches programs of shape
$s$'').
Moreover, the property only requires programs that are fully defined
with respect to contexts of the right shape, and dually it only
considers contexts that are fully defined with respect to programs of
the right shape.
\SUBSECTION{Recovering secure compartmentalizing compilation}
\label{sec:recovering-scc}
SCC can be recovered in a natural
way as an instance of structured full abstraction (\autoref{defn:sfa}).
For both source and
target languages, we take partial programs and contexts to be sets of
components and context application to be set union.
Compilation of sets of components works pointwise.
To obtain an instance of structured full abstraction we additionally
take shapes to be sets of component interfaces, where each interface
is marked as either compromised or uncompromised.
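To make this instance concrete, the following Python sketch (all names
hypothetical, for illustration only; this is not our Coq development)
models partial programs and contexts as sets of named components,
context application as set union, and a shape as a map marking each
component interface as compromised or uncompromised:

```python
# Hypothetical model of the SCC instance: programs and contexts are
# sets of components, context application is set union, and a shape
# marks each component slot as compromised or uncompromised.

def compile_component(comp):
    # Stand-in for per-component compilation: tag the component name.
    return ("low", comp)

def compile_set(parts):
    # Compilation of a set of components works pointwise.
    return frozenset(compile_component(c) for c in parts)

def apply_context(context, program):
    # Context application is set union.
    return context | program

# A shape: each interface is marked compromised or uncompromised.
shape = {"main": "uncompromised", "parser": "uncompromised",
         "net": "compromised"}

def has_program_shape(program, shape):
    # A partial program fills exactly the uncompromised slots.
    return program == {c for c, m in shape.items() if m == "uncompromised"}

def has_context_shape(context, shape):
    # A context fills exactly the compromised slots (the dual part).
    return context == {c for c, m in shape.items() if m == "compromised"}

P = {"main", "parser"}
A = {"net"}
assert has_program_shape(P, shape) and has_context_shape(A, shape)
```

Because the shape fixes the breakup into components in advance, a
context of this shape can only control the components marked
compromised, matching the compartmentalization setting above.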
\begin{thm}\label{thm:sfa-to-sc}
For any deterministic target language and any source language that
is deterministic for defined programs, structured full abstraction
instantiated to components as described above implies SCC.
\end{thm}
\begin{proof}
Straightforward, though tedious. A machine-checked Coq proof can be found
in the auxiliary materials.\Coqed
\end{proof}
\SUBSECTION{Problem 3: Statically known trusted/untrusted split}
While SCC can deal with multiple
compromise scenarios, not all instances of structured full abstraction
can.
In general, if a compiler satisfies (structured) full abstraction,
how can we know whether it can deal with multiple compromise scenarios,
and what does that even mean?
While we can instantiate full abstraction to a
{\em particular} compromise scenario by
letting the partial program $P$ contain the uncompromised components and the
low-level context $a$ contain the compromised ones, a
fully abstract compiler
(together with its linker, loader, runtime \ETC) might exploit this
static split and introduce only one single barrier protecting the
uncompromised components from the compromised ones.
When presented with a different compromise scenario for the same program,
the compiler could adapt and produce a different output.
The source of confusion here is that a fully abstract compiler does not need
to compile contexts---only programs. In fact, even the {\em types} of
contexts and of partial programs might well be completely different (\EG the
types of lambda calculus contexts and terms are different; a compiler for
one cannot compile the other).
Even when the types do match so that we can apply the same
compiler to the context, the low-level context-application operation
$\comp{A}\hspace{-0.35em}[\comp{P}]$ can freely exploit the fact
that its first argument is a compiled untrusted context and its second
argument is a compiled trusted program that should be protected from
the context.
So if we start with a complete high-level program $C$ and
look at two different compromise scenarios $C = A_1[P_1]$ and
$C = A_2[P_2]$, compiling each of the parts and combining the results
using context application does not necessarily yield the same result
(\IE it could well be that
$\comp{A_1}\hspace{-0.35em}[\comp{P_1}] \not=
\comp{A_2}\hspace{-0.35em}[\comp{P_2}]$) or indeed even behaviorally
equivalent results (\IE it could well be that
$\comp{A_1}\hspace{-0.35em}[\comp{P_1}] \neql
\comp{A_2}\hspace{-0.35em}[\comp{P_2}]$).
This means that the user of a fully abstract compiler may need to commit
{\em in advance} to a single compromise scenario.
This weakness significantly limits the applicability of full abstraction.
After all, uncertainty about sources of vulnerability is precisely the
motivation for compartmentalizing compilation: if we knew \iffull in advance\fi
which components
were safe and which were not, there would be no reason to distinguish more
than two levels of privilege, and we could merge each group into a single
mega-component.
Even in rare cases where we are certain that some code cannot be
compromised---for instance because we have proved it
safe---protecting only the verified code from all the rest using a fully
abstract compiler~\cite{Agten0P15} is still suboptimal in terms of
protection, since it provides no guarantees for all the code that is not
verified.
Moreover, this weakness is not hypothetical:
several fully abstract compilers proposed in the literature are only
capable of protecting a single trusted module from its untrusted
context~\cite{PatrignaniASJCP15, AgtenSJP12, LarmuseauPC15,
PatrignaniCP13, patrignani_thesis} (recently proposed
extensions~\cite{PatrignaniDP16} do aim at lifting this restriction in
some cases).
While this setup is appropriate when all one wants to achieve is
protecting trusted (e.g., verified) code
from its untrusted context~\cite{Agten0P15}, it is not suitable for
our compartmentalizing compilation setting where we do not know in advance
which components will be dynamically compromised and which ones not,
and thus we want to simultaneously protect against all possible compromise
scenarios.
\SUBSECTION{Solution 3: Separate compilation}
\label{sec:separate-compilation}
We can address this by requiring that the compiler toolchain have one
additional property:
\begin{defn}\label{defn:separate-compilation}
We say that the compiler toolchain (\IE the compiler $\comp{-}$, the linker
$-[-]$, and the runtime system embodied in the low-level behavioral
equivalence) satisfies {\em separate compilation} if
\begin{enumerate}
\item the type of contexts and programs is the same (so that the
compiler can also compile contexts), and
\item $\comp{(A[P])} \eql{}\; \comp{A}\hspace{-0.35em}[\comp{P}]$ for all
$A$ and $P$.
\end{enumerate}
\end{defn}
Requiring that context application and compilation commute (condition 2)
implies that, if some complete program $C$ can be written
as both $C = A_1[P_1]$ and $C = A_2[P_2]$, then separately compiling
each of these splits yields behaviorally equivalent results:
$\comp{(A_1[P_1])} \eql\; \comp{(A_2[P_2])}$.
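In a components-as-sets setting like the one above, this consequence
holds by construction. The following Python sketch (a hypothetical
model, not our actual toolchain) shows that pointwise compilation
commutes with set union, so two different splits of the same complete
program compile to identical results:

```python
# Hypothetical sketch: pointwise compilation of component sets
# commutes with set union, so different compromise scenarios of the
# same complete program compile to the same result.

def compile_component(comp):
    return ("low", comp)          # stand-in for per-component compilation

def compile_set(parts):
    return frozenset(compile_component(c) for c in parts)  # pointwise

def apply_context(context, program):
    return context | program      # context application is set union

C = frozenset({"c0", "c1", "c2"})

# Two different splits of C into a context and a partial program.
A1, P1 = frozenset({"c0"}), frozenset({"c1", "c2"})
A2, P2 = frozenset({"c0", "c1"}), frozenset({"c2"})

# Condition 2: compiling A[P] equals applying compiled parts.
assert compile_set(apply_context(A1, P1)) == \
       apply_context(compile_set(A1), compile_set(P1))

# Consequence: separately compiling either split agrees.
assert compile_set(apply_context(A1, P1)) == \
       compile_set(apply_context(A2, P2))
```

Here the equivalence is literal equality; in general, condition 2 only
requires behavioral equivalence of the two compiled results.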
With separate compilation, full abstraction for an unsafe language
(\autoref{defn:lfa}) can be instantiated as follows:
\[
\begin{array}{r}
\forall B.~\forall P,Q\textit{ fully defined}.\qquad
(\comp{(B[P])} \neql \comp{(B[Q])})\\
\Rightarrow (\exists A.~ A\textit{ fully defined} \wedge A[P] \neqh A[Q])
\end{array}
\]
One compelling reading of this is that, for all compromise scenarios
(ways to break a complete program into a compromised context $B$ and
an uncompromised program $P$),
and for all programs $Q$ that we can substitute for $P$, if the
context $B$ can distinguish $P$ from $Q$ when compiled to
low-level code, then there exists a fully defined context $A$
that can distinguish them at the high-level.
In a language without undefined behavior, this property would
trivially follow just from (whole program) correct compilation (see
\autoref{assumption:compiler-correctness} below) by picking $A = B$.
However, it is nontrivial for a language in which
context $B$ might cause undefined behavior, since then correct
compilation does not apply for $B[P]$ and $B[Q]$.
In our setting, this property allows us to avoid reasoning about the
implications of undefined behavior in a low-level
context and instead consider just fully defined high-level
contexts.
It is trivial to check that our instance of structured full
abstraction from \autoref{thm:sfa-to-sc} does satisfy separate
compilation.
It should also be easy to show that many previous fully abstract
compilers~\cite{PatrignaniASJCP15, AgtenSJP12, LarmuseauPC15,
PatrignaniCP13, patrignani_thesis} do not satisfy separate
compilation, since they were not designed to support a setting of
mutual distrust.
\section{A Simple Instance of SCC}
\label{sec:instance}
In this section, we illustrate the main ideas behind SCC with a
proof-of-concept compiler from an unsafe language with components to
an abstract machine with compartments.
We discuss key design decisions for providing secure
compartmentalization, such as cleaning register values to prevent
unintended communication between compartments.
We also explain how a compiler optimization for component-local calls
makes unwary compiled components vulnerable to realistic control-flow
hijacking attacks.
Finally, we show how to adapt a standard full abstraction proof
technique called {\em trace semantics}~\cite{PatrignaniASJCP15} to
prove SCC.
The results in this section have been proved on paper under
the assumptions explicitly mentioned in the text.
In the following, an assumption denotes a property that we believe
is true and rely on, but that we have not proved.
Lemmas, theorems, and corollaries denote properties that we have
proved on paper, possibly relying on some of the stated assumptions.
The proof of the structured full abstraction theorem
(\autoref{thm:sfa-instance}) has also been formalized in Coq assuming
most other results in this section as axioms, with the exception of
\autoref{thm:partial-type-safety} and
\autoref{cor:separate-compiler-correctness} which are also mechanized.
While not constituting a complete machine-checked proof, this
mechanization further validates the high-level proof structure
described in this section.
\SUBSECTION{Source Language}
\label{sec:source}
We work with an unsafe source language with components,
procedures, and buffers. A program in this language is a set
of components communicating via procedure calls.
Buffer overflows have undefined behavior and may open the door to
low-level attacks after compilation.
However, thanks to the cooperation between the low-level
compartmentalization mechanism and the compiler,
the effects of these attacks will be limited to the offending
component.
Components have statically checked interfaces that specify which
procedures they import and export.
To satisfy an interface, a component can only call external procedures
that it imports explicitly, and it must define all procedures exported
by its interface.
Thus, interfaces define privileges by preventing components from
calling non-imported procedures, and enable components to define
private procedures (that are not exported in their interfaces).
We will use the same notion of interfaces in our target abstract machine.
The syntax of expressions, given below, is that of a standard
imperative language with mutable buffers and mutually recursive
procedures. Each component $C$ has local procedures \iffull
identified as $P \in [0, P_{num})$ \else ``$C.P$'' \fi and private
local buffers \iffull identified as $b \in [0,b_{num})$, where
$P_{num}$ and $b_{num}$ are fixed for each component\else $b$\fi{}.
Loops are encoded using recursive calls, sequencing is encoded as a
binary operation, and variables are encoded using buffers. Procedures
take a single argument, which by convention is always passed in the
first cell of the first buffer of the callee component. The only
first class values are integers $i$; these can be passed across
component boundaries using procedure calls and returns. Buffers and
procedures are second class.
\[
\begin{array}{l@{}l}
\mathit{e} \ \ \mathrel{::=}\ \ \;&
i
\;|\;
e_1 \otimes e_2
\;|\;
\texttt{if }e\texttt{ then }e_1\texttt{ else }e_2
\;|\;
b\texttt{[}e\texttt{]}
\;|\;
\\ &
b\texttt{[}e_1\texttt{] := }e_2
\;|\;
C\texttt{.}P\texttt{(}e\texttt{)}
\;|\;
\texttt{exit}
\end{array}
\]
where $\otimes \in \{ ; , + , - , \times, =, \leq, \ldots\}$.
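For instance (a purely illustrative sketch, with a hypothetical
component $C$ defining a procedure $P$), a countdown loop can be
encoded as a recursive procedure $C.P$ with body
\[
\texttt{if }b\texttt{[}0\texttt{]} \leq 0\texttt{ then }0\texttt{ else }
C\texttt{.}P\texttt{(}b\texttt{[}0\texttt{]} - 1\texttt{)}
\]
Since the call argument is by convention passed in the first cell of
the callee's first buffer, each recursive call receives the
decremented counter in $b\texttt{[}0\texttt{]}$.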
We define a standard continuation-based small-step semantics that
reduces configurations $\mathit{cfg}$. It is deterministic;
undefined behaviors are modeled as stuckness, as discussed below.
\[
\begin{array}{l@{}l@{~~~~~}l@{}l}
\mathit{cfg} \ \mathrel{::=}\ \;& (C, s, \sigma, K, e)
&
\mathit{K} \ \mathrel{::=}\ \;&
\texttt{[]}
\;|\;
E \texttt{::} K
\end{array}
\]
\[
\begin{array}{l@{}l}
\mathit{E} \ \ \mathrel{::=}\ \ \;&
\square{} \otimes e_2
\;|\;
i_1 \otimes \square{}
\;|\;
\texttt{if }\square{}\texttt{ then }e_1\texttt{ else }e_2
\;|\;
\\ &
b\texttt{[}\square{}\texttt{] := }e_2
\;|\;
b\texttt{[}i_1\texttt{] := }\square{}
\;|\;
C\texttt{.}P\texttt{(}\square{}\texttt{)}
\end{array}
\]
A configuration $(C, s, \sigma, K, e)$ represents a call in progress
within component $C$, in which $e$ is the expression being reduced and
$K$ is the continuation for this expression, up to the latest procedure
call.
Continuations are evaluation contexts, here represented as lists of
flat evaluation contexts $E$.
We denote by $s$ a global state recording the values of the local
buffers for each component.
Continuations for pending calls are stored on a call stack
$\sigma$, together with their call arguments' values and the names of
the components they execute in.
We omit the obvious definitions for call stacks $\sigma$ and states $s$.
Evaluation starts as a call to a fixed procedure of a fixed main
component, and completes once this call completes, or whenever the
current expression $e$ is $\texttt{exit}$.
We illustrate the small-step semantics with the three rules that deal
with procedure call evaluation.
In these rules, $\Delta$ is a mapping from procedure identifiers to
procedure bodies.
\infrule[]
{s' = s[C',0,0 \mapsto i] \andalso \sigma' = (C,s[C,0,0],K) \texttt{::} \sigma }
{\Delta \vdash (C, s, \sigma, C'.P'(\square) \texttt{::} K, i) \rightarrow
(C', s', \sigma', \texttt{[]}, \Delta[C',P'])}
\infrule[]
{s' = s[C',0,0 \mapsto i']}
{\Delta \vdash (C, s, (C',i',K) \texttt{::} \sigma, \texttt{[]}, i) \rightarrow (C', s', \sigma, K, i)}
\infrule[]
{}
{\Delta \vdash (C, s, \sigma, K, C'.P'(e)) \rightarrow (C, s, \sigma, C'.P'(\square) \texttt{::} K, e)}
As shown on the right-hand side of the first rule, a call starts with
an empty continuation and the procedure body $\Delta[C',P']$ as the
current expression.
The first cell in the first buffer of the callee component is
updated with the call argument, while information about the caller's
state when performing the call gets pushed on the call stack $\sigma$.
A call completes once an empty continuation is reached and the current
expression is a value, as is the case in the left-hand side
of the second rule.
In this case, the caller's state is restored from the call stack, and
execution resumes with the call result $i$ as the current expression.
The intermediate steps between the start and the end of a call reduce
the procedure body to a value, as the last rule illustrates:
whenever $e$ is not a value, reduction deconstructs $e$ into a
subexpression $e'$ and a flat evaluation context $E$ such that
$e = E[e']$, where $E[e']$ means filling the hole $\square$ in $E$
with $e'$.
This expression $e'$ becomes the new currently reduced expression,
while $E$ gets appended on top of the current continuation $K$.
Finally, when $e$ is a value $i$ and the call has not completed
($K \neq \texttt{[]}$), the next step is chosen
based on the flat evaluation context found on top of $K$,
which gets removed from $K$.
In the left-hand side of the first rule, for example, this flat
evaluation context is $C'.P'(\square)$, for which the next chosen
step, as shown on the right-hand side, is to start a procedure call to
$C'.P'$, using $i$ as the call argument.
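As a small worked example (with hypothetical components $C$ and $C'$
such that $C$ imports $C'.P'$, and writing $s'$ for
$s[C',0,0 \mapsto 3]$), the call $C'.P'(1+2)$ starting from an empty
continuation reduces as follows:
\[
\begin{array}{l}
(C, s, \sigma, \texttt{[]}, C'.P'(1+2)) \rightarrow
(C, s, \sigma, C'.P'(\square)\texttt{::[]}, 1+2) \\
\quad\rightarrow (C, s, \sigma, C'.P'(\square)\texttt{::[]}, 3)
\rightarrow (C', s', (C, s[C,0,0], \texttt{[]})\texttt{::}\sigma,
\texttt{[]}, \Delta[C',P'])
\end{array}
\]
The last step is an instance of the first rule above; once
$\Delta[C',P']$ reduces to a value, the second rule pops the saved
frame and resumes execution in $C$.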
Since undefined behaviors are allowed to
take the machine to an arbitrary {\em low-level} state,
it would make little sense for the source-language semantics to
describe what can happen once an undefined point is reached.
We therefore model undefined behaviors at the source level simply as
stuckness (as done, for instance, in CompCert~\cite{Leroy09}).
In particular, reduction gets stuck when trying to
access or update a buffer out of bounds, and
the type safety theorem says
that well-formed programs can only go wrong (get stuck) by reducing to an
out-of-bounds operation on a buffer.
A program is well-formed if all the used buffers are defined, all
imported components are defined, all imported external procedures are
public, and if the names of all components are unique.
Well-formedness extends straightforwardly to configurations.
\begin{thm}[Partial type safety]\Coqed
\label{thm:partial-type-safety}
For any well-formed configuration $\textit{cfg} = (C, s, \sigma, K, e)$,
one of the following holds:
\begin{enumerate}[(1)]
\item $\textit{cfg}$ is a final configuration (either
$e$ is
$\texttt{exit}$ or else it is a value and $K$ and $\sigma$ are both
empty);
\item \textit{cfg} reduces in one step to a well-formed configuration;
\item \textit{cfg} is stuck and has one of the following forms:
\begin{enumerate}[(a)]
\item $(C, s, \sigma, b\texttt{[}\square\texttt{]} :: K, i)$
where $s[C,b,i]$ is undefined;
\item $(C, s, \sigma, b\texttt{[}i\texttt{]:=}\square {::} K, i')$ where
$s[C,b,i]$ is undefined.
\end{enumerate}
\end{enumerate}
\end{thm}
In the following, we use the term \emph{undefined behavior configurations}
for the
configurations described in (3), and we say that a well-formed program
is \emph{defined} if reducing it never reaches an undefined behavior
configuration.
\iffull
When compiling this language to the compartmentalized abstract
machine, full abstraction for unsafe languages
(\autoref{defn:lfa}) would give weak guarantees due
to the open-world assumption on contexts.
Indeed, the high-level attacker could choose to import all public
methods or even take a completely different shape (e.g., with more
components) as long as it remains compatible with the program.
SCC, however, allows a reasoning model in
which the compromised components are replaced by arbitrary
components that satisfy the interface of the original component.
In particular, if access to public methods characterizes the level of
privilege of a component, this means that low-level attackers have
the same privilege as the component(s) that they successfully
compromised.
\fi
\SUBSECTION{Target}
\label{sec:target}
Our compiler targets a RISC-based abstract machine extended with a
compartmentalization mechanism, inspired by a similar design featured
in previous work~\cite{micropolicies2015}. Each compartment in this
target has its own private memory, which cannot be directly accessed
by others via loads, stores, or jumps. Instead, compartments must
communicate using special call and return instructions, which, as
explained below, include checks to ensure that the communication
respects compartment interfaces. (Note that this scheme requires a
protected call stack, which, in a real system, could be implemented
\EG using a shadow call stack~\cite{ErlingssonAVBN06, AbadiBEL09} or
return capabilities~\cite{JuglaretHAPST15}.)
Because resource exhaustion and integer overflow issues are orthogonal
to our present concerns, we assume that words are unbounded and
memory is infinite.
The instruction set for our machine is mostly standard.
\[
\begin{array}{l@{}l}
\mathit{instr} \ \ \mathrel{::=}\ \ \;&
\con{Nop}
\;|\;
\con{Const}~i\rightarrow{}r_d
\;|\;
\con{Mov}~r_s\rightarrow{}r_d
\\ &
\;|\;
\con{Load}~^{*}r_p\rightarrow{}r_d
\;|\;
\con{Store}~^{*}r_p\leftarrow{}r_s
\\ &
\;|\;
\con{Jump}~r
\;|\;
\con{Jal}~r
\;|\;
\con{Call}~C~P
\;|\;
\con{Return}
\\ &
\;|\;
\con{Binop}~r_1\otimes{}r_2\rightarrow{}r_d
\;|\;
\con{Bnz}~r~i
\;|\;
\con{Halt}
\end{array}
\]
$\con{Const}~i\rightarrow{}r_d$ puts an immediate value $i$ into register $r_d$.
$\con{Mov}~r_s\rightarrow{}r_d$ copies the value in $r_s$ into $r_d$.
$\con{Load}~^{*}r_p\rightarrow{}r_d$ and
$\con{Store}~^{*}r_p\leftarrow{}r_s$ operate on the memory location whose
address is stored in $r_p$ (the $*$ in the syntax of \con{Load} and
\con{Store} indicates that a pointer dereference is taking place), either
copying the value found at this
location to $r_d$ or overwriting the location with the content of $r_s$.
$\con{Jump}~r$ redirects control flow to the address stored in $r$.
$\con{Jal}~r$ (jump-and-link) does the same but also communicates a
return address in a dedicated register $r_{ra}$, so that the target
code can later resume execution at the location that followed the
$\con{Jal}$ instruction.
$\con{Call}~C~P$ transfers control to compartment $C$ at the entry
point for procedure ``$C.P$''.
$\con{Return}$ transfers control back to the compartment that
called the current compartment.
$\con{Binop}~r_1\otimes{}r_2\rightarrow{}r_d$ performs the
mathematical operation $\otimes$ on the values in $r_1$ and $r_2$ and
writes the result to $r_d$.
Finally, $\con{Bnz}~r~i$ (branch-non-zero) is a conditional branch
to an offset $i$, which is relative to the current program counter.
If $r$ holds any value other than zero, the branch is taken; otherwise
execution simply falls through to the next instruction.
While $\con{Jal}$ is traditionally used for procedure calls and
$\con{Jump}$ for returns, in this machine they can only target the
current compartment's memory.
They are nonetheless useful for optimizing compartment-local calls, which
need no instrumentation; in a realistic setting, the instrumented
primitives $\con{Call}$ and $\con{Return}$ would likely come with
monitoring overhead.
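As a rough illustration of how the ordinary (non-compartment-crossing) instructions behave, the following Python sketch steps a program represented as a list of decoded instructions; compartments, memory, and the remaining instructions are elided, and the representation is our own invention, not the machine's definition:

```python
# Illustrative single-step interpreter for a few ordinary instructions.
# Registers are a dict; the binary operation is passed as a function.

def step(program, regs, pc):
    instr = program[pc]
    op = instr[0]
    if op == "Const":                  # Const i -> rd
        _, i, rd = instr
        regs[rd] = i
        return pc + 1
    if op == "Mov":                    # Mov rs -> rd
        _, rs, rd = instr
        regs[rd] = regs[rs]
        return pc + 1
    if op == "Binop":                  # Binop r1 (op) r2 -> rd
        _, f, r1, r2, rd = instr
        regs[rd] = f(regs[r1], regs[r2])
        return pc + 1
    if op == "Bnz":                    # Bnz r i: relative branch if nonzero
        _, r, i = instr
        return pc + i if regs[r] != 0 else pc + 1
    raise ValueError("unhandled instruction")

prog = [
    ("Const", 5, "r0"),
    ("Const", 2, "r1"),
    ("Binop", lambda a, b: a + b, "r0", "r1", "r2"),
]
regs, pc = {}, 0
while pc < len(prog):
    pc = step(prog, regs, pc)
```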
In the low-level semantics, we represent machine states $\ii{state}$
as
$(C,\sigma,\ii{mem},\ii{reg},\ii{pc})$ where $C$ is the currently
executing compartment, $\ii{mem}$ is a partitioned memory, $\ii{reg}$
is a register file, $\ii{pc}$ is the program counter,
and $\sigma$ is a global protected call stack.
We assume a partial function $\ii{decode}$ from words to
instructions.
We write $\psi; E \vdash \ii{state} \to \ii{state'}$ to mean
that $\ii{state}$ reduces to $\ii{state'}$ in
an environment where component interfaces are given by $\psi$ and
component entry points by $E$.
Here are the reduction rules for $\con{Call}$ and $\con{Return}$:
\infrule[]
{\ii{mem}[C,\ii{pc}] = i \andalso
\ii{decode}~i = \con{Call}~C'~P' \andalso
\ii{pc}' = E[C'][P']
\\
C' = C ~ \vee ~ C'.P' \in \psi[C].\ii{import} \andalso
\sigma' = (C,\ii{pc}\mathord{+}1)~\con{::}~\sigma
}{\psi; E \vdash (C,\sigma,\ii{mem},\ii{reg},\ii{pc}) \to
(C',\sigma',\ii{mem},\ii{reg},\ii{pc}')}
\infrule[]
{\ii{mem}[C,\ii{pc}] = i \andalso
\ii{decode}~i = \con{Return} \andalso
\sigma = (C',\ii{pc}')~\con{::}~\sigma'
}{\psi; E \vdash (C,\sigma,\ii{mem},\ii{reg},\ii{pc})
\to (C',\sigma',\ii{mem},\ii{reg},\ii{pc}')}
The $\con{Call}$ rule checks that the call is valid with
respect to the current compartment's interface---i.e., the target
procedure is imported by the current compartment---which
ensures that even if a compiled component is compromised
it cannot exceed its static privilege level.
Then it puts the calling compartment's name and program
counter on the global protected call stack $\sigma$.
Finally, it redirects control to
the entry point of the called procedure.
The $\con{Return}$ instruction retrieves the caller's compartment and
return address from the protected call stack and resumes execution there.
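The two rules can be mimicked in a few lines of Python (an informal sketch with invented names, not the machine's definition): $\psi$ becomes a dictionary of interfaces, $E$ a dictionary of entry points, and the protected stack a list:

```python
# Illustrative sketch of the Call/Return rules: psi maps each compartment
# to its interface, E maps (compartment, procedure) to an entry point.

def step_call(psi, E, state, C2, P2):
    C, sigma, mem, reg, pc = state
    # Interface check: either a local call, or the target is imported.
    if not (C2 == C or (C2, P2) in psi[C]["import"]):
        raise PermissionError("call exceeds the compartment's static privilege")
    sigma2 = [(C, pc + 1)] + sigma          # push caller on the protected stack
    return (C2, sigma2, mem, reg, E[(C2, P2)])

def step_return(state):
    C, sigma, mem, reg, pc = state
    (C2, pc2), sigma2 = sigma[0], sigma[1:] # pop the protected stack
    return (C2, sigma2, mem, reg, pc2)
```

A call from C0 to an imported procedure C1.P lands at C1.P's entry point with (C0, pc+1) on the stack; the matching return restores exactly that state, and a call to a procedure outside the importer's interface is rejected.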
\SUBSECTION{Compiler}
\label{sec:compiler}
We next define a simple compiler that produces one low-level memory
compartment for each high-level component.
Each compartment is internally split into buffers,
the code of procedures, and a local stack that can grow infinitely.
The local stack is used to store both intermediate results and return
addresses.
\iffull
To compile procedures, we use a
direct mapping from high-level expressions to low-level code.
\fi
In standard calling conventions, the callee is generally expected to restore
the register values of the caller, if it has modified them, before
returning from a call.
Here, however, compiled components cannot assume that other
components will necessarily follow an agreed calling convention, so they
must save any register that may be needed later.
This means, for instance, that we save the value of the current
call argument on the local stack and write the
local stack pointer to a fixed location in the current compartment's
memory before any cross-compartment call instruction is performed,
so that the compartment can restore them when it gets control back.
The compiler must also prevent a compromised compartment from reading
intermediate states from code in other compartments that may be in the
middle of a call to this one.
Intuitively, a secure compiler must prevent compromised compartments from
distinguishing compiled components based on low-level information that
(fully defined) high-level attackers don't get. In the source language,
only a single argument or return value is communicated at call and return
points.
Hence, besides preserving their values for later, the compiler
ensures that all\footnote{
Technically speaking, we believe that, in our very simple setting,
the compiler could choose not to clean \emph{unused} registers and
still be secure.
%
However, our proof relies on compiled components cleaning \emph{all}
registers except the one that holds the call argument or return
value.
%
Indeed, not cleaning unused registers makes things harder because it
can provide a covert channel for two compromised compartments
between which interfaces would forbid any direct communication.
%
These compartments could now exchange values through uncleared
registers by interacting with the same unsuspecting uncompromised
component.
%
We conjecture that this possible cooperation between compromised
components doesn't yield more attacker power
in our case.
%
However, in a
\iffull
slightly different
\fi
setting
where registers could be used to transmit capabilities,
this \emph{would} give more power to the attacker, so
our compiler clears all registers but one, which also simplifies our
proof.}
registers are cleaned before transferring control to other
compartments.
The compiler implements a simple optimization for local calls.
Since all procedures of a component live in the same
address space and local calls don't need instrumentation, these calls
can be implemented more efficiently using $\con{Jal}$ and
$\con{Jump}$ instructions.
We therefore use different procedure entry points for
component-local and cross-component calls, and we skip, for
local calls, the steps that store and restore register values
and clean registers.
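To summarize the calling convention, here is a hypothetical sketch of the instruction sequence that could be emitted around a cross-compartment call; the register names, the fixed stack-pointer location, and the pseudo-instructions (PushLocal, StoreSP, and so on) are our own inventions, not the compiler's actual output:

```python
# Illustrative sketch: instructions emitted around a cross-compartment
# call.  r_com is the single register used for communication.

ALL_REGS = ["r_com", "r1", "r2", "r3", "r_ra"]

def emit_cross_call(C, P):
    seq = [("PushLocal", "r_com"),      # save the current call argument
           ("StoreSP",)]                # stash stack pointer at a fixed address
    for r in ALL_REGS:
        if r != "r_com":                # clear every register but r_com
            seq.append(("Const", 0, r))
    seq.append(("Call", C, P))
    seq.append(("LoadSP",))             # restore the stack pointer...
    seq.append(("PopLocal", "r_com"))   # ...and the saved argument
    return seq
```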
Because we do not check bounds when compiling buffer read and write
operations, buffer overflows can corrupt a compartment's memory in
arbitrary ways.
Consequently, many buffer overflow attacks can be reproduced even in our
simple setting,
including, due to the local-call optimization,
return-oriented programming attacks~\cite{Shacham07, Buchanan2008}.
In return-oriented programming, an attacker overwrites return
addresses on the local stack, chaining together reused fragments of
the code of component-local procedures into an unintended instruction
sequence of the attacker's choice.
In our setting, buffer overflow attacks thus enable compiled components to
shoot themselves in the foot by storing beyond the end of a buffer and into
the local call stack.
\iffull
However, as we will prove (\autoref{thm:sfa-instance}), buffer
overflows can only do limited harm to other compiled components.
\fi
We assume compiler correctness as stated below for our
compiler.
\iffull
Note that, in the presence of partial type safety
(\autoref{thm:partial-type-safety}),
the determinism of our source language implies that a defined source
program either terminates or diverges.
Similarly, because we equate stuckness and termination in our
deterministic abstract machine, target programs also either terminate
or diverge.
As a consequence, proving either (1) or (2) below is enough to get the
other.
\else{}
Note that, in the presence of partial type safety
(\autoref{thm:partial-type-safety}), proving either (1)
or (2) below is enough to get the other.
\fi
\begin{assumption}[Whole-program compiler correctness]\label{assumption:compiler-correctness}
\[
\begin{array}{l}
\forall P.~ P\text{ defined} \Rightarrow \\
\text{~~(1) }\TERM{P} \iff \TERM{\comp{P}}~\wedge \\
\text{~~(2) }\DIV{P} \iff \DIV{\comp{P}}
\end{array}
\]
\end{assumption}
\SUBSECTION{Instantiating structured full abstraction}
\label{sec:instance-defs}
We define program shapes, partial programs, and contexts in a similar
way to \autoref{thm:sfa-to-sc}, as detailed below.
More precisely, we use isomorphic definitions so that we can later
apply this theorem.
A program shape $s$ is the pairing of a mapping from component names
to component interfaces and a set that indicates uncompromised
components.
In the rest of the paper, we implicitly restrict our attention
to \emph{well-formed} shapes.
A shape is well-formed when (1) all component interfaces in the shape
only import procedures from components that are part of the shape, and
(2) these procedures are exported according to the shape.
High-level partial programs $P$ and contexts $A$ are defined as
mappings from component names to component definitions.
A high-level partial program $P$ has shape ${\bullet}s$ when it
defines exactly the components that are marked as \emph{uncompromised}
in $s$, with definitions that satisfy the corresponding interfaces,
and when it moreover satisfies the simple well-formedness condition
that all the local buffers it uses are defined.
A high-level context $A$ has shape ${\circ}s$ under the same
conditions, adapted for \emph{compromised} components instead of
uncompromised ones.
A low-level partial program $p$ or context $a$ is formed by pairing a
partitioned memory with a mapping from procedure identifiers to entry
points.
This choice is isomorphic to having sets of named compartment
memories with entry point annotations.
A low-level partial program $p$ has shape ${\bullet}s$ when the
partitioned memory has partitions under exactly the component names
that are marked as \emph{uncompromised} in $s$, and the entry point
mapping provides addresses for exactly the procedures that are
exported by these components according to $s$.
A low-level context $a$ has shape ${\circ}s$ under the same
conditions, adapted for \emph{compromised} components instead of
uncompromised ones.
We say that a high-level partial program $P \psh{} s$ is fully defined
with respect to contexts of shape ${\circ}s$ when it cannot be blamed
for undefined behavior when interacting with such contexts:
for every $A \ash{} s$, either reducing $A[P]$ never reaches an
undefined behavior configuration, or else the current component in
this undefined behavior configuration belongs to $A$.
Similarly, a high-level context $A \psh{} s$ is fully defined
with respect to programs of shape ${\circ}s$ when it cannot be blamed
for undefined behavior when interacting with such programs.
Because we perform a point-wise compilation of high-level programs,
separate compilation (\autoref{defn:separate-compilation}) trivially
holds for our compiler.
Combining it with whole-program compiler correctness
(\autoref{assumption:compiler-correctness}) immediately leads to the
following corollary:
\begin{cor}[Separate compilation correctness]\Coqed
\label{cor:separate-compiler-correctness}
\[
\begin{array}{l}
\forall s, A \ash{}s, P \psh{}s. \\
~~ P\text{ fully defined wrt. contexts of shape }{\circ}{s} \Rightarrow \\
~~ A\text{ fully defined wrt. programs of shape
}{\bullet}{s} \Rightarrow \\
\text{~~~~(1) }\TERM{A[P]} \iff \TERM{\comp{A}[\comp{P}]}~\wedge \\
\text{~~~~(2) }\DIV{A[P]} \iff \DIV{\comp{A}[\comp{P}]}
\end{array}
\]
\end{cor}
\SUBSECTION{Proof technique for structured full abstraction}
\label{sec:proof-outline}
Trace semantics were initially proposed by Jeffrey and
Rathke~\cite{JeffreyR05b,JeffreyR05} to define fully abstract models
for high-level languages.
Patrignani~\ETAL later showed
how to use trace semantics~\cite{PatrignaniC15} to prove full
abstraction for a compiler targeting machine
code~\cite{PatrignaniASJCP15}.
This proof technique is well suited for deterministic target languages
such as machine code and proceeds in two steps.
First, we devise a trace semantics for low-level partial programs and
contexts and relate it to the target machine's operational semantics (\EG by
proving it fully abstract~\cite{PatrignaniC15}).
This trace semantics will provide a set of traces for every partial
program, describing all the execution paths that this program
can reach by interacting with an arbitrary context.
Second, we use the trace semantics to characterize the interaction
between an arbitrary low-level context and, separately, two compiled
programs that this context distinguishes, resulting in two traces with a
common prefix followed by different actions.
We can then use these traces to construct a high-level attacker,
proving that this attacker distinguishes between the two source programs.
As our proof demonstrates, proving the trace semantics fully abstract
is not a mandatory first step in the technique.
Instead, we relate our trace semantics to the operational one
using two weaker trace composition and decomposition conditions
(\autoref{lemma:trace-decomp}
and \autoref{lemma:trace-comp}),
adapted from the key lemmas that Jeffrey and Rathke used to
prove their trace semantics fully
abstract~\cite{JeffreyR05b,JeffreyR05}.
This reduces proof effort, since proving a trace semantics fully
abstract typically requires proving a third lemma with a
trace-mapping argument of its
own~\cite{PatrignaniC15,JeffreyR05b,JeffreyR05}.
Adapting the technique to undefined behavior is straightforward,
essentially amounting to proving standard full abstraction for the safe
subset of the language.
Then one simply proves that the context produced by the
mapping is fully defined, thus safe.
Adapting to a closed world, however, takes more work.
The trace semantics that have been used previously to prove full abstraction
characterize the interaction between a partial program and arbitrary
contexts.
The context's shape is typically constructed as reduction goes,
based on the steps that the context takes in the generated trace.
For instance, if the trace said that the context performs a call, then
the target procedure would be appended to the context's interface so
that this call becomes possible.
For structured full abstraction, we want a finer-grained trace
semantics that enables reasoning about the interaction with
contexts of a specific shape.
We achieve this by making the shape a parameter to the reduction
relation underlying our trace semantics.
To make sure that traces are compatible with this shape, we also
keep track of the current compartment during reduction.
This allows us to generate only context steps that adhere to the
current compartment's interface, and hence to the context's shape.
In particular, the context will only be able to call program
procedures for which
(1) there is a context compartment whose interface explicitly imports
the target procedure, thus granting the privilege to call that
procedure,
and (2) this other context compartment is reachable from the current
compartment via a chain of cross-compartment calls or returns within
the context.
Moving to a closed world also makes the trace mapping argument harder.
The one from Patrignani~\ETAL~\cite{PatrignaniASJCP15},
for instance, relies on changes in the context's shape,
\EG adding a helper component to the context that is not present in
the low level.
This is no longer possible for structured full abstraction,
where the context shape is fixed.
\SUBSECTION{Trace semantics for the low-level language}
\label{sec:trace-semantics}
We define a trace semantics in which traces are finite words over an
alphabet $E\alpha$\ifsooner\bcp{It's cute, but because of the change
of alphabet, I first tried to parse this as $E$ applied to $\alpha$.
Change to $\ii{Ea}$?}\fi{}
of external actions, alternating between program external
actions ``$\gamma!$'' and context external actions ``$\gamma?$''.
We treat external actions as moves in a two-player game, viewing
the context and the partial program as the players.
The trace semantics is parameterized by a shape $s$, which the two
players have.
External actions either transfer control to the other player or end
the game.
\[
\begin{array}{r@{}l@{~~~~~~~}r@{}l}
E\alpha\ \mathrel{::=}\; \ \;&
\gamma!
\;|\;
\gamma?
&
\gamma \ \mathrel{::=}\; \ \;&
\con{Call}_{\ii{reg}}~C~P
\;|\;
\con{Return}_{\ii{reg}}
\;|\;
\checkmark
\end{array}
\]
Traces ($E\alpha^*$) track the external actions ($\gamma$)
performed by the context and the program.
The first kind of external action is cross-boundary communication,
which corresponds to the use of instrumented call instructions
$\con{Call}~C~P$ and $\con{Return}$ when they transfer control to
a compartment that belongs to the opponent.
For these external actions, traces keep track of the instruction used
together with $\ii{reg}$, the values held by all registers when the
instruction is issued.
The second kind of external action is program termination,
which we denote with a tick $\checkmark$ and which the opponent cannot
answer ($\checkmark$ ends the game).
It corresponds to the use of an instruction that makes execution
stuck, such as $\con{Halt}$.
At any point where it has control, a player can take internal actions
(any instruction that
neither terminates execution nor transfers control to the opponent);
these are not reflected in the trace.
In particular, cross-compartment communication is considered an
{\em internal} action when it transfers control to a compartment that
belongs to the current player.
Besides halting, a player can also end the game by triggering an infinite
sequence of internal actions, making execution diverge.
In the trace, this will correspond to not making any move:
the trace observed thus far will be a maximal trace for the
interaction between the program and context involved, \IE any
extension of this trace will not be shared by both the program and the
context.
Intuitively, a program $p \psh{} s$ has trace $t$ if it answers with the
{program} actions described in $t$ when facing a context $a \ash{}s$ that
produces the {context} actions described in $t$.
Similarly, a program $a \ash{}s$ has trace $t$ if it answers with the
{context} actions described in $t$ when facing a program $p \psh{}s$
that produces the {program} actions described in $t$.
We define $\ptr{s}{p}$ to be the set of traces of a partial program
$p$ with respect to contexts of shape ${\circ}s$, and $\atr{s}{a}$
to be the set of traces of a context $a$ with respect to programs of
shape ${\bullet}s$.
The player that starts the game is the one that owns the main
component according to $s$.
For each player, the trace semantics is deterministic with respect to
its own actions and nondeterministic with respect to the opponent's
actions.
All possible actions an actual opponent could take have a
corresponding nondeterministic choice, which is formalized by a
property we call trace extensibility.
\begin{lemma}[Trace extensibility]
\label{lem:trace-ext}
\[
\begin{array}{l}
\forall t,\; s,\; p \psh{}s,\; a \ash{}s. \\
~~ ( t \in \ptr{s}{p} \wedge t.\gamma? \in \atr{s}{a} \Rightarrow
t.\gamma? \in \ptr{s}{p} ) ~ \wedge \\
~~ ( t \in \atr{s}{a} \wedge t.\gamma! \in \ptr{s}{p} \Rightarrow
t.\gamma! \in \atr{s}{a} )
\end{array}
\]
\end{lemma}
Nondeterminism disappears once we choose a particular opponent for
a player, as the two key lemmas below illustrate.
\begin{lemma}[Trace decomposition]
\label{lemma:trace-decomp}
\[
\begin{array}{l}
\forall s,\; p \psh{}s,\; a \ash{}s.~~
\TERM{a[p]} \Rightarrow \\
~~\exists t.~~ \TTERM{t} \wedge t \in \ptr{s}{p} \cap \atr{s}{a}
\end{array}
\]
\end{lemma}
Trace decomposition is stated for terminating programs.
It extracts the interaction
between a program $p$ and a context $a$
with dual shapes by looking at how $a[p]$ reduces, synthesizing that
interaction into a trace $t$.
Because execution terminates, this trace ends with a
termination marker.
\begin{lemma}[Trace composition]
\label{lemma:trace-comp}
\[
\begin{array}{l}
\forall t,\; s,\; p \psh{}s,~ a \ash{}s.~~
t \in \ptr{s}{p} \cap \atr{s}{a} \Rightarrow \\
~~ ( \forall E\alpha.~(t.E\alpha) \not\in \ptr{s}{p} \cap \atr{s}{a} ) \Rightarrow \\
~~~~~(\TERM{a[p]} \iff \TTERM{t})
\end{array}
\]
\end{lemma}
Trace composition is the opposite of trace decomposition,
reconstructing a sequence of reductions based on synthesized interaction
information.
It considers a program and a context with dual shapes, that share a
common trace $t$.
The condition on the second line states that the game has ended:
trace $t$ cannot be extended by any action $E\alpha$ such that the
two players share trace ``$t.E\alpha$''.
Under these assumptions, trace composition tells us that one of the
following holds:
either (1) the trace ends with a termination marker $\checkmark$ and
putting $p$ in context $a$ will produce a terminating
program,
or (2) the trace does not end in $\checkmark$
and putting $p$ in context $a$ will produce a diverging program.
Intuitively, if the game has ended but there is no termination
marker, it must be because one of the players went into an
infinite sequence of internal actions and will neither give control
back nor terminate.
While the statement of these lemmas is quite close to that used in
an open world setting~\cite{JeffreyR05b,JeffreyR05},
the trace semantics itself has to be adapted in order to prove them
in the presence of our closed world assumption.
To this end, we incorporate \emph{internal} actions within the trace
semantics, thus adding more options to the nondeterministic
choice of the next context action, which allows us to track at any
point the currently executing compartment.
When in control, a player can only perform communicating actions
allowed by the interface of the current compartment.
This restricts external actions as required, while also making it possible
to internally switch the current compartment through allowed internal
actions.
Using our semantics, we thus end up with finer-grained traces that
include internal communication, which can be directly mapped to
high-level attackers (\autoref{assumption:definability}).
The traces we use otherwise are obtained by erasing internal actions
from the finer-grained traces.
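The erasure step can be pictured with a small Python sketch (the representation and names are illustrative, not the paper's formalization): internal actions are dropped, while external program and context actions survive into the coarse trace:

```python
# Illustrative sketch: a finer-grained trace interleaves internal
# compartment switches with external actions; the coarser trace used
# elsewhere keeps only the external ones.

def erase_internal(fine_trace):
    return [a for a in fine_trace if a[0] != "internal"]

fine = [
    ("internal", "local call inside the context"),
    ("Call!", "C2", "P"),
    ("internal", "local return inside the program"),
    ("Return?",),
]
coarse = erase_internal(fine)
```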
\SUBSECTION{Proof of SCC}
\label{sec:md-proof}
We prove our instance of structured full abstraction,
which implies SCC by \autoref{thm:sfa-to-sc} since we have
isomorphic definitions to the ones in \autoref{sec:fa-not-enough}.
\begin{thm}[Structured full abstraction]\label{thm:sfa-instance}\Coqed\\
Our compiler satisfies structured full abstraction.
\end{thm}
Recall that the basic idea behind the proof technique is to extract
two traces that characterize the interaction between a low-level context
and two compiled fully defined high-level programs, and then to map these
two traces to a fully defined high-level context.
The high-level context should reproduce the context actions described
in the traces when facing the same programs as the low-level context.
Unfortunately, a compiled fully defined context cannot reproduce
any arbitrary low-level trace, because the values transmitted
in registers are part of external communication actions in low-level
traces:
As enforced by the compiler, these contexts always clear all
registers but the one used for communication before giving control to
the program.
They can thus only produce traces in which registers are
cleared in all context actions, which we call \emph{canonical} traces.
We denote by $\zeta(\gamma)$ the operation that rewrites action
$\gamma$ so that all registers but one are clear.
A canonical trace $\zeta_{\circ}(t)$ can be obtained from an arbitrary
trace $t$ by replacing all context actions ``$\gamma?$'' by
``$\zeta(\gamma)?$''.
We call this operation trace canonicalization.
As we will see, being able to reproduce arbitrary canonical
traces gives enough distinguishing power to the high-level context.
The reason is that, because they can't trust other compartments,
compiled fully defined components never read values transmitted in
registers with the exception of the one used for communication.
As a consequence, these components cannot distinguish context external actions
based on the content of these unread registers, which are exactly the
ones a compiled fully defined context cleans.
Fully defined programs thus perform the exact same actions when facing
a trace $t$ or its canonicalization $\zeta_{\circ}(t)$,
as formalized by \autoref{lem:canonicalization}.
This means that having the high-level attacker reproduce canonical
traces instead of the original traces of the low-level context will be
enough to lead compiled programs into reproducing the actions they took when
facing the low-level context.
\begin{lemma}[Canonicalization]
\label{lem:canonicalization}
\[
\begin{array}{l}
\forall t,\; s,\; P \psh{}s. \\
~~
P\text{ fully defined wrt. contexts of shape }{\circ}s \Rightarrow \\
~~~~ t \in \ptr{s}{\comp{P}} \iff \zeta_{\circ}(t) \in \ptr{s}{\comp{P}}
\end{array}
\]
\end{lemma}
The definability assumption
below gives a characterization of our
mapping from a canonical trace $t$ and an action $\gamma_1$
to a compiled fully defined context $\comp{A}$
that reproduces the context actions in $t$ and,
depending on the next action $\gamma$ the
program takes, ends the game with either termination (if
$\zeta(\gamma) = \zeta(\gamma_1)$) or divergence
(if $\zeta(\gamma) \not= \zeta(\gamma_1$)).
The context $\comp{A}$ will thus distinguish a program $p$ producing
trace ``$t.\gamma_1!$'' from any program producing ``$t.\gamma!$'' with
$\zeta(\gamma) \not= \zeta(\gamma_1)$.
\begin{assumption}[Definability]
\label{assumption:definability}
\[
\begin{array}{l}
\forall t,\, \gamma_1,\, s.~~
t = \zeta_{\circ}(t) \wedge
(\exists p \psh{}s.~ (t.\gamma_1!) \in \ptr{s}{p}) \Rightarrow \\
~~\exists A \ash{}s.~ A\text{ fully defined wrt. programs of shape }{\bullet}s ~\wedge \\
\text{~~~~(1) }t \in \atr{s}{\comp{A}} ~\wedge \\
\text{~~~~(2) }( \gamma_1 \neq \checkmark \Rightarrow
(t.\gamma_1!.\checkmark?) \in \atr{s}{\comp{A}} ) ~\wedge \\
\text{~~~~(3) }\forall \gamma.\;\text{if }
\zeta(\gamma) \not= \zeta(\gamma_1) \text{ then } \forall \gamma'.~
(t.\gamma!.\gamma'?) \not\in \atr{s}{\comp{A}}
\end{array}
\]
\end{assumption}
The definability assumption gives us a fully defined
context that follows trace $t$ (part 1) and that, if given
control afterwards via action ``$\gamma!$'' such that
$\gamma \not= \checkmark$, acts as follows:
if $\gamma = \gamma_1$ the context terminates (2)
and if the context can distinguish $\gamma$ from $\gamma_1$, it
will make execution diverge by not issuing any action $\gamma'$ (3).
Since it is a compiled fully defined context, $\comp{A}$ can only access
values transmitted using register $r_\ii{com}$, the register that
holds the call argument or return value.
So $\comp{A}$ can only distinguish between $\gamma$ and $\gamma_1$
when they differ in $r_\ii{com}$, which is captured formally by
the $\zeta(\gamma) \neq \zeta(\gamma_1)$ condition.
Proving this assumption (even on paper) would be quite tedious,
so we settled for testing its correctness using
QuickCheck~\cite{ClaessenH00}.
We built an algorithm (in OCaml) that constructs $A$ out of $t$.
More precisely, the algorithm takes as input a trace with internal
actions (the finer-grained trace that erases to $t$)
and builds a context $A$ that reproduces context internal and
external actions as prescribed by that trace.
Execution will afterwards resume at a different point in $A$ depending
on the next action taken by the program.
At each such point, $A$ will either terminate execution or make it
diverge depending on whether the program action is distinguishable
from action $\gamma_1$.
Because the trace taken as input already includes internal actions, we do
not have to reconstruct them, hence our algorithm is not more
difficult to devise than in an open-world setting~\cite{PatrignaniASJCP15}.
In the following, we assume that the algorithm is correct,
\IE that \autoref{assumption:definability} holds.
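A QuickCheck-style test can be mimicked in a few lines. The sketch below (with an invented trace encoding, and in Python rather than the OCaml/QuickCheck setup we actually used) checks only a toy sanity property of canonicalization: idempotence, which guarantees that the premise $t = \zeta_{\circ}(t)$ of the definability assumption holds for canonicalized traces.

```python
import random

R_COM = "r_com"

def zeta(action):
    # Clear every register except r_com in the action's register map.
    d, kind, regs = action
    return (d, kind, {r: (v if r == R_COM else 0) for r, v in regs.items()})

def canonicalize(trace):
    # Apply zeta to context actions ("?") only.
    return [zeta(a) if a[0] == "?" else a for a in trace]

def random_trace(rng, length=6):
    # Random generator for toy traces, alternating context/program actions.
    actions = []
    for i in range(length):
        d = "?" if i % 2 == 0 else "!"
        regs = {R_COM: rng.randrange(16), "r1": rng.randrange(16)}
        actions.append((d, rng.choice(["call", "ret"]), regs))
    return actions

# Property check: canonicalization is idempotent on random traces.
rng = random.Random(0)
for _ in range(100):
    t = random_trace(rng)
    assert canonicalize(canonicalize(t)) == canonicalize(t)
```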
We can now turn to the main theorem.
\begin{proof}[Detailed proof of structured full abstraction]
Consider a low-level attacker $a \ash{}s$ distinguishing
two fully defined partial programs $P,Q \psh{}s$ after compilation.
Suppose without loss of generality that $a[\comp{P}]$ terminates and $a[\comp{Q}]$
diverges.
We build a high-level attacker $A \ash{}s$ that is fully defined
with respect to programs of shape $\bullet s$ and can distinguish between
$P$ and $Q$.
We can first apply trace decomposition (\autoref{lemma:trace-decomp})
to $a$ and $\comp{P}$ to get a trace $t_i \in \ptr{s}{\comp{P}}$ that ends
with $\checkmark$, such that $t_i \in \atr{s}{a}$.
Call $t_p$ the longest prefix of $t_i$ such that
$t_p \in \ptr{s}{\comp{Q}}$.
Because trace sets are prefix-closed by construction, we know that
$t_p \in \ptr{s}{\comp{P}} \cap \atr{s}{a}$.
Moreover, $t_p$ is necessarily a \emph{strict} prefix of $t_i$:
otherwise, we could apply trace composition
(\autoref{lemma:trace-comp})
\iffull to $a$ and $\comp{Q}$ \fi
and get that $a[\comp{Q}]$ terminates, a contradiction.
So there exists an
external action $E\alpha$ such that trace ``$t_p.E\alpha$'' is a
prefix of $t_i$.
Now $E\alpha$ cannot be a context action,
or else trace extensibility (\autoref{lem:trace-ext}) would imply that
``$t_p.E\alpha$'' is a trace of $\ptr{s}{\comp{Q}}$, which is incompatible
with $t_p$ being the \emph{longest} prefix of $t_i$ in
$\ptr{s}{\comp{Q}}$.
Therefore, $E\alpha$ is a program action, i.e., there
exists $\gamma_1$ such that ``$E\alpha = \gamma_1!$''.
Intuitively, $\comp{P}$ and $\comp{Q}$ take the same external actions
until the end of $t_p$, where $\comp{P}$ takes external action
``$\gamma_1!$'' and $\comp{Q}$ does not (it takes either a different
action $\gamma \neq \gamma_1$ or no external action at all).
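The prefix extraction in this step is effectively computable when trace sets are presented as finite prefix-closed sets. The following sketch is illustrative only, with traces as lists of opaque action labels:

```python
def longest_prefix_in(trace, trace_set):
    """Longest prefix of `trace` contained in `trace_set`.
    Prefix-closedness lets us scan from the longest candidate downwards."""
    for n in range(len(trace), -1, -1):
        if tuple(trace[:n]) in trace_set:
            return list(trace[:n])
    return []

# Toy instance of the proof situation: t_i is a terminating trace of a[P],
# traces_Q is the (prefix-closed) trace set of Q; they diverge after "c?".
t_i = ["a?", "b!", "c?", "d!", "term!"]
traces_Q = {(), ("a?",), ("a?", "b!"), ("a?", "b!", "c?")}
t_p = longest_prefix_in(t_i, traces_Q)
gamma_1 = t_i[len(t_p)]  # first program action of P that Q cannot match
```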
Now, let $t_c$ be the canonicalization of trace $t_p$, i.e.,
$t_c = \zeta_{\circ}(t_p)$.
By canonicalization (\autoref{lem:canonicalization}),
``$t_c.\gamma_1!$'' $= \zeta_{\circ}(t_p.\gamma_1!)$ is a trace of $\comp{P}$.
We can thus apply definability (\autoref{assumption:definability})
to trace $t_c$ and action $\gamma_1$, using $\comp{P} \psh{}s$ as a
witness having trace ``$t_c.\gamma_1!$''.
This yields a {fully defined} context $A \ash{}s$ such that:
\[
\begin{array}{l}
\text{(1) }t_c \in \atr{s}{\comp{A}}, \\
\text{(2) }\gamma_1 \neq \checkmark \Rightarrow
(t_c.\gamma_1!.\checkmark?) \in \atr{s}{\comp{A}}, \\
\text{(3) }\forall \gamma,~ \gamma'.~~(t_c.\gamma!.\gamma'?) \in \atr{s}{\comp{A}} \Rightarrow
\zeta(\gamma) = \zeta(\gamma_1).
\end{array}
\]
We now show that these conditions imply that
\iffull
$\comp{A}$ distinguishes $\comp{P}$ from $\comp{Q}$:
\fi
$\comp{A}\![\comp{P}]$
terminates while $\comp{A}[\comp{Q}]$ diverges.
First, we look at $\comp{P}$.
Consider the case where
$\gamma_1 = \checkmark$.
In this case, by applying trace extensibility to $\comp{A}$ in (1), we
get that
``$t_c.\checkmark!$'' is a trace of $\comp{A}$, so trace
composition allows us to conclude that $\comp{A}[\comp{P}]$
terminates.
Now if $\gamma_1 \neq \checkmark$ then this action gives back control
to the context, which, given (2), will perform action ``$\checkmark?$''.
Applying trace extensibility to $\comp{P}$, $\comp{P}$ has trace
``$t_c.\gamma_1!.\checkmark?$'', so we can apply trace composition and
deduce that $\comp{A}[\comp{P}]$ terminates in this case as well.
Now, regarding $\comp{Q}$, we first obtain the following by applying
canonicalization to $t_p$, ``$t_p.\checkmark!$'', and
``$t_p.\gamma_1!$'':
\[
\begin{array}{l}
\text{(a) }t_c = \zeta_{\circ}(t_p) \in \ptr{s}{\comp{Q}}, \\
\text{(b) }(t_c.\checkmark!) = \zeta_{\circ}(t_p.\checkmark!) \in \ptr{s}{\comp{Q}} \Rightarrow
(t_p.\checkmark!) \in \ptr{s}{\comp{Q}}, \\
\text{(c) }(t_c.\gamma_1!) = \zeta_{\circ}(t_p.\gamma_1!) \in \ptr{s}{\comp{Q}} \Rightarrow
(t_p.\gamma_1!) \in \ptr{s}{\comp{Q}}.
\end{array}
\]
After following trace $t_c$, which $\comp{Q}$ has from (a), $\comp{Q}$
cannot perform a terminating action:
otherwise using (b) and trace extensibility for $a$ and $t_p$, we
could apply trace composition to trace ``$t_p.\checkmark!$'' and get that
$a[\comp{Q}]$ terminates, which is a contradiction.
$\comp{Q}$ cannot perform action $\gamma_1$ either, since (c) would
then violate the fact that $t_p$ is the longest prefix of $t_i$ in
$\ptr{s}{\comp{Q}}$.
So $\comp{Q}$ only has two options left.
The first is to perform no external action by
going into an infinite sequence of internal transitions.
In this case, using (1), we can apply trace composition to get that
$\comp{A}[\comp{Q}]$ diverges.
The second option is to give control back to the context using
an external action $\gamma$ so that
$\checkmark \neq \gamma \neq \gamma_1$.
Because fully defined compiled programs clean registers, they only
yield canonical actions, i.e.
$\gamma = \zeta(\gamma) \wedge \gamma_1 = \zeta(\gamma_1)$.
Combined with (3), this entails that if $\comp{A}$ produced an
action $\gamma'$, we would have $\gamma = \gamma_1$, which is
false.
Hence, $\comp{A}$ doesn't produce any action: it goes into an infinite
sequence of local transitions.
We can again apply trace composition to get that
$\comp{A}[\comp{Q}]$ diverges.
We finally apply separate compiler correctness
(\autoref{cor:separate-compiler-correctness}) to conclude the proof.
\end{proof}
\section{Related Work}
\label{sec:related}
\SUBSECTION{Fully abstract compilation}
Fully abstract compilation was introduced in the seminal work of
Mart\'in Abadi~\cite{abadi_protection98}
and later investigated by the academic community.
(Much before \iffull securing compilers\else this\fi,
the concept of full abstraction
was coined by Milner~\cite{Milner75}.)
\iffull
Abadi~\cite{abadi_protection98} and later Kennedy~\cite{Kennedy06}
identified failures of full abstraction in the Java and C\# compilers,
some of which were fixed, but also, some that would be too expensive to
fix with the currently deployed protection mechanisms.
\fi
\iffull\else For instance, \fi
Ahmed~\ETAL\cite{AhmedB08,AhmedB11, Ahmed15, NewBA16} proved the
full abstraction of type-preserving compiler passes for functional
languages and devised proof techniques for {\em typed} target languages.
Abadi and Plotkin~\cite{abadi_aslr12} and
Jagadeesan~\ETAL\cite{JagadeesanPRR11} expressed the protection
provided by a mitigation technique called address space layout
randomization as a probabilistic variant of full abstraction.
Fournet~\ETAL\cite{FournetSCDSL13} devised a fully abstract compiler
from a subset of ML to JavaScript.
Patrignani~\ETAL\cite{PatrignaniASJCP15, LarmuseauPC15}
were recently the first to study fully abstract compilation to machine code,
starting from single modules written in simple, idealized
object-oriented and functional languages and targeting hardware
architectures featuring a new coarse-grained isolation mechanism.
Until recently, they studied fully abstract compilers that
by design violate our separate compilation property, so they cannot be
applied to our compartmentalizing compilation setting.
In recent parallel work, Patrignani~\ETAL\cite{PatrignaniDP16}
proposed an extension of their compilation scheme to protecting
multiple components from each other.
The attacker model they consider is different, especially since their
source language does not have undefined behavior.
Still, if their source language were extended with unsafe features,
our SCC property could possibly hold for their compiler.
Patrignani, Devriese~\ETAL\cite{PatrignaniC15, DevriesePP16} also
proposed proof techniques for full abstraction that work for untyped
target languages, and more recently proved full abstraction by
approximate back-translation for a compiler between the simply typed
and the untyped lambda calculus and fully formalized this proof in
Coq~\cite{DevriesePPK17}. As opposed to our Coq proof for the instance
from \autoref{sec:instance}, which is done under well-specified
assumptions but not complete, the proof of Devriese~\ETAL is assumption free.
\SUBSECTION{Formal reasoning about compartmentalized code}
SCC is orthogonal to formal techniques for reasoning about
compartmentalized software: SCC allows {\em transferring} security
guarantees for compartmentalized code written in a source language
to machine code via compartmentalizing compilation, but SCC itself
does not provide effective reasoning principles to obtain those
security guarantees in the first place.
The literature contains interesting work on formally characterizing
the security benefits of compartmentalization.
Promising approaches include Jia~\ETAL's work on System
M~\cite{JiaS0D15}, and Devriese~\ETAL's work on logical relations for
a core calculus based on JavaScript~\cite{DevriesePB16}, both of which
allow bounding the behavior of a program fragment based on the
interface or capabilities it has access to.
One significant challenge we tackle in this paper is languages with
undefined behavior, whereas in these other works illegal actions such
as accessing a buffer out of bounds must be detected and must make the program
halt.
\SUBSECTION{Verifying correct low-level compartmentalization}
Recent work focused on formally verifying the correctness of
low-level compartmentalization
mechanisms based on software fault isolation~\cite{ZhaoLSR11,
MorrisettTTTG12, KrollSA14} or tagged hardware~\cite{micropolicies2015}.
That work, however, only considers the correctness of the low-level
compartmentalization mechanism, not the compiler and not high-level
security properties and reasoning principles for code written in
a programming language with components.
Communication between low-level compartments is generally done by jumping to a
specified set of entry points, while the model we consider in
\autoref{sec:instance} is more structured and enforces correct calls
and returns.
Finally, seL4 is a verified operating system
microkernel~\cite{Klein09sel4:formal} that uses a capability system
to separate user-level threads and for which correct access
control~\cite{SewellWGMAK11} and noninterference
properties~\cite{seL4:Oakland2013} were proved formally.
\section{Conclusion and Future Work}
\label{sec:conclusion}
We have introduced a new secure compartmentalizing compilation property,
related it to the established notion of full abstraction,
and applied our property in a carefully simplified setting: a small
imperative language with procedures compiling to a compartmentalized
abstract machine.
This lays the formal foundations for studying the secure compilation
of mutually distrustful components written in unsafe languages.
In the future we plan to build on this groundwork to study more realistic
source and target languages, compilers, and enforcement mechanisms.
In the long run, we would like to apply this to the C language by
devising a secure compartmentalizing variant of CompCert that targets a
tag-based reference monitor~\cite{micropolicies2015} running on a real
RISC processor~\cite{Dover16}.
We have in fact started working towards this long term
goal~\cite{JuglaretHAPST15}, but this will take time to achieve.
Beyond tagged hardware, we would also like to implement the abstract
compartmentalization machine from \autoref{sec:instance} in terms of
various other enforcement mechanisms, including process-level
sandboxing~\cite{Kilpatrick03, GudkaWACDLMNR15, wedge_nsdi2008,
ProvosFH03}, software fault isolation (SFI)~\cite{YeeSDCMOONF10},
capability machines~\cite{cheri_oakland2015}, and multi-PMA
systems~\cite{PatrignaniDP16}.
As we target lower-level machines, new problems will appear: for
instance we need to deal with the fact that memory is finite and
resource exhaustion errors cannot be hidden from the attacker, which
will require slightly weakening the security property.
Finally, we would like to study more interesting compartmentalization
models including dynamic component creation and nested components, and
the way these extensions influence the security property.
\ifanon\else
\paragraph*{Acknowledgments}
We thank
Andr\'e DeHon,
Deepak Garg, and
Andrew Tolmach
for helpful discussions and thoughtful feedback on earlier drafts.
Yannis Juglaret is supported by a PhD grant from the
French Department of Defense (DGA) and Inria.
Arthur Azevedo de Amorim and Benjamin C. Pierce are
supported by
\href{http://www.nsf.gov/awardsearch/showAward?AWD_ID=1513854}{NSF award 1513854,
{\em Micro-Policies: A Framework for Tag-Based Security Monitors}}.
\fi
\bibliographystyle{plainurl}
\section{CCQE event selection in MiniBooNE}
The MiniBooNE\footnote{The mini-Booster neutrino experiment (MiniBooNE)
at Fermi National Accelerator Laboratory (Fermilab)
is designed to search for
$\nu_\mu \rightarrow \nu_e$
appearance neutrino oscillations~\cite{MB_osc}.} detector, a spherical tank filled with mineral oil,
is surrounded by 1280 8'' photomultiplier tubes (PMTs) to detect
\v{C}erenkov light from charged particles
\footnote{Detailed information about the Fermilab Booster neutrino beamline and
the MiniBooNE neutrino detector are available elsewhere~\cite{MB_flux,MB_dtec}.}.
In the 19.2~$\mu$s readout window, a ``subevent'' is defined as a timing cluster of PMT hits.
The identification of $\nu_\mu$ CCQE interactions relies solely on
the detection of the primary muon \v{C}erenkov light (first subevent) and
the associated decay electron \v{C}erenkov light (second subevent) in
these events~\cite{MB_CCQE}:
\begin{eqnarray}
\begin{array}{cccl}
1: & \nu_\mu + n & \rightarrow & \mu^{-} + p \\
2: & & & \mu^{-} \rightarrow e^{-} + \bar{\nu}_e + \nu_\mu.
\end{array}
\end{eqnarray}
where the label on each line identifies the subevent
in which that process occurs.
Therefore, a CCQE candidate is characterized with a total of 2 subevents.
After cuts, $146070$ events are identified from
$5.58\times 10^{20}$ protons on target collected between
August 2002 and December 2005. The cuts are estimated to be $26$\% efficient
at selecting $\nu_\mu$ CCQE events in a 550 cm radius, with a CCQE purity
of $78$\%.
The largest background is that from CC single-pion production (CC1$\pi^{+}$).
The CC1$\pi^{+}$ interaction proceeds as
\begin{equation}
\begin{array}{cccl}
1: & \nu_\mu +p(n) & \rightarrow & \mu^{-} + p(n) + \pi^{+} \; , \; \pi^{+} \rightarrow \mu^{+} + \nu_\mu \\
2: & & & \mu^{-} \rightarrow e^{-} + \bar{\nu}_e + \nu_\mu \\
3: & & & \mu^{+} \rightarrow e^{+} + \nu_e + \bar{\nu}_\mu.
\end{array}
\end{equation}
Note that this interaction results in a total of 3 subevents:
the primary interaction and 2 muon decays, resulting in an electron and a positron.
Although these events can be removed from the CCQE sample by requiring
only one muon decay (a total of 2 subevents), there is still a significant number of CC1$\pi^{+}$ events
that contribute to the CCQE background because one of the muon decays may be missed for various reasons.
Among them, $\pi^{+}$ absorption is a large effect ($>$40\%) with a large uncertainty ($\sim$30\%).
Additionally, the prediction of CC1$\pi^{+}$ backgrounds in the CCQE sample relies on
the Rein-Sehgal model~\cite{Rein-Sehgal} and the final state interactions (FSIs)
in the $\textsc{nuance}$ event generator~\cite{nuance}, which are not sufficiently accurate
for the precise background prediction needed to measure the absolute CCQE cross section.
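The subevent-based selection can be illustrated with a toy clustering sketch. This is not the actual MiniBooNE reconstruction; the 100~ns gap parameter and the hit-time encoding are invented for this example:

```python
# Illustrative sketch: group PMT-hit times into "subevents" by timing clusters,
# then select CCQE candidates as events with exactly 2 subevents (muon + one
# decay electron), and CC1pi+-like events as those with 3 subevents.

def count_subevents(hit_times_ns, max_gap_ns=100.0):
    """Count timing clusters: a new cluster starts when the gap exceeds max_gap_ns."""
    if not hit_times_ns:
        return 0
    times = sorted(hit_times_ns)
    clusters = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev > max_gap_ns:
            clusters += 1
    return clusters

def is_ccqe_candidate(hit_times_ns):
    return count_subevents(hit_times_ns) == 2

def is_cc1pi_like(hit_times_ns):
    return count_subevents(hit_times_ns) == 3
```

A missed muon decay (e.g., from $\pi^{+}$ absorption) removes one cluster, which is how CC1$\pi^{+}$ events migrate into the 2-subevent sample.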
\section{CC1$\pi^{+}$ background measurement}
Because of these uncertainties in the CC1$\pi^{+}$ background predictions,
we instead measure the CC1$\pi^{+}$ rate directly in our CC1$\pi^{+}$ data
and adjust the event generator to match.
In this way, the predicted kinematic distribution of CC1$\pi^{+}$ events is modified,
and the systematic error on the CC1$\pi^{+}$ cross section is reduced to the level of the $\pi^{+}$ absorption uncertainty.
\begin{figure}
\includegraphics[height=2.3in]{fit_before.eps}
\includegraphics[height=2.3in]{fit_after.eps}
\caption{
(color online). The distribution of events in $Q^2_{QE}$ for the (a) 2 and (b) 3 subevent samples.
Data and MC samples are shown along with the individual MC contributions from
CCQE, CC1$\pi^+$, and other channels.
The left side is before the application of the CC1$\pi^+$ background correction.
The inset in (b) shows the CC1$\pi^+$ reweighting function
as determined from the background fit procedure.
The right side is the same distribution after the application of the CC1$\pi^+$ background
correction and the new CCQE model parameters $M_A^{eff}$ and $\kappa$
as determined from the fit procedure described in the text.}
\label{fig:fit_before}
\end{figure}
The left plot of Figure~\ref{fig:fit_before} shows the
$Q^2_{QE}$ distributions\footnote{The neutrino energy $E_{\nu}^{QE}$ and 4-momentum transfer $Q^2_{QE}$
are reconstructed by assuming a CCQE interaction with the target neutron at rest,
and an average nucleon binding energy of 34~MeV.}
for data and Monte Carlo (MC) of the two samples before the reweighting of
CC1$\pi^+$ MC events. The 2-subevent sample shows good shape agreement between data and MC.
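For orientation, the standard QE reconstruction formulas referenced in the footnote can be sketched as follows. This is a hedged illustration using textbook constants; the exact treatment of the binding energy in the analysis code may differ:

```python
import math

# Standard CCQE two-body reconstruction (target neutron at rest, binding
# energy E_B = 34 MeV as stated in the text); constants in MeV.
M_N, M_P, M_MU, E_B = 939.565, 938.272, 105.658, 34.0

def reconstruct_qe(T_mu, cos_theta_mu):
    """Return (E_nu_QE, Q2_QE) in (MeV, MeV^2) from muon kinetic energy and angle."""
    E_mu = T_mu + M_MU
    p_mu = math.sqrt(E_mu**2 - M_MU**2)
    dM2 = M_N**2 - M_P**2
    num = 2.0 * (M_N - E_B) * E_mu - (E_B**2 - 2.0 * M_N * E_B + M_MU**2 + dM2)
    den = 2.0 * (M_N - E_B - E_mu + p_mu * cos_theta_mu)
    E_nu = num / den
    Q2 = -M_MU**2 + 2.0 * E_nu * (E_mu - p_mu * cos_theta_mu)
    return E_nu, Q2
```

Both reconstructed quantities are functions of the two measured observables $T_\mu$ and $\cos\theta_\mu$ only.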
$\textsc{nuance}$ uses the relativistic Fermi gas (RFG) model~\cite{Smith-Moniz} for CCQE interactions.
In the previous work, we adjusted two parameters in the RFG model,
the effective axial mass $M_A^{eff}$ and Pauli blocking parameter $\kappa$,
to match the shape of the $Q^2_{QE}$ distribution to data~\cite{MB_CCQE}.
Note that the earlier analysis did not consider the overall normalization of events.
The 3-subevent sample shows a large data-MC disagreement in both shape and normalization.
Using these samples, a simultaneous fit was performed for the shape
and normalization of the 3-subevent sample,
and the normalization of the 2-subevent sample.
These were then used to determine the CC1$\pi^+$ reweighting function,
which is shown in the inset plot of Figure~\ref{fig:fit_before}b (left).
In order to reduce the sensitivity to the details of the shape of the 2-subevent sample,
only the $0.2<Q^2_{QE}\,(\mbox{GeV}^2)<0.6$ region was considered for the normalization parameter of this function.
The $Q^2_{QE}$ shape of the CCQE sample was fit later,
although it has no impact on the cross section measurements.
The effect of the CCQE normalization on the 3-subevent sample was minimal, since the background
from CCQE in this $Q^2_{QE}$ region is small, as can be seen in the left plot of Figure~\ref{fig:fit_before}b.
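As a simplified stand-in for this fit (not the actual analysis, which fits a smooth reweighting function of $Q^2_{QE}$ and a relative normalization simultaneously), a per-bin reweighting that brings the 3-subevent MC to the data can be computed as:

```python
# Toy per-bin CC1pi+ reweighting: after subtracting the fixed "other" MC
# contributions from the data, solve for the factor that scales the CC1pi+
# MC to reproduce the data bin by bin.

def cc1pi_weights(data, mc_cc1pi, mc_other):
    weights = []
    for d, s, o in zip(data, mc_cc1pi, mc_other):
        # guard against empty CC1pi+ bins; leave the weight at 1 there
        weights.append((d - o) / s if s > 0 else 1.0)
    return weights

def reweighted(mc_cc1pi, mc_other, weights):
    """Total MC prediction after applying the CC1pi+ weights."""
    return [w * s + o for w, s, o in zip(weights, mc_cc1pi, mc_other)]
```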
As a final step, with the measured CC1$\pi^+$ background incorporated, a shape-only fit
to the 2-subevent (CCQE) sample is performed
in order to extract revised CCQE model parameters~\cite{MB_CCQE}.
The normalization of the CCQE sample is then extracted from the fit described above.
The $Q^2_{QE}$ distributions of data from all subevent samples are shown
together with the MC prediction in the right plot of Figure~\ref{fig:fit_before}.
Data-MC agreement is good in both subevent samples.
A fit to the 2-subevent sample provided adjusted CCQE model parameters, $M_A^{eff}$ and $\kappa$.
This was a ``shape-only'' fit, that is, the MC was normalized with an arbitrary factor
to have the same integrated event count as the background-subtracted data.
The fit yielded,
\begin{eqnarray}
M_A^{eff}&=& 1.35 \pm 0.17~\mbox{GeV}/\mbox{c}^2~;\nonumber\\
\kappa &=& 1.007 \pm 0.012 ~;\nonumber\\
\chi^2/dof&=& 47.0 /38 ~.\nonumber
\end{eqnarray}
The left plot of Figure~\ref{fig:contour} shows the $Q^2_{QE}$ distribution of data,
MC before, and MC after the fit with all sources of error.
Data and MC after the fit agree within shape errors.
The right plot of Fig.~\ref{fig:contour} shows the $1\sigma$ contour regions of this fit together
with the results from the previous MiniBooNE analysis~\cite{MB_CCQE}.
Note that the current result is consistent (to within $1\sigma$) with $\kappa=1$.
This is because the CC1$\pi^+$ background resulting from the procedure
in this work has changed by an amount only just consistent with the error
assigned to the background in the previous work. The value of $\kappa$ is
quite sensitive to the CC1$\pi^+$ background at low $Q^2_{QE}$.
However, the previous and current results are consistent at the $1\sigma$ level.
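The sensitivity of the $Q^2$ shape to $M_A^{eff}$ can be seen from the dipole axial form factor used in such models, $F_A(Q^2)=g_A/(1+Q^2/M_A^2)^2$: a larger axial mass slows the fall-off with $Q^2$. The sketch below is illustrative only and omits the rest of the RFG cross-section machinery:

```python
G_A = -1.2671  # axial coupling at Q^2 = 0

def dipole_FA(Q2, MA):
    """Dipole axial form factor F_A(Q^2) = g_A / (1 + Q^2/M_A^2)^2 (GeV units)."""
    return G_A / (1.0 + Q2 / MA**2) ** 2

# |F_A|^2 enhancement at Q^2 = 0.6 GeV^2 when M_A is raised from 1.03 to 1.35 GeV
ratio = (dipole_FA(0.6, 1.35) / dipole_FA(0.6, 1.03)) ** 2
```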
\begin{figure}
\includegraphics[height=2.3in]{mafit_all_q2.eps}
\includegraphics[height=2.3in]{contour.eps}
\caption{\label{fig:contour}(Color online).
Left plot is the $Q^2_{QE}$ distribution of the data, MC before, and MC after the fit with errors.
Right plot is the $1\sigma$ contour plot for the $M_A^{eff}$--$\kappa$ fit.
The filled star shows the best fit point and $1\sigma$ contour
extracted from this work.
The open star indicates the best fit point and
$1\sigma$ contour from the previous work~\cite{MB_CCQE}.
Two regions are shown from the previous work;
the larger area indicates the total uncertainty on the results,
including the background uncertainty~\cite{MB_CCQE}.}
\end{figure}
The effect of the new $M_A^{eff}$ is clearly seen in 2-dimensional plots.
Figure~\ref{fig:2sub_2dim} shows the data-MC ratio of CCQE candidate events as a function of
muon kinetic energy $T_{\mu}~(\mbox{GeV})$ and muon scattering angle $\cos\theta_{\mu}$.
Note that the muon kinetic energy and scattering angle observables are the basis of
all reconstructed kinematic variables in the $\nu_\mu$ CCQE channel in MiniBooNE.
In the left plot, we use the world averaged nuclear parameters
($M_A^{eff}=\NUAMAcon~\mbox{GeV}/\mbox{c}^2$, $\kappa=1.000$)~\cite{past-ma}.
As can be seen, the data-MC disagreement follows the lines of constant $Q^2$.
This is the same tendency observed in the previous CCQE analysis in MiniBooNE~\cite{MB_CCQE},
indicating that the data-MC disagreement is more likely due to an incorrect cross section prediction
(a function of $Q^2$) than an incorrect flux prediction (a function of neutrino energy).
After introducing the new $M_A^{eff}$ and $\kappa$ ($M_A^{eff}=1.35~\mbox{GeV}/\mbox{c}^2$, $\kappa=1.007$),
shown in the right plot of Fig.~\ref{fig:2sub_2dim},
the data-MC disagreement is reduced and we obtain an improved cross section prediction across the entire kinematic space.
Note that this modification of the CCQE cross section prediction does not affect the absolute CCQE cross section measurement,
presented below.
\begin{figure}
\includegraphics[height=2.3in]{2sub_2dim_before.eps}
\includegraphics[height=2.3in]{2sub_2dim_after.eps}
\caption{Ratio of MiniBooNE $\nu_\mu$ CCQE data/simulation as a function of
measured muon angle and kinetic energy.
Left plot, with world averaged $M_A^{eff}$~(=$\NUAMAcon~\mbox{GeV}/\mbox{c}^2$) and $\kappa$~(=$1.000$),
and right plot, with newly determined $M_A^{eff}$~(=$1.35~\mbox{GeV}/\mbox{c}^2$) and $\kappa$~(=$1.007$).
The ratio forms a 2D surface whose values are represented by the gray scale,
shown on the right. If the simulation modeled the data perfectly, the ratio
would be unity everywhere. Contours of constant $E_\nu$ and $Q^2$ are overlaid.}
\label{fig:2sub_2dim}
\end{figure}
\section{CCQE absolute cross section measurements}
\subsection{Flux-averaged double differential cross section}
\begin{figure}
\includegraphics[width=\columnwidth]{ddsigma.eps}
\caption{\label{fig:ddsigma}(Color online).
The flux-averaged double differential per nucleon ($n$) cross section for
the $\nu_\mu$ CCQE process. The dark bars indicate the measured values and
the surrounding lighter bands show the shape error. The overall normalization
(scale) error is 10.8\%.}
\end{figure}
Figure~\ref{fig:ddsigma} shows the flux-averaged double differential cross section,
$\frac{d^2\sigma}{dT_\mu\, d\cos\theta_\mu}$, for the $\nu_\mu$ CCQE process.
The flux-averaged total cross section, an integral of the double differential cross section
($-1<\cos\theta_\mu<+1$ and $0<T_\mu(\mbox{GeV})<\infty$), is $9.412\times 10^{-39}$~$\mbox{cm}^2$.
The total normalization error on this measurement is $10.8$\%.
The kinematic quantities, $T_\mu$ and $cos\theta_\mu$, have been corrected for detector
resolution effects only. This result is the most model-independent
measurement of this process possible with the MiniBooNE detector. No cuts on the recoil
nucleons are used to define this process. The neutrino flux is an absolute prediction and
was not adjusted based on measured processes in the MiniBooNE detector.
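Schematically, the flux-averaged total cross section quoted above is the bin-by-bin sum of the double differential values times the bin areas. A minimal sketch follows; the bin edges and the uniform table values are hypothetical placeholders, not the published MiniBooNE binning or data.

```python
import numpy as np

# Hypothetical binning and placeholder values -- NOT the published table.
t_edges = np.linspace(0.2, 2.0, 19)     # T_mu bin edges (GeV), 18 bins
c_edges = np.linspace(-1.0, 1.0, 21)    # cos(theta_mu) bin edges, 20 bins
d2s = np.full((len(t_edges) - 1, len(c_edges) - 1), 1.0e-39)  # cm^2/GeV per bin

# total = sum over bins of (double differential value) x (bin area)
dT = np.diff(t_edges)[:, None]
dC = np.diff(c_edges)[None, :]
sigma_tot = np.sum(d2s * dT * dC)
print(sigma_tot)  # cm^2 per nucleon
```

With the real table in place of the placeholder array, this sum reproduces the flux-averaged total cross section over the measured kinematic range.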
\subsection{Flux-averaged differential cross section}
Figure~\ref{fig:dsigma} shows the flux-averaged single differential cross section, $\frac{d\sigma}{dQ^2_{QE}}$.
The reconstructed 4-momentum transfer $Q^2_{QE}$ depends only upon the (unfolded)
quantities $T_\mu$ and $\cos\theta_\mu$.
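For reference, a sketch of the usual QE two-body reconstruction from the muon kinematics (target neutron at rest with binding energy $E_B$); the value $E_B\approx 0.034$~GeV is an illustrative assumption for carbon, not taken from this text.

```python
import math

# PDG-style masses in GeV; E_B is an assumed binding energy (~0.034 GeV for carbon).
M_N, M_P, M_MU, E_B = 0.93957, 0.93827, 0.10566, 0.034

def enu_qe(t_mu, cos_theta):
    """QE neutrino-energy estimator from muon kinetic energy and angle."""
    e = t_mu + M_MU
    p = math.sqrt(e * e - M_MU * M_MU)
    m_eff = M_N - E_B
    num = 2.0 * m_eff * e - (E_B * E_B - 2.0 * M_N * E_B
                             + M_MU * M_MU + M_N * M_N - M_P * M_P)
    return num / (2.0 * (m_eff - e + p * cos_theta))

def q2_qe(t_mu, cos_theta):
    """Reconstructed 4-momentum transfer squared (GeV^2)."""
    e = t_mu + M_MU
    p = math.sqrt(e * e - M_MU * M_MU)
    return -M_MU * M_MU + 2.0 * enu_qe(t_mu, cos_theta) * (e - p * cos_theta)

print(enu_qe(0.4, 0.8), q2_qe(0.4, 0.8))  # ~0.6 GeV, ~0.12 GeV^2
```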
\begin{figure}
\includegraphics[height=3.0in]{dsigma.eps}
\caption{\label{fig:dsigma}(Color online).
The flux-averaged single differential per nucleon ($n$) cross section
for the $\nu_\mu$ CCQE process. The measured values are shown as points
with the shape error as shaded bars. Predictions from the $\textsc{nuance}$ RFG
model with different values for the model parameters are shown as histograms.}
\end{figure}
In addition to the experimental result, Figure~\ref{fig:dsigma} also shows
the prediction for the CCQE process from the $\textsc{nuance}$ simulation with three
different variations of parameters in the underlying RFG model. The predictions
are flux-averaged and absolutely normalized.
The RFG model is plotted with both the world-averaged CCQE parameters
($M_A=\NUAMAcon~\mbox{GeV}$, $\kappa=1.000$) and with the CCQE parameters extracted
from this analysis ($M_A=1.35~\mbox{GeV}$, $\kappa=1.007$). The model
using the world-averaged CCQE parameters underpredicts the measured values
significantly (by $\approx 30$\%). The model using the CCQE parameters extracted
from the shape fit to the MiniBooNE CCQE data is within $\approx 10$\% of the data,
consistent within the normalization uncertainty of $\approx 10$\%.
The prediction with the CCQE parameters from
this analysis scaled by 1.10 is also plotted and is in good agreement with the data.
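The direction of the $M_A$ effect can be illustrated with the dipole ansatz for the axial form factor that enters the RFG model, $F_A(Q^2)=g_A/(1+Q^2/M_A^2)^2$. The sketch below only indicates the trend; the full CCQE cross section is not simply proportional to $F_A^2$, so this is a qualitative illustration, not the model calculation.

```python
import numpy as np

G_A = 1.2671  # axial coupling g_A

def f_a(q2, m_a):
    """Dipole ansatz for the axial form factor."""
    return G_A / (1.0 + q2 / m_a**2) ** 2

q2 = np.linspace(0.0, 1.0, 6)                 # GeV^2
ratio = (f_a(q2, 1.35) / f_a(q2, 1.03)) ** 2  # F_A enters the rate squared
print(ratio)  # 1 at Q^2 = 0, growing with Q^2
```

A larger $M_A$ thus hardens the $Q^2$ spectrum and raises the integrated rate, consistent with the shape-fit preference described above.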
\subsection{Flux-unfolded total cross section}
\begin{figure}
\includegraphics[height=3.0in]{sigma.eps}
\caption{\label{fig:sigma}(Color online).
The flux-unfolded total per nucleon ($n$) cross section, with total
errors and bin widths indicated by the data points. In (a) shape errors
are shown as shaded boxes. In (b), a larger energy
range is shown along with results from the LSND~\cite{LSNDxs}
and NOMAD~\cite{NOMAD} experiments. Predictions from the $\textsc{nuance}$ simulation
with two different RFG parameter variations are shown in both plots.}
\end{figure}
The flux-unfolded total cross section ($\sigma[E_\nu^{QE,RFG}]$)
as a function of estimated neutrino energy $E_\nu^{QE,RFG}$ is shown in Figure~\ref{fig:sigma}.
The quantity $E_\nu^{QE,RFG}$ is a model-dependent estimate of the
neutrino energy obtained after correcting for both detector and nuclear model
resolution effects. These results depend on the details of the nuclear model
used for the calculation. The dependence is only weak in the peak of the flux
distribution but becomes strong at $E_\nu<0.5$~GeV and $E_\nu>1.0$~GeV,
in the ``tails'' of the flux distribution.
In Figure~\ref{fig:sigma} data are compared with the $\textsc{nuance}$ implementation
of the RFG model with the world averaged parameter values ($M_A^{eff}=\NUAMAcon~\mbox{GeV}$, $\kappa=1.000$),
and the parameters extracted from this work ($M_A^{eff}=1.35~\mbox{GeV}$, $\kappa=1.007$).
These are absolute predictions from the model --- they are not scaled in any way.
The measurement is $\sim$20\% higher than the RFG model prediction with world
average parameter values at the flux peak ($700-800~\mbox{MeV}$). The prediction
with the RFG parameter values extracted from the {\em shape-only} fit to
MiniBooNE CCQE data reproduces the data significantly better,
to within $1\sigma$ for every point over the entire measured energy range.
Figure~\ref{fig:sigma}(b) shows the CCQE results from the LSND~\cite{LSNDxs}
and NOMAD~\cite{NOMAD} experiments. It is interesting to note that NOMAD results
are better described with the world-average $M_A^{eff}$ and $\kappa$ values.
At this time, a solution to this growing mystery is not evident.
Despite substantial efforts to model this process~\cite{new-model},
no model seems able to reproduce both
(1) the large observed $M_A^{eff}$ and (2) the large observed total cross section,
while keeping the ``bare'' $M_A=\NUAMAcon~\mbox{GeV}$ (the world averaged value).
Model-independent cross section results from
the MINOS near detector~\cite{MINOS}, running with $E_\nu\sim$3~GeV
and near-future experiments
such as MINERvA~\cite{MINERvA}, running with $2<E_\nu<20$~GeV could
help shed further light on this subject.
\section{Introduction}
Cosmological perturbation theory plays a central role in confronting theories
of the early universe with observations. The increasing
accuracy of the observations, however,
has made it necessary to extend the theory from linear to
second order (\emph{i.e.\,}nonlinear)
perturbations, which presents various technical challenges.
This paper is a contribution
to this effort, focussing on long wavelength perturbations of
a flat Friedmann-Lema\^{i}tre (FL) universe whose
stress-energy content is a single
minimally coupled scalar field with an arbitrary potential.
For this class of models we derive the general solution of the
perturbed Einstein equations at second order
when the perturbations are in the super-horizon regime, including
both the growing and decaying modes.
This paper relies on three of our previous papers on
cosmological perturbation theory
which we shall refer to as UW1~\cite{uggwai19a}
(a unified and simplified formulation of change of gauge
formulas at second order), UW2~\cite{uggwai18}
(five ready-to-use systems of governing equations for second order perturbations)
and UW3~\cite{uggwai19b} (conserved quantities and the
general solution of the perturbed Einstein
equations for adiabatic long wavelength perturbations).
Our method is to apply the general solution in the total matter gauge given in
UW3~\cite{uggwai19b} to the case of a scalar field and then transform to the uniform
curvature gauge.\footnote{Experience has shown that for these models the uniform
curvature gauge is the best choice to represent the perturbations of the scalar field.
See, for example, Hwang (1994)~\cite{hwa94b} (remarks in the Discussion), Liddle and
Lyth (2000)~\cite{lidlyt00} (page 93).}
The scalar field perturbations at first and second order are
algebraically related to the metric perturbations, and we show that in the uniform
curvature gauge, denoted by a subscript $_\mathrm{c}$, they have the following form:
\begin{equation} \label{varphi_c.solution}
{}^{(1)}\!{\varphi}_{\mathrm c}\approx
(\varphi_0'){}^{(1)}\!C, \qquad{}^{(2)}\!{\varphi}_{\mathrm c}\approx
(\varphi_0'){}^{(2)}\!C +
(\varphi_0''){}^{(1)}\!C^2,
\end{equation}
where $\varphi_0 = \varphi_0(N)$ is the background scalar field and $'$
denotes the derivative\footnote{We note that in cosmological perturbation
theory a $'$, in contrast to the present paper, is often used to
denote differentiation with respect to conformal time.}
with respect to $e$-fold time $N=\ln(a/a_{init})$.
The background scalar field is determined
by the background Klein-Gordon equation:
\begin{equation} \label{KG_N}
\varphi_0'' +\sfrac12(6-(\varphi_0')^2)\left(\varphi_0' +
{V_{,\varphi}}/{V}\right)=0,
\end{equation}
where $V_{,\varphi}$ is the derivative of the
potential $V(\varphi_0)$ with respect to $\varphi_0$.
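Equation~\eqref{KG_N} is a regular second order ODE in $N$ and is straightforward to integrate numerically. The sketch below assumes a quadratic potential $V=\sfrac12 m^2\varphi_0^2$, for which $V_{,\varphi}/V=2/\varphi_0$ and the mass drops out of the equation; the initial data and reduced Planck units are illustrative assumptions, not taken from the text.

```python
from scipy.integrate import solve_ivp

# Background Klein-Gordon equation in e-fold time N:
#   phi'' + (1/2)(6 - phi'^2)(phi' + V_,phi/V) = 0,
# here with V = (1/2) m^2 phi^2  =>  V_,phi/V = 2/phi  (m drops out).
def rhs(n, y):
    phi, dphi = y
    ddphi = -0.5 * (6.0 - dphi**2) * (dphi + 2.0 / phi)
    return [dphi, ddphi]

# Illustrative initial data: phi = 15, phi' = 0 (reduced Planck units).
sol = solve_ivp(rhs, (0.0, 10.0), [15.0, 0.0], rtol=1e-8, atol=1e-10)
phi, dphi = sol.y[:, -1]
print(phi, dphi)  # relaxes quickly to the slow-roll attractor phi' ~ -2/phi
```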
The arbitrary spatial functions ${}^{(1)}\!C$ and ${}^{(2)}\!C$
in equation~\eqref{varphi_c.solution} are
related to the comoving curvature perturbation
${}^{(r)}\!{\cal R}$, $r=1,2$, according to
\begin{equation}
{}^{(1)}\!C= {}^{(1)}\!{\cal R}, \qquad
{}^{(2)}\!C= {}^{(2)}\!{\cal R} + 2{}^{(1)}\!{\cal R}^2.
\end{equation}
Note that we have not imposed the slow-roll approximation
in obtaining this solution, and have not had to solve the perturbed
Klein-Gordon equation. As a by-product we obtain a new
conserved quantity for long wavelength perturbations of
a single scalar field at second order.
The outline of the paper is as follows.
In section~\ref{long.wavelength} we present the main results,
first the new conserved quantity at second order for a perturbed scalar field,
and then give explicit expressions for the scalar field perturbations
up to second order. In section~\ref{KG.equation} we derive a new form
of the perturbed Klein-Gordon equation which makes explicit the
existence of the conserved quantities at first and second order.
In appendix~\ref{scalar.field} we introduce the necessary background
material concerning scalar field perturbations.
\section{Long wavelength perturbations \label{long.wavelength}}
We consider second order scalar perturbations of a flat
FL universe with a single minimally coupled scalar field as matter.
Since we are going to specialize the general
framework developed in UW2~\cite{uggwai18} to this
situation we begin by briefly introducing the notation used in~\cite{uggwai18},
referring to that paper for further details. We
write the perturbed metric in the form\footnote{The scalar perturbations at
first order will generate vector and
tensor perturbations at second order, but we do not
give these perturbation variables since we will not consider these
modes in this paper.}
\begin{equation} \label{pert.metric}
ds^2 = a^2\left(-(1+2\phi) d\eta^2 + {\bf D}_i B\,d\eta dx^i +
(1-2\psi)\delta_{ij} dx^i dx^j \right),
\end{equation}
where $\eta$ is conformal time, the $x^i$ are Cartesian background coordinates
and ${\bf D}_i = \partial/\partial x^i$. The background geometry is described by the
scale factor $a$ which determines the conformal Hubble scalar ${\cal H}=a'/a$,
where in this specific situation $'$ denotes differentiation with respect to $\eta$.
By expanding the functions $\phi, B, \psi$ in a perturbation series\footnote{A
perturbation series for a variable $f$ is a Taylor series in a perturbation
parameter $\epsilon$, of the form
$f= f_0 +\epsilon\,{}^{(1)}\!f + \sfrac12 \epsilon^2\,{}^{(2)}\!f + \dots.$}
we obtain the following metric perturbations up to second order:
${}^{(r)}\!\phi, {\cal H}{}^{(r)}\!B, {}^{(r)}\!\psi$, $r=1,2,$
where the factor of ${\cal H}$ ensures that the $B$-perturbation is dimensionless, see
UW1~\cite{uggwai19a} and UW2~\cite{uggwai18}.
We use a perfect fluid stress-energy tensor~\eqref{pf} to describe
the matter-energy content, with the matter perturbations described by the variables
${}^{(r)}\!\mbox{\boldmath $\delta$},\, {\cal H}{}^{(r)}\!V, {}^{(r)}\!\Gamma$, $r=1,2,$
where ${}^{(r)}\!\mbox{\boldmath $\delta$} = {}^{(r)}\!\rho/(\rho_0+p_0)$ is the density
perturbation, ${}^{(r)}\!V$ is the scalar velocity perturbation,
defined by writing\footnote{UW2~\cite{uggwai18}, section IIC. We note
that $V$ is the customary notation for the potential of a
scalar field, which we will also use in this paper. The context will eliminate
possible confusion. }
$au_i={\bf D}_iV$, and
${}^{(r)}\!\Gamma$ is the non-adiabatic pressure perturbation.
The scalar field will be denoted by $\varphi$ with background field $\varphi_0$
and perturbations ${}^{(r)}\!\varphi$, $r=1,2$, and the potential
will be denoted by $V(\varphi_0)$. As is well known,
the stress-energy tensor of a minimally coupled scalar field~\eqref{sfT_ab}
in a cosmological setting can be written in the form of a perfect fluid,
which means that we can apply the above framework.
Using the relation between the two stress-energy tensors
the matter perturbations can be expressed in terms of
the scalar field perturbations and the metric perturbations.
We give the technical details in Appendix~\ref{app:scalar}.
In this paper at the outset
we need the relation between ${\cal H}{}^{(r)}\!V$
and ${}^{(r)}\!\varphi$ given by equations~\eqref{scalar1.3} and~\eqref{scalar2.3}
and the simplified expressions for ${}^{(r)}\!\Gamma$
given by~\eqref{gamma_sf}.
In this section, since we are considering long wavelength
perturbations, we will rely heavily on the results of UW3~\cite{uggwai19b}.
\subsection{A new conserved quantity\label{cons.quantity}}
When analyzing perturbed inflationary universes
two useful and complementary sets of variables are
the gauge invariants ${}^{(r)}\!{\varphi}_{\mathrm c}$, $r=1,2$,
the perturbations of the scalar field in the uniform curvature gauge
which are sometimes referred to as the \emph{Sasaki-Mukhanov
variables}\footnote{See for example Malik (2005)~\cite{mal05}, equations (3.14)
and (3.21).} and the gauge invariants ${}^{(r)}\!{\psi}_{sc},\,r=1,2$,
the curvature perturbations in the uniform field gauge.\footnote{See for
example, Maldacena (2003)~\cite{mal03}, section 3, in which both
descriptions are used and compared.}
Our analysis in this section will rely to a large extent on these variables.
At the outset we note that we will primarily use $e$-fold time $N$
as the time variable. We will write $\partial_N f= \partial f/\partial N$
for brevity, and when $f$ is a background quantity we will use a $'$
as in the introduction,
for example $\partial_N \varphi_0\equiv\varphi_0'$.
We will also use the factor $l$, given by
\begin{equation} \label{def_lambda}
l := -(\varphi_0')^{-1},
\end{equation}
as a shorthand notation to represent the frequent divisions by $\varphi_0'$
that occur in the equations involved in
the study of perturbations of scalar fields.
Our first goal is to derive a general relation between
the scalar field perturbations ${}^{(r)}\!{\varphi},\,r=1,2,$
and the metric perturbations, specifically the curvature perturbations
${}^{(r)}\!\psi,\,r=1,2$. We begin by performing a change of gauge from the uniform
field gauge, defined by ${}^{(r)}\!{\varphi}=0,\,r=1,2$, to an arbitrary gauge
using the formula (42e) in UW1~\cite{uggwai19a} with $\Box=\psi$:
\begin{subequations} \label{gauge.change}
\begin{align}
{}^{(1)}\!{\psi}_{\mathrm sc} &= {}^{(1)}\!{\psi} - l{}^{(1)}\!{\varphi},\\
{}^{(2)}\!\hat{\psi}_{\mathrm sc} &= {}^{(2)}\!\hat{\psi} -
l{}^{(2)}\!\hat{\varphi} +
2l{}^{(1)}\!{\varphi}\,\partial_N {}^{(1)}\!\psi_{\mathrm sc} -
{\mathbb D}_2({}^{(1)}\!B_{\mathrm sc}) + {\mathbb D}_2({}^{(1)}\!B),\label{psi-varphi2}
\end{align}
\end{subequations}
where ${\mathbb D}_2$, the so-called Newtonian spatial operator,\footnote{See
UW1~\cite{uggwai19a}, Appendix B. The specific form
of ${\mathbb D}_2(\bullet)$ does not concern us here.} is a spatial differential operator
of order $2$ in ${\bf D}_i$ and hence negligible in the super-horizon regime.
The hatted variables are given by (see UW1~\cite{uggwai19a}, equations (37)):
\begin{subequations}
\begin{align}
{}^{(2)}\!{\hat \psi} :&={}^{(2)}\! \psi + 2\,{}^{(1)}\! \psi^2, \\
l {}^{(2)}\!{\hat{\varphi}} :&=l {}^{(2)}\!{\varphi} +
\sfrac32(w_{\varphi} -c_{\varphi}^2)(l{}^{(1)}\!{\varphi})^2,
\label{hat.varphi}
\end{align}
\end{subequations}
where $w_{\varphi}-c_{\varphi}^2$ is related to $\varphi_0$
and its derivatives by equation~\eqref{w-c^2} in appendix~\ref{background}.
It is helpful to use the fact that the uniform field gauge
is equivalent to the total matter gauge,
defined by ${}^{(r)}\!V=0$, $r=1,2$,
since equations~\eqref{scalar1.3} and~\eqref{scalar2.3}
show that ${}^{(r)}\!\varphi=0 \Leftrightarrow {}^{(r)}\!V=0$, $r=1,2$.
This means that the various gauge invariants in the
two gauges are equal. For example, for the curvature perturbation we have
\begin{equation} \label{psi_sc=psi_v}
{}^{(r)}\!\psi_{\mathrm {sc}} = {}^{(r)}\!\psi_{\mathrm v},\qquad r=1,2.
\end{equation}
By solving for the $\varphi$-perturbation in equations~\eqref{gauge.change}
and using equation~\eqref{psi_sc=psi_v} we obtain
\begin{subequations} \label{varphi-psi}
\begin{align}
l{}^{(1)}\!{\varphi} &= {}^{(1)}\!{\psi} - {}^{(1)}\!{\psi}_{\mathrm v},\label{varphi.1}\\
l{}^{(2)}\!\hat{\varphi} &= {}^{(2)}\!\hat{\psi} -
{}^{(2)}\!\hat{\psi}_{\mathrm v} +
2({}^{(1)}\!{\psi} - {}^{(1)}\!{\psi}_{\mathrm v})\partial_N {}^{(1)}\!\psi_{\mathrm v} -
{\mathbb D}_2({}^{(1)}\!B_{\mathrm v}) + {\mathbb D}_2({}^{(1)}\!B),\label{varphi.2}
\end{align}
\end{subequations}
which determine the scalar field perturbations in any
gauge in terms of the metric perturbations.
At this stage we need two general properties of
long wavelength perturbations:\footnote{These results are part of the folklore
of perturbation theory, but derivations at second order are not easy to find. We
have given simple derivations in UW3~\cite{uggwai19b}, equations (20) and (26a).}
\begin{itemize}
\item[i)] The density perturbations in the total matter gauge satisfy,
\begin{equation} \label{delta_v.zero.new}
{}^{(1)}\! {\mbox{\boldmath $\delta$}}_{\mathrm v}\approx 0, \qquad {}^{(2)}\! {\mbox{\boldmath $\delta$}}_{\mathrm v}\approx 0.
\end{equation}
\item[ii)] If the perturbations are adiabatic the curvature
perturbations in the total matter gauge satisfy,
\begin{equation} \label{deriv.psi_v.new}
\partial_N{}^{(1)}\! \psi_{\mathrm v} \approx 0, \qquad
\partial_N{}^{(2)}\! \psi_{\mathrm v} \approx 0.
\end{equation}
\end{itemize}
In addition we need to determine the non-adiabatic pressure perturbations
for a scalar field. In Appendix~\ref{app:scalar} we have shown
that the ${}^{(r)}\!\Gamma, r=1,2$, depend linearly on
${}^{(r)}\!\mbox{\boldmath $\delta$}_{\mathrm v}, r=1,2$, with source terms at second order
depending on ${}^{(1)}\!\mbox{\boldmath $\delta$}_{\mathrm v}$, as in equation~\eqref{gamma_sf}.
It thus follows from
equation~\eqref{delta_v.zero.new} that
\begin{equation} \label{gamma_0.new}
{}^{(1)}\!\Gamma\approx 0, \qquad
{}^{(2)}\!\Gamma\approx 0,
\end{equation}
\emph{i.e.}, long wavelength scalar field perturbations are adiabatic.\footnote
{This result has been given by Vernizzi (2005)~\cite{ver05}.}
Thus~\eqref{deriv.psi_v.new} holds,
which implies that for long wavelength perturbations
equations~\eqref{varphi-psi} reduce to
\begin{equation} \label{varphi-psi.long}
l{}^{(1)}\!{\varphi} = {}^{(1)}\!{\psi} - {}^{(1)}\!{\psi}_{\mathrm v}, \qquad
l{}^{(2)}\!\hat{\varphi} \approx {}^{(2)}\!\hat{\psi} -
{}^{(2)}\!\hat{\psi}_{\mathrm v}.
\end{equation}
In other words, when using the hatted variables the first order
relation generalizes to second order.
We now choose the arbitrary gauge in these equations to
be the uniform curvature gauge, which gives
\begin{equation} \label{varphi-psi.long.uc}
l{}^{(1)}\!{\varphi}_{\mathrm c} = - {}^{(1)}\!{\psi}_{\mathrm v}, \qquad
l{}^{(2)}\!\hat{\varphi}_{\mathrm c} \approx -
{}^{(2)}\!\hat{\psi}_{\mathrm v}.
\end{equation}
Since~\eqref{deriv.psi_v.new} holds the first equation
in~\eqref{varphi-psi.long.uc}
gives the known result\footnote{Sasaki
(1986)~\cite{sas86} (see equation (2.33)) introduced
the quantity ${}^{(1)}\!{\psi}_{\mathrm p} - l{}^{(1)}\!{\varphi}_{\mathrm p}$
in our notation, and stated that it is constant on large scales provided that the entropy and spatial
anisotropy perturbations are negligible.}
that at first order $l{}^{(1)}\!{\varphi}_{\mathrm c}$ is a
conserved quantity while
the second equation
gives the new result that at second order
\begin{equation} \label{hat.varphi.new}
l{}^{(2)}\!\hat{\varphi}_{\mathrm c}:=
l {}^{(2)}\!{\varphi}_{\mathrm c} +
\sfrac32(w_{\varphi} -c_{\varphi}^2)(l{}^{(1)}\!{\varphi}_{\mathrm c})^2,
\end{equation}
is a conserved quantity for long wavelength perturbations of a single field
inflationary universe. We note that the relation~\eqref{hat.varphi.new} is obtained
by choosing the uniform curvature gauge in~\eqref{hat.varphi}.
\subsection{Explicit form of the scalar field perturbations}
In a recent paper UW3~\cite{uggwai19b} we gave
the general solution of the perturbed Einstein
equations at second order for long wavelength
adiabatic perturbations of a FL universe, with stress-energy
tensor of the perfect fluid form~\eqref{pf}.
We solved the equations for the metric
perturbations in the total matter gauge, giving the general solution,
\emph {i.e.} including the decaying mode.
At first order the solution is
\begin{subequations} \label{tm_sh1}
\begin{equation} \label{comov_sh1.new}
{}^{(1)}\!\phi_{\mathrm v}\approx0, \qquad
{}^{(1)}\!\psi_{\mathrm v}\approx {}^{(1)}\!C,
\end{equation}
\begin{equation}
{\cal H}{}^{(1)}\!B_{\mathrm v} \approx (1-g(a)){}^{(1)}\!C
+({\cal H}/a^2) {}^{(1)}\!C_{*},
\label{comov_sh1_B.new}
\end{equation}
\end{subequations}
where the perturbation evolution function\footnote{We refer to UW3~\cite{uggwai19b}
for this name, and for its history and properties (see Appendix C in~\cite{uggwai19b}).}
$g(a)$ is defined by
\begin{equation} \label{def_g_simple}
g(a) = 1 - \frac{{\cal H}(a)}{a^2} \int_0^a \frac{\bar a}{{\cal H}(\bar a)}d{\bar a}.
\end{equation}
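For a background with constant equation of state the integral in~\eqref{def_g_simple} can be evaluated in closed form; for example, in a matter-dominated era ${\cal H}\propto a^{-1/2}$ and $g=3/5$ for all $a$. A short numerical check (the normalization of ${\cal H}$ is arbitrary and cancels in $g$):

```python
from scipy.integrate import quad

# Matter-dominated toy background: H^2 ~ a^{-3}, so cH = a*H ~ a^{-1/2}.
def cH(a):
    return a ** -0.5

# g(a) = 1 - (cH(a)/a^2) * integral_0^a  abar/cH(abar) d(abar)
def g(a):
    integral, _ = quad(lambda x: x / cH(x), 0.0, a)
    return 1.0 - cH(a) / a**2 * integral

print(g(1.0), g(10.0))  # -> 3/5 in a matter era, independent of a
```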
At second order the solution is:\footnote{The spatial differential
operator ${\mathbb D}_0$ is defined by
${\mathbb D}_0(C) := {\cal S}^{ij}({\bf D}_iC)({\bf D}_jC)$,
where the scalar mode extraction operator ${\cal S}^{ij}$
is defined by
${\cal S}^{ij} = \sfrac32({\bf D}^{-2})^2{\bf D}^{ij}$.
Here ${\bf D}^{-2}$ is the inverse spatial Laplacian and
${\bf D}_{ij} := {\bf D}_{(i}{\bf D}_{j)} - \sfrac13 \delta_{ij}{\bf D}^2$. Spatial
indices are raised with $\delta^{ij}$.}
\begin{subequations} \label{tm_sh2}
\begin{equation} \label{comov_sh2.new}
{}^{(2)}\!\phi_{\mathrm v}\approx0, \qquad
{}^{(2)}\!\hat{\psi}_{\mathrm v}\approx {}^{(2)}\!C,
\end{equation}
\begin{equation}
{\cal H}{}^{(2)}\!B_{\mathrm v} \approx\,(1-g(a))\left({}^{(2)}\!C -
2{\mathbb D}_0( {}^{(1)}\!C)\right)+({\cal H}/a^2) {}^{(2)}\!C_{*}. \label{B_v2.new}
\end{equation}
\end{subequations}
We identify the spatial functions ${}^{(1)}\!C(x^i)$ and
${}^{(2)}\!C(x^i)$ as the conserved
quantities at first and second order, while
${}^{(1)}\!C_{*}(x^i)$ and
${}^{(2)}\!C_{*}(x^i)$ describe the decaying mode.
If we apply this solution in the case of a scalar field the Hubble scalar ${\cal H}(a)$
is determined explicitly in terms of
the background scalar field and the scalar field
potential by equation~\eqref{H.ito.phi}, which
then determines the function $g(a)$ through~\eqref{def_g_simple}.
In the total matter gauge it follows from~\eqref{varphi-psi.long}
that the scalar field perturbations
are zero (${}^{(r)}\!\varphi_{\mathrm v}=0,\,r=1,2$). In effect, in this gauge
the perturbations of the scalar field are hidden in the metric perturbations.
On the other hand, in the uniform curvature gauge it follows
from~\eqref{varphi-psi.long.uc} that the scalar field perturbations
are given by
\begin{equation} \label{phi_c.conserved}
l{}^{(1)}\!\varphi_{\mathrm c}=-{}^{(1)}\!\psi_{\mathrm v}\approx-{}^{(1)}\!C , \qquad
l{}^{(2)}\!\hat{\varphi}_{\mathrm c}\approx -{}^{(2)}\!\hat{\psi}_{\mathrm v}
\approx-{}^{(2)}\!C,
\end{equation}
the last step following from~\eqref{comov_sh1.new} and~\eqref{comov_sh2.new}.
To obtain an explicit expression for ${}^{(2)}\! {\varphi}_{\mathrm c}$ we
use the definition~\eqref{hat.varphi.new} of ${}^{(2)}\!\hat {\varphi}_{\mathrm c}$
and $l=-1/(\varphi_0')$ which yields:
\begin{equation} \label{varphi_2.temp.new}
{}^{(1)}\!{\varphi}_{\mathrm c}\approx
\varphi_0'{}^{(1)}\!C, \qquad
{}^{(2)}\!{\varphi}_{\mathrm c}\approx
\varphi_0'\left({}^{(2)}\!C + \sfrac32(w_{\varphi}-c_{\varphi}^2){}^{(1)}\!C^2\right),
\end{equation}
where $w_{\varphi}-c_{\varphi}^2$ is given
by equation~\eqref{w-c^2} in appendix~\ref{background}, which we also give here:
\begin{equation} \label{w-c^2.repeat}
w_{\varphi}-c_{\varphi}^2 =
\frac23\left(\frac{\varphi_0''}{\varphi_0'}\right).
\end{equation}
Substituting this result
into~\eqref{varphi_2.temp.new} yields the expression~\eqref{varphi_c.solution}
for ${}^{(2)}\!{\varphi}_{\mathrm c}$ in the introduction.
The scalar field $\varphi_0(N)$ is determined by the background
Klein-Gordon equation~\eqref{KG_N}.
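The conservation of the hatted combination can be checked symbolically: substituting the solution~\eqref{varphi_2.temp.new} together with~\eqref{w-c^2.repeat} into~\eqref{hat.varphi.new} collapses it to the constant $-{}^{(2)}C$ for an arbitrary background $\varphi_0(N)$. A minimal SymPy sketch:

```python
import sympy as sp

N = sp.symbols('N')
C1, C2 = sp.symbols('C1 C2')    # time-independent spatial constants
phi = sp.Function('phi')(N)     # arbitrary background field phi_0(N)
dphi = phi.diff(N)

wc = sp.Rational(2, 3) * phi.diff(N, 2) / dphi   # w_phi - c_phi^2
l = -1 / dphi                                    # l = -1/phi_0'

varphi1 = dphi * C1                                      # first order solution
varphi2 = dphi * (C2 + sp.Rational(3, 2) * wc * C1**2)   # second order solution

# hatted second order variable: l*varphi2 + (3/2)(w - c^2)(l*varphi1)^2
hat = l * varphi2 + sp.Rational(3, 2) * wc * (l * varphi1) ** 2
print(sp.simplify(hat))  # -> -C2, manifestly N-independent
```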
We now calculate the scalar field perturbation in the
Poisson gauge by choosing the Poisson gauge in
equation~\eqref{varphi-psi.long} which yields:
\begin{equation} \label{varphi-psi.long.poisson}
l{}^{(1)}\!{\varphi}_{\mathrm p} =
{}^{(1)}\!{\psi}_{\mathrm p} - {}^{(1)}\!{\psi}_{\mathrm v}, \qquad
l{}^{(2)}\!\hat{\varphi}_{\mathrm p} \approx {}^{(2)}\!\hat{\psi}_{\mathrm p} -
{}^{(2)}\!\hat{\psi}_{\mathrm v}.
\end{equation}
In our previous paper UW3~\cite{uggwai19b} we used
a change of gauge formula to calculate
the curvature perturbation $\psi_{\mathrm p}$ determined by the
solution~\eqref{tm_sh1} and~\eqref{tm_sh2}.
The results are:
\begin{subequations}
\begin{align}
\psi_{\mathrm p}& \approx g C -({\cal H}/a^2) C_{*},\\
{}^{(2)}\! {\hat\psi}_{\mathrm p} &\approx \,g{}^{(2)}\!C +
(1-g)[\left((1+q)(1-g) - g\right) C^2 +
4g{\mathbb D}_0(C)],
\end{align}
\end{subequations}
where we note that for brevity we have not included the decaying mode at
second order. We substitute these expressions into~\eqref{varphi-psi.long.poisson}
and use~\eqref{comov_sh1.new} and~\eqref{comov_sh2.new}.
The final expressions for the scalar field perturbations, using the
unhatted variable and excluding the decaying mode at
second order, are as follows:\footnote
{The decaying mode contribution to $l {}^{(2)}\!{\varphi_{\mathrm p}}$ has the
leading order term $-({\cal H}/a^2){}^{(2)}\!C_{*}$, where
${}^{(2)}\!C_{*}$ is another arbitrary spatial function, together with quadratic
source terms that are a linear combination of the following spatial functions,
$C_{*}^2, {\mathbb D}_0(C_{*}), CC_{*}, {\cal S}^i(C{\bf D}_iC_{*}),
{\cal S}^{ij}({\bf D}_iC{\bf D}_jC_{*})$, with time dependent coefficients.
The scalar mode extraction operator ${\cal S}^i$ is defined by
${\cal S}^i={\bf D}^{-2}{\bf D}^i.$}
\begin{subequations}
\begin{align}
l {}^{(1)}\!{\varphi_{\mathrm p}} &\approx \,-(1-g){}^{(1)}\!C -
({\cal H}/a^2){}^{(1)}\!C_{*},\\
l {}^{(2)}\!{\varphi_{\mathrm p}} &\approx
(1-g)\left[-{}^{(2)}\!C +
\left(\sfrac32 (1+c_{\varphi}^2)(1-g) -g\right) C^2 +4g\,{\mathbb D}_0(C)\right].
\end{align}
\end{subequations}
The linear solution is well known but is usually found by
solving the Bardeen equation or the perturbed Klein-Gordon equation.\footnote{See,
for example Mukhanov (2005)~\cite{muk05}, equation (8.68), or
Mukhanov (1985)~\cite{muk85}, equation (14),
which reads
${}^{(1)}\!{\varphi}_{\mathrm p}=A\dot\varphi_0 a^{-1}\int a dt, $
where $\dot \varphi_0=d\varphi_0/dt=-H/l.$ The expression
for $g(t)$ in UW3~\cite{uggwai19b} (see section 7) relates Mukhanov's result to ours.}
The second order solution is new.
Note that the decaying mode appears in the scalar field perturbations in the
Poisson gauge, in contrast to the situation in the uniform
curvature gauge in equation~\eqref{varphi_2.temp.new}.\footnote
{This property of the decaying mode at linear order has been pointed out previously, see for example Hwang (1994)~\cite{hwa94b}, table 1. It can also be inferred from
Brandenberger and Finelli (2001)~\cite{brafin01}. }
We end this section with a comment on multiple scalar fields.
We note that perturbations of universes with multiple scalar fields
(see for example, Malik and Wands (2005)~\cite{malwan05}, for linear perturbations
and Malik (2005)~\cite{mal05} for second order perturbations) have been studied
using the uniform curvature gauge. However, since large scale perturbations of multiple scalar fields are not adiabatic, our explicit large scale solution for a single scalar field
cannot be generalized to multiple fields.
\section{The perturbed Klein-Gordon equation\label{KG.equation}}
Although the Klein-Gordon equation plays a central role
in governing scalar field perturbations,
we have obtained simple expressions for
these perturbations to second order on super-horizon scale
and have obtained conserved quantities,
using only some of the perturbed Einstein equations.
On inspecting the form of the perturbed Klein-Gordon equation at first and
second order as given in the literature\footnote{
See, for example,
Huston and Malik (2009)~\cite{husmal09}, equations
(2.13) and (2.15), in Fourier space (${\bf D}^2\rightarrow-k^2$). }
it is clear that our results are unexpected, since it is not obvious that
the equation admits conserved quantities or that the explicit expressions
for ${}^{(1)}\!{\varphi}_{\mathrm c}$ and ${}^{(2)}\!{\varphi}_{\mathrm c},$
as given by equation~\eqref{varphi_c.solution}, are in fact solutions
of the equation on super-horizon scale.
There are two standard ways to derive the perturbed Klein-Gordon
equation. The first is to use the perturbed energy conservation equation
to obtain an expression for the second time
derivative of ${\varphi}_{\mathrm c}$ and then use
(some of) the perturbed Einstein equations to express the metric perturbations
that appear in this equation in terms of the scalar field perturbations. The second is to use
the variation of the Einstein action coupled to the scalar field. We refer to
Malik \emph{et al} (2008)~\cite{maletal08} for a comparison of the two approaches.
In this section we give a new way of deriving the
perturbed Klein-Gordon equation, leading to a simpler form of
the equation with $l=-1/(\varphi_0')$
acting as a scale factor for the perturbation, which
by inspection admits a conserved quantity at first and second order.
We begin with the perturbed Einstein equations
in the uniform curvature gauge as given in our earlier
paper UW2~\cite{uggwai18}, see section IVB.1.
These equations, which determine the
metric perturbations $\phi_\mathrm{c}$ and $B_\mathrm{c}$ at first order,
read
\begin{subequations}\label{ucg_gov1}
\begin{align}
\partial_N((1+q)^{-1} {}^{(1)}\!{\phi_\mathrm{c}}) &=
-c_s^2(1+q)^{-1}{\cal H}^{-2}{\bf D}^2({\cal H}{}^{(1)}\!{B}_{\mathrm c})
+ {}^{(1)}\Gamma, \label{ucg_gov1.1} \\
\partial_N(a^2\,{}^{(1)}\!B_{\mathrm c} )&= -a^2 {\cal H}^ {-1}{}^{(1)}\!{\phi_\mathrm{c}}. \label{ucg_gov1.2}
\end{align}
\end{subequations}
Here $q$ denotes the background deceleration parameter which is
defined by
\begin{equation} \label{q.definition}
1+q=-H'/H \,\Longleftrightarrow \,q=-{\cal H}'/{\cal H},
\end{equation}
where $H$ is the background Hubble scalar, ${\cal H}=aH$ and $'$ denotes
differentiation with respect to $N$.
The velocity perturbation and density perturbation are given by
\begin{subequations} \label{sc.field.relations}
\begin{align}
{\cal H}{}^{(1)}\!V_{\mathrm c} &= -(1+q)^{-1} {}^{(1)}\!\phi_\mathrm{c},\\
{}^{(1)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v} &= -
(1+q)^{-1}{\cal H}^{-2}{\bf D}^2 ({\cal H}{}^{(1)}\!{B}_{\mathrm c}).
\end{align}
In a universe with a single scalar field
equations~\eqref{scalar1.3}
and~\eqref{gamma_sf1} in appendix~\ref{app:scalar} give
\begin{equation} \label{infl.uni}
{\cal H}{}^{(1)}\!V_{\mathrm c} = l{}^{(1)}\!\varphi_{\mathrm c}, \qquad
{}^{(1)}\!\Gamma=(1-c_s^2){}^{(1)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v},
\end{equation}
\end{subequations}
where $l=-1/(\varphi_0')$.
Using equations~\eqref{sc.field.relations}, which show that
\begin{equation} \label{phi.varphi}
(1+q)^{-1}{}^{(1)}\!{\phi_\mathrm{c}}= -l{}^{(1)}\!\varphi_{\mathrm c},
\end{equation}
we can write equations~\eqref{ucg_gov1} as a coupled system for
${\varphi}_{\mathrm c}$ and $B_{\mathrm c}$:
\begin{subequations}\label{ucg1_varphi}
\begin{align}
(1+q)\partial_N(l{}^{(1)}\!\varphi_{\mathrm c}) &=
{\cal H}^{-2}{\bf D}^2({\cal H}{}^{(1)}\!B_{\mathrm c}), \label{ucg1_varphi.1} \\
\partial_N(a^2\,{}^{(1)}\!B_{\mathrm c} )&=
a^2 {\cal H}^ {-1}(1+q)l{}^{(1)}\!\varphi_{\mathrm c}. \label{ucg1_varphi.2}
\end{align}
\end{subequations}
We now eliminate
the metric perturbation ${}^{(1)}\!B_{\mathrm c}$ by
applying ${\bf D}^2$ to~\eqref{ucg1_varphi.2}
and substituting for ${\bf D}^2({\cal H}{B}_{\mathrm c})$ from~\eqref{ucg1_varphi.1}.
This leads to
\begin{equation} \label{DE.varphi.1}
\left(\partial_N^2+2\frac{h' }{h} \partial_N
- {\cal H}^{-2}{\bf D}^2\right)(l {}^{(1)}\!{\varphi}_{\mathrm c}) = 0,
\end{equation}
where $h=h(N)$ is a background scalar given by\footnote
{We use $h^2$ instead of $h$ in order to have a simple link
with the commonly used Mukhanov-Sasaki form of the perturbed
Klein-Gordon equation, as in equations~\eqref{KG1} and~\eqref{KG2}
in appendix~\ref{KG.alt}. }
$h^2= 2a^2{\cal H}(1+q),$ and $h'\equiv\partial_N h.$ By differentiating this expression
and using equations~\eqref{q.definition},~\eqref{2-q.itoU} and \eqref{H,q.prime}
one can express the coefficient $h'/h$ in~\eqref{DE.varphi.1}
in terms of the scalar field potential $V(\varphi_0)$, as follows:
\begin{equation} \label{partial.h}
\frac{h'}{h} = \frac{(2l V_{,\varphi} -V)}{2H^2},
\end{equation}
with $H$ given by~\eqref{H.ito.phi} in appendix~\ref{background}.
Equation~\eqref{DE.varphi.1} with~\eqref{partial.h} is the desired new form of the
perturbed Klein-Gordon equation at first order.
By inspection it is clear that in the
super-horizon regime $l {}^{(1)}\!{\varphi}_{\mathrm c}\approx C$,
where $C$ is a spatial function, is a solution of this
equation as expected.
However, since~\eqref{DE.varphi.1} is a second order differential equation, it
will have two independent solutions, and the general solution
in the super-horizon regime can be written in the form
\begin{equation} \label{soln.SH1}
l {}^{(1)}\!{\varphi}_{\mathrm c}\approx C_1 +
C_2\int_{N_{init}}^N\frac{d{\bar N}}{h({\bar N})^2},
\end{equation}
which at first sight contradicts the earlier result~\eqref{phi_c.conserved} that
$l {}^{(1)}\!{\varphi}_{\mathrm c}\approx C$ \emph{is the general solution of
the perturbation equations in the super-horizon regime}. It follows that
the spatial function $C_2$ must be of order $O({\bf D}^2)$, making the second
term negligible in the super-horizon regime.
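As an illustrative numerical check (a sketch, not part of the paper's argument), one can integrate the $k\to 0$ limit of~\eqref{DE.varphi.1}, namely $u''+2(h'/h)u'=0$ with $u=l\,{}^{(1)}\!\varphi_{\mathrm c}$, for an assumed toy background $h(N)=e^N$, and compare with the closed form~\eqref{soln.SH1}; the constants $C_1$, $C_2$, the choice of $h$, and the step size are all arbitrary illustrative choices.

```python
import math

# Toy background choice (assumption for illustration only): h(N) = e^N,
# for which the closed form (soln.SH1) is u = C1 + C2*(1 - e^{-2N})/2.
def h(N):  return math.exp(N)
def dh(N): return math.exp(N)

C1, C2 = 1.0, 0.5

def rhs(N, u, up):
    # u'' = -2 (h'/h) u'  (the k -> 0 limit of DE.varphi.1)
    return up, -2.0 * dh(N) / h(N) * up

# fixed-step RK4 from N = 0 to N = 3, starting from the closed-form data at N = 0
N, u, up = 0.0, C1, C2 / h(0.0)**2
dN = 1e-3
for _ in range(3000):
    k1u, k1p = rhs(N, u, up)
    k2u, k2p = rhs(N + dN/2, u + dN/2*k1u, up + dN/2*k1p)
    k3u, k3p = rhs(N + dN/2, u + dN/2*k2u, up + dN/2*k2p)
    k4u, k4p = rhs(N + dN, u + dN*k3u, up + dN*k3p)
    u  += dN/6 * (k1u + 2*k2u + 2*k3u + k4u)
    up += dN/6 * (k1p + 2*k2p + 2*k3p + k4p)
    N  += dN

u_exact = C1 + C2 * (1.0 - math.exp(-2.0 * N)) / 2.0
print(u, u_exact)
```

In this toy case the second term is bounded by $C_2/2$ and quickly saturates, illustrating how the $C_2$ mode becomes negligible relative to the constant mode.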
Although~\eqref{DE.varphi.1} can be solved explicitly
in the super-horizon regime as above,
and also in the special case of power-law inflation
and in the slow-roll approximation,\footnote
{This involves using conformal time $\eta$ instead of $e$-fold
time $N$. See appendix~\ref{KG.alt} for the resulting alternative
forms of the Klein-Gordon equation.}
there is a restriction on
its overall applicability.
On recalling that $l=-1/(\varphi_0')$, it follows that
\emph{the coefficient~\eqref{partial.h} will be singular
whenever $\partial_N \varphi_0=0$}.\footnote
{Some implications of this singularity have been discussed by
Finelli and Brandenberger (1999)~\cite{finbra99}.}
For example, during the
period of reheating that occurs at the end of inflation the scalar field
$\varphi_0$ oscillates, which means that $\partial_N \varphi_0$ will be zero
repeatedly. In order to avoid this singularity one can use
${}^{(1)}\!{\varphi}_{\mathrm c}$ as the dependent variable.
The alternative form of the Klein-Gordon equation is given in
appendix~\ref{KG.alt}.
One can also use the above procedure to derive the perturbed
Klein-Gordon equation at second order. The leading order terms
in equations~\eqref{ucg_gov1} and~\eqref{infl.uni}
will be the same as at first order, but the equations will also have
source terms that are quadratic in the first order
perturbations $\phi_{\mathrm c},B_{\mathrm c}$ and $\varphi_{\mathrm c}$.\footnote{The
detailed expressions can be obtained from the equations in
UW2~\cite{uggwai18}. We do not give them because in this
paper we are just interested in the overall structure of the equations.}
At the first stage, the second order analogue of equations~\eqref{ucg1_varphi} takes the form
\begin{subequations}\label{ucg2_varphi}
\begin{align}
(1+q)\partial_N(l{}^{(2)}\!\hat{\varphi}_{\mathrm c}) &=
{\cal H}^{-2}{\bf D}^2({\cal H}{}^{(2)}\!{B}_{\mathrm c}) +
(1+q){\mathbb S}_{\varphi}, \label{ucg2_varphi.1} \\
\partial_N(a^2{}^{(2)}\!B_{\mathrm c} )&=
a^2 {\cal H}^ {-1}(1+q)l{}^{(2)}\!\hat{\varphi}_{\mathrm c}
+ {\mathbb S}_B, \label{ucg2_varphi.2}
\end{align}
\end{subequations}
where we have chosen to use ${}^{(2)}\!\hat{\varphi}_{\mathrm c}$
instead of ${}^{(2)}\!{\varphi}_{\mathrm c}$ since this simplifies the source
term ${\mathbb S}_{\varphi}$ so that it has the property
\begin{equation} \label{S_varphi}
{\mathbb S}_{\varphi}={\cal O}({\bf D}^2).
\end{equation}
This can be confirmed by inspecting the various terms that contribute
to ${\mathbb S}_{\varphi}$. Here we note that we have used~\eqref{phi.varphi} to
replace ${}^{(1)}\!{\phi}_{\mathrm c}$ by ${}^{(1)}\!{\varphi}_{\mathrm c}$
in the source terms so that they depend only on the spatial derivatives
of ${}^{(1)}\!B_{\mathrm c}$ and ${}^{(1)}\!\varphi_{\mathrm c}$.
Eliminating ${}^{(2)}\!B_{\mathrm c}$ in equations~\eqref{ucg2_varphi}
as in the linear case leads to
\begin{equation} \label{DE.varphi.2}
\left(\partial_N^2+2\frac{h'}{h} \partial_N
- {\cal H}^{-2}{\bf D}^2 \right)(l {}^{(2)}\!\hat{\varphi}_{\mathrm c})=
\partial_N(h^2{\mathbb S}_{\varphi}) + {\cal H}^{-2}{\bf D}^2 {\mathbb S}_B,
\end{equation}
where $h'/h$ is given by~\eqref{partial.h}.
Equation~\eqref{DE.varphi.2} is a new version of the
perturbed Klein-Gordon equation at second order.
We note
one complication regarding the source term in~\eqref{DE.varphi.2}.
Since it depends on both ${}^{(1)}\!\varphi_{\mathrm c}$ and ${}^{(1)}\!B_{\mathrm c}$
one has to express ${}^{(1)}\!B_{\mathrm c}$ in terms of ${}^{(1)}\!\varphi_{\mathrm c}$
using~\eqref{ucg1_varphi.1} in order to
make~\eqref{DE.varphi.2} a closed equation,\footnote{See for example,
Malik \emph{et al} 2008~\cite{maletal08} for an explicit
example of this process: the second order differential equation that arises from
the conservation of energy equation is equation (3.5) which becomes equation
(3.22) after the metric terms have been eliminated, in the slow-roll approximation.}
and this involves using the inverse Laplacian operator ${\bf D}^{-2}$.
An important property of equation~\eqref{DE.varphi.2} is
that the total source
term on the right side of the equation is ${\cal O}({\bf D}^2)$
on account of~\eqref{S_varphi}, and this is due to using the hatted variable
${}^{(2)}\!\hat{\varphi}_{\mathrm c}$ instead of ${}^{(2)}\!{\varphi}_{\mathrm c}$.
Thus in the super-horizon regime, equation~\eqref{DE.varphi.2} reduces to the
DE
\begin{equation} \label{DE.varphi.2.S}
\left(\partial_N^2+2\frac{h'}{h} \partial_N
\right)(l {}^{(2)}\!\hat{\varphi}_{\mathrm c})\approx 0,
\end{equation}
in complete analogy with equation~\eqref{DE.varphi.1},
which means that the general solution
has the same form as~\eqref{soln.SH1} with $l {}^{(2)}\!\hat{\varphi}_{\mathrm c}$
replacing $l {}^{(1)}\!{\varphi}_{\mathrm c}$. Since equation~\eqref{phi_c.conserved}
shows that $l {}^{(2)}\!\hat{\varphi}_{\mathrm c}$
is a conserved quantity in general, the second term in the solution
must be negligible in the super-horizon regime,
as in the first order case. Another similarity is that the
changes of variable made on the first order equation~\eqref{DE.varphi.1}
as in equations~\eqref{KG1}-\eqref{KG4},
which affect only the terms
on the left side of the equation,
can be performed on equation~\eqref{DE.varphi.2} in an identical manner.
On the other hand changing from
${}^{(2)}\!\hat{\varphi}_{\mathrm c}$ to ${}^{(2)}\!{\varphi}_{\mathrm c}$
as the dependent variable using~\eqref{hat.varphi}
will complicate the equation~\eqref{DE.varphi.2} significantly by
introducing additional source terms that are
non-zero in the super-horizon regime.
Equations~\eqref{ucg2_varphi} and~\eqref{DE.varphi.2}, when transformed to Fourier
space, both provide an algorithm for calculating
$l {}^{(2)}\!\hat{\varphi}_{\mathrm c}$ numerically.\footnote
{Numerical procedures for determining second order scalar perturbations
using the perturbed Klein-Gordon equation with ${}^{(2)}\!\varphi_{\mathrm c}$
rather than $l{}^{(2)}\!\varphi_{\mathrm c}$ as the dependent variable
have been given by Huston and Malik (2009)~\cite{husmal09} using the slow-roll
approximation, and by Huston and Malik (2011)~\cite{husmal11} in general.}
Using~\eqref{ucg2_varphi} would have
two advantages. First, there is no need to eliminate ${}^{(1)}\!B_{\mathrm c}$, thereby
avoiding the introduction of the inverse Laplacian in the source terms; second,
the output directly determines two other quantities of
physical interest, namely the Bardeen potential $\psi_\mathrm{p}$ and the density perturbation
${\mbox{\boldmath $\delta$}}_{\mathrm v}$, via the equations $\psi_\mathrm{p} = -{\cal H}B_{\mathrm c}$
and ${\mbox{\boldmath $\delta$}}_{\mathrm v} = -\partial_N(l\varphi_{\mathrm c})$, generalized
to second order with source terms.
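To illustrate the structure of such an algorithm at first order (a sketch with assumed toy parameters, not the paper's second order implementation), the pair~\eqref{ucg1_varphi} can be integrated directly for a single Fourier mode, ${\bf D}^2\rightarrow -k^2$. The background below is power-law inflation, for which $\varphi_0'$ and hence $1+q=\sfrac12(\varphi_0')^2$ are constant, with $a=e^N$ and $H=H_0e^{-(1+q)N}$; the numerical values are arbitrary.

```python
import math

phi0p = 0.1               # assumed constant phi_0' (power-law inflation)
opq   = 0.5 * phi0p**2    # 1 + q = (phi_0')^2 / 2
H0, k = 1.0, 1e-3         # Hubble normalization and comoving wave number (toy values)

def rhs(N, u, w):
    # u = l * varphi_c,  w = a^2 * B_c  (both for a single Fourier mode)
    a  = math.exp(N)
    cH = a * H0 * math.exp(-opq * N)        # conformal Hubble scalar cal-H = a H
    du = -k**2 * w / (opq * cH * a**2)      # (ucg1_varphi.1) with D^2 -> -k^2
    dw = a**2 * opq * u / cH                # (ucg1_varphi.2)
    return du, dw

# fixed-step RK4 over ten e-folds for a mode that starts super-horizon (k << cal-H)
N, u, w = 0.0, 1.0, 0.0
dN = 1e-3
for _ in range(10000):
    k1u, k1w = rhs(N, u, w)
    k2u, k2w = rhs(N + dN/2, u + dN/2*k1u, w + dN/2*k1w)
    k3u, k3w = rhs(N + dN/2, u + dN/2*k2u, w + dN/2*k2w)
    k4u, k4w = rhs(N + dN, u + dN*k3u, w + dN*k3w)
    u += dN/6 * (k1u + 2*k2u + 2*k3u + k4u)
    w += dN/6 * (k1w + 2*k2w + 2*k3w + k4w)
    N += dN

print(u)   # remains close to its initial value: l*varphi_c is conserved
```

The output confirms the super-horizon behaviour discussed above: $l\,{}^{(1)}\!\varphi_{\mathrm c}$ stays essentially constant while the metric variable $a^2 {}^{(1)}\!B_{\mathrm c}$ is driven by it.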
\section{Discussion}
In this paper we have considered second order perturbations of a flat Friedmann-Lema\^{i}tre
universe whose stress-energy content is a single minimally coupled scalar field. We have derived the general solution of the perturbed Einstein equations in
explicit form for this class of models when the perturbations are in the super-horizon
regime. We assumed an arbitrary potential and made
no use of the slow-roll approximation. We also showed that the Einstein equations in
the uniform curvature gauge lead to a new form of the perturbed Klein-Gordon
equation for linear perturbations using $l\varphi_{\mathrm c}$ as the dependent variable,
which we generalized to second order.
The perturbations of the scalar field have a simple form in the uniform curvature gauge,
which reflects the fact that the perturbations admit a conserved
quantity. Although second order perturbations of scalar fields
minimally coupled to gravity
have been studied extensively during the past fifteen years
in connection with inflation, starting with
Acquaviva \emph {et al} (2003)~\cite{acqetal03} and
Maldacena (2003)~\cite{mal03} (see also, for example,
Finelli \emph {et al}~\cite{finetal04,finetal06},
Malik (2005)~\cite{mal05} and Vernizzi (2005)~\cite{ver05}), the conserved
quantity $l{}^{(2)}\!\hat{\varphi}_{\mathrm c}$ given by~\eqref{hat.varphi.new}
and the general solution for ${}^{(2)}\!\varphi_{\mathrm c}$
in equation~\eqref{varphi_c.solution} have
not been given previously, to the best of our knowledge. However, we note that
Finelli \emph{et al} (2004)~\cite{finetal04} have derived an approximate
solution for ${}^{(2)}\!\varphi_{\mathrm c}$ using the perturbed
Klein-Gordon equation at second order (their equation (28)). They
impose the slow-roll approximation and use
the potential $V=\sfrac12 m^2 \varphi_0^2$
that corresponds to chaotic inflation. We have not
been able to relate their solution, which is described in equations (53)-(57)
of~\cite{finetal04}, to our general result. In addition, some of the change of
variable formulas that we use in section~\ref{cons.quantity} have been given
previously but in a more complicated form. For example
equation (8) in Finelli \emph{et al} (2006)~\cite{finetal06}
corresponds to our equation~\eqref{psi-varphi2},
while equations (3.3)-(3.4)
in Maldacena (2003)~\cite{mal03} correspond to the special case of
our equation~\eqref{psi-varphi2}
when the gauge on the right side is chosen to be the uniform curvature gauge.
\begin{appendix}
\section{Cosmological scalar fields as perfect fluids~\label{scalar.field}}
We are considering a flat FL universe in which the matter-energy
content is a single minimally coupled scalar field $\varphi$.
The stress-energy tensor is of the form
\begin{subequations}
\begin{equation}\label{sfT_ab}
T^a\!_b = \mbox{\boldmath $\nabla$}^a\varphi\mbox{\boldmath $\nabla$}\!_b\varphi - \left[\sfrac12 \mbox{\boldmath $\nabla$}^c\varphi\mbox{\boldmath $\nabla$}\!_c\varphi +
V(\varphi)\right]\delta^a\!_b ,
\end{equation}
and the conservation equation leads to the Klein-Gordon equation
$\mbox{\boldmath $\nabla$}^c\mbox{\boldmath $\nabla$}\!_c\varphi - V\!_{,\varphi} = 0$,
where the potential $V(\varphi)$ has to be specified. In cosmology this stress-energy tensor has the perfect fluid form
\begin{equation}\label{pf}
T^a\!_b = \left(\rho + p\right)\!u^a u_b + p\delta^a\!_b,
\end{equation}
with
\begin{equation}\label{T_ab,scalar}
\rho + p = - \mbox{\boldmath $\nabla$}^a\varphi \mbox{\boldmath $\nabla$}\!_a\varphi, \qquad \rho - p = 2V(\varphi),
\qquad u_a = \frac{\mbox{\boldmath $\nabla$}\!_a\varphi}{\sqrt{- \mbox{\boldmath $\nabla$}^a\varphi \mbox{\boldmath $\nabla$}\!_a\varphi}} .
\end{equation}
\end{subequations}
\subsection{Background equations \label{background}}
In a spatially flat background the Friedmann
equation\footnote{We use units with $c=\hbar=1$ and
$8\pi G=1/(M_{Pl})^2 = 1$, where $M_{Pl}$ is the reduced Planck mass
and $G$ the gravitational constant.}
and the conservation of energy equation read\footnote
{We remind the reader that a $'$ denotes the derivative with respect to $N$.}
\begin{subequations} \label{basic.relations}
\begin{equation} \label{prop.of.rho}
3H^2 = \rho_0, \qquad \rho_0' = -3(\rho_0+p_0),
\end{equation}
where $H$ is the background Hubble variable while $\rho_0$
and $p_0$ are the background
energy density and pressure, respectively.
We introduce the standard matter variables
$w=p_0/\rho_0$ and $c_s^2=p_0'/\rho_0'$.
Using these equations and the definition $1+q=- H'/H$
it follows that
\begin{equation} \label{prop.of.w}
3(1+w)=2(1+q), \qquad w'=3(1+w)(w-c_s^2).
\end{equation}
When evaluated on the FL background, equation~\eqref{T_ab,scalar} leads to
\begin{equation}\label{rho,p for phi}
\rho_0 + p_0 = H^2(\varphi_0')^2, \qquad \rho_0 - p_0 = 2V(\varphi_0).
\end{equation}
\end{subequations}
We can use equations~\eqref{basic.relations} to express $H^2$,
$w$ and $c_s^2$ in terms of $\varphi_0$ and $V(\varphi_0)$. To indicate that $w$
and $c_s^2$ describe a scalar
field we will label them as $w_{\varphi}$ and $c_{\varphi}^2$.
The resulting expressions are as follows:
\begin{subequations} \label{wcs_N}
\begin{align}
H^2&=\frac{2V(\varphi_0)}{6-(\varphi_0')^2}, \label{H.ito.phi} \\
w_{\varphi} &= -1 +\sfrac13 (\varphi_0')^2, \label{w.ito.phi} \\
c_{\varphi}^2 &= 1 + \frac{(6-(\varphi_0')^2)}{3\,\varphi_0'}
\frac{V_{,\varphi}}{V},
\end{align}
\end{subequations}
where $V_{,\varphi}$ is the derivative of
$V(\varphi_0)$ with respect to $\varphi_0$.
It follows from~\eqref{prop.of.w} and~\eqref{wcs_N} that
\begin{equation} \label{2-q.itoU}
2-q=\frac{V}{H^2}, \qquad 1-c_{\varphi}^2=
-\frac{2}{3\varphi_0'}\frac{V_{,\varphi}}{H^2}.
\end{equation}
Further, one can derive the background Klein-Gordon equation~\eqref{KG_N}
by differentiating~\eqref{H.ito.phi}, and then use the result to obtain
\begin{equation} \label{w-c^2}
w_{\varphi}-c_{\varphi}^2 =\frac23\left(\frac{\varphi_0''}{\varphi_0'}\right).
\end{equation}
We will also need the following result
\begin{equation} \label{H,q.prime}
q'= 3(1+q)(w_{\varphi}-c_{\varphi}^2),
\end{equation}
which is an immediate consequence of~\eqref{prop.of.w}.
We end this section with a brief digression on the Hubble flow
functions $\varepsilon_n, \, n=1,2,3\dots$ which define the
slow-roll regime,
although we do not use this approximation. These
functions are defined by
$\varepsilon_1=-H'/H,\, \varepsilon_{n+1}=\varepsilon_n'/\varepsilon_n,\,
n=1,2,3\dots$ (see for example,
Martin (2016)~\cite{mar16}, equation (5)). It follows
from $1+q=- H'/H$ and equations~\eqref{prop.of.w},~\eqref{w.ito.phi}
and~\eqref{w-c^2} that the Hubble flow functions
are related to the scalar field according to
\begin{equation}
\varepsilon_1= 1+q = \sfrac12(\varphi_0')^2,
\qquad \varepsilon_2=\frac{q'}{1+q}=2\left(\frac{\varphi_0''}{\varphi_0'}\right).
\end{equation}
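As a numerical illustration (a sketch under assumed initial data, without invoking the slow-roll approximation), $\varepsilon_1=\sfrac12(\varphi_0')^2$ can be obtained by evolving the background Klein-Gordon equation in $e$-fold time. Using $H'/H=-(1+q)=-\sfrac12(\varphi_0')^2$ and~\eqref{H.ito.phi}, this equation reads $\varphi_0''+(3-\sfrac12(\varphi_0')^2)\varphi_0'+\sfrac12(6-(\varphi_0')^2)V_{,\varphi}/V=0$. The potential $V=\sfrac12 m^2\varphi_0^2$ and the initial data below are illustrative choices; note that $m$ drops out, since only $V_{,\varphi}/V=2/\varphi_0$ enters.

```python
import math

def rhs(N, phi, dphi):
    # background Klein-Gordon equation in e-fold time for V = (1/2) m^2 phi^2;
    # V_phi / V = 2/phi, so the mass m cancels out of the equation
    ddphi = -(3.0 - 0.5 * dphi**2) * dphi - 0.5 * (6.0 - dphi**2) * (2.0 / phi)
    return dphi, ddphi

# fixed-step RK4 over 20 e-folds from illustrative initial data
N, phi, dphi = 0.0, 16.0, 0.0
dN = 1e-3
for _ in range(20000):
    k1p, k1d = rhs(N, phi, dphi)
    k2p, k2d = rhs(N + dN/2, phi + dN/2*k1p, dphi + dN/2*k1d)
    k3p, k3d = rhs(N + dN/2, phi + dN/2*k2p, dphi + dN/2*k2d)
    k4p, k4d = rhs(N + dN, phi + dN*k3p, dphi + dN*k3d)
    phi  += dN/6 * (k1p + 2*k2p + 2*k3p + k4p)
    dphi += dN/6 * (k1d + 2*k2d + 2*k3d + k4d)
    N += dN

eps1 = 0.5 * dphi**2          # epsilon_1 = 1 + q
print(eps1, 2.0 / phi**2)     # attractor: phi_0' ~ -2/phi, so eps1 ~ 2/phi^2
```

The trajectory relaxes onto the attractor $\varphi_0'\approx -2/\varphi_0$ within roughly an $e$-fold, so $\varepsilon_1\approx 2/\varphi_0^2\ll 1$ while $\varphi_0$ remains large.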
\subsection{Perturbations of the scalar field \label{app:scalar} }
For a scalar field we can express the matter variables
$({}^{(r)}\!{\mbox{\boldmath $\delta$}}, {}^{(r)}\!{P}, {}^{(r)}\!{V})$,
where ${}^{(r)}\!{P}={}^{(r)}\!{p}/(\rho_0+p_0)$,
in terms of the scalar field perturbations ${}^{(r)}\!{\varphi}$ and the metric variables
$\phi$ and $B$ using equations~\eqref{sfT_ab} and~\eqref{T_ab,scalar}.
The results at first order are
\begin{subequations} \label{scalar1}
\begin{align}
{}^{(1)}\!{\mbox{\boldmath $\delta$}} + {}^{(1)}\!{P} &=
-2(l{\partial}_{N}{}^{(1)}\!{\varphi} +{}^{(1)}\!\phi) ,\label{scalar1.1} \\
{}^{(1)}\!{\mbox{\boldmath $\delta$}} - {}^{(1)}\!{P} &=
2H^{-2}\, lV_{,\varphi} \,l{}^{(1)}\!{\varphi} , \label{scalar1.2} \\
{\cal H}{}^{(1)}\!{V} &= l{}^{(1)}\!{\varphi}, \label{scalar1.3}
\end{align}
\end{subequations}
where $l=-(\varphi_0')^{-1}$
is given by~\eqref{def_lambda}.
The results at second order are:\footnote{Formulas
for ${}^{(r)}\!T^a\!_b$, $r=1,2$, for the stress-energy tensor~\eqref{sfT_ab} have been given
for example, by Acquaviva \emph{et al} (2003)~\cite{acqetal03}
(see equations (9)-(16)), by Malik (2007)~\cite{mal07}
(see equations (C12)-(C14)) and by Nakamura (2009)~\cite{nak09a}
(see equations (4.45)-(4.52)).
The expressions we have given can be obtained using the relation
${}^{(2)}\!T^a\!_b = (\rho_0 +p_0)({\mathsf T}^a\!_b + {\mathbb T}^a\!_b)$, see UW2~\cite{uggwai18}.}
\begin{subequations} \label{scalar2}
\begin{align}
{}^{(2)}\!{\mbox{\boldmath $\delta$}} + {}^{(2)}\!{P} &=
-2(l{\partial}_{N}{{}^{(2)}\!\varphi} + {}^{(2)}\!\phi) +
2\left(2{}^{(1)}\!\phi +l{\partial}_{N}{{}^{(1)}\!\varphi}\right)^2 +
2{\cal H}^{-2}\left({\bf D}(l{{}^{(1)}\!\varphi} - {\cal H} {}^{(1)}\!B)\right)^2 , \label{scalar2.1} \\
{}^{(2)}\!{\mbox{\boldmath $\delta$}} - {}^{(2)}\!{P} &=
2H^{-2}\left(lV_{,\varphi}\,l{}^{(2)}\!{\varphi} +
V_{,\varphi\varphi} (l{{}^{(1)}\!\varphi})^2\right), \\
{\cal H}{}^{(2)}\!{V} &=l {}^{(2)}\!{\varphi} -
{\cal S}^i\left[({{}^{(1)}\!\mbox{\boldmath $\delta$}} + {{}^{(1)}\!P}){\bf D}_i {\cal H}{{}^{(1)}\!V}\right]. \label{scalar2.3}
\end{align}
\end{subequations}
Before continuing we note two properties of the perturbations
of the scalar field that are obtained immediately by choosing the
total matter gauge in~\eqref{scalar1} and~\eqref{scalar2}, namely that
\begin{equation} \label{sf.prop1}
{}^{(r)}\!\varphi_{\mathrm v} = 0,\qquad r=1,2,
\end{equation}
and hence that
\begin{equation} \label{sf.prop2}
{}^{(r)}\!{P}_{\mathrm v} = {}^{(r)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v},\qquad r=1,2.
\end{equation}
We now derive expressions for the non-adiabatic
pressure perturbations ${}^{(r)}\!\Gamma$, $r=1,2$ for
a perturbed scalar field. The general expressions are given
in UW2~\cite{uggwai18} (see equation (23)), which we repeat here
\begin{subequations} \label{gamma_general}
\begin{align}
{}^{(1)}\!{\Gamma} &= {}^{(1)}\!{P} - c_s^2\, {}^{(1)}\!{\mbox{\boldmath $\delta$}},\label{gamma_1} \\
{}^{(2)}\!{\Gamma} &= {}^{(2)}\!{P} - c_s^2\,{}^{(2)}\!{\mbox{\boldmath $\delta$}} +
\sfrac13 (\partial_N c_s^2)({{}^{(1)}\!\mbox{\boldmath $\delta$}})^2 +
\sfrac23{{}^{(1)}\!\mbox{\boldmath $\delta$}}\left[\partial_N -
3(1 + c_s^2)\right]{{}^{(1)}\!\Gamma}. \label{gamma_2}
\end{align}
\end{subequations}
The expressions on the right side are independent of the choice of timelike gauge
once the spatial gauge has been fixed as in UW2~\cite{uggwai18}.
In the present situation we evaluate them in the total matter gauge
and use~\eqref{sf.prop2} which leads to
\begin{subequations} \label{gamma_sf}
\begin{align}
{}^{(1)}\!{\Gamma} &= (1-c_s^2){}^{(1)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v}, \label{gamma_sf1} \\
{}^{(2)}\!{\Gamma} &= (1-c_s^2){}^{(2)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v} +
\sfrac13 (\partial_N c_s^2)({{}^{(1)}\!\mbox{\boldmath $\delta$}}_{\mathrm v})^2 +
\sfrac23 {{}^{(1)}\!\mbox{\boldmath $\delta$}_{\mathrm v}}\left(\partial_N - 3(1+c_s^2) \right)\!{{}^{(1)}\!\Gamma}. \label{gamma_sf_2}
\end{align}
\end{subequations}
The constraints~\eqref{scalar1.3} and~\eqref{scalar2.3} play
a central role in that they determine the
perturbations of the scalar field in terms of the velocity perturbations
in an arbitrary gauge. We can write~\eqref{scalar2.3} in terms of
the hatted variables, in a form that will be useful later, as follows.
We combine~\eqref{scalar1.1} and~\eqref{scalar1.3} to obtain
\begin{equation} \label{delta+P}
{}^{(1)}\!{\mbox{\boldmath $\delta$}} + {}^{(1)}\!{P} =
-2\left(({\partial}_{N}+1+q)({\cal H}{}^{(1)}\!V) +{}^{(1)}\!\phi
- \sfrac32(1+c_{\varphi}^2){\cal H}{}^{(1)}\!V\right),
\end{equation}
using $l'/l=-(1+q)+\sfrac32(1+c_{\varphi}^2)$.
The perturbed conservation of momentum equation
(UW2~\cite{uggwai18}, section 4) when
applied to a scalar field using~\eqref{gamma_sf1} gives
\begin{equation}
({\partial}_{N} +1+q)({\cal H}{}^{(1)}\!V) + {}^{(1)}\!\phi = -{}^{(1)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v},
\end{equation}
which when substituted in~\eqref{delta+P} yields
\begin{equation}
{}^{(1)}\!{\mbox{\boldmath $\delta$}} + {}^{(1)}\!{P} =3(1+c_{\varphi}^2){\cal H}{}^{(1)}\!V + 2{}^{(1)}\!{\mbox{\boldmath $\delta$}}_{\mathrm v}.
\end{equation}
We substitute this expression in~\eqref{scalar2.3} and introduce
the hatted variables ${}^{(2)}\!{\hat{\varphi}}$ defined in~\eqref{hat.varphi} and
${\cal H}{}^{(2)}\!{\hat{V}} =
{\cal H}{}^{(2)}\!{V} + (1+q)({\cal H} {}^{(1)}\!{V})^2.$
Together with~\eqref{scalar1.3} we obtain
\begin{subequations} \label{varphi_V}
\begin{align}
l{}^{(1)}\!{\varphi}&={\cal H}{}^{(1)}\!{V} , \label{scalar1.3a} \\
l {}^{(2)}\!{\hat{\varphi}} &= {\cal H}{}^{(2)}\!{\hat V} +
2{\cal S}^i[{\mbox{\boldmath $\delta$}}_{\mathrm v}{\bf D}_i {\cal H}{V}]. \label{scalar2.3a}
\end{align}
\end{subequations}
\subsection{Alternative forms for the perturbed Klein-Gordon equation \label{KG.alt}}
We first transform the differential equation~\eqref{DE.varphi.1}
to Fourier space (${\bf D}^2\rightarrow -k^2$) and introduce
conformal time $\eta$ obtaining
\begin{equation} \label{KG1}
\left(\partial_{\eta}^2\,+
2({\partial_{\eta}z }/{z}) \partial_{\eta}
+k^2\right)(l {}^{(1)}\!{\varphi}_{\mathrm c})=0, \qquad z=h/\sqrt{\cal H}=a/l.
\end{equation}
We make the transition from $N$ to $\eta$ by using
\begin{equation}
\partial_{\eta} = {\cal H}\partial_{N}, \qquad
\partial_{\eta}^2 = {\cal H}^2(\partial_{N}^2-q\partial_{N}).
\end{equation}
Alternatively one can transform the above differential equation
to the so-called Mukhanov-Sasaki form by
scaling $l {}^{(1)}\!{\varphi}_{\mathrm c}$ with $z$:
\begin{equation} \label{KG2}
\left(\partial_{\eta}^2\,-
({\partial_{\eta}^2z }/{z})
+k^2\right)(a {}^{(1)}\!{\varphi}_{\mathrm c})=0, \qquad z=a/l.
\end{equation}
On recalling that the comoving curvature perturbation is given by
${\cal R}=\psi_{\mathrm v}=-l{}^{(1)}\!{\varphi}_{\mathrm c}$
these differential equations can be written with
${\cal R}$ and $z{\cal R}$, respectively, as the dependent variable.
See, for example, Weinberg (2008)~\cite{wei08}, equation (10.3.1) and page 481,
and Durrer (2008)~\cite{dur08}, equation (3.35) and page 113, respectively.
In the case of power-law inflation and when using the slow-roll approximation
equations~\eqref{KG1} and~\eqref{KG2} reduce to particular forms of Bessel's equation,
and hence can be solved. See for example~\cite{dur08},
pages 113-115, and in~\cite{wei08}, pages 481-482 and 488-491.
Finally, if we use ${\varphi}_{\mathrm c}$ as the dependent variable,
the differential equation~\eqref{DE.varphi.1} assumes the following form:
\begin{equation} \label{KG3}
\partial_N^2{\varphi}_{\mathrm c} +
\frac{V}{H^2} \partial_N {\varphi}_{\mathrm c} +
\frac{(V_{,\varphi\varphi}+2\varphi_0' V_{,\varphi} +
(\varphi_0' )^2V)}{H^2}{\varphi}_{\mathrm c} -
{\cal H}^{-2}\,{\bf D}^2{\varphi}_{\mathrm c}=0,
\end{equation}
(see for example,
Huston and Malik (2009)~\cite{husmal09}, equation (3.9), in the
Fourier domain, noting that $\delta \dot{\varphi}$ denotes differentiation with
respect to $N$ in this reference). If we change to conformal time we obtain
\begin{equation} \label{KG4}
\partial_{\eta}^2{\varphi}_{\mathrm c} +2{\cal H} \partial_{\eta}{\varphi}_{\mathrm c} +
a^2\left(V_{,\varphi\varphi}+2{\varphi_0'} V_{,\varphi} +
(\varphi_0' )^2V\right){\varphi}_{\mathrm c} -
{\bf D}^2{\varphi}_{\mathrm c}=0,
\end{equation}
with $\varphi_0'=\partial_\eta \varphi_0/{\cal H}$ (see for
example~\cite{husmal09}, equation (2.13), noting that $'$
denotes differentiation with respect to conformal time in this reference).
One can see by inspection that the coefficients of~\eqref{KG3}
and~\eqref{KG4} are well-defined when $\varphi_0'=0.$
\end{appendix}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
The discovery of the relationship between the masses of nuclear
supermassive black holes (SMBH) and the luminosity, velocity
dispersion, or mass of their host galactic spheroids
\citep{dressler:89,kormendy_review:95,magorrian:98,ferrarese:00,gebhardt:00,marconi:03,haering:04}
is surely one of the most profound observational results of the past
decade, if not the past century. Different methods applied to both
dormant and active BH in the nearby Universe now yield consistent
results and indicate that BH mass and galaxy velocity dispersion
$\sigma$ are related via $\mbox{$m_{\rm BH}$} \propto \sigma^\beta$, where $\beta
\simeq 4$--5 \citep{ferrarese:00,gebhardt:00,tremaine:02}, and BH mass
and galaxy mass are related via $\mbox{$m_{\rm BH}$} \propto \mbox{$m_{\rm gal}$}^{1.1}$
\citep{marconi:03,haering:04}\footnote{Several other equally tight
relationships between BH mass and other galaxy properties have been
discovered \citep[e.g.][]{graham:01,kormendy:09}. Although these
relationships are intriguing, here I focus on the relationship
between BH mass and galaxy mass.}. The {\em observed} scatter in
both of these relationships is remarkably small, and implies intrinsic
scatters of approximately 0.3 dex in $\mbox{$m_{\rm BH}$}$ at fixed $\sigma$, and
0.5 dex in $\mbox{$m_{\rm BH}$}$ at fixed $L$ \citep{novak:06,gultekin:09}.
A large number of theoretical explanations for the origin of this
observed relationship (hereafter referred to for brevity as the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship) have been proposed
\citep[e.g.][]{silk_rees:98,burkert_silk:01,adams:01,adams:03,wyithe_loeb:03,robertson:06,croton:bhev,hopkins_bhfpth:07}.
However, there is no widely accepted unique theoretical model, and the
models differ in their predictions for the amount of evolution in the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship. This quantity is therefore a potentially
strong discriminator between different theoretical models, and there
is great interest in obtaining robust direct observational
measurements of this relationship at high redshifts. A great deal of
effort and telescope time has been expended towards achieving this
goal.
With currently available facilities, the masses of dormant BH can be
measured via the dynamics of the surrounding gas or stars
\citep[see][for a recent review]{ferrarese_ford:05} only for very
nearby galaxies. At high redshift, it is currently possible to attempt
to measure masses only for active or accreting BH. Several such
studies have claimed to find evidence for significant evolution in the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship, always in the sense that black holes are
more massive at high redshift relative to their host galaxy
\citep[e.g.][]{treu:04,peng:06,woo:06,woo:08,salviander:07}. However,
these methods rely on a set of simplified underlying assumptions and
various proxies for the desired quantities, and are subject to
potentially severe selection biases. \citet{lauer:07} have argued
that if there is a moderate amount of scatter in the intrinsic
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship (consistent with observational constraints on
the scatter in the local relation), these selection biases can account
for most or all of the claimed evolution. It is therefore interesting
to explore the possibility of obtaining independent {\em empirical}
constraints on the evolution. Even obtaining robust upper and lower
limits on the evolution could be useful.
Recently, it has become common practice to estimate the stellar masses
of galaxies from multi-wavelength broadband photometry and/or
spectroscopy \citep[e.g.][]{bell_dejong:01,bell:03,kauffmann:03}. The
availability of deep multi-wavelength surveys covering substantial sky
area has yielded estimates of the stellar mass functions out to high
redshift, $z\sim 4$--5
\citep[e.g.][]{Drory:04,Fontana:04,Borch:06,Fontana:06,Pannella:06,Bundy:06,Pozzetti:07,Vergani:08,Marchesini:08,PerezGonzalez:08}. Similarly,
there has been significant progress in piecing together the evolution
of the {\em bolometric} luminosity function of quasars and Active
Galactic Nuclei (AGN) out to high redshift ($z\sim 6$) from
multi-wavelength surveys \citep[][and references therein; hereafter
HRH07]{hopkins_qsolf:07}.
In this paper, I explore whether one can derive useful empirical
constraints on the evolution of the relationship between BH mass and
galaxy mass by comparing these two sets of observed statistical
distributions (galaxy stellar mass functions and QSO/AGN luminosity
functions) under the basic and well-accepted ansatz that accreting BH
provide the power source for AGN. The outline of the rest of the paper
is as follows. In \S\ref{sec:results}, I describe my basic set of
assumptions. In \S\ref{sec:results:noscat} I present upper and lower
limits on the evolution of the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation, assuming that
there is no intrinsic scatter in the relation. In
\S\ref{sec:results:scat} I discuss how these constraints are impacted
by the inclusion of intrinsic scatter. I conclude in
\S\ref{sec:conclusions}.
\begin{figure*}
\begin{center}
\plottwo{qsolf_z=1.ps}{qsolf_z=2.ps}
\end{center}
\caption{\small The bolometric luminosity function of quasars at $z=1$
(left panel) and $z=2$ (right panel). Square symbols with error bars
show the estimate of the observed bolometric QSO LF from HRH07. Dashed
lines show the upper limit on the QSO LF derived from the observed
stellar mass function at the relevant redshift and the arguments
described in the text. From left to right, the dashed lines assume
that the zero-point of the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation has evolved by a
factor of $\Gamma=1$, 2, 3, 4, 5, or 6. Under these assumptions, the
mass of a typical BH hosted by a galaxy of a given mass must have
been larger by a factor of $\sim 2$ at $z=1$ and by a factor of 5--6
at $z=2$.
\label{fig:qsolf}}
\end{figure*}
\section{Upper and Lower Limits on the Evolution}
\label{sec:results}
For nearby dormant BH, the average relationship between BH mass and
galaxy mass can be characterized as $\mbox{$m_{\rm BH}$} \propto \mbox{$m_{\rm gal}$}^{1.1}$
\citep{marconi:03,haering:04}\footnote{\citet{lauer:07a} suggest a
slope of unity, but this would not significantly affect the
results.}. I will explore the simplest possible form for the
evolution of this relationship, namely scaling by a factor
$\Gamma(z)$ that depends on redshift only. Thus the mass of a BH residing
in a galaxy with mass \mbox{$m_{\rm gal}$}\ at redshift $z$ is given by:
\begin{equation}
\mbox{$m_{\rm BH}$}(z,\mbox{$m_{\rm gal}$}) = \Gamma(z)\, \mbox{$m_{\rm BH}$}(z=0, \mbox{$m_{\rm gal}$}) \propto \Gamma(z)\, \mbox{$m_{\rm gal}$}^{1.1} \, .
\label{eqn:mbhev}
\end{equation}
I now make use of the observed galaxy stellar mass function at some
redshift of interest, and assume that every galaxy hosts a SMBH with
mass given by Eqn.~\ref{eqn:mbhev}. One can then obtain a reasonable
{\em lower} limit on the evolution in \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ (i.e. on
$\Gamma(z)$) at a given redshift by assuming that 1) every BH is
active at all times (has a duty cycle of unity) 2) every active BH
always radiates at its Eddington luminosity. This set of assumptions
will clearly maximize the number of luminous quasars for a given
population of BH, under the fairly standard conjecture of
Eddington-limited accretion.
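The mapping from the stellar mass function to this maximal QSO luminosity function can be sketched as follows (illustrative only: the Schechter parameters, the local zero point $A$, and the Eddington coefficient are assumed placeholder values, not the fits used in this paper):

```python
import math

A     = 1.4e8     # assumed local zero point: m_BH [Msun] at m_gal = 1e11 Msun
Gamma = 2.0       # trial evolution factor Gamma(z)
l_edd = 3.2e4     # Eddington luminosity per unit BH mass, in L_sun / Msun

# assumed Schechter parameters for the galaxy stellar mass function
phi_star, M_star, alpha = 1e-3, 1e11, -1.3

def phi_mass(m):
    """Galaxy stellar mass function dn/dlog10(m_gal) [Mpc^-3 dex^-1]."""
    x = m / M_star
    return math.log(10.0) * phi_star * x**(alpha + 1.0) * math.exp(-x)

def phi_lum_max(L):
    """Maximal bolometric QSO LF dn/dlog10(L): duty cycle 1, L = L_Edd."""
    m_bh  = L / l_edd                                   # Eddington-limited accretion
    m_gal = 1e11 * (m_bh / (Gamma * A))**(1.0 / 1.1)    # invert m_BH = Gamma*A*(m_gal/1e11)^1.1
    # monotonic change of variables: dlog10(m_gal)/dlog10(L) = 1/1.1
    return phi_mass(m_gal) / 1.1

for logL in (12.0, 13.0, 14.0):
    print(logL, phi_lum_max(10.0**logL))
```

Raising $\Gamma$ shifts this curve towards higher luminosities, which is how the bright end of the observed QSO LF translates into a lower limit on $\Gamma(z)$.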
One can then obtain an {\em upper} limit on the evolution by comparing
the implied BH mass function at the redshift of interest with
observational estimates of the {\em present-day} BH mass
function. Again, under the apparently reasonable assumptions that BH
masses increase monotonically with time and that significant numbers
of massive BH are not somehow lost from galaxies, clearly the number
of massive BH in the past cannot exceed that at the present day. To
state the condition more precisely, the number density of BH,
$\phi(\mbox{$m_{\rm BH}$})$, at high redshift must not exceed the present-day value
for all BH masses greater than some threshold value $\mbox{$m_{\rm BH}$} > M_{\rm
min}$.
For the observed stellar mass function as a function of redshift, I
adopt the fitting functions of \citet{Fontana:06}, based on
measurements from the GOODS-MUSIC survey. I have checked that these
fitting functions provide good agreement with the results from other
surveys (see \citet{fontanot:09} for a thorough comparison of stellar
mass function estimates from different surveys over a wide range of
redshifts). There is good agreement between different estimates of the
stellar mass function up to $z\sim 2$; at higher redshifts the results
of different studies diverge. For this reason, the results presented
here are limited to redshift two and below. The stellar masses have
been converted to correspond to a \citet{chabrier:03} stellar initial
mass function. I will assume that there are no significant
redshift-dependent systematic errors in the stellar mass
estimates. For the moment, because I am mainly exploring the
feasibility of this approach, I also ignore the effect of the random
errors on the stellar masses, although these will probably have an
impact on the quantitative results.
\subsection{Constraints without Scatter}
\label{sec:results:noscat}
Initially, I assume that the relationship between BH mass and galaxy
mass has no intrinsic scatter. It is then straightforward to derive
the implied bolometric luminosity function of QSO/AGN from the
observed galaxy stellar mass function under the set of assumptions
outlined above, for a given value of the evolution factor
$\Gamma(z)$. Figure~\ref{fig:qsolf} shows the observed bolometric QSO
luminosity function as estimated by HRH07 along with the upper limit
estimate based on the stellar mass function, for different values of
$\Gamma$. This comparison is shown at $z=1$ and $z=2$. At QSO
luminosities below the ``knee'' in the LF, the upper limit estimate
overproduces QSOs, which can be understood as implying that these BH
are not active at all times and/or are radiating at sub-Eddington
luminosities. What is more interesting is that at the highest
luminosities, above $\sim 10^{14} \mbox{${\rm L}_{\odot}$}$ at $z=1$ or $10^{13.5} \mbox{${\rm L}_{\odot}$}$ at
$z=2$, {\em without evolution in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation, the number
of luminous QSOs is significantly underestimated}. Assuming that
these luminous QSOs are not radiating above their Eddington
luminosity, and are not magnified somehow (e.g. by beaming or
lensing), one can then read off the {\em minimum} amount of evolution
(minimum value of $\Gamma$) required to produce enough luminous
QSOs. This corresponds to $\Gamma_{\rm min} \sim 2$ at $z=1$ and
$\Gamma_{\rm min} \sim 5$--6 at $z=2$. Taken at face value, then,
these results require that BH were 5--6 times more massive at fixed
galaxy mass at $z=2$ than they are today.
Now let us consider the upper limit on the evolution, or maximum
allowed value of $\Gamma$. Figure~\ref{fig:bhmf} shows the
observational estimate of the BH mass function\footnote{Note that the
\protect\citet{marconi:04} estimate of the BHMF includes the effect
of scatter in the local \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation, and therefore the
comparison shown here is not strictly self-consistent. However, I
will consider scatter self-consistently in the next Section.} at
$z=0$ \citep{marconi:04}, compared with the results from the stellar
mass function at $z=2$ scaled by the same series of values of
$\Gamma$. Clearly, as long as BH cannot decrease in mass or be ejected
from their host galaxies, there is an upper limit of $\Gamma_{\rm
max} \sim 6$ at $z\sim2$. It is interesting that the lower limit
from the QSO LF, discussed above, and this upper limit are so close to
one another.
\begin{figure*}
\begin{center}
\plottwo{bhmf_z=1.ps}{bhmf_z=2.ps}
\end{center}
\caption{\small The mass function of SMBH. The solid green line shows
the observational estimate of the BH mass function at $z=0$ from
\protect\citet{marconi:04}. Dashed (red) lines show the BH MF
implied by the observed galaxy stellar mass function at $z=1$ (left
panel) or $z=2$ (right panel), the relationship between BH mass and
galaxy mass described in the text, and evolution in the zero-point
of the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation of a factor of $\Gamma=1$, 2, 3, 4, 5,
or 6 (curves from left to right, respectively).
\label{fig:bhmf}}
\end{figure*}
\subsection{Constraints with Scatter}
\label{sec:results:scat}
\begin{figure*}
\begin{center}
\includegraphics[width=6.5in]{bhev_z2_g1_s0.3.ps}
\end{center}
\caption{\small Left: BH mass function. Open (purple) dots show the
BHMF implied by the observed $z=2$ GSMF, no evolution in the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation ($\Gamma=1$), and a scatter of $\mbox{$\sigma_{\rm BH}$}=0.3$;
error bars are simple Poisson errors. The solid (green) line shows
the observational estimate of the BH mass function at $z=0$ from
\protect\citet{marconi:04}; the dashed (red) line shows the BH MF
implied by the observed galaxy stellar mass function at $z=2$ plus
the assumed \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation with no evolution ($\Gamma=1$) and
no scatter. To satisfy the constraint, the purple dots
(prediction) should be {\em lower} than the green line
(observations). Right: The QSO luminosity function. Open (purple)
dots show the upper limit on the QSO LF from the same argument, but
including scatter in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation ($\mbox{$\sigma_{\rm BH}$}=0.3$);
error bars are Poisson. Square (green) symbols with error bars show
the estimate of the observed bolometric QSO LF at $z=2$ from HRH07;
the dashed (red) line shows the upper limit on the QSO LF derived
from the observed GSMF at $z=2$ and the arguments described in the
text, for $\Gamma=1$ and $\mbox{$\sigma_{\rm BH}$}=0$. Here, to satisfy the
constraint, the purple dots (prediction) should be {\em higher}
than the green squares (observations). The inclusion of a
moderate amount of scatter has a large impact on the high-mass end
of the BHMF and the high luminosity end of the QSO LF. When
scatter is included, the assumption that the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation
has not evolved since $z\sim2$ appears to be consistent with these
constraints.
\label{fig:bhev_s0.3}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=6.5in]{bhev_z2_g1_s0.5.ps}
\end{center}
\caption{\small The same as Fig.~\protect\ref{fig:bhev_s0.3}, but with
$\mbox{$\sigma_{\rm BH}$}=0.5$.
\label{fig:bhev_s0.5}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=6.5in]{bhev_z2_g2_s0.3.ps}
\end{center}
\caption{\small The same as Fig.~\protect\ref{fig:bhev_s0.3}, but with
$\Gamma=2$ and $\mbox{$\sigma_{\rm BH}$}=0.3$.
\label{fig:bhev_g2}}
\end{figure*}
So far I have neglected intrinsic scatter in the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation. However, as noted by \citet{lauer:07}, because
of the very steep decline of the high-mass end of the galaxy mass or
luminosity function, the highest mass black holes are actually more
likely to be outliers from the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation, hosted by galaxies
of more modest mass, rather than typical BH in the much rarer
high-mass galaxies that would be the sole hosts of such BH in the
absence of scatter.
\citet{novak:06} attempted to constrain the {\em intrinsic} scatter in
the $\mbox{$m_{\rm BH}$}-\sigma$ and $\mbox{$m_{\rm BH}$}-L$ relations at $z=0$, and concluded that
due to the small sample of galaxies with reliable measurements of BH
mass and uncertainties in the observational error estimates on \mbox{$m_{\rm BH}$},
only upper limits on the scatter could be obtained. They estimated
these upper limits to be $\delta_{\sigma} < 0.3$ and $\delta_L < 0.5$,
where $\delta_{\sigma}$ is the 1-$\sigma$ log scatter in the
$\mbox{$m_{\rm BH}$}-\sigma$ relation and $\delta_L$ is the same for the $\mbox{$m_{\rm BH}$}-L$
relation. \citet{marconi:03} find a similar scatter in the
$\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm sph}$}$ relation as in $\mbox{$m_{\rm BH}$}-\sigma$, and \citet{haering:04}
find an {\em observed} scatter of 0.3 dex in the $\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}$
relation, implying that the intrinsic scatter is presumably
smaller. Recently, \citet{gultekin:09} made a detailed study of the
magnitude of the intrinsic scatter in $\mbox{$m_{\rm BH}$}-\sigma$ and $\mbox{$m_{\rm BH}$}-L$,
finding $\delta_{\sigma} = 0.31$ for ellipticals but a larger scatter
of $\delta_{\sigma} = 0.44$ for all galaxies (including spirals). They
furthermore find that the shape of the distribution of the intrinsic
residuals in \mbox{$m_{\rm BH}$}\ at fixed $\sigma$ is consistent with a log-normal
(and inconsistent with a normal distribution).
Unfortunately, almost nothing is known about the possible evolution of
the intrinsic scatter in the BH-galaxy scaling relations. Therefore, I
investigate how the inclusion of representative amounts of scatter
would impact the results presented in
Section~\ref{sec:results:noscat}. In order to do this, I run Monte
Carlo simulations of $\sim 10^6$ galaxies, in which I first select
galaxy masses from the observed stellar mass function at the redshift
of interest (using $z=2$ as a representative case). I then assign BH
masses to each galaxy according to Eqn.~\ref{eqn:mbhev}, adding a
random deviate in mass selected from a log-normal distribution with
root variance $\mbox{$\sigma_{\rm BH}$}$, and consider the implied BH mass function
and upper limit on the QSO LF as before.
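A minimal version of this Monte Carlo (with an assumed Schechter-shaped mass function, mass range, and zero-point standing in for the observed $z=2$ GSMF and the fitted relation) illustrates the effect of the log-normal scatter on the high-mass tail:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_mgal(n, log_m_min=10.0, log_m_max=12.5,
                    log_m_star=11.0, alpha=-1.2):
    """Draw log10 galaxy masses from a toy Schechter mass function by
    rejection sampling (placeholder parameters, not the observed GSMF)."""
    def shape(lm):
        x = 10.0 ** (lm - log_m_star)
        return x ** (alpha + 1) * np.exp(-x)
    shape_max = shape(log_m_min)   # the shape is decreasing over this range
    out = np.empty(0)
    while out.size < n:
        lm = rng.uniform(log_m_min, log_m_max, n)
        out = np.concatenate([out, lm[rng.random(n) * shape_max < shape(lm)]])
    return out[:n]

def assign_log_mbh(log_mgal, gamma=1.0, sigma_bh=0.3, log_a=8.0):
    """m_BH with log-normal scatter of root variance sigma_bh (dex) about
    log(Gamma) + log_a + 1.1*(log m_gal - 11); log_a is an assumed zero-point."""
    mean = np.log10(gamma) + log_a + 1.1 * (log_mgal - 11.0)
    return mean + sigma_bh * rng.standard_normal(log_mgal.size)

log_mgal = sample_log_mgal(200_000)
log_mbh_noscat = assign_log_mbh(log_mgal, sigma_bh=0.0)
log_mbh_scat = assign_log_mbh(log_mgal, sigma_bh=0.3)
# With scatter, far more BH exceed 10^9 M_sun: because of the steep
# high-mass cutoff of the mass function, upscattered moderate-mass
# hosts dominate the massive-BH counts.
n_hi_noscat = int(np.sum(log_mbh_noscat > 9.0))
n_hi_scat = int(np.sum(log_mbh_scat > 9.0))
```

The same mechanism drives the \citet{lauer:07} point quoted below: the most massive BH in the scattered realization are predominantly outliers hosted by galaxies of moderate mass.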
Results of these experiments for various values of $\mbox{$\sigma_{\rm BH}$}$ and
$\Gamma$ (all at $z=2$) are shown in
Figures~\ref{fig:bhev_s0.3}--\ref{fig:bhev_g2}. Note that I cut off
the galaxy stellar mass function below $10^{10} \mbox{${\rm M}_{\odot}$}$ because these
low-mass galaxies do not provide interesting constraints and including
them causes the Monte Carlo simulations to take much longer to run
(for a given desired number of high-mass objects). In
Figure~\ref{fig:bhev_s0.3}, one can see that when a moderate scatter
in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation is included ($\mbox{$\sigma_{\rm BH}$} = 0.3$, similar
to the scatter in the observed relation at the present day), the
number of luminous QSOs can be reproduced under the fairly extreme
assumptions used in the lower limit exercise (all BH radiate at their
Eddington limit at all times). Even a scatter half as large as the
observed present-day estimates ($\mbox{$\sigma_{\rm BH}$} = 0.15$), with no
evolution in the normalization ($\Gamma=1$), marginally satisfies the
lower limit. With a slightly larger scatter ($\mbox{$\sigma_{\rm BH}$} = 0.5$) or
moderate evolution in the zero-point ($\Gamma=2$; see
Figs.~\ref{fig:bhev_s0.5} and \ref{fig:bhev_g2}), bright QSOs are
overproduced by a factor of 10--100, leaving room for more reasonable
assumptions about duty cycle and Eddington ratio.
One can try to sharpen this constraint by adopting more physically
reasonable values for the duty cycles and Eddington ratios of
AGN. \citet{erb:06} and \citet{kriek:07} find that about 20--40\% of
galaxies at $z\sim2$ contain an active nucleus, and models in which
such activity is merger-driven
\citep[e.g.][]{hopkins:07,somerville:08} predict that this fraction is
nearly constant for galaxy masses $10.0 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} \log (m_*/\mbox{${\rm M}_{\odot}$})
\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 12.0$ \citep[see e.g. Figure 19
of][]{hopkins:07}. \citet{vestergaard:04} find that the Eddington
ratios of luminous quasars at $1.5 < z < 3.5$ are in the range $0.1 <
L/L_{\rm Edd} < 1$, with an average value $L/L_{\rm Edd} \sim
0.4$--0.5, while \citet{kollmeier:06} find a fairly sharply peaked
distribution of Eddington ratios with a peak at $L/L_{\rm
Edd}=0.25$. Adopting average values for the fraction of galaxies
containing active BH, $\mbox{$f_{\rm AGN}$}=0.3$, and the Eddington ratio $\mbox{$f_{\rm Edd}$}
\equiv L/L_{\rm Edd} =0.5$ (assuming that $L/L_{\rm Edd}$ is also
constant with galaxy/BH mass)\footnote{Note that if we included a
realistic distribution of Eddington ratios, rather than a single
constant value, this would again broaden the tail of the bright end
of the QSO LF, leading to more very luminous QSOs.} produces quite
good agreement with the observed QSO LF at $z=2$ with {\em no
evolution} in the zero-point or scatter of the \mbox{$m_{\rm gal}$}-\mbox{$m_{\rm BH}$}\ relation
($\Gamma=1$, $\mbox{$\sigma_{\rm BH}$} = 0.3$; see Fig.~\ref{fig:bhev_g1_fedd}).
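With a single, mass-independent duty cycle and Eddington ratio, turning a BH mass function into a QSO LF is just a shift and a rescaling, since $L = \mbox{$f_{\rm Edd}$} L_{\rm Edd}(\mbox{$m_{\rm BH}$})$ is proportional to \mbox{$m_{\rm BH}$}. The sketch below makes the operation explicit (the toy BH mass function is illustrative only):

```python
import numpy as np

def qso_lf_from_bhmf(log_mbh, phi_bh, f_agn=0.3, f_edd=0.5,
                     l_edd_per_msun=3.2e4):
    """Map a BH mass function dn/dlog10(m_BH) into a QSO LF assuming a
    mass-independent duty cycle f_agn and Eddington ratio f_edd.
    Since L = f_edd * L_Edd(m_BH) is proportional to m_BH, log L simply
    tracks log m_BH, and the number density is rescaled by f_agn."""
    log_L = log_mbh + np.log10(f_edd * l_edd_per_msun)
    return log_L, f_agn * phi_bh

log_mbh = np.linspace(7.0, 10.0, 7)
phi_bh = 10.0 ** (-2.0 - 0.8 * (log_mbh - 7.0))   # toy BHMF, Mpc^-3 dex^-1
log_L, phi_L = qso_lf_from_bhmf(log_mbh, phi_bh)
```

A realistic distribution of Eddington ratios, as noted in the footnote, would instead convolve the mass function with that distribution and further broaden the bright end.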
\begin{figure}
\begin{center}
\plotone{bhev_z2_g1_s0.3_fedd.ps}
\end{center}
\caption{\small The QSO LF at $z=2$ as shown in
Fig.~\protect\ref{fig:bhev_s0.3}, with $\Gamma=1$ and
$\mbox{$\sigma_{\rm BH}$}=0.3$, but with a QSO duty cycle of $\mbox{$f_{\rm AGN}$}=0.3$ and an
Eddington ratio of $\mbox{$f_{\rm Edd}$}=0.5$.
\label{fig:bhev_g1_fedd}}
\end{figure}
Turning the argument around, then, if the independent observational
estimates of duty cycle and Eddington ratio are correct, and if the
scatter in the \mbox{$m_{\rm gal}$}-\mbox{$m_{\rm BH}$}\ relation was not significantly smaller at
high redshift than it is today, then overall evolution in the
\mbox{$m_{\rm gal}$}-\mbox{$m_{\rm BH}$}\ relation of $\Gamma \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 2$ since $z\sim2$ is
disfavored as it would {\em overproduce} the number of luminous QSOs
(see Fig.~\ref{fig:bhev_g2_fedd}). In particular, if the value of
$\Gamma$ at $z\sim2$ were as large as suggested by the observations of
e.g. \citet[][]{peng:06}, $\Gamma \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 4$, the number of luminous
QSOs would be overproduced by more than one order of magnitude. Of
course, one could reconcile these larger amounts of evolution if the
duty cycle of luminous quasars is an order of magnitude smaller than
what I have assumed ($\sim 2$--3 percent instead of $20$--30
percent).
\section{Conclusions}
\label{sec:conclusions}
I have investigated whether observational estimates of the stellar
mass function of galaxies, combined with observed QSO luminosity
functions, can provide useful limits on the relationship between
galaxies and their SMBH at high redshift. I assumed a simple
relationship between galaxy mass and SMBH mass, as observed in dormant
galaxies in the nearby Universe, and a simple form for the possible
evolution of this relationship (see Eqn.~\ref{eqn:mbhev}), namely a
shift in the zero-point of the relation by a redshift-dependent factor
$\Gamma(z)$. I then argued that one can obtain a {\em lower} limit on
$\Gamma(z)$ by making the rather extreme assumption that all BH
radiate at their Eddington limit at all times, and requiring that at
least the observed number of luminous QSOs be reproduced. I further
argued that an {\em upper} limit on $\Gamma(z)$ could be obtained by
requiring that the number of massive BH in galaxies today should not
be exceeded at high redshift.
Assuming that there is a deterministic relationship between galaxy
mass and BH mass (i.e., no scatter in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship), I
find that in order to produce enough luminous QSOs, the zero-point of
the relation must have been higher by at least a factor of $\sim 2$ at
$z=1$ and a factor of 5--6 at $z=2$. At the same time, in order to
avoid producing a larger number density of massive BH than what is
implied by observations at $z\sim0$, the upper limit on the evolution
of the normalization of the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship at $z=2$ is about
a factor of six. Since both the lower and upper limits are fairly
liberal, one might have expected them to lie several orders of
magnitude apart, and therefore not to provide very interesting
constraints on the actual evolution of the
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship. It seems potentially quite interesting that
these limits lie nearly on top of one another.
\begin{figure*}
\begin{center}
\plottwo{bhev_z2_g2_s0.3_fedd.ps}{bhev_z2_g4_s0.3_fedd.ps}
\end{center}
\caption{\small The QSO LF at $z=2$ as shown in
Fig.~\protect\ref{fig:bhev_s0.3}, with intrinsic scatter
$\mbox{$\sigma_{\rm BH}$}=0.3$, a QSO duty cycle of $\mbox{$f_{\rm AGN}$}=0.3$ and an Eddington
ratio of $\mbox{$f_{\rm Edd}$}=0.5$. Left panel: $\Gamma=2$; Right panel:
$\Gamma=4$. Assuming that the duty cycles and Eddington ratios
derived from independent observations are correct, and that the
intrinsic scatter in the \mbox{$m_{\rm gal}$}-\mbox{$m_{\rm BH}$}\ relation was at least as large
at $z=2$ as it is today, large amounts of evolution in the
zero-point ($\Gamma \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 2$) are disfavored.
\label{fig:bhev_g2_fedd}}
\end{figure*}
However, relaxing the assumption of a perfectly deterministic
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship has a major impact on the results. When
scatter is included in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relation at a level similar to
the intrinsic scatter in the observed relation at $z=0$, I find that
the majority of very massive BH are objects that live in galaxies of
moderate mass but are outliers in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship. This
is of course due to the very steep slope of the galaxy stellar mass
function at large masses. Because the constraints above arose from the
most luminous QSOs, I then find that there is a strong degeneracy
between the evolution of the zero-point $\Gamma(z)$ and the scatter
$\mbox{$\sigma_{\rm BH}$}$. For example, the QSO constraint at $z=2$ can be reproduced
even in a scenario in which $\Gamma=1$ (no evolution in the zero-point
has occurred) and $\mbox{$\sigma_{\rm BH}$}=0.3$ (the intrinsic scatter in
\mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ is similar to that in the observed relation today). Thus
we are left with the very weak constraint that BH probably were no
{\em smaller} at high redshift relative to their host galaxies (unless
the scatter was much larger than it is today).
I tried to sharpen this constraint by adopting more physically
reasonable values for the duty cycles and Eddington ratios of AGN,
based on independent observational constraints. Adopting
mass-independent values of $\mbox{$f_{\rm AGN}$}=0.3$ (the fraction of galaxies
hosting AGN) and $\mbox{$f_{\rm Edd}$} \equiv L/L_{\rm Edd} \sim 0.5$, and assuming a
scatter in \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ similar to that in the observed relation for
dormant galaxies today ($\mbox{$\sigma_{\rm BH}$}=0.3$), I find that BH cannot have
been much more than a factor of $\sim 2$ more massive relative to
their host galaxies at $z\sim 2$ than they are today. In particular,
values as large as $\Gamma(z=2) \sim 4$, as suggested by some
observational studies \citep[e.g.][]{peng:06}, would overproduce the
number of luminous QSOs by more than an order of magnitude.
Interestingly, \citet{hopkins:06} also reached similar conclusions
based on a somewhat different, though related argument. They pointed
out that in order to avoid overproducing the {\em total} mass density
in SMBH relative to the present day value, the average value of
\mbox{$m_{\rm BH}$}/\mbox{$m_{\rm gal}$}\ must not have been more than about a factor of two larger
at $z\sim2$ than today's value.
I have based these results on the relationship between the total
stellar mass of the galaxy and the mass of the SMBH; however, there is
strong evidence that the more fundamental relationship is actually
between the BH mass and the mass of the {\em spheroidal} component of
the galaxy \citep[e.g.][]{kormendy_review:95}. I made this choice
because the stellar mass function of galactic spheroids is very poorly
constrained at high redshift. However, at low redshift, the most
massive galaxies are predominantly spheroid-dominated
\citep[e.g.][]{bell:03}. If this was also the case at high redshift,
then my conclusions would not change much, as the constraints are driven
by the most massive BH, which are hosted by massive galaxies. If there
is a significant population of disk-dominated massive galaxies at high
redshift, and the BH mass indeed correlates with spheroid mass only,
then this would leave more room for evolution and/or scatter in the
\mbox{$m_{\rm sph}$}-\mbox{$m_{\rm BH}$}\ relation.
Another source of uncertainty arises from the fact that BH masses
predicted from the \mbox{$m_{\rm BH}$}\ vs. luminosity (\mbox{$m_{\rm BH}$}-$L$) relationship are
inconsistent with those predicted from the \mbox{$m_{\rm BH}$}\ vs. velocity
dispersion (\mbox{$m_{\rm BH}$}-$\sigma$) relationship for the most luminous galaxies
\citep{lauer:07a}. The \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship that I have chosen to
use here is derived from the \mbox{$m_{\rm BH}$}-$L$ relation, which
\citet{lauer:07a} argue should be more reliable in the regime of
interest, but the situation at high redshift is unknown. Currently,
there are no published observational measurements of the galaxy
velocity dispersion function at high redshift (of which I am aware);
however, these may become available in the future. It would then be
very interesting to repeat this kind of analysis using \mbox{$m_{\rm BH}$}-$\sigma$
instead.
Although it is disappointing that the proposed approach did not yield
stronger constraints on the evolution of the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship,
this exercise has brought out a few important lessons. First, in order
to understand the relationship between galaxies and their BH, it is
perhaps as important to understand the magnitude and evolution of the
{\em intrinsic scatter} in this relationship as it is to understand
the evolution of the zero-point of the relation itself. Second, new
generations of theoretical models that attempt to simultaneously treat
the formation and evolution of galaxies and their black holes
\citep[e.g.][]{croton:06,bower:06,fontanot:06,somerville:08} must take
care to properly model the dispersion in the \mbox{$m_{\rm BH}$}-\mbox{$m_{\rm gal}$}\ relationship.
\section*{Acknowledgments}
\begin{small}
I thank Sandra Faber, Cole Miller, Brant Robertson, Tod Lauer, and
Chien Peng for useful comments on this work, and Phil Hopkins, Lars
Hernquist, T.J. Cox, Yuexing Li, and Brant Robertson for stimulating
discussions. I also thank the anonymous referee for comments that
improved the paper.
\end{small}
\bibliographystyle{mn}
\section{Introduction}
\setcounter{equation}{0}
In this paper, we study the dynamical behavior of the
non-autonomous Reaction-Diffusion equation defined on $I \kern -.3em R^n$:
\begin{equation}
\label{intr1}
\frac {\partial u}{\partial t}
- \Delta u + \lambda u = f(x, u) + g(x,t),
\end{equation}
where $\lambda$ is a positive constant,
$g$ is a given function in $L^2_{loc} (I \kern -.3em R, L^2(I \kern -.3em R^n))$, and $f$ is a
nonlinear
function satisfying a dissipative condition.
Global attractors for non-autonomous dynamical systems
have been extensively studied in the literature, see, e.g., \cite{ant,
aul1, car1, car2, car3, car4,
cheb1, cheb2, cheb3, che, har,
lan1, lan2, lu, moi, pri2,
sun1, sun2, wan2, wangy}.
In particular, when PDEs
are defined in bounded domains, such attractors
have been investigated in
\cite{car1, car4, cheb1, cheb2, che, har, lu, sun1, sun2}.
In the case of unbounded domains, global attractors
for non-autonomous PDEs
have been examined in \cite{ant,moi, pri2}
for almost periodic external terms,
and in
\cite{car2, car3, wan2, wangy} for unbounded external terms.
In this paper, we will prove the existence
of a pullback attractor for equation
\eqref{intr1}
defined on $I \kern -.3em R^n$ with unbounded external terms.
Notice that the domain $I \kern -.3em R^n$ for \eqref{intr1} is unbounded,
and hence Sobolev
embeddings are no longer compact in this case.
This introduces a major obstacle
for examining the asymptotic compactness of solutions.
For some PDEs, this difficulty can be overcome
by the energy equation approach, which was introduced by Ball in
\cite{bal1, bal2} (see also
\cite{ car2, car3, gou1, ju1, luk, moi, moi2, ros1, wan5, wanx}).
In this paper, we will use the
uniform estimates on the tails of solutions to
circumvent the difficulty caused by the unboundedness of the domain.
This idea was developed in \cite{wan}
to prove asymptotic compactness of solutions for autonomous
parabolic equations on $I \kern -.3em R^n$,
and later extended to non-autonomous equations
in \cite{ant, pri2, wan2, wangy} and stochastic equations in \cite{bat3, wan3, wan4}.
Here, we will use the method of tail-estimates
to investigate the asymptotic behavior
of equation \eqref{intr1} with nonlinearity of arbitrary growth rate.
We first establish the pullback asymptotic
compactness of solutions of equation \eqref{intr1} and prove the existence
of a pullback global attractor in $L^2(I \kern -.3em R^n)$.
Then we extend this result and show
the existence of a pullback global attractor
in $H^1(I \kern -.3em R^n)$.
It is worth noticing that attractors
for the non-autonomous Reaction-Diffusion equation defined
on $I \kern -.3em R^n$ with unbounded external terms were
also studied in
\cite{wangy}, where the authors proved the existence of
a pullback attractor when the nonlinearity $f$ satisfies a Sobolev growth
rate. In the present paper, we deal
with the case where the growth order of $f$
is arbitrary.
The asymptotic compactness of solutions in \cite{wangy}
was obtained by using the energy equation approach; here, by contrast, we
derive such compactness directly from the
uniform tail-estimates of solutions. As we will see later,
the existence of an attractor in $H^1(I \kern -.3em R^n)$ is an immediate
consequence of the existence of an attractor
in $L^2(I \kern -.3em R^n)$ and the asymptotic compactness of solutions
in $H^1(I \kern -.3em R^n)$.
The paper is organized as follows. In the next section, we
recall fundamental concepts and results
for pullback
attractors for non-autonomous dynamical systems.
In Section 3, we define a cocycle for the non-autonomous
Reaction-Diffusion
equation on $I \kern -.3em R^n$.
Section 4 is devoted to deriving uniform estimates of solutions
for large space and time
variables. In the last section, we
prove the existence of a pullback
global attractor for the equation in $L^2(I \kern -.3em R^n)$ and
$H^1(I \kern -.3em R^n)$.
The following notations will be used throughout the paper.
We denote by
$\| \cdot \|$ and $(\cdot, \cdot)$ the norm and inner product
in $L^2(I \kern -.3em R^n)$ and use $\| \cdot\|_{p}$ to denote the norm in
$L^{p}(I \kern -.3em R^n)$. Otherwise, the
norm of a general Banach space $X$ is written as $\|\cdot\|_{X}$.
The letters $C$ and $C_i$ ($i=1, 2, \ldots$)
are generic positive constants which may change their values from line to
line or even in the same line.
\section{Preliminaries}
\setcounter{equation}{0}
In this section, we recall some basic concepts
related to pullback attractors for non-autonomous dynamical
systems. It is worth noticing that these concepts
are quite similar to that of random attractor for stochastic
systems. We refer the reader to \cite{arn1, bat1, bat3, car22, car2, car3, cheb1, chu, fla1, sun1, wan3}
for more details.
Let $\Omega$ be a nonempty set and $X$ a metric space with distance
$d(\cdot, \cdot)$.
\begin{defn}
A family of mappings $\{\theta_t\}_{t\in I \kern -.3em R}$
from $\Omega$ to itself is called a family of shift operators on $\Omega$ if
$\{\theta_t\}_{t\in I \kern -.3em R}$ satisfies the group properties:
(i) \ $\theta_0 \omega =\omega, \quad \forall \ \omega \in \Omega;$
(ii)\ $ \theta_t (\theta_\tau \omega) = \theta_{t+\tau} \omega, \quad
\forall \ \omega \in \Omega \quad \mbox{and} \ \ t, \ \tau \in I \kern -.3em R.$
\end{defn}
\begin{defn}
Let $\{\theta_t\}_{t\in I \kern -.3em R}$
be a family of shift operators on $\Omega$. Then a continuous $\theta$-cocycle
$\phi$ on $X$
is a mapping
$$
\phi: I \kern -.3em R^+ \times \Omega \times X \to X, \quad (t, \omega, x) \mapsto \phi(t, \omega, x),
$$
which satisfies, for all $\omega \in \Omega$ and
$t, \tau \in I \kern -.3em R^+$,
(i) \ $\phi(0, \omega, \cdot) $ is the identity on $X$;
(ii) \ $\phi(t+\tau, \omega, \cdot) = \phi(t, \theta_\tau \omega, \cdot) \circ \phi(\tau, \omega, \cdot)$;
(iii) \ $\phi(t, \omega, \cdot): X \to X$ is continuous.
\end{defn}
Hereafter, we always assume that
$\phi$ is a continuous $\theta$-cocycle on $X$, and $\mathcal{D}$ a collection of families of subsets of $X$:
$$
{\mathcal{D}} = \{ D =\{D(\omega)\}_{\omega \in \Omega}: \ D(\omega) \subseteq X
\ \mbox{for every} \ \omega \in \Omega \}.
$$
Such a collection ${\mathcal{D}}$ is often referred to as a universe in the literature.
\begin{defn}
Let $\mathcal{D}$ be a collection of families of subsets of $X$.
Then $\mathcal{D}$ is called inclusion-closed if
$D=\{D(\omega)\}_{\omega \in \Omega} \in {\mathcal{D}}$
and $\tilde{D}=\{\tilde{D}(\omega) \subseteq X: \omega \in \Omega\} $
with
$\tilde{D}(\omega) \subseteq D(\omega)$ for all $\omega \in \Omega$ imply
that $\tilde{D} \in {\mathcal{D}}$.
\end{defn}
\begin{defn}
Let $\mathcal{D}$ be a collection of families of subsets of $X$ and
$\{K(\omega)\}_{\omega \in \Omega} \in \mathcal{D}$. Then
$\{K(\omega)\}_{\omega \in \Omega} $ is called a pullback
absorbing
set for $\phi$ in $\mathcal{D}$ if for every $B \in \mathcal{D}$
and $\omega \in \Omega$, there exists $t(\omega, B)>0$ such
that
$$
\phi(t, \theta_{-t} \omega, B(\theta_{-t} \omega)) \subseteq K(\omega)
\quad \mbox{for all} \ t \ge t(\omega, B).
$$
\end{defn}
\begin{defn}
Let $\mathcal{D}$ be a collection of families of subsets of $X$.
Then
$\phi$ is said to be $\mathcal{D}$-pullback asymptotically
compact in $X$ if for every $\omega \in \Omega$,
$\{\phi(t_n, \theta_{-t_n} \omega,
x_n)\}_{n=1}^\infty$ has a convergent subsequence in $X$
whenever
$t_n \to \infty$, and $ x_n\in B(\theta_{-t_n}\omega)$ with
$\{B(\omega)\}_{\omega \in \Omega} \in \mathcal{D}$.
\end{defn}
\begin{defn}
Let $\mathcal{D}$ be a collection of families of subsets of $X$
and
$\{\mathcal{A}(\omega)\}_{\omega \in \Omega} \in {\mathcal{D}}$.
Then $\{\mathcal{A}(\omega)\}_{\omega \in \Omega}$
is called a $\mathcal{D}$-pullback global attractor for
$\phi$
if the following conditions are satisfied, for every $\omega \in \Omega$,
(i) \ $\mathcal{A}(\omega)$ is compact;
(ii) \ $\{\mathcal{A}(\omega)\}_{\omega \in \Omega}$ is invariant, that is,
$$ \phi(t, \omega, \mathcal{A}(\omega) )
= \mathcal{A}(\theta_t \omega), \ \ \forall \ t \ge 0;
$$
(iii) \ \ $\{\mathcal{A}(\omega)\}_{\omega \in \Omega}$
attracts every set in $\mathcal{D}$, that is, for every
$B = \{B(\omega)\}_{\omega \in \Omega} \in \mathcal{D}$,
$$ \lim_{t \to \infty} d (\phi(t, \theta_{-t}\omega, B(\theta_{-t}\omega)), \mathcal{A}(\omega))=0,
$$
where $d$ is the Hausdorff semi-metric given by
$d(Y,Z) =
\sup_{y \in Y }
\inf_{z\in Z} \| y-z\|_{X}
$ for any $Y\subseteq X$ and $Z \subseteq X$.
\end{defn}
The following existence result of a pullback global attractor
for a continuous cocycle
can be found in \cite{arn1, bat1, bat3, car22, car2, car3, cheb1, chu, fla1}.
\begin{prop}
\label{att} Let $\mathcal{D}$ be an inclusion-closed collection of families of subsets of
$X$ and $\phi$ a continuous $\theta$-cocycle on $X$.
Suppose that $\{K(\omega)\}_{\omega
\in \Omega} \in {\mathcal{D}} $ is a closed pullback absorbing set for $\phi$ in
$\mathcal{D}$ and $\phi$ is $\mathcal{D}$-pullback asymptotically
compact in $X$. Then $\phi$ has a unique $\mathcal{D}$-pullback global
attractor $\{\mathcal{A}(\omega)\}_{\omega \in \Omega} \in {\mathcal{D}}$ which is
given by
$$\mathcal{A}(\omega) = \bigcap_{\tau \ge 0} \ \overline{ \bigcup_{t \ge \tau} \phi(t, \theta_{-t} \omega, K(\theta_{-t} \omega)) }.
$$
\end{prop}
\section{Cocycle associated with the Reaction-Diffusion equation}
\setcounter{equation}{0}
In this section, we construct a $\theta$-cocycle $\phi$
for the
non-autonomous Reaction-Diffusion equation
defined on $I \kern -.3em R^n$.
For every $\tau \in I \kern -.3em R$ and $t > \tau$, consider the problem:
\begin{equation}
\label{rd1}
\frac{\partial u}{\partial t} - \Delta u + \lambda u = f(x,u) + g(x, t), \quad x \in
I \kern -.3em R^n ,
\end{equation}
with the initial condition
\begin{equation}
\label{rd2}
u(x,\tau) = u_{\tau}(x), \hspace{3 mm} x \in I \kern -.3em R^n,
\end{equation}
where $\lambda$ is a positive constant, $g $ is
given in $ L^2_{loc}(I \kern -.3em R, L^2( I \kern -.3em R^n) )$,
and $f$ is a nonlinear function
satisfying, for every $ x \in I \kern -.3em R^n$ and $ s \in I \kern -.3em R$,
\begin{equation}
\label{f1}
f(x,s)s \le - \alpha_{1} | s |^p + \phi_{1}(x) \quad \mbox{for some } \
p \ge 2,
\end{equation}
\begin{equation}
\label{f2}
| f(x,s) | \le \alpha_{2} | s |^{p-1} + \phi_{2}(x),
\end{equation}
\begin{equation}
\label{f3}
\frac{\partial f}{\partial s}(x,s) \le \alpha_3 ,
\end{equation}
where $\alpha_1, \ \alpha_2$ and $ \alpha_3 $ are
all positive constants, $\phi_1 \in L^1(I \kern -.3em R^n)$,
and $\phi_2 \in L^2(I \kern -.3em R^n) \cap L^q (I \kern -.3em R^n)$
with $ \frac{1}{p} + \frac{1}{q} = 1$.
Set
$ F(x,s)= \int_{0}^{s} f(x, r) \, d r $. Then we
assume that $F$ satisfies
\begin{equation}
\label{F1}
-\phi_4(x) - \alpha_4 | s |^p \le F(x,s) \le -\alpha_5 | s |^p +\phi_3(x),
\end{equation}
where $\alpha_4$ and $\alpha_5$ are positive constants
and $\phi_3, \phi_4 \in L^1(I \kern -.3em R^n)$.
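As a simple illustration (this particular choice plays no role in the sequel),
the nonlinearity $f(x,s) = - |s|^{p-2} s$ satisfies
conditions \eqref{f1}-\eqref{F1} with
$\alpha_1 = \alpha_2 =1$, any $\alpha_3>0$,
$\alpha_4 = \alpha_5 = {\frac 1p}$ and
$\phi_1 = \phi_2 = \phi_3 = \phi_4 =0$, since in this case
$f(x,s)s = -|s|^p$,
$ {\frac {\partial f}{\partial s}} (x,s) = -(p-1) |s|^{p-2} \le 0$
and $F(x,s) = - {\frac 1p} |s|^p$.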
As in the case of bounded domains (see, e.g., \cite{tem1}), it can be proved that
if $g \in L^2_{loc} (I \kern -.3em R, L^2 (I \kern -.3em R^n))$
and
\eqref{f1}-\eqref{F1}
hold true, then problem \eqref{rd1}-\eqref{rd2} is well-posed
in $L^2 (\R^n) $, that is, for every $\tau \in I \kern -.3em R$
and $ u_\tau \in L^2 (\R^n)
$, there exists a unique solution $ u \in C( [\tau, \infty), L^2 (\R^n)
) \bigcap L^2(\tau, \tau +T; H^1(I \kern -.3em R^n))
\bigcap L^p(\tau, \tau +T; L^p(I \kern -.3em R^n))$
for every $T>0$. Further, the solution is continuous with respect
to $u_\tau $ in $ L^2 (\R^n) $.
To construct a cocycle $\phi$ for problem \eqref{rd1}-\eqref{rd2}, we
set
$\Omega =I \kern -.3em R$ and define a shift operator $\theta_t$
on $\Omega$
for every
$t \in I \kern -.3em R$
by
$$
\theta_t (\tau ) = t+ \tau, \quad \mbox{for all} \ \ \tau \in I \kern -.3em R.
$$
Let $\phi$ be a mapping from $I \kern -.3em R^+ \times \Omega \times L^2 (\R^n) $
to $ L^2 (\R^n)$ given by
$$
\phi(t, \tau, u_\tau ) =
u(t+\tau, \tau, u_\tau),
$$
where $ t \ge 0$, $ \tau \in I \kern -.3em R $,
$ u_\tau \in L^2 (\R^n) $, and
$ u $ is the solution of problem \eqref{rd1}-\eqref{rd2}.
By the uniqueness of solutions, we find that
for every $t, s \ge 0$, $\tau \in I \kern -.3em R$ and
$ u_\tau \in L^2 (\R^n) $,
$$
\phi (t+s, \tau, u_\tau ) =
\phi (t, s+ \tau, \phi(s, \tau, u_\tau ) ).
$$
Then it follows that $\phi$ is a continuous
$\theta$-cocycle on $L^2 (\R^n) $.
The purpose of this paper is to study
the existence of pullback attractors
for $\phi$ in an appropriate phase space.
For a subset $E$ of $ L^2 (\R^n) $, write
$$ \| E \| = \sup\limits_{x \in E}
\| x\|_{L^2 (\R^n) }.
$$
Suppose
$D =\{ D(t) \}_{t\in I \kern -.3em R}$ is a family of
subsets of $L^2 (\R^n) $ satisfying
\begin{equation}
\label{basin_cond}
\lim_{t \to - \infty} e^{ \lambda t} \| D( t) \|^2 =0,
\end{equation}
where $\lambda$ is the positive constant
appearing in \eqref{rd1}.
Hereafter, we use ${\mathcal{D}}_\lambda$
to denote
the collection of all
families of subsets of $L^2(I \kern -.3em R^n)$ satisfying \eqref{basin_cond},
that is,
\begin{equation}
\label{D_lambda}
{{\mathcal{D}}_\lambda = \{ D =\{ D(t) \}_{t\in I \kern -.3em R}:
D \ \mbox{satisfies} \ \eqref{basin_cond} \} }.
\end{equation}
Throughout this paper, we assume the following conditions for the external term:
\begin{equation}
\label{gcond}
\int_{-\infty}^\tau e^{\lambda \xi} \| g(\xi)\|^2 d \xi
< \infty, \quad \forall \ \tau \in I \kern -.3em R,
\end{equation}
and
\begin{equation}
\label{ginfinity}
\limsup_{k \to \infty} \int_{-\infty}^\tau \int_{|x| \ge k} e^{\lambda \xi}
|g(x, \xi) |^2 dx d\xi =0,
\quad \forall \ \tau \in I \kern -.3em R.
\end{equation}
We remark that
condition \eqref{gcond} is useful for proving the existence of absorbing
sets for problem \eqref{rd1}-\eqref{rd2}, while
the asymptotically null condition
\eqref{ginfinity} is crucial for establishing the asymptotic compactness
of solutions.
Notice
that conditions \eqref{gcond} and \eqref{ginfinity}
do not require that $g$ be bounded in $L^2(I \kern -.3em R^n)$
as $t \to \pm \infty$. In particular,
these assumptions impose no restriction
on $g$ as $t \to +\infty$.
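For instance (a sketch of an admissible forcing, not used elsewhere in the
paper), if $g(x,t) = e^{\delta t} b(x)$ with $b \in L^2(I \kern -.3em R^n)$
and $\delta > - {\frac 12} \lambda$, then
$$
\int_{-\infty}^\tau e^{\lambda \xi} \| g(\xi)\|^2 d \xi
= {\frac { \| b \|^2 }{\lambda + 2 \delta}} \, e^{(\lambda + 2\delta) \tau}
< \infty,
$$
and
$$
\int_{-\infty}^\tau \int_{|x| \ge k} e^{\lambda \xi} |g(x, \xi) |^2 dx d\xi
= {\frac { e^{(\lambda + 2\delta) \tau} }{\lambda + 2 \delta}}
\int_{|x| \ge k} |b(x)|^2 dx \to 0
\quad \mbox{as} \ k \to \infty,
$$
so both \eqref{gcond} and \eqref{ginfinity} hold; for $\delta > 0$ such a
$g$ is indeed unbounded in $L^2(I \kern -.3em R^n)$ as $t \to +\infty$.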
It follows from \eqref{ginfinity} that
for every $\tau \in I \kern -.3em R$ and $\eta>0$, there is $K=K(\tau, \eta)>0$
such that
\begin{equation}
\label{ginfinity2}
\int_{-\infty}^\tau \int_{|x| \ge K}
e^{\lambda \xi} |g(x, \xi) |^2 dx d\xi \le \eta e^{\lambda \tau} .
\end{equation}
As we will see later, inequality \eqref{ginfinity2} is
crucial for deriving uniform estimates on the tails of solutions
and these estimates are necessary for
proving the asymptotic compactness of solutions.
\section{Uniform estimates of solutions }
\setcounter{equation}{0}
In this section, we
derive uniform estimates of solutions of problem \eqref{rd1}-\eqref{rd2}
defined on $I \kern -.3em R^n$
when $t \to \infty$.
We start with the estimates in $L^2 (\R^n) $.
\begin{lem}
\label{lem41}
Suppose \eqref{f1} and \eqref{gcond} hold.
Then for every $\tau \in I \kern -.3em R$ and $D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T=T(\tau, D)>0$ such that for all $t \ge T$,
$$
\| u(\tau, \tau -t, u_0(\tau -t) ) \|^2 \le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi ,
$$
$$
\int_{\tau -t}^{\tau} e^{\lambda \xi } \| u (\xi, \tau -t, u_0(\tau -t) ) \|^p_p
d\xi
\le Me^{\lambda \tau} + M
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
and
$$
\int_{\tau -t}^{\tau} e^{\lambda \xi } \|
u (\xi, \tau -t, u_0(\tau -t) ) \|^2_{H^1}
d\xi
\le Me^{\lambda \tau} + M
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
where
$ u_0(\tau -t) \in D(\tau -t)$, and
$M$ is a positive constant independent of
$\tau$ and $D$.
\end{lem}
\begin{proof}
Taking the inner product of \eqref{rd1}
with $u$ in $L^2(I \kern -.3em R^n)$ we get that
\begin{equation}
\label{p41_1}
\frac{1}{2} \frac{d}{dt} \| u \|^2
+ \| \nabla u \|^2 + \lambda \| u \|^2 = \int_{I \kern -.3em R^n} f(x, u) u dx + (g,u).
\end{equation}
For the nonlinear term, by \eqref{f1} we have
\begin{equation}
\label{p41_2}
\int_{I \kern -.3em R^n} f(x, u) u dx
\le - \alpha_1 \int_{I \kern -.3em R^n} |u|^p dx + \int_{I \kern -.3em R^n} \phi_1 dx.
\end{equation}
By the Young inequality, the last term on the right-hand side
of \eqref{p41_1} is bounded by
\begin{equation}
\label{p41_3}
|(g, u) | \le \| g \| \ \| u \|
\le {\frac 14} \lambda \| u \|^2 + {\frac 1{ \lambda}} \| g \|^2.
\end{equation}
It follows from \eqref{p41_1}-\eqref{p41_3} that
\begin{equation}
\label{p41_4}
{\frac d{dt}} \| u \|^2 + 2 \| \nabla u \|^2 + \lambda \| u \|^2
+ {\frac 12} \lambda \| u \|^2
+ 2 \alpha_1 \int_{I \kern -.3em R^n} |u|^p dx \le C + {\frac 2\lambda} \| g \|^2.
\end{equation}
Multiplying \eqref{p41_4} by $e^{\lambda t}$ and then integrating
the resulting inequality on $( \tau -t, \tau)$ with $t \ge 0$, we find that
$$
\| u (\tau, \tau -t, u_0(\tau -t) ) \|^2
+ 2 e^{-\lambda \tau} \int_{\tau -t}^\tau e^{\lambda \xi} \| \nabla u (\xi, \tau -t, u_0(\tau -t )) \|^2 d\xi
$$
$$
+ {\frac 12} \lambda e^{-\lambda \tau}
\int^\tau _{\tau -t} e^{\lambda \xi} \| u(\xi, \tau -t, u_0(\tau -t ) )\|^2 d\xi
+ 2 \alpha_1 e^{-\lambda \tau}
\int^\tau _{\tau -t} e^{\lambda \xi} \| u(\xi, \tau -t, u_0(\tau -t ) )\|^p_p d\xi
$$
$$
\le e^{-\lambda \tau} e^{\lambda (\tau -t )} \| u_0 (\tau -t ) \|^2
+ {\frac 2\lambda} e^{-\lambda \tau}
\int^\tau_{\tau -t} e^{\lambda \xi} \| g(\xi ) \|^2 d\xi
+ {\frac C\lambda}
$$
\begin{equation}
\label{p41_5}
\le e^{-\lambda \tau} e^{\lambda (\tau -t )} \| u_0 (\tau -t ) \|^2
+ {\frac 2\lambda} e^{-\lambda \tau}
\int^\tau_{-\infty} e^{\lambda \xi} \| g(\xi ) \|^2 d\xi
+ {\frac C\lambda}.
\end{equation}
Notice that $u_0(\tau -t) \in D(\tau -t)$
and $D= \{D(t)\}_{t \in I \kern -.3em R} \in {\mathcal{D}}_\lambda$. We find that
for every $\tau \in I \kern -.3em R$, there exists $T=T(\tau, D)$ such that
for all $t \ge T$,
\begin{equation}
\label{p41_6}
e^{ \lambda (\tau-t) } \| u_0(\tau -t ) \|^2
\le
{\frac {1}{ \lambda}} \int_{-\infty}^\tau
e^{\lambda \xi} \| g(\xi )\|^2 d \xi .
\end{equation}
By \eqref{p41_5}-\eqref{p41_6} we get that,
for all $t \ge T$,
$$
\| u (\tau, \tau -t, u_0(\tau -t) ) \|^2
+ 2 e^{-\lambda \tau} \int_{\tau -t}^\tau e^{\lambda \xi} \| \nabla u (\xi, \tau -t, u_0(\tau -t )) \|^2 d\xi
$$
$$
+ {\frac 12} \lambda e^{-\lambda \tau}
\int^\tau _{\tau -t} e^{\lambda \xi} \| u(\xi, \tau -t, u_0(\tau -t ) )\|^2 d\xi
+ 2 \alpha_1 e^{-\lambda \tau}
\int^\tau _{\tau -t} e^{\lambda \xi} \| u(\xi, \tau -t, u_0(\tau -t ) )\|^p_p d\xi
$$
$$
\le
{\frac 3\lambda} e^{-\lambda \tau}
\int^\tau_{-\infty} e^{\lambda \xi} \| g(\xi ) \|^2 d\xi
+ {\frac C\lambda},
$$
which completes the proof.
\end{proof}
The following lemma is useful for deriving
uniform estimates of solutions in $H^1(I \kern -.3em R^n)$.
\begin{lem}
\label{lem42}
Suppose \eqref{f1} and \eqref{gcond} hold.
Then for every $\tau \in I \kern -.3em R$ and $D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T=T(\tau, D)>2$ such that for all $t \ge T$,
$$
\int_{\tau -2}^{\tau} e^{\lambda \xi } \|
u (\xi, \tau -t, u_0(\tau -t) ) \|^2
d\xi
\le Me^{\lambda \tau} + M
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
$$
\int_{\tau -2}^{\tau} e^{\lambda \xi } \| \nabla u (\xi, \tau -t, u_0(\tau -t) ) \|^2
d\xi
\le Me^{\lambda \tau} + M
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
and
$$
\int_{\tau -2}^{\tau} e^{\lambda \xi } \| u (\xi, \tau -t, u_0(\tau -t) ) \|^p_p
d\xi
\le Me^{\lambda \tau} + M
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
where $ u_0(\tau -t) \in D(\tau -t)$, and
$M$ is a positive constant independent of
$\tau$ and $D$.
\end{lem}
\begin{proof}
By \eqref{p41_4} we find that
$$
{\frac d{dt}} \| u \|^2
+ \lambda \| u \|^2
\le C + {\frac 2\lambda} \| g \|^2.
$$
Let $s \in [\tau -2, \tau]$ and $t\ge 2$.
Multiplying the above by $e^{\lambda t}$ and integrating
over $(\tau -t, s)$,
we get
$$
e^{\lambda s} \| u(s, \tau -t, u_0(\tau -t) ) \|^2
\le e^{\lambda (\tau -t)} \| u_0(\tau -t ) \|^2
+ C \int^s_{\tau -t} e^{\lambda \xi} d\xi
+ {\frac 2\lambda} \int^s_{\tau -t} e^{\lambda \xi} \| g(\xi)\|^2d\xi
$$
$$
\le e^{\lambda (\tau -t)} \| u_0(\tau -t ) \|^2
+ {\frac C\lambda} e^{\lambda \tau}
+ {\frac 2\lambda} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi)\|^2d\xi.
$$
Therefore,
there exists $T=T(\tau, D)>2$ such that
for all $t \ge T$ and $s \in [\tau -2, \tau]$,
\begin{equation}
\label{p42_100}
e^{\lambda s} \| u(s, \tau -t, u_0(\tau -t) ) \|^2
\le
{\frac C\lambda} e^{\lambda \tau}
+ {\frac 3\lambda} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi)\|^2d\xi.
\end{equation}
Integrating the above with respect to $s$ over $(\tau -2, \tau)$,
we obtain that
\begin{equation}
\label{p42_101}
\int_{\tau -2}^\tau
e^{\lambda s} \| u(s, \tau -t, u_0(\tau -t) ) \|^2 ds
\le
{\frac {2C}\lambda} e^{\lambda \tau}
+ {\frac 6\lambda} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi)\|^2d\xi.
\end{equation}
On the other hand, for $s =\tau -2$, \eqref{p42_100}
implies that
\begin{equation}
\label{p42_1}
e^{\lambda (\tau -2) } \| u(\tau -2, \tau -t, u_0(\tau -t) ) \|^2
\le
{\frac C\lambda} e^{\lambda \tau}
+ {\frac 3\lambda} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi)\|^2d\xi.
\end{equation}
Multiplying \eqref{p41_4} by $e^{\lambda t}$ and then integrating
over $(\tau -2, \tau )$, by \eqref{p42_1} we get that, for all $t\ge T$,
$$
e^{\lambda \tau} \| u(\tau, \tau -t, u_0(\tau -t) ) \|^2
+ 2 \int_{\tau -2}^\tau e^{\lambda \xi}
\| \nabla u (\xi, \tau -t, u_0(\tau -t) ) \|^2 d\xi
$$
$$
+ 2 \alpha_1 \int_{\tau -2}^\tau e^{\lambda \xi}
\| u (\xi, \tau -t, u_0(\tau -t) ) \|^p_p d\xi
$$
$$ \le
e^{\lambda (\tau -2 )} \| u(\tau -2, \tau -t, u_0(\tau -t) ) \|^2
+ {\frac {2}{\lambda}} \int_{\tau -2}^\tau e^{\lambda \xi} \| g (\xi) \|^2 d\xi
+ {\frac C\lambda} e^{\lambda \tau}
$$
$$
\le C \int_{-\infty}^\tau e^{\lambda \xi} \| g(\xi)\|^2 d\xi
+ C e^{\lambda \tau},
$$
which along with \eqref{p42_101} completes the proof.
\end{proof}
Note that $e^{\lambda \xi} \ge e^{\lambda \tau -2\lambda} $
for any $\xi \ge \tau -2$. So as an immediate consequence
of Lemma \ref{lem42} we have the following estimates.
\begin{cor}
\label{cor43}
Suppose \eqref{f1} and \eqref{gcond} hold.
Then for every $\tau \in I \kern -.3em R$ and $D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T=T(\tau, D)>2$ such that for all $t \ge T$,
$$
\int_{\tau -2}^{\tau} \| u (\xi, \tau -t, u_0(\tau -t) ) \|^2
d\xi
\le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
$$
\int_{\tau -2}^{\tau} \| \nabla u (\xi, \tau -t, u_0(\tau -t) ) \|^2
d\xi
\le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
and
$$
\int_{\tau -2}^{\tau} \| u (\xi, \tau -t, u_0(\tau -t) ) \|^p_p
d\xi
\le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
,
$$
where $ u_0(\tau -t) \in D(\tau -t)$, and
$M$ is a positive constant independent of
$\tau$ and $D$.
\end{cor}
Next we derive uniform estimates of solutions
in $H^1(I \kern -.3em R^n)$.
\begin{lem}
\label{lem44}
Suppose \eqref{f1}, \eqref{F1} and \eqref{gcond} hold.
Then for every $\tau \in I \kern -.3em R$ and $D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T=T(\tau, D)>2$ such that for all $t \ge T$,
$$
\| \nabla u (\tau, \tau -t, u_0(\tau -t) ) \|^2
\le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi ,
$$
$$
\| u (\tau, \tau -t, u_0(\tau -t) ) \|^p_p
\le M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi ,
$$
and
$$
\int^\tau_{\tau -1} \| u_\xi (\xi, \tau -t, u_0(\tau -t) ) \|^2 d\xi
\le
M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi ,
$$
where $ u_0(\tau -t) \in D(\tau -t)$, and
$M$ is a positive constant independent of
$\tau$ and $D$.
\end{lem}
\begin{proof}
In the following, we write $u_0(\tau -t)$ as $u_0$ for convenience.
Taking the inner product of \eqref{rd1}
with $u_t$ in $L^2(I \kern -.3em R^n)$ and then replacing $t$ by $\xi$,
we obtain
$$
\| u_{\xi} (\xi, \tau -t, u_0 ) \|^2 + {\frac d{d\xi}} \left (
{\frac 12} \| \nabla u (\xi, \tau -t, u_0 ) \|^2
+ {\frac 12} \lambda \| u (\xi, \tau -t, u_0 ) \|^2
-\int_{I \kern -.3em R^n} F(x, u) dx \right )
$$
$$
=(g(\xi), u_\xi (\xi, \tau -t, u_0) ).
$$
Note that the right-hand side of the above is bounded by
$$
|(g(\xi), u_\xi (\xi, \tau -t, u_0) )|
\le \| g(\xi) \| \ \| u_\xi (\xi, \tau -t, u_0 ) \|
\le
{\frac 12} \| u_\xi (\xi, \tau -t, u_0 ) \|^2 +
{\frac 12} \| g(\xi ) \|^2.
$$
Then we have
\begin{equation}
\label{p44_1}
\| u_{\xi} (\xi, \tau -t, u_0 ) \|^2 + {\frac d{d\xi}} \left (
\| \nabla u (\xi, \tau -t, u_0 ) \|^2
+ \lambda \| u (\xi, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u) dx \right )
\le \| g(\xi )\|^2,
\end{equation}
which implies that
\begin{equation}
\label{p44_2}
{\frac d{d\xi}} \left (
\| \nabla u (\xi, \tau -t, u_0 ) \|^2
+ \lambda \| u (\xi, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u) dx \right )
\le \| g(\xi )\|^2.
\end{equation}
Let $s \le \tau$ and $t\ge 2$. By integrating
\eqref{p44_2} over $(s, \tau)$ we get
that
$$
\| \nabla u (\tau, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u (\tau, \tau -t, u_0 ) ) dx
$$
$$ \le
\| \nabla u (s, \tau -t, u_0 ) \|^2
+ \lambda \| u (s, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u (s, \tau -t, u_0 ) ) dx
+ \int_s^\tau \| g(\xi) \|^2 d \xi.
$$
Now integrating the above with respect to $s$ on
$(\tau -1, \tau)$ we find that
$$
\| \nabla u (\tau, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u (\tau, \tau -t, u_0 ) ) dx
$$
$$ \le
\int_{\tau -1}^\tau \| \nabla u (s, \tau -t, u_0 ) \|^2 ds
+ \lambda \int_{\tau -1}^\tau\| u (s, \tau -t, u_0 ) \|^2 ds
$$
\begin{equation}
\label{p44_3}
- 2 \int_{\tau -1}^\tau \int_{I \kern -.3em R^n}
F(x, u (s, \tau -t, u_0 ) ) dx ds
+ \int_{\tau -1} ^\tau \| g(\xi) \|^2 d \xi.
\end{equation}
By \eqref{F1} we have
\begin{equation}
\label{p44_4}
\alpha_5 \| u(\tau, \tau -t, u_0(\tau -t) )\|^p_p
- \int_{I \kern -.3em R^n} \phi_3 (x) dx
\le - \int_{I \kern -.3em R^n} F(x, u (\tau , \tau -t, u_0 ) ) dx,
\end{equation}
and
\begin{equation}
\label{p44_5}
- \int_{I \kern -.3em R^n} F(x, u (s , \tau -t, u_0 ) ) dx
\le
\alpha_4 \| u(s, \tau -t, u_0(\tau -t) )\|^p_p
+ \int_{I \kern -.3em R^n} \phi_4 (x) dx.
\end{equation}
It follows from \eqref{p44_3}-\eqref{p44_5} that
$$
\| \nabla u (\tau, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau, \tau -t, u_0 ) \|^2
+ 2 \alpha_5 \| u (\tau, \tau -t, u_0 ) \|^p_p
$$
$$ \le
\int_{\tau -1}^\tau \| \nabla u (s, \tau -t, u_0 ) \|^2 ds
+ \lambda \int_{\tau -1}^\tau\| u (s, \tau -t, u_0 ) \|^2 ds
$$
$$
+ 2 \alpha_4 \int^\tau_{\tau -1}
\| u (s, \tau -t, u_0 (\tau -t) ) \|^p_p ds
+ \int^\tau_{\tau -1} \| g(\xi) \|^2 d\xi
+ 2 \int_{I \kern -.3em R^n} (\phi_3 (x) + \phi_4(x) ) dx,
$$
which along with Corollary \ref{cor43} implies that there
exists $T=T(\tau, D)>2$ such that for all
$t \ge T$,
$$
\| \nabla u (\tau, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau, \tau -t, u_0 ) \|^2
+ 2 \alpha_5 \| u (\tau, \tau -t, u_0 ) \|^p_p
$$
$$
\le
C+ C e^{-\lambda \tau} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi) \|^2 d\xi
+ \int_{\tau -1}^\tau \| g(\xi) \|^2 d\xi
$$
\begin{equation}
\label{p44_6}
\le
C+ C e^{-\lambda \tau} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi) \|^2 d\xi
+ e^\lambda e^{-\lambda \tau}
\int_{-\infty}^\tau e^{\lambda \xi} \| g(\xi) \|^2 d\xi.
\end{equation}
Similarly, first integrating \eqref{p44_2}
with respect to $\xi$ on $(s, \tau -1)$ and then
integrating
with respect to $s$ on $(\tau -2, \tau -1)$,
by using Corollary \ref{cor43} we can get
that for all $t \ge T$,
$$
\| \nabla u (\tau -1, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau-1, \tau -t, u_0 ) \|^2
+ 2 \alpha_5 \| u (\tau-1, \tau -t, u_0 ) \|^p_p
$$
\begin{equation}
\label{p44_7}
\le
C+ C e^{-\lambda \tau} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi) \|^2 d\xi
+ \int_{\tau -2}^{\tau -1} \| g(\xi) \|^2 d\xi.
\end{equation}
Now integrating \eqref{p44_1} over $(\tau -1, \tau)$ we obtain that
$$
\int^\tau_{\tau -1}
\| u_{\xi} (\xi, \tau -t, u_0 ) \|^2 d\xi
+ \| \nabla u (\tau, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u (\tau) ) dx
$$
$$
\le \int^\tau_{\tau -1} \| g(\xi )\|^2 d\xi
+ \| \nabla u (\tau-1, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau-1, \tau -t, u_0 ) \|^2
- 2 \int_{I \kern -.3em R^n} F(x, u (\tau-1) ) dx,
$$
which along with \eqref{p44_4}, \eqref{p44_5} and
\eqref{p44_7} shows that for all $t \ge T$,
$$
\int^\tau_{\tau -1}
\| u_{\xi} (\xi, \tau -t, u_0 ) \|^2 d\xi
\le \int^\tau_{\tau -1} \| g(\xi )\|^2 d\xi
+ 2\int_{I \kern -.3em R^n} (\phi_3(x) + \phi_4(x)) dx
$$
$$
+ \| \nabla u (\tau-1, \tau -t, u_0 ) \|^2
+ \lambda \| u (\tau-1, \tau -t, u_0 ) \|^2
+ 2 \alpha_4 \| u (\tau-1, \tau -t, u_0 ) \|^p_p
$$
$$
\le
C+ C e^{-\lambda \tau} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi) \|^2 d\xi
+ \int_{\tau -2}^{\tau } \| g(\xi) \|^2 d\xi
$$
\begin{equation}
\label{p44_10}
\le
C+ C e^{-\lambda \tau} \int^\tau_{-\infty}
e^{\lambda \xi} \| g(\xi) \|^2 d\xi
+ e^{2\lambda} e^{-\lambda \tau}
\int_{-\infty}^\tau e^{\lambda \xi} \| g(\xi) \|^2 d\xi.
\end{equation}
Then Lemma \ref{lem44} follows from \eqref{p44_6} and
\eqref{p44_10} immediately.
\end{proof}
We now derive uniform estimates
for the time derivative of solutions. To this end, we further
assume ${\frac {dg}{dt}} \in L^2_{loc}(I \kern -.3em R, L^2(I \kern -.3em R^n) )$.
\begin{lem}
\label{lem45}
Suppose \eqref{f1}-\eqref{F1}
and \eqref{gcond} hold.
Let ${\frac {dg}{dt}} \in L^2_{loc}(I \kern -.3em R, L^2(I \kern -.3em R^n) )$.
Then for every $\tau \in I \kern -.3em R$ and $D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T=T(\tau, D)>2$ such that for all $t \ge T$,
$$
\| u_\tau (\tau, \tau -t, u_0(\tau -t) ) \|^2
\le
M + M e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
+ M \int^\tau_{\tau -1} \| g_\xi (\xi) \|^2 d\xi
,
$$
where $ u_0(\tau -t) \in D(\tau -t)$, and
$M$ is a positive constant independent of
$\tau$ and $D$.
\end{lem}
\begin{proof}
Let $u_t =v$ and differentiate \eqref{rd1} with respect
to $t$ to get that
$$
{\frac {\partial v}{\partial t}} - \Delta v
+\lambda v
={\frac {\partial f}{\partial u}}(x,u) v
+ g_t (x, t).
$$
Taking the inner product of the above with $v$ in $L^2(I \kern -.3em R^n)$,
we obtain
\begin{equation}
\label{p45_1}
{\frac 12} {\frac d{dt}} \| v \|^2
+ \| \nabla v \|^2 + \lambda \| v\|^2
=\int_{I \kern -.3em R^n} {\frac {\partial f}{\partial u}}(x,u) |v(x,t) |^2 dx
+ \int_{I \kern -.3em R^n} g_t(x,t) v(x,t) dx.
\end{equation}
By \eqref{f3} and the Young inequality, it follows from
\eqref{p45_1} that
\begin{equation}
\label{p45_2}
{\frac d{dt}} \| v \|^2
\le 2 \alpha_3 \| v \| ^2
+ {\frac 1\lambda} \| g_t(t) \|^2.
\end{equation}
Let $s \in [\tau -1, \tau]$ and $t \ge 1$. Integrating
\eqref{p45_2} over $(s, \tau)$ and using $v=u_t$, we get that
$$
\| u_\tau (\tau, \tau -t, u_0(\tau -t)) \|^2
\le \| u_s(s, \tau -t, u_0(\tau -t) )\|^2
$$
$$
+ 2\alpha_3 \int_s^\tau \| u_\xi (\xi, \tau -t, u_0(\tau -t) ) \|^2 d\xi
+ {\frac 1\lambda} \int_s^\tau \| g_\xi (\xi) \|^2 d\xi
$$
$$
\le \| u_s(s, \tau -t, u_0(\tau -t) )\|^2
+ 2\alpha_3 \int_{\tau -1}^\tau \| u_\xi (\xi, \tau -t, u_0(\tau -t) ) \|^2 d\xi
+ {\frac 1\lambda} \int_{\tau -1}^\tau \| g_\xi (\xi) \|^2 d\xi.
$$
Now integrating the above with respect to $s$
on $(\tau -1, \tau)$ we find that
$$
\| u_\tau (\tau, \tau -t, u_0(\tau -t)) \|^2
\le \int_{\tau -1}^\tau \| u_s(s, \tau -t, u_0(\tau -t) )\|^2 ds
$$
$$
+ 2\alpha_3 \int_{\tau -1}^\tau \| u_\xi (\xi, \tau -t, u_0(\tau -t) ) \|^2 d\xi
+ {\frac 1\lambda} \int_{\tau -1}^\tau \| g_\xi (\xi) \|^2 d\xi,
$$
which along with Lemma \ref{lem44} shows that there
exists $T=T(\tau, D)>2$ such that for all $t \ge T$,
$$
\| u_\tau (\tau, \tau -t, u_0(\tau -t)) \|^2
$$
$$
\le
C + C e^{- \lambda \tau}
\int_{-\infty}^\tau
e^{\lambda \xi}
\| g(\xi )\|^2 d \xi
+ {\frac 1\lambda} \int_{\tau -1}^\tau \| g_\xi (\xi) \|^2 d\xi.
$$
The proof is completed.
\end{proof}
We now establish uniform estimates on the tails
of solutions when $t \to \infty$. We show that the tails of solutions
are uniformly small for large space and time variables.
These uniform estimates are crucial for proving
the pullback asymptotic compactness of the cocycle $\phi$.
\begin{lem}
\label{lem46}
Suppose \eqref{f1}, \eqref{F1}
and \eqref{gcond}-\eqref{ginfinity} hold.
Then for every $\eta>0$, $\tau \in I \kern -.3em R$ and
$D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
there exists $T= T(\tau, D, \eta)>2$ and
$K=K(\tau, \eta)>0 $ such that for all $t \ge T$ and $k \ge K$,
$$
\int _{|x| \ge k} |u(x, \tau, \tau -t, u_0(\tau -t) )|^2 dx
\le \eta,
$$
where $ u_0(\tau -t) \in D(\tau -t)$.
\end{lem}
\begin{proof}
We use a cut-off technique to establish the estimates on the tails of solutions.
Let $\theta$ be a smooth function satisfying $0 \le \theta
(s) \le 1$ for $ s \in I \kern -.3em R^+$, and
$$
\theta (s) =0 \ \mbox{for} \ 0 \le s \le 1;
\ \ \theta (s) =1 \
\mbox{for} \ s \ge 2.
$$
Then there exists
a constant
$C$ such that $ | \theta^{\prime} (s) | \le C$
for $ s \in I \kern -.3em R^+ $.
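One concrete choice of such a cut-off function (any smooth function with
these properties will do) is
$$
\theta (s) = {\frac {\psi(s-1)}{\psi(s-1) + \psi(2-s)}},
\qquad \mbox{where} \quad
\psi(r) = \left\{
\begin{array}{ll}
e^{-1/r}, & r>0, \\
0, & r \le 0 ,
\end{array}
\right.
$$
since $\psi$ is smooth, the denominator never vanishes, $\theta(s)=0$ for
$0 \le s \le 1$ and $\theta(s)=1$ for $s \ge 2$.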
Taking the inner product of \eqref{rd1} with
$ \theta ({\frac {|x|^2}{k^2}}) u $ in $L^2 (\R^n)$, we get
$$
{\frac 12} {\frac
d{dt}} \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}}) |u|^2
- \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}}) u \Delta u
+ \lambda \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}}) |u|^2
$$
\begin{equation}
\label{p466_1}
= \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}} )
f(x,u) u dx
+ \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}}) g(x,t) u(x,t) dx .
\end{equation}
We now estimate the right-hand side of
\eqref{p466_1}. For the nonlinear term,
by \eqref{f1} we have
\begin{equation}
\label{p466_2}
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}} )
f(x,u) u dx
\le
-\alpha_1 \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}} ) |u|^p dx
+ \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}} ) \phi_1 (x) dx
\le \int_{|x| \ge k} \phi_1 (x) dx.
\end{equation}
For the
last term on the right-hand side of \eqref{p466_1}
we find that
$$ \int_{I \kern -.3em R^n} \theta (\frac{|x|^2}{k^2})
g(x, t) u(x,t) dx
= \int_{|x| \ge k} \theta (\frac{|x|^2}{k^2})
g(x, t) u(x,t) dx$$
$$
\le
\frac12 \lambda \int_{|x|\ge
k}\theta^2 (\frac{|x|^2}{k^2} ) |u |^2 dx +\frac
1{2\lambda}\int_{|x|\geq k} |g(x, t)|^2 dx $$
\begin{equation}
\label{p466_3}
\le
\frac12 \lambda \int_{I \kern -.3em R^n} \theta (\frac{|x|^2}{k^2})
|u |^2 dx +\frac 1{2\lambda}
\int_{|x|\geq k} |g (x, t)
|^2 dx .
\end{equation}
On the other hand,
for the second term on the left-hand side of \eqref{p466_1},
by integration by parts, we have
$$
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}} ) u \Delta u
= - \int _{I \kern -.3em R^n} \theta
({\frac {|x|^2}{k^2}}) | \nabla u|^2
- \int_{I \kern -.3em R^n} \theta^{\prime} ({\frac {|x|^2}{k^2}})
( {\frac {2x}{k^2}} \cdot \nabla u ) u
$$
\begin{equation}
\label{p466_4}
\le
- \int _{ k \le |x| \le {\sqrt{2}}k} \theta^{\prime} ({\frac {|x|^2}{k^2}})
( {\frac {2x}{k^2}} \cdot \nabla u ) u
\le {\frac { C}k} \int_{k \le |x| \le {\sqrt{2}}k } | u| | \nabla u |
\le
{\frac { C}{ k}} ( \| u \|^2 + \| \nabla u \|^2 )
,
\end{equation}
where $C $ is independent of $k$.
It follows from \eqref{p466_1}-\eqref{p466_4}
that
$$
{\frac d{dt}}
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u|^2 dx
+ \lambda \int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u|^2 dx
$$
\begin{equation}
\label{p466_5}
\le 2 \int_{|x| \ge k} |\phi_1(x) | dx
+ {\frac 1{\lambda}} \int_{|x| \ge k} |g(x,t)|^2 dx
+ {\frac {C}{k}} ( \| u \|^2 + \| \nabla u \|^2 ).
\end{equation}
Multiplying \eqref{p466_5} by $e^{\lambda t}$
and then
integrating over $(\tau -t, \tau)$ with $t\ge 0$, we get that
$$
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u(x, \tau, \tau -t, u_0(\tau -t) )|^2
dx
$$
$$
\le e^{-\lambda \tau}e^{\lambda (\tau - t)}
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u_0(x, \tau -t ) |^2 dx
$$
$$
+ 2 e^{-\lambda \tau} \int_{\tau -t}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |\phi_1 (x) | dx d\xi
+ {\frac 1{\lambda}} e^{-\lambda \tau}
\int_{\tau -t}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |g(x, \xi) |^2 dx d\xi
$$
$$
+ {\frac {C}{k}} e^{-\lambda \tau} \int_{\tau -t}^\tau \
e^{\lambda \xi} \left (
\| u (\xi, \tau -t, u_0(\tau -t ) )\|^2
+ \| \nabla u (\xi, \tau -t, u_0(\tau -t ) )\|^2
\right )
d \xi
$$
$$
\le e^{-\lambda \tau}e^{\lambda (\tau - t)}
\int_{I \kern -.3em R^n}
|u_0(x, \tau -t ) |^2 dx
$$
$$
+ 2 e^{-\lambda \tau} \int_{-\infty}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |\phi_1 (x) | dx d\xi
+ {\frac 1{\lambda}} e^{-\lambda \tau}
\int_{-\infty}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |g(x, \xi) |^2 dx d\xi
$$
\begin{equation}
\label{p466_6}
+ {\frac {C}{k}} e^{-\lambda \tau} \int_{\tau -t}^\tau \
e^{\lambda \xi} \left (
\| u (\xi, \tau -t, u_0(\tau -t ) )\|^2
+ \| \nabla u (\xi, \tau -t, u_0(\tau -t ) )\|^2
\right )
d \xi.
\end{equation}
Note that given $\eta>0$, there is $T_1 =T_1(\tau, D, \eta)>0$
such that for all $t \ge T_1$,
\begin{equation}
\label{p466_7}
e^{-\lambda \tau}e^{\lambda (\tau - t)}
\int_{I \kern -.3em R^n} |u_0(x, \tau -t ) |^2 dx \le \eta.
\end{equation}
Since $\phi_1 \in L^1(I \kern -.3em R^n)$, there exists
$K_1 =K_1(\eta)>0$ such that for all $k \ge K_1$,
\begin{equation}
\label{p466_8}
2 e^{-\lambda \tau} \int_{-\infty}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |\phi_1 (x) | dx d\xi
\le \eta.
\end{equation}
On the other hand, by \eqref{ginfinity2}
there is $K_2=K_2(\tau, \eta)> K_1$ such that
for all $k \ge K_2$,
\begin{equation}
\label{p466_9}
{\frac 1{\lambda}} e^{-\lambda \tau}
\int_{-\infty}^\tau \int_{|x| \ge k}
e^{\lambda \xi} |g(x, \xi) |^2 dx d\xi
\le {\frac \eta\lambda} .
\end{equation}
For the last term on the right-hand side of
\eqref{p466_6}, it follows from Lemma \ref{lem41}
that there is
$T_2 =T_2(\tau, D)>0$ such that for all $t \ge T_2$,
$$
{\frac {C}{k}} e^{-\lambda \tau} \int_{\tau -t}^\tau \
e^{\lambda \xi} \left (
\| u (\xi, \tau -t, u_0(\tau -t ) )\|^2
+ \| \nabla u (\xi, \tau -t, u_0(\tau -t ) )\|^2
\right )
d \xi
$$
$$
\le {\frac { C}{k}} \left (1+ e^{-\lambda \tau}
\int_{-\infty}^\tau e^{\lambda\xi}
\| g(\xi) \|^2 d \xi \right ).
$$
Therefore, there is $K_3 =K_3(\tau, \eta)> K_2$ such that
for all $k\ge K_3$ and $t \ge T_2$,
\begin{equation}
\label{p466_10}
{\frac {C}{k}} e^{-\lambda \tau} \int_{\tau -t}^\tau \
e^{\lambda\xi} \left (
\| u (\xi, \tau -t, u_0(\tau -t ) )\|^2
+ \| \nabla u (\xi, \tau -t, u_0(\tau -t ) )\|^2
\right )
d \xi \le \eta.
\end{equation}
Let $T=\max \{T_1, T_2 \}$.
Then by \eqref{p466_6}-\eqref{p466_10} we find that
for all $k\ge K_3$ and $t \ge T$,
$$
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u(x, \tau, \tau -t, u_0(\tau -t) )|^2 dx
\le 3 \eta + {\frac \eta\lambda},
$$
and hence for all $k\ge K_3$ and $t \ge T$,
$$
\int _{|x| \ge \sqrt{2} k}
|u(x, \tau, \tau -t, u_0(\tau -t) )|^2 dx
$$
$$ \le
\int_{I \kern -.3em R^n} \theta ({\frac {|x|^2}{k^2}})
|u(x, \tau, \tau -t, u_0(\tau -t) )|^2 dx
\le 3 \eta + {\frac \eta\lambda},
$$
which completes the proof.
\end{proof}
\section{Existence of pullback attractors}
\setcounter{equation}{0}
In this section, we prove
the existence of a ${\mathcal{D}}_\lambda$-pullback
global attractor
for the non-autonomous reaction-diffusion
equation on $I \kern -.3em R^n$.
We
first establish the ${\mathcal{D}}_\lambda$-pullback asymptotic
compactness of solutions and prove the existence
of a pullback attractor in $L^2(I \kern -.3em R^n)$. Then we show that
this attractor is actually a ${\mathcal{D}}_\lambda$-pullback
attractor in $H^1(I \kern -.3em R^n)$.
\begin{lem}
\label{lem51}
Suppose \eqref{f1}-\eqref{F1}
and \eqref{gcond}-\eqref{ginfinity} hold.
Then $\phi$ is $\mathcal{D}_\lambda$-pullback
asymptotically compact in $L^2 (\R^n)$,
that is, for every $\tau \in I \kern -.3em R$,
$D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
and $t_n \to \infty$,
$u_{0,n} \in D(\tau -t_n)$, the sequence
$\phi(t_n, \tau -t_n, u_{0,n} ) $ has a
convergent
subsequence in $L^2 (\R^n) $.
\end{lem}
\begin{proof}
We use the uniform estimates on the tails of solutions to establish
the precompactness of $\phi (t_n , \tau -t_n , u_{0,n} )$
in $L^2 (\R^n) $, that is, we prove that for every
$\eta>0$, the sequence
$\phi (t_n , \tau -t_n , u_{0,n} ) $ has a finite covering
of balls of radii less than $\eta$.
Given $K>0$, denote by
\[{\Omega}_K = \{ x: |x| \le K \} \quad \mbox{and} \quad
{\Omega}^c_K = \{ x: |x| > K \}.\]
Then by Lemma \ref{lem46}, for the given $\eta >0$,
there exist $K= K(\tau, \eta)>0$ and $T=T(\tau, D, \eta)>2$
such that for $t \ge T$,
$$
\| \phi (t , \tau -t , u_{0}(\tau-t) ) \| _{L^2 ({\Omega}^c_{K} )
}
\le \frac{\eta}{4}.
$$
Since $t_n \to \infty$, there is $N_1=N_1(\tau, D, \eta)>0$ such that
$t_n \ge T$ for all $n \ge N_1$, and hence we obtain that,
for all $n \ge N_1$,
\begin{equation}
\label{p51_1}
\| \phi (t_n , \tau -t_n , u_{0,n} ) \| _{L^2 ({\Omega}^c_{K} )
}
\le \frac{\eta}{4}.
\end{equation}
On the other hand, by Lemmas \ref{lem41} and \ref{lem44}, there
exist $C=C(\tau)>0$ and $N_2=N_2(\tau, D)>0$
such that for all $n \ge N_2 $,
\begin{equation}
\label{p51_2}
\| \phi (t_n , \tau -t_n , u_{0,n} ) \| _{H^1( {\Omega}_{K } )
} \le C.
\end{equation}
By the compactness of embedding
$ H^1( {\Omega}_{K } ) \hookrightarrow L^2 ( {\Omega}_{K } )$,
the sequence $ \phi (t_n , \tau -t_n , u_{0,n} ) $ is precompact in
$L^2 ( {\Omega}_{K } )$.
Therefore,
for the given $\eta>0$,
$ \phi (t_n , \tau -t_n , u_{0,n} )$
has a finite covering in
$L^2 ( {\Omega}_{K } ) $ of
balls of radii less than $\eta/4$, which along with \eqref{p51_1}
shows
that $ \phi (t_n , \tau -t_n , u_{0,n} ) $ has a finite covering in
$ L^2 (\R^n) $ of balls of radii less than $\eta$, and
thus $ \phi (t_n , \tau -t_n , u_{0,n} )$ is precompact
in $L^2 (\R^n) $.
\end{proof}
We now present the existence of a pullback global
attractor
for $\phi$ in $L^2(I \kern -.3em R^n)$.
\begin{thm}
\label{thm52}
Suppose \eqref{f1}-\eqref{F1}
and \eqref{gcond}-\eqref{ginfinity} hold.
Then problem \eqref{rd1}-\eqref{rd2} has a
unique $\mathcal{D}_\lambda$-pullback global attractor
$\{\mathcal{A}(\tau) \}_{\tau \in I \kern -.3em R}\in {\mathcal{D}_\lambda} $ in $L^2 (\R^n)
$, that is, for every $\tau \in I \kern -.3em R$,
(i) \ $\mathcal{A}(\tau)$ is compact in $L^2(I \kern -.3em R^n)$;
(ii) \ $\{\mathcal{A}(\tau)\}_{\tau\in I \kern -.3em R}$ is invariant, that is,
$$ \phi(t, \tau, \mathcal{A}(\tau) )
= \mathcal{A}(t+\tau), \ \ \forall \ t \ge 0;
$$
(iii) \ \ $\{\mathcal{A}(\tau)\}_{\tau \in I \kern -.3em R}$
attracts every set in $\mathcal{D}_\lambda$ with respect to the norm
of $L^2(I \kern -.3em R^n)$,
that is, for every
$B = \{B(\tau)\}_{\tau \in I \kern -.3em R} \in \mathcal{D}_\lambda$,
$$ \lim_{t \to \infty} d_{L^2(I \kern -.3em R^n)}
(\phi(t, {\tau -t}, B({\tau -t})), \mathcal{A}(\tau))=0,
$$
where for any $Y, \ Z \subseteq L^2(I \kern -.3em R^n)$,
$$
d_{L^2(I \kern -.3em R^n)} (Y,Z) =
\sup_{y \in Y }
\inf_{z\in Z} \| y-z\|_{L^2(I \kern -.3em R^n)}.
$$
\end{thm}
\begin{proof}
For $ \tau \in I \kern -.3em R$, denote by
$$
B(\tau) = \{ u : \ \| u \| ^2
\le M+ M e^{-\lambda \tau} \int_{-\infty}^\tau
e^{\lambda \xi} \| g(\xi) \|^2 d \xi \},
$$
where $M$ is the positive constant in Lemma \ref{lem41}.
Note that $B=\{B(\tau)\}_{\tau \in I \kern -.3em R} \in {\mathcal{D}_\lambda} $
is a ${\mathcal{D}_\lambda}$-pullback absorbing set for
$\phi$ in $L^2 (\R^n) $ by Lemma \ref{lem41}.
In addition, $\phi$ is ${\mathcal{D}_\lambda}$-pullback
asymptotically compact by Lemma \ref{lem51}. Thus the existence of
a ${\mathcal{D}_\lambda}$-pullback global attractor for $\phi$
in $L^2(I \kern -.3em R^n)$ follows
from Proposition \ref{att}.
\end{proof}
In what follows, we strengthen Theorem \ref{thm52} and show that
the global attractor
$\{\mathcal{A}(\tau) \}_{\tau \in I \kern -.3em R} $ is actually
a $\mathcal{D}_\lambda$-pullback global attractor
in $H^1(I \kern -.3em R^n)$. As a necessary step towards this goal,
we first prove the asymptotic compactness of solutions in
$H^1(I \kern -.3em R^n)$.
\begin{lem}
\label{lem53}
Suppose \eqref{f1}-\eqref{F1}
and \eqref{gcond}-\eqref{ginfinity} hold. Let ${\frac {dg}{dt}} \in L^2_{loc} (I \kern -.3em R, L^2(I \kern -.3em R^n))$.
Then $\phi$ is $\mathcal{D}_\lambda$-pullback
asymptotically compact in $H^1(I \kern -.3em R^n)$,
that is, for every $\tau \in I \kern -.3em R$,
$D=\{D(t)\}_{t\in I \kern -.3em R} \in {\mathcal{D}}_\lambda$,
and $t_n \to \infty$,
$u_{0,n} \in D(\tau -t_n)$, the sequence
$\phi(t_n, \tau -t_n, u_{0,n} ) $ has a
convergent
subsequence in $H^1(I \kern -.3em R^n)$.
\end{lem}
\begin{proof}
By Lemma \ref{lem51},
the sequence
$\phi(t_n, \tau -t_n, u_{0,n} ) = u(\tau, \tau -t_n, u_{0,n} ) $ has a convergent
subsequence in $L^2(I \kern -.3em R^n)$, and hence
there exists $v \in L^2(I \kern -.3em R^n)$ such that,
up to a subsequence,
$$
u(\tau, \tau -t_n, u_{0,n} )
\to v \quad \mbox{in } \ \ L^2 (I \kern -.3em R^n).
$$
This shows that
\begin{equation}
\label{p53_1}
u(\tau, \tau -t_n, u_{0,n} )
\quad \mbox{is a Cauchy sequence in }
\ L^2(I \kern -.3em R^n).
\end{equation}
Next we prove that $u(\tau, \tau -t_n, u_{0,n} )$ is actually
a Cauchy sequence in $H^1(I \kern -.3em R^n)$.
For any $n, m \ge 1$, it follows from \eqref{rd1} that
$$
-\Delta \left (
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\right )
+ \lambda \left (
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\right )
$$
\begin{equation}
\label{p53_2}
=
f(x, u(\tau, \tau -t_n, u_{0,n}) )
- f (x, u(\tau, \tau -t_m, u_{0,m}) )
-
u_\tau(\tau, \tau -t_n, u_{0,n})
+ u_\tau (\tau, \tau -t_m, u_{0,m}).
\end{equation}
Multiplying \eqref{p53_2} by
$u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})$
and integrating over $I \kern -.3em R^n$, by \eqref{f3} we get that
$$
\| \nabla \left (
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\right ) \|^2
+ \lambda \|
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\|^2
$$
$$
\le \| u_\tau(\tau, \tau -t_n, u_{0,n})
- u_\tau (\tau, \tau -t_m, u_{0,m}) \|
\| u(\tau, \tau -t_n, u_{0,n})
-u (\tau, \tau -t_m, u_{0,m})\|
$$
\begin{equation}
\label{p53_3}
+
\alpha_3
\|
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\|^2.
\end{equation}
By Lemma \ref{lem45} we find that for every
$\tau \in I \kern -.3em R$, there exist $C=C(\tau)>0$ and $T=T(\tau, D)$ such that
for all $t \ge T$ and $u_{0}(\tau -t) \in D(\tau -t)$,
$$
\| u_\tau(\tau, \tau -t , u_{0} (\tau -t) ) \|
\le C.
$$
Since $t_n \to \infty$, there exists $N=N(\tau, D)$ such that
$t_n \ge T$ for all $n \ge N$. Thus we obtain that,
for all $n \ge N$,
$$
\| u_\tau(\tau, \tau -t_n , u_{0,n} ) \|
\le C,
$$
which along with \eqref{p53_3} shows that,
for all $n, m \ge N$,
$$
\| \nabla \left (
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\right ) \|^2
+ \lambda \|
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\|^2
$$
$$
\le 2C
\| u(\tau, \tau -t_n, u_{0,n})
-u (\tau, \tau -t_m, u_{0,m})\|
$$
\begin{equation}
\label{p53_4}
+
\alpha_3
\|
u(\tau, \tau -t_n, u_{0,n})
- u(\tau, \tau -t_m, u_{0,m})
\|^2.
\end{equation}
It follows from \eqref{p53_1} and \eqref{p53_4}
that
$u(\tau, \tau -t_n, u_{0,n})$ is a Cauchy sequence
in $H^1(I \kern -.3em R^n)$. The proof is completed.
\end{proof}
We are now ready to prove the existence of a global attractor
for problem \eqref{rd1}-\eqref{rd2} in $H^1(I \kern -.3em R^n)$.
\begin{thm}
\label{thm54}
Suppose \eqref{f1}-\eqref{F1}
and \eqref{gcond}-\eqref{ginfinity} hold.
Let ${\frac {dg}{dt}} \in L^2_{loc} (I \kern -.3em R, L^2(I \kern -.3em R^n))$.
Then problem \eqref{rd1}-\eqref{rd2} has a
unique $\mathcal{D}_\lambda$-pullback global attractor
$\{\mathcal{A}(\tau) \}_{\tau \in I \kern -.3em R}\in {\mathcal{D}_\lambda} $ in $H^1(I \kern -.3em R^n)
$, that is, for every $\tau \in I \kern -.3em R$,
(i) \ $\mathcal{A}(\tau)$ is compact in $H^1(I \kern -.3em R^n)$;
(ii) \ $\{\mathcal{A}(\tau)\}_{\tau\in I \kern -.3em R}$ is invariant, that is,
$$ \phi(t, \tau, \mathcal{A}(\tau) )
= \mathcal{A}(t+\tau), \ \ \forall \ t \ge 0;
$$
(iii) \ \ $\{\mathcal{A}(\tau)\}_{\tau \in I \kern -.3em R}$
attracts every set in $\mathcal{D}_\lambda$ with respect to the norm
of $H^1 (I \kern -.3em R^n)$,
that is, for every
$B = \{B(\tau)\}_{\tau \in I \kern -.3em R} \in \mathcal{D}_\lambda$,
$$ \lim_{t \to \infty} d_{H^1 (I \kern -.3em R^n)}
(\phi(t, {\tau -t}, B({\tau -t})), \mathcal{A}(\tau))=0,
$$
where for any $Y, \ Z \subseteq H^1 (I \kern -.3em R^n)$,
$$
d_{H^1 (I \kern -.3em R^n)} (Y,Z) =
\sup_{y \in Y }
\inf_{z\in Z} \| y-z\|_{H^1(I \kern -.3em R^n)}.
$$
\end{thm}
\begin{proof}
The invariance of $\{\mathcal{A}(\tau)\}_{\tau \in I \kern -.3em R}$
is already given in Theorem \ref{thm52}.
So we only need to prove (i) and (iii).
Proof of (i). Let $\{v_n\}_{n=1}^\infty \subseteq \mathcal{A}(\tau)$. We want
to show that there exists $v \in \mathcal{A}(\tau)$ such
that, up to a subsequence,
$ v_n \to v $ in $H^1(I \kern -.3em R^n)$.
Since $\mathcal{A}(\tau)$ is compact in $L^2(I \kern -.3em R^n)$ by Theorem \ref{thm52},
there exists $v \in \mathcal{A}(\tau)$ such that, up to a subsequence,
\begin{equation}
\label{p54_1}
v_n \to v \quad \mbox{in} \ \ L^2(I \kern -.3em R^n).
\end{equation}
We now prove that the convergence in \eqref{p54_1} actually
holds in $H^1(I \kern -.3em R^n)$. Let $\{t_n\}_{n=1}^\infty$
be a sequence with $t_n \to \infty$.
By the invariance of
$\{\mathcal{A}(\tau)\}_{\tau \in I \kern -.3em R}$, for every $n \ge 1$,
there exists $w_n \in \mathcal{A} (\tau -t_n)$ such that
\begin{equation}
\label{p54_2}
v_n = \phi (t_n, \tau - t_n, w_n ).
\end{equation}
By Lemma \ref{lem53}, it follows from \eqref{p54_2} that
there exists $\tilde{v} \in H^1(I \kern -.3em R^n)$ such that, up to a subsequence,
\begin{equation}
\label{p54_3}
v_n = \phi (t_n, \tau - t_n, w_n ) \to \tilde{v} \quad \mbox{in } \ H^1(I \kern -.3em R^n) .
\end{equation}
Notice that \eqref{p54_1} and \eqref{p54_3} imply
$\tilde{v} =v \in \mathcal{A}(\tau)$, and thus (i) follows.
Proof of (iii). Suppose (iii) is not true. Then there are
$\tau \in I \kern -.3em R$,
$B = \{B(\tau)\}_{\tau \in I \kern -.3em R} \in \mathcal{D}_\lambda$,
$\epsilon_0>0$ and $t_n \to \infty$ such that
$$
d_{H^1 (I \kern -.3em R^n)}
(\phi(t_n, {\tau -t_n}, B({\tau -t_n})), \mathcal{A}(\tau)) \ge 2\epsilon_0,
$$
which implies that for every $n\ge 1$, there exists
$v_n \in B(\tau -t_n)$ such that
\begin{equation}
\label{p54_10}
d_{H^1 (I \kern -.3em R^n)}
(\phi(t_n, {\tau -t_n}, v_n ), \mathcal{A}(\tau)) \ge \epsilon_0.
\end{equation}
On the other hand,
by Lemma \ref{lem53}, there is $v \in H^1(I \kern -.3em R^n)$ such that, up
to a subsequence,
\begin{equation}
\label{p54_11}
\phi(t_n, {\tau -t_n}, v_n )
\to v \quad \mbox{in } \ H^1(I \kern -.3em R^n).
\end{equation}
Since
$\{\mathcal{A}(\tau)\}_{\tau \in I \kern -.3em R}$
attracts
$B = \{B(\tau)\}_{\tau \in I \kern -.3em R} $ in $L^2(I \kern -.3em R^n)$ by Theorem \ref{thm52}, we have
\begin{equation}
\label{p54_12}
\lim_{n \to \infty } d_{L^2 (I \kern -.3em R^n)}
(\phi(t_n , {\tau -t_n }, v_n ), \mathcal{A}(\tau))=0.
\end{equation}
By \eqref{p54_11}-\eqref{p54_12} and
the compactness of $\mathcal{A}(\tau)$,
we find that $v \in \mathcal{A}(\tau) $ and
\begin{equation}
\label{p54_15}
\lim_{n \to \infty} d_{H^1 (I \kern -.3em R^n)}
(\phi(t_n, {\tau -t_n}, v_n ), \mathcal{A}(\tau))
\le \lim_{n \to \infty} d_{H^1 (I \kern -.3em R^n)}
(\phi(t_n, {\tau -t_n}, v_n ), v ) =0,
\end{equation}
which contradicts \eqref{p54_10}. The proof is completed.
\end{proof}
\section{Introduction}
\subsection{Background}
Let $G$ be a group and $N$ be a normal subgroup of $G$. An element of the form $[g,x] = gxg^{-1}x^{-1}$ with $g\in G$ and $x\in N$ is called a \emph{$(G,N)$-commutator} or a \emph{mixed commutator}; $[G,N]$ is the subgroup generated by mixed commutators, and it is called the \emph{$(G,N)$-commutator subgroup} or the \emph{mixed commutator subgroup}. The \emph{$(G,N)$-commutator length} or the \emph{mixed commutator length} $\cl_{G,N}$ is the word length on $[G,N]$ with respect to the set of mixed commutators. Namely, for $x\in [G,N]$, $\cl_{G,N}(x)$ is the smallest integer $n$ such that there exist $n$ mixed commutators whose product is $x$. In the case of $N=G$, the notions of $[G,N]$ and $\cl_{G,N}$ coincide with those of the \emph{commutator subgroup} $[G,G]$ and the \emph{commutator length} $\cl_G$, respectively. Commutator lengths have been extensively studied in geometric topology (for example, see \cite{EK}, \cite{BIP}, \cite{Tsuboi12} and \cite{Tsuboi13}).
It is a classical fact that $\cl_G(y)$ has the following geometric interpretation. We can regard an element $y$ in $[G,G]$ as a homotopy class of a loop in the classifying space $BG$ of $G$. Since $y \in [G,G]$, the loop extends to a continuous map from a compact connected orientable surface $\Sigma$ with $\partial \Sigma = S^1$ to $BG$; the commutator length $\cl_G(y)$ coincides with the minimum genus of such a $\Sigma$.
This description can be generalized to the setting of the mixed commutator length $\cl_{G,N}$: in \cite[Theorem~1.3]{KKMM1}, the authors obtained geometric and combinatorial interpretation of $\cl_{G,N}$, which is explained in terms of the concept of $(\hG,\bG)$-simplicial surfaces.
In general, it is difficult to determine the precise values of $\cl_G$ and $\cl_{G,N}$. In the present paper, we investigate the comparison between mixed commutator lengths $\cl_{G,N}$ and usual commutator lengths $\cl_G$. Especially in the case of certain wreath products, we determine the precise value of $\cl_{G,N}$ in terms of general ranks due to Malcev \cite{Mal}.
In the present paper, we use the notation $\Gamma$ and $q$ to fit the following short exact sequence:
\begin{align*}
1\longrightarrow N\longrightarrow G\stackrel{q}{\longrightarrow} \Gamma \longrightarrow 1. \tag{$\star$}
\end{align*}
\subsection{Mixed commutator length on wreath products}
Our main result determines the mixed commutator length for certain elements in wreath products. For a group $H$, $e_H$ denotes the group unit of $H$. For two groups $H$ and $\Gamma$, the (restricted) \emph{wreath product} $H\wr \Gamma$ is defined as the semidirect product $\left(\bigoplus_{\Gamma}H\right)\rtimes \Gamma$, where $\Gamma$ acts on $\bigoplus_{\Gamma}H$ by shifting the coordinates. By letting $G = H \wr \Gamma$, $G$ admits a natural projection $q \colon G \twoheadrightarrow \Gamma$ fitting in short exact sequence ($\star$). Note that in this case short exact sequence $(\star)$ splits.
We regard an element of $\bigoplus_{\Gamma} H$ as a function $u$ from $\Gamma$ to $H$ such that $u(\gamma) = e_H$ except for finitely many $\gamma \in \Gamma$.
For $\gamma \in \Gamma$, we denote by $u(\gamma) \in H$ the $\gamma$-entry of $u$.
In the case where $H=\ZZ$, for $\lambda\in \Gamma$, $\delta_\lambda \colon \Gamma \to \ZZ$ denotes the Kronecker delta function at $\lambda$, meaning that $\delta_\lambda(\gamma)=1$ if $\gamma = \lambda$ and $\delta_\lambda(\gamma)=0$ otherwise.
For a group $\Gamma$ and a subset $S$, $\langle S\rangle$ denotes the subgroup of $\Gamma$ generated by $S$. In the present paper, set $\NN=\{1,2,3,\ldots\}$.
Now we state our first result:
\begin{thm}\label{thm:wreath_new}
Let $\Gamma$ be a group. Set $G=\ZZ\wr \Gamma$ and $N=\bigoplus_{\Gamma}\ZZ$. Let $k\in \NN$ and $\lambda_1,\ldots ,\lambda_k\in \Gamma\setminus \{e_{\Gamma}\}$. Let $\Lambda=\langle \lambda_1,\ldots ,\lambda_k\rangle$. Set
\[
x_{(\lambda_1,\ldots ,\lambda_k)}=\sum_{i=1}^k\delta_{\lambda_i}-k\delta_{e_{\Gamma}}.
\]
Then we have
\[
\cl_{G,N}( x_{(\lambda_1,\ldots ,\lambda_k)})=\intrk^{\Gamma}(\Lambda),
\]
where $\intrk^{\Gamma}(\Lambda)$ is the \emph{intermediate rank} of the group pair $(\Gamma,\Lambda)$, defined in Definition~$\ref{def:int_rk}$.
\end{thm}
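As a quick concrete reading of Theorem~\ref{thm:wreath_new} (the following instance is our own illustration, not taken from the body of the paper), consider the abelian case $\Gamma=\ZZ^2$:

```latex
% Assumed illustration: \Gamma = \ZZ^2 (additive), k = 2,
% \lambda_1 = (1,0), \lambda_2 = (0,1), \Lambda = \ZZ^2.
\[
x_{(\lambda_1,\lambda_2)}
  = \delta_{(1,0)} + \delta_{(0,1)} - 2\,\delta_{(0,0)},
\qquad
\cl_{G,N}\bigl( x_{(\lambda_1,\lambda_2)} \bigr)
  = \intrk^{\ZZ^2}(\ZZ^2) = \rk(\ZZ^2) = 2,
\]
% where the middle equality uses that the intermediate rank reduces to
% the ordinary rank for abelian \Gamma (Lemma \ref{lem:abel}).
```

By contrast, Theorem~\ref{thm:abelian} below yields $\cl_G(x_{(\lambda_1,\lambda_2)})=\lceil 2/2\rceil=1$, so the two lengths already differ on this element.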
The formula in Theorem~\ref{thm:wreath_new} is stated in terms of a variant of ranks of groups, which we call the \emph{intermediate rank}, as follows. This notion relates to the concept of \emph{general rank} in the sense of Malcev \cite{Mal}. Here, $\ZZ_{\geq 0}:=\{n\in \ZZ\;|\; n\geq 0\}$.
\begin{definition}[intermediate rank]\label{def:int_rk}
\begin{enumerate}[(1)]
\item For a group $\Lambda$, the \emph{rank} of $\Lambda$ is defined by
\[
\rk(\Lambda)=\inf \{ \#S \; | \; \Lambda = \langle S \rangle \} \ \in \ZZ_{\geq 0}\cup \{\infty\}.
\]
\item For a pair $(\Gamma,\Lambda)$ of a group $\Gamma$ and its subgroup $\Lambda$, the \emph{intermediate rank} of $(\Gamma,\Lambda)$ is defined by
\[
\intrk^{\Gamma}(\Lambda)=\inf \{ \rk(\Theta) \; | \; \Lambda \leqslant \Theta\leqslant \Gamma \}\ \in \ZZ_{\geq 0}\cup \{\infty\}.
\]
\item (general rank, \cite{Mal}) For a group $\Gamma$, the \emph{general rank} of $\Gamma$ is defined by
\[
\genrk(\Gamma)=\sup\{\intrk^{\Gamma}(\Lambda)\;|\; \textrm{$\Lambda$ is a finitely generated subgroup of $\Gamma$}\}.
\]
\end{enumerate}
\end{definition}
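To see how $\intrk^{\Gamma}$ can be strictly smaller than $\rk$, here is a small example of our own (not used in the sequel), based on the Nielsen--Schreier formula:

```latex
% Assumed example: \Gamma = F_2 = \langle a, b\rangle, and \Lambda any
% subgroup of index 2. By the Nielsen--Schreier index formula,
\[
\rk(\Lambda) = 1 + [\Gamma : \Lambda]\,\bigl(\rk(\Gamma)-1\bigr) = 3,
\qquad
\intrk^{\Gamma}(\Lambda) \le \rk(\Gamma) = 2,
\]
% the inequality coming from taking \Theta = \Gamma in the definition.
% In fact \intrk^{\Gamma}(\Lambda) = 2, since every intermediate \Theta
% is a non-abelian free group and hence has rank at least 2.
```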
Furthermore, we have our main result, Theorem~\ref{thm:wreath_shin}, which generalizes Theorem~\ref{thm:wreath_new}. To state the theorem, we employ the following terminology.
\begin{definition}\label{def:support}
Let $T$ be a non-empty set and $A$ be an additive group. Let $u\in \bigoplus_{T}A$. We regard $u$ as a map $u\colon T\to A$ such that $u(t)=0$ for all but finitely many $t\in T$.
\begin{enumerate}[$(1)$]
\item The \emph{support} $\supp(u)$ is defined as the set
\[
\supp(u)=\{t\in T \;|\; u(t)\ne 0\}.
\]
\item For a subset $S$ of $T$, we say that $S$ is a \emph{zero-sum set for $u$} if
\[
\sum_{s\in S}u(s)=0.
\]
\end{enumerate}
\end{definition}
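The two notions above are elementary to make concrete. The following sketch (our own toy encoding, not from the paper, with $T=\ZZ^2$ and $A=\ZZ$; elements of $\bigoplus_T A$ are stored as dictionaries with zero entries omitted) checks the support and the zero-sum condition for the element $\delta_{(1,0)}+\delta_{(0,1)}-2\delta_{(0,0)}$:

```python
# Toy encoding (our own, not from the paper): an element u of (+)_T A is a
# dict {t: a} over T = Z^2, A = Z, with zero entries omitted.
u = {(1, 0): 1, (0, 1): 1, (0, 0): -2}

def supp(u):
    """Support of u: the points of T where u is nonzero."""
    return {t for t, a in u.items() if a != 0}

def is_zero_sum(S, u):
    """Is S a zero-sum set for u, i.e. does u sum to 0 over S?"""
    return sum(u.get(s, 0) for s in S) == 0

assert supp(u) == {(1, 0), (0, 1), (0, 0)}
assert is_zero_sum(supp(u), u)               # the whole support is zero-sum
assert not is_zero_sum({(1, 0), (0, 0)}, u)  # a proper subset need not be
```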
\begin{thm}[mixed commutator length on wreath products]\label{thm:wreath_shin}
Let $\Gamma$ be a group. Set $G=\ZZ\wr \Gamma$ and $N=\bigoplus_{\Gamma}\ZZ$.
\begin{enumerate}[$(1)$]
\item Let $\Lambda$ be a subgroup of $\Gamma$. Assume that $x\in N$ fulfills the following three conditions.
\begin{enumerate}[$(i)$]
\item $e_{\Gamma}\in \supp(x)$.
\item $\Lambda$ is a zero-sum set for $x$, namely, $\sum_{\lambda\in \Lambda}x(\lambda)=0$.
\item For every zero-sum set $S$ for $x$ satisfying $S\subseteq \supp(x)$ and $e_{\Gamma}\in S$, we have $\langle S\rangle =\Lambda$.
\end{enumerate}
Then we have
\[
\cl_{G,N}(x)=\intrk^{\Gamma}(\Lambda).
\]
\item We have
\[
\{\cl_{G,N}(x)\;|\; x\in [G,N]\}=\{r\in \ZZ_{\geq 0}\;|\;r\leq \genrk(\Gamma)\}.\]
Furthermore, the following holds true: for every $r\in \NN$ with $r\leq \genrk(\Gamma)$, there exists $x_r\in [G,N]$ such that $\cl_{G,N}(x_r)=r$ and that $x_r$ fulfills conditions $(i)$--$(iii)$ of $(1)$ with $\Lambda=\langle \supp(x_r)\rangle$.
\end{enumerate}
\end{thm}
We note that $\Lambda$ in Theorem~\ref{thm:wreath_shin}~(1) must be finitely generated because $\supp(x)$ is a finite set.
Concerning the comparison with $\cl_{G}$, in \cite[Theorem~7.1]{KKMM1} we showed that if $(\star)$ splits, then
\begin{eqnarray}\label{eq:3bai}
\cl_{G,N}(x)\leq 3\cl_{G}(x)
\end{eqnarray}
holds for every $x\in [G,N]$. In this paper, we improve this bound for the case where $N$ is \emph{abelian} in the following manner.
\begin{thm}\label{thm:split}
Let $(G,N,\Gamma)$ be a triple of groups that fits in the short exact sequence $(\star)$. Assume that $(\star)$ splits and that $N$ is \emph{abelian}. Then, we have
\begin{eqnarray}\label{eq:2bai}
\cl_{G,N}(x)\leq 2\cl_{G}(x)
\end{eqnarray}
for every $x\in [G,N]$.
\end{thm}
Results in this paper show that \eqref{eq:2bai} is \emph{sharp}; see Theorem~\ref{thm:abelian}, Theorem~\ref{thm:surface_new} and Proposition~\ref{prop:cw}. We also note that Theorem~\ref{thm:split} in particular applies to the triple $(G,N,\Gamma)=(\ZZ\wr \Gamma, \bigoplus_{\Gamma}\ZZ, \Gamma)$ for every group $\Gamma$.
\subsection{Coincidence problem of $\cl_G$ and $\cl_{G,N}$}
In the case that $\Gamma$ is abelian, it is easy to compute $\intrk^{\Gamma}(\Lambda)$ and $\genrk(\Gamma)$:
for an abelian group $\Gamma$ and a subgroup $\Lambda$, the intermediate rank $\intrk^{\Gamma}(\Lambda)$ coincides with $\rk(\Lambda)$, and the general rank $\genrk(\Gamma)$ coincides with the \emph{special rank of $\Gamma$}, which is defined as
the supremum of the ranks of all finitely generated subgroups; see Definition~\ref{def:local_rank} and Lemma~\ref{lem:abel}. This allows us to construct examples of pairs $(G,N)$ such that $\cl_G$ and $\cl_{G,N}$ are different.
Before discussing the coincidence problem of $\cl_G$ and $\cl_{G,N}$, we recall from our previous paper \cite{K2M3} that the stabilizations of $\cl_G$ and $\cl_{G,N}$ coincide in several cases. For $y \in [G,G]$ and $x \in [G,N]$, set
\[ \scl_G(y) = \lim_{n \to \infty} \frac{\cl_G(y^n)}{n} \quad \textrm{and} \quad \scl_{G,N}(x) = \lim_{n \to \infty} \frac{\cl_{G,N}(x^n)}{n}.\]
We call $\scl_G(y)$ the \emph{stable commutator length of $y$}, and $\scl_{G,N}(x)$ the \emph{stable mixed commutator length of $x$}. For a comprehensive introduction to the stable commutator lengths, we refer to Calegari's book \cite{Ca}.
By a celebrated result by Bavard (see \cite{Bav} or \cite{Ca}), called the Bavard duality theorem, it is well known that the stable commutator lengths are closely related to the notion of quasimorphisms of groups. In \cite{KKMM1} the authors established the Bavard duality theorem for $\scl_{G,N}$, which implies that $\scl_{G,N}$ are closely related to $G$-invariant quasimorphisms on $N$. Using this, in our previous work, we show the following coincidence result of $\scl_G$ and $\scl_{G,N}$ (see \cite[Proposition~1.6]{KKMM1} and \cite[Theorems~1.9, 1.10 and 2.1]{K2M3}).
\begin{thm}[\cite{KKMM1}, \cite{K2M3}]\label{thm:scl}
In $(\star)$, if $\Gamma$ is solvable and if either of the following conditions
\begin{enumerate}[$(1)$]
\item short exact sequence $(\star)$ \emph{virtually splits}, meaning that there exists a pair $(\Lambda,s_{\Lambda})$ such that $\Lambda$ is a finite index subgroup of $\Gamma$ and $s_{\Lambda}\colon \Lambda\to G$ is a homomorphism satisfying $q\circ s_{\Lambda}=\mathrm{id}_{\Lambda}$;
\item $\HHH^2(G;\RR)=0$;
\item or, $\HHH^2(\Gamma;\RR)=0$
\end{enumerate}
is satisfied, then $\scl_{G}$ coincides with $\scl_{G,N}$ on $[G,N]$.
\end{thm}
Examples of pairs $(G,N)$ such that $\scl_G$ and $\scl_{G,N}$ do not coincide are also known; see \cite{KK} and \cite{MMM}.
Now we consider the coincidence problem of $\cl_G$ and $\cl_{G,N}$. There is no guarantee that $\cl_G$ and $\cl_{G,N}$ coincide even if $\scl_G$ and $\scl_{G,N}$ coincide. Nevertheless, $\cl_G$ and $\cl_{G,N}$ actually coincide in the following case. Here, a group is said to be \emph{locally cyclic} if every finitely generated subgroup is cyclic:
\begin{thm} \label{thm local cyclicity}
For every triple $(G,N,\Gamma)$ fitting in $(\star)$ such that $\Gamma$ is locally cyclic, $\cl_G$ and $\cl_{G,N}$ coincide on $[G,N]$.
\end{thm}
Cyclic groups, $\QQ$ and $\ZZ[1/2]/\ZZ$ are typical examples of locally cyclic groups, and $(\ZZ/2\ZZ)^2$ and $\RR$ are not locally cyclic. Every locally cyclic group is abelian. Note that $\Gamma$ is locally cyclic if and only if the general rank of $\Gamma$ is at most $1$. Hence Theorem~\ref{thm local cyclicity} can be rephrased as follows: if $\genrk (\Gamma) \le 1$, then $\cl_G$ and $\cl_{G,N}$ coincide on $[G,N]$.
Contrastingly, as an application of Theorems \ref{thm:wreath_new} and \ref{thm:wreath_shin}, we construct a pair $(G,N)$ such that $\cl_G$ and $\cl_{G,N}$ do not coincide when $\Gamma$ is abelian but not locally cyclic:
\begin{thm}[result for abelian $\Gamma$]\label{thm:abelian}
Let $\Gamma$ be an abelian group. Set $G=\ZZ\wr \Gamma$ and $N=\bigoplus_{\Gamma}\ZZ$. Then we have
\[
\{(\cl_G(x),\cl_{G,N}(x))\;|\;x\in [G,N]\}=\left\{ \Big( \left\lceil \frac{r}{2}\right\rceil,r \Big)\;|\; r\in \ZZ_{\geq 0},\ r\leq \sperk(\Gamma)\right\},
\]
where $\sperk(\Gamma)$ is the special rank of $\Gamma$, defined in Definition~$\ref{def:local_rank}$.
Here, $\lceil \cdot \rceil$ is the ceiling function.
In particular, if $\Gamma$ is not locally cyclic, then for the pair $(G,N)=(\ZZ\wr \Gamma,\bigoplus_{\Gamma}\ZZ)$, which fits in \emph{split} exact sequence $(\star)$, $\cl_{G}$ and $\cl_{G,N}$ do \emph{not} coincide on $[G,N]$. If moreover $\sperk(\Gamma)=\infty$, then for the same pair $(G,N)$, we have
\[
\sup_{x\in [G,N]}(\cl_{G,N}(x)-C\cdot \cl_G(x))=\infty
\]
for every real number $C<2$.
\end{thm}
In particular, Theorem~\ref{thm:abelian} implies that Theorem~\ref{thm:scl} is no longer true if $\scl_G$ and $\scl_{G,N}$ are replaced with $\cl_G$ and $\cl_{G,N}$, respectively. This means that the coincidence problem of $\cl_G$ and $\cl_{G,N}$ is more subtle than the one of $\scl_G$ and $\scl_{G,N}$.
Theorem~\ref{thm:abelian} provides the following example: for $(G,N)=(\ZZ\wr \RR,\bigoplus_{\RR}\ZZ)$, we have $\sup_{x\in [G,N]}\cl_G(x)=\infty$ but $\scl_G \equiv \scl_{G,N}\equiv 0$ on $[G,N]$; see Example~\ref{example:wreath} for more details.
It is a much more difficult problem to construct a group $G$ with $\sup_{x\in [G,G]}\cl_G(x)=\infty$ but $\scl_{G}\equiv 0$ on $[G,G]$ such that the abelianization $G^{\mathrm{ab}}=G/[G,G]$ is finite.
See \cite{Muranov} and \cite{Mimura} for such examples.
\subsection{The class $\CSD$ and permutational wreath products}\label{subsec:perm}
In the final part of this introduction, we state our result for (possibly) non-abelian $\Gamma$. The main task is to bound $\cl_G(x)$ \emph{from above} in certain good situations. For this purpose, we define a class of groups $\CSD$ for a non-empty set $D$ with
\[
D\subseteq \{(g,r)\in \NN^2\;|\; g+1\leq r\leq 2g\},
\]
which is related to surface groups; see Definition~\ref{def:CSD}.
Here, we consider \emph{permutational} (restricted) wreath products: let $\Gamma$ be a group and $\rho\colon \Gamma \curvearrowright X$ be a $\Gamma$-action on a set $X$. Let $H$ be a group. Then the \emph{permutational wreath product} $H\wr_{\rho}\Gamma$ is the semidirect product $\left(\bigoplus_{X}H\right)\rtimes \Gamma$, where $\Gamma$ acts on $\bigoplus_{X}H$ by shifts via $\rho$.
\begin{thm}[result for permutational wreath products]\label{thm:surface_new}
Let $D$ be a non-empty subset of $\{(g,r)\in \NN^2\;|\; g+1\leq r\leq 2g\}$. Assume that a group $\Gamma$ is a member of $\CSD$, the class defined in Definition~$\ref{def:CSD}$. Then, there exist a group quotient $Q$ of $\Gamma$ and a quotient map $\sigma\colon \Gamma\twoheadrightarrow Q$ such that
\begin{eqnarray}\label{eq:perm}
(G,N)=(\ZZ\wr_{\rho_Q}\Gamma,\bigoplus_{Q}\ZZ)
\end{eqnarray}
satisfies the following condition, where $\rho_Q\colon \Gamma\curvearrowright Q$ is the composition of $\sigma$ and the natural $Q$-action on $Q$ by left multiplication: for every $(g,r)\in D$, there exists $x_{(g,r)}\in [G,N]$ such that
\begin{eqnarray}\label{eq:surface_ineq}
\left\lceil \frac{r}{2}\right\rceil\leq \cl_{G}(x_{(g,r)})\leq g \quad \textrm{and}\quad \cl_{G,N}(x_{(g,r)})=r.
\end{eqnarray}
In particular, $\cl_{G}$ and $\cl_{G,N}$ do \emph{not} coincide on $[G,N]$ for the pair $(G,N)$ in \eqref{eq:perm}, which fits in \emph{split} short exact sequence $(\star)$ above. If for a fixed real number $C\in [1,2)$,
\[
\sup_{(g,r)\in D}(Cr-g)=\infty
\]
holds, then we have
\[
\sup_{x\in [G,N]}(\cl_{G,N}(x)-C\cdot \cl_G(x))=\infty
\]
for the pair $(G,N)$ above.
\end{thm}
See also Proposition~\ref{prop:surface_key}, which is the key to the construction of $x_{(g,r)}$.
By \eqref{eq:surface_ineq}, if $r\in \{2g-1,2g\}$, then we have
\[
\cl_G(x_{(g,r)})=g.
\]
We present several examples of groups in the class $\CSD$. Our example includes the fundamental groups of mapping tori of certain surface diffeomorphisms. For $g\in \NN$, let $\Mod(\Sigma_g)$ be the mapping class group of $\Sigma_g$.
Here, $\Sigma_g$ denotes the closed connected orientable surface of genus $g$.
We have the \emph{symplectic representation} $s_g\colon \Mod(\Sigma_g)\twoheadrightarrow \Sp(2g,\ZZ)$, which is induced by the action of $\Mod(\Sigma_g)$ on the abelianization of $\pi_1(\Sigma_g)$. For an orientation-preserving diffeomorphism $f\colon \Sigma_g\to \Sigma_g$, let $T_f$ denote the mapping torus of $f$; see Subsection~\ref{subsec:mapping} for details. See also Corollary~\ref{cor:free} and Theorem~\ref{thm:mapping_tori} for further examples. Here for (3) of Theorem \ref{thm:ex_CSD}, we use results by Levitt--Metaftsis \cite{LM} and Amoroso--Zannier \cite{AZ}.
\begin{thm}[examples of groups in $\CSD$]\label{thm:ex_CSD}
\begin{enumerate}[$(1)$]
\item Every group $\Gamma$ admitting an abelian subgroup $\Lambda$ that is not locally cyclic is a member of $\CC_{\Surf_{\{(1,2)\}}}$.
\item For every $g\in \NN$, the surface group $\pi_1(\Sigma_g)$ is a member of $\CC_{\Surf_{\{(g,2g)\}}}$. For a non-empty set $J\subseteq \NN$, the free product $\bigast_{g\in J}\pi_1(\Sigma_g)$ is a member of $\CC_{\Surf_{D_J}}$, where $D_J=\{(g,2g)\;|\; g\in J\}$.
\item There exists an effective absolute constant $n_0\in \NN$ such that the following holds. Assume that $\psi\in \Mod(\Sigma_2)$ satisfies that the order of $s_2(\psi)$ is infinite. Let $f\colon \Sigma_2\to \Sigma_2$ be a diffeomorphism on $\Sigma_2$ whose isotopy class is $\psi$. Then for every $n\in \NN$ with $n\geq n_0$, we have
\[
\pi_1(T_{f^n})\in \CC_{\Surf_{\{(2,3)\}}}\cup \CC_{\Surf_{\{(2,4)\}}}.
\]
\item Let $g$ be an integer at least $2$. Assume that $\psi\in \Mod(\Sigma_g)$ satisfies that $s_g(\psi)\in \{\pm I_{2g}\}$, where $I_{2g}$ denotes the identity matrix in $\Sp(2g,\ZZ)$. Let $f$ be a diffeomorphism on $\Sigma_g$ whose isotopy class is $\psi$. Then we have
\[
\pi_1(T_{f})\in \CC_{\Surf_{\{(g,2g)\}}}.
\]
\end{enumerate}
\end{thm}
The present paper is organized as follows. In Section~\ref{sec:proof_new}, we prove Theorems~\ref{thm:wreath_shin} and \ref{thm:split} (hence, Theorem~\ref{thm:wreath_new} as well). In Section~\ref{sec:proof}, we prove Theorems \ref{thm local cyclicity} and \ref{thm:abelian}. In Section~\ref{sec:surface}, we define the class $\CSD$ and prove Theorem~\ref{thm:surface_new}. In Section~\ref{sec:surface_example}, we discuss examples of groups in $\CSD$, including ones from mapping tori of certain surface diffeomorphisms. Theorem~\ref{thm:ex_CSD} is proved there. We make concluding remarks in Section~\ref{sec:remark}: there, we exhibit examples from symplectic geometry (Subsection~\ref{subsec:symp}) and we collect basic properties of general rank and provide some examples of groups of finite general rank (Subsection~\ref{subsec:gen_rk}).
\section{Proofs of Theorem~\ref{thm:wreath_shin} and Theorem~\ref{thm:split}}\label{sec:proof_new}
\subsection{Proof of Theorem~\ref{thm:wreath_shin}}\label{subsec:wreath}
In this subsection, we study $\cl_{G,N}$ for the pair $(G,N)=(A\wr\Gamma,\bigoplus_{\Gamma}A)$, where $A$ is an additive group and $\Gamma$ is a (possibly non-abelian) group. In this setting, we write $(v,\gamma)$ to indicate the corresponding element of $G$, where $v\in N$ and $\gamma\in \Gamma$. Also, the group $\Gamma$ acts on $N$ as the left-regular representation: for $\lambda\in \Gamma$ and for $u\in N$, $\lambda u\in N$ is a function defined as
\[
\lambda u(\gamma):=u(\lambda^{-1}\gamma)
\]
for $\gamma\in \Gamma$. This action is used to define $G=A\wr \Gamma$. We observe that $\Gamma$ can also act on $N$ as the right-regular representation: for $\lambda\in \Gamma$ and for $u\in N$, $u\lambda \in N$ is a function defined as
\[
u\lambda(\gamma):=u(\gamma\lambda^{-1})
\]
for $\gamma\in \Gamma$. We note that these two actions commute.
\begin{lem}\label{lem:commutator}
Let $A$ be an additive group and $\Gamma$ be a group. Let $G=A\wr \Gamma$, and $N=\bigoplus_{\Gamma}A$. Then for every $v,w\in N$ and every $\gamma,\lambda\in \Gamma$, we have
\[
\big[ (v,\gamma), (w, \lambda)\big] = (\gamma w -[\gamma, \lambda]w +v-\gamma \lambda\gamma^{-1}v, [\gamma,\lambda]).
\]
In particular, if $\gamma$ and $\lambda$ commute, then
\begin{eqnarray} \label{eq mixed commutator_general}
\big[ (v,\gamma), (w, \lambda)\big] = (\gamma w - w +v-\lambda v, e_\Gamma).
\end{eqnarray}
Also, we have
\begin{eqnarray} \label{eq mixed commutator}
\big[ (v,\gamma), (w, e_{\Gamma})\big] = (\gamma w - w, e_\Gamma).
\end{eqnarray}
\end{lem}
\begin{proof}
Let $(v,\gamma), (v', \gamma'), (w, \lambda) \in G$. Then we have
\[ (v, \gamma) (v', \gamma') = (v + \gamma v', \gamma \gamma' ), \; (v, \gamma)^{-1} = (- \gamma^{-1} v, \gamma^{-1}).\]
Using these, we have
\begin{eqnarray*}
\big[ (v, \gamma), (w, \lambda) \big] & = & (v, \gamma) (w, \lambda) (- \gamma^{-1} v, \gamma^{-1}) (- \lambda^{-1} w, \lambda^{-1}) \\
& = & (v + \gamma w, \gamma \lambda) (- \gamma^{-1} v, \gamma^{-1})(- \lambda^{-1} w, \lambda^{-1}) \\
& = & (v + \gamma w - \gamma \lambda \gamma^{-1} v , \gamma \lambda \gamma^{-1}) (- \lambda^{-1} w, \lambda^{-1})\\
& = & (v + \gamma w - \gamma \lambda \gamma^{-1} v - [\gamma,\lambda]w , [\gamma,\lambda]),
\end{eqnarray*}
as desired. This immediately implies \eqref{eq mixed commutator_general} and \eqref{eq mixed commutator}.
\end{proof}
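Lemma~\ref{lem:commutator} is a finite computation for each pair of elements, so it can be machine-checked on a small non-abelian $\Gamma$. The sketch below (our own test harness, not from the paper, with $\Gamma=S_3$ acting by the left-regular representation and $A=\ZZ$) verifies the displayed commutator formula for all $36$ pairs $(\gamma,\lambda)$ with two fixed $v,w\in N$:

```python
from itertools import permutations

# Our own test harness (not from the paper): Gamma = S_3, A = Z.
# Permutations are tuples p with p[i] the image of i.
def comp(p, q):                       # (p o q)[i] = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

E = (0, 1, 2)                         # e_Gamma
S3 = list(permutations(range(3)))

# Elements of N = (+)_Gamma Z are dicts {gamma: int}, zeros omitted.
def add(u, w):
    out = dict(u)
    for k, a in w.items():
        out[k] = out.get(k, 0) + a
    return {k: a for k, a in out.items() if a != 0}

def neg(u):
    return {k: -a for k, a in u.items()}

def act(g, u):                        # left-regular action: (g u)(x) = u(g^{-1} x)
    return {comp(g, k): a for k, a in u.items()}

# Group law of G = Z wr S_3: (v, g)(w, h) = (v + g w, g h).
def mul(a, b):
    (v, g), (w, h) = a, b
    return (add(v, act(g, w)), comp(g, h))

def ginv(a):                          # (v, g)^{-1} = (-g^{-1} v, g^{-1})
    v, g = a
    gi = inv(g)
    return (neg(act(gi, v)), gi)

def commutator(a, b):
    return mul(mul(a, b), mul(ginv(a), ginv(b)))

v = {S3[1]: 2, S3[4]: -1}
w = {S3[0]: 3, S3[5]: 1}
for g in S3:
    for l in S3:
        c = comp(comp(g, l), comp(inv(g), inv(l)))            # [g, l]
        # Lemma: [(v,g),(w,l)] = (g w - [g,l] w + v - g l g^{-1} v, [g,l]).
        rhs = add(add(act(g, w), neg(act(c, w))),
                  add(v, neg(act(comp(comp(g, l), inv(g)), v))))
        assert commutator((v, g), (w, l)) == (rhs, c)
```

Here the choice of $v$, $w$ is arbitrary; the identity itself is exact, so any choice passes.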
The following lemma is a key to bounding mixed commutator lengths from below.
\begin{lem}\label{lem:claim}
Let $A$ be an additive group and $\Gamma$ be a group. Let $G=A\wr \Gamma$, and $N=\bigoplus_{\Gamma}A$. Assume that $u\in N$ is written as
\[
u=\sum_{i=1}^k (\gamma_i w_i-w_i)
\]
for some $k\in \NN$, some $\gamma_1,\ldots ,\gamma_k\in \Gamma$ and some $w_1,\ldots ,w_k\in N$. Let $\Lambda=\langle \gamma_1,\ldots ,\gamma_k\rangle$. Then we have
\[
\sum_{\lambda\in\Lambda}u(\lambda)=0.
\]
\end{lem}
\begin{proof}
It suffices to show that
\[ \sum_{\lambda \in \Lambda} (\gamma_i w_i - w_i)(\lambda) = 0 \]
for every $i \in \{ 1, \ldots, k\}$. Since $\gamma_i \in \langle \gamma_1, \ldots, \gamma_k \rangle = \Lambda$, the map $\Gamma \to \Gamma$, $\gamma \mapsto \gamma_i^{-1} \gamma$, sends $\Lambda$ to $\Lambda$ bijectively. Hence
\[ \sum_{\lambda \in \Lambda} (\gamma_i w_i)(\lambda) = \sum_{\lambda \in \Lambda} w_i(\gamma_i^{-1}\lambda) = \sum_{\lambda \in \Lambda} w_i(\lambda), \]
which verifies the assertion above.
\end{proof}
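The zero-sum conclusion of the lemma can also be observed concretely. The sketch below is an illustration only, again in the assumed toy model $A=\ZZ$, $\Gamma=\ZZ/6\ZZ$: for $u=\sum_i(\gamma_i w_i-w_i)$ with random $w_i$, the coordinates of $u$ summed over $\Lambda=\langle \gamma_1,\ldots,\gamma_k\rangle$ vanish.

```python
import random

# Gamma = Z/6Z, a small illustrative model (assumption; the lemma is general).
n = 6

def act(g, v):
    # left action of g on a tuple indexed by Z/nZ
    return tuple(v[(p - g) % n] for p in range(n))

def subgroup(gens):
    # the subgroup of Z/nZ generated by gens, computed by closure
    elems = {0}
    while True:
        new = elems | {(a + g) % n for a in elems for g in gens}
        if new == elems:
            return sorted(elems)
        elems = new

random.seed(0)
for gens in ([2], [3], [2, 3]):
    u = tuple(0 for _ in range(n))
    for g in gens:
        w = tuple(random.randrange(-9, 10) for _ in range(n))
        u = tuple(a + (b - c) for a, (b, c) in zip(u, zip(act(g, w), w)))
    assert sum(u[l] for l in subgroup(gens)) == 0
print("zero-sum over Lambda verified")
```

For instance, with `gens = [2]` the subgroup is $\{0,2,4\}$, and each summand $\gamma w - w$ merely permutes the coordinates of $w$ within each coset, so the coset sums cancel, exactly as in the proof.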
In what follows, we mainly discuss the case where $A=\ZZ$. In this case, we recall from the introduction that $\delta_\lambda \colon \Gamma \to \ZZ$ for $\lambda\in \Gamma$ denotes the Kronecker delta function at $\lambda$. Then, the left and right actions of $\Gamma$ on $N$ are expressed as $\gamma \delta_{\lambda} =\delta_{\gamma\lambda}$ and $\delta_{\lambda}\gamma =\delta_{\lambda\gamma}$ for $\gamma,\lambda\in \Gamma$. The following proposition provides the estimate in Theorem~\ref{thm:wreath_shin}~(1) from below.
\begin{prop}\label{prop:frombelow}
Let $\Gamma$ be a group. Set $G=\ZZ\wr \Gamma$, and $N=\bigoplus_{\Gamma}\ZZ$. Let $\Lambda$ be a subgroup of $\Gamma$. Assume that $x\in N$ fulfills the following two conditions:
\begin{itemize}
\item $e_{\Gamma}\in \supp(x)$.
\item For every zero-sum set $S$ for $x$ satisfying $S\subseteq \supp(x)$ and $e_{\Gamma}\in S$, we have $\langle S \rangle \geqslant \Lambda$.
\end{itemize}
Then, we have
\[
\cl_{G,N}(x)\geq \intrk^{\Gamma}(\Lambda).
\]
Here, we set $\cl_{G,N}(x)=\infty$ if $x\not \in [G,N]$.
\end{prop}
\begin{proof}
If $x\not \in [G,N]$, then the inequality trivially holds. In what follows, we assume that $x\in [G,N]$. Set $l=\cl_{G,N}(x)$. Then, by \eqref{eq mixed commutator}, there exist $\gamma_1, \cdots, \gamma_l \in \Gamma$ and $w_1, \cdots, w_l \in N$ such that
\[
x = \sum_{i=1}^l (\gamma_i w_i - w_i).
\]
Set $\Theta = \langle \gamma_1, \cdots, \gamma_l \rangle$. Then Lemma~\ref{lem:claim} implies that
\begin{eqnarray} \label{eq Theta}
\sum_{\theta \in \Theta} x(\theta) = 0.
\end{eqnarray}
Set $S=\supp(x)\cap \Theta$. Then by \eqref{eq Theta}, $S$ is a zero-sum set for $x$ that satisfies $S\subseteq \supp(x)$ and $e_{\Gamma}\in S$. Hence by assumption, $\langle S\rangle \geqslant \Lambda$ holds. This implies that
\[
\Theta \geqslant \Lambda.
\]
Therefore, we have
\[
l\geq \rk(\Theta)\geq \intrk^{\Gamma}(\Lambda). \qedhere
\]
\end{proof}
The next proposition, in turn, provides the estimate in Theorem~\ref{thm:wreath_shin}~(1) from above.
\begin{prop}\label{prop:fromabove}
Let $\Gamma$ be a group. Let $G=\ZZ\wr \Gamma$, and $N=\bigoplus_{\Gamma}\ZZ$. Let $\Theta$ be a non-trivial finitely generated subgroup of $\Gamma$. Assume that an element $x\in N$ satisfies the following two conditions:
\begin{itemize}
\item $\supp(x)\subseteq \Theta$.
\item $\Theta$ is a zero-sum set for $x$.
\end{itemize}
Then, we have $x\in [G,N]$ and
\[
\cl_{G,N}(x)\leq \rk(\Theta).
\]
\end{prop}
\begin{proof}
Let $r=\rk(\Theta)\in \NN$. Fix elements $\theta_1,\ldots ,\theta_r\in \Theta$ that satisfy $\langle \theta_1,\ldots ,\theta_r\rangle =\Theta$. Define $M$ as
\[
M=\left\{\sum_{i=1}^r (\theta_iw_i-w_i) \; \middle|\; w_1,w_2,\ldots ,w_r\in N \right\}.
\]
By \eqref{eq mixed commutator}, every element $z$ in $M$ satisfies that
\[
\cl_{G,N}(z)\leq r.
\]
In what follows, we will show that $x\in M$.
Observe that $M$ is a $\ZZ$-module equipped with the right $\Gamma$-action $z\mapsto z \gamma$ for $z\in M$ and $\gamma \in \Gamma$. To see that $M$ is closed under the right action, write $z\in M$ as $\sum_{i=1}^r (\theta_iw_i-w_i)$, where $w_1,\ldots,w_r\in N$. Then, since the left and right actions of $\Gamma$ on $N$ commute, we conclude that for every $\gamma\in \Gamma$,
\[
z\gamma=\sum_{i=1}^r (\theta_i(w_i\gamma)-(w_i\gamma))\in M.
\]
We claim that for every $\theta \in \Theta$,
\begin{eqnarray}\label{eq:induction}
\delta_{\theta}-\delta_{e_{\Gamma}}\in M
\end{eqnarray}
holds true. We verify this claim by induction on the word length of $\theta$ with respect to $\{\theta_1,\ldots ,\theta_r\}$. If $\theta=e_{\Gamma}$, then \eqref{eq:induction} trivially holds. If $\theta=\theta_i$ for some $1\leq i\leq r$, then $\delta_{\theta_i}-\delta_{e_{\Gamma}}=\theta_i\delta_{e_{\Gamma}}-\delta_{e_{\Gamma}}\in M$. If $\theta=\theta_i^{-1}$ for some $1\leq i\leq r$, then we have
\[
\delta_{\theta_i^{-1}}-\delta_{e_{\Gamma}}=(\delta_{e_{\Gamma}}-\delta_{\theta_i})\theta_i^{-1}=(\delta_{e_{\Gamma}}-\theta_i\delta_{e_{\Gamma}})\theta_i^{-1}\in M.
\]
In our induction step, take an arbitrary element $\theta\in \Theta$ whose word length with respect to $\{\theta_1,\ldots ,\theta_r\}$ is at least $2$. Write $\theta$ as
\[
\theta= \theta'\lambda,
\]
where $\lambda\in \{\theta_1^{\pm},\ldots ,\theta_r^{\pm}\}$ and the word length of $\theta'$ with respect to $\{\theta_1,\ldots ,\theta_r\}$ is smaller than that of $\theta$. Our induction hypothesis implies that
\[
\delta_{\theta'}-\delta_{e_{\Gamma}}\in M.
\]
We then have
\begin{eqnarray*}
\delta_{\theta}-\delta_{e_{\Gamma}} &=& \delta_{\theta'\lambda}-\delta_{e_{\Gamma}}\\
&=& (\delta_{\theta'\lambda}-\delta_{\lambda})+(\delta_{\lambda}-\delta_{e_{\Gamma}})\\
&=& (\delta_{\theta'}-\delta_{e_{\Gamma}})\lambda+(\delta_{\lambda}-\delta_{e_{\Gamma}})\in M;
\end{eqnarray*}
this ends our proof of the claim.
Finally, we have
\[
x=\sum_{\theta\in \Theta}x(\theta)(\delta_{\theta}-\delta_{e_{\Gamma}})
\]
by assumption. Therefore, by the claim above (see \eqref{eq:induction}) we conclude that
\[
x\in M.
\]
This completes the proof.
\end{proof}
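The induction step above rests on the translation identity $\delta_{\theta'\lambda}-\delta_{e_{\Gamma}}=(\delta_{\theta'}-\delta_{e_{\Gamma}})\lambda+(\delta_{\lambda}-\delta_{e_{\Gamma}})$. The following sketch checks it exhaustively in the toy model $\Gamma=\ZZ/6\ZZ$ (an assumption made only to have a finite group to loop over).

```python
# Exhaustive check of delta_{theta' lambda} - delta_e
#   = (delta_{theta'} - delta_e) . lambda + (delta_lambda - delta_e)
# in the toy model Gamma = Z/6Z (illustrative assumption).
n = 6

def delta(t):
    # Kronecker delta at t, as an n-tuple
    return tuple(1 if p == t % n else 0 for p in range(n))

def right(v, g):
    # right translation: (v.g)(p) = v(p g^{-1})
    return tuple(v[(p - g) % n] for p in range(n))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

ok = all(
    sub(delta(tp + lam), delta(0))
    == add(right(sub(delta(tp), delta(0)), lam), sub(delta(lam), delta(0)))
    for tp in range(n)
    for lam in range(n)
)
assert ok
print("translation identity verified")
```

Since $M$ is closed under right translation, this identity is exactly what lets the induction extend membership in $M$ from shorter words to longer ones.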
To prove Theorem~\ref{thm:wreath_shin}~(2), we employ the following result on general and intermediate ranks. We include the proof for the convenience of the reader.
\begin{lem}\label{lem:gen_rk}
Let $\Gamma$ be a group. Assume that $r\in \NN$ satisfies $r\leq \genrk(\Gamma)$. Then, there exists a finitely generated subgroup $\Lambda_r$ of $\Gamma$ such that
\[
r=\intrk^{\Gamma}(\Lambda_r).
\]
\end{lem}
\begin{proof}
The conclusion holds by definition if $r=\genrk(\Gamma)$. In what follows, we treat the remaining case: in this case, there exists a finitely generated subgroup $\Lambda$ of $\Gamma$ such that $\intrk^{\Gamma}(\Lambda)>r$. Set $s=\intrk^{\Gamma}(\Lambda)$. There exists a subgroup $\Theta$ of $\Gamma$ with $\Lambda\leqslant \Theta\leqslant \Gamma$ and $\rk(\Theta)=s$. Fix a set of generators $\{\theta_1,\ldots,\theta_s\}$ of size $s$ of $\Theta$. Set
\[
\Lambda_r=\langle \theta_1,\ldots,\theta_r\rangle.
\]
We claim that
\begin{eqnarray}\label{eq:r}
\intrk^{\Gamma}(\Lambda_r)=r.
\end{eqnarray}
To prove this claim, suppose that it is not the case. Then, there exists a subgroup $\Theta_r$ of $\Gamma$ with $\Lambda_r\leqslant \Theta_r\leqslant \Gamma$ such that $\rk(\Theta_r)<r$. Set
\[
\overline{\Theta}=\langle \Theta_r \cup \{\theta_{r+1},\ldots ,\theta_s\}\rangle.
\]
Then, we have $\Lambda\leqslant \Theta\leqslant \overline{\Theta}\leqslant \Gamma$ but
\[
\rk(\overline{\Theta})\leq \rk(\Theta_r)+(s-r)<s;
\]
this contradicts $\intrk^{\Gamma}(\Lambda)=s$. Therefore, we obtain \eqref{eq:r}.
\end{proof}
Now we are ready to prove Theorem~\ref{thm:wreath_shin}.
\begin{proof}[Proofs of Theorems~$\ref{thm:wreath_shin}$ and $\ref{thm:wreath_new}$]
First, we show Theorem~\ref{thm:wreath_shin}~(1). Set $r=\intrk^{\Gamma}(\Lambda)$; note that $r\leq k<\infty$. Take $\Theta$ with $\Lambda\leqslant \Theta\leqslant \Gamma$ that satisfies $\rk(\Theta)=r$. Then, by conditions (i), (ii), and (iii), we can apply Propositions~\ref{prop:frombelow} and \ref{prop:fromabove} to $x$. Indeed, since $\supp(x)$ is a zero-sum set with $\supp(x)\subseteq \supp(x)$ and $e_{\Gamma}\in \supp(x)$ by (i) and (ii), condition (iii) implies that
\[
\supp(x)\subseteq \langle\supp(x)\rangle =\Lambda.
\]
Then, these propositions provide the estimates
\[
\cl_{G,N}(x)\geq r\quad \textrm{and}\quad \cl_{G,N}(x)\leq r,
\]
respectively. Therefore, we obtain the conclusion of Theorem~\ref{thm:wreath_shin}~(1).
Next, we will prove Theorem~\ref{thm:wreath_shin}~(2). Let $r\in \NN$ satisfy $r\leq \genrk(\Gamma)$. By Lemma~\ref{lem:gen_rk}, there exists a finitely generated subgroup $\Lambda_r$ of $\Gamma$ such that $\intrk^{\Gamma}(\Lambda_r)=r$. Take a generating set $\{\lambda_1,\ldots ,\lambda_r\}$ of $\Lambda_r$ of size $r$. Set
\[
x_r=\sum_{i=1}^r \delta_{\lambda_i}-r\delta_{e_{\Gamma}}\in N.
\]
Then, this $x_r$ fulfills conditions (i)--(iii) of Theorem~\ref{thm:wreath_shin}~(1) with $\Lambda$ being $\Lambda_r=\langle \supp(x_r)\rangle$. In particular, Theorem~\ref{thm:wreath_shin}~(1) implies that
\[
\cl_{G,N}(x_r)=r.
\]
The rest is to show that
\begin{eqnarray}\label{eq:reversed_incl}
\{\cl_{G,N}(x)\;|\; x\in [G,N]\}\subseteq \{r\in \ZZ_{\geq 0}\;|\; r\leq \genrk(\Gamma)\}.
\end{eqnarray}
Take an arbitrary $x\in [G,N]$. Then $\supp(x)$ is finite, and it is a zero-sum set for $x$. Define $\Lambda_x:=\langle \supp(x)\rangle$, which is a finitely generated subgroup of $\Gamma$. Then, Proposition~\ref{prop:fromabove} implies that $\cl_{G,N}(x)\leq \intrk^{\Gamma}(\Lambda_x)$. Since $\intrk^{\Gamma}(\Lambda_x)\leq \genrk(\Gamma)$, we obtain \eqref{eq:reversed_incl}. This completes the proof of Theorem~\ref{thm:wreath_shin}~(2).
Finally, Theorem~\ref{thm:wreath_new} immediately follows from Theorem~\ref{thm:wreath_shin}~(1).
\end{proof}
Theorem~\ref{thm:wreath_shin}~(2) (former assertion) can be generalized to the following setting. Here, recall from Subsection~\ref{subsec:perm} the definition of permutational wreath products.
\begin{prop}\label{prop:wreath_perm}
Let $\Gamma$ be a group. Let $Q$ be a group quotient of $\Gamma$ and $\sigma\colon \Gamma \twoheadrightarrow Q$ be a group quotient map. Let $\rho_Q\colon \Gamma\curvearrowright Q$ be the $\Gamma$-action defined by the composition of $\sigma$ and the natural $Q$-action $Q\curvearrowright Q$ by left multiplication. Set
\[
(G,N)=(\ZZ\wr_{\rho_Q}\Gamma,\bigoplus_{Q}\ZZ).
\]
Then we have
\[
\{\cl_{G,N}(x)\;|\;x\in [G,N]\}=\{r\in \ZZ_{\geq 0}\;|\; r\leq \genrk(Q)\}.
\]
\end{prop}
\begin{proof}
For $v\in N$ and $\gamma\in \Gamma$, write $(v,\gamma)$ to indicate the corresponding element of $G$. Note that $N$ admits a natural left $Q$-action. Then, a computation similar to the one in Lemma~\ref{lem:commutator} shows that for every $v,w\in N$ and every $\gamma\in \Gamma$,
\[
\big[ (v,\gamma), (w, e_{\Gamma})\big] = (\sigma(\gamma) w - w , e_{\Gamma}).
\]
Observe that $N$ admits the right $Q$-action $(vq)(p):=v(pq^{-1})$ for $v\in N$ and $p,q\in Q$, which commutes with the left $Q$-action. Therefore, the proofs of Propositions~\ref{prop:frombelow} and \ref{prop:fromabove} carry over to our setting.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:split}}\label{subsec:split}
Here we prove Theorem~\ref{thm:split}. The proof goes along similar lines to that of \cite[Theorem~7.1]{KKMM1}; we include it for the convenience of the reader. The following lemma is the key (see also \cite[Lemma~7.4]{KKMM1}).
\begin{lem}\label{lem:split}
Let $q\colon G\twoheadrightarrow \Gamma$ be a group quotient map and $N=\Ker(q)$. Assume that $N$ is \emph{abelian}.
Let $k\in \NN$. Let $f_1, \cdots, f_k, g_1, \cdots, g_k, a_1, \cdots, a_k, b_1, \cdots, b_k \in G$ with
\[
q(f_i) = q(a_i)
\quad \textrm{and}\quad
q(g_i) = q(b_i)
\]
for every $ 1\leq i \leq k$.
Then, $[f_1, g_1] \cdots [f_k, g_k] ([a_1, b_1] \cdots [a_k, b_k])^{-1}$ is contained in $[G,N]$. Moreover, the following inequality holds:
\[
\cl_{G,N}\Big( [f_1,g_1] \cdots [f_k,g_k] \big( [a_1, b_1] \cdots [a_k, b_k] \big)^{-1} \Big) \leq 2k.
\]
\end{lem}
\begin{proof}
We prove this lemma by induction on $k$. The case $k = 0$ is obvious. Suppose that $k = 1$, and set $f = f_1$, $g = g_1$, $a = a_1$, and $b = b_1$. Then, there exist $v_1, v_2 \in N$ such that $f = a v_1$ and $g = b v_2$. Then we have
\begin{eqnarray*}
[f,g][a, b]^{-1} & = & [a v_1, b v_2] [a, b]^{-1} \\
& = & a v_1 b v_2 v_1^{-1} a^{-1} v_2^{-1} b^{-1} b a b^{-1} a^{-1} \\
& = & a v_1 b v_2 v_1^{-1} a^{-1} v_2^{-1} a b^{-1} a^{-1} \\
& = & a b (b^{-1} v_1 b v_1^{-1} v_1 v_2 v_1^{-1} v_2^{-1} v_2 a^{-1} v_2^{-1} a) b^{-1} a^{-1} \\
& = & (a b) [b^{-1}, v_1] [v_2, a^{-1}] (a b)^{-1}.
\end{eqnarray*}
Here, recall that $N$ is assumed to be abelian and hence that $v_1v_2v_1^{-1}v_2^{-1}=e_G$.
Note that $\cl_{G,N}$ is $G$-invariant, meaning that
\[
\cl_{G,N}(x)=\cl_{G,N}(zxz^{-1})
\]
for every $z\in G$ and every $x\in N$: this is because
\[
z[g,v]z^{-1}=[zgz^{-1},zvz^{-1}]
\]
for every $z,g\in G$ and every $v\in N$.
Therefore, we have $\cl_{G,N}([f,g] [a, b]^{-1}) \le 2$. Now, we proceed to the induction step. Suppose that $k \geq 2$, and set
\[
\xi = [a_1, b_1] \cdots [a_{k-1}, b_{k-1}].
\]
Then we have
\[
[f_1, g_1] \cdots [f_k, g_k] ([a_1, b_1] \cdots [a_k, b_k])^{-1} = ([f_1, g_1] \cdots [f_{k-1}, g_{k-1}] \xi^{-1}) \cdot (\xi [f_k, g_k] [a_k, b_k]^{-1} \xi^{-1}).
\]
By $G$-invariance of $\cl_{G,N}$ and the inductive hypothesis, we conclude that
\[
\cl_{G,N} \Big( [f_1, g_1] \cdots [f_k, g_k] ([a_1, b_1] \cdots [a_k, b_k])^{-1}\Big) \le 2(k-1)+2=2k.
\]
This completes the proof.
\end{proof}
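The base-case identity $[av_1,bv_2][a,b]^{-1}=(ab)[b^{-1},v_1][v_2,a^{-1}](ab)^{-1}$ used only the group laws together with $v_1v_2=v_2v_1$. As a sanity check, the sketch below verifies it for all choices in a concrete example, $G=S_3$ with the abelian normal subgroup $N=A_3$ (a choice made purely for illustration).

```python
from itertools import permutations

# Check the base-case identity of the lemma in G = S_3 with N = A_3.
def compose(p, q):  # group operation: (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(x, y):
    return compose(compose(x, y), compose(inverse(x), inverse(y)))

S3 = list(permutations(range(3)))
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # identity and the two 3-cycles

for a in S3:
    for b in S3:
        for v1 in A3:
            for v2 in A3:
                f, g = compose(a, v1), compose(b, v2)
                lhs = compose(comm(f, g), inverse(comm(a, b)))
                ab = compose(a, b)
                rhs = compose(
                    compose(compose(ab, comm(inverse(b), v1)), comm(v2, inverse(a))),
                    inverse(ab),
                )
                assert lhs == rhs
print("base-case identity verified")
```

All $6\cdot 6\cdot 3\cdot 3$ instances pass, reflecting that the only non-formal step in the computation was the cancellation $v_1v_2v_1^{-1}v_2^{-1}=e_G$, which holds since $A_3$ is abelian.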
\begin{proof}[Proof of Theorem~$\ref{thm:split}$]
In the split exact sequence $(\star)$,
take a homomorphism $s\colon \Gamma\to G$ such that $q\circ s={\rm id}_{\Gamma}$.
Take an arbitrary element $x$ in $[G,N]$. Since $[G,N]\leqslant [G,G]$, we may write $x=[f_1,g_1]\cdots [f_k,g_k]$ with $k=\cl_{G}(x)$. Applying Lemma~\ref{lem:split} with $a_i=(s\circ q)(f_i)$ and $b_i=(s\circ q)(g_i)$ for every $1\leq i\leq k$, we obtain
\[
\cl_{G,N}\big(x \cdot ((s\circ q)(x))^{-1}\big)\leq 2\cl_{G}(x).
\]
Here, note that $s\circ q\colon G\to G$ is a group homomorphism, so that $(s\circ q)(x)=[a_1,b_1]\cdots [a_k,b_k]$. Moreover, $(s\circ q)(x)=e_{G}$ since $[G,N]\leqslant N=\Ker(q)$. Therefore, we conclude that
\[
\cl_{G,N}(x)\leq 2\cl_{G}(x),
\]
as desired.
\end{proof}
\section{The case of abelian $\Gamma$}\label{sec:proof}
The goal of this section is to prove Theorems~\ref{thm local cyclicity} and \ref{thm:abelian}.
In this section, we focus on the case where $\Gamma$ is abelian. First, we recall the definition of \emph{special rank} of groups in the sense of Malcev \cite{Mal}.
\begin{definition}[\cite{Mal}]\label{def:local_rank}
For a group $\Gamma$, the \emph{special rank} of $\Gamma$ is defined by
\[
\sperk(\Gamma)=\sup \{ \rk(\Lambda) \; | \; \textrm{$\Lambda$ is a finitely generated subgroup of $\Gamma$} \}\ \in \ZZ_{\geq 0}\cup \{\infty\}.
\]
\end{definition}
For a group pair $(\Gamma,\Lambda)$ with $\Gamma\geqslant \Lambda$, computation of $\intrk^{\Gamma}(\Lambda)$ is easy if $\Gamma$ is abelian.
\begin{lem}\label{lem:abel}
Let $(\Gamma,\Lambda)$ be a pair of a group $\Gamma$ and its subgroup $\Lambda$. Assume that $\Gamma$ is abelian. Then we have that
\[
\intrk^{\Gamma}(\Lambda)= \rk(\Lambda).
\]
In particular, we have that
\[
\genrk(\Gamma)=\sperk(\Gamma).
\]
\end{lem}
\begin{proof}
The inequality $\intrk^{\Gamma}(\Lambda)\leq \rk(\Lambda)$ holds in general. Take a group $\Theta$ with $\Lambda\leqslant \Theta\leqslant \Gamma$. If $\rk(\Theta)=\infty$, then $\rk(\Lambda)\leq \rk(\Theta)=\infty$. If $\rk(\Theta)<\infty$, then the classification of finitely generated abelian groups implies
\[
\rk(\Theta)\geq \rk(\Lambda).
\]
Hence, in either case we have $\rk(\Theta)\geq \rk(\Lambda)$. Since $\Theta$ was an arbitrary subgroup with $\Lambda\leqslant \Theta\leqslant \Gamma$, we have the reversed inequality
\[
\intrk^{\Gamma}(\Lambda)\geq \rk(\Lambda)
\]
as well. This ends our proof.
\end{proof}
See Subsection~\ref{subsec:gen_rk} for more details on general ranks and special ranks.
In what follows, we will prove Theorem~\ref{thm local cyclicity}, which states that $\cl_G$ and $\cl_{G,N}$ coincide on $[G,N]$ when $\Gamma$ is locally cyclic.
Our proof of Theorem~\ref{thm local cyclicity} employs the following lemma.
\begin{lem}\label{lem:commcomp}
Let $G$ be a group and let $g,h \in G$. Then the following hold:
\begin{enumerate}[$(1)$]
\item If $x \in G$ commutes with $g$, then $[g, hx] = [g,h]$. In particular, we have $[g,h] = [g, hg^k]$ for every $k \in \ZZ$.
\item If $y \in G$ commutes with $h$, then $[gy, h] = [g,h]$. In particular, we have $[g,h] = [gh^k,h]$ for every $k \in \ZZ$.
\end{enumerate}
\end{lem}
\begin{proof}
Computation shows that
\[ [g,hx] = ghx g^{-1} x^{-1} h^{-1} = ghg^{-1} h^{-1} = [g,h],\]
\[ [gy, h] = gyh y^{-1} g^{-1} h^{-1} = ghg^{-1} h^{-1} = [g,h].\qedhere\]
\end{proof}
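Lemma~\ref{lem:commcomp} can likewise be checked by brute force in a small nonabelian group; the sketch below (an illustration, with $G=S_3$ and $x=g^k$, $y=h^k$ as the commuting elements) confirms both parts.

```python
from itertools import permutations

# Check [g, h x] = [g, h] for x commuting with g, and symmetrically,
# in G = S_3, taking x = g^k (which always commutes with g).
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(x, y):
    return compose(compose(x, y), compose(inverse(x), inverse(y)))

def power(p, k):
    e = tuple(range(len(p)))
    for _ in range(k):
        e = compose(e, p)
    return e

S3 = list(permutations(range(3)))
for g in S3:
    for h in S3:
        for k in range(6):
            assert comm(g, compose(h, power(g, k))) == comm(g, h)  # part (1)
            assert comm(compose(g, power(h, k)), h) == comm(g, h)  # part (2)
print("Lemma verified in S_3")
```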
\begin{remark} \label{remark left right}
Using Lemma~\ref{lem:commcomp}, we have for every $g\in G$ and every $x\in N$,
\[ [g,x] = [g, xg^{-1}] = [gxg^{-1}, x g^{-1}], \; [x,g] = [xg^{-1}, g] = [xg^{-1}, gxg^{-1}].\]
These mean that $c \in G$ is a $(G,N)$-commutator if and only if there exist $g \in G$ and $x \in N$ such that $c = [x,g]$.
\end{remark}
\begin{proof}[Proof of Theorem~$\ref{thm local cyclicity}$]
Let $g,h \in G$.
We show that $[g,h]$ is a $(G,N)$-commutator.
Let $q$ denote the quotient map $G \to \Gamma$, and write $\bar{x}$ instead of $q(x)$ for $x \in G$. If $(\bar{g},\bar{h}) = (e_\Gamma,e_\Gamma)$, then $g,h\in N$ and there is nothing to show; hence we may assume that $(\bar{g},\bar{h})\neq (e_\Gamma,e_\Gamma)$. Since $\Gamma$ is locally cyclic and $\langle \bar{g}, \bar{h} \rangle$ is non-trivial, there exists an isomorphism $f$ from $\langle \bar{g}, \bar{h} \rangle$ to $\ZZ$ or to $\ZZ/k\ZZ$ for some $k \geq 2$.
We first treat the case where $\langle \bar{g}, \bar{h} \rangle$ is isomorphic to $\ZZ$. We now show the following claim:
\bigskip \noindent
{\bf Claim~1.} Assume that $g_0$ and $h_0$ are elements in $G$ such that $g_0, h_0 \in \langle g, h \rangle$ and $[g_0, h_0] = [g,h]$. If $\min \{ |f (\bar{g}_0)|, |f(\bar{h}_0)| \} > 0$, then there exist $g_1 , h_1 \in G$ such that $g_1, h_1 \in \langle g, h \rangle$, $[g_1, h_1] = [g,h]$, and
\[ \min \{ |f(\bar{g}_0)|, |f(\bar{h}_0)|\} > \min \{ |f(\bar{g}_1)|, |f(\bar{h}_1)|\}. \]
Now we start the proof of Claim~1. Set $m = |f(\bar{g}_0)|$ and $n = |f(\bar{h}_0)|$. Suppose $m \ge n$. Then there exist $k,r \in \ZZ$ such that $m = nk + r$ and $|r| < |n|$.
Set $g_1 = g_0 h_0^{-k}$ and $h_1 = h_0$. By Lemma \ref{lem:commcomp}, we have
\[ [g,h] = [g_0, h_0] = [g_0 h_0^{-k}, h_0] = [g_1, h_1].\]
Then $f(\bar{g}_1) = f( q (g_0 h_0^{-k})) = m - kn = r$. Thus we have
\[ \min \{ |f(\bar{g}_1)|, |f(\bar{h}_1)|\} = \min \{ |r|, |n|\} = |r| < |n| = \min \{ |m|, |n|\}.\]
The case $m \le n$ is proved in a similar manner. This completes the proof of our claim.
Starting with $g_0 = g$ and $h_0 = h$ and applying Claim~1 iteratively, we obtain $g', h' \in G$ such that $[g,h] = [g', h']$ and one of $g'$ and $h'$ belongs to $N$. Hence Remark~\ref{remark left right} completes the proof in the case where $\langle \bar{g}, \bar{h} \rangle$ is isomorphic to $\ZZ$.
Next we consider the case where $f$ is an isomorphism from $\langle \bar{g}, \bar{h} \rangle$ to $\ZZ/k\ZZ$.
Let $\tilde{f}$ denote the composition $\langle \bar{g}, \bar{h} \rangle \to \ZZ / k \ZZ \to \{ 0,1, \ldots, k-1\}$, where the second map is the inverse of the bijection $\{ 0,1, \ldots, k-1\} \to \ZZ/k\ZZ$ induced by the natural projection $\ZZ \to \ZZ/k\ZZ$.
In a similar manner to Claim~1, we can show the following:
\bigskip \noindent
{\bf Claim~2.} Assume that $g_0$ and $h_0$ are elements in $G$ such that $g_0, h_0 \in \langle g, h \rangle$ and $[g_0, h_0] = [g,h]$. If $\min \{ \tilde{f}(\bar{g}_0), \tilde{f}(\bar{h}_0)\} > 0$, then there exist $g_1, h_1 \in G$ such that $g_1, h_1 \in \langle g, h \rangle$, $[g_1, h_1] = [g,h]$ and
\[ \min \{ \tilde{f}(\bar{g}_0), \tilde{f}(\bar{h}_0)\} > \min \{ \tilde{f}(\bar{g}_1), \tilde{f}(\bar{h}_1)\}.\]
Using Claim 2 iteratively, we can obtain elements $g',h' \in G$ such that $[g,h] = [g', h']$ and one of $g'$ and $h'$ belongs to $N$. Remark~\ref{remark left right} completes the proof.
\end{proof}
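The iteration of Claim~1 is, in effect, the Euclidean algorithm applied to the pair $(|f(\bar{g})|,|f(\bar{h})|)$. The following toy sketch (not part of the proof; the function name is ours) traces this reduction and confirms that it terminates with one entry equal to $0$, i.e. with one of the two elements lying in $N$.

```python
from math import gcd

def claim1_orbit(m, n):
    """Iterate the replacement of Claim 1 on a pair of nonnegative f-values:
    replace the larger entry by its remainder modulo the smaller, mirroring
    g_1 = g_0 h_0^{-k}, until one entry reaches 0."""
    steps = [(m, n)]
    while min(m, n) > 0:
        if m >= n:
            m %= n
        else:
            n %= m
        steps.append((m, n))
    return steps

steps = claim1_orbit(8, 5)
assert 0 in steps[-1]               # one element ends up in N = Ker(q)
assert max(steps[-1]) == gcd(8, 5)  # the survivor generates the image of <g, h>
print(steps)
```

Termination is guaranteed because the minimum of the pair strictly decreases at each step, exactly as in Claim~1.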
Now we proceed to the proof of Theorem~\ref{thm:abelian}.
\begin{proof}[Proof of Theorem~$\ref{thm:abelian}$]
First we claim that
\[
\cl_{G}(x)=\left\lceil \frac{\cl_{G,N}(x)}{2}\right\rceil
\]
for every $x\in [G,N]$. Indeed, since $\Gamma$ is abelian, \eqref{eq mixed commutator_general} shows that every single commutator of elements of $G$ lying in $N$ is the sum of two terms of the form appearing in \eqref{eq mixed commutator}, and conversely any two such terms $(\gamma w-w)+(v-\lambda v)$ are realized by the single commutator $[(v,\gamma),(w,\lambda)]$. Hence $\cl_{G,N}(x)\leq 2\cl_{G}(x)$ and $\cl_{G}(x)\leq \lceil \cl_{G,N}(x)/2\rceil$, which together yield the claimed equality. Therefore, Theorem~\ref{thm:wreath_shin}~(2) and Lemma~\ref{lem:abel} complete our proof.
\end{proof}
\begin{remark}
For a fixed prime number $p$, let $A = \bigoplus_{n \in \NN} ( \ZZ / p^n \ZZ )$. Then, in the setting of Theorem~\ref{thm:abelian}, we may replace $(G,N)=(\ZZ\wr \Gamma, \bigoplus_{\Gamma}\ZZ)$ with $(G,N) = (A \wr \Gamma, \bigoplus_{\Gamma}A)$.
This new pair $(G,N)$ satisfies the same conclusion as in Theorem~\ref{thm:abelian}. This provides an example with $N$ being \emph{locally finite}, meaning that every finitely generated subgroup of $N$ is finite.
\end{remark}
\begin{example}\label{example:wreath}
Set $(G,N) = (\ZZ \wr \RR, \bigoplus_{\RR} \ZZ)$, or $(G,N) = (\ZZ \wr \overline{\QQ}^{\rm alg},\bigoplus_{\overline{\QQ}^{\rm alg}} \ZZ)$. Here $\overline{\QQ}^{\rm alg}$ denotes the algebraic closure of $\QQ$.
We note that $G$ is countable in the latter example.
Then we claim that
\begin{eqnarray}\label{eq:cl}
\sup_{x\in [G,N]}\cl_{G}(x)=\infty \quad \textrm{and}\quad \sup_{x\in [G,N]}\cl_{G,N}(x)=\infty
\end{eqnarray}
and that
\begin{eqnarray}\label{eq:scl}
\scl_{G}\equiv 0 \ \textrm{on}\ [G,N] \quad \textrm{and}\quad \scl_{G,N}\equiv 0 \ \textrm{on}\ [G,N].
\end{eqnarray}
Indeed, Theorem~\ref{thm:abelian} implies \eqref{eq:cl} because $\sperk(\RR)=\sperk(\overline{\QQ}^{\rm alg})=\infty$. Lemma~\ref{lem:commutator} implies that
\[
\big[ (nv,\gamma), (nw, e_{\Gamma})\big] = \big[ (v,\gamma), (w, e_{\Gamma})\big]^n\]
for every $v,w\in N$, every $\gamma\in \Gamma$ and every $n\in \NN$. From this together with commutativity of $N$, we can deduce that for every $x\in [G,N]$,
\[
\sup_{n\in \NN}\cl_{G}(x^n)\leq \sup_{n\in \NN}\cl_{G,N}(x^n)=\cl_{G,N}(x)<\infty.
\]
Therefore, we obtain \eqref{eq:scl}.
One remark is that we can deduce \eqref{eq:scl} from the Bavard duality theorem for $\scl_{G,N}$ (\cite[Theorem~1.2]{KKMM1}). Indeed, since $N$ is abelian, the Bavard duality theorem for $\scl_{G,N}$ implies that $\scl_{G,N}\equiv 0$ on $[G,N]$. Hence, for every $x\in [G,N]$, we have $\scl_G(x)\leq \scl_{G,N}(x)=0$.
\end{example}
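The displayed power identity $\big[(nv,\gamma),(nw,e_{\Gamma})\big]=\big[(v,\gamma),(w,e_{\Gamma})\big]^n$ can also be checked numerically. The sketch below is an illustration in the assumed toy model $\ZZ\wr(\ZZ/6\ZZ)$; the modulus is written `m` so that `n` can play its role from the identity as the exponent.

```python
import random

# Check [ (n v, g), (n w, e) ] = [ (v, g), (w, e) ]^n in Z wr Z/6Z
# (illustrative toy model; the identity holds in general).
m = 6

def act(g, v):
    return tuple(v[(p - g) % m] for p in range(m))

def mul(x, y):
    (v, g), (w, h) = x, y
    return (tuple(a + b for a, b in zip(v, act(g, w))), (g + h) % m)

def inv(x):
    v, g = x
    return (act((-g) % m, tuple(-a for a in v)), (-g) % m)

def comm(x, y):
    return mul(mul(x, y), mul(inv(x), inv(y)))

def power(x, k):
    e = (tuple(0 for _ in range(m)), 0)
    for _ in range(k):
        e = mul(e, x)
    return e

random.seed(2)
v = tuple(random.randrange(-4, 5) for _ in range(m))
w = tuple(random.randrange(-4, 5) for _ in range(m))
g = 2
for k in range(1, 6):
    kv = tuple(k * a for a in v)
    kw = tuple(k * a for a in w)
    assert comm((kv, g), (kw, 0)) == power(comm((v, g), (w, 0)), k)
print("power identity verified")
```

This is the computational content behind \eqref{eq:scl}: the mixed commutator length of $x^n$ stays bounded, so the stable quantities vanish.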
Here, we provide one application of Theorem~\ref{thm local cyclicity}, which is an improvement of our previous work in \cite{KKMM1}.
\begin{example}\label{ex:braid}
For $n \ge 2$, recall that the \emph{Artin braid group $B_n$ with $n$ strands} is defined to be the group generated by $n-1$ generators $\sigma_1, \cdots, \sigma_{n-1}$ with the following relations:
\[ \sigma_i \sigma_j = \sigma_j \sigma_i\]
for all $i, j \in \{1,2,\ldots ,n-1\}$ satisfying $|i - j| \ge 2$ and
\[ \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}\]
for every $1\leq i\leq n-2$. For the foundation of braid groups, see \cite{KTbook} for example. Set
\[
(G,N)=(B_n,[B_n,B_n]).
\]
Then $\Gamma=G/N$ is isomorphic to $\ZZ$, and hence $(\star)$ splits for this triple $(G,N,\Gamma)$. In particular, \eqref{eq:3bai} provides that $\cl_{G,N}(x)\leq 3\cl_{G}(x)$ for every $x\in [G,N]$: this estimate was obtained in \cite[Example 7.14]{KKMM1}. In fact, Theorem~\ref{thm local cyclicity} implies that the genuine equality
\[
\cl_{G,N}(x)=\cl_{G}(x)
\]
holds for every $x\in [G,N]$.
\end{example}
\section{The class $\CSD$}\label{sec:surface}
In this section, we define a class $\CSD$ of groups and prove Theorem~\ref{thm:surface_new}. Recall that $\NN=\{1,2,3,\ldots\}$ in this paper.
\subsection{Definition of $\CSD$}\label{subsec:classes}
We first introduce the following notion.
\begin{definition}[$\pi_1(\Sigma_g)$-triple]\label{def:triple}
Let $g\in \NN$. Let $\Gamma$ be a group. A \emph{$\pi_1(\Sigma_g)$-triple for $\Gamma$} is defined to be a triple $(\phi,Q,\sigma)$ of a group homomorphism $\phi\colon \pi_1(\Sigma_g)\to \Gamma$, a group quotient $Q$ of $\Gamma$ and a group quotient map $\sigma\colon \Gamma\to Q$ such that $P:=(\sigma\circ \phi)(\pi_1(\Sigma_g))$ is abelian.
\end{definition}
\begin{remark}\label{rem:notinj}
In Definition~\ref{def:triple}, the homomorphism $\phi$ is \emph{not} required to be injective.
\end{remark}
We note that in Definition~\ref{def:triple},
\[
\intrk^Q(P)\leq \rk(P)\leq \rk(\pi_1(\Sigma_g))=2g.
\]
\begin{definition}[class $\CSD$]\label{def:CSD}
Let $D$ be a non-empty subset of
\[
\{(g,r)\in \NN^2\;|\; g+1\leq r\leq 2g\}.
\]
Then, the class $\CSD$ is defined as the class of all groups $\Gamma$ that satisfy the following condition: there exist a group quotient $Q$ of $\Gamma$ and a quotient map $\sigma\colon \Gamma\twoheadrightarrow Q$ such that for every $(g,r)\in D$, there exists $\phi_{(g,r)}\colon \pi_1(\Sigma_g) \to \Gamma$ such that $(\phi_{(g,r)},Q,\sigma)$ is a $\pi_1(\Sigma_g)$-triple for $\Gamma$ satisfying
\begin{eqnarray}\label{eq:ranksurf}
\intrk^{Q}(P_{(g,r)})=r,
\end{eqnarray}
where $P_{(g,r)}=(\sigma\circ \phi_{(g,r)})(\pi_1(\Sigma_g))$.
\end{definition}
By definition, we have
\begin{eqnarray}\label{eq:bigcap}
\CSD\subseteq \bigcap_{(g,r)\in D}\CC_{\Surf_{\{(g,r)\}}}.
\end{eqnarray}
The following proposition asserts that a weaker form of the reversed inclusion also holds.
\begin{prop}\label{prop:reversed}
Let $D\subseteq \{(g,r)\in \NN^2\;|\; g+1\leq r\leq 2g\}$ be non-empty.
Let $\Gamma \in \bigcap_{(g,r)\in D}\CC_{\Surf_{\{(g,r)\}}}$.
Then there exists $D_{\Gamma}\subseteq \{(g,t)\in \NN^2\;|\; g+1\leq t\leq 2g\}$ such that the following hold true:
\begin{itemize}
\item $\Gamma \in \CC_{\Surf_{D_{\Gamma}}}$,
\item for every $(g,r) \in D$, there exists $r'$ with $r \leq r' \leq 2g$ such that $(g,r') \in D_{\Gamma}$.
\end{itemize}
\end{prop}
The following lemma is immediate by definition; it is employed in the proof of Proposition~\ref{prop:reversed}.
\begin{lem}\label{lem:quotient}
Let $(\Gamma,\Lambda)$ be a pair of a group $\Gamma$ and its subgroup $\Lambda$. Let $\tau\colon \Gamma\twoheadrightarrow \tau(\Gamma)$ be a group quotient map. Then we have
\[
\intrk^{\tau(\Gamma)}(\tau(\Lambda))\leq \intrk^{\Gamma}(\Lambda).
\]
\end{lem}
\begin{proof}[Proof of Proposition~$\ref{prop:reversed}$]
Let $(g,r)\in D$. Take a $\pi_1(\Sigma_g)$-triple $(\phi_{(g,r)},Q_{(g,r)},\sigma_{(g,r)})$ for $\Gamma$ as in the definition of $\CC_{\Surf_{\{(g,r)\}}}$. From $(\sigma_{(g,r)}\colon \Gamma\twoheadrightarrow Q_{(g,r)})_{{(g,r)}\in D}$, we will construct $(Q, \sigma)$ in the following manner. Define $Q_D$ and $\sigma_D\colon \Gamma\to Q_D$ by
\[
Q_D=\prod_{(g,r)\in D}Q_{(g,r)},\quad \sigma_D\colon \gamma\mapsto (\sigma_{(g,r)}(\gamma))_{(g,r)\in D}.
\]
Set $Q:=\sigma_D(\Gamma)$ and let $\sigma\colon \Gamma\twoheadrightarrow Q$ be the surjective map defined from $\sigma_D$.
Then, for each $(g,r)\in D$, applying Lemma~\ref{lem:quotient} to the projection $Q\twoheadrightarrow Q_{(g,r)}$ shows that $\intrk^Q((\sigma \circ \phi_{(g,r)})(\pi_1(\Sigma_g))) \geq \intrk^{Q_{(g,r)}}(P_{(g,r)})=r$; since this intermediate rank is also at most $\rk(\pi_1(\Sigma_g))=2g$, there exists $r' = r'_{(g,r)}$ with $r \leq r' \leq 2g$ such that $\intrk^Q((\sigma \circ \phi_{(g,r)})(\pi_1(\Sigma_g))) = r'$.
Finally, set
\[
D_{\Gamma} = \{ (g, r'_{(g,r)})\;|\; (g,r) \in D \}.
\]
We conclude that $\Gamma \in \CC_{\Surf_{D_{\Gamma}}}$.
\end{proof}
We will see examples of elements in $\CSD$ in Section~\ref{sec:surface_example}. Here, we exhibit the most basic example.
\begin{example}\label{example:surface_basic}
Let $g\in \NN$. Then we have
\[
\pi_1(\Sigma_g)\in \CC_{\Surf_{\{(g,2g)\}}}.
\]
Indeed, consider the abelianization map
\[
\Ab_{\pi_1(\Sigma_g)}\colon \pi_1(\Sigma_g)\twoheadrightarrow \pi_1(\Sigma_g)^{\ab}\simeq \ZZ^{2g}.
\]
Then $(\phi,Q,\sigma)=(\mathrm{id}_{\pi_1(\Sigma_g)},\ZZ^{2g},\Ab_{\pi_1(\Sigma_g)})$ is a $\pi_1(\Sigma_g)$-triple with $\intrk^{Q}(P)=2g$, where $P=(\sigma\circ \phi)(\pi_1(\Sigma_g))$.
\end{example}
\subsection{Proof of Theorem~\ref{thm:surface_new}}\label{subsec:proof_surface_new}
In this subsection, we prove Theorem~\ref{thm:surface_new}. The following is the key proposition. Here, for a permutational wreath product $H\wr_{\rho}\Gamma$ associated with an action $\rho\colon \Gamma \curvearrowright X$, we regard an element $u\in \bigoplus_{X}H$ as a map $u\colon X\to H$ such that $u(p)=e_H$ for all but finitely many $p\in X$.
\begin{prop}[key proposition]\label{prop:surface_key}
Let $g\in \NN$. Let $\Gamma$ be a group and let $(\phi,Q,\sigma)$ be a $\pi_1(\Sigma_g)$-triple for $\Gamma$. Set $P=(\sigma\circ \phi)(\pi_1(\Sigma_g))$. Let $\rho_Q\colon \Gamma\curvearrowright Q$ be the $\Gamma$-action given by the composition of $\sigma\colon \Gamma\twoheadrightarrow Q$ and the action $Q\curvearrowright Q$ by multiplication. Set
\[
(G,N)=(\ZZ\wr_{\rho_Q}\Gamma, \bigoplus_{Q}\ZZ).
\]
Assume that $x\in N$ fulfills the following three conditions.
\begin{enumerate}[$(i)$]
\item $e_{Q}\in \supp(x)$.
\item $P$ is a zero-sum set for $x$.
\item For every zero-sum set $S$ for $x$ satisfying $S\subseteq \supp(x)$ and $e_Q\in S$, we have $\langle S\rangle=P$.
\end{enumerate}
Then, $x\in [G,N]$ and
\[
\left\lceil \frac{\intrk^Q(P)}{2}\right\rceil\leq \cl_G(x)\leq g \quad \textrm{and} \quad \cl_{G,N}(x)=\intrk^Q(P)
\]
hold true.
\end{prop}
\begin{proof}
Recall the proof of Proposition~\ref{prop:wreath_perm}; in particular, we have for every $v,w\in N$ and every $\gamma\in \Gamma$,
\begin{eqnarray}\label{eq:comm}
\big[ (v,\gamma), (w, e_{\Gamma})\big] = (\sigma(\gamma) w - w , e_{\Gamma}).
\end{eqnarray}
The proof of Proposition~\ref{prop:wreath_perm} shows that
\begin{eqnarray}\label{eq:int_rank_surface}
\cl_{G,N}(x)=\intrk^Q(P).
\end{eqnarray}
In what follows, we will prove that $\cl_G(x)\leq g$.
In general, we have for every $v,w\in N$ and every $\gamma,\lambda\in \Gamma$,
\[
\big[ (v,\gamma), (w, \lambda)\big] = (\sigma(\gamma) w -\sigma([\gamma, \lambda]) w +v-\sigma(\gamma \lambda\gamma^{-1}) v, [\gamma,\lambda]).
\]
In particular,
if $\sigma(\gamma)$ commutes with $\sigma(\lambda)$, then we have
\begin{eqnarray}\label{eq:comm_comm}
\big[ (v,\gamma), (w, \lambda)\big] = (\sigma(\gamma) w - w +v-\sigma(\lambda) v, [\gamma,\lambda]).
\end{eqnarray}
Now fix a system of standard generators $(\alpha_1, \beta_1, \ldots, \alpha_g, \beta_g)$ of $\pi_1(\Sigma_g)$, meaning that
\begin{eqnarray}\label{eq:surface_presentation}
\pi_1(\Sigma_g)=\langle \alpha_1,\beta_1,\ldots,\alpha_g,\beta_g\;|\; [\alpha_1,\beta_1]\cdots [\alpha_g,\beta_g]=e_{\pi_1(\Sigma_g)}\rangle .
\end{eqnarray}
For every $1\leq i\leq g$, set $a_i=(\sigma\circ\phi)(\alpha_i)$ and $b_i=(\sigma\circ\phi)(\beta_i)$. Then, a similar argument to the proof of Proposition~\ref{prop:fromabove} verifies the following: there exist $w_1,\ldots,w_g\in N$ and $v_1,\ldots,v_{g}\in N$ such that
\begin{eqnarray}\label{eq:surface}
x=\sum_{i=1}^g\left\{(a_iw_i-w_i)+(b_iv_i-v_i)\right\}.
\end{eqnarray}
For these $w_1,\ldots ,w_g$ and $v_1,\ldots,v_g$, set
\[
\xi_i=\big[ (-v_i,\phi(\alpha_i)), (w_i, \phi(\beta_i))\big]\ \in [G,G]
\]
for every $1\leq i\leq g$. Then, \eqref{eq:comm_comm}, \eqref{eq:surface_presentation} and \eqref{eq:surface} imply that
\[
x=\xi_1\xi_2\cdots \xi_g.
\]
Here, we observe that for every $1\leq i\leq g$, $(\sigma\circ\phi)([\alpha_i,\beta_i])=e_Q$. (Recall from the definition of $\pi_1(\Sigma_g)$-triples that $P$ is abelian.) Hence, we obtain
\begin{eqnarray}\label{eq:leq_surface}
\cl_{G}(x)\leq g,
\end{eqnarray}
as desired.
By combining \eqref{eq:int_rank_surface} and \eqref{eq:leq_surface} with Theorem~\ref{thm:split}, we obtain the conclusion.
\end{proof}
\begin{proof}[Proof of Theorem~$\ref{thm:surface_new}$]
Let $(Q,\sigma)$ be a pair that is guaranteed in the definition of $\CSD$. Fix $(g,r)\in D$. Then, there exists $\phi_{(g,r)}\colon \pi_1(\Sigma_g)\to\Gamma$ such that $(\phi_{(g,r)},Q,\sigma)$ is a $\pi_1(\Sigma_g)$-triple for $\Gamma$ satisfying \eqref{eq:ranksurf}. Take an arbitrary $x=x_{(g,r)}$ that fulfills conditions (i), (ii) and (iii) in Proposition~\ref{prop:surface_key} with respect to $\phi=\phi_{(g,r)}$. Then by Proposition~\ref{prop:surface_key}, we have
\[
\left\lceil \frac{r}{2}\right\rceil\leq \cl_G(x_{(g,r)})\leq g\quad \textrm{and}\quad \cl_{G,N}(x_{(g,r)})=r,
\]
as desired.
\end{proof}
\begin{remark}
In the setting of Theorem~\ref{thm:surface_new}, assume that $\sup\{r\,|\,(g,r)\in D\}=\infty$. Then a similar argument to one in Example~\ref{example:wreath} shows \eqref{eq:cl} and \eqref{eq:scl}.
\end{remark}
\section{Members of $\CSD$}\label{sec:surface_example}
In this section, we exhibit examples of groups in $\CSD$ and prove Theorem~\ref{thm:ex_CSD}. For a group $H$, let $H^{\ab}:=H/[H,H]$ be the abelianization of $H$, and $\Ab_H\colon H\twoheadrightarrow H^{\ab}$ be the abelianization map. Set $\NN_{\geq 2}:=\{n\in \NN\;|\; n\geq 2\}$.
\subsection{Basic examples}\label{subsec:ex}
We start from basic examples.
\begin{example}\label{example:surface}
For $g\in \NN$, take an arbitrary group quotient $\Lambda$ of $\pi_1(\Sigma_g)$ satisfying $\rk(\Lambda^{\mathrm{ab}})=r$ with $g+1\leq r\leq 2g$. Then $\Lambda\in \CC_{\Surf_{\{(g,r)\}}}$. To see this, take a quotient map $\phi\colon \pi_1(\Sigma_g)\twoheadrightarrow \Lambda$. Then, $(\phi_{(g,r)},Q,\sigma)=(\phi, \Lambda^{\mathrm{ab}}, \mathrm{Ab}_{\Lambda})$ is a $\pi_1(\Sigma_g)$-triple for $\Lambda$ that satisfies \eqref{eq:ranksurf}.
\end{example}
\begin{lem}\label{lem:freefinite}
Let $g\in \NN$. Let $(A_i)_{i=1}^g$ be a family of abelian groups of special rank at least $2$. Then the free product $\Gamma=\bigast_{i=1}^g A_i$ belongs to $\CC_{\Surf_{\{(g,2g)\}}}$.
\end{lem}
\begin{proof}
For every $1\leq i\leq g$, since $\sperk(A_i)\geq 2$, we can take a subgroup $M_i$ of $A_i$ with $\rk(M_i)=2$.
Then, the free product $\bigast_{i=1}^{g}M_i$ can be seen as a group quotient of $\pi_1(\Sigma_g)$. Here, recall presentation \eqref{eq:surface_presentation}. Let $\phi_0\colon \pi_1(\Sigma_g)\twoheadrightarrow \bigast_{i=1}^gM_i$ be the quotient map and $\iota\colon \bigast_{i=1}^gM_i\hookrightarrow \Gamma$ be the natural embedding. Then, the triple $(\phi_{(g,2g)},Q,\sigma)=(\iota\circ \phi_0, \bigoplus_{i=1}^gA_i,\Ab_{\Gamma})$ is a $\pi_1(\Sigma_g)$-triple for $\Gamma$ that satisfies \eqref{eq:ranksurf} with $r=2g$.
\end{proof}
\begin{lem}\label{lem=product}
Let $D$ be a non-empty subset of $\{(g,r)\in \NN^2\;|\; g+1\leq r\leq 2g\}$. Let $\Lambda_{(g,r)}\in \mathcal{C}_{\mathrm{Surf}_{\{(g,r)\}}}$ for every $(g,r)\in D$. Then, the three groups $\bigast_{(g,r)\in D}\Lambda_{(g,r)}$, $\bigoplus_{(g,r)\in D}\Lambda_{(g,r)}$ and $\prod_{(g,r)\in D}\Lambda_{(g,r)}$ are all members of $\CSD$.
\end{lem}
\begin{proof}
Let $\Gamma$ be any of the three groups above. Fix $(g,r)\in D$. Take a $\pi_1(\Sigma_g)$-triple $(\phi_{(g,r)},Q_{(g,r)},\sigma_{(g,r)})$ for $\Lambda_{(g,r)}$ that satisfies \eqref{eq:ranksurf}, and set $P_{(g,r)}=(\sigma_{(g,r)}\circ \phi_{(g,r)})(\pi_1(\Sigma_g))$. Let $\iota_{(g,r)}\colon \Lambda_{(g,r)} \hookrightarrow \Gamma$ be the natural embedding. Then, the triple $(\phi_{(g,r)},Q,\sigma)=(\iota_{(g,r)}\circ \phi_{(g,r)}, \Gamma^{\mathrm{ab}},\mathrm{Ab}_{\Gamma})$ is a $\pi_1(\Sigma_g)$-triple for $\Gamma$ that satisfies \eqref{eq:ranksurf}.
\end{proof}
Lemma~\ref{lem=product}, together with Example~\ref{example:surface} and Lemma~\ref{lem:freefinite}, yields the following corollary.
\begin{cor}\label{cor:free}
Let $J$ be a non-empty subset of $\NN$. Let $(A_i)_{i\in \NN}$ be a family of abelian groups such that $\sperk(A_i)\geq 2$ for every $i\in \NN$. Then we have
\[
\bigast_{g\in J}\pi_1(\Sigma_g)\in \CC_{\Surf_{D_J}} \quad \textrm{and} \quad \bigast_{i\in \NN}A_i\in \CC_{\Surf_{D_{\NN}}},
\]
where $D_J=\{(g,2g)\;|\; g\in J\}$.
\end{cor}
For a general group $\Gamma$ and its subgroup $\Lambda$, it seems difficult to bound $\intrk^{\Gamma}(\Lambda)$ from below. However, it is easy to check whether $\intrk^{\Gamma}(\Lambda)\leq 1$ since a group of rank at most $1$ must be cyclic. This observation yields the following proposition.
\begin{prop}\label{prop:noZ2}
Every group $\Gamma$ that contains an abelian subgroup $\Lambda$ with $\sperk(\Lambda)\geq 2$ is a member of $\mathcal{C}_{\mathrm{Surf}_{\{(1,2)\}}}$.
\end{prop}
\begin{proof}
By assumption, we can take $\Lambda_1\leqslant \Lambda$ with $\rk(\Lambda_1)=2$. Recall that $\pi_1(\Sigma_1)\simeq \ZZ^2$. Hence, there exists a surjective homomorphism $\phi\colon \pi_1(\Sigma_1)\twoheadrightarrow \Lambda_1$. Let $\iota\colon \Lambda_1\hookrightarrow \Gamma$ be the inclusion map. Then, $(\iota\circ\phi,\Gamma,\mathrm{id}_{\Gamma})$ is a $\pi_1(\Sigma_1)$-triple for $\Gamma$. We have
\[
\intrk^{\Gamma}(\Lambda_1)= 2
\]
since $\Lambda_1$ is not cyclic. Hence, $\Gamma\in \mathcal{C}_{\mathrm{Surf}_{\{(1,2)\}}}$.
\end{proof}
\subsection{Fundamental groups of mapping tori}\label{subsec:mapping}
Here we discuss examples of groups in $\CSD$ coming from $3$-dimensional (hyperbolic) geometry. For $g\in \NN$, let $\mathrm{Mod}(\Sigma_g)$ denote the \emph{mapping class group} of $\Sigma_g$: it is the group of orientation-preserving diffeomorphisms of $\Sigma_g$ modulo isotopy. It is well known that the action on $\HHH_1(\Sigma_g;\ZZ)\simeq \ZZ^{2g}$ (equipped with the natural symplectic structure coming from the intersection form) induces the natural symplectic representation $s_g\colon \mathrm{Mod}(\Sigma_g)\twoheadrightarrow \mathrm{Sp}(2g,\ZZ)$ of $\mathrm{Mod}(\Sigma_g)$. For an orientation-preserving diffeomorphism $f\colon \Sigma_{g} \to \Sigma_{g}$, let $T_f$ denote the \emph{mapping torus} of $f$, that is,
\[
T_f:=(\Sigma_g\times [0,1])/((p,0)\sim(f(p),1)\ \textrm{for every}\ p\in \Sigma_g).
\]
The celebrated theorem of Thurston \cite{Thurston} states that $T_f$ is a hyperbolic manifold if and only if $[f]\in \mathrm{Mod}(\Sigma_g)$ is a pseudo-Anosov class, which in turn holds if and only if $\pi_1(T_f)$ does not contain $\ZZ^2$. Hence, if $[f]$ is not a pseudo-Anosov class, then $\pi_1(T_f)\in \mathcal{C}_{\mathrm{Surf}_{\{(1,2)\}}}$ by Proposition~\ref{prop:noZ2}.
The fundamental group $\pi_1(T_f)$ is described in terms of the isotopy class $\psi=[f]\in \Mod(\Sigma_g)$ as follows. Let $\Psi\in \mathrm{Aut}(\pi_1(\Sigma_g))$ be the automorphism of $\pi_1(\Sigma_g)$ induced by $f$. Then we have a natural isomorphism
\begin{eqnarray}\label{eq:mapping_pi1}
\pi_1(T_f)\simeq \pi_1(\Sigma_g)\rtimes_{\Psi}\ZZ.
\end{eqnarray}
Here, on the right-hand side, the $\ZZ$-action is given by $\Psi$. Taking the abelianization of $\pi_1(\Sigma_g)$ then induces the quotient map
\begin{eqnarray}\label{eq:mapping_solv}
\sigma\colon \pi_1(T_f)\twoheadrightarrow \ZZ^{2g}\rtimes_{s_g(\psi)}\ZZ,
\end{eqnarray}
where the $\ZZ$-action on $\ZZ^{2g}$ is given by $s_g(\psi)\in \Sp(2g,\ZZ)$. Therefore, we have the following result.
\begin{lem}\label{lem:mapping}
Let $g\in \NN_{\geq 2}$ and let $\psi\in \Mod(\Sigma_g)$. Let $f\colon \Sigma_g\to\Sigma_g$ be a diffeomorphism on $\Sigma_g$ whose isotopy class is $\psi$. Let $\phi\colon \pi_1(\Sigma_g)\hookrightarrow \pi_1(T_f)$ be the natural embedding from \eqref{eq:mapping_pi1}. Let $\sigma$ be the map in \eqref{eq:mapping_solv}. Then, the triple $(\phi,Q,\sigma)=(\phi, \ZZ^{2g}\rtimes_{s_g(\psi)}\ZZ,\sigma)$ is a $\pi_1(\Sigma_g)$-triple for $\pi_1(T_f)$.
\end{lem}
In Lemma~\ref{lem:mapping}, set $Q=\ZZ^{2g}\rtimes_{s_g(\psi)}\ZZ$ and $P=(\sigma\circ \phi)(\pi_1(\Sigma_g))(\simeq \ZZ^{2g})$. The next task is to bound $\intrk^Q(P)$ from below. Levitt--Metaftsis \cite{LM} and Amoroso--Zannier \cite{AZ} obtained the following results. Here for $d\in \NN$, let $\mathrm{Mat}_{d\times d}(\ZZ)$ denote the ring of $d\times d$ integer matrices; we regard it as a subring of $\mathrm{Mat}_{d\times d}(\mathbb{C})$, the ring of $d\times d$ complex matrices, and discuss eigenvalues and eigenspaces of elements of $\mathrm{Mat}_{d\times d}(\ZZ)$ as those of $\mathrm{Mat}_{d\times d}(\mathbb{C})$. Their results are stated in terms of the following concepts.
\begin{definition}
Let $d\in \NN_{\geq 2}$ and $A\in \mathrm{Mat}_{d\times d}(\ZZ)$.
\begin{enumerate}[$(1)$]
\item Let $v\in \ZZ^d$. Then, the \emph{$A$-orbit} of $v$ is the set
\[
\{A^nv\;|\; n\in\ZZ_{\geq 0}\},
\]
where $A^0:=I_d$.
\item We define $\OR(A)$ as the minimal number of elements in $\ZZ^d$ whose $A$-orbits generate $\ZZ^d$ as a $\ZZ$-module.
\end{enumerate}
\end{definition}
If $A\in \GL(d,\ZZ)$, then the Cayley--Hamilton theorem implies that the $\ZZ$-module generated by the $A$-orbit of $v$ equals that generated by the set $\{A^nv\;|\; n\in\ZZ\}$ for every $v\in \ZZ^d$. Indeed, since $\det A=\pm 1$, the Cayley--Hamilton theorem expresses $A^{-1}$ as a polynomial in $A$ with integer coefficients.
\begin{thm}[\cite{LM}, \cite{AZ}]\label{thm:rkmapping}
Let $d\in \NN_{\geq 2}$ and $A\in \mathrm{Mat}_{d\times d}(\ZZ)$. Set
\begin{eqnarray}\label{eq:numbertheory}
C_d=\prod_{q\leq d}q,
\end{eqnarray}
where $q$ runs over the prime powers at most $d$.
Then the following hold.
\begin{enumerate}[$(1)$]
\item $($\cite[Corollary~2.4]{LM}$)$ Assume that $A\in \GL(d,\ZZ)$. Let $H=\ZZ^d\rtimes_A\ZZ$, where the $\ZZ$-action on $\ZZ^d$ is given by $A$. Then, we have
\[
\rk(H)=1+\OR(A).
\]
\item $($\cite[Theorem~1.5]{LM}$)$ Assume that $A\in\GL(d,\ZZ)$ and that $A$ is of infinite order. Then, there exists $n_0\in \NN$ such that for every $n\in \NN$ with $n\geq n_0$,
\[
\OR(A^n)\geq 2
\]
holds.
\item $($\cite[Theorem~1.5]{AZ}$)$ There exists an effective absolute constant $c>0$ such that in the setting of $(2)$, we can take
\[
n_0=\lceil cd^6(\log d)^6\rceil.
\]
\item $($\cite[Remark~4.1]{AZ}$)$ Assume that $A$ has only one eigenvalue. Then for every $n\geq C_d$, we have
\[
\OR(A^n)= d.
\]
\item $($\cite[Remark~4.2]{AZ}$)$ Assume that $A$ has two eigenvalues whose ratio is not a root of unity. Let $r$ be the sum of the dimensions of their eigenspaces. Then for every $n\geq C_d$, we have
\[
\OR(A^n)\geq r.
\]
\end{enumerate}
\end{thm}
In~(4), if $A\in \GL(d,\ZZ)$, then the unique eigenvalue of $A$ must be either $1$ or $-1$. We state the following immediate corollary to Theorem~\ref{thm:rkmapping}~(1).
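To illustrate the single-eigenvalue case of Theorem~\ref{thm:rkmapping}~(4), consider the following elementary example with $d=2$, so that $C_2=2$. Take the unipotent matrix
\[
A=\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\in \GL(2,\ZZ),
\]
whose only eigenvalue is $1$. The $A$-orbit of $(0,1)^{\top}$ contains $(0,1)^{\top}$ and $(1,1)^{\top}$, which generate $\ZZ^2$; hence $\OR(A)=1$. On the other hand, for $n\geq 2$, the $A^n$-orbit of a vector $(a,b)^{\top}$ generates the $\ZZ$-module spanned by $(a,b)^{\top}$ and $(nb,0)^{\top}$, which is a proper subgroup of $\ZZ^2$; hence $\OR(A^n)=2=d$ for every $n\geq 2=C_2$, in accordance with (4).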
\begin{cor}\label{cor:intrank}
Let $d\in \NN_{\geq 2}$ and $A\in \GL(d,\ZZ)$. Let $H=\ZZ^d\rtimes_A\ZZ$ and $K=\ZZ^d$, the kernel of the natural projection $H\twoheadrightarrow \ZZ$. Then
\[
\intrk^H(K)=\min\{d, \rk(H)\}=\min\{d, 1+\OR(A)\}.
\]
\end{cor}
\begin{proof}
First observe that every group $\Theta$ with $K\leqslant \Theta \leqslant H$ is of the form $K\rtimes_{A}(l\ZZ)$ with $l\in \ZZ_{\geq 0}$. By Theorem~\ref{thm:rkmapping}~(1), for every $l\in \NN$, we have
\[
\rk(K\rtimes_{A}(l\ZZ))=1+\OR(A^l)\geq 1+\OR(A)=\rk(K\rtimes_{A}\ZZ).
\]
If $l=0$, then $\rk(K\rtimes_{A}(0\ZZ))=\rk(K)=d$. Hence, we obtain the conclusion.
\end{proof}
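The following elementary computation illustrates Corollary~\ref{cor:intrank}. Take
\[
A=\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}\in \GL(2,\ZZ).
\]
The $A$-orbit of $v=(1,0)^{\top}$ contains $Av=(2,1)^{\top}$, and $v$ together with $Av$ generates $\ZZ^2$; hence $\OR(A)=1$. Therefore, for $H=\ZZ^2\rtimes_A\ZZ$ and $K=\ZZ^2$, Corollary~\ref{cor:intrank} yields
\[
\intrk^H(K)=\min\{2,\,1+\OR(A)\}=2.
\]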
By letting $d=2g$, we have the following proposition from Lemma~\ref{lem:mapping} and Corollary~\ref{cor:intrank}.
\begin{prop}\label{prop:mapping_torus}
Let $g\in \NN_{\geq 2}$. Let $\psi\in \mathrm{Mod}(\Sigma_g)$. Let $f \colon \Sigma_{g} \to \Sigma_{g}$ be a diffeomorphism whose isotopy class $[f]$ is $\psi$. Let $s_g\colon \mathrm{Mod}(\Sigma_g)\twoheadrightarrow \mathrm{Sp}(2g,\ZZ)$ be the symplectic representation. Let $Q=\ZZ^{2g}\rtimes_{s_g(\psi)} \ZZ$. Assume that $\rk(Q)\geq g+1$, equivalently, that $\OR(s_g(\psi))\geq g$. Then,
\[
\pi_1(T_f)\in \CC_{\Surf_{\{(g,r)\}}},
\]
where $r=\min\{2g, \OR(s_g(\psi))+1\}$.
\end{prop}
Then, (2)--(5) of Theorem~\ref{thm:rkmapping} yield the following theorem. We recall that the kernel of the symplectic representation $s_g\colon \Mod(\Sigma_g)\twoheadrightarrow \Sp(2g,\ZZ)$ is called the \emph{Torelli group} $\mathcal{I}(\Sigma_g)$ of $\Sigma_g$.
\begin{thm}[groups in $\CSD$ from mapping tori]\label{thm:mapping_tori}
Assume the setting of Proposition~$\ref{prop:mapping_torus}$. Then the following hold true. Here, the constant $C_d$ for $d\in \NN_{\geq 2}$ is given by \eqref{eq:numbertheory}.
\begin{enumerate}[$(1)$]
\item Assume that $s_g(\psi)\in \{\pm I_{2g}\}$. Then, $\pi_1(T_f)\in \CC_{\Surf_{\{(g,2g)\}}}$ holds. In particular, if $\psi\in \mathcal{I}(\Sigma_g)$, then we have $\pi_1(T_f)\in \CC_{\Surf_{\{(g,2g)\}}}$.
\item Assume that $s_g(\psi)\in \Sp(2g,\ZZ)$ has only one eigenvalue $($hence either $1$ or $-1$$)$. Then, for every $n\in \NN$ with $n\geq C_{2g}$, we have
\[
\pi_1(T_{f^n})\in \CC_{\Surf_{\{(g,2g)\}}}.
\]
\item Let $t$ be an integer with $t\geq g$. Assume that $s_g(\psi)$ has two eigenvalues whose ratio is not a root of unity. Moreover, assume that the sum of the dimensions of their eigenspaces is at least $t$. Then for every $n\in \NN$ with $n\geq C_{2g}$, we have
\[
\pi_1(T_{f^n})\in \bigcup_{r=\min\{2g,t+1\}}^{2g}\CC_{\Surf_{\{(g,r)\}}}.
\]
\item Assume that $g=2$. Then there exists an effective absolute constant $n_0\in \NN$ such that the following holds true: assume that $s_2(\psi)$ is of infinite order. Then for every $n\in \NN$ with $n\geq n_0$, we have
\[
\pi_1(T_{f^n})\in \CC_{\Surf_{\{(2,3)\}}}\cup \CC_{\Surf_{\{(2,4)\}}}.
\]
\end{enumerate}
\end{thm}
\begin{proof}
We apply Proposition~\ref{prop:mapping_torus}. Item (1) follows because $\OR(I_{2g})=\OR(-I_{2g})=2g$. (In this case, we can also determine the intermediate rank directly.) Item (2) follows from Theorem~\ref{thm:rkmapping}~(4); (3) follows from Theorem~\ref{thm:rkmapping}~(5). Finally (4) can be derived from (2) and (3) of Theorem~\ref{thm:rkmapping}. Indeed, take the effective absolute constant $c>0$ as in Theorem~\ref{thm:rkmapping}~(3) and set
\[
n_0=\lceil c\cdot 4^6(\log 4)^6\rceil.
\]
Then for every $n\in \NN$ with $n\geq n_0$, we have
\[
\rk(\ZZ^{2g}\rtimes_{s_g(\psi)^n}\ZZ)=1+\OR(s_g(\psi)^n)\geq 3;
\]
hence we obtain the conclusion.
\end{proof}
We are now ready to prove Theorem~\ref{thm:ex_CSD}.
\begin{proof}[Proof of Theorem~$\ref{thm:ex_CSD}$]
Item (1) is stated as Proposition~\ref{prop:noZ2}; (2) follows from Example~\ref{example:surface} and Corollary~\ref{cor:free}. Items (3) and (4) follow from (4) and (1) of Theorem~\ref{thm:mapping_tori}, respectively.
\end{proof}
\begin{remark}\label{rem:residual}
If $J\subseteq \NN_{\geq 2}$, then the group $\Gamma=\bigast_{g\in J}\pi_1(\Sigma_g)$ is \emph{fully residually free}; that is, for every finite subset $S$ of $\Gamma$, there exist a free group $H$ and a homomorphism $\phi\colon \Gamma\to H$ such that $\phi|_S$ is injective. To see this, first recall from \cite{GBaum} that $\pi_1(\Sigma_g)$ is fully residually free for every $g\in \NN_{\geq 2}$. Then, apply results in \cite{BBaum}.
\end{remark}
\begin{remark}\label{rem:souto}
Souto \cite{Souto} showed the following: if $\psi\in \Mod(\Sigma_g)$ is a pseudo-Anosov mapping class in the setting of Proposition~\ref{prop:mapping_torus}, then there exists $n_{\psi}\in \NN$ such that for every $n\in \NN$ with $n\geq n_{\psi}$, we have $\rk(\pi_1(T_{f^n}))=2g+1$. Hence for every $n\in \NN$ with $n\geq n_{\psi}$, we have
\[
\intrk^{\pi_1(T_{f^n})}(\pi_1(\Sigma_g))=2g.
\]
\end{remark}
We finally pose the following problems, which seem open, in relation to Theorems~\ref{thm local cyclicity}, \ref{thm:abelian} and \ref{thm:surface_new}.
\begin{problem}\label{prob:exist}
\begin{enumerate}[$(1)$]
\item Does there exist a non-abelian group $\Gamma$ such that for every pair $(G,N)$ fitting in $(\star)$, $\cl_G$ and $\cl_{G,N}$ coincide on $[G,N]$?
\item Does there exist a non-abelian group $\Gamma$ such that for every pair $(G,N)$ fitting in \emph{split} short exact sequence $(\star)$, $\cl_G$ and $\cl_{G,N}$ coincide on $[G,N]$?
\item Find a `good' class $\mathcal{C}$ of groups $\Gamma$ such that for every pair $(G,N)$ fitting in $(\star)$,
\[
\sup_{x\in [G,N]}(\cl_{G,N}(x)-\cl_{G}(x))<\infty
\]
holds.
\item Find a `good' class $\mathcal{C}$ of groups $\Gamma$ such that for every pair $(G,N)$ fitting in \emph{split} short exact sequence $(\star)$,
\[
\sup_{x\in [G,N]}(\cl_{G,N}(x)-\cl_{G}(x))<\infty
\]
holds.
\end{enumerate}
\end{problem}
\section{Concluding remarks}\label{sec:remark}
\subsection{Examples from symplectic geometry}\label{subsec:symp}
In this subsection, we exhibit examples of triples $(G,N,\Gamma)$ that fit in $(\star)$ from symplectic geometry. In the first example, $\cl_G$ coincides with $\cl_{G,N}$ on $[G,N]$ and $G/N\simeq \RR$.
For basic concepts of symplectic geometry, see \cite{HZ}, \cite{MS}, and \cite{PR}.
A symplectic manifold is said to be \textit{exact} if the symplectic form is an exact form.
\begin{prop}\label{ham cal}
Let $(M,\omega)$ be an exact symplectic manifold.
Set $G=\Ham(M,\omega)$, where $\Ham(M,\omega)$ is the group of Hamiltonian diffeomorphisms $($with compact support$)$ of $(M,\omega)$ and $N$ the commutator subgroup $[G,G]$ of $G$.
Then, the following hold true.
\begin{enumerate}[$(1)$]
\item $N=[G,N]$.
\item $G/N\simeq \RR$.
\item $\cl_G$ and $\cl_{G,N}$ coincide on $N$.
\end{enumerate}
\end{prop}
Here, we remark that (1) and (2) are known. To prove Proposition \ref{ham cal}, we use the Calabi homomorphism.
For an exact symplectic manifold $(M,\omega)$, we recall that the \textit{Calabi homomorphism}
is a function $\mathrm{Cal} \colon \Ham(M,\omega)\to\mathbb{R}$ defined by
\[
\mathrm{Cal}(\varphi_H)=\int_0^1\int_M H_t\omega^n\,dt,
\]
where $\varphi_H$ is the Hamiltonian diffeomorphism generated by a smooth function $H \colon [0,1] \times M \to \RR$.
It is known that the Calabi homomorphism is a well-defined surjective group homomorphism and that $\Ker(\Cal)=N$ (see \cite{Cala,Ban,Ban97,MS,Hum}). We also remark that $N$ is perfect (see \cite{Ban} and \cite{Ban97}).
We also note that every exact symplectic manifold is open.
Indeed, it is known that the symplectic form of a closed symplectic manifold is cohomologically non-trivial (see \cite[Section 1.1]{HZ} and \cite{MS}).
\begin{proof}[Proof of Proposition~$\ref{ham cal}$]
First, we prove (1). As mentioned earlier, the group $N$ is known to be perfect. Hence,
we have $N=[N,N]\leqslant [G,N]$.
Since $N$ is a normal subgroup of $G$, we have $[G,N]\leqslant N$. Therefore, we conclude that
\[
N=[G,N].
\]
Item (2) holds since the Calabi homomorphism is surjective and $\Ker(\Cal)=N$.
Finally, we prove (3). Let $f,g\in G$.
In what follows, we will show that $[f,g]$ is a $(G,N)$-commutator. As we mentioned above, every exact symplectic manifold is open. Hence, we can take $h\in G$ such that the following two conditions are fulfilled:
\begin{itemize}
\item the support of $h$ is disjoint from that of $f$;
\item $\mathrm{Cal}(h)=-\mathrm{Cal}(g)$.
\end{itemize}
By the first condition, $[f,g]=[f,gh]$ holds.
By the second condition, we have
\[
\Cal(gh)=\Cal(g)+\Cal(h)=0;
\]
this implies that $gh\in N$ since $\Ker(\Cal)=N$. Therefore, every $(G,G)$-commutator is a $(G,N)$-commutator, and hence $\cl_G$ coincides with $\cl_{G,N}$ on $[G,N]=N$. This completes our proof.
\end{proof}
In the proof of Proposition~\ref{ham cal}, we used several properties of the Calabi homomorphism.
If we consider the analogue of Proposition~\ref{ham cal} for the flux homomorphism, then the following problem seems open.
\begin{problem}\label{flux}
Let $(M,\omega)$ be a closed symplectic manifold with $\HHH^1(M;\RR)=\RR$.
Let $G$ be the identity component $\Symp_0(M,\omega)$ of the group $\Symp(M,\omega)$ of symplectomorphisms of $(M,\omega)$ and $N$ the group of Hamiltonian diffeomorphisms of $(M,\omega)$.
Then, do $\cl_G$ and $\cl_{G,N}$ coincide on $N$?
\end{problem}
We note that under the setting of Problem \ref{flux}, there exists a subgroup $\Gamma_{\omega}$ of $\HHH^1(M;\RR)$, called the \textit{flux group} of $(M,\omega)$, such that $G/N=\HHH^1(M;\RR)/\Gamma_{\omega}$ (\cite{Ban}, \cite{Ban97}).
We also note that $\Gamma_{\omega}$ is known to be always discrete in $\HHH^1(M;\RR)$ (\cite{O}) and that the quotient homomorphism $G\to G/N$ has a section homomorphism if $\HHH^1(M;\RR)=\RR$.
For examples of closed symplectic manifolds with $\HHH^1(M;\RR)=\RR$, see \cite{G95} and \cite{HAP}.
The second example comes from the following proposition and corollary.
\begin{prop}\label{prop:cw}
Let $\Gamma$ be a group.
Assume that the commutator width of $\Gamma$ is finite, meaning that
\[
\sup_{\gamma \in[\Gamma,\Gamma]}\cl_{\Gamma}(\gamma)<\infty.
\]
Let $n_{\Gamma}\in \ZZ_{\geq 0}$ denote this supremum. Set
\begin{eqnarray}\label{eq:ab_perm}
(G,N)=(\ZZ\wr_{\rho_{\Gamma^{\ab}}}\Gamma,\bigoplus_{\Gamma^{\ab}}\ZZ),
\end{eqnarray}
where $\rho_{\Gamma^{\ab}}\colon \Gamma\curvearrowright \Gamma^{\ab}$ is the composition of $\Ab_{\Gamma}$ with the action $\Gamma^{\ab}\curvearrowright \Gamma^{\ab}$ by left multiplication. Then for every $x\in [G,N]$, we have
\[
\left\lceil \frac{\cl_{G,N}(x)}{2}\right\rceil\leq \cl_{G}(x)\leq \left\lceil \frac{\cl_{G,N}(x)}{2}\right\rceil+n_{\Gamma}.
\]
\end{prop}
\begin{cor}\label{cor:cw}
Let $\Gamma$ be a group. Assume that
\[
n_{\Gamma}=\sup_{\gamma \in[\Gamma,\Gamma]}\cl_{\Gamma}(\gamma)<\infty \quad \textrm{and}\quad \sperk(\Gamma^{\ab})=\infty.
\]
Let $(G,N)$ be the pair defined by $\eqref{eq:ab_perm}$. Then, we have
\[
\sup_{x\in [G,N]}(\cl_{G,N}(x)-C\cdot \cl_{G}(x))=\infty
\]
for every real number $C < 2$ but
\[
\sup_{x\in [G,N]}(2\cl_{G}(x)-\cl_{G,N}(x))\leq 2n_{\Gamma}+1.
\]
\end{cor}
We note that Propositions~\ref{prop:wreath_perm} and \ref{prop:cw} recover Theorem~\ref{thm:abelian}. Indeed, if $\Gamma$ is abelian, then $n_{\Gamma}=0$ and $\Gamma^{\ab}=\Gamma$ hold.
Symplectic geometry supplies the following interesting example to which Corollary~\ref{cor:cw} applies.
\begin{example}\label{ex:symp}
Let $(M,\omega)$ be an exact symplectic manifold.
Set $\Gamma=\Ham(M\times \RR^{2},\mathrm{pr_1}^\ast\omega+\mathrm{pr_2}^\ast\omega_{0})$, where $\mathrm{pr_1}\colon M\times\RR^2 \to M$ and $\mathrm{pr_2}\colon M\times\RR^2 \to \RR^2$ are the first and second projections, respectively, and $\omega_{0}$ is the standard symplectic form on $\RR^{2}$.
Then this $\Gamma$ satisfies that
\begin{equation}\label{r2n ham}
\sup_{\gamma\in [\Gamma,\Gamma]}\cl_{\Gamma}(\gamma)\leq 2 \quad \textrm{and}\quad \sperk(\Gamma^{\ab})=\infty.
\end{equation}
Indeed, the former assertion follows from the work of Burago--Ivanov--Polterovich \cite[Corollary~2.3]{BIP}; the latter holds since $\Gamma^{\ab}\simeq \RR$.
Here, note that $(M\times \RR^{2},\mathrm{pr_1}^\ast\omega+\mathrm{pr_2}^\ast\omega_{0})$ is an exact symplectic manifold and hence Proposition \ref{ham cal}~(2) applies.
In particular, $\Gamma=\Ham(\RR^{2n},\omega_{0})$ for every $n\geq 1$ satisfies \eqref{r2n ham}. Here, $\omega_{0}$ is the standard symplectic form on $\RR^{2n}$.
\end{example}
\begin{proof}[Proofs of Proposition~$\ref{prop:cw}$ and Corollary~$\ref{cor:cw}$]
First, we prove Proposition~\ref{prop:cw}. Let $x\in [G,N]$ be a non-trivial element and set $r=\cl_{G,N}(x)\in \NN$. Then, by \eqref{eq:comm}, there exist $\gamma_1,\ldots ,\gamma_r\in \Gamma$ and $w_1,\ldots ,w_r\in N$ such that
\[
x=\sum_{i=1}^r (\Ab_{\Gamma}(\gamma_i)w_i-w_i).
\]
First, we treat the case where $r$ is even. Since $\Gamma^{\ab}$ is abelian, we then have
\[
(x,\xi)=[(-w_2,\gamma_1),(w_1,\gamma_2)][(-w_4,\gamma_3),(w_3,\gamma_4)]\cdots [(-w_r,\gamma_{r-1}),(w_{r-1},\gamma_{r})],
\]
where $\xi$ is defined by
\[
\xi=[\gamma_1,\gamma_2][\gamma_3,\gamma_4]\cdots[\gamma_{r-1},\gamma_r]\in [\Gamma,\Gamma].
\]
By assumption, $\xi^{-1}$ may be written as a product of at most $n_{\Gamma}$ single commutators. That is, there exist an integer $k\leq n_{\Gamma}$, elements $\lambda_1,\ldots ,\lambda_k\in \Gamma$ and $\lambda'_1,\ldots ,\lambda'_k\in \Gamma$ such that
\[
\xi^{-1}=[\lambda_1,\lambda'_1][\lambda_2,\lambda'_2]\cdots[\lambda_k,\lambda'_k].
\]
Therefore, we have
\[
x=[(-w_2,\gamma_1),(w_1,\gamma_2)]\cdots [(-w_r,\gamma_{r-1}),(w_{r-1},\gamma_{r})][(0,\lambda_1),(0,\lambda'_1)]\cdots[(0,\lambda_k),(0,\lambda'_k)]
\]
and
\[
\cl_{G}(x)\leq \frac{r}{2}+k\leq \frac{r}{2}+n_{\Gamma}.
\]
For the case where $r$ is odd, an argument similar to the one above shows that
\[
\cl_{G}(x)\leq \frac{r+1}{2}+n_{\Gamma}.
\]
By combining these two inequalities with Theorem~\ref{thm:split}, we obtain the conclusion of Proposition~\ref{prop:cw}.
Finally, we prove Corollary~\ref{cor:cw}: it immediately follows from Proposition~\ref{prop:wreath_perm}, Proposition~\ref{prop:cw} and Lemma~\ref{lem:abel}.
\end{proof}
\subsection{Examples of groups of finite general rank}\label{subsec:gen_rk}
For a group $\Gamma$ that may not be finitely generated, we have two notions of ranks due to Malcev: the \emph{general rank} $\genrk(\Gamma)$ (Definition~\ref{def:int_rk}) and the \emph{special rank} $\sperk(\Gamma)$ (Definition~\ref{def:local_rank}). For abelian $\Gamma$, these two coincide (Lemma~\ref{lem:abel}). However, $\genrk(\Gamma)$ can be much smaller than $\sperk(\Gamma)$ in general. For instance, for $\Gamma=F_2$, we have
\[
\genrk(F_2)=2\quad \textrm{and}\quad \sperk(F_2)=\infty.
\]
Here for $n\in \NN$, $F_n$ denotes the free group of rank $n$. To see the latter, we observe that $F_n$ embeds into $F_2$ for every $n\in \NN$ (to see the former, see Example~\ref{ex:fg}). Here we list basic properties of general ranks and exhibit some examples of groups of finite general rank. The contents of this subsection may be known to experts on general ranks; nevertheless, we include the proofs for the sake of convenience. We refer the reader to \cite{Azarov17} for a study of general ranks.
\begin{example}\label{ex:spe}
By definition, we have $\genrk(\Gamma)\leq \sperk(\Gamma)$ for every group $\Gamma$. Hence every group $\Gamma$ of finite special rank is of finite general rank. Groups of finite special rank have been studied by various researchers; see \cite{DKS}.
\end{example}
\begin{example}\label{ex:fg}
Let $\Gamma$ be a finitely generated group. Then we have
\[
\genrk(\Gamma)=\rk(\Gamma)<\infty.
\]
To see this, note that $\intrk^{\Gamma}(\Gamma)=\rk(\Gamma)$ and $\intrk^{\Gamma}(\Lambda)\leq \rk(\Gamma)$ for every subgroup $\Lambda$ of $\Gamma$.
\end{example}
\begin{remark}\label{rem:cyc}
We recall from the proof of Proposition~\ref{prop:noZ2} that for a group $\Gamma$ and its finitely generated subgroup $\Lambda$, $\intrk^{\Gamma}(\Lambda)\leq 1$ holds if and only if $\Lambda$ is cyclic. Therefore, $\Gamma$ is of general rank at most $1$ if and only if $\Gamma$ is locally cyclic.
\end{remark}
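A standard instance of the situation in Remark~\ref{rem:cyc} is the additive group $\mathbb{Q}$ of rational numbers: it is locally cyclic since every finitely generated subgroup of $\mathbb{Q}$ is cyclic. Hence
\[
\genrk(\mathbb{Q})=1,
\]
although $\mathbb{Q}$ is neither cyclic nor finitely generated.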
In view of Examples~\ref{ex:spe} and \ref{ex:fg}, in what follows we seek groups of finite general rank that are of infinite special rank and are not finitely generated. First, we discuss some permanence properties of having finite general rank.
\begin{prop}[stability under taking group quotients]\label{prop:quot}
Let $\Gamma$ be a group and $Q$ be a group quotient of $\Gamma$. Then we have
\[
\genrk(\Gamma)\geq \genrk(Q).
\]
In particular, we have
\[
\genrk(\Gamma)\geq \sperk(\Gamma^{\ab}).
\]
\end{prop}
\begin{prop}[stability under inductive limits]\label{prop:ind}
Let $(\Gamma_i,\iota_{ij})$ be an injective system of groups, namely, $\iota_{ij}\colon \Gamma_i\hookrightarrow \Gamma_{j}$ is an injective group homomorphism for every $i,j\in \NN$ with $i\leq j$ such that $\iota_{ii}=\mathrm{id}_{\Gamma_i}$ for every $i\in \NN$ and $\iota_{jk}\circ \iota_{ij}=\iota_{ik}$ for every $i,j,k\in\NN$ with $i\leq j\leq k$. Let $\Gamma=\varinjlim \Gamma_i$ be the inductive limit of $(\Gamma_i,\iota_{ij})$. Then we have
\[
\genrk(\Gamma)\leq \liminf_{i\to \infty} \genrk(\Gamma_i).
\]
In particular, if $\Gamma_1\leqslant\Gamma_2\leqslant\Gamma_3\leqslant\cdots\leqslant \Gamma_i\leqslant \Gamma_{i+1}\leqslant \cdots$ is an increasing sequence of groups, then we have
\begin{eqnarray}\label{eq:ind_ineq}
\genrk(\bigcup_{i\in \NN}\Gamma_i)\leq \liminf_{i\to \infty} \genrk(\Gamma_i).
\end{eqnarray}
\end{prop}
In general, the equality does \emph{not} hold in $\eqref{eq:ind_ineq}$; see Example~\ref{example not equal}.
\begin{prop}[stability under extensions]\label{prop:ext_rk}
Assume that
\[
1 \longrightarrow N \longrightarrow G \longrightarrow \Gamma \longrightarrow 1
\]
is a short exact sequence of groups. Then we have
\begin{eqnarray}\label{eq:loc_intrk_ineq}
\genrk(G)\leq \genrk(N)+\genrk(\Gamma).
\end{eqnarray}
\end{prop}
\begin{prop}[stability under taking overgroups and subgroups of finite indices]\label{prop:subgrp}
Let $\Gamma$ be a group and $\Lambda$ its subgroup. Assume that $\Gamma$ is non-trivial. Then we have
\begin{eqnarray}\label{eq:over}
\genrk(\Gamma)\leq \genrk(\Lambda)+[\Gamma:\Lambda]-1
\end{eqnarray}
and that
\begin{eqnarray}\label{eq:sub}
\genrk(\Lambda)\leq [\Gamma:\Lambda]\cdot (\genrk(\Gamma)-1)+1.
\end{eqnarray}
Here, $[\Gamma:\Lambda]$ denotes $\#(\Gamma/\Lambda)$, the \emph{index} of $\Lambda$ in $\Gamma$.
\end{prop}
In Proposition~\ref{prop:subgrp}, if $\Lambda$ is normal in $\Gamma$, then \eqref{eq:loc_intrk_ineq} provides a better bound
\[
\genrk(\Gamma)\leq \genrk(\Lambda)+\genrk(\Gamma/\Lambda)
\]
than \eqref{eq:over}.
\begin{prop}[stability under wreath products]\label{prop:wreath_rk}
Let $H$ and $\Gamma$ be groups. Then we have
\begin{eqnarray}\label{eq:loc_intrk_ineq_wreath}
\genrk(H\wr \Gamma)\leq \genrk(H)+\genrk(\Gamma).
\end{eqnarray}
\end{prop}
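As an elementary illustration of Proposition~\ref{prop:wreath_rk}, combined with Remark~\ref{rem:cyc} and Example~\ref{ex:fg}, consider the wreath product $\mathbb{Q}\wr\ZZ$. Since $\genrk(\mathbb{Q})=1$ (the group $\mathbb{Q}$ is locally cyclic) and $\genrk(\ZZ)=1$, we have
\[
\genrk(\mathbb{Q}\wr\ZZ)\leq 1+1=2;
\]
equality holds since $\mathbb{Q}\wr\ZZ$ contains $\ZZ^2$ and hence is not locally cyclic. At the same time, $\mathbb{Q}\wr\ZZ$ is not finitely generated, and $\sperk(\mathbb{Q}\wr\ZZ)=\infty$ because $\bigoplus_{\ZZ}\mathbb{Q}$ contains free abelian subgroups of arbitrarily large rank.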
\begin{proof}[Proof of Proposition~$\ref{prop:quot}$]
The former assertion immediately follows from Lemma~\ref{lem:quotient}; the latter assertion then holds by Lemma~\ref{lem:abel}.
\end{proof}
\begin{proof}[Proof of Proposition~$\ref{prop:ind}$]
This follows from the very definition of general rank. Indeed, every finite subset $S$ of $\Gamma$ may be regarded as a subset of $\Gamma_i$ for a sufficiently large $i$, depending on $S$.
\end{proof}
\begin{proof}[Proof of Proposition~$\ref{prop:ext_rk}$]
For the proof, we may assume that $\genrk(N)$ and $\genrk(\Gamma)$ are finite. Let $q\colon G\twoheadrightarrow \Gamma$ be the quotient map in the short exact sequence. Set $m=\genrk(N)$ and $l=\genrk(\Gamma)$. Take finitely many elements $g_1,\ldots,g_k\in G$ arbitrarily.
Since $\genrk(\Gamma)=l$, there exists a subgroup $\Theta$ of $\Gamma$ such that
\[
\langle q(g_1),q(g_2),\ldots ,q(g_k)\rangle \leqslant \Theta \leqslant \Gamma \quad \textrm{and}\quad \rk(\Theta)\leq l.
\]
Set $s=\rk(\Theta)$, and take a generating set $\{\theta_1,\ldots ,\theta_s\}$ of $\Theta$ of size $s$. For every $1\leq i\leq s$, fix $h_i\in G$ satisfying $q(h_i)=\theta_i$. Set $H=\langle h_1,\ldots ,h_s\rangle$. Then, by construction, there exist $f_1,\ldots ,f_k\in H$ such that for every $1\leq j\leq k$, $q(g_jf_j^{-1})=e_{\Gamma}$ holds. This is equivalent to saying that $g_jf_j^{-1}\in N$. For every $1\leq j\leq k$, set $x_j=g_jf_j^{-1}\in N$. Then, since $\genrk(N)=m$, there exists a subgroup $K$ of $N$ with
\[
\langle x_1,x_2,\ldots ,x_k\rangle \leqslant K \leqslant N
\]
such that $\rk(K)\leq m$. For such $K$, we have
\[
\langle g_1,\ldots ,g_k\rangle \leqslant \langle K\cup H\rangle \leqslant G \quad \textrm{and}\quad \rk(\langle K\cup H\rangle )\leq \rk(K)+s\leq m+l.
\]
This implies \eqref{eq:loc_intrk_ineq}, as desired.
\end{proof}
\begin{proof}[Proof of Proposition~$\ref{prop:subgrp}$]
First we prove \eqref{eq:over}. We may assume that $\genrk(\Lambda)$ and $[\Gamma:\Lambda]$ are finite. Set $m=\genrk(\Lambda)$ and $l=[\Gamma:\Lambda]$. Take a set $\{s_1=e_{\Gamma},\ldots ,s_l\}$ of complete representatives of $\Gamma/\Lambda$. Take an arbitrary finitely generated subgroup $\Xi$ of $\Gamma$; let $\{\xi_1,\ldots ,\xi_k\}$ be a set of generators of size $k$, where $k=\rk(\Xi)$. Then, for every $1\leq i\leq k$, there exists a unique $1\leq j_i\leq l$ such that $s_{j_i}^{-1}\xi_i \in \Lambda$. Let $H$ be the subgroup of $\Lambda$ generated by $\{s_{j_i}^{-1}\xi_i\;|\; 1\leq i\leq k\}$. Since $\genrk(\Lambda)=m$, there exists a subgroup $\Theta$ of $\Lambda$ such that
\[
H\leqslant \Theta\leqslant \Lambda \quad \textrm{and} \quad \rk(\Theta)\leq m.
\]
We then observe that
\[
\Xi\leqslant \langle \Theta\cup \{s_2,\ldots ,s_l\}\rangle \leqslant \Gamma;
\]
hence we obtain $\genrk(\Gamma)\leq m+l-1$.
Next, we show \eqref{eq:sub}. Before proceeding to the proof, we recall the following variant of \emph{Schreier's subgroup lemma}: let $H$ be a non-trivial finitely generated group and $K$ be a subgroup of finite index in $H$. Then
\begin{eqnarray}\label{eq:index}
\rk(K)\leq [H:K]\cdot (\rk(H)-1)+1
\end{eqnarray}
holds. For instance, see \cite[Proposition~12.1 in Chapter~III]{LSbook}.
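As a quick numerical sanity check of the bound \eqref{eq:index} (an illustration only, not part of the argument; the code and group choices below are ours), one can brute-force minimal generating sets of small permutation groups, here with $H=\mathrm{Sym}(4)$ and $K=\mathrm{Alt}(4)$:

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p*q)(i) = p(q(i)); permutations stored as tuples of images
    return tuple(p[i] for i in q)

def closure(gens, n):
    # the subgroup of Sym(n) generated by gens (finite, so products suffice)
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(s, g)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

def parity(p):
    # parity of the number of inversions
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

def rank(group, n):
    # minimal size of a generating set, by brute force
    elems = sorted(group)
    for r in range(1, len(elems) + 1):
        for gens in combinations(elems, r):
            if len(closure(gens, n)) == len(group):
                return r

S4 = set(permutations(range(4)))
A4 = {p for p in S4 if parity(p) == 0}
rk_H, rk_K, index = rank(S4, 4), rank(A4, 4), len(S4) // len(A4)
print(rk_H, rk_K, index)               # 2 2 2
assert rk_K <= index * (rk_H - 1) + 1  # the bound: 2 <= 2*(2-1)+1 = 3
```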
Again, we may assume that $\genrk(\Gamma)$ and $[\Gamma:\Lambda]$ are finite. Set $m=\genrk(\Gamma)$ and $l=[\Gamma:\Lambda]$. Since $\Gamma$ is non-trivial, we have $m\geq 1$. Take an arbitrary finitely generated non-trivial subgroup $\Xi$ of $\Lambda$. Then since $\genrk(\Gamma)=m$, there exists a non-trivial subgroup $\Theta$ of $\Gamma$ such that
\[
\Xi\leqslant \Theta\leqslant \Gamma \quad \textrm{and} \quad \rk(\Theta)\leq m.
\]
Then, we have
\[
\Xi\leqslant \Theta\cap \Lambda \leqslant \Lambda\quad \textrm{and}\quad
[\Theta:\Theta\cap \Lambda]\leq [\Gamma:\Lambda]\leq l.
\]
Hence, \eqref{eq:index} implies that
\[
\rk(\Theta\cap \Lambda)\leq [\Theta:\Theta\cap \Lambda] \cdot (\rk(\Theta)-1)+1\leq l(m-1)+1.
\]
Therefore, we obtain the conclusion.
\end{proof}
\begin{proof}[Proof of Proposition~$\ref{prop:wreath_rk}$]
Set $G=H\wr \Gamma$.
For the proof, we may assume that $\genrk(H)$ and $\genrk(\Gamma)$ are finite.
Set $m=\genrk(H)$ and $l=\genrk(\Gamma)$.
Take finitely many elements $g_1,\ldots,g_k\in G$ arbitrarily.
Write $g_i=(v_i,\gamma_i)$ for every $1\leq i\leq k$, where $v_i\in \bigoplus_{\Gamma}H$ and $\gamma_i\in \Gamma$.
Since $\genrk(\Gamma)=l$, there exists a subgroup $\Theta$ of $\Gamma$ such that
\[
\langle \gamma_1,\gamma_2,\ldots ,\gamma_k\rangle \leqslant \Theta \leqslant \Gamma \quad \textrm{and}\quad \rk(\Theta)\leq l.
\]
Set $s=\rk(\Theta)$, and take a generating set $\{\theta_1,\ldots ,\theta_s\}$ of $\Theta$ of size $s$.
Set
\[
S=\{v_j(\gamma)\;|\; 1\leq j\leq k,\ \gamma \in \Gamma\};
\]
it is a finite subset of $H$.
Since $\genrk(H)=m$, there exists a subgroup $P$ of $H$ with
\[
\langle S\rangle \leqslant P\leqslant H \quad \textrm{and}\quad \rk(P)\leq m.
\]
Set $\rk(P)=t$ and take a generating set $\{p_1,\ldots ,p_t\}$ of $P$ of size $t$.
Set $f_i=(\mathbf{e},\theta_i) \in G$ and $w_n=(p_n\delta_{e_\Gamma},e_{\Gamma}) \in G$ for every $1\leq i\leq s$ and every $1\leq n\leq t$. Here, $\mathbf{e}$ denotes the map $\Gamma\to H$ sending every $\gamma \in \Gamma$ to $e_{H}$; $p_n\delta_{e_\Gamma}$ means the map $\Gamma\to H$ that sends $e_\Gamma$ to $p_n$ and sends all the other elements to $e_{H}$. Then we have
\[
\langle g_1,g_2,\ldots ,g_k\rangle \leqslant \langle f_1,\ldots, f_s, w_1,\ldots ,w_t\rangle \leqslant G
\]
and
\[
\rk(\langle f_1,\ldots, f_s, w_1,\ldots ,w_t\rangle)\leq s+t\leq l+m.
\]
Therefore, we obtain \eqref{eq:loc_intrk_ineq_wreath}, as desired.
\end{proof}
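The counting in this proof can be watched in a small finite toy case (our own illustration, not from the text): in $(\ZZ/2\ZZ)\wr(\ZZ/3\ZZ)$ the two elements playing the roles of $f_1$ and $w_1$ above already generate the whole group, matching the bound $s+t=1+1=2$.

```python
def mult(x, y):
    # law in (Z/2) wr (Z/3): (a, g)(b, h) = (a + g.b, g + h),
    # where (g.b)[i] = b[(i - g) % 3] shifts coordinates by g
    (a, g), (b, h) = x, y
    shifted = tuple(b[(i - g) % 3] for i in range(3))
    return (tuple((a[i] + shifted[i]) % 2 for i in range(3)), (g + h) % 3)

def closure(gens):
    e = ((0, 0, 0), 0)
    elems, frontier = {e}, [e]
    while frontier:
        new = []
        for x in frontier:
            for s in gens:
                y = mult(s, x)
                if y not in elems:
                    elems.add(y)
                    new.append(y)
        frontier = new
    return elems

f = ((0, 0, 0), 1)   # role of f_1 = (e, theta): generator of the top group
w = ((1, 0, 0), 0)   # role of w_1 = (p*delta_e, e): bump in the base group
G = closure([f, w])
print(len(G))        # 24 = |Z/2|^3 * |Z/3|
assert len(G) == 2**3 * 3
```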
\begin{example} \label{example not equal}
Here we exhibit an example for which the equality does not hold in $\eqref{eq:ind_ineq}$. For every $n \ge 3$, take an injective homomorphism $f_n\colon F_2\hookrightarrow F_n$ and an injective homomorphism $g_n\colon F_n\hookrightarrow F_2$. Then, consider a sequence
\begin{eqnarray}\label{eq:ind_1}
F_2 \xrightarrow{f_3} F_3 \xrightarrow{g_3} F_2 \xrightarrow{f_4} F_4 \xrightarrow{g_4} F_2 \xrightarrow{f_5} F_5 \xrightarrow{g_5} \cdots,
\end{eqnarray}
and let $\Gamma$ be the inductive limit of this sequence. We first claim that
\[
\genrk(\Gamma)=2.
\]
To see this, first note that $\genrk (\Gamma) \ge 2$ since $\Gamma$ is not locally cyclic. Also, by applying Proposition~\ref{prop:ind} to the inductive system $\eqref{eq:ind_1}$, we have $\genrk(\Gamma)\leq \genrk(F_2)=2$. This verifies the claim.
Now we regard $\Gamma$ as the inductive limit of another inductive system
\begin{eqnarray}\label{eq:ind_2}
\Gamma_1=F_3 \xrightarrow{f_4\circ g_3} \Gamma_2=F_4 \xrightarrow{f_5\circ g_4} \Gamma_3= F_5 \xrightarrow{f_6\circ g_5} \cdots.
\end{eqnarray}
Then, for the inductive system $\eqref{eq:ind_2}$, we have
\[
2=\genrk(\Gamma) < \liminf_{i\to \infty}\genrk(\Gamma_i)=\infty;
\]
in particular, the inequality $\eqref{eq:ind_ineq}$ is \emph{strict} in this setting.
\end{example}
\begin{example} \label{infinite braid}
Recall the definition of the braid group $B_n$ from Example~\ref{ex:braid}.
There exists a natural injective homomorphism from $B_n$ to $B_{n+1}$, and we define $B_\infty$ to be the inductive limit of $B_n$.
In what follows, we show that
\[
\genrk(B_\infty) = 2.
\]
For $n \ge 3$, $B_n$ is non-abelian, and generated by two elements $\sigma_1$ and $\sigma_{n-1} \cdots \sigma_1$ since
\[ (\sigma_{n-1} \cdots \sigma_1)^{-1} \sigma_i (\sigma_{n-1} \cdots \sigma_1) = \sigma_{i+1}\]
for every $1\leq i \leq n-2$. This means that $\genrk(B_n) = 2$. Hence Proposition~\ref{prop:ind} implies that $\genrk(B_\infty) = 2$.
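The displayed relation, and the fact that the two elements generate, can be checked in the image of $B_n$ under the natural projection $B_n\twoheadrightarrow \mathrm{Sym}(n)$; this only tests the image, of course, but the relation in $B_n$ itself is a standard consequence of the braid relations. A small Python check for $n=5$, with $\sigma_{i+1}$ sent to the adjacent transposition of positions $i,i+1$ ($0$-indexed):

```python
n = 5

def compose(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def s(i):
    # image of sigma_{i+1}: the adjacent transposition (i, i+1)
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

c = tuple(range(n))                # image of sigma_{n-1} ... sigma_1
for i in range(n - 1):
    c = compose(s(i), c)

for i in range(n - 2):             # the conjugation relation, in Sym(n)
    assert compose(compose(inverse(c), s(i)), c) == s(i + 1)

def closure(gens):
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for t in gens:
                h = compose(t, g)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

print(len(closure([s(0), c])))     # 120: the two images generate Sym(5)
```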
We can also define the inductive limit of the natural inductive system $([B_n,B_n])_{n\geq 2}$; this limit equals $[B_{\infty}, B_{\infty}]$. Then, we have
\[
\genrk([B_{\infty},B_{\infty}])=2.
\]
Indeed, this follows from Proposition~\ref{prop:ind} and the result \cite{Kordek} of Kordek stating that $\rk([B_n,B_n])=2$ for every $n\geq 7$.
\end{example}
\begin{example}\label{ex:pure}
Here we provide another example related to braid groups; but it has infinite general rank. For every $n\geq 2$, $B_n$ admits a natural surjective homomorphism $B_n\twoheadrightarrow \mathrm{Sym}(n)$, where $\mathrm{Sym}(n)$ denotes the symmetric group of degree $n$. The kernel $P_n$ of this homomorphism is called the \emph{pure braid group with $n$ strands}. We can consider the inductive limit $P_{\infty}$ of the natural inductive system $(P_n)_{n\geq 2}$. Then we have
\[
\genrk(P_{\infty})=\infty.
\]
To see this, we first recall the following well known fact for every $n\geq 2$:
\[
P_{n}^{\ab}\simeq \ZZ^{\binom{n}{2}}
\]
(see \cite[Corollary 1.20]{KTbook}). By construction, we then have
\[
P_{\infty}^{\ab}\simeq \varinjlim \,\ZZ^{\binom{n}{2}}\simeq \bigoplus_{\NN}\ZZ.
\]
Here $(\ZZ^{\binom{n}{2}})_{n\geq 2}$ forms an inductive system via natural inclusion maps. Hence, Proposition~\ref{prop:quot} implies that
\[
\genrk(P_{\infty})\geq \sperk(P_{\infty}^{\ab})=\infty.
\]
\end{example}
In the settings of Examples~\ref{infinite braid} and \ref{ex:pure}, we have a short exact sequence
\begin{eqnarray}\label{eq:braids}
1\longrightarrow P_{\infty}\longrightarrow B_{\infty}\longrightarrow \mathrm{Sym}_{\mathrm{fin}}(\NN)\longrightarrow 1.
\end{eqnarray}
Here, $\mathrm{Sym}_{\mathrm{fin}}(\NN)$ denotes the \emph{finitary symmetric group} on $\NN$: it is the inductive limit of the natural inductive system $(\mathrm{Sym}(n))_{n\geq 2}$. We note that the group $\mathrm{Sym}_{\mathrm{fin}}(\NN)$ is locally finite and that
\[
\genrk(\mathrm{Sym}_{\mathrm{fin}}(\NN))=2.
\]
To see the latter, use Proposition~\ref{prop:quot} (or Proposition~\ref{prop:ind}). From these points of view, $\mathrm{Sym}_{\mathrm{fin}}(\NN)$ might be seen as a `small' group. However, in \eqref{eq:braids} we have
\[
2=\genrk(B_{\infty})<\genrk(P_{\infty})+\genrk(\mathrm{Sym}_{\mathrm{fin}}(\NN))=\infty;
\]
in particular, the inequality~\eqref{eq:loc_intrk_ineq} is \emph{far from being sharp} in this setting.
The following example comes from $1$-dimensional dynamics. Let $F$ be the group of Richard Thompson: it is the group of homeomorphisms on an interval $[0,1]$ that satisfy the following three conditions:
\begin{itemize}
\item they are piecewise linear;
\item on the pieces where the maps are affine, the slope lies in the set $\{2^n\;|\; n\in \ZZ\}$;
\item the breakpoints are finitely many and they belong to $([0,1]\cap \ZZ[1/2])^2$.
\end{itemize}
It is known that the commutator subgroup $[F,F]$ equals the group consisting of all elements in $F$ that are the identity in neighborhoods of $0$ and $1$, and that
\[
F^{\ab}=F/[F,F]\simeq \ZZ^2.
\]
It is also known that $[F,F]$ is a simple group with $\rk([F,F])=\infty$, while $\rk(F)=2$. See \cite{CFP} for a comprehensive treatise on $F$.
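The three defining conditions are mechanical to verify for explicit elements. The following sketch (our own illustration; the breakpoint data is the usual one for the standard generators $x_0,x_1$ of $F$) represents elements by breakpoint lists with exact dyadic arithmetic and checks that the conditions persist under composition:

```python
from fractions import Fraction as Fr

# the standard generators x0, x1 of F, as breakpoint lists [(x, f(x)), ...]
x0 = [(Fr(0), Fr(0)), (Fr(1, 2), Fr(1, 4)), (Fr(3, 4), Fr(1, 2)), (Fr(1), Fr(1))]
x1 = [(Fr(0), Fr(0)), (Fr(1, 2), Fr(1, 2)), (Fr(3, 4), Fr(5, 8)),
      (Fr(7, 8), Fr(3, 4)), (Fr(1), Fr(1))]

def ev(f, x):                       # evaluate a piecewise-linear map
    for (a, fa), (b, fb) in zip(f, f[1:]):
        if a <= x <= b:
            return fa + (fb - fa) * (x - a) / (b - a)

def ev_inv(f, y):                   # invert (the maps are increasing bijections)
    for (a, fa), (b, fb) in zip(f, f[1:]):
        if fa <= y <= fb:
            return a + (b - a) * (y - fa) / (fb - fa)

def compose(g, f):                  # g∘f: f's breakpoints plus f-preimages of g's
    xs = sorted({a for a, _ in f} | {ev_inv(f, a) for a, _ in g})
    return [(x, ev(g, ev(f, x))) for x in xs]

def is_pow2(q):                     # q = 2^n for some integer n?
    return q > 0 and q.numerator & (q.numerator - 1) == 0 \
        and q.denominator & (q.denominator - 1) == 0

def in_F(f):                        # the three defining conditions
    endpoints = f[0] == (Fr(0), Fr(0)) and f[-1] == (Fr(1), Fr(1))
    slopes = all(is_pow2((fb - fa) / (b - a))
                 for (a, fa), (b, fb) in zip(f, f[1:]))
    dyadic = all(x.denominator & (x.denominator - 1) == 0 and
                 y.denominator & (y.denominator - 1) == 0 for x, y in f)
    return endpoints and slopes and dyadic

assert in_F(x0) and in_F(x1)
assert in_F(compose(x0, x1)) and in_F(compose(x1, x0))
```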
\begin{prop}\label{prop:F}
For Thompson's group $F$, we have
\[
\genrk([F,F])=2.
\]
\end{prop}
The proof uses standard ideas on $F$ described in \cite[Section~4]{CFP}.
\begin{proof}
Since $[F,F]$ is not locally cyclic, we have $\genrk([F,F])\geq 2$. In what follows, we show that $\genrk([F,F])\leq 2$. Let $k\in \NN$ and let $f_1,\ldots ,f_k\in [F,F]$. Then, since $f_1,\ldots ,f_k$ are finitely many, there exist $a,b\in \ZZ[1/2]$ with $0<a<1/4<3/4<b<1$ such that for every $1\leq i\leq k$, $f_i$ is the identity on $[0,a] \cup [b,1]$. By \cite[Lemma~4.2]{CFP}, there exists $g\in [F,F]$ such that for every $1\leq i\leq k$, $gf_ig^{-1}$ is the identity on $[0,1/4]\cup [3/4,1]$. By \cite[Lemma~4.4]{CFP}, the group $H$ consisting of all elements in $[F,F]$ that are the identity on $[0,1/4]\cup [3/4,1]$ is isomorphic to $F$; hence it is generated by two elements. Therefore, we have
\[
\langle f_1,\ldots ,f_k\rangle \leqslant g^{-1}Hg\leqslant [F,F]
\]
with $\rk(g^{-1}Hg)=2$. This yields
\[
\genrk([F,F])\leq 2,
\]
as desired.
\end{proof}
With the aid of Propositions~\ref{prop:ind}, \ref{prop:ext_rk} and \ref{prop:wreath_rk}, we can build up groups of finite general rank out of groups that are known to have finite general rank. For instance, the group $\QQ\wr (B_{\infty}\wr [F,F])$ is a group of general rank at most $5$ by Example~\ref{infinite braid} and Proposition~\ref{prop:F}.
In relation to Theorem~\ref{thm:wreath_shin}, the following problem seems interesting.
\begin{problem}
Given a group $\Gamma$ $($non-abelian and not finitely generated$)$, determine $\genrk(\Gamma)$.
\end{problem}
\section*{Acknowledgment}
The authors are grateful to the anonymous referee for useful comments, which improved the present paper.
The second author is supported by JSPS KAKENHI Grant Number JP20H00114 and JST-Mirai Program Grant Number JPMJMI22G1.
The third author is supported by JSPS KAKENHI Grant Number JP21J11199.
The first author, the fourth author and the fifth author are partially supported by JSPS KAKENHI Grant Number JP21K13790, JP19K14536 and JP21K03241, respectively.
\section{Introduction}
\label{intro}
The low-lying structure of $^{12}$C is still one of the most fascinating open problems in nuclear physics. Alpha-clusterization and the nature of the so-called Hoyle state, which plays a crucial role in nucleosynthesis, have attracted special interest. Microscopic theories (e.g., \cite{Chernykh07}) support the existence of three-alpha cluster configurations for the $^{12}$C nucleus, a fact which justifies the use of cluster models (e.g.,~\cite{Nguyen13}) and algebraic methods (e.g.,~\cite{dellaRocca17}). These approaches, although simpler, are particularly suitable for the description of reaction observables.
Experimentally, different probes have been extensively used to access the properties of its ground and excited states. We discuss here the case of $\alpha+{^{12}}$C inelastic scattering studied within two theoretical three-body approaches: The hyperspherical formalism, and the algebraic cluster model.
Our goal is to compare form factors for inelastic scattering in these two approaches and set the basis for full coupled-channel calculations.
\section{Three-body calculations}
\label{sec-1}
As in Refs.~\cite{Nguyen13,Ishikawa14}, the $\alpha+\alpha+\alpha$ problem can be solved within the hyperspherical formalism~\cite{Zhukov93}, which has been successfully applied to describe the properties of Borromean nuclei such as the two-neutron halo systems $^{6}$He or $^{11}$Li, or the weakly bound stable nucleus $^{9}$Be~\cite{JCasal14}. Within this framework, the states of the three-body system can be written as
\begin{equation}
\Psi^{j\mu}(\rho,\Omega)=\rho^{-5/2}\sum_{\beta}\chi_\beta^j(\rho)\Upsilon_\beta^{j\mu}(\Omega),
\label{eq:wf}
\end{equation}
where $\rho$ is the hyper-radius, $\Omega$ contains all the angular dependence, functions $\Upsilon_\beta^{j\mu}$ are the so-called hyperspherical harmonics, and $\chi_\beta^j$ are the hyperradial functions to be determined. Here, $\beta$ is the set of quantum numbers defining each channel. In the case of three zero-spin particles, $\beta\equiv\{K,l_x,l_y\}$, where $l_x$ and $l_y$ are the orbital angular momenta associated to the usual Jacobi coordinates,
so that $\boldsymbol{j}=\boldsymbol{l_x}+\boldsymbol{l_y}$ gives the total angular momentum, and $K$ is the so-called hypermomentum. Note that $l_x$ has to be even for identical bosons. Details can be found in Ref.~\cite{Nguyen13}. The hyperangular functions and $\chi_\beta^j(\rho)$ can be expanded in a given basis,
\begin{equation}
\chi_{\beta}^{j}(\rho) = \sum_{i} C_{i\beta}^{j} U_{i\beta}(\rho).
\end{equation}
Index $i$ runs over the number of hyperradial excitations included, and coefficients $C_{i\beta}^j$ are obtained upon diagonalization of the three-body Hamiltonian with coupling potentials
\begin{equation}
V_{\beta'\beta}^{j\mu}(\rho)=\left\langle \Upsilon_{\beta }^{j\mu}(\Omega)\Bigg|\sum_{j>i=1}^3 V_{ij}(\rho,\alpha) \Bigg|\Upsilon_{\beta'}^{ j\mu}(\Omega) \right\rangle + \delta_{\beta,\beta'}V_{3b}^j(\rho).
\end{equation}
This contains all binary interactions, integrated over the angular dependence, and a phenomenological three-body force to account for effects not explicitly included in the binary interactions~\cite{IJThompson04}.
In this work, calculations are performed with the Ali-Bodmer $\alpha$-$\alpha$ nuclear potential~\cite{AliBodmer} together with a hard-sphere Coulomb term. Radial functions are expanded in a THO basis~\cite{JCasal14}, and the three-body force is adjusted to reproduce the energies of the first $2^+_1$ bound excited state and the $0^+_2$ Hoyle state in $^{12}$C, which have a well-developed three-body character~\cite{Chernykh07}.
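As a schematic illustration of this diagonalization step, one can solve a single-channel toy version of the hyperradial problem by finite differences; after the $\rho^{-5/2}$ rescaling in Eq.~(\ref{eq:wf}), the centrifugal term for hypermomentum $K$ is $(K+3/2)(K+5/2)/\rho^2$. The Gaussian well and the units ($\hbar^2/2m=1$) below are assumptions of the toy model, not the actual three-$\alpha$ input:

```python
import numpy as np

K = 0                          # hypermomentum of the (single) channel
N, rho_max = 1200, 30.0        # grid size and box radius (schematic)
rho = np.linspace(rho_max / N, rho_max, N)
h = rho[1] - rho[0]

V = -8.0 * np.exp(-(rho / 3.0) ** 2)            # assumed Gaussian well (toy)
Veff = V + (K + 1.5) * (K + 2.5) / rho ** 2     # hyperradial centrifugal term

# -chi'' + Veff chi = E chi (units hbar^2/2m = 1), Dirichlet boundaries
H = (np.diag(2.0 / h**2 + Veff)
     - np.diag(np.ones(N - 1) / h**2, 1)
     - np.diag(np.ones(N - 1) / h**2, -1))
E, chi = np.linalg.eigh(H)
print(E[:3])                   # lowest hyperradial eigenvalues; E[0] is bound
```

In the actual calculation the same eigenvalue problem is solved in the (coupled-channel) THO basis rather than on a radial grid, but the structure is identical.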
The set of coupled differential equations to describe $\alpha+^{12}\text{C}$ inelastic scattering requires the computation of radial form factors involving the projectile-target interaction. Within the hyperspherical three-body framework, this is a four-body problem involving the $\alpha$-$\alpha$ potential. Between states labeled $i$ and $j$, the form factor is simply
\begin{equation}
F_{ij} (\vec{R}) = \langle \Psi_i|\sum_{q=1}^3 V_{\alpha\alpha}(|\vec{r}_q-\vec{R}|)|\Psi_j\rangle,
\end{equation}
where $\vec{R}$ is the projectile-target distance and $\vec{r}_q$ is the position of each $\alpha$ within the target. For this purpose, the potential we use is derived from the double folding of two Gaussian densities, adjusted to reproduce the radius of the $\alpha$ particle, with the M3Y nucleon-nucleon interaction~\cite{m3y}. This way, the computed form factors will be consistent with those described in the next section within the formalism of densities and transition densities.
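A minimal momentum-space sketch of this double-folding construction is the following; the Gaussian width $d$ is the value quoted above, while the M3Y parameters (two Yukawas plus a $-276$ MeV fm$^3$ zero-range exchange term) and the normalization of the $\alpha$ density to four nucleons are the commonly used Reid-based values, taken here as assumptions of the illustration:

```python
import numpy as np

d = 0.56                                    # fm^-2, Gaussian width from the text
q = np.linspace(1e-4, 20.0, 8000)           # fm^-1
dq = q[1] - q[0]

rho_q = 4.0 * np.exp(-q**2 / (4 * d))       # FT of the alpha density (4 nucleons)
yuk = lambda mu: 4 * np.pi / (mu * (q**2 + mu**2))   # FT of exp(-mu r)/(mu r)
# direct M3Y (Reid) plus zero-range exchange, in MeV fm^3 (assumed parameters)
v_q = 7999.0 * yuk(4.0) - 2134.0 * yuk(2.5) - 276.0

def V(R):
    # V(R) = (1/2 pi^2) * int q^2 j0(qR) rho(q)^2 v(q) dq
    j0 = np.sinc(q * R / np.pi)             # np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(q**2 * j0 * rho_q**2 * v_q) * dq / (2 * np.pi**2)

print(V(0.0), V(2.0), V(8.0))
```

The resulting $V(R)$ is attractive at overlap and falls off within a few fm, as expected for a short-ranged folded interaction.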
\section{Algebraic cluster model}
\label{sec-2}
Assuming the $D_{3h}$ symmetry corresponding to an equilateral triangle, the density for the ground-state band of $^{12}$C can be written as~\cite{dellaRocca17}
\begin{equation}
\rho_{\rm g.s.} (\vec{r},\{\vec{r}_k\}) = \sum_{k=1}^3 \rho_\alpha (\vec{r}-\vec{r}_k), ~~~~~~ \rho_\alpha (\vec{r}) = \left(\frac{d}{\pi}\right)^{3/2} e^{-dr^2},
\end{equation}
with $d=0.56(2)$ fm$^{-2}$ to reproduce the radius of the $\alpha$ particle and $\{\vec{r}_k\}$ at the vertices of the triangle, i.e., $\vec{r}_1=(\beta,\pi/2,0)$, $\vec{r}_2=(\beta,\pi/2,2\pi/3)$ and $\vec{r}_3=(\beta,\pi/2,4\pi/3)$ in spherical coordinates. The radial parameter $\beta=1.82$ fm ensures the $0^+_1$ ground-state radius and $B(E2)$ value to the first $2^+_1$ state are reproduced. Expanded in spherical harmonics, it takes the form
\begin{equation}
\rho_{\rm g.s.} (\vec{r})=\sum_{\lambda\mu} \rho_{\rm g.s.}^{\lambda\mu}(r) Y_{\lambda\mu}(\theta,\varphi),
\end{equation}
where only the multipoles allowed by the $D_{3h}$ symmetry appear in the sum. The 00 term represents the 0$^+_1$ ground-state density, while others represent the change in density for transitions to higher lying states of the same band, e.g., 20 is the term associated to the $2^+_1$ state.
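The $D_{3h}$ selection rule can be confirmed numerically: for the triangular density, only multipoles with $\mu\equiv 0 \pmod 3$ and $\lambda+\mu$ even survive the angular integration. A Python sketch (our illustration, with the needed harmonics written out explicitly and $\beta$, $d$ as quoted above):

```python
import numpy as np

beta, d = 1.82, 0.56                       # fm, fm^-2 (values quoted above)
NTH, NPH = 401, 360
th = np.linspace(0, np.pi, NTH)[:, None]   # polar angle
ph = np.linspace(0, 2 * np.pi, NPH, endpoint=False)[None, :]
dth, dph = np.pi / (NTH - 1), 2 * np.pi / NPH
st, ct = np.sin(th), np.cos(th)

def density(r):
    rho = 0.0
    for k in range(3):                     # three alphas at 120 degrees
        phk = 2 * np.pi * k / 3
        dist2 = r**2 + beta**2 - 2 * r * beta * st * np.cos(ph - phk)
        rho = rho + (d / np.pi) ** 1.5 * np.exp(-d * dist2)
    return rho

e = lambda mu: np.exp(1j * mu * ph)        # azimuthal factor
Y = {                                      # explicit spherical harmonics
    (0, 0): 1 / np.sqrt(4 * np.pi) + 0 * st + 0 * ph,
    (2, 0): np.sqrt(5 / (16 * np.pi)) * (3 * ct**2 - 1) + 0 * ph,
    (2, 1): -np.sqrt(15 / (8 * np.pi)) * st * ct * e(1),
    (2, 2): np.sqrt(15 / (32 * np.pi)) * st**2 * e(2),
    (3, 0): np.sqrt(7 / (16 * np.pi)) * (5 * ct**3 - 3 * ct) + 0 * ph,
    (3, 3): -np.sqrt(35 / np.pi) / 8 * st**3 * e(3),
}

rho = density(beta)                        # evaluate at r = beta
comp = {lm: np.sum(rho * np.conj(Ylm) * st) * dth * dph for lm, Ylm in Y.items()}
allowed = [abs(comp[lm]) for lm in [(0, 0), (2, 0), (3, 3)]]
forbidden = [abs(comp[lm]) for lm in [(2, 1), (2, 2), (3, 0)]]
print(allowed, forbidden)                  # forbidden ones vanish numerically
```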
By considering now symmetric vibrations $\Delta\beta^A$ along the radial direction, we can construct the band associated to the Hoyle state as a breathing mode. The transition densities connecting the ground-state band with this $A$-type band can be obtained in leading order as
\begin{equation}
\delta\rho_{\text{g.s.}\rightarrow A}(\vec{r}) = \chi_1 \frac{d}{d\beta} \rho_{\rm g.s.} (\vec{r},\beta),
\end{equation}
with $\chi_1\simeq 0.247$ to recover the experimental monopole matrix element $M(E0)$. Again, expanding in multipoles one gets
\begin{equation}
\delta\rho_{\text{g.s.}\rightarrow A}(\vec{r}) = \sum_{\lambda\mu} \delta\rho_{\text{g.s.}\rightarrow A}^{\lambda\mu}(r) Y_{\lambda\mu}(\theta,\varphi).
\end{equation}
For details about these densities and transition densities, see Refs.~\cite{Vitturi19,vittprep19}. In this case, form factors for $\alpha+^{12}\text{C}$ inelastic scattering can be obtained following a double folding procedure with the M3Y interaction introduced above,
\begin{equation}
F_{ij} (\vec{R}) = \int\int \rho_\alpha(\vec{r}_1-\vec{R}) v_{NN}(|\vec{r}_{12}|) \delta\rho^{i\rightarrow j}(\vec{r}_2)d\vec{r}_1d\vec{r}_2.
\end{equation}
\section{Comparison of form factors}
\label{sec-3}
The form factors for the $0^+_1\rightarrow0^+_2$, $0^+_1\rightarrow2^+_1$ and $2^+_1\rightarrow0^+_2$ transitions, computed within the two approaches, are shown for comparison in Fig.~\ref{fig:ff}. A rather good agreement is observed, even though the two descriptions rest on very different theoretical grounds. This may indicate that the two models capture essentially the same geometrical properties of $^{12}$C. Form factors connecting the bound states ($0^+_1,2^+_1$) with the Hoyle state ($0^+_2$) seem to exhibit a larger extension within the hyperspherical approach. This may have implications for the corresponding cross section. Full coupled-channel calculations involving these form factors, as well as those connecting other low-lying states in $^{12}$C, are ongoing and will be presented elsewhere.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{ff.pdf}
\caption{Nuclear form factors for $\alpha+{^{12}}$C inelastic scattering. Comparison between the three-body hyperspherical formalism (dashed) and the algebraic cluster model (solid). See text.}
\label{fig:ff}
\end{figure}
\section{Introduction}
Unstable $r$-modes \cite{A97,FM98} may limit the angular velocity of
old neutron stars spun up by accretion and may contribute to the spin-down of
nascent neutron stars (see \cite{Lindblom98b,fsbook,Bondarescu07,Bondarescu09}
for references and reviews). Spruit \cite{Spruit99} argued that
angular momentum loss from the star would generate differential rotation,
because the loss rate depends on the mode shape and varies over the star.
Growing differential rotation winds up and amplifies the star's magnetic field,
and Rezzolla and collaborators~\cite{Rezzolla00,Rezzolla01b,Rezzolla01c} studied
the possibility that the energy lost to the magnetic field would damp out
the $r$-mode instability. (In Spruit's scenario, a buoyancy instability of
the greatly enhanced magnetic field could power a $\gamma$-ray burst.)
To estimate the magnetic-field windup, Rezzolla {\it et al.} used a
drift velocity of a fluid element; this is second-order in perturbation
theory, but because the second-order velocity field had not been computed,
they estimated it by integrating the first order velocity field. Subsequently,
Cuofano {\it et al.} \cite{Cuofano2010,Cuofano_etal12} used this estimate of drift
velocity to study the evolution of the $r$-mode instability damped by magnetic
field wind-up.%
\footnote{Work by Abbassi {\it et al.}~\cite{ARR12}
also looks at the damping of
$r$-modes due to a magnetic field; here, however, the magnetic dissipation
arises from magnetic diffusivity in a linearized MHD treatment.}
Following Spruit's work, Levin and Ushomirsky found the differential rotation
of the unstable $r$-mode in a toy model of a spherical shell of fluid
\cite{LevinUshormirsky01}. S\'a \cite{Sa2004} then carried out the first
computation of the differential rotation associated with a stable $r$-mode of
uniformly rotating barotropic Newtonian stellar models and, with collaborators,
looked at implications of the calculation for the unstable mode~\cite{Sa2005a,Sa2005b}.
The differential rotation arises at second order in perturbation theory as a
time-independent, axisymmetric part of the solution to the perturbed Euler
equations; for the $r$-mode whose linear part is associated with the
angular harmonic $Y^{\ell\ell}$, S\'a's solution has the form
\be
\delta^{(2)}\Omega
= \alpha^2 \Omega C_\Omega
\left(\frac zR\right)^2\left(\frac \varpi R\right)^{2\ell-4}
+ \alpha^2 \delta^{(2)}_N\Omega(\varpi).
\label{e:sa_rotation}\ee
Here $\alpha$ measures the amplitude of the first-order perturbation,
$C_\Omega$ is dimensionless and of order unity, the $z$-axis is
the axis of rotation, and $\varpi$ is the distance from the axis. The
function $\delta^{(2)}_N\Omega(\varpi)$ is arbitrary. This ambiguity in the
rotation law is present for the following reason. One can perturb a
uniformly rotating barotropic star by adding differential rotation,
changing the angular velocity from $\Omega$ to
$\Omega+\delta\Omega(\varpi)$. If $\delta\Omega(\varpi)$ is chosen to
be quadratic in $\alpha$, $\delta\Omega(\varpi)=\alpha^2
\delta^{(2)}_N\Omega(\varpi)$, it and the corresponding time-independent
perturbations of density, pressure, and gravitational potential
$\Phi$, constitute a solution to the time-independent second-order
perturbation equations. Cao {\it et al.}~\cite{CZW15} use a particular
choice of $\delta^{(2)}\Omega$ to recompute the magnetic damping.
In the present paper, we show that the second-order radiation-reaction
force removes the ambiguity in the differential rotation associated
with the Newtonian $r$-modes. In effect, the degeneracy in the space of
zero-frequency solutions is broken by the radiation-reaction force,
which picks out a unique differential rotation law that depends on the
neutron-star equation of state. We find an explicit formula for that
rotation law for the unstable $r$-modes of
slowly rotating stars.
To lowest nonvanishing post-Newtonian order, the growth time $\tau$ of the
radiation-reaction driven (CFS) instability of an $r$-mode is given by
\[
\beta\equiv \frac1\tau
= C_\beta \frac G{c^{2\ell+3}} M R^{2\ell} \Omega^{2\ell+2},
\]
where $C_\beta$ is a dimensionless constant that depends on the
equation of state. In using the Newtonian Euler equation together
with the radiation-reaction force at lowest nonvanishing
post-Newtonian order, we are neglecting radiation-reaction terms
smaller by factors of ${\cal O}(R\Omega/c)$ and ${\cal O}(GM/Rc^2)$; this means, in
particular, that we keep only terms linear in the dimensionless
parameter $\beta/\Omega$.
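For orientation (an illustrative order-of-magnitude estimate, not a result used below): for $\ell=2$ the ratio can be rewritten as $\beta/\Omega = C_\beta\,(GM/Rc^2)(R\Omega/c)^5$, and for assumed canonical values $M=1.4\,M_\odot$, $R=12$ km, and a $500$ Hz spin it comes out near $5\times 10^{-6}\,C_\beta$, consistent with keeping only terms linear in $\beta/\Omega$:

```python
import math

G, c = 6.674e-11, 2.998e8            # SI units
Msun = 1.989e30
M, R = 1.4 * Msun, 12.0e3            # assumed canonical neutron star
Omega = 2 * math.pi * 500.0          # 500 Hz spin frequency, rad/s

compactness = G * M / (R * c**2)     # GM/Rc^2
vel = R * Omega / c                  # R*Omega/c
ratio = compactness * vel**5         # beta/Omega up to the O(1) factor C_beta
print(compactness, vel, ratio)       # ratio ~ 5e-6
```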
Three small parameters appear in the paper: The amplitude $\alpha$ of
the perturbation, the dimensionless growth rate $\beta/\Omega$, and,
in the final, slow-rotation part of the paper, the angular velocity $\Omega$.
For the logic of the paper, it is helpful to
note that these three parameters can be regarded as independent of one another.
The growth rate $\beta$ can be varied by changing the equation of state of
the material while keeping $\alpha$ and $\Omega$ fixed; for example, in polytropes
(stars based on the polytropic equation of state $p=K\rho^n$), one can change
$\beta$ by changing the polytropic constant $K$.
The plan of the paper is as follows. Sect.~\ref{s:newtonian} lists
the equations governing a Newtonian star acted on by a post-Newtonian
radiation-reaction force, with the star modeled as a self-gravitating
perfect fluid. In Sect.~\ref{s:Perturbed Stellar Models}, we discuss
first- and second-order perturbations of a uniformly rotating star.
From the second-order equations, we obtain a formal expression for the
unique differential rotation law of an unstable $r$-mode in
terms of the first-order perturbations and second-order contributions
that will turn out to be of higher-order in $\Omega$.
Up to this point in the paper, the analysis holds
for rapidly rotating stars. In Sect.~\ref{s:SlowRotation}, we
specialize to a slowly rotating background, keeping terms of lowest
nonvanishing order in $\Omega$ and thereby obtaining an explicit
formula for the radiation-reaction induced differential rotation.
Finally, a discussion section briefly comments on the validity of
the results for an accreting neutron star, when one includes magnetic fields,
nonzero initial data for other modes, and viscosity.
Our notation for fluid perturbations is chosen to make explicit the
orders of the expansions in the amplitude $\alpha$ and angular
velocity $\Omega$. The notation is
defined as it is introduced in Secs. II and III, but, for easy
reference, we also provide a table that summarizes the notation
in Appendix~\ref{s:Notation}. We use gravitational units, setting $G=c=1$.
\section{Newtonian Stellar Models}
\label{s:newtonian}
Let $Q=\{\rho, v^a,p,\Phi\}$
denote the collection of fields that
determine the state of the fluid in a self-gravitating Newtonian
stellar model. The quantity $\rho$ represents the mass density, $v^a$
the fluid velocity, $p$ the pressure, and $\Phi$ the gravitational
potential. For a barotropic equation of state $p=p(\rho)$,
the specific enthalpy $h$ of the fluid is
\be
h= \int_0^p \frac{dp}{\rho},
\ee
and we define a potential $U$ by
\be
U= h + \Phi.
\ee
The evolution of the fluid is determined by Euler's equation,
the mass-conservation law, and the Poisson equation for the Newtonian gravitational
potential. These equations may be written as
\bea
E^a&\equiv& \partial_tv^a + v^b\nabla_bv^a + \nabla^a U=f^a_{GR},
\label{e:FullEulerEquation}\\
0&=&\partial_t\rho + \nabla_a(\rho v^a),\\
\nabla^2\Phi &=& 4\pi\rho.
\label{e:GravPotentialEq}
\eea
The version of the Euler equation that we use,
Eq.~(\ref{e:FullEulerEquation}), includes $\vec f_{GR}$, the
post-Newtonian gravitational radiation-reaction force (per unit mass).
This force plays a central role in the nonlinear evolution of the
$r$-modes that is the primary focus of our paper. It is
given by
\bea
&&
\!\!\!\!\!
\vec f_{GR}=\sum_{\ell\geq 2}\sum_{|m|\leq \ell} \frac{(-1)^{\ell+1}N_\ell}{32\pi}
\,\Re\Biggl\{
\frac{\vec \nabla(r^\ell Y^{\ell m})}{\sqrt{\ell}}
\frac{d^{\,2\ell+1}I^{\ell m}}{dt^{\,2\ell+1}}
\nonumber\\
&&\!\!\!\!\!
-
\frac{2r^\ell\vec Y^{\ell m}_{B}}{\sqrt{\ell+1}}
\frac{d^{\,2\ell+2}S^{\ell m}}{dt^{\,2\ell+2}}
-
\frac{2\vec v\times \vec \nabla(r^\ell Y^{\ell m})}{\sqrt{\ell}}
\frac{d^{\,2\ell+1}S^{\ell m}}{dt^{\,2\ell+1}}\Biggr\},\quad
\label{e:GRForceDef}
\eea
where $\Re (Z)$ denotes the real part of a complex quantity $Z$. The
quantities $I^{\ell m}$ and $S^{\ell m}$ are the complex mass and
current multiple moments of the fluid source
(cf. Thorne~\cite{Thorne1980} Eqs.~5.18a,b) defined by,
\bea
I^{\ell m}&=&
\frac{N_\ell}{\sqrt{\ell}}
\int \rho\, r^\ell Y^{*\ell m} d^3 x,\\
S^{\ell m}&=&\frac{2N_\ell}{\sqrt{\ell+1}}
\int \rho \,r^\ell \vec v \cdot \vec Y^{*\ell m}_{B} d^3x,
\eea
with $N_\ell$ the constant
\bea
N_\ell = \frac{16\pi}{(2\ell+1)!!}
\sqrt{\frac{(\ell+2)(\ell+1)}{2(\ell-1)}}.
\label{e:NlDef}
\eea
The functions $Y^{\ell m}$ are the standard spherical harmonics,
while the $\vec Y^{\ell m}_B$ are the magnetic-type vector
harmonics defined by
\bea
\vec Y^{\ell m}_B = \frac{\vec r\times \vec\nabla Y^{\ell m}}{\sqrt{\ell(\ell+1)}}.
\eea
We use the normalizations $1=\int |Y^{\ell m}|^2 d\cos\theta d\phi$
and $1=\int |\vec Y^{\ell m}_B|^2 d\cos\theta d\phi$ for these
spherical harmonics. In Cartesian coordinates $\vec r$ is given by
$\vec r =(x,y,z)$. We point out that this expression for the gravitational
radiation-reaction force, Eq.~(\ref{e:GRForceDef}), agrees with the
mass-multipole part of the force given by Ipser and
Lindblom~\cite{Ipser1991}. It also agrees with the current-multipole
part of the force given by Lindblom, {\it et al.}~\cite{Lindblom2001}
(following Blanchet~\cite{Blanchet1997} and Rezzolla, {\it et
al.}~\cite{Rezzolla1999}) for the $\ell=2$ and $m=2$ case. The general
form of the force given in Eq.~(\ref{e:GRForceDef}), however, is new.
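The unit normalization of the magnetic-type vector harmonics stated above can be checked numerically: since $\vec Y^{\ell m}_B\propto \vec r\times\vec\nabla Y^{\ell m}$, its squared magnitude is $\bigl[|\partial_\theta Y|^2+|\partial_\phi Y|^2/\sin^2\theta\bigr]/\ell(\ell+1)$. A check for $\ell=2$, $m=1$, with the polar part of $Y_2^1$ written explicitly (an illustration, not part of the derivation):

```python
import numpy as np

l, m = 2, 1
th = np.linspace(1e-4, np.pi - 1e-4, 20001)
dth = th[1] - th[0]

# polar part of Y_2^1 (Condon-Shortley); Y^{lm} = Theta(th) e^{i m phi}
Theta = -0.5 * np.sqrt(15 / (2 * np.pi)) * np.sin(th) * np.cos(th)
dTheta = np.gradient(Theta, th)

# the phi integral of |e^{i m phi}|^2 contributes a factor 2*pi
norm_Y = 2 * np.pi * np.sum(Theta**2 * np.sin(th)) * dth
norm_YB = (2 * np.pi * np.sum(
    (dTheta**2 + m**2 * Theta**2 / np.sin(th)**2) * np.sin(th)) * dth
    / (l * (l + 1)))
print(norm_Y, norm_YB)    # both should be close to 1
```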
The post-Newtonian radiation-reaction force is gauge dependent, so the
expression for it is not unique. We derived the expression for
the force given in Eq.~(\ref{e:GRForceDef}) by
requiring that the time-averaged (over several
oscillation periods) power $\langle\!\langle
dE/dt\rangle\!\rangle|_{GR}$ (which is gauge invariant) and angular
momentum flux $\langle\!\langle d\vec J/dt\rangle\!\rangle|_{GR}$
lost to gravitational waves agree with the standard
post-Newtonian expressions, cf. Thorne~\cite{Thorne1980}. We
present expressions for these flux quantities in
Appendix~\ref{s:RadiationReaction} that are equivalent to, but are
somewhat simpler than the standard ones.
We consider small perturbations of rigidly rotating, axisymmetric,
barotropic equilibrium models (models with a barotropic equation
of state). The fluid velocity in these equilibria is
denoted
\be
\vec v = \Omega\,\vec\phi,
\label{e:EquilibriumV}
\ee
where $\vec\phi$ generates rotations about the $z$ axis; in Cartesian
coordinates, $\vec\phi =(-y,x,0)$. For barotropic equilibria,
Euler's equation reduces to
\be
0= \nabla_a(h+\Phi -{\scriptstyle\frac{1}{2}} \varpi^2\Omega^2),
\label{e:EquilibriumEuler}
\ee
where $h$ is the specific enthalpy of the fluid and $\varpi$ is the cylindrical
radial coordinate, $\varpi^2=x^2+y^2$.
The surface of the star is the boundary where the
pressure and the enthalpy vanish: $p=h=0$.
\section{Perturbed Stellar Models}
\label{s:Perturbed Stellar Models}
We denote by $Q(\alpha,t,\vec x)$ a one-parameter family of stellar
models. For each value of the parameter $\alpha$, $Q(\alpha,t,\vec x)$
satisfies the full nonlinear time-dependent
Eqs.~(\ref{e:FullEulerEquation})--(\ref{e:GravPotentialEq}).
We assume that the model with $\alpha=0$ is an axisymmetric
equilibrium model, as described in
Eqs.~(\ref{e:EquilibriumV})--(\ref{e:EquilibriumEuler}).
The exact perturbation $\delta Q$, the difference between
$Q(\alpha)$ and $Q(0)$, is defined on the intersection of
the domains where $Q(\alpha)$ and $Q(0)$ are defined:
\be
\delta Q(\alpha, t,\vec x)\equiv Q(\alpha,t,\vec x)-Q(0,t,\vec x).
\ee
It is also useful to define $\delta^{(n)} Q$, the derivatives
of the one-parameter family $Q(\alpha)$ evaluated at the
unperturbed stellar model, where $\alpha=0$:
\be
\delta^{(n)} Q(t,\vec x) = \frac{1}{n!}\frac{\partial^{\,n}\,
Q(\alpha,t,\vec x)}
{\partial\alpha^n}\biggr|_{\alpha=0}.
\label{e:deltaN Q Def}
\ee
These derivatives can be used to define a formal power series expansion
for $\delta Q$:
\be
\delta Q(\alpha, t,\vec x)= \alpha\, \delta^{(1)} Q(t,\vec x) + \alpha^2\, \delta^{(2)}
Q(t,\vec x) + {\cal O}(\alpha^3).
\label{e:delta Q}
\ee
Each point in
the interior of the unperturbed star is, for sufficiently small $\alpha$,
in the interior of the perturbed star; the derivatives
$\delta^{(n)} Q$ defined in Eq.~(\ref{e:deltaN Q Def})
and the formal power series expansion in Eq.~(\ref{e:delta Q})
are thus well-defined at all points of the interior of
the unperturbed star, but may diverge at the surface.
We consider constant-mass sequences of stellar models, i.e., models
whose exact mass perturbations, $\delta M = M(\alpha)- M(\alpha=0)$,
vanish identically for all values of
$\alpha$. The integrals of the $n^\mathrm{th}$-order density
perturbations therefore vanish identically for these models:
\be
0=\frac{1}{n!}\left.\frac{d^{\,n} M(\alpha)}{d\alpha^n}\right|_{\alpha=0} =
\int \delta^{(n)} \rho\, \sqrt{g}\,d^{\,3}x.
\label{e:MassConservationIntegral}
\ee
The exact (to all orders in the perturbation parameter
$\alpha$) perturbed evolution equations for these stellar
models can be written in the form
\bea
\delta E^a &=& (\partial_t + \Omega \Lie_\phi) \delta v^a
+ 2 \Omega \delta v^b \nabla_b\phi^a
+\nabla^a\delta U\nonumber\\
&&\qquad\qquad\qquad\,\,
+ \,\,\delta v^b\nabla_b\delta v^a = \delta f_{GR}^a,
\label{e:PerturbedEulerExact}\\
0&=&(\partial_t + \Omega \Lie_\phi)\delta\rho
+\nabla_a(\rho\,\delta v^a + \delta \rho\,\delta v^a),
\label{e:PerturbedMassConsExact}\qquad\\
\nabla^2\delta\Phi &=&4\pi\delta\rho,
\label{e:PerturbedGravPotentialEqExact}
\eea
where $\Lie_\phi$ is the Lie derivative along the vector field $\vec \phi$,
and $\rho$ is the density of the unperturbed star.
The exact perturbed gravitational radiation-reaction force $\delta \vec f_{GR}$
that appears in Eq.~(\ref{e:PerturbedEulerExact}) is given by
\bea
&&
\!\!\!
\delta \vec f_{GR}=\sum_{\ell\geq 2}\sum_{|m|\leq \ell} \frac{(-1)^{\ell+1}N_\ell}{32\pi}
\,\Re\Biggl\{
\frac{\vec \nabla(r^\ell Y^{\ell m})}{\sqrt{\ell}}
\frac{d^{\,2\ell+1}\delta I^{\ell m}}{dt^{\,2\ell+1}}
\nonumber\\
&
-
\frac{2r^\ell\vec Y^{\ell m}_{B}}{\sqrt{\ell+1}}
\frac{d^{\,2\ell+2}\delta S^{\ell m}}{dt^{\,2\ell+2}}
-
\frac{2\Omega\vec\phi
\times \vec \nabla(r^\ell Y^{\ell m})}{\sqrt{\ell}}
\frac{d^{\,2\ell+1}\delta S^{\ell m}}{dt^{\,2\ell+1}}\nonumber\\
&&
-
\frac{2\delta\vec v
\times \vec \nabla(r^\ell Y^{\ell m})}{\sqrt{\ell}}
\frac{d^{\,2\ell+1}\delta S^{\ell m}}{dt^{\,2\ell+1}}
\Biggr\},
\label{e:PerturbedGRForceExact}
\eea
where
\bea
\delta I^{\ell m}&=&
\frac{N_\ell}{\sqrt{\ell}}
\int \delta \rho\, r^\ell Y^{*\ell m} d^3 x,
\label{e:PerturbedMassMultipole}\\
\delta S^{\ell m}&=&\frac{2N_\ell}{\sqrt{\ell+1}}
\int r^\ell \left[\rho\,\delta \vec v+\delta \rho \,
\left(\Omega\vec\phi
+\delta \vec v\right)\right]\cdot \vec Y^{*\ell m}_{B} d^3x.
\label{e:PerturbedCurrentMultipole}
\eea
It is convenient to decompose the perturbations
$\delta Q$ into parts $\delta_N Q$ that satisfy the pure Newtonian
evolution equations, and parts $\delta_R Q$ caused by the addition of
the gravitational radiation-reaction force. In particular the
non-radiative stellar perturbations $\delta_N Q$ satisfy the perturbed
Euler equation:
\be
\delta \vec E = 0.
\ee
When the effects of gravitational radiation-reaction are included, the
complete perturbation, $\delta Q$, satisfies the Euler equation
driven by the gravitational radiation-reaction force
\be
\delta \vec E = \delta \vec f_{GR}.
\ee
\subsection{First Order Perturbations}
\label{s:FirstOrderPerturbations}
The classical first-order (in powers of $\alpha$) $r$-modes have
angular and temporal dependence \cite{PP78,fsbook}
\bea \delta^{(1)}_N \rho &=& \delta^{(1)}_N \hat\rho_- \,\sin \psi_N,
\label{e:deltaINRrho}\\
\delta^{(1)}_N v^a &=& \varpi^{-2}\phi^a\phi_b\delta^{(1)}_N \hat v^b_+\,
\sin \psi_N +P^{\,a}{}_b
\delta^{(1)}_N \hat v^b_+\, \cos \psi_N,\nonumber\\\\
\delta^{(1)}_N U &=& \delta^{(1)}_N \hat U_-\,\sin \psi_N,\\
\delta^{(1)}_N \Phi &=& \delta^{(1)}_N \hat \Phi_-\,\sin \psi_N,
\label{e:deltaINRPhi}
\eea
where $\psi_N=\omega_N t+m\phi$, with $m\neq 0$. The tensor
\be
P^{\,a}{}_b\equiv \delta^a{}_b-\varpi^{-2}\phi^a\phi_b
\label{e:projection}\ee
is the
projection operator orthogonal to $\phi^a$, and $\delta^{(1)}_N \hat
Q=\delta^{(1)}_N \hat Q(\varpi,z)$ depends on the cylindrical coordinates
$\varpi$ and $z$, but not on $\phi$ or $t$. The origin of time has
been chosen to give the perturbations definite parity under the
diffeomorphism $\phi\rightarrow -\phi$ at $t=0$. We use the term
$\phi$-{\it parity} to mean parity under this transformation. The
subscripts $\pm$ indicate that $\delta^{(1)}_N\hat\rho_-$, $\delta^{(1)}_N
\hat U_-$, and $\delta^{(1)}_N \hat\Phi_-$ are parts of odd $\phi$-parity
scalars, while $\delta^{(1)}_N \hat v^a_+$ is part of an even $\phi$-parity
vector field.
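At $t=0$ these parity assignments can be read off directly. The
scalar perturbations in
Eqs.~(\ref{e:deltaINRrho})--(\ref{e:deltaINRPhi}) are proportional to
$\sin(m\phi)$, and under the diffeomorphism $\phi\rightarrow-\phi$,
\be
\sin(m\phi)\rightarrow -\sin(m\phi), \qquad
\cos(m\phi)\rightarrow \cos(m\phi),
\ee
so they are odd. For the velocity, the transformation
$\phi^a\rightarrow-\phi^a$ of the vector must also be taken into
account: the azimuthal term in $\delta^{(1)}_N v^a$ acquires one sign
flip from $\phi^a$ and another from $\sin(m\phi)$, while the poloidal
term, proportional to $\cos(m\phi)$, is unchanged, so the velocity
field built from $\delta^{(1)}_N \hat v^a_+$ is even.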
When gravitational radiation reaction is included, the Euler equation
is altered by the relatively weak radiation-reaction force $\vec f_{GR}$.
The first order radiation-reaction force can be written in the form:
\bea
\delta^{(1)} \vec f_{GR}=\beta\delta^{(1)}_N \vec v_+ + \delta^{(1)}_\perp \vec f_{GR+},
\label{e:GRForceParityEq}
\eea
where $\beta$ is the growth rate of the $r$-mode instability, and
$\delta^{(1)}_\perp \vec f_{GR+}$ is (by definition)
the even $\phi$-parity part of the
radiation-reaction force that is orthogonal to $\delta^{(1)}_N \vec v_+$
and that
therefore does not contribute directly to the energy evolution of the mode.
Equation~(\ref{e:PerturbedGRForceExact}) implies that the odd
$\phi$-parity part of the radiation-reaction force, $\delta^{(1)}_\perp \vec f_{GR-}$,
vanishes when the classical $r$-mode is chosen to have the
$\phi$-parity given in Eqs.~(\ref{e:deltaINRrho})--(\ref{e:deltaINRPhi}).
The gravitational radiation-reaction force causes an instability by
introducing an imaginary part $\beta$ to the frequency of the
mode. The overall structure of the modes is therefore changed in the
following way (schematically):
\bea \delta^{(1)} \rho &=& \left(\delta^{(1)}_N \hat\rho_-+\delta^{(1)}_R \hat\rho_-\right)
\sin\psi \,e^{\beta t}
+\delta^{(1)}_R \hat\rho_+
\cos\psi\, e^{\beta t},
\label{e:deltaIrho}\\
\delta^{(1)} v^a &=& \delta^{(1)}_R \hat v^b_-
\Bigl[\varpi^{-2}\phi^a\phi_b\, \cos\psi+P^{\,a}{}_b
\, \sin\psi\Bigr]e^{\beta t}\nonumber\\
&&+
\left(\delta^{(1)}_N \hat v^b_++\delta^{(1)}_R \hat v^b_+\right)
\times\nonumber\\
&&\quad\Bigl[\varpi^{-2}\phi^a\phi_b\, \sin\psi+P^{\,a}{}_b
\, \cos\psi\Bigr]e^{\beta t},\\
\delta^{(1)} U &=&
\left(\delta^{(1)}_N \hat U_-+\delta^{(1)}_R \hat U_-\right)
\sin\psi\, e^{\beta t}\nonumber\\
&&\quad+\delta^{(1)}_R \hat U_+
\cos\psi\, e^{\beta t},\\
\delta^{(1)} \Phi &=&\left(
\delta^{(1)}_N \hat \Phi_-+\delta^{(1)}_R \hat \Phi_-\right)
\sin\psi\, e^{\beta t}\nonumber\\
&&\quad+\delta^{(1)}_R \hat \Phi_+
\cos\psi\, e^{\beta t},
\label{e:deltaIPhi}
\eea where $\psi=\psi_N+\psi_R=\omega_N t +\omega_R t + m\phi$. The
radiative corrections $\delta^{(1)}_R \hat Q$ are smaller than the
non-radiative perturbations $\delta^{(1)}_N\hat Q$ by terms of order ${\cal
O}(\beta/\omega_N)$. The radiative correction $\omega_R$ to the frequency
is smaller than $\omega_N$ by a term of order ${\cal O}(\beta/\omega_N)^2$,
so we ignore that change here, setting $\psi=\psi_N$.%
\footnote{Friedman and Schutz~\cite{FriedmanSchutz1975}
derive the following general expression for the frequencies of the
modes of Lagrangian systems (including Newtonian fluids with
gravitational radiation-reaction forces):
$0=A(\omega+i\beta)^2-(B+iD)(\omega+i\beta)-C$, where $A$, $B$, $C$
and $D$ are real. The term $D$ vanishes for non-dissipative
Newtonian fluid stars. When $D$ is small, writing
$\omega+i\beta=\omega_N+\delta$ and linearizing in $D$ gives
$\delta = iD\omega_N/(2A\omega_N-B)$, which is purely imaginary at
this order. The real part of the frequency, $\omega$, therefore
differs from the frequency of the non-dissipative $D=0$ system,
$\omega_N$, only by terms of order $D^2$:
$\omega=\omega_N+{\cal O}(D^2)$, while the imaginary part of the
frequency, $\beta$, is proportional to $D$ for a mode with
$\beta_N=0$.}
The radiative corrections to the $r$-mode, $\delta^{(1)}_R Q$, are
determined by substituting
Eqs.~(\ref{e:deltaIrho})--(\ref{e:deltaIPhi}) into the first-order
perturbed mass conservation and Euler equations. After applying the
equations satisfied by the non-radiative parts of the perturbations,
$\delta^{(1)}_N Q$, the resulting system of equations can be divided into
parts proportional to $\sin\psi_N$ and $\cos\psi_N$ respectively, each of
which must vanish separately. The resulting equations can be divided
further into a set that determines $\delta^{(1)}_R \hat\rho_- $, $\delta^{(1)}_R
\hat U_- $, and $\delta^{(1)}_R\hat v^a_+$, and another that determines
$\delta^{(1)}_R \hat \rho_+ $, $\delta^{(1)}_R \hat U_+ $, and $\delta^{(1)}_R\hat
v^a_-$.
The equations that determine the radiative corrections
having the same $\phi$-parity as the classical non-radiative $r$-modes
are then
\bea
&& \!\!\!\!\!
(\omega_N+m\Omega)\,\delta^{(1)}_R \hat\rho_-
+ m\rho\varpi^{-2} \phi_a\,\delta^{(1)}_R\hat v^a_+\nonumber\\
&&\qquad
+\nabla_a\left(\rho P^a{}_b\delta^{(1)}_R \hat v^b_+\right)=0,\quad
\label{e:FirstOrderOddParityRhoEq}\\
&&\!\!\!\!\!
\left[(\omega_N+m\Omega)\phi_a+2\varpi\Omega\nabla_a\varpi\right]
\delta^{(1)}_R\hat v^a_+= - m\,\delta^{(1)}_R \hat U_-,\quad\\
&&\!\!\!\!\!
\left[(\omega_N+m\Omega)P^a{}_b+\frac{2}{\varpi}\Omega\nabla^a\varpi\,\phi_b\right]
\delta^{(1)}_R\hat v^b_+\nonumber\\
&&\qquad= P^{ab}\nabla_b\,\delta^{(1)}_R \hat U_-.
\label{e:FirstOrderEvenParityVelocityEq}
\eea
These equations are homogeneous and are identical to those satisfied
by the classical $r$-modes. The solutions for $\delta^{(1)}_R \hat \rho_-
$, $\delta^{(1)}_R \hat U_- $, and $\delta^{(1)}_R\hat v^a_+$ are therefore
proportional to the classical $r$-modes: $\delta^{(1)}_N \hat \rho_- $,
$\delta^{(1)}_N \hat U_- $, and $\delta^{(1)}_N\hat v^a_+$. The effect of
adding these radiative corrections to the classical $r$-mode is
simply to re-scale its amplitude. We choose to keep the amplitude,
$\alpha$, of the mode fixed, and therefore without loss of generality
we set
\bea
0=\delta^{(1)}_R\hat \rho_- =\delta^{(1)}_R \hat U_- =\delta^{(1)}_R\hat v^a_+.
\eea
It follows that the first-order radiative corrections have $\phi$-parity
opposite to that of the classical $r$-modes: $\delta^{(1)}_R
\hat\rho=\delta^{(1)}_R\hat\rho_+$, $\delta^{(1)}_R \hat U = \delta^{(1)}_R \hat U_+
$, and $\delta^{(1)}_R\hat v^a = \delta^{(1)}_R\hat v^a_-$. They are determined
by the equations
\bea
&&\!\!\!\!\!
(\omega_N+m\Omega)\,\delta^{(1)}_R\hat \rho + m\rho\varpi^{-2}
\phi_a\,\delta^{(1)}_R\hat v^a
\nonumber\\
&&\qquad-\nabla_a\left(\rho
P^a{}_b\delta^{(1)}_R \hat v^b\right)=\beta\, \delta^{(1)}_N
\rho,
\label{e:FirstOrderEvenParityRhoEq}\\
&&\!\!\!\!\!
\left[(\omega_N+m\Omega)\phi_a-2\varpi\Omega\nabla_a\varpi\right]
\delta^{(1)}_R\hat v^a+ m\,\delta^{(1)}_R \hat U\nonumber\\
&&\qquad=
\phi_b\,\delta^{(1)}_\perp \hat f^b_{GR},\qquad\quad\\
&&\!\!\!\!\!
\left[(\omega_N+m\Omega)P^a{}_b-\frac{2}{\varpi}\Omega\nabla^a\varpi\,\phi_b\right]
\delta^{(1)}_R\hat v^b\nonumber\\
&&\qquad+ P^{ab}\nabla_b\delta^{(1)}_R \hat
U = P^a{}_b\,\delta^{(1)}_\perp \hat f^b_{GR}.
\label{e:FirstOrderOddParityVelocityEq}
\eea
The general solution to the inhomogeneous system,
Eqs.~(\ref{e:FirstOrderEvenParityRhoEq})--(\ref{e:FirstOrderOddParityVelocityEq}),
for $\delta^{(1)}_R\hat\rho$, $\delta^{(1)}_R\hat U$, and $\delta^{(1)}_R\hat v^a$
consists of an arbitrary solution to the homogeneous equations
(obtained by setting $\beta\delta^{(1)}_N\hat\rho =\delta^{(1)}_\perp
f_{GR}^a=0$) plus a particular solution. These homogeneous equations
are identical to
Eqs.~(\ref{e:FirstOrderOddParityRhoEq})--(\ref{e:FirstOrderEvenParityVelocityEq}),
so their general solution is a multiple of the classical $r$-modes.
Because their $\phi$-parity is opposite to that of the classical
$r$-modes, the effect of the homogeneous contributions to
$\delta^{(1)}_R\hat\rho$, $\delta^{(1)}_R\hat U$, and $\delta^{(1)}_R\hat v^a$ is to
change the overall phase of the mode. We choose (by appropriately
adjusting the time that we label $t=0$) to keep this phase unchanged,
and we can therefore, without loss of generality, set to zero the
homogeneous parts of the solutions to
Eqs.~(\ref{e:FirstOrderOddParityRhoEq})--(\ref{e:FirstOrderEvenParityVelocityEq}).
The inhomogeneous terms on the right sides of
Eqs.~(\ref{e:FirstOrderEvenParityRhoEq})--(\ref{e:FirstOrderOddParityVelocityEq}),
$\beta\delta^{(1)}_N\hat\rho$ and $\delta^{(1)}_\perp \hat f_{GR}^a$, are all of
order $\beta$. Thus the particular solution to
Eqs.~(\ref{e:FirstOrderEvenParityRhoEq})--(\ref{e:FirstOrderOddParityVelocityEq})
must also be of order $\beta$. It follows that the
radiation-reaction corrections to the first-order $r$-modes $\delta^{(1)}_R
Q$ are smaller than the classical $r$-modes $\delta^{(1)}_N Q$ by terms of
order ${\cal O}(\beta/\omega)$. To lowest order in $\beta$, therefore,
the corrections to the first-order $r$-modes in
Eqs.~(\ref{e:deltaIrho})--(\ref{e:deltaIPhi}) simply change the
overall scale of the mode by the factor $e^{\beta t}$: $\delta^{(1)} Q =
\delta^{(1)}_N Q\, e^{\beta t}$.
\subsection{Second-Order Perturbations}
\label{s:SecondOrderPerturbations}
The second-order perturbation equations are a sum of terms linear
in $\delta^{(2)} Q$ and terms quadratic in $\delta^{(1)} Q$. For example,
the second-order perturbation of the Euler equation,
$\displaystyle\delta^{(2)} E^a = \left.\frac12\frac {d^2}{d\alpha^2}E^a\right|_{\alpha=0}$,
includes the term $\delta^{(1)} v^b\nabla_b\delta^{(1)} v^a$, which serves as
an effective source term for the second-order perturbations
$\delta^{(2)} v^a$ and $\delta^{(2)} U$. In the absence of gravitational
radiation reaction, it follows that the
second-order Newtonian $r$-mode $\delta^{(2)}_N Q$ is a sum of
terms of three kinds: a term with angular and temporal dependence
$\cos(2\psi_N)$, where $\psi_N=m\phi+\omega_N t$, a term with dependence
$\sin(2\psi_N)$,
and a term that is time independent and axisymmetric.
This time-independent axisymmetric part of the
velocity perturbation can be regarded as differential rotation.
As we have emphasized in the Introduction, the second-order Newtonian $r$-modes are not determined uniquely:
Given a particular solution $\delta^{(2)}_{NP} Q$ to the second-order Newtonian perturbation equations with perturbed velocity field $\delta^{(2)}_{NP} v^a$,
there is a family of solutions $\delta^{(2)}_N Q$ with perturbed velocity field
$\delta^{(2)}_N v^a = \delta^{(2)}_{NP} v^a + \delta^{(2)}_N\Omega(\varpi)\phi^a$, where
$\delta^{(2)}_N\Omega(\varpi)$ is arbitrary.
This degeneracy is broken by
gravitational radiation reaction. The presence of the
radiation-reaction force picks out a unique $\delta^{(2)} v^a$ that
displays the gravitational radiation driven growth of the second-order
$r$-modes: $\delta^{(2)} v^a\propto e^{2\beta t}$.
To find this differential rotation law, one must solve the second-order
axisymmetric perturbation equations with radiation-reaction force for
the axisymmetric parts of the second-order $r$-modes.
Denote the axisymmetric part of a perturbation $\delta Q$ by
$\bigl\langle \delta Q \bigr\rangle$, and denote by $\delta^{(2)}\Omega$
the exponentially growing differential rotation of the unstable $r$-mode:
\be
\delta^{(2)}\Omega \equiv \bigl\langle\delta^{(2)}_N v^\phi\bigr\rangle e^{2\beta t}
= [\bigl\langle\delta^{(2)}_{NP} v^\phi\bigr\rangle
+ \delta^{(2)}_N\Omega(\varpi)] e^{2\beta t}.
\label{e:Omega_decomp}\ee
Without solving the full system, however, one can obtain
a formal expression for $\delta^{(2)}\Omega$ in terms of the
known first-order perturbation together with
other parts of the second-order axisymmetric perturbation.
As we will see in the next section, this expression is all that is
needed to find $\delta^{(2)}\Omega$ to lowest nonvanishing
order in $\Omega$: The other parts of the second-order perturbation
give only higher-order contributions. Finding this formal expression
for $\delta^{(2)}\Omega$ and showing that it is
unique are the goals of the present section.
We now turn our attention to solving the perturbation equations for
the axisymmetric parts of the second-order $r$-modes. The axisymmetric parts of the second-order
perturbations can be written in terms of their radiative and
non-radiative pieces:
\begin{subequations}
\bea
\bigl\langle\delta^{(2)} \rho \bigr\rangle&=& \Bigl(\bigl\langle \delta^{(2)}_N \rho\bigr\rangle
+\bigl\langle \delta^{(2)}_R \rho\bigr\rangle\Bigr)e^{2\beta t},\\
\bigl\langle\delta^{(2)} v^a \bigr\rangle&=&
\Bigl(\bigl\langle \delta^{(2)}_N v^a\bigr\rangle
+\bigl\langle \delta^{(2)}_R v^a\bigr\rangle\Bigr)e^{2\beta t},\\
\bigl\langle\delta^{(2)} U \bigr\rangle&=& \Bigl(\bigl\langle \delta^{(2)}_N U\bigr\rangle
+\bigl\langle \delta^{(2)}_R U\bigr\rangle\Bigr)e^{2\beta t},\\
\bigl\langle\delta^{(2)} \Phi \bigr\rangle&=&
\Bigl(\bigl\langle \delta^{(2)}_N \Phi\bigr\rangle
+\bigl\langle \delta^{(2)}_R \Phi\bigr\rangle\Bigr)e^{2\beta t},\\
\bigl\langle\delta^{(2)} f^a_{GR} \bigr\rangle&=&
\bigl\langle \delta^{(2)}_R f^a_{GR}\bigr\rangle e^{2\beta t}.
\eea\end{subequations}
These quantities are determined by the second-order axisymmetric parts
of the perturbed stellar evolution equations:
\bea
&&
\!\!\!\!\!\!
2\beta\bigl\langle \delta^{(2)} v^a\bigr\rangle
+ 2\Omega\bigl\langle\delta^{(2)} v^b\bigr\rangle\nabla_b\phi^a
+\nabla^a\bigl\langle\delta^{(2)} U\bigr\rangle
\nonumber\\
&&\qquad\qquad\qquad = \bigl\langle \delta^{(2)}\! f^a_{GR}\bigr\rangle
-\bigl\langle\delta^{(1)} v^b\,\nabla_b\delta^{(1)} v^a\bigr\rangle,
\label{e:PerturbedEulerII}\qquad\\
&&
\!\!\!\!\!\!
2\beta\bigl\langle\delta^{(2)} \rho\bigr\rangle + \nabla_a\Bigl[
\rho \bigl\langle\delta^{(2)} v^a\bigr\rangle + \bigl\langle \delta^{(1)} \rho\, \delta^{(1)} v^a\bigr\rangle
\Bigr] = 0,
\label{e:PerturbedMassConsII}\\
&&
\!\!\!\!\!\!
\nabla^2 \bigl\langle\delta^{(2)}\Phi\bigr\rangle = 4\pi \bigl\langle\delta^{(2)}
\rho\bigr\rangle.
\label{e:PerturbedPoissonII}\eea
The uniqueness of the second-order differential rotation $\delta^{(2)}\Omega$
can be seen as follows. Let $\langle\delta^{(2)} Q\rangle$ and
$\langle\delta^{(2)} \widetilde Q\rangle$ be two solutions to the
second-order perturbation equations, Eqs.~(\ref{e:PerturbedEulerII}),
(\ref{e:PerturbedMassConsII}), and (\ref{e:PerturbedPoissonII}),
associated with the same time-dependence $e^{2\beta t}$ and with the same
first-order solution $\delta^{(1)} Q$. The difference
$\langle\delta^{(2)} Q\rangle - \langle\delta^{(2)} \widetilde Q\rangle$
of the two solutions then satisfies the {\it linearized} Poisson equation and
the {\it linearized} Euler and mass conservation equations obtained by
setting to zero the terms involving $\delta^{(1)} v^a$ and $\delta^{(2)} f^a_{GR}$
in Eqs.~(\ref{e:PerturbedEulerII}) and (\ref{e:PerturbedMassConsII}).
That is, $(\langle\delta^{(2)} Q\rangle - \langle\delta^{(2)} \widetilde Q\rangle)e^{2\beta t}$
is an axisymmetric solution to the first-order Newtonian perturbation equations.
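Explicitly, writing $\Delta Q \equiv \langle\delta^{(2)} Q\rangle
- \langle\delta^{(2)} \widetilde Q\rangle$, these difference fields
satisfy
\bea
2\beta\,\Delta v^a + 2\Omega\,\Delta v^b\nabla_b\phi^a
+ \nabla^a\Delta U &=& 0,\nonumber\\
2\beta\,\Delta\rho + \nabla_a\bigl(\rho\,\Delta v^a\bigr) &=& 0,\nonumber\\
\nabla^2\Delta\Phi - 4\pi\,\Delta\rho &=& 0,
\eea
the axisymmetric Newtonian perturbation equations with the purely
growing time dependence $e^{2\beta t}$.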
But the Newtonian star has no such solution, no mode with growth rate $2\beta$.
Thus $(\langle\delta^{(2)} Q\rangle - \langle\delta^{(2)} \widetilde Q\rangle)e^{2\beta t}=0$,
implying that $\delta^{(2)}\Omega$ is unique.
(Note, however, that the decomposition (\ref{e:Omega_decomp}) is not unique:
The arbitrariness in the differential rotation of the Newtonian $r$-mode
means that one is free to add to $\bigl\langle\delta^{(2)}_{NP}v^\phi\bigr\rangle$
an arbitrary function $f(\varpi)$ if one simultaneously changes $\delta^{(2)}_N\Omega(\varpi)$ to $\delta^{(2)}_N\Omega(\varpi) - f(\varpi)$.)
We now obtain equations for $\delta^{(2)}_N Q$ and $\delta^{(2)}_R Q$.
Keeping terms to first order in $\beta$, the terms quadratic in
first-order perturbed quantities that appear in
Eqs.~(\ref{e:PerturbedEulerII}) and (\ref{e:PerturbedMassConsII}) have
the forms,
\bea
\bigl\langle\delta^{(1)} v^b\nabla_b\delta^{(1)} v^a\bigr\rangle &=&
\left(\bigl\langle\delta^{(1)}_N v^b\nabla_b
\delta^{(1)}_N v^a\bigr\rangle\right.\nonumber\\
&&\qquad\qquad\left.
+\beta\,\bigl\langle\delta^{(2)}_R V^a\bigr\rangle\right) e^{2\beta t},\\
\bigl\langle\delta^{(1)} \rho\, \delta^{(1)} v^a \bigr\rangle &=&
\left(\bigl\langle\delta^{(1)}_N \rho\,
\delta^{(1)}_N v^a \bigr\rangle
+\beta\, \bigl\langle\delta^{(2)}_R W^a\bigr\rangle\right) e^{2\beta t},\nonumber\\
\eea
where
\bea
\beta \bigl\langle\delta^{(2)}_R V^a\bigr\rangle&=&
\bigl\langle\delta^{(1)}_R v^b\nabla_b
\delta^{(1)}_N v^a\bigr\rangle
+\bigl\langle\delta^{(1)}_N v^b\nabla_b\delta^{(1)}_R v^a\bigr\rangle,
\label{e:Vdef}\qquad\\
\beta \bigl\langle\delta^{(2)}_R W^a\bigr\rangle&=&
\bigl\langle\delta^{(1)}_R \rho\,
\delta^{(1)}_N v^a \bigr\rangle + \bigl\langle\delta^{(1)}_N \rho\,
\delta^{(1)}_R v^a \bigr\rangle .\label{e:Wdef}
\eea
The non-radiative parts $\langle\delta^{(2)}_N Q\rangle$ of the perturbations
are determined, up to a perturbation that adds differential rotation
$\delta^{(2)}_N\Omega(\varpi)$,
by the axisymmetric parts of the Newtonian Euler and
mass-conservation equations:
\bea
&&
\!\!\!\!
2\Omega\bigl\langle\delta^{(2)}_N v^b\bigr\rangle\nabla_b\phi^a
+\nabla^a\bigl\langle\delta^{(2)}_N U\bigr\rangle=
-\bigl\langle\delta^{(1)}_N v^b\,\nabla_b\delta^{(1)}_N v^a\bigr\rangle,
\label{e:NRPerturbedEulerII}\nonumber\\\\
&&
\qquad\quad
\nabla_a\Bigl[
\rho\, \bigl\langle\delta^{(2)}_N v^a\bigr\rangle + \bigl\langle \delta^{(1)}_N\! \rho\,\,
\delta^{(1)}_N v^a\bigr\rangle
\Bigr] = 0.
\label{e:NRPerturbedMassConsII}
\eea
Given a particular solution $\delta^{(2)}_{NP} Q$ to these equations, we
want to find the remaining contribution $\delta^{(2)}_N\Omega(\varpi)$
to the differential rotation of Eq.~(\ref{e:Omega_decomp})
that is picked out by the radiation-reaction.
We define the radiative part of the perturbation,
$\bigl\langle\delta^{(2)}_R Q\bigr\rangle$,
by requiring that it be created
entirely by the radiation-reaction force; $\bigl\langle\delta^{(2)}_R Q\bigr\rangle$ is therefore proportional
to the radiation-reaction growth rate $\beta$. When $\langle\delta^{(2)}_N Q\rangle$ satisfies
the Newtonian equations (\ref{e:NRPerturbedEulerII}) and (\ref{e:NRPerturbedMassConsII}), the axisymmetric parts of the
full perturbed Euler and mass-conservation equations with radiation-reaction
have at ${\cal O}(\beta)$ the form
\bea
&&
2\beta\bigl\langle\delta^{(2)}_N v^a\bigr\rangle
+ 2\Omega \bigl\langle \delta^{(2)}_R v^b\bigr\rangle
\nabla_b\phi^a + \nabla^a\bigl\langle\delta^{(2)}_R U\bigr\rangle
\nonumber\\
&&\qquad\qquad\qquad\qquad =\bigl\langle \delta^{(2)}_R\! f_{GR}^{\,a}\bigr\rangle
-\beta\, \bigl\langle\delta^{(2)}_R V^a\bigr\rangle,
\label{e:SecondOrderPerturbedEuler}\\
&&\nabla_a\Bigl(\rho\,\bigl\langle\delta^{(2)}_R v^a \bigr\rangle\Bigr)
= -2\beta\bigl\langle \delta^{(2)}_N \rho\bigr\rangle-\beta\,\nabla_a
\bigl\langle\delta^{(2)}_R W^a\bigr\rangle.\qquad
\label{e:SecondOrderPerturbedMassConservation}
\eea
To find an expression for $\delta^{(2)}_N\Omega(\varpi)$, we first write
$\bigl\langle\delta^{(2)}_N v^a\bigr\rangle$ as
$\bigl\langle\delta^{(2)}_{NP} v^a\bigr\rangle + \delta^{(2)}_N\Omega(\varpi)\phi^a$ and move the term involving
$\bigl\langle\delta^{(2)}_{NP} v^a\bigr\rangle$ to the right side of
Eq.~(\ref{e:SecondOrderPerturbedEuler}):
\bea
2\beta\delta^{(2)}_N\Omega(\varpi)\phi^a
+2\Omega \bigl\langle \delta^{(2)}_R v^b\bigr\rangle\nabla_b\phi^a
&+& \nabla^a\bigl\langle\delta^{(2)}_R U\bigr\rangle \nonumber\\
&=&\beta \bigl\langle \delta^{(2)}_R F^a\bigr\rangle,
\label{e:SecondOrderPerturbedEuler1}\qquad
\eea
where
\bea
\beta \bigl\langle \delta^{(2)}_R F^a\bigr\rangle
=\bigl\langle \delta^{(2)}_R\! f_{GR}^{\,a}\bigr\rangle
- 2\beta \bigl\langle\delta^{(2)}_{NP} v^a \bigr\rangle
-\beta\, \bigl\langle\delta^{(2)}_R V^a\bigr\rangle.\quad
\label{e:Fdef}
\eea
We next write the components of the axisymmetric part
of the second-order perturbed Euler
equation, Eq.~(\ref{e:SecondOrderPerturbedEuler1}), in cylindrical coordinates:
\begin{subequations}
\bea
2\beta\varpi\delta^{(2)}_N\Omega(\varpi)
&+&2\Omega\bigl\langle \delta^{(2)}_R v^\varpi\bigr\rangle
=\beta\varpi\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle,
\label{e:SecondOrderEulerVarpi}
\qquad\\
-2\Omega\varpi\bigl\langle\delta^{(2)}_R v^\phi\bigr\rangle
&=& -\partial_\varpi \bigl\langle\delta^{(2)}_R U\bigr\rangle
+\beta\bigl\langle \delta^{(2)}_R F^{\varpi}\bigr\rangle,
\label{e:SecondOrderEulerPhi}\\
0&=&
-\partial_z\bigl\langle\delta^{(2)}_R U\bigr\rangle+\beta\bigl\langle \delta^{(2)}_R F^{z}\bigr\rangle.
\qquad\label{e:SecondOrderEulerZ}
\eea\label{e:SecondOrderEuler}\end{subequations}
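Solving Eq.~(\ref{e:SecondOrderEulerVarpi}) for the radial velocity
gives
\be
\bigl\langle \delta^{(2)}_R v^\varpi \bigr\rangle
= \frac{\beta\varpi}{2\Omega}
\left(\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle
- 2\delta^{(2)}_N\Omega(\varpi)\right),
\ee
and for axisymmetric fields the divergence in
Eq.~(\ref{e:SecondOrderPerturbedMassConservation}) reduces to
$\varpi^{-1}\partial_\varpi\bigl(\varpi\rho\bigl\langle\delta^{(2)}_R
v^\varpi\bigr\rangle\bigr)
+\partial_z\bigl(\rho\bigl\langle\delta^{(2)}_R v^z\bigr\rangle\bigr)$.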
Using Eq.~(\ref{e:SecondOrderEulerVarpi}) to determine $\bigl\langle
\delta^{(2)}_R v^\varpi\bigr\rangle$, the axisymmetric part of
the second-order mass conservation
Eq.~(\ref{e:SecondOrderPerturbedMassConservation}) can be written as
\bea
&&\frac{\beta}{2\Omega\varpi}
\partial_\varpi\left[\rho\varpi^2\left(\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle
-2\delta^{(2)}_N\Omega(\varpi)\right)\right]\nonumber\\ &&\quad
+ \partial_z\left[\rho\bigl\langle \delta^{(2)}_R
v^z\bigr\rangle\right] =
-2\beta\bigl\langle \delta^{(2)}_N \rho\bigr\rangle-\beta\,\nabla_a
\bigl\langle\delta^{(2)}_R W^a\bigr\rangle.
\nonumber\\
\label{e:SecondOrderMassCons}
\eea
The star's surface is defined as the $p=0$ surface. Because
$\delta^{(2)}\rho$ is a derivative evaluated at $\alpha=0$, it has
support on the unperturbed star. While the density perturbation
$\delta^{(2)}\rho$ is not finite for some equations of state at the
surface of the star, it is integrable in the sense that $\delta^{(2)}\int
\rho\,dz$ is finite, as one would expect from the integrability of
the mass-conservation condition in
Eq.~(\ref{e:MassConservationIntegral}). In particular, for polytropes
with fractional polytropic index $0<n<2$, $\delta^{(2)}\rho$ diverges at
$z=z_S$, but, as we show in Appendix \ref{s:surface}, $\delta^{(2)}\int
\rho\,dz$ is finite. Here we
denote by $z_S(\varpi)$ the value of $z$ (the Cartesian coordinate
axis parallel to the rotation axis) at the surface of the
unperturbed star.
We now multiply the second-order mass conservation equation,
Eq.~(\ref{e:SecondOrderMassCons}), by $2\varpi\Omega/\beta$ and
integrate with respect to $z$ over the support of the star. It will
be convenient to extend the domain of integration slightly
beyond the surface of the unperturbed star. Because each integrand has
support on the unperturbed star, we simply take the integrals to
extend from $-\infty$ to $\infty$ instead of
$-z_S$ to $z_S$. We then have
\bea
0&=&4\varpi \Omega\int_{-\infty}^{\infty} dz\bigl\langle \delta^{(2)}_N\rho\bigr\rangle\nonumber\\
&&+\int_{-\infty}^{\infty} dz\partial_\varpi\left[\rho\varpi^2
\left(\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle
-2\delta^{(2)}_N\Omega(\varpi) \right)
\right]\nonumber\\
&&+2\varpi\Omega \int_{-\infty}^{\infty} dz \nabla_a
\bigl\langle\delta^{(2)}_R W^a\bigr\rangle.
\label{e:Integral1}
\eea
The second integral on the right side of Eq.~(\ref{e:Integral1}) can
be re-written as
\bea
&&\int_{-\infty}^{\infty}
dz\partial_\varpi\left[\rho\varpi^2 \left(\bigl\langle \delta^{(2)}_R
F^{\,\phi}\bigr\rangle -2\delta^{(2)}_N \Omega(\varpi)
\right)\right]=\nonumber\\
&&\qquad\qquad\partial_\varpi\int_{-\infty}^{\infty}
dz\rho\varpi^2 \left(\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle
-2\delta^{(2)}_N \Omega(\varpi)\right).\qquad
\label{e:Integral2}
\eea
The expression in Eq.~(\ref{e:Integral1}) can
then be integrated from $\varpi=0$ to $\varpi$, using
Eq.~(\ref{e:Integral2}), to obtain an expression for
$\delta^{(2)}_N\Omega(\varpi) $:
\bea
&&\!\!\!\!\!\!
2\varpi^2 \delta^{(2)}_N \Omega(\varpi) \int_{-\infty}^{\infty} dz\, \rho =
\varpi^2 \int_{-\infty}^{\infty} dz\, \rho\,
\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle\nonumber\\
&&\quad+4\Omega \int_0^\varpi d\varpi' \varpi'
\int_{-\infty}^{\infty} dz\bigl\langle \delta^{(2)}_N\rho\bigr\rangle
\nonumber\\
&&\quad+2\Omega\int_0^\varpi d\varpi' \varpi' \int_{-\infty}^{\infty} dz
\,\nabla_a\bigl\langle\delta^{(2)}_R W^a\bigr\rangle.
\label{e:Integral3}
\eea
Because of the axisymmetry of its integrand, the third term on the
right side of Eq.~(\ref{e:Integral3}) is, up to a factor of $2\pi$,
the volume integral of a divergence. The boundary of the
three-dimensional region of integration has two parts: One is just
outside the surface of the star, where $\delta^{(2)}_R W^a$ vanishes; the
second is the cylinder at constant $\varpi$ from $-z_S$ to $z_S$, with
outward normal $\nabla_a\varpi$ and element of area $\varpi d\phi
dz$. The term is then given by
\begin{align}
\int_0^\varpi d\varpi' \varpi' \int_{-\infty}^{\infty} dz
\,\nabla_a&\bigl\langle\delta^{(2)}_R W^a\bigr\rangle
\nonumber\\
= \varpi \int_{-\infty}^{\infty} dz&\bigl\langle\delta^{(2)}_R W^\varpi\bigr\rangle.\qquad
\end{align}
With this simplification, Eq.~(\ref{e:Integral3})
can be written in the form:
\bea
&&\!\!\!\!\!\!
2\varpi^2 \delta^{(2)}_N \Omega(\varpi) \int_{-\infty}^{\infty} dz\, \rho =
\varpi^2 \int_{-\infty}^{\infty} dz\, \rho\,
\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle\nonumber\\
&&\quad+4\Omega \int_0^\varpi d\varpi' \varpi'
\int_{-\infty}^{\infty} dz\bigl\langle \delta^{(2)}_N\rho\bigr\rangle\nonumber\\
&&
\quad+2\varpi\Omega\int_{-\infty}^{\infty} dz\bigl\langle\delta^{(2)}_R W^\varpi\bigr\rangle.
\label{e:Integral4}
\eea
This provides a formal expression for $\delta^{(2)}_N \Omega(\varpi)$ in terms of the
first-order perturbations that comprise $\bigl\langle \delta^{(2)}_R
F^{\,\phi}\bigr\rangle$ and $\bigl\langle\delta^{(2)}_R W^\varpi\bigr\rangle$
and the second-order perturbation $\bigl\langle \delta^{(2)}_N\rho\bigr\rangle$.
\footnote{
As mentioned above, Appendix \ref{s:surface} shows that
assuming smoothness of the displacement of the surface as a
function of $\alpha$ and $\vec x$ implies integrability of
$\delta^{(2)}_N\rho$. A simpler way to see that the right side of
Eq.~(\ref{e:Integral4}) is finite is to note that smoothness of the displacement of
the surface implies one-sided differentiability of $\delta^{(2)}\vec v$
at the surface.
The perturbed mass conservation equation,
Eq.~(\ref{e:SecondOrderPerturbedMassConservation}),
then implies that the combination
$ 2 \langle\delta^{(2)}_N\rho\rangle + \nabla_a \langle\delta^{(2)}_R W^a\rangle$
is finite at the surface and hence integrable.
This is enough to imply that the expression in Eq.~(\ref{e:Integral4}) for
$\delta^{(2)}_N \Omega(\varpi)$ is finite.
}
Together with $\bigl\langle\delta^{(2)}_{NP} v^\phi\bigr\rangle $,
it determines the differential rotation of the unstable $r$-mode.
We conclude this section with a discussion of two simplifications in
evaluating $\delta^{(2)}_N \Omega(\varpi)$, one from the
fact that we work to first order in the growth rate $\beta$, the
second from the slow-rotation approximation of the next section.
The first is a simplification of the radiation-reaction term in
Eq.~(\ref{e:Integral4}). The integrand of the first term in
Eq.~(\ref{e:Integral4}), $\rho\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle$,
is given by the $\phi$-component of Eq.~(\ref{e:Fdef}):
\be \beta \bigl\langle
\delta^{(2)}_R F^{\,\phi}\bigr\rangle = \bigl\langle \delta^{(2)}_R\!
f_{GR}^{\,\phi}\bigr\rangle - 2\beta \bigl\langle\delta^{(2)}_{NP} v^\phi
\bigr\rangle -\beta\, \bigl\langle\delta^{(2)}_R V^\phi\bigr\rangle.
\label{e:betaF}
\ee
To evaluate $\bigl\langle \delta^{(2)}_R\! f_{GR}^{\,\phi}\bigr\rangle$,
we must find the axisymmetric, second-order, part of the expression
for $\delta\vec f_{GR}$ on the right side of
Eq.~(\ref{e:PerturbedGRForceExact}). Recall that the axisymmetric
parts of any second-order quantity have time dependence $e^{2\beta
t}$. The first three terms in the bracketed expression in
Eq.~(\ref{e:PerturbedGRForceExact}) involve high-order time derivatives of
$\delta^{(2)} I^{\ell 0}$ or $\delta^{(2)} S^{\ell 0}$, and are
therefore proportional to high powers of $\beta$ and can be neglected.
We are left with only the fourth term,
\bea
\big\langle\delta^{(2)}_R \vec f_{GR}\bigr\rangle &=& \frac{(-1)^\ell
N_\ell}{8\pi\sqrt{\ell}}\nonumber\\ &&\times
\Re\left\langle\delta^{(1)}_N \vec v \times \vec\nabla(r^\ell Y^{\ell\ell})
\frac{d^{\,2\ell+1}\delta^{(1)}_N
S^{\ell\ell}}{dt^{\,2\ell+1}}\right\rangle.\nonumber\\
\label{e:fGR}
\eea
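The terms neglected here are smaller by explicit powers of $\beta$:
since the axisymmetric second-order multipoles have time dependence
$e^{2\beta t}$, each time derivative brings down a factor $2\beta$,
\be
\frac{d^{\,2\ell+1}\delta^{(2)} I^{\ell 0}}{dt^{\,2\ell+1}}
= (2\beta)^{2\ell+1}\,\delta^{(2)} I^{\ell 0},
\ee
which for $\ell\geq 2$ is of order $\beta^5$ or higher and is
therefore negligible at first order in $\beta$.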
The second simplification involves the quantities
$\bigl\langle\delta^{(2)}_R V^a\bigr\rangle$ and
$\bigl\langle\delta^{(2)}_R W^a\bigr\rangle$ that appear
in Eq.~(\ref{e:Integral4}). They are defined in
Eqs.~(\ref{e:Vdef}) and (\ref{e:Wdef}). Using the general expressions
for the first order perturbations given in
Eqs.~(\ref{e:deltaIrho})--(\ref{e:deltaIPhi}), we can express these
quantities in terms of the first order perturbations:
\begin{eqnarray}
\beta\bigl\langle\delta^{(2)}_R
W^a\bigr\rangle&=&{\scriptstyle\frac{1}{2}} P^a{}_b\left(\delta^{(1)}_R\hat\rho\,\delta^{(1)}_N\hat v^b
+\delta^{(1)}_N\hat\rho\,\delta^{(1)}_R\hat v^b\right),\quad\\
\beta\bigl\langle\delta^{(2)}_R V^a\bigr\rangle&=&
{\scriptstyle\frac{1}{2}}\varpi^{-2}\phi^a\left[\delta^{(1)}_R\hat v^k
\nabla_k\left(\delta^{(1)}_N\hat v^b\phi_b\right)
\right.\nonumber\\
&&\qquad\qquad
\left.+\delta^{(1)}_N\hat v^k\nabla_k\left(\delta^{(1)}_R\hat v^b\phi_b\right)\right].
\end{eqnarray}
As we will see in the following section, these terms and the term
involving $\delta^{(2)}_N\rho$ in Eq.~(\ref{e:Integral4}) are higher order
in $\Omega$ than the first two terms of Eq.~(\ref{e:betaF})
and can therefore be neglected when evaluating
$\delta^{(2)}_N\Omega(\varpi)$ for slowly rotating stars using
Eq.~(\ref{e:Integral4}).
This fact is essential, because $\delta^{(2)}_N\rho$ itself depends
on $\delta^{(2)}_N\Omega$.
This discussion has been somewhat abstract but quite general. Apart
from assuming the integrability of the perturbed density so that mass
conservation, Eq.~(\ref{e:MassConservationIntegral}), can be enforced,
no assumption has been made up to this point about the particular
equation of state of the matter in these stellar models, nor has any
assumption been made about the magnitude of the angular velocity of
the star. In order to proceed further, however, we will need to
assume that the stellar model is slowly rotating in a suitable sense.
To find an explicit solution for $\delta^{(2)}_N\Omega(\varpi)$, we will also need
to make some choice for the equation of state for the stellar matter.
The slow rotation assumption and its implications are discussed in
Sec.~\ref{s:SlowRotation}, while the complete solution for
$\delta^{(2)}\Omega$, the second-order $r$-mode angular velocity that is
driven by gravitational radiation reaction, is determined in
Sec.~\ref{s:PolytropicStellarModels} for stars composed of
matter with a range of polytropic equations of state.
\section{Slow Rotation Expansion}
\label{s:SlowRotation}
We consider the one-parameter families of stars $Q=Q(\Omega)$ composed
of matter with a fixed equation of state, and having masses that are
independent of the angular velocity: $M(\Omega)=M_0$. The structures
of slowly rotating stellar models in these families are conveniently
written as expansions in the dimensionless angular velocity,
\be
\widetilde\Omega =\frac{\Omega}{\Omega_0},
\ee
where $\Omega_0=\sqrt{M_0/R^3}$, and $M_0$ is the mass and $R$ the radius
of the non-rotating star in the sequence. The slow rotation expansion of these stellar models
is denoted,
\be
Q = \sum_{n=0} Q_n \widetilde\Omega^n = Q_0 + Q_1 \widetilde\Omega + Q_2 \widetilde\Omega^2
+ {\cal O}(\widetilde\Omega^3).
\label{e:Qn}\ee
For equilibrium rotating stars these expansions of the basic fluid
variables have the forms:
\bea
\rho&=&\rho_0+\rho_2\,\widetilde\Omega^2+{\cal O}(\widetilde\Omega^4),\\
v^a&=&\Omega\,\phi^a,\\
p&=&p_0+p_2\,\widetilde\Omega^2+{\cal O}(\widetilde\Omega^4),\\
\Phi&=&\Phi_0+\Phi_2\widetilde\Omega^2+{\cal O}(\widetilde\Omega^4).
\eea
We will represent the perturbations of these stellar models $\delta Q$
as dual expansions in the mode amplitude $\alpha$ and the angular
velocity parameter $\widetilde\Omega$:
\bea
\delta Q = \sum_{n,k} \alpha^n\, \widetilde\Omega^k\, \delta^{(n)} Q_k.
\eea
Our main goal here is to determine to lowest-order in angular velocity
the axisymmetric part of the second-order perturbations of the
$r$-mode angular velocity field $\bigl\langle \delta^{(2)}_R
v^\phi\bigr\rangle $ that is driven by the gravitational-radiation
instability. Doing this requires the explicit slow-rotation forms of
the first and the second-order perturbations. These slow-rotation
expansions are described in the remainder of this section.
\subsection{First Order Perturbations}
\label{s:slowfirstorder}
The effect of the first-order gravitational radiation-reaction force
$\delta^{(1)} \vec f_{GR}$ on the structure
of the classical $r$-mode (beyond its overall effect
on its amplitude) was first studied (for $\ell=2$) by
Dias and S\'a \cite{Sa2005b}. We agree with the results they obtain but will
need to clarify their meaning. We also extend the calculation to general
values of $\ell$.
To first order in mode amplitude $\alpha$ and lowest non-trivial
order in angular velocity $\tilde \Omega$, the classical $r$-modes
with the $\phi$-parity described in Sec.~\ref{s:FirstOrderPerturbations}
can be written in the form
\bea
\delta^{(1)}_N p_1&=&\delta^{(1)}_N \rho_1=\delta^{(1)}_N \Phi_1=0,
\label{e:ClassicalRModeDensity}
\\
\delta^{(1)}_N \vec v_1&=&\Im\left[\frac{R\Omega_0}{\ell}
\left(\frac{r}{R}\right)^{\ell}\!\!\vec r\times
\vec\nabla\left(\sin^\ell\theta e^{i\ell\phi+i\omega t}\right)\right],\nonumber\\
\label{e:ClassicalRModeVelocity}
\eea
where $\Im(Z)$ is the imaginary part of a quantity $Z$.
An equivalent expression
for the classical $r$-mode velocity in terms of vector spherical harmonics is
\bea \delta^{(1)}_N \vec v_1 &=& \Im\left(A_\ell r^\ell \vec Y^{\ell \ell}_B
e^{i\omega t}\right),\\ &=&-\frac{iA_\ell r^\ell}{2}\left[\vec
Y^{\ell\ell}_Be^{i\omega t}-(-1)^\ell \vec Y^{\ell-\ell}_Be^{-i\omega
t}\right],
\eea
where $A_\ell$ is given by
\begin{eqnarray}
A_\ell&=& (-1)^\ell 2^\ell (\ell-1)! \sqrt{\frac{4\pi\ell(\ell+1)}{(2\ell+1)!}}
R^{-\ell+1}\Omega_0.
\end{eqnarray}
The frequencies of these classical $r$-modes have the form
\bea
\omega_N
&=&-\frac{(\ell-1)(\ell+2)}{\ell+1}\Omega +{\cal O}(\Omega^3).
\label{e:RModeFrequency}
\eea
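For orientation, the frequency formula above is simple to tabulate; the following minimal Python sketch (ours, purely illustrative) lists the exact rational values of $\omega_N/\Omega$ for the lowest multipoles:

```python
# Leading-order r-mode frequencies, omega_N / Omega = -(l-1)(l+2)/(l+1),
# tabulated as exact rationals for the lowest multipoles.
from fractions import Fraction

def omega_over_Omega(ell):
    return -Fraction((ell - 1) * (ell + 2), ell + 1)

table = {ell: omega_over_Omega(ell) for ell in (2, 3, 4)}
# The l = 2 mode has omega_N = -(4/3) Omega at this order.
```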
At this order in $\Omega$, the $r$-modes do not affect the fluid variables
$\delta \rho$ and $\delta p$, which are ${\cal O}(\Omega^2)$. Because of this,
the $r$-mode velocity field at order $\Omega$
does not depend on the equation of state.
Four features of the gravitational radiation-reaction force are
important in determining the way it alters each $r$-mode: {\it a)} The
$\phi$-parity of $\delta^{(1)} \vec f_{GR} $, as shown in the last section,
is opposite to that of the classical mode; {\it b)} its magnitude, as
shown below, is dominated by the current multipole
$S^{\ell\ell}$; {\it c)} it can be decomposed in the manner \be
\delta^{(1)} \vec f_{GR} = \beta \delta^{(1)}_N\vec v + \delta^{(1)}_\perp \vec f_{GR},
\label{e:fdecompose}\ee
where the two terms in the decomposition are orthogonal with respect
to a density-weighted inner product, $\int \sqrt{g}\, d^{\,3}x\,
\rho_0\, \delta^{(1)}_N\vec v\,\cdot\, \delta^{(1)}_\perp \vec f_{GR} =0$;
and {\it d)} as we show below, $\delta^{(1)}_\perp \vec f_{GR}$ is a
gradient, $\delta^{(1)}_\perp \vec f_{GR} = \vec\na
\delta^{(1)}_\perp{\cal F}$.
It is straightforward to evaluate the multipole moments of the $r$-modes
using Eqs.~(\ref{e:PerturbedMassMultipole}) and
(\ref{e:PerturbedCurrentMultipole}) and the expressions for the
classical $r$-modes from Eqs.~(\ref{e:ClassicalRModeDensity}) and
(\ref{e:ClassicalRModeVelocity}). The expressions for the
non-vanishing multipole moments of the $r$-modes can be written in the
form
\bea
\!\!\!\!\!\!
\delta^{(1)}_N S^{\ell\ell} &=& (-1)^\ell \delta^{(1)}_N S^{*\ell-\ell}\nonumber\\
&=&-i\frac{A_\ell N_\ell e^{i\omega t} }{\sqrt{\ell+1}}
\int_0^R r^{2\ell+2}\rho_0\, dr.
\label{e:deltaS}\eea
Inserting these expressions into the formula for the
gravitational radiation-reaction force,
Eq.~(\ref{e:PerturbedGRForceExact}), we find
\bea
\!\!\!\!\!\!
\delta^{(1)}_N \vec f_{GR} &=& \frac{(-1)^\ell N_\ell}{8\pi}
\Re\left\{\left[\frac{i\omega}{\sqrt{\ell+1}}
r^\ell \vec Y^{\ell\ell}_B\right.\right. \nonumber\\
&&+\left.\left. \frac{\Omega}{\sqrt{\ell}}
\vec\phi
\times\vec\nabla(r^\ell Y^{\ell\ell})\right]
\frac{d^{2\ell+1}\delta S^{\ell\ell}}{dt^{2\ell+1}}\right\}.
\label{e:FirstOrderRModeGRForce}
\eea
This expression can be re-written as a linear combination
of $r^\ell\vec Y^{\ell\ell}_B$ and $\vec\nabla(r^\ell Y^{\ell\ell})$
using the identity
\bea
\vec\phi\times\vec\nabla(r^\ell Y^{\ell\ell}) =
i\sqrt{\ell(\ell+1)}r^\ell\vec Y^{\ell\ell}_B - z \vec\nabla(r^\ell Y^{\ell\ell}).
\eea
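This identity can be spot-checked numerically. The sketch below (ours) uses $r^\ell Y^{\ell\ell}\propto(x+iy)^\ell$ and assumes the magnetic-type convention $\vec Y^{\ell\ell}_B=\vec r\times\vec\nabla Y^{\ell\ell}/\sqrt{\ell(\ell+1)}$, so the overall normalization constant and the factor $\sqrt{\ell(\ell+1)}$ both cancel:

```python
# Numerical spot-check of
#   phi x grad(r^l Y^{ll}) = i sqrt(l(l+1)) r^l Y_B^{ll} - z grad(r^l Y^{ll}),
# using r^l Y^{ll} = c (x+iy)^l (c drops out of the linear identity) and
# r^l Y_B^{ll} = r x grad(r^l Y^{ll}) / sqrt(l(l+1)).
import numpy as np

def check_identity(ell, point):
    x, y, z = point
    u = x + 1j * y
    grad = ell * u**(ell - 1) * np.array([1.0, 1.0j, 0.0])  # grad (x+iy)^l
    rvec = np.array([x, y, z], dtype=complex)
    phivec = np.array([-y, x, 0.0], dtype=complex)          # phi Killing vector
    lhs = np.cross(phivec, grad)
    rhs = 1j * np.cross(rvec, grad) - z * grad              # sqrt(l(l+1)) cancels
    return np.allclose(lhs, rhs)

ok = all(check_identity(ell, p)
         for ell in (2, 3, 4)
         for p in [(0.3, -0.7, 0.5), (1.1, 0.2, -0.4)])
```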
The resulting expression for $\delta^{(1)}_N \vec f_{GR}$ can
therefore be written in the following way:
\bea
\delta^{(1)} \vec f_{GR} &=& \beta \delta^{(1)}_N\vec v + \delta^{(1)}_\perp \vec f_{GR},
\label{e:FirstOrderGRForce}
\eea
where $\beta$ is given by
\bea
\beta = \frac{N^2_\ell \omega^{2\ell+2}}{4\pi(\ell^2-1)(\ell+2)}
\int_0^R r^{2\ell+2}\rho_0\,dr,
\label{e:BetaDef}
\eea
and where $\delta^{(1)}_\perp \vec f_{GR}$ is defined by
\bea
&&\!\!\!\!\!
\delta^{(1)}_\perp \vec f_{GR}=
-\frac{N^2_\ell \omega^{2\ell+1}\Omega}{8\pi}
\int_0^R r^{2\ell+2}\rho_0\,dr\nonumber\\
&&\qquad\qquad\times
\left\{\frac{\delta^{(1)}_N \vec v}{\ell+1}
+\frac{\Re\left[z A_\ell\vec\nabla(r^\ell Y^{\ell\ell})e^{i\omega t}\right]}
{\sqrt{\ell(\ell+1)}}\right\}
.\qquad
\label{e:F_R_GR_Def}
\eea
This expression for $\delta^{(1)}_\perp \vec f_{GR}$ can be rewritten as
a gradient,
\bea
\delta^{(1)}_\perp \vec f_{GR}&=&\Im\left\{
i\beta A_\ell\frac{\sqrt{\ell(\ell+1)}}2\vec\nabla\left[
r^{\ell+1}\cos\theta\, Y^{\ell\ell}e^{i\omega t}\right]\right\}\nonumber\\
&=:&\vec\nabla \delta^{(1)}_\perp{\cal F}.
\label{e:Fgradient}
\eea
Eqs.~(\ref{e:FirstOrderGRForce}) and (\ref{e:Fgradient}) give the
decomposition of Eq.~(\ref{e:fdecompose}),
and the orthogonality of the two parts,
\be
\int\rho_0 \delta^{(1)}_N \vec v \cdot \delta^{(1)}_\perp \vec f_{GR}
\,\sqrt{g}\,d^{\,3}x = 0, \ee
is implied by the relation
\bea
&&
\displaystyle \int \epsilon^{abc} \nabla_a(\cos\theta\,
Y^{\ell\ell}) \nabla_b r \nabla_c\bar
Y^{\ell\ell}\sqrt g\,d^{\,2}\,x\nonumber\\
&&\qquad = -\int
\epsilon^{abc} \cos\theta\, Y^{\ell\ell}\nabla_b r\nabla_a
\nabla_c\bar Y^{\ell\ell} \sqrt g\,d^{\,2}\,x= 0,\nonumber\\
\eea
where $\sqrt{g}\,d^{\,2}\,x$ is the volume element on the sphere:
$\sqrt{g}\,d^{\,2}\,x\equiv -r^2d\cos\theta\, d\phi$. At this order
in $\Omega$, the density $\rho_0$ plays no role in the orthogonality,
but it is with respect to the density-weighted inner product that the
operators appearing in the perturbed Euler equation are formally
self-adjoint.
It follows that $\delta^{(1)}_\perp \vec f_{GR}$ is the part of the
gravitational radiation-reaction force that does not contribute
directly to the exponential growth of the classical $r$-mode
instability and that the coefficient $\beta$ is the
growth rate of the gravitational radiation driven instability in the
$r$-modes. Substituting into Eq.~(\ref{e:BetaDef}) the expressions for
$N_\ell$ from Eq.~(\ref{e:NlDef}) and the $r$-mode frequency $\omega$
from Eq.~(\ref{e:RModeFrequency}) gives
\bea
\!\!\!\!
\beta= \frac{32\pi\Omega^{2\ell+2}(\ell-1)^{2\ell}}{[(2\ell+1)!!]^2}\!\!
\left(\frac{\ell+2}{\ell+1}\right)^{2\ell+2}\!\!
\int_0^R r^{2\ell+2}\rho_0\,dr,
\label{e:beta}\eea
which agrees with the expression for the gravitational radiation
growth rate of the $r$-mode instability given in Lindblom,
Owen and Morsink~\cite{Lindblom98b}.
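To give a feeling for the size of $\beta$, the sketch below (ours, purely illustrative) evaluates Eq.~(\ref{e:beta}) for a uniform-density star in units $G=c=M_0=R=1$, where the moment integral is analytic; the $\Omega^{2\ell+2}$ scaling makes $\ell=2$ the fastest-growing mode at slow rotation:

```python
# Growth rate beta of the l-th r-mode for a uniform-density star, from the
# slow-rotation formula above, in units G = c = M = R = 1 (so Omega is in
# units of Omega_0 = sqrt(M/R^3)).  For constant rho_0 = 3/(4 pi) the
# moment integral int_0^1 r^(2l+2) rho_0 dr = 3/(4 pi (2l+3)) is analytic.
from math import pi

def double_factorial(n):
    out = 1
    while n > 1:
        out, n = out * n, n - 2
    return out

def beta_uniform(ell, Omega):
    moment = 3.0 / (4.0 * pi * (2 * ell + 3))
    return (32 * pi * Omega**(2 * ell + 2) * (ell - 1)**(2 * ell)
            / double_factorial(2 * ell + 1)**2
            * ((ell + 2) / (ell + 1))**(2 * ell + 2)
            * moment)

# The factor Omega^(2l+2) strongly suppresses higher multipoles at slow
# rotation, so l = 2 dominates the instability growth.
```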
These expressions for the slow rotation limits of the
radiation-reaction force confirm the general expressions,
e.g. Eq.~(\ref{e:GRForceParityEq}), used in our discussion of the
general properties of the first-order $r$-modes in
Sec.~\ref{s:FirstOrderPerturbations}. It follows from that discussion
that the general form of the first-order $r$-mode
velocity, to lowest order in
the angular velocity of the star, is given by
\bea
\delta^{(1)} \vec v = \tilde \Omega\, \delta^{(1)}_N\! \vec v_1\, e^{\beta t}.
\eea
To evaluate $\delta^{(2)}_N\Omega$ using Eq.~(\ref{e:Integral4}), we
need to determine $\delta^{(1)}_R\rho$ and $\delta^{(1)}_R \vec v$, or at least
to show that they are negligibly small compared to other terms in the
equation. We show in the heuristic argument below that
$\delta^{(1)}_R\rho={\cal O}(\beta\Omega) $ and $\delta_R^{(1)} \vec v =
{\cal O}(\beta\Omega^2)$, which will allow us to neglect them in our
slow rotation expansion. A more precise version of the argument is
given in Appendix~\ref{s:ordering_appendix}.
The fact that $\delta^{(1)}_R \vec v$ is
higher-order in $\Omega$ than $\delta^{(1)}_R\rho$ is the reverse of their
relation in the classical $r$-modes. This reversal depends on the
appearance of the gradient $\vec \nabla \delta^{(1)}_\perp{\cal F}$ in the
decomposition of the gravitational radiation-reaction force $\delta^{(1)}_R
\vec f_{GR}$.
The equations that determine $\delta^{(1)}_R Q$,
Eqs.~(\ref{e:FirstOrderEvenParityRhoEq})--(\ref{e:FirstOrderOddParityVelocityEq}),
can be written more compactly as
\bea
(\omega_N+\ell\,\Omega)\,\delta^{(1)}_R\hat \rho +\vec\nabla\cdot\left(\rho\delta^{(1)}_R
\vec{\hat v}\right) &=&\beta\, \delta^{(1)}_N\rho,
\label{e:d1continuity}\\
(\omega_N+\ell\,\Omega)\delta^{(1)}_R\vec{\hat v} +2\Omega\delta^{(1)}_R\vec{\hat v}\cdot
\nabla \vec\phi
&& \nonumber\\
= -\vec\nabla(\delta^{(1)}_R \hat U\!\!&-&\!\!\delta^{(1)}_\perp{\cal F}).
\label{e:d1Euler}
\eea
The value of $\delta_R^{(1)} \vec{\hat v}$ is fixed by
the curl of the perturbed Euler equation, (\ref{e:d1Euler}):
\be
\vec\na\times\left[(\omega_N+\ell\,\Omega)
\delta^{(1)}_R \vec{\hat v} +2\Omega\delta^{(1)}_R \vec{\hat v}\cdot \na\vec \phi\right]=0,
\ee
which involves only $\delta_R^{(1)}\vec{\hat v}$. Its two independent
components give two relations for the three components of
$\delta_R^{(1)}\vec{\hat v}$, in which all coefficients are ${\cal
O}(\Omega)$. All components of $\delta_R^{(1)}\vec{\hat v}$ are
therefore of the same order in $\Omega$. Similarly, the two relations
among $\delta_R^{(1)}U$, $\delta_R^{(1)}\Phi$, and
$\delta_R^{(1)}\rho$ given by the equation of state and the Poisson
equation imply that $\delta_R^{(1)}U$ and $\delta_R^{(1)}\rho$ are of
the same order in $\Omega$. The continuity equation,
(\ref{e:d1continuity}), then implies that $\delta^{(1)}_R \vec v = {\cal
O}(\Omega\delta^{(1)}_R \rho)$. Finally, the $\phi$-component of the
Euler equation gives, to lowest order in $\Omega$,
\be \delta_R^{(1)} U =
\delta^{(1)}_\perp{\cal F}
+ {\cal O}(\Omega^2\delta^{(1)}_R\rho).
\label{e:deltaU}\ee
From its definition in Eq.~(\ref{e:Fgradient}) it follows that
$\delta^{(1)}_\perp{\cal F} ={\cal O}(\Omega\beta)$, which then implies that
$\delta^{(1)}_R\rho= {\cal O}(\beta\Omega)$ and $\delta_R^{(1)} \vec v = {\cal
O}(\beta \Omega^2)$.
Dias and S\'a~\cite{Sa2005b} find, for an $\ell = 2$ perturbation,
a solution $\delta^{(1)}_R \vec v, \delta^{(1)}_R U$ that is a sum of a)
our solution
with $ \delta^{(1)}_R U$ given by Eq.~(\ref{e:deltaU}) and b)
a solution to the homogeneous equations with $\phi$-parity
opposite to that of the Newtonian $r$-mode $\delta^{(1)}_N Q$.
As noted above, adding part b) of their solution
is equivalent to changing the initial phase of the perturbation.
\subsection{Second Order Axisymmetric Perturbations}
\label{s:SecondOrderAxisymmetric}
In computing the quadratic terms that enter the second-order
perturbation equations, it will be useful to have explicit expressions
for the classical $r$-mode $\delta^{(1)}_N v^a_1$ in cylindrical coordinates
$(\varpi,z,\phi)$,
\begin{subequations}
\bea
\!\!\!\!\!\!\!\!
\delta^{(1)}_N v^\varpi_1&=&-\Omega_0\,z \left(\frac{\varpi}{R}\right)^{\ell-1}
\cos(\ell\phi+\omega t),\\
\!\!\!\!\!\!\!\!
\delta^{(1)}_N v^z_1&=&\Omega_0 \,R \left(\frac{\varpi}{R}\right)^{\ell}
\cos(\ell\phi+\omega t),\\
\!\!\!\!\!\!\!\!
\delta^{(1)}_N v^\phi_1&=&
\Omega_0\,\frac{z}{R}\left(\frac{\varpi}{R}\right)^{\ell-2}
\!\!\sin(\ell\phi+\omega t).
\eea\label{e:deltav_cylindrical}\end{subequations}
From these one finds explicit expressions for the cylindrical components
of the quadratic term $\bigl\langle \delta^{(1)}_N v^b_1\nabla_b \delta^{(1)}_N
v^a_1\bigr\rangle$, which appears as a source in the second-order Euler
equation, Eq.~(\ref{e:PerturbedEulerII}):
\begin{subequations}
\bea
&&
\!\!\!\!\!
\bigl\langle\delta^{(1)}_N \vec v_1\cdot\vec\nabla\delta^{(1)}_N v^\varpi_1\bigr\rangle=\nonumber\\
&&\qquad\frac{\Omega_0^2}{2R}\left[2(\ell-1)z^2-\varpi^2\right]
\left(\frac{\varpi}{R}\right)^{2\ell-3},
\qquad\qquad
\\
&&
\!\!\!\!\!
\bigl\langle\delta^{(1)}_N \vec v_1\cdot\vec\nabla\delta^{(1)}_N v^z_1\bigr\rangle=-\ell\Omega_0^2z
\left(\frac{\varpi}{R}\right)^{2\ell-2},\\
&&
\!\!\!\!\!
\bigl\langle\delta^{(1)}_N \vec v_1\cdot\vec\nabla\delta^{(1)}_N v^\phi_1\bigr\rangle =0.
\eea\end{subequations}
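These $\phi$-averages can be verified directly. The following sympy sketch (ours) forms the cylindrical components of $(\vec v\cdot\vec\nabla)\vec v$ for $\ell=2$, treating $v^\phi$ as an angular velocity so the usual centripetal and connection terms appear, and checks the three averages above:

```python
# Sympy check of the phi-averaged quadratic source terms for l = 2.
# With v^phi an angular velocity, the cylindrical components of (v.grad)v are
#   a^varpi = v.d(v^varpi) - varpi (v^phi)^2,
#   a^z     = v.d(v^z),
#   a^phi   = v.d(v^phi) + 2 v^varpi v^phi / varpi.
import sympy as sp

varpi, z, phi, t, R, W0, w = sp.symbols('varpi z phi t R Omega0 omega',
                                        positive=True)
ell = 2
arg = ell * phi + w * t
vw = -W0 * z * (varpi / R)**(ell - 1) * sp.cos(arg)
vz = W0 * R * (varpi / R)**ell * sp.cos(arg)
vp = W0 * (z / R) * (varpi / R)**(ell - 2) * sp.sin(arg)

conv = lambda f: vw * sp.diff(f, varpi) + vz * sp.diff(f, z) + vp * sp.diff(f, phi)
aw = conv(vw) - varpi * vp**2
az = conv(vz)
ap = conv(vp) + 2 * vw * vp / varpi

avg = lambda e: sp.integrate(sp.expand(e), (phi, 0, 2 * sp.pi)) / (2 * sp.pi)

# Residuals against the three displayed averages; all should vanish.
aw_avg = sp.simplify(avg(aw) - W0**2 / (2 * R)
                     * (2 * (ell - 1) * z**2 - varpi**2) * (varpi / R)**(2 * ell - 3))
az_avg = sp.simplify(avg(az) + ell * W0**2 * z * (varpi / R)**(2 * ell - 2))
ap_avg = sp.simplify(avg(ap))
```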
The axisymmetric parts of the non-radiative second-order perturbations
$\bigl\langle\delta^{(2)}_N v^a\bigr\rangle$ and $\bigl\langle\delta^{(2)}_N
U\bigr\rangle$ are determined by solving the perturbed Euler equation,
Eq.~(\ref{e:NRPerturbedEulerII}), and the perturbed mass conservation
equation, Eq.~(\ref{e:NRPerturbedMassConsII}).
The lowest order in angular
velocity contributions to Euler's equation
$0=\bigl\langle\delta^{(2)}_N E_{a}\bigr\rangle$
are given by:
\begin{subequations}\bea
0 = &&
\bigl\langle\delta^{(2)}_N E_\varpi\bigr\rangle =
-2\varpi\Omega_0\bigl\langle\delta^{(2)}_N v^\phi_1\bigr\rangle
+\partial_\varpi\bigl\langle\delta^{(2)}_N U_2\bigr\rangle\nonumber\\
&&\qquad\qquad
+\left[2(\ell-1)z^2-\varpi^2\right]\frac{\Omega_0^2}{2R}
\left(\frac{\varpi}{R}\right)^{2\ell -3},\qquad\quad
\label{e:PerturbedEulerVarpi}\\
0 = &&
\bigl\langle\delta^{(2)}_N
E_z\bigr\rangle = \partial_z\bigl\langle\delta^{(2)}_N U_2\bigr\rangle -\ell z
\Omega_0^2\left(\frac{\varpi}{R}\right)^{2\ell-2},
\label{e:PerturbedEulerZ}\\
0 = &&
\bigl\langle\delta^{(2)}_N
E_\phi\bigr\rangle = 2\varpi\Omega_0 \bigl\langle\delta^{(2)}_N v^\varpi_1
\bigr\rangle.
\label{e:PerturbedEulerPhi}
\eea \end{subequations}
The integrability conditions for these equations,
$\bigl\langle\delta^{(2)}_N E_{a}\bigr\rangle=0$, are given by
$\nabla_{[a}\bigl\langle\delta^{(2)}_N E_{b]}\bigr\rangle=0$. In cylindrical coordinates, the
lowest-order in angular velocity parts of these integrability
conditions are
\begin{subequations}
\bea
0&=&\nabla_{[z}\bigl\langle\delta^{(2)}_N E_{\varpi]}\bigr\rangle=
-\varpi\Omega_0\partial_z\bigl\langle\delta^{(2)}_N v^\phi_1\bigr\rangle\nonumber\\
&&
\qquad\qquad\qquad\quad
+(\ell^2-1)\frac{\Omega_0^2z}{R}\left(\frac{\varpi}{R}\right)^{2\ell-3},\qquad\\
0&=&\nabla_{[z}\bigl\langle\delta^{(2)}_N E_{\phi]}\bigr\rangle=
\Omega_0\partial_z\bigl\langle\delta^{(2)}_N v^\varpi_1\bigr\rangle,\\
0&=&\nabla_{[\varpi}\bigl\langle\delta^{(2)}_N E_{\phi]}\bigr\rangle=
\Omega_0\partial_\varpi\left(\varpi\bigl\langle\delta^{(2)}_N v^\varpi_1\bigr\rangle\right).
\eea\end{subequations}
These conditions, together with the requirement that the solution is
nonsingular on the rotation axis, determine
$\bigl\langle\delta^{(2)}_N v^\varpi_1\bigr\rangle$
and $\bigl\langle\delta^{(2)}_N v^\phi_1\bigr\rangle$, up to the time-independent
differential rotation $\delta^{(2)}_N\Omega(\varpi)$.
As before, we denote a particular choice by $\delta^{(2)}_{NP}v^\phi$:
\bea
\bigl\langle\delta^{(2)}_N v^\varpi_1\bigr\rangle&=&0,
\label{e:deltaIINRVvarpi}\\
\bigl\langle\delta^{(2)}_{NP} v^\phi_1\bigr\rangle
&=&(\ell^2-1)\frac{\Omega_0z^2}{2R^2}\left(\frac{\varpi}{R}\right)^{2\ell-4}.\quad
\label{e:delta2vphi}\eea
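As a check on Eq.~(\ref{e:delta2vphi}), the sketch below (ours) verifies symbolically that this particular solution satisfies the first integrability condition for several values of $\ell$:

```python
# Check that delta2_NP v^phi above satisfies the first integrability
# condition,
#   varpi Omega0 d_z <v^phi> = (l^2 - 1) (Omega0^2 z / R) (varpi/R)^(2l-3),
# spot-checked here at l = 2, 3, 4.
import sympy as sp

varpi, z, R, W0 = sp.symbols('varpi z R Omega0', positive=True)

ok = True
for ell in (2, 3, 4):
    vphi = (ell**2 - 1) * W0 * z**2 / (2 * R**2) * (varpi / R)**(2 * ell - 4)
    lhs = varpi * W0 * sp.diff(vphi, z)
    rhs = (ell**2 - 1) * W0**2 * z / R * (varpi / R)**(2 * ell - 3)
    ok = ok and sp.simplify(lhs - rhs) == 0
```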
The remaining component, $\bigl\langle\delta^{(2)}_N v^z_1\bigr\rangle$, is
determined from the lowest order in angular velocity piece of the
perturbed mass conservation equation [cf. Eq.~(\ref{e:NRPerturbedMassConsII})],
\be
\nabla_a\left(\rho\bigl\langle\delta^{(2)}_N v^a_1\bigr\rangle\right)=0.
\ee
This equation, together with Eq.~(\ref{e:deltaIINRVvarpi}), shows that
the only non-singular solution for $\bigl\langle\delta^{(2)}_N v^z_1\bigr\rangle$ is
\bea
\bigl\langle\delta^{(2)}_N v^z_1\bigr\rangle&=&0.
\eea
The scalar parts of the second order non-radiative $r$-mode,
$\bigl\langle\delta^{(2)}_N \rho\bigr\rangle$ and
$\bigl\langle\delta^{(2)}_N \Phi\bigr\rangle$,
are determined by completing the solution to the perturbed Euler
equation $\bigl\langle\delta^{(2)}_N E_a\bigr\rangle=0$, and then solving the
perturbed gravitational potential equation. The potential
$\bigl\langle\delta^{(2)}_N U\bigr\rangle$ is determined by integrating the
perturbed Euler Eqs.~(\ref{e:PerturbedEulerVarpi}) and
(\ref{e:PerturbedEulerZ}). Using
Eqs.~(\ref{e:Omega_decomp}) and (\ref{e:delta2vphi}) we obtain the
following expression for the axisymmetric part of the solution, to
lowest order in angular velocity,
\bea
\bigl\langle\delta^{(2)}_N U_2\bigr\rangle
&=&\frac{\Omega_0^2R^2}{4\ell}\left(\frac{\varpi}{R}\right)^{2\ell}
+\frac{\ell\,\Omega_0^2 z^2}{2}\left(\frac{\varpi}{R}\right)^{2\ell-2}\nonumber\\
&&+ 2\Omega_0\int^\varpi_0 \varpi'\delta^{(2)}_N\Omega(\varpi')d\varpi'
+ \delta^{(2)}_N C_2,\qquad
\label{e:deltaIIU2}
\eea
where $\delta^{(2)}_N C_2$ is a constant.
The pressure as well as the density perturbations, $\delta^{(2)} p$ and
$\delta^{(2)} \rho$, are related to $\delta^{(2)} U$ as follows,
\bea
\delta^{(2)} U &=&\delta^{(2)}\Phi+\frac{1}{\rho}\delta^{(2)} p
-\frac{1}{2\rho^2}\delta^{(1)} p\, \delta^{(1)} \rho \nonumber\\
&=&\delta^{(2)}\Phi+\frac{\gamma p }{\rho^2}\delta^{(2)} \rho
\nonumber\\
&&\quad+\frac{p}{2\rho^2}\left[\frac{\gamma(\gamma-2)}{\rho}
+\frac{d\gamma}{d\rho}\right](\delta^{(1)} \rho)^2,
\label{e:eosII}\eea
where $\gamma=d\log p/d \log \rho$ is the adiabatic index. For the
$r$-modes, the first-order perturbations $\delta^{(1)} p$ and $\delta^{(1)} \rho$ are ${\cal O}(\Omega^2)$. So at lowest order in angular velocity, the
relation between $\delta^{(2)} U$ and
$\delta^{(2)}\rho$ simplifies to
\bea
\delta^{(2)} U_2 &=&\delta^{(2)}\Phi_2+\frac{\gamma p }{\rho^2}\delta^{(2)} \rho_2.
\eea
The gravitational potential $\delta^{(2)}\Phi$ is determined by solving
the perturbed gravitational potential equation,
\bea
\nabla^2\delta^{(2)}\Phi &=& 4\pi\delta^{(2)}\rho.
\eea
For the $r$-modes, to lowest order in the angular velocity, this
equation may be re-written as
\bea \nabla^2\delta^{(2)}\Phi_2 +\frac{4\pi\rho^2}{\gamma
p_0}\delta^{(2)}\Phi_2 &=&\frac{4\pi\rho^2}{\gamma p_0} \delta^{(2)} U_2.
\eea
Using the expression derived in Eq.~(\ref{e:deltaIIU2}) for the
axisymmetric part of $\delta^{(2)}_N U_2$, we find the general
equation for $\bigl\langle\delta^{(2)}_N \Phi_2\bigr\rangle$:
\bea
&&\nabla^2\bigl\langle\delta^{(2)}_N\Phi_2\bigr\rangle
+\frac{4\pi\rho^2}{\gamma p_0}\bigl\langle\delta^{(2)}_N\Phi_2\bigr\rangle
\nonumber\\
&&\quad=\frac{4\pi\rho^2}{\gamma p_0}\left\{
\frac{\Omega_0^2R^2}{4\ell}\left(\frac{\varpi}{R}\right)^{2\ell}
+\frac{\ell\,\Omega_0^2 z^2}{2}\left(\frac{\varpi}{R}\right)^{2\ell-2}
\right.\nonumber\\
&&\qquad\qquad\left.
+ 2\Omega_0\int^\varpi_0 \varpi'\delta^{(2)}_N\Omega(\varpi')d\varpi'+
\delta^{(2)}_N C_2\right\}.\qquad
\label{e:SecondOrderGravPotenialEq}
\eea
Finally, we use Eq.~(\ref{e:Integral4}) to obtain an explicit formula
for the second-order differential rotation, $\delta^{(2)}_N\Omega(\varpi)$, in terms of the second-order radiation-reaction
force and the second-order velocity perturbation $\delta^{(2)}_N v^a$. Of
the three terms on the right side of that equation, we will see that
the second and third are higher order in $\Omega$ than the first, and
we will evaluate the first term to leading order in $\Omega$.
We first use Eq.~(\ref{e:fGR}) to find an explicit form for the second-order
radiation-reaction force $\bigl\langle \delta^{(2)}_R\! \vec f_{GR}\bigr\rangle$.
From Eqs.~(\ref{e:deltav_cylindrical}) and (\ref{e:deltaS}) for $\delta^{(1)}_N
\vec v$ and $\delta^{(1)}_N S^{\ell\ell}$, we find
\be \bigl\langle \delta^{(2)}_R\! \vec f_{GR}\bigr\rangle =
-\frac{(\ell+1)^2}4 \beta \Omega \left(\frac\varpi
R\right)^{2\ell-2}\vec\phi.
\label{e:fGRSimplified}
\ee
The second term $\delta_N^{(2)}v^\phi$ in Eq.~(\ref{e:betaF}) is given
by Eq.~(\ref{e:delta2vphi}). In the final term, $\delta_R^{(2)} V^\phi$,
by its definition~(\ref{e:Vdef}),
is proportional to a product of components of $\delta_N^{(1)} \vec v$
and $\delta_R^{(1)} \vec v$. By our initial normalization,
$\delta_N^{(1)} \vec v = {\cal O}(\Omega)$,
and we found in Sect.~\ref{s:slowfirstorder} that $\delta_R^{(1)} \vec v$ is
${\cal O}(\Omega\delta_R^{(1)}\vec f_{GR}) = {\cal O}(\beta\Omega^2)$.
From Eqs.~(\ref{e:betaF}), (\ref{e:fGRSimplified}), and
(\ref{e:delta2vphi}), we have
\bea
\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle
&=& -\Omega \left(\frac\varpi R\right)^{2\ell-4}
\left[ \frac{(\ell+1)^2}4 \left(\frac\varpi R\right)^{2} \right.
\nonumber\\
&&\qquad
\qquad\quad\qquad\left. + (\ell^2-1) \left(\frac zR\right)^2\right].\qquad
\label{e:Fphi}\eea
Equation~(\ref{e:Fphi}) implies $\bigl\langle \delta^{(2)}_R
F^{\,\phi}\bigr\rangle ={\cal O}(\Omega)$. The second term in
Eq.~(\ref{e:Integral4}) has integrand proportional to
$\mbox{$\bigl\langle \delta^{(2)}_N\rho\bigr\rangle$}$.
Because $\delta_N^{(2)}\rho = {\cal O}(\Omega^2)$, the integrand
is ${\cal O}(\Omega^2)$, and the term itself is ${\cal O}(\Omega^3)$,
two orders higher than $\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle$.
Finally, the last term in (\ref{e:Integral4}) is proportional to
$\Omega\bigl\langle\delta^{(2)}_R W^a\bigr\rangle$. Eq.~(\ref{e:Wdef})
implies $\bigl\langle\delta^{(2)}_R W^a\bigr\rangle = {\cal O}(\Omega^2)$,
whence the last term is again ${\cal O}(\Omega^3)$.
With the dominant term in Eq.~(\ref{e:Integral4}) determined by
$\bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle$, we have
\be
\hspace{-4mm} \delta^{(2)}_N\Omega(\varpi)
=\frac{\int_{-z_S}^{z_S} dz\, \rho\, \bigl\langle \delta^{(2)}_R F^{\,\phi}\bigr\rangle}
{2\int_{-z_S}^{z_S} dz\, \rho}.
\label{e:d2ROmega}\ee
This integrand can be re-written in a more explicit form using
Eqs.~(\ref{e:Fphi}) and (\ref{e:delta2vphi}):
\bea
\delta^{(2)}_N \Omega(\varpi)&=&
-\Omega\left(\frac\varpi R\right)^{2\ell-4}
\left[ \frac{(\ell+1)^2}8 \left(\frac\varpi R\right)^{2}\right.\nonumber\\
&&\qquad\qquad\qquad\quad \left.
+\frac{(\ell^2-1)}{2}\Upsilon(\varpi)\right],\qquad
\label{e:IIRROmega}
\eea
where $\Upsilon(\varpi)$ is the equation-of-state dependent,
mass-weighted average of $(z/R)^2$,
\bea
\Upsilon(\varpi)&=&\frac{ \int_{-z_S}^{z_S} dz\, \rho z^2}
{ R^2\int_{-z_S}^{z_S} dz\, \rho}.
\label{e:UpsilonDef}
\eea
The limits of integration, $\pm z_S(\varpi)$, in this expression are the
$\varpi$ dependent values of $z$ at the surface of the equilibrium
star. To lowest order in $\Omega$ these limits are the same as those
in a spherical nonrotating star:
\be z_S(\varpi) =
\sqrt{R^2-\varpi^2}. \ee
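The average $\Upsilon(\varpi)$ is straightforward to evaluate numerically from this definition; a minimal sketch (ours; the function name and uniform-grid midpoint quadrature are illustrative choices) is:

```python
# Numerical evaluation of Upsilon(varpi), the mass-weighted average of
# (z/R)^2 at fixed varpi, by midpoint quadrature with R = 1.
# rho(varpi, z) defaults to a uniform profile, for which the exact
# answer is z_S^2 / (3 R^2) with z_S = sqrt(R^2 - varpi^2).
import numpy as np

def upsilon(varpi, R=1.0, n=4000, rho=None):
    z_s = np.sqrt(R**2 - varpi**2)
    edges = np.linspace(-z_s, z_s, n + 1)
    z = 0.5 * (edges[:-1] + edges[1:])          # cell midpoints
    w = np.ones_like(z) if rho is None else rho(varpi, z)
    # The uniform cell width cancels between numerator and denominator.
    return np.sum(w * z * z) / (R**2 * np.sum(w))

# For constant density, upsilon(0.5) -> (1 - 0.25)/3 = 0.25.
```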
The part of the second-order differential rotation that is not
explicitly caused by the radiation-reaction force, $\bigl\langle\delta^{(2)}_{NP} v^\phi_1\bigr\rangle$, is given in Eq.~(\ref{e:delta2vphi}):
\bea
\bigl\langle\delta^{(2)}_{NP} v^\phi\bigr\rangle
&=&(\ell^2-1)\frac{\Omega}{2}\left(\frac{ z}{R}\right)^2\left(\frac{\varpi}{R}\right)^{2\ell-4}.\quad
\label{e:IINROmega}
\eea
Together Eqs.~(\ref{e:IIRROmega}) and (\ref{e:IINROmega}) determine
(to lowest order in $\Omega$) the time-dependent differential-rotation
induced by gravitational-radiation reaction:
\bea
\hspace{-4mm} \delta^{(2)}\Omega
&=& \left[\bigl\langle\delta^{(2)}_{NP} v^\phi\bigr\rangle+\delta^{(2)}_N \Omega(\varpi)
\right]e^{2\beta t}.\qquad
\label{e:d2Omega}\eea
The key result of this section is the derivation of an
explicit expression (\ref{e:d2ROmega}) for
$\delta^{(2)}_N\Omega(\varpi)$ in terms of the first-order $r$-mode.
An expression of this kind exists because the rest of the
second-order perturbation, the perturbed density, pressure,
and potential, are higher-order in $\Omega$. Like the
velocity field of the first-order $r$-mode, the second-order
differential rotation of the unstable $r$-mode can be found
without simultaneously solving for the perturbed density
and pressure.
This separation of orders also leads to an iterative method
for solving the second-order Newtonian perturbation equations at
successive orders in $\Omega$ that mirrors the method we have just used to
determine the axisymmetric parts of $\delta^{(2)}_N v^a$ at ${\cal O}(\Omega)$ and
$\delta^{(2)}_N \rho$, $\delta^{(2)}_N p$, and $\delta^{(2)}_N \Phi$
at ${\cal O}(\Omega^2)$. At each order, the ambiguity in the
Newtonian differential rotation is resolved by using Eq.~(\ref{e:Integral4}).
We assume that the first-order Newtonian perturbation
equations have been solved to the desired order in $\Omega$.
We suppose one has found the perturbed Newtonian velocity
$\delta^{(2)}_N v^a$ to ${\cal O}(\Omega^{2k-1})$ and the scalar quantities
in $\delta^{(2)}_N Q$ to ${\cal O}(\Omega^{2k})$, and we list the steps
to obtain the next-order correction: to find $\delta^{(2)}_N v^a_{2k+1}$
and the scalar quantities to ${\cal O}(\Omega^{2k+2})$.
\begin{enumerate}
\item Because $\delta^{(2)}_N v^a_{2k-1}$ is known,
and the integrability conditions $\nabla_{[a}\delta^{(2)}_N E_{b]}=0$
have an additional power of $\Omega$ in each term, they are satisfied
at ${\cal O}(\Omega^{2k})$. One can then integrate the $\varpi$ or $z$
component of the perturbed Newtonian Euler equation (\ref{e:NRPerturbedEulerII})
to find $\delta^{(2)}_N U_{2k+2}$ up to a constant $\delta^{(2)}_N C_{2k+2}$.
\item Equation~(\ref{e:eosII}) determines
$\delta^{(2)}_N \rho_{2k+2}$ up to the ambiguity associated with
$\delta^{(2)}_N C_{2k+2}$.
The Poisson equation, Eq.~(\ref{e:PerturbedPoissonII}), with the
conditions that $\delta^{(2)}_N\Phi_{2k+2}$ vanish at infinity and have
no monopole part (no change in mass), determines both $\delta^{(2)}_N\Phi_{2k+2}$
and the constant $\delta^{(2)}_N C_{2k+2}$.
\item Equation~(\ref{e:eosII}) (or, alternatively, the Poisson equation) gives
$\delta^{(2)}_N \rho_{2k+2}$, and the equation of state determines $\delta^{(2)}_N p_{2k+2}$.
\item Finally, one uses the known first-order perturbation $\delta^{(1)}_N v^a$ to solve
two independent components of the curl of the Euler equation,
$\delta^{(2)}_N E_{a}=0$ for $\delta^{(2)}_N v^\phi_{2k+1}$ and
$\delta^{(2)}_N v^\varpi_{2k+1}$; $\bigl\langle\delta^{(2)}_N v^\phi_{2k+1}\bigr\rangle$
has an $f(\varpi)$
ambiguity that is resolved by Eq.~(\ref{e:Integral4}).
The final component $\delta^{(2)}_N v^z_{2k+1}$ is found from the second-order
mass-conservation equation.
\end{enumerate}
\subsection{Secular drift of a fluid element}
The differential rotation we have found here extends the work of S\'a and
collaborators~\cite{Sa2004}-\cite{Sa2005b} to the unstable second-order
$r$-mode. The studies of
magnetic field wind-up by Rezzolla, {\it et al.}~\cite{Rezzolla00,Rezzolla01b,Rezzolla01c},
which predated this
work, explicitly omitted the form of the second order perturbation to
the velocity field that we have computed here. These authors obtained
a secular drift $\phi(t)$ in the position of a fluid element by
integrating the $\ell=2$ form of the equations for the position
$\phi(t)$ and $\theta(t)$ of a particle whose perturbed velocity field
is taken solely from the first-order perturbation $\delta^{(1)}_N v^a$ of
Eq.~(\ref{e:ClassicalRModeVelocity}):
\begin{subequations} \bea
\frac{d\theta}{dt} &=& \alpha\delta^{(1)}_N v^\theta[\theta(t),\phi(t)],
\\ \frac{d\phi}{dt} &=& \alpha\delta^{(1)}_N v^\phi[\theta(t),\phi(t)].
\label{e:drift}\eea\end{subequations}
The equations are nonlinear in $\theta(t), \phi(t)$, and the solution
is written to ${\cal O}(\alpha^2)$. The axisymmetric part of the solution
is again the part that is not oscillatory in time; in our notation, it has the form
\be
\langle\theta(t)\rangle = 0, \quad
\langle\phi(t)\rangle = \alpha^2 \frac34 \left[\left(\frac\varpi R\right)^2 - 2\left(\frac z R\right)^2\right] \Omega t.
\ee
A secular drift obtained in this way has been used in subsequent papers by
Cuofano, {\it et al.}~\cite{Cuofano2010,Cuofano_etal12}, and by
Cao, {\it et al.}~\cite{CZW15}.
When one includes the second-order differential rotation
$\delta^{(2)}\Omega$ of the unstable $\ell=2$ $r$-mode from
Eqs.~(\ref{e:d2Omega}), additional terms are
added to the secular drift $\phi(t)$ of a fluid element's position.
The resulting expression is
given for $t\ll1/\beta$ by
\be \langle\phi(t)\rangle = \alpha^2
\left\{\frac34 \left[\left(\frac\varpi R\right)^2 - 2\left(\frac z
R\right)^2\right] \Omega+\delta^{(2)}\Omega|_{t=0}\right\} t.
\ee
Using the expression for $\delta^{(2)}\Omega$ in Eq.~(\ref{e:d2Omega}), with
Eqs.~(\ref{e:IIRROmega}) and (\ref{e:IINROmega}), we obtain the following
explicit form for the second-order drift of an unstable $\ell=2$ $r$-mode:
\be
\langle\phi(t)\rangle = -\frac32 \alpha^2\,\Omega
\left[\frac14 \left(\frac\varpi R\right)^2+\Upsilon(\varpi)\right] t.
\label{e:phi_drift}\ee
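The $z$-independence can be confirmed symbolically; the sympy sketch below (ours) adds the first-order drift rate to the two second-order contributions for $\ell=2$ and recovers the bracketed expression of Eq.~(\ref{e:phi_drift}):

```python
# Sympy check of the l = 2 drift algebra: the first-order drift rate
# (3/4)[(varpi/R)^2 - 2(z/R)^2] Omega, plus <delta2_NP v^phi> and
# delta2_N Omega(varpi), should equal -(3/2) Omega [(1/4)(varpi/R)^2 + Ups],
# with the z-dependence cancelling identically.
import sympy as sp

varpi, z, R, W, Ups = sp.symbols('varpi z R Omega Upsilon', positive=True)
ell = 2

first_order = sp.Rational(3, 4) * ((varpi / R)**2 - 2 * (z / R)**2) * W
v_np = sp.Rational(ell**2 - 1, 2) * W * (z / R)**2 * (varpi / R)**(2 * ell - 4)
d2Om = -W * (varpi / R)**(2 * ell - 4) * (
    sp.Rational((ell + 1)**2, 8) * (varpi / R)**2
    + sp.Rational(ell**2 - 1, 2) * Ups)
target = -sp.Rational(3, 2) * W * ((varpi / R)**2 / 4 + Ups)

residual = sp.simplify(first_order + v_np + d2Om - target)
```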
This expression for the drift $\langle\phi(t)\rangle$ is independent
of $z$, and therefore describes a drift that is constant on
$\varpi\,=$ constant cylinders. The analogous expression for the drift
found previously by S\'a~\cite{Sa2004} has this same feature, and
Chugunov~\cite{Chugunov2015} observes that the drift in these modes
can therefore be completely eliminated in the pure Newtonian case by
appropriately choosing the arbitrary second-order angular velocity
perturbation.
For long times (that is, for $\beta t$ arbitrary but
$\beta\ll\Omega$), the time dependence $t$ in Eq.~(\ref{e:phi_drift}) is replaced by
$\displaystyle (e^{2\beta t}-1)/2\beta$. This expression is not of
order $1/\beta$, but satisfies the bound
\be
\frac{e^{2\beta t}-1}{2\beta} < t \frac{e^{2\beta t}+1}2,
\ee
valid for all $t>0$.
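As a quick numerical sanity check (not part of the original argument), the bound can be verified on a grid of times. Dividing both sides by $t\,(e^{2\beta t}+1)/2$ shows it is equivalent to $\tanh(\beta t)<\beta t$; the growth rate $\beta=10^{-6}\,\mathrm{s}^{-1}$ below is an illustrative value:

```python
import numpy as np

# Spot-check of (e^{2*beta*t} - 1)/(2*beta) < t*(e^{2*beta*t} + 1)/2 for t > 0.
# Equivalent to tanh(beta*t) < beta*t, which holds for all beta*t > 0.
beta = 1.0e-6                 # instability growth rate in s^-1 (illustrative)
t = np.logspace(0, 8, 200)    # times from 1 s to 1e8 s

lhs = np.expm1(2.0 * beta * t) / (2.0 * beta)
rhs = t * (np.exp(2.0 * beta * t) + 1.0) / 2.0

assert np.all(lhs < rhs)
```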
\section{Polytropic Stellar Models}
\label{s:PolytropicStellarModels}
In this section we evaluate Eq.~(\ref{e:d2Omega}), to determine the
changes in the rotation laws of uniformly rotating polytropes that are
induced by the gravitational-radiation driven instability in the
$r$-modes. Polytropic stellar models (polytropes) are stars
composed of matter whose equation of state has the form
\be
p = K\rho^{1+1/n},
\ee
where $K$ and $n$, the {\it polytropic index}, are constants.
We start with the simplest case, $n=0$, the uniform-density
models. The only dependence of the differential rotation
$\delta^{(2)}\Omega$ on the equation of state is in
$\Upsilon(\varpi)$, the mass-weighted average of $(z/R)^2$ at
fixed $\varpi$ defined in
Eq.~(\ref{e:UpsilonDef}). This average
can be evaluated analytically in the uniform-density case:
\bea
\Upsilon(\varpi)=\frac{R^2-\varpi^2}{3R^2}
= \frac{z_S^2(\varpi)}{3R^2}.
\eea
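This analytic result can be checked with a short numerical quadrature, assuming only the definition of $\Upsilon(\varpi)$ as the mass-weighted average of $(z/R)^2$ over the column at fixed $\varpi$; the grid resolution below is arbitrary:

```python
import numpy as np

# Check Upsilon(varpi) for the uniform-density (n = 0) model: the mass-weighted
# average of (z/R)^2 over the column at fixed varpi should equal
# z_S^2/(3 R^2) = (R^2 - varpi^2)/(3 R^2).
R = 1.0
for w in np.linspace(0.0, 0.99, 50):
    z_s = np.sqrt(R**2 - w**2)            # surface height at this cylindrical radius
    edges = np.linspace(-z_s, z_s, 4001)
    zm = 0.5 * (edges[1:] + edges[:-1])   # midpoint rule; density is constant
    upsilon = np.mean((zm / R)**2)        # uniform weight => plain average
    assert abs(upsilon - z_s**2 / (3 * R**2)) < 1e-6
```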
Combining this result with Eqs.~(\ref{e:IIRROmega}), (\ref{e:IINROmega})
and (\ref{e:d2Omega}), gives
\bea
\hspace{-4mm} \delta^{(2)} \Omega
&=& \Omega \left(\frac{\varpi}{R}\right)^{2\ell-4}
\left[ \frac{(\ell+1)(\ell-7)}{24}\left(\frac{\varpi}{R}\right)^2\right.
\nonumber\\
&&\hspace{15mm}\left.+ \frac{(\ell^2-1)}6\left(3\frac{z^2}{R^2}-1\right)
\right]e^{2\beta t}.\qquad
\label{e:d20uniform}\eea
In particular, for the $\ell=2$ $r$-mode,
the radiation-reaction induced differential rotation has the form
\be \delta^{(2)}\Omega =
\Omega\left[\frac32\left(\frac{z}{R}\right)^2-\frac58
\left(\frac{\varpi}{R}\right)^2
-\frac12\right] e^{2\beta t},
\ee
which is positive in a neighborhood of the poles and negative near the
equatorial plane. Figure~\ref{f:dOmegaContour} illustrates the
gravitational-radiation driven differential rotation $\delta^{(2)}
\Omega/\Omega$ from the $\ell=2$ $r$-mode instability of a
slowly-rotating uniform-density star. This figure shows contours of
constant $\delta^{(2)} \Omega/\Omega$, on a cross section of the star that
passes through the rotation axis. For example, this figure illustrates
that $\delta^{(2)} \Omega/\Omega\approx-9/8$ near the surface of the star
at the equator. This indicates that the angular velocity of the star
is reduced by an amount $\approx(9/8)\Omega\alpha^2 e^{2\beta t}$ in
this region, where $\alpha e^{\beta t}$ is the amplitude of the
$r$-mode, and $\Omega$ is the angular velocity of the unperturbed
star. Similarly this figure illustrates that $\delta^{(2)}
\Omega/\Omega\approx 1$ near the poles. The angular velocity of the
star is enhanced by the $r$-mode instability in these regions.
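The $z$-independence of the drift noted earlier can be verified symbolically for this uniform-density case: adding $\delta^{(2)}\Omega|_{t=0}$ above to the first-order drift term $\frac34\left[(\varpi/R)^2-2(z/R)^2\right]\Omega$ reproduces Eq.~(\ref{e:phi_drift}) with $\Upsilon=(R^2-\varpi^2)/(3R^2)$. A sketch of the check (the variable names are ours):

```python
import sympy as sp

# Uniform-density, l = 2 consistency check: first-order drift plus
# delta^(2)Omega|_{t=0} should reproduce Eq. (phi_drift) with
# Upsilon = (R^2 - varpi^2)/(3 R^2), and the z-dependence should cancel.
w, zeta, Omega = sp.symbols('w zeta Omega', positive=True)  # w = varpi/R, zeta = z/R

d2Omega = Omega * (sp.Rational(3, 2) * zeta**2
                   - sp.Rational(5, 8) * w**2 - sp.Rational(1, 2))
drift = sp.Rational(3, 4) * (w**2 - 2 * zeta**2) * Omega + d2Omega

Upsilon = (1 - w**2) / 3
expected = -sp.Rational(3, 2) * Omega * (w**2 / 4 + Upsilon)

assert sp.simplify(drift - expected) == 0       # matches Eq. (phi_drift)
assert sp.diff(sp.expand(drift), zeta) == 0     # drift is independent of z
```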
\begin{figure}
\includegraphics[width=3.in]{Fig1.eps}
\caption{\label{f:dOmegaContour} Differential rotation
$\delta^{(2)} \Omega/\Omega$ from the $\ell=2$ $r$-mode instability
evaluated on a cross section through the rotation axis of a
slowly-rotating uniform-density star. The solution
scales with time as $e^{2\beta t}$.}
\end{figure}
The equilibrium structures of $n=1$ polytropes can also be expressed
in terms of simple analytical functions, but the integrals that
determine $\Upsilon(\varpi)$ in Eq.~(\ref{e:UpsilonDef}) cannot.
We therefore evaluate these quantities for all the $n\neq 0$
polytropes numerically.
The structures of the non-rotating Newtonian polytropes are determined
by the Lane-Emden equations, which are generally written in the form,
\bea
\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right)=-\xi^2\theta^n,
\label{e:LaneEmden}
\eea
where $\theta$ is related to the density by $\rho=\rho_c\theta^n$,
with $\theta=1$ at the center of the star and $\theta=0$ at its
surface. The variable $\xi$ is the scaled radial coordinate,
$r=a\xi$, with
\bea
a^2=\frac{(n+1)K\rho_c^{(1-n)/n}}{4\pi G}.
\eea
We solve Eq.~(\ref{e:LaneEmden}) numerically to determine the
Lane-Emden functions $\theta(\xi)$, use them to evaluate the density
profiles of these stars, $\rho(r)=\rho_c\theta^n$, and finally perform
the integrals numerically in Eq.~(\ref{e:UpsilonDef}) that determine
the mass-weighted average $\Upsilon(\varpi)$ of $(z/R)^2$ for
spherical polytropes. Figure~\ref{f:Upsilon} illustrates the results
for a range of polytropic indices. Because they are more centrally
condensed, stars with softer equations of state, i.e. polytropes with
larger values of $n$, have smaller
$\Upsilon(\varpi)$. This is most pronounced
near the rotation axis of the star where $\varpi=0$ and values of
$z^2$ in the dense core dominate the average.
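The numerical procedure just described can be sketched as follows; this is an illustrative implementation (the solver tolerances and grid are ours), checked against the known $n=1$ analytic solution $\theta=\sin\xi/\xi$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden integration: theta'' + (2/xi) theta' = -theta^n,
# theta(0) = 1, theta'(0) = 0; density profile then follows as rho_c*theta^n.
def lane_emden_rhs(xi, y, n):
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / xi]

n = 1
xi0 = 1e-6
y0 = [1.0 - xi0**2 / 6.0, -xi0 / 3.0]   # series expansion near the center
sol = solve_ivp(lane_emden_rhs, (xi0, 3.1), y0, args=(n,),
                rtol=1e-10, atol=1e-12, dense_output=True)

xi = np.linspace(0.1, 3.0, 30)
theta_num = sol.sol(xi)[0]
theta_exact = np.sin(xi) / xi           # analytic n = 1 solution
assert np.max(np.abs(theta_num - theta_exact)) < 1e-6
```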
Figure~\ref{f:dOmega_n} illustrates $\delta^{(2)}_N \Omega/\Omega$ from
Eq.~(\ref{e:IIRROmega}), the differential rotation induced by the
gravitational-radiation driven instability in the $\ell=2$ $r$-modes
for polytropes having a range of polytropic indices $n$. This graph
shows that the equatorial surface value ($\varpi=R$) of $\delta^{(2)}_N
\Omega/\Omega$ is the same for all the polytropes. This is not a
surprise, because $\Upsilon(\varpi)=0$ there for all equations of
state. Stars composed of fluid having stiffer equations of state,
i.e. smaller values of $n$, have larger values of $|\delta^{(2)}_N
\Omega/\Omega|$ near the rotation axis where $\varpi=0$.
Figure~\ref{f:dOmega_L} illustrates the differential rotation induced
by the gravitational-radiation induced instability in the $r$-modes of
$n=1$ polytropes having a range of different spherical harmonic mode
index $\ell$ values. The figure portrays a differential rotation
$\delta^{(2)}_N \Omega/\Omega$ induced by gravitational radiation that, like the magnitude of the linear mode, is more
narrowly confined to the equatorial region near the surface of the
star as the $r$-mode harmonic index $\ell$ is increased.
\begin{figure}
\includegraphics[width=3.3in]{Fig2.eps}
\caption{\label{f:Upsilon} Dimensionless ratio of the integrals
$\Upsilon(\varpi)$ defined in Eq.~(\ref{e:UpsilonDef}) that
determines the gravitational-radiation induced differential rotation
in polytropic stellar models having a range of polytropic indices
$n$. }
\end{figure}
\begin{figure}
\includegraphics[width=3.3in]{Fig3.eps}
\caption{\label{f:dOmega_n} Differential rotation induced by the
gravitational-radiation instability in the $\ell=2$ $r$-modes for a
range of polytropic indices $n$.}
\end{figure}
\begin{figure}
\includegraphics[width=3.3in]{Fig4.eps}
\caption{\label{f:dOmega_L} Differential rotation induced by the
gravitational-radiation instability in various $r$-modes
of $n=1$ polytropes for a range of spherical harmonic
mode index $\ell$ values.}
\end{figure}
\section{Discussion}
The radiation-reaction force uniquely determines the exponentially
growing differential rotation of the unstable, nonlinear $r$-mode.
We have found expressions for the rotation law and for the
corresponding secular drift of a
fluid element and have obtained their explicit forms for slowly rotating polytropes.
The formalism presented here describes an $r$-mode, driven by gravitational radiation reaction, at second order in its amplitude $\alpha$, and restricted to
a perfect-fluid Newtonian model. We now comment briefly on
the meaning of the work within a broader physical context.
First, a realistic evolution involves coupling to other modes, because
realistic initial data has small, nonzero initial amplitudes for all
modes and, at higher orders in $\alpha$, other modes are excited by
the $r$-mode itself. As a result of the couplings, the $r$-mode
amplitude will saturate, and studies of its nonlinear evolution (see
\cite{Bondarescu07,Bondarescu09} and references therein) suggest a
saturation amplitude of order $10^{-4}$ or smaller. By the time the
mode reaches saturation, the amplitude of daughter modes may be large
enough that their own second-order axisymmetric parts contribute
significantly to the differential rotation law.
Second, when there is a background magnetic field, the growing axisymmetric
magnetic field generated by the $r$-mode's secular drift can change the
profile of the growing differential rotation \cite{Chugunov2015}.
The second-order Euler equation
(\ref{e:PerturbedEulerII}) is altered by the second-order Lorentz force per unit mass,
given in an ideal magnetohydrodynamics approximation
by
$\alpha^2\langle\delta^{(2)} f_{\rm magnetic}\rangle = \alpha^2\langle\delta^{(2)} [\frac1{4\pi\rho}(\nabla\times\vec B)\times \vec B]\rangle$.
This will be of order the radiation-reaction force after an amplitude-independent time%
\footnote{For a magnetic field that grows linearly in time,
we have
\[
\alpha^2\langle\delta^{(2)} f_{\rm magnetic}\rangle \sim
\alpha^2\frac1{4\pi\rho R} B_0^2\Omega t.
\] The second-order radiation reaction force is given by
$\alpha^2\delta^{(2)} f_{GR}\sim \alpha^2\beta\Omega R$, implying that the
Lorentz force $\alpha^2\langle\delta^{(2)} f_{\rm magnetic}\rangle$
has comparable magnitude after a time given in Eq.~(\ref{e:tmagnetic}).
Here we follow Chugunov \cite{Chugunov2015}.
Chugunov uses this argument to conclude that the magnetic field will not be significantly
enhanced after it reaches $B \sim 10^8(\alpha/10^{-4})^2$ G,
but his analysis is restricted to the case where the gravitational
radiation-reaction force on the $r$-mode is negligible.
We have checked the conclusion of continued growth for Shapiro's model of
a uniform-density cylinder with an initial magnetic field \cite{shapiro00},
by adding a forcing term of the form of the second-order axisymmetric
radiation-reaction force~\cite{flr15}. We expect the amplification factor
of the magnetic field to be limited by the value of the mode amplitude,
$\alpha e^{\beta t}$, at nonlinear saturation, not by the value of the field,
unless the initial magnetic field is of order $10^{12}$ G or larger. }
\be
t \sim \beta t_A^2 \sim 10^6\,{\rm s}\ \frac{\rho}{10^{15}\,{\rm g/cm^3}}\,
\frac{\beta}{10^{-6}\,{\rm s}^{-1}} \left(\frac{10^8\,{\rm G}}{B_0}\right)^2,
\label{e:tmagnetic}\ee
where $t_A$ is the Alfv\'en time associated with the background field,
$t_A = R\sqrt{4\pi\rho}/B_0$. After this time and until the mode
reaches its nonlinear saturation amplitude, we expect that the
radiation-reaction force will continue to drive growing differential
rotation. The functional form of this differential rotation,
however, will be determined by both $\delta^{(2)} f_{GR}$
and $\langle\delta^{(2)} f_{\rm magnetic}\rangle$.
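For orientation, the fiducial numbers in Eq.~(\ref{e:tmagnetic}) can be reproduced directly; the stellar radius $R=10\,$km entering the Alfv\'en crossing time is our assumed typical value:

```python
import math

# Evaluate t ~ beta * t_A^2 with the Alfven crossing time
# t_A = R*sqrt(4*pi*rho)/B0, in CGS units.
rho = 1.0e15      # g/cm^3
beta = 1.0e-6     # s^-1, r-mode growth rate
B0 = 1.0e8        # G, background field
R = 1.0e6         # cm, assumed 10 km neutron-star radius

t_A = R * math.sqrt(4.0 * math.pi * rho) / B0   # s
t = beta * t_A**2                               # s

assert 1.0e6 < t < 2.0e6   # reproduces the ~10^6 s estimate
```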
After nonlinear saturation, we expect the growth of differential
rotation and of the magnetic field to stop within a time on the order
of the Alfv\'en time. This is because (1) the radiation-reaction force
is now time independent, and (2), with a background magnetic field,
there should no longer be a zero-frequency subspace of modes
associated with adding differential rotation. Reason (2) means that
the differential rotation and the magnetic field at the time of mode
saturation become initial data for a set of modes whose frequencies
are of order the Alfv\'en frequency. The second-order axisymmetric
part of the $r$-mode after saturation becomes effectively a system of
stable oscillators driven by a constant force. Such systems have no
growing modes, and therefore no secularly growing magnetic field.
The explicit form of the secular drift we obtain
is new, but its magnitude is consistent with that used in earlier work
\cite{Rezzolla00,Rezzolla01b,Rezzolla01c,Cuofano2010,Cuofano_etal12,CZW15}
that examines the damping of the unstable $r$-mode by this energy transfer
mechanism. This damping mechanism becomes important
whenever the rate of energy transfer to the magnetic field (by
winding up magnetic field lines or, for a superconducting region in a
neutron-star interior, by stretching magnetic-flux tubes or other
mechanisms), is comparable to the growth rate of the unstable
$r$-mode. Assuming the energy transferred to the magnetic field is
not returned to the $r$-mode and that a large fraction of the core is
a type II superconductor, Rezzolla et al. \cite{Rezzolla00} estimate
that the instability will be magnetically damped for a magnetic field
of order $10^{12}$ G. As noted above, we expect this magnetic
damping mechanism to play a role only if the
magnetic field reaches this $10^{12}$ G range
prior to nonlinear saturation of the $r$-mode.
We think it likely that
a limit on magnetic field growth imposed by saturation means that this
field strength can be reached only if the initial field is not far
below $10^{12}$ G. In addition, for an initial field of order $B\geq 10^{12}$ G
or larger, if all axisymmetric perturbations that
wind up the magnetic field have frequency higher than or of order
the Alfv\'en frequency, we conjecture (based on the toy model mentioned
in Footnote 4) that the enhancement of the
magnetic field will be too small to damp the $r$-mode.
Finally, if the magnetic field is large enough to significantly modify
the structure of the first-order $r$-modes, all of the calculations here
would need to be modified. Previous studies, however
\cite{MR02,R02,lee05,GA07,LJP10,CS13,ARR12}, find that field strength
$B \gtrsim 10^{14}-10^{15}$ G is needed to significantly alter the
linear $r$-mode of a star with spin greater than 300 Hz. When the
viscous damping time is comparable to the gravitational-wave growth
time, one would also need to include viscosity in the second-order
equations that determine the differential rotation.
\acknowledgments
We thank Andrey Chugunov for helpful comments on an earlier draft of
this manuscript, Luciano Rezzolla and Chugunov for discussions of
magnetic field evolution, and the referee for a careful reading,
useful suggestions, and insight into the likely role of nonlinear
saturation in the evolution of the r-mode's magnetic field. JF thanks
Shin'ichirou Yoshida for corrections and contributions to an early set
of notes. LL thanks the Leonard E. Parker Center for Gravitation,
Cosmology and Astrophysics, University of Wisconsin at Milwaukee for
their hospitality during several visits during which much of the
research presented here was performed. LL was supported at Caltech in
part by a grant from the Sherman Fairchild Foundation and by grants
DMS-1065438 and PHY-1404569 from the National Science Foundation. JF
was supported in part by grant PHY-1001515 from the National Science
Foundation.
\clearpage
\section{Introduction}
The past two years have seen several experimental observations of new
heavy and light quark states. Most of
those states contain a charm quark, so their observation rekindled
interest in heavy-flavor spectroscopy~\cite{Petrov:2003un}. The unusual
properties of those states invited some speculations regarding their possible
non-$q\bar q$ nature. Among those is the
$X(3872)$ state which, being discovered in the decay
$X(3872) \to J/\psi \pi^+\pi^-$, contains charm-anticharm
quarks~\cite{MolExp,Reviews}.
While a traditional $c\overline{c}$ quarkonium interpretation of this
state has been proposed~\cite{Barnes:2003vb}, its somewhat unusual mass and
decay patterns prompted a series of more exotic
interpretations~\cite{IntX}. Since the mass of the $X(3872)$ state lies
tantalizingly close to the $D^{*0}\overline D{}^0$ threshold of 3871.3~MeV, it is
tempting to assume that $X(3872)$ could be a $D^{*0}\overline D{}^0$ molecular
state~\cite{MoleculeX,Braaten}. Recent Belle data appear to be consistent with
this assignment, preliminarily confirming its $J^{PC}$ = $1^{++}$
quantum numbers~\cite{Abe:2005ix}.
Of course, states of different ``nature''
can mix, if they have the same quantum numbers, further complicating
the interpretation of the experimental data~\cite{Browder:2003fk}.
An unambiguous identification of this state must be done with
many different measurements of its decay and production
patterns. Regardless of whether $X(3872)$ is identified to be a
molecule or a regular $q\bar q$ charmonium, a theoretical analysis of
heavy-meson molecular states should be performed. Until
recently~\cite{Braaten} these studies were done mostly with the
help of various quark models~\cite{MoleculeX,OldStudies}.
In this paper we shall study those states using the techniques of
effective field theories.
This study is possible due to the multitude of scales present in QCD.
The extreme smallness of the binding energy
\begin{equation}\label{bindex}
E_b=(m^{\phantom{l}}_{D^0}+m^{\phantom{l}}_{D^{0*}})-M^{\phantom{l}}_X=
-0.6 \pm 1.1~\mbox{MeV}
\end{equation}
suggests that this state can play the role of the ``deuteron''
(or ``deuson,'' see N.~A.~T\"orn\-qvist's paper in~\cite{MoleculeX})
in meson-meson interactions. The ``deuteron-like'' nature of this
state allows us to use methods similar to those developed for the
description of the deuteron, with the added benefit of heavy-quark
symmetry. The tiny binding energy of this molecular state introduces
an energy scale which is much smaller than the mass of the lightest
particle, the pion, whose exchange can provide binding.
Thus, for a suitable description of this state in the framework of
effective field theory, the pion, along with other particles providing
possible binding contributions (i.e. the $\rho$-meson and other higher mass
resonances), must be integrated out. The resulting Lagrangian should
contain only heavy-meson degrees of freedom with interactions
approximated by local four-boson terms constrained only by the
symmetries of the theory. This approach is similar to
the description of the deuteron in the effective theory without
dynamical pions~\cite{Weinberg}. Nevertheless, we shall often appeal
to the ``exchange picture'' to gain insight into the structure of the
effective Lagrangian.
This approach provides a model-independent description of molecular
states with somewhat limited predictive power. In particular, we would
not be able to say {\it whether} the state $X(3872)$ is a $D^{*0}\overline D{}^0$
molecule or not. What we {\it would} be able to say is that if indeed
$X(3872)$ is a $D^{*0}\overline D{}^0$ molecule, the heavy-quark symmetry makes
a definite statement on the existence of a molecular state in the
$B^{*0}\overline B{}^0$ system. We also show that even though $D$ and $D^*$
are degenerate in the heavy-quark limit, the existence of a
molecular state in $D^{*0}\overline D{}^0$ channel does not necessarily
imply a molecular state in the $D^0\overline {D}^0$ or $B^{*0}\overline D{}^0$
channels.
This paper is organized as follows. In Sec. II we write the most
general effective Lagrangian consistent with the heavy-quark and chiral
symmetry. In Sec. III we obtain the bound-state energy by solving
a system of Lippmann-Schwinger equations and relate the bound-state
energies of $D^{*0}\overline D{}^0$ and $B^{*0}\overline B{}^0$ states.
We present our conclusions in Sec. IV.
\section{The Effective Lagrangian}
In order to describe the molecular states of heavy mesons we need an effective Lagrangian
which contains two- and four-body interaction terms.
The two-body effective Lagrangian that describes the strong interactions of
the heavy mesons $P$ and $P^*$ ($P=B,D$) containing one heavy quark $Q$
is well known~\cite{Grinstein:1992qt}:
\begin{eqnarray}\label{Lagr2}
{\cal L}_2 ~&=& ~-i \mbox{Tr} \left[ \overline{H}^{(Q)} v \cdot D H^{(Q)} \right]
- \frac{1}{2 m^{\phantom{l}}_P} \mbox{Tr} \left[ \overline{H}^{(Q)} D^2 H^{(Q)} \right]
\nonumber \\
&+& ~\frac{\lambda_2}{m^{\phantom{l}}_P} \mbox{Tr}
\left[ \overline{H}^{(Q)} \sigma^{\mu\nu } H^{(Q)} \sigma_{\mu\nu} \right]
+ \frac{ig}{2} \mbox{Tr} \overline{H}^{(Q)} H^{(Q)} \gamma_\mu \gamma_5
\left[\xi^\dagger \partial^\mu \xi - \xi \partial^\mu \xi^\dagger
\right] + ...
\end{eqnarray}
where the ellipsis denotes terms with more derivatives or including explicit
factors of light quark masses, $D_{ab}^\mu=
\delta_{ab}\partial^\mu-(1/2)\left(\xi^\dagger \partial^\mu \xi +
\xi \partial^\mu \xi^\dagger
\right)_{ab}$, and $g$ is the $P^{*}P\pi$ coupling.
As usual, we introduce a superfield describing the combined doublet of
pseudoscalar heavy-meson fields $P_a = \left(P^0, P^+\right)$ and
their vector counterparts with $v\cdot P^{*(Q)}_{a}=0$,
\begin{equation}
H_a^{(Q)}=\frac{1+\not{v}}{2}\left[
P^{*(Q)}_{a\mu} \gamma^\mu - P_a^{(Q)} \gamma_5
\right], \qquad \overline{H}^{(Q) a} = \gamma^0 H_a^{(Q)\dagger} \gamma^0.
\end{equation}
These fields have the usual transformation properties under the heavy-quark
spin symmetry and SU(2)$_V$ flavor symmetry\footnote{A generalization of this
discussion to the flavor SU(3) symmetry is rather straightforward.},
\begin{equation}
H_a^{(Q)} \to S\left(H^{(Q)}U^\dagger\right)_a, \qquad \overline{H}^{(Q) a} \to
\left(U \overline{H}^{(Q)}\right)^a S^{-1},
\end{equation}
and describe heavy mesons with a definite velocity $v$~\cite{Falk:1991nq}.
The third term in Eq.~(\ref{Lagr2}) is needed to account for the $P-P^*$ mass
difference $\Delta\equiv m^{\phantom{l}}_{P^*}-m^{\phantom{l}}_P=-2\lambda_2/m^{\phantom{l}}_P$.
The pseudo-Goldstone fields are introduced as $\xi=e^{i\widetilde{M}/f}$,
where $\widetilde{M}$ is the usual meson matrix~\cite{Donoghue:1992dd}
\begin{equation}
\widetilde{M}=\left(
\begin{array}{cc}
\frac{1}{\sqrt{2}}\pi^0 & \pi^+ \\
\pi^- & -\frac{1}{\sqrt{2}}\pi^0
\end{array}
\right),
\end{equation}
and $f\simeq135$ MeV is the pion decay constant. Notice that since
heavy quark-antiquark pair production is absent in this effective theory,
the effective Lagrangian of Eq.~(\ref{Lagr2}) does not contain heavy
antimeson degrees of freedom. Since we are describing the molecular
states of heavy mesons, those fields should have to be explicitly added to
the Lagrangian. The fields $H_a^{(\overline{Q})}$ and $H_a^{(\overline{Q})\dagger}$
that describe the propagation of heavy antimesons, i.e.
containing the heavy antiquark $\overline{Q}$, are introduced as
\begin{equation}
H_a^{(\overline{Q})}=\left[
P^{*(\overline{Q})}_{a\mu} \gamma^\mu - P_a^{(\overline{Q})} \gamma_5
\right] \frac{1-\not{v}}{2}, \qquad
\overline{H}^{(\overline{Q}) a} = \gamma^0 H_a^{(\overline{Q})\dagger} \gamma^0,
\end{equation}
and transform as $H_a^{(\overline{Q})} \to
\left(U H^{(\overline{Q})}\right)_a S^{-1}$ and
$\overline{H}^{(\overline{Q}) a} \to
S \left(\overline{H}^{(\overline{Q})}U^{\dagger}\right)^a$ under heavy-quark spin
and SU(2)$_V$ symmetry.
In order to write an effective Lagrangian describing the properties of
$X(3872)$, we need to couple the fields $H_a^{(Q)}$ and $H^{(\overline{Q})a}$ so that
the resulting Lagrangian respects the heavy-quark spin and chiral symmetries.
Since the binding energy of $X(3872)$ is small, the size of a bound state is rather
large. This means that the particular details of the interaction of
the heavy meson and antimeson pair (for example, a $\rho$-meson exchange
contribution) are irrelevant for the description of such a bound state and can be well
approximated by four-meson local interactions. One can write a Lagrangian describing
$X(3872)$ by first writing an effective Lagrangian above $\mu=m_\pi$ and then matching it
onto the Lagrangian for $\mu<m_\pi$, i.e. integrating out the pion degrees of freedom.
The general effective Lagrangian consistent with heavy-quark spin and chiral symmetries
can be written as
\begin{equation}\label{Lagr}
{\cal L}={\cal L}_2+{\cal L}_4,
\end{equation}
where the two-body piece, consistent with reparametrization invariance, is given
by Eq.~(\ref{Lagr2}) and the four-body piece is
\begin{eqnarray}\label{Lagr4}
-{\cal L}_4&=& \frac{C_1}{4} \mbox{Tr} \left[ \overline{H}^{(Q)} H^{(Q)} \gamma_\mu \right]
\mbox{Tr} \left[ H^{(\overline{Q})} \overline{H}^{(\overline{Q})} \gamma^\mu \right]
+ \frac{C_2}{4} \mbox{Tr} \left[ \overline{H}^{(Q)} H^{(Q)} \gamma_\mu \gamma_5 \right]
\mbox{Tr} \left[ H^{(\overline{Q})} \overline{H}^{(\overline{Q})} \gamma^\mu \gamma_5 \right].
\end{eqnarray}
This Lagrangian, together with ${\cal L}_2$, describes the scattering of $P$ and $P^*$ mesons
at the energy scale above $m_\pi$. Integrating out the pion degrees of freedom at tree
level corresponds to the modification $C_2 \to C_2' = C_2 + (2/3) \left(g/f\right)^2$. Since in this paper
we will not discuss matching at higher orders, the Lagrangian in Eq.~(\ref{Lagr4}) will be
used for the calculation of the bound state properties of $X(3872)$.
By virtue of the heavy-quark spin symmetry, the same Lagrangian governs the four-boson
interactions of {\it all} $P_a^{(*)}=D^{(*)}$ or $B^{(*)}$
states, while the flavor $SU(2)_V$ implies that there could be four such states
for each $P_a^{(*)}\overline{P}_b^{(*)}$. However, not
all of these states need be bound. Here we shall concentrate on $X(3872)$, which is a bound
state of two {\it neutral} bosons, $P_a\equiv P^0\equiv P$, assuming
the isospin breaking advocated
in Ref.~\cite{MoleculeX}. Notice that the most general Lagrangian involves two couplings,
$C_1$ and $C_2$. Other Dirac structures are possible, but will yield the same
Lagrangian for the $P\overline{P^*}$ bound state.
In order to relate the properties of $P\overline{P^*}$ molecules in the charm and beauty sectors
we shall need to see how $C_i$ couplings scale as functions of the heavy-quark mass $M$.
To see this, we recall that a system of two heavy particles requires
a nonrelativistic $v/c$ expansion, not a $1/M$ expansion. This is
necessary to prevent the resulting loop integrals from acquiring pinch
singularities~\cite{Weinberg}.
Therefore, we must power count $p^0 \sim \vec{p}^2/M$, where $\vec{p}$ is a characteristic
3-momentum of a heavy meson in the $P\overline{P^*}$ molecular bound state, which implies
that the first and the second
terms in Eq.~(\ref{Lagr2}) scale in the same way. Since the action $S=\int d^4x ~{\cal L}$ does
not scale with the heavy-quark mass, this implies that $d^4x \sim M$ and the
Lagrangian density ${\cal L} \sim 1/M$. The kinetic term in Eq.~(\ref{Lagr2}) then gives the
expected scaling of the heavy-meson field $H\sim P \sim P^*\sim M^0$, which in turn
implies from Eq.~(\ref{Lagr4}) that the couplings
\begin{equation}\label{Scaling}
C_i \sim 1/M.
\end{equation}
This dimensional analysis, however, cannot be used to predict the relative contributions of
other couplings in ${\cal L}_4$, say, relativistic corrections to Eq.~(\ref{Lagr4}), because
of the fine-tuning which produces a molecular state close to threshold in the first
place~\cite{Braaten,Weinberg}. We will use it only to relate properties of $DD^*$
and $BB^*$ systems. Similar dimensional analysis was proposed for non-relativistic
QCD~\cite{Luke:1996hj}.
Evaluating the traces yields for the $P\overline{P^*}$ sector
\begin{eqnarray}\label{LocalLagr}
{\cal L}_{4,PP^*} = &-& C_1 P^{(Q)\dagger} P^{(Q)}
P^{*(\overline{Q})\dagger}_\mu P^{* (\overline{Q}) \mu}
- C_1 P^{*(Q)\dagger}_\mu P^{*(Q) \mu}
P^{(\overline{Q})\dagger} P^{(\overline{Q})} \nonumber \\
&+& C_2 P^{(Q)\dagger} P^{*(Q)}_\mu
P^{* (\overline{Q})\dagger \mu} P^{(\overline{Q})}
+ C_2 P^{* (Q)\dagger}_\mu P^{(Q)}
P^{(\overline{Q})\dagger} P^{* (\overline{Q}) \mu}
+\dots
\end{eqnarray}
Notice that this Lagrangian differs from the one used in~\cite{Braaten}, where the
interaction strength is described by a single parameter $\lambda=-C_1=C_2$. The difference
can be understood in the ``exchange'' model, where $C_1$ and $C_2$ come from the
exchanges of mesons of different masses and parity. In this language the model of Ref.~\cite{Braaten}
corresponds to the situation of degenerate parity states. In QCD, however,
negative parity states are generally lighter than their positive parity counterparts.
This is especially clear for the lightest octet of pseudoscalar mesons, where chiral symmetry
forces their masses to be almost zero, while all the corresponding scalar mesons have
masses of the order of 1 GeV. We will nevertheless show that the resulting binding energy
still depends on a single parameter, a linear combination of $C_1$ and $C_2$.
Similarly, one obtains the component Lagrangian governing the interactions of
$P$ and $\overline P$,
\begin{equation}\label{LocalLagrPP}
{\cal L}_{4,PP} = C_1 P^{(Q)\dagger} P^{(Q)}
P^{(\overline{Q})\dagger} P^{(\overline{Q})}.
\end{equation}
Clearly, one cannot relate the existence of the bound state in the
$P\overline{P^*}$ and $P\overline{P}$ channels, as the properties of
the latter will depend on $C_1$ alone, not a linear combination of $C_1$ and $C_2$.
\section{Properties of bound states}
In order to describe bound states we shall modify the approach of
S.~Weinberg~\cite{Weinberg}. The lowest-energy bound state of $P$ and
$\overline{P^*}$ is an eigenstate of charge conjugation. Here we carry out
our calculation for ($P$,$\overline{P^*}$) = ($D$,$\overline{D^*}$),
but analogous considerations will apply to the $B$ system. The two
eigenstates of charge conjugation will be given by
\begin{equation}\label{Eigenstate}
\ket{X_{\pm}}=\frac{1}{\sqrt{2}}\left[
\ket{D^* \overline{D}} \pm \ket{D \overline{D}^*}
\right].
\end{equation}
To find the bound-state energy of $X(3872)$ with $J^{PC}=1^{++}$,
we shall look for a pole of the transition amplitude $T_{++}=\bra{X_+}T\ket{X_+}$.
\begin{figure}[tb]
\centerline{\epsfxsize=12cm\epsffile{figure1.eps}}
\centerline{\parbox{17cm}{\caption{\label{fig1}
Transition amplitudes for the $D-\overline{D}^*$ scattering written in the form
of Lippmann-Schwinger equations. Double lines indicate the vector $D^*$ or
$\overline{D}^*$ mesons, solid lines pseudoscalar $D$ or $\overline{D}$ states.}}}
\end{figure}
We first define the following transition amplitudes,
\begin{eqnarray}\label{Ts}
T_{11}&=&\langle D^* \overline{D}| T | D^* \overline{D} \rangle, \quad
T_{12}=\langle D^* \overline{D}| T | D \overline{D}^* \rangle,
\nonumber \\
T_{21}&=&\langle D \overline{D}^*| T | D^* \overline{D} \rangle, \quad
T_{22}=\langle D \overline{D}^*| T | D \overline{D}^* \rangle,
\end{eqnarray}
which correspond to the scattering of $D$ and $D^*$ mesons. Clearly,
at tree level, $T_{ii} \sim C_1$ and $T_{ij,~i\neq j} \sim C_2$, since
we consider only contact interactions. But we also have to include a
resummation of loop contributions to complete the leading
order~\cite{Weinberg}. These transition amplitudes satisfy a system of
Lippmann-Schwinger equations depicted in Fig.~\ref{fig1},
\begin{eqnarray}\label{LSE}
i T_{11}&=& -i C_1 + \int \frac{d^4 q}{(2\pi)^4} T_{11}
G_{PP^*} C_1 - \int \frac{d^4 q}{(2\pi)^4} T_{12}
G_{PP^*} C_2,
\nonumber \\
i T_{12}&=& ~~i C_2 - \int \frac{d^4 q}{(2\pi)^4} T_{11}
G_{PP^*} C_2 + \int \frac{d^4 q}{(2\pi)^4} T_{12}
G_{PP^*} C_1,
\nonumber \\
i T_{21}&=& ~~i C_2 + \int \frac{d^4 q}{(2\pi)^4} T_{21}
G_{PP^*} C_1 - \int \frac{d^4 q}{(2\pi)^4} T_{22}
G_{PP^*} C_2,
\\
i T_{22}&=& -i C_1 - \int \frac{d^4 q}{(2\pi)^4} T_{21}
G_{PP^*} C_2 + \int \frac{d^4 q}{(2\pi)^4} T_{22}
G_{PP^*} C_1,
\nonumber
\end{eqnarray}
where
\begin{equation}\label{Tra}
G_{PP^*}=
\frac{1}{4} \frac{1}{\left(\vec{p}^2/{2m^{\phantom{l}}_{D^*}}+
q_0-\Delta-\vec{q}^2/{2m^{\phantom{l}}_{D^*}}+i\epsilon\right)
\left(\vec{p}^2/{2m^{\phantom{l}}_{D}}-q_0-\vec{q}^2/{2m^{\phantom{l}}_D}+i\epsilon \right)},
\end{equation}
$\vec{p}$ is the momentum of one of the mesons in the
center-of-mass system, and we canceled out factors of
$m^{\phantom{l}}_D m^{\phantom{l}}_{D^*}$ appearing on both sides of Eq.~(\ref{LSE}).
Note that in Eq.~(\ref{Tra}) the vector propagator includes the mass
difference $\Delta$, as a consequence of the term proportional to
$\lambda_2$ in the Lagrangian of Eq.~(\ref{Lagr2})~\cite{Grinstein:1992qt}.
This choice of propagators is not unique, but our results will not
depend on it, because it amounts to a choice of a finite phase
multiplying the heavy-meson fields. This rephasing is equivalent to
measuring energies with respect to the pseudoscalar mass, $m_D$.
Then, the position of the transition amplitude pole
on the energy scale will be measured with respect to the ``constituent mass'' of the system,
which is in our case twice the pseudoscalar mass $m^{\phantom{l}}_D$~\cite{Manohar:2000dt}.
A different choice of phase will give different propagators
but also a different ``constituent mass''.
Since we are interested in the pole of the amplitude $T_{++}$, we must
diagonalize this system of equations rewritten in an algebraic matrix form,
\begin{eqnarray}\label{LSEMatrix}
\left(
\begin{array}{c}
T_{11} \\
T_{12} \\
T_{21} \\
T_{22}
\end{array}
\right)
=
\left(
\begin{array}{c}
-C_1 \\
C_2 \\
C_2 \\
-C_1
\end{array}
\right)+
i\widetilde{A} \left(
\begin{array}{cccc}
-C_1 & C_2 & 0 & 0 \\
C_2 & -C_1 & 0 & 0 \\
0 & 0 & -C_1 & C_2 \\
0 & 0 & C_2 & -C_1
\end{array}
\right)
\left(
\begin{array}{c}
T_{11} \\
T_{12} \\
T_{21} \\
T_{22}
\end{array}
\right).
\end{eqnarray}
Notice that the matrix is in block-diagonal form, which allows us to solve
Eq.~(\ref{LSEMatrix}) in two steps working only with $2\times 2$ matrices. The solution
of Eq.~(\ref{LSEMatrix}) produces the $T_{++}$ amplitude,
\begin{equation}\label{Solution}
T_{++}=\frac{1}{2}\left( T_{11}+T_{12}+T_{21}+T_{22} \right)=
\frac{\lambda}{1-i\lambda \widetilde{A}},
\end{equation}
with $\lambda=-C_1+C_2$, and $\widetilde{A}$ is a (divergent) integral
\begin{eqnarray}\label{Integral}
\widetilde{A}&\;=\;&\frac{1}{4}\int\frac{d^4q}{(2\pi)^4}
\frac{1}{\left(\vec{p}^2/{2m^{\phantom{l}}_{D^*}}+
q_0-\Delta-\vec{q}^2/{2m^{\phantom{l}}_{D^*}}+i\epsilon\right)
\left(\vec{p}^2/{2m^{\phantom{l}}_D}-q_0-\vec{q}^2/{2m^{\phantom{l}}_D}+i\epsilon \right)}
\nonumber \\
&\;=\;& \frac{i}{4} 2 \mu^{\phantom{l}}_{DD^*}
\int \frac{d^3q}{(2\pi)^3} \frac{1}{\vec{q}^2-2\mu^{\phantom{l}}_{DD^*}\left(E-\Delta\right)
-i\epsilon},
\end{eqnarray}
where $E=\vec{p}^2/2\mu^{\phantom{l}}_{DD^*}$, $\mu^{\phantom{l}}_{DD^*}$ is the reduced mass of
the $DD^*$ system, and we have used the residue theorem to evaluate the integral over $q^0$.
The divergence of the integral of Eq.~(\ref{Integral}), as usual, is removed by
renormalization. We choose to define a renormalized $\lambda^{\phantom{l}}_R$ within the $MS$
subtraction
scheme in dimensional regularization. In this scheme the integral $\widetilde{A}$ is
finite, which corresponds to an implicit subtraction of power divergences in
Eq.~(\ref{Integral}). Computing the second integral in Eq.~(\ref{Integral})
by analytically continuing to $d-1$ dimensions yields
\begin{eqnarray}
\widetilde{A}= -\frac{1}{8 \pi} \mu^{\phantom{l}}_{DD^*}
|\vec{p}| \sqrt{1-\frac{2 \mu^{\phantom{l}}_{DD^*}\Delta}{\vec{p}^2}}.
\end{eqnarray}
This implies for the transition amplitude
\begin{eqnarray}\label{FinAmp}
T_{++}=
\frac{\lambda^{\phantom{l}}_R}{1+(i/{8\pi})\lambda^{\phantom{l}}_R\, \mu^{\phantom{l}}_{DD^*}
|\vec{p}|
\sqrt{1-2 \mu^{\phantom{l}}_{DD^*}\Delta/{\vec{p}^2}}}.
\end{eqnarray}
The position of the pole of the molecular state on the energy scale
can be read off Eq.~(\ref{FinAmp}),
\begin{equation}\label{Pole}
E_{\rm Pole}=\frac{32 \pi^2}{\lambda_R^2 \mu_{DD^*}^3}-\Delta.
\end{equation}
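For completeness, the step from Eq.~(\ref{FinAmp}) to Eq.~(\ref{Pole}) can be sketched as follows (our own summary of the standard argument): below threshold, $E<\Delta$, the factor $|\vec{p}|\sqrt{1-2\mu^{\phantom{l}}_{DD^*}\Delta/\vec{p}^{\,2}}=\sqrt{2\mu^{\phantom{l}}_{DD^*}(E-\Delta)}$ continues to $i\kappa$ with $\kappa=\sqrt{2\mu^{\phantom{l}}_{DD^*}(\Delta-E)}$, so the denominator of Eq.~(\ref{FinAmp}) vanishes when

```latex
1-\frac{\lambda^{\phantom{l}}_R\,\mu^{\phantom{l}}_{DD^*}}{8\pi}\,\kappa=0
\quad\Longrightarrow\quad
\Delta-E=\frac{\kappa^2}{2\mu^{\phantom{l}}_{DD^*}}
=\frac{32\pi^2}{\lambda_R^2\,\mu_{DD^*}^3}.
```

With energies measured from $2m^{\phantom{l}}_D$ and $E_{\rm Pole}=-E$, this reproduces Eq.~(\ref{Pole}); note that a bound state requires $\lambda^{\phantom{l}}_R>0$, consistent with Eq.~(\ref{Lambda}).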
This is the amount of energy we must subtract from the ``constituent mass''
of the system, determined above as $2 m^{\phantom{l}}_D$,
in order to calculate the mass
\begin{equation}
M^{\phantom{l}}_X=2 m^{\phantom{l}}_D-E_{\rm Pole}=2
m^{\phantom{l}}_D+\Delta-\frac{32 \pi^2}{\lambda_R^2 \mu_{DD^*}^3}.
\end{equation}
Recalling the definition of binding energy, Eq.~(\ref{bindex}),
and that $m^{\phantom{l}}_{D^*}$ = $m^{\phantom{l}}_{D}$ + $\Delta$,
we infer
\begin{equation}\label{Binding}
E_b=\frac{32 \pi^2}{\lambda_R^2 \mu_{DD^*}^3}.
\end{equation}
Assuming $E_b$ = 0.5 MeV, which is one sigma below the central value in
Eq.~(\ref{bindex}) \cite{MolExp}, and the experimental values for the
masses~\cite{PDG}, we obtain
\begin{equation}\label{Lambda}
\lambda^{\phantom{l}}_R \simeq 8.4 \times 10^{-4} \ {\rm MeV}^{-2}.
\end{equation}
Note that the binding energy scales as $1/M$ in the
heavy-quark limit; thus a small binding energy is natural for a heavy
system. The small binding energy of the $X(3872)$
state implies that the scattering length
$a^{\phantom{l}}_D$ is large and can be written as
\begin{equation}
a^{\phantom{l}}_D=\sqrt{\left(2 \mu^{\phantom{l}}_{DD^*} E_b \right)^{-1}}=
\frac{\lambda^{\phantom{l}}_R \mu^{\phantom{l}}_{DD^*}}
{8 \pi},
\end{equation}
yielding a numerical value $a^{\phantom{l}}_D$ = 6.3 fm.
Since the scattering length is large, universality implies that the
leading-order wave function of $X(3872)$ is known,
\begin{equation}
\psi^{\phantom{l}}_{DD^*}(r) =\frac{e^{-r/a^{\phantom{l}}_D}}{\sqrt{2\pi a^{\phantom{\dagger}}_D}r}.
\end{equation}
This can be used to predict the production and decay properties of
$X(3872)$~\cite{Braaten,ExpX}.
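As a quick consistency check (our own numerical sketch, not part of the original derivation), the wave function above is unit-normalized, $\int_0^\infty |\psi^{\phantom{l}}_{DD^*}|^2\, 4\pi r^2\, dr = 1$, for any value of the scattering length; a direct numerical integration with $a^{\phantom{l}}_D = 6.3$ fm confirms this:

```python
import math

# Our own numerical check (not from the paper): the universal wave function
# psi(r) = exp(-r/a) / (sqrt(2*pi*a) * r) is unit-normalized for any a.
a = 6.3          # scattering length in fm, the value quoted in the text
dr = 1e-3        # radial step in fm
norm = 0.0
for i in range(1, 200_001):          # integrate out to r = 200 fm >> a
    r = i * dr
    psi = math.exp(-r / a) / (math.sqrt(2 * math.pi * a) * r)
    norm += 4 * math.pi * r * r * psi * psi * dr

print(round(norm, 3))  # -> 1.0
```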
Once we establish the molecular nature of $X(3872)$, its experimental
mass gives us its binding energy. The latter is dependent on the coupling
constant $\lambda^{\phantom{l}}_R$, and may be used to predict the binding energies of
hypothetical molecular bound states in the beauty sector,
as well as to discuss its implications for the beauty-charm sector. Alternatively, the coupling
constant $\lambda^{\phantom{l}}_R$ can be fixed from the resonance-exchange model,
in which case (model-dependent) predictions are possible for all the heavy sectors.
Taking into account the scaling of $\lambda^{\phantom{l}}_R$ with $M$ given by
Eq.~(\ref{Scaling}), we obtain
\begin{equation}\label{Rescale}
\lambda_R^B \sim \lambda_R^D \frac{\mu^{\phantom{l}}_{DD^*}}{\mu^{\phantom{l}}_{BB^*}}.
\end{equation}
Formula (\ref{Binding}) can now be used for the $B$ system.
This implies the existence of an S-wave bound state with
binding energy $E_b$ = 0.18 MeV, mass $M_{X_b}$ = 10604 MeV, and a
scattering length $a^{\phantom{l}}_B$ = $a^{\phantom{l}}_D$ = 6.3 fm,
because of the lack of scaling of the scattering length with the
heavy-quark mass. The above prediction for the
binding energy is lower than the quark-model prediction of Wong
in~\cite{MoleculeX}. Similar considerations apply to $D^0 \overline D{}^0$ and $B^0 \overline B{}^0$ states:
In their case the starting point is the
Lagrangian term in Eq.~(\ref{LocalLagrPP}). Since it involves only a
single term, the calculations are actually easier and involve only
one Lippmann-Schwinger equation. The resulting binding energy is then
\begin{equation}\label{BindingC}
E_b=\frac{256 \pi^2}{C_{1R}^2 m_D^3},
\end{equation}
and the equivalent formula for the $B^0 \overline B{}^0$ system may be obtained
by rescaling the coupling constant $C_{1R}^2$ as we did with
$\lambda^{\phantom{l}}_R$ in Eq.~(\ref{Rescale}).
Examining Eq.~(\ref{BindingC}) we immediately notice that the existence of
a bound state in the $D^*\overline{D}$ channel does not dictate the properties
of a possible bound state in the $D^0 \overline D{}^0$ or $B^0 \overline B{}^0$ channels, since $C_1$ and
$C_2$ are generally not related to each other.
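The numerical chain above, from the assumed binding energy to $\lambda^{\phantom{l}}_R$, the scattering length, and the rescaled beauty-sector prediction, can be verified with a short script. This is our own check; the meson masses below are approximate PDG values and are stated here as assumptions:

```python
import math

# Our own numerical check of the chain E_b -> lambda_R -> a_D and of the
# rescaled beauty-sector prediction.  The meson masses are approximate PDG
# values in MeV and are an assumption of this sketch, not taken from the text.
m_D, m_Dst = 1864.8, 2006.9      # D0 and D*0 masses
m_B, m_Bst = 5279.7, 5324.7      # B0 and B*0 masses
hbar_c = 197.327                 # MeV fm, for unit conversion

mu_D = m_D * m_Dst / (m_D + m_Dst)    # reduced mass of the D D* system
mu_B = m_B * m_Bst / (m_B + m_Bst)    # reduced mass of the B B* system

E_b = 0.5                                              # assumed binding energy, MeV
lam_D = math.sqrt(32 * math.pi**2 / (E_b * mu_D**3))   # invert Eq. (Binding)
a_D = lam_D * mu_D / (8 * math.pi) * hbar_c            # scattering length in fm

# Rescale the coupling to the beauty sector and reuse Eq. (Binding).
lam_B = lam_D * mu_D / mu_B
E_b_B = 32 * math.pi**2 / (lam_B**2 * mu_B**3)
M_Xb = 2 * m_B + (m_Bst - m_B) - E_b_B                 # mass of the predicted X_b

print(f"lambda_R = {lam_D:.2e} MeV^-2")    # ~ 8.4e-4
print(f"a_D = {a_D:.1f} fm")               # ~ 6.3
print(f"E_b(B) = {E_b_B:.2f} MeV")         # ~ 0.18
print(f"M_Xb = {M_Xb:.0f} MeV")            # ~ 10604
```

The script reproduces the numbers quoted in the text, including the equality $a^{\phantom{l}}_B=a^{\phantom{l}}_D$, since the rescaling of the coupling cancels in $\lambda^{\phantom{l}}_R\mu/8\pi$.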
The discussion of the beauty-charm system parallels what was done above. In this case the situation is
a bit different, because there are two states, $B^0 \overline{D}^{*0}$ and $D^0 \overline{B}^{*0}$.
The treatment of these states depends on how the heavy-quark limit is taken.
They have different masses, since $\Delta_{BB^*}\neq\Delta_{DD^*}$.
This implies that these two states do not mix and have to be treated
separately, so that their binding energies could be obtained by respectively
substituting $\lambda^{\phantom{l}}_R$ $\to$ $C_{1R}$ and $\mu^{\phantom{l}}_{DD^*}$ $\to$
$\mu^{\phantom{l}}_{BD^*}$ or $\mu^{\phantom{l}}_{DB^*}$ in Eq.~(\ref{Binding}). But just like in
the case of $D^0 \overline D{}^0$ or $B^0 \overline B{}^0$ channels, we cannot
predict bound states in these channels from the heavy-quark
symmetry arguments alone.
\section{Conclusion}
We have used an effective field theory approach in the analysis of the
likely molecular state $X(3872)$, by describing its binding
interaction with contact terms in a heavy-quark symmetric
Lagrangian. The flexibility of this description allows us to ignore the
details of the interaction and to concentrate on its effects, namely a
shallow bound state and a large scattering length. Taking into account
the universality and the scaling of the effective coupling constants
we are able to extend our description to the $B$ system.
We found that if $X(3872)$ is indeed a molecular bound state of the $D^{*0}$ and
$\overline D{}^0$ mesons, heavy-quark symmetry requires the existence of a molecular
bound state $X_b$ of $B^{*0}$ and $\overline B{}^0$ with a mass of 10604 MeV.
\section*{ACKNOWLEDGMENTS}
A.~P. thanks T.~Mehen and R.~Hill for useful conversations.
F.~G. thanks P.~Bedaque for his clarifying remarks.
The authors would like to thank E.~Braaten for reading the manuscript and for
helpful comments. A.~P. also thanks the Institute for Nuclear Theory at the
University of Washington for its hospitality and the Department of Energy for
partial support during the completion of this work.
This work was supported in part by the U.S.\ National Science Foundation under
Grant PHY--0244853, and by the U.S.\ Department of Energy under Contract
DE-FG02-96ER41005.
\section{Introduction}
Due to the importance of convexity and its generalisations in the study of optimality, researchers have concentrated much of their effort on generalised convex functions. As an illustration, Hudzik and Maligranda (1994) \cite{Hudzik} investigated two distinct forms of $s$-convexity and found that $s$-convexity in the second sense is essentially stronger than in the first sense whenever $0< s< 1$. Youness (1999) \cite{Youness} expanded the definitions of convex sets and functions to create a new class of sets and functions known as $E$-convex sets and $E$-convex functions.
Yang (2001) \cite{X} enhanced Youness's work \cite{Youness} by providing some illustrative examples.
In recent years, researchers have given these generalized convex functions additional consideration. The semi-$B$-preinvex functions were studied by X.J. Long and J.W. Peng in 2006 \cite{Long} as a generalization of the semi-preinvex functions and the $b$-vex functions. Y. Syau et al. (2009) \cite{Syau} developed the $E$-$b$-vex functions, a novel class of functions generalizing the $b$-vex functions and the $E$-vex functions. In 2011, T. Emam investigated a novel class of functions known as approximately $b$-invex functions; he discussed some of their properties and obtained necessary optimality conditions for nonlinear programming using these functions. M.T. Chao et al. (2012) \cite{Chao} studied a new class of generalized sub-$b$-convex functions and sub-$b$-convex sets, and established conditions for the existence of optimal solutions for both unconstrained and inequality-constrained sub-$b$-convex programming.
This paper aims to introduce a new class of generalized exponential kind of convex functions, termed $GS$-exponential kind of convex functions, and to explore certain characteristics of this class. It draws inspiration from a number of research papers \cite{Butt,Fakhar,Fulga,IH,Kadakal1,Mishra,ozcan,Shi,Wang,Zhao}. Additionally, we establish sufficient optimality conditions, derived from $GS$-exponential kind of convexity, for both unconstrained and inequality-constrained programming problems.
\section{Preliminaries}
In this section we recall the definitions of sub-$b$-$s$-convexity, exponential kind of convexity, and $s$-convexity of functions. Throughout the remainder of this work, $V$ denotes a non-empty convex subset of $\mathbb{R}^n$.
\begin{definition}\label{d3} \cite{Liao}
The function $Q: V \rightarrow \mathbb{R}$ is known as sub-$b$-$s$-convex in the second sense associated with the map $G: V \times V \times (0,1]\rightarrow \mathbb{R}$, if
$$ Q(am_1+(1-a)m_2)\leq a^sQ(m_1)+(1-a)^sQ(m_2)+G(m_1,m_2,s)$$
holds for all $m_1,m_2\in V, a \in [0,1 ]$ and for any fixed $s \in (0,1]$.
\end{definition}
\begin{definition}\label{d4}\cite{Hudzik}
The function $Q: V \rightarrow \mathbb{R}$ is known as $s$-convex in the second sense, if for all $m_1,m_2\in V, a \in [0,1 ]$ and for any fixed $s \in (0,1]$, we have
$$ Q(am_1+(1-a)m_2)\leq a^sQ(m_1)+(1-a)^sQ(m_2)$$
\end{definition}
\begin{definition}\label{d5}\cite{Kadakal}
A positive function $Q: V \rightarrow \mathbb{R}$ is known as exponential kind of convex function, if
$$Q(am_1+(1-a)m_2)\leq(e^a-1)Q(m_1)+(e^{1-a}-1)Q(m_2)$$
holds for all $m_1,m_2\in V, a \in [0,1 ]$.
\end{definition}
The concepts in Definitions~\ref{d3}, \ref{d4} and \ref{d5} motivate us to explore a new notion, which we call the $GS$-exponential kind of convex function.
\section{Main Results}
\begin{definition}\label{d1}
The function $ Q : V\rightarrow \mathbb{R}$ is known as $GS$-exponential kind of convex function on $V$ associated with the map $G : V \times V \times (0, 1] \rightarrow \mathbb{R}$, if
\begin{equation}\label{i1}
Q(am_1+(1-a)m_2)\leq(e^a-1)^sQ(m_1)+(e^{1-a}-1)^sQ(m_2)+aG(m_1,m_2,s)
\end{equation} holds for each $m_1, m_2 \in V, a \in [0,1]$ and for any fixed $s \in(0,1].$
\end{definition}
\begin{remark}\label{r1}
If we take $s=1$, $Q(m_1)$ non-negative and $G(m_1,m_2,s)=0$, then the $GS$-exponential kind of convex function reduces to the exponential kind of convex function.
\end{remark}
\begin{theorem}\label{t1}
If $Q_1, Q_2 : V \rightarrow\mathbb{R}$ are $GS$-exponential kind of convex functions associated with the maps $G_1, G_2$ respectively, then $Q_1+Q_2$ and $\beta Q_1$ $(\beta \geq 0)$ are also $GS$-exponential kind of convex functions.
\end{theorem}
\begin{corollary}\label{c1}
If $ Q_i : V \rightarrow \mathbb{R}$, $(i=1,2,....., n)$ are $GS$-exponential kind of convex functions associated with the maps $G_i : V \times V \times (0,1 ] \rightarrow \mathbb{R},$ $(i=1,2,....,n)$, respectively, then $Q=\sum_{i=1}^{n}\beta_iQ_i$, $\beta_i\geq0$, $(i=1,2,...,n)$ is a $GS$-exponential kind of convex function associated with the map $G=\sum_{i=1}^{n} \beta_iG_i$.
\end{corollary}
\begin{lemma}\label{l1}
For all $a \in [0,1]$ and $s \in (0,1]$, the inequalities $(e^a-1)^s \geq a$ and $(e^{1-a}-1)^s \geq 1-a$ hold.
\end{lemma}
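The inequalities of the preceding lemma can be spot-checked numerically; the following script (our own illustration, not part of the proofs) verifies them on a grid of values of $a$ and $s$:

```python
import math

# Our own numerical spot-check of the lemma: (e^a - 1)^s >= a and
# (e^(1-a) - 1)^s >= 1 - a for a in [0, 1] and s in (0, 1].
ok = all(
    (math.e**a - 1) ** s >= a and (math.e**(1 - a) - 1) ** s >= 1 - a
    for a in (i / 100 for i in range(101))
    for s in (j / 100 for j in range(1, 101))
)
print(ok)  # -> True
```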
\begin{proposition}\label{p1}
Every convex function is a $GS$-exponential kind of convex function provided that the associated map $G$ is non-negative.
\end{proposition}
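The preceding proposition can be illustrated numerically. In the sketch below we pick the non-negative convex function $Q(m)=m^2$ and the trivial non-negative map $G=0$; both choices are ours and serve only as an example:

```python
import math

# Our own illustration of the proposition: for the non-negative convex
# function Q(m) = m^2 and the trivial non-negative map G = 0 (both choices
# are ours), the inequality of Definition 1 holds on a sample grid.
Q = lambda m: m * m
holds = all(
    Q(t * m1 + (1 - t) * m2)
    <= (math.e**t - 1) ** s * Q(m1) + (math.e**(1 - t) - 1) ** s * Q(m2) + 1e-12
    for m1 in (-2.0, -0.5, 0.0, 1.0, 3.0)
    for m2 in (-1.5, 0.0, 0.5, 2.0)
    for t in (i / 20 for i in range(21))
    for s in (0.1, 0.3, 0.5, 0.7, 1.0)
)
print(holds)  # -> True
```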
\begin{theorem}\label{t2}
If $Q: V \rightarrow \mathbb{R}$ is a $GS$-exponential kind of convex function associated with the map $G$ and $S : \mathbb{R} \rightarrow \mathbb{R}$ is a non-negative linear function, then $S \circ Q$ is a $GS$-exponential kind of convex function associated with the map $S \circ G$.
\end{theorem}
\begin{definition}\label{d2}
Assume that $U$ be a non-empty subset of $\mathbb{R}^{n+1}$. Then, $U$ is known as $GS$-exponential kind of convex set associated with the map $G : \mathbb{R}^{n} \times \mathbb{R}^{n} \times (0,1] \rightarrow \mathbb{R}$ if for all $(m_1,\alpha_1),(m_2, \alpha_2) \in U, m_1, m_2 \in \mathbb{R}^n, a \in [0, 1]$ and some fixed $ s \in (0,1 ],$ we have $$(am_1+(1-a)m_2,(e^a-1)^s\alpha_1+(e^{1-a}-1)^s\alpha_2+aG(m_1,m_2,s)) \in U.$$
\end{definition}
Now, we provide a characterization of $GS$-exponential kind of convex functions $Q : V \rightarrow \mathbb{R}$ in terms of their epigraphs, given by $$E(Q)=\{(m, \alpha): m \in V, \alpha \in \mathbb{R}, Q(m)\leq \alpha\}.$$
\begin{theorem}\label{t3}
A function $ Q : V \rightarrow \mathbb{R}$ is a $GS$-exponential kind of convex function associated with the map $G : V \times V \times (0,1 ] \rightarrow \mathbb{R}$, if and only if $E(Q)$ is a $GS$-exponential kind of convex set associated with the map $G$.
\end{theorem}
\begin{theorem}\label{t4}
Assume that $m_2>0$ and that $Q_\beta: [m_1,m_2]\rightarrow \mathbb{R}$ is a family of numerical functions such that each $Q_\beta$ is a $GS$-exponential kind of convex function associated with the map $G_\beta$, and each $G_\beta$ is a bounded function. Let $Q(m)=\sup_\beta Q_\beta(m)$ and $G(m_1,m_2,s)=\sup_\beta G_\beta(m_1,m_2,s)$. If the set $K=\{r \in [m_1,m_2] : Q(r)<\infty\}$ is non-empty, then $K$ is an interval and $Q$ is a $GS$-exponential kind of convex function on $K$.
\end{theorem}
\begin{theorem}\label{t5}
Let $Q:[m_1,m_2] \rightarrow \mathbb{R}$ be a $GS$-exponential kind of convex function associated with the map $G: [m_1,m_2] \times [m_1,m_2] \times (0,1] \rightarrow \mathbb{R}$, and suppose that $G(m_1,m_2,s)$ is bounded. Then $Q$ is also bounded on $[m_1,m_2].$
\end{theorem}
In what follows, $Q$ is assumed to be differentiable and $ s,a \in (0,1].$
\begin{theorem}\label{t6}
Let $Q: V \rightarrow\mathbb{R}$ be a non-negative differentiable $GS$-exponential kind of convex function associated with the map $G$. Then
$$ (i) \nabla Q(m_2)^T(m_1-m_2)< \dfrac{(e^a-1)^s}{a}Q(m_1)+\dfrac{e^{(1-a)s}}{a}Q(m_2)+G(m_1,m_2,s)-\dfrac{o(a)}{a},$$
$$
(ii)\nabla Q(m_2)^T(m_1-m_2)<\dfrac{(e^s-1)^s(Q(m_1)-Q(m_2))+3Q(m_2)-o(a)}{a}+G(m_1,m_2,s)
$$
\end{theorem}
\begin{theorem}\label{t7}
Let $Q: V \rightarrow \mathbb{R}$ be a non-positive differentiable $GS$-exponential kind of convex function associated with the map $G$. Then
$$ \nabla Q(m_2)^T(m_1-m_2)\leq\dfrac{(e^a-1)^s}{a}[Q(m_1)-Q(m_2)]+G(m_1,m_2,s)-\dfrac{o(a)}{a}.$$
\end{theorem}
\begin{corollary}\label{c2}
Assume that $Q:V \rightarrow \mathbb{R}$ is a positive differentiable $GS$-exponential kind of convex function, then
\begin{eqnarray*}
\nabla [Q(m_2)-Q(m_1)]^T(m_1-m_2)&<& \dfrac{(e^a-1)^s}{a}[Q(m_1)+Q(m_2)]+\dfrac{e^{(1-a)s}}{a}[Q(m_1)+Q(m_2)]\\&&+G(m_1,m_2,s)+G(m_2,m_1,s)-2\dfrac{o(a)}{a}.
\end{eqnarray*}
In case if $Q$ is a negative valued, then
$$ \nabla [Q(m_2)-Q(m_1)]^T(m_1-m_2)\leq G(m_1,m_2,s)+G(m_2,m_1,s)-2\dfrac{o(a)}{a}.$$
\end{corollary}
The above results can now be applied to nonlinear programming. To this end, we consider the unconstrained problem (S).
\begin{equation}\label{i22}
(S): \min \{Q(m), m \in V\}
\end{equation}
\begin{theorem}\label{t8}
Let $ Q: V \rightarrow \mathbb{R}$ be a positive differentiable $GS$-exponential kind of convex function associated with the map $G$. Also, suppose that $m \in V$ and the inequality
\begin{equation}\label{i10}
\nabla Q(m)^T(n-m)> G(n,m,s)+\dfrac{3Q(m)-o(a)}{a}
\end{equation}
holds for each $n \in V$, $a \in(0,1)$, and for any fixed $s \in (0,1]$, then $m$ is the optimal solution of the problem \eqref{i22} of minimizing $Q$ over $V$.
\end{theorem}
We consider the following example of unconstrained programming.
\section{Introduction}
The steady-state radio sky is fairly well studied and modeled (e.g. \citet{Be95, Co98}), but the dynamic radio sky has not been studied in detail, for reasons including the lack of telescopes with a large field of view, observational constraints, limited bandwidth, and poor time resolution. There are various types of sources which show variability at different time scales. For example, brown dwarfs, flaring stars, masers, pulsars, micro-quasars, supernovae, gamma-ray bursts and active galactic nuclei show high levels of variability (e.g. \citet{Co04} for a review). Until recently, only a few radio surveys searched efficiently for variable and transient radio sources (e.g., see, \citep{Gr86, Ma09}). As a result, most radio transients have been found through follow-up observations of known or suspected transient emitters.
Recent programs searching for radio transients in direct and archival observations have revealed some potential radio transients, consistent with the expectation that previous limitations on the detection of radio transients were instrumental rather than astrophysical. For example, a giant burst was detected from a young stellar object \citep{Bo03}. Many 1--3 Jy radio bursts were found at high and low Galactic latitudes \citep{Ni07, Ma07, Ki08}. A periodic, coherent and circularly polarized burst was found in an ultra-cool dwarf \citep{Ha07}.
A few tens of Fast Radio Bursts (FRBs) have been detected so far, each lasting only a few milliseconds (e.g. \citet{Lo07, Ra16}). The origin of FRBs is still unclear, but due to their relatively high dispersion measures, some believe these sources have an extragalactic origin (e.g. \citet{Lo07}).
Huge improvements in field of view, especially at low radio frequencies, help in studying transient and variable radio sources more effectively (e.g. \citet{Ma14}). However, the non-detection of transient sources in a recent $12000$ deg$^2$ systematic transient search comprising 2800 pointings with the Jansky Very Large Array Low-band Ionosphere and Transient Experiment (VLITE) \citep{Po16}, the detection of only one transient source from LOFAR monitoring of a 1925 deg$^2$ region close to the North Celestial Pole \citep{St16}, and the non-detection of transient sources in a 1430 deg$^2$ search using the Murchison Wide-field Array (MWA) (\cite{Be14}; also see the MWA study by \citet{Ro16}) show that detections of transient radio sources are currently not very common, especially at low radio frequencies. Future deep, wide-field searches may potentially be far more fruitful (e.g. recent transient rate calculations in \citet{Me15, Mo16}).
Large archives of data from various telescopes are an important resource in the search for transient and variable radio sources. Earlier, \citet{Ba11} reported 15 transient sources from a study of 22 yr of archival data of the Molonglo Observatory Synthesis Telescope covering a 2776 deg$^2$ survey area. Recently, \citet{Mu17} found a candidate transient source at low frequency from a comparison of the TIFR GMRT Sky Survey Alternative Data Release 1 (TGSS ADR1, see \citet{In17}) and the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM, see \citet{Hu17}) survey catalogues. Many variable and transient radio sources were reported from archival data searches of the NRAO VLA Sky Survey (NVSS) \citep{Co98} and the FIRST survey \citep{Le02, Th11}.
Ten milli-Jansky level transients were detected in 22 years of archival Karl G. Jansky Very Large Array (JVLA) observations of a single field of view at 5 and 8.4 GHz \citep{Bo07}. However, it was later shown that more than half of the transient sources reported in \citet{Bo07} were due to data artifacts, and that the remaining sources had too low a signal-to-noise ratio $(S/N)$ to conclusively establish their transient nature \citep{Fr12}.
Earlier, detections of two transient radio sources, GCRT J1745--3009 \citep{Hy05,Hy07,Ro10} and GCRT J1742--3001 \citep{Hy09}, were made through a systematic search near the Galactic Center region using the Karl G. Jansky Very Large Array (JVLA) and the Giant Metrewave Radio Telescope (GMRT). GCRT J1745--3009 is a unique source which was detected only three times: in 2002 \citep{Hy05}, and in 2003 and 2004 \citep{Hy07}. In 2002, this source exhibited $\sim$10 minute long, $\sim$1 Jy peaked bursts with a $\sim$77 minute period. The emission from the source was coherent \citep{Hy05}, with an extremely steep spectral index ($\alpha=-13.5\pm3.0$) and high circular polarization \citep{Ro10}. All three detections of the source were at 330 MHz. The characteristics of GCRT J1742--3001 did not match any known emission mechanism of transient compact sources; as a result, the source seems to represent a member of a new class of coherently emitting objects. GCRT J1742--3001 was active for a few months, with a flux density of $\sim$150 mJy at the peak, and showed no periodicity in its emission \citep{Hy09}. The source was detected at 235 MHz and exhibited a steep spectral index ($\alpha<-2$). For both of these sources, no counterpart was discovered at high energies, making it impossible to find them through conventional radio follow-up of high-energy observations.
In this paper, we look for new variable/transient sources in a single well observed field centered on the micro-quasar Cygnus X-1. Though we could not
detect a new source, we discovered hitherto unreported transient
behavior of one NVSS source, namely NVSS J195754+353513. This source is located approximately
$23.8$ arcminutes from the micro-quasar Cygnus X-1, at J2000 co-ordinates 19h57m54.3s
($\pm$ 0.7s) +35$^{\circ}$34$^{\prime}$59.6$^{\prime \prime}$ ($\pm$0.6$^{\prime \prime}$).
In Section 2, we summarize the observational details and the data analysis procedure. In Section 3,
we summarize various results on the source. We discuss the significance of the various
findings in Section 4. Finally, we make concluding remarks in Section 5.
\begin{figure*}
\vbox{
\centerline{
\psfig{figure=B_07-11-1999-VS2.PS,height=7cm,width=7cm,angle=0}
\psfig{figure=B_11-02-2000-VS2.PS,height=7cm,width=7cm,angle=0}}}
\vbox{
\centerline{
\psfig{figure=A_04-03-90-VS2.PS,height=7cm,width=7cm,angle=0}
\psfig{figure=C-06-05-84-VS2.PS,height=7cm,width=7cm,angle=0}}}
\vbox{
\centerline{
\psfig{figure=D_18-03-91-1-EDITED.PS,height=7cm,width=7cm,angle=0}
\psfig{figure=C_N_31-08-97-VS2.eps,height=7cm,width=7cm,angle=0}}}
\caption{Images of J195754+353513 in different configurations of the JVLA at 1490 MHz. In the upper left and upper right panels, we show images of the source in the B configuration on 7th November 1999 and 11th February 2000, with flux densities of 56.5 mJy and 26.0 mJy respectively. In the middle left, middle right and lower left panels, we show examples of detections of the source in the A, C and D configurations of the JVLA. In the lower right panel, we show an example of a non-detection when the JVLA was in the C configuration. The synthesized beam is included at the bottom-left corner of each panel. The location of the NVSS position is shown by a cross in all the images, where the size of the cross is the error in the NVSS location multiplied by 10 for easy visualization. For details, see text and Table 1.}
\label{transient-image}
\end{figure*}
\section{Observations and Data Analysis with the Karl G. Jansky Very Large Array}
A blind search for new variable radio sources was conducted using archival
JVLA data\footnote{https://science.nrao.edu/facilities/vla/archive/index} at the L-band frequency of 1400 MHz.
The L band is well suited for this kind of search with the JVLA, as it provides the right balance between field of view and sensitivity. Though the field of view in the 74 and 325 MHz bands of the JVLA would be larger, these bands have relatively poor sensitivity, and less archival data is available at frequencies below 1400 MHz. Also, for some sub-classes, the transient detection rate is higher at 1400 MHz, as discussed in Section 1.
The JVLA comprises twenty-seven fully steerable antennas, each of 25-meter diameter, in a Y-shaped array. It is located around eighty km west of Socorro, New Mexico. The antennas are periodically moved into different configurations to achieve different scientific goals, where the most extended antenna configuration is known as the A configuration and the most compact one as the D configuration. The B and C configurations are intermediate between A and D. Occasionally the antennas are placed in hybrid configurations, like AB or BC, when some of the antennas are in one configuration and some in another. The maximum baseline lengths \(B_{max}\) in the A, B, C and D configurations are 36.4, 11.1, 3.4 and 1.03 km respectively, which correspond to synthesized beam-widths \((\theta_{HPBW})\) of 1.4, 4.6, 15 and 49 arcsec respectively at 1400 MHz.
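The quoted synthesized beam-widths follow, to order of magnitude, from $\theta_{HPBW} \sim \lambda/B_{max}$; the short script below (our own check, with a prefactor that depends on the uv coverage and weighting) reproduces them:

```python
import math

# Our own order-of-magnitude check of the synthesized beam-widths quoted
# above, using theta ~ lambda / B_max.  The exact value depends on the uv
# coverage and weighting; the quoted numbers correspond to a prefactor of
# roughly 1.15.
c = 2.998e8                         # speed of light, m/s
lam = c / 1.4e9                     # wavelength at 1400 MHz, ~0.214 m
rad_to_arcsec = 180 / math.pi * 3600

for config, b_max_km, quoted in [("A", 36.4, 1.4), ("B", 11.1, 4.6),
                                 ("C", 3.4, 15.0), ("D", 1.03, 49.0)]:
    theta = lam / (b_max_km * 1e3) * rad_to_arcsec
    print(f"{config}: {theta:5.1f} arcsec (quoted {quoted})")
```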
The data used for the present work were taken with different configurations of the JVLA between 12 October 1983 (MJD 45619) and 4 June 2003 (MJD 52794). In total, we used 262 different epochs of observations, with intervals between successive observations ranging from days to years. The antennas were in configurations A, AB, AD, B, BC, C, CD, and D for 53, 16, 5, 46, 15, 45, 26, and 56 days, respectively.
\begin{table*}
\caption{\bf Details of observations corresponding to Figure \ref{transient-image}}
\centering
\begin{tabular}{l c c c c c c c}
\hline
Image &Date & Date & Configuration & Flux Density & RMS &Synthesized beam&On source time\\
Location&(UT) & (MJD) & & (mJy) & (mJy beam$^{-1}$) & & (sec) \\
\hline
Upper Left &07/11/1999&51489& B &56.5&0.7&4.60$^{\prime \prime}\times$ 4.22$^{\prime \prime}$&170 \\
Upper Right &11/02/2000&51585& B &26.0&0.4&4.44$^{\prime \prime}\times$ 4.14$^{\prime \prime}$&120\\
Middle Left &04/03/1990&47954& A &56.2&0.3&1.28$^{\prime \prime}\times$ 1.17$^{\prime \prime}$&13820\\
Middle Right &06/05/1984&45826& C &53.0&1.6&15.56$^{\prime \prime}\times$ 12.48$^{\prime \prime}$&390\\
Lower Left &18/03/1991&48333& D &80.7&2.0&19.72$^{\prime \prime}\times$ 19.72$^{\prime \prime}$&1140\\
Lower Right &31/08/1997&50691& C &--- &5.5&14.70$^{\prime \prime}\times$ 14.48$^{\prime \prime}$&100\\
\hline
\end{tabular}
\end{table*}
We studied a field centered on Cygnus X-1, a radio-emitting X-ray binary \citep{Bo65, Gi72} with co-ordinates 19h58m21.676s +35$^{\circ}$12$^{'}$05.78$^{''}$ (J2000). This area of the sky was chosen because the field of Cygnus X-1 is one of the best studied Galactic black hole binary fields observed with the JVLA, Cygnus X-1 being the first system widely accepted to contain a black hole \citep{Mu71, Ta72, Gi86}. This makes the field ideal for a search for variable/transient sources.
There are large data gaps between April 1986 and April 1988, between May 1991 and October 1996, and between February 2001 and June 2003. The `on source' observation time in each epoch was between 2 and 15 minutes. The observing bandwidth on most days was 50 MHz.
Analysis and imaging of the data were carried out with the Astronomical Image Processing System (AIPS)\footnote{http://www.aips.nrao.edu}. Bad data were flagged. The \citet{Pe13} flux density scale was used. For eighteen epochs of observation, the quality of the data was not good and we could not make reasonably good images; we did not use the data for these days. While using data from the CD and D configurations, we applied a lower {\it uv} cut-off to avoid strong extended background radiation from the Galactic plane. We corrected for the primary beam using the {\tt AIPS} task {\tt PBCOR}.
Amplitude calibration was conducted in reference to 3C 286, and phase calibration was based on observations of the nearby source J2007+404 or J2015+371.
We have not performed self-calibration due to the lack of any strong point source in the field. The integration time used for solving amplitude and phase during calibration was 10 sec. The images of different days have variable noise levels. Since the source is somewhat far from the field centre, the noise level near the source position is high; the small on-source time of the individual observations also resulted in relatively high noise levels. The noise level near the source position varied between 0.08 mJy beam$^{-1}$ and 18.7 mJy beam$^{-1}$, with a median value of 1.8 mJy beam$^{-1}$.
\begin{figure*}
\vbox{
\centerline{
\psfig{figure=light-curve-1-vs3.ps,width=8.5cm}
\psfig{figure=light-curve-2-vs3.ps,width=8.5cm}}}
\caption{Light curves of J195754+353513 at 1490 MHz, obtained with the JVLA. The left panel shows the observations between 1983 and 1991, and the right panel those between 1996 and 2003. The triangles show days of non-detection, plotted as $3\sigma$ upper limits.}
\label{light-curve}
\end{figure*}
Two models with Intermediate Frequencies (IFs) of 1.3851 GHz (model 1) and 1.4649 GHz (model 2) were created by combining data of 94 and 18 days respectively. The model with the matching frequency configuration was subtracted from each individual single-epoch observation using the {\tt AIPS} task {\tt UVSUB} to look for variable sources. Apart from Cygnus X-1, we found that only one source, J195754+353513, is present in many of the subtracted images with significantly fluctuating flux. The subtracted flux density for J195754+353513 was up to $\sim$120 mJy. The noise levels close to the location of J195754+353513 in model 1 and model 2 were 0.19 mJy beam$^{-1}$ and 0.46 mJy beam$^{-1}$ respectively.
To verify that the variation of J195754+353513 is not due to systematic effects or errors in amplitude calibration, we measured the flux density of another source in our field, NVSS J195823+345728. The flux density of this source in the NVSS catalog is 52.3 $\pm$ 1.6 mJy \citep{Co98}. During our study, its mean and median flux densities were 50.8 and 51.7 mJy respectively, with a standard deviation of 5.2 mJy, i.e., a deviation of 9.9\% relative to the catalog value. This shows that errors in amplitude calibration play little role in the large variation of J195754+353513, which is discussed in more detail in the following sections.
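The comparison-source statistics above (mean, median, standard deviation, and the percentage deviation relative to the catalog value, here $5.2/52.3 \approx 9.9\%$) can be reproduced with a few lines of NumPy. This is an illustrative sketch, not the code used for the paper; the {\tt calibration\_check} helper and the sample flux values below are our own.

```python
import numpy as np

def calibration_check(fluxes_mjy, catalog_flux_mjy):
    """Summary statistics for repeated flux-density measurements of a
    (presumed steady) comparison source; the percentage deviation is
    quoted relative to the catalog flux density."""
    fluxes = np.asarray(fluxes_mjy, dtype=float)
    stats = {
        "mean": float(np.mean(fluxes)),
        "median": float(np.median(fluxes)),
        "std": float(np.std(fluxes)),
    }
    # Deviation expressed as a percentage of the catalog value
    stats["percent_dev"] = 100.0 * stats["std"] / catalog_flux_mjy
    return stats
```

With the values quoted in the text (standard deviation 5.2 mJy, catalog flux 52.3 mJy), the percentage deviation evaluates to the quoted 9.9\%.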
\section{Results}
\subsection{Radio light curve of J195754+353513}
We have detected a transient radio source at coordinates 19h57m54.3s ($\pm$ 0.7s) +35$^{\circ}$34$^{\prime}$59.6$^{\prime \prime}$ ($\pm$ 0.6$^{\prime \prime})$ (J2000). This position is the average result of fitting an elliptical Gaussian plus a background level around the peak of the source, and the quoted error is the $1\sigma$ uncertainty of the fit. The corresponding Galactic coordinates are $l=71.61^{\circ}$, $b=3.34^{\circ}$. After cross-correlation with NVSS sources, we found that the transient source is NVSS J195754+353513.
Figure \ref{transient-image} shows example images of J195754+353513 from observations in different JVLA configurations and at different times. Images of the source in B configuration on 7th November 1999 and 11th February 2000, with flux densities of 56.5 mJy and 26.0 mJy respectively, are shown in the upper left and upper right panels. The middle left, middle right and lower left panels show examples of detections in the A, C and D configurations of the JVLA. The lower right panel shows an example of a non-detection when the JVLA was in C configuration. The synthesized beam is included at the bottom-left corner of each panel. The observation details and image parameters corresponding to Figure \ref{transient-image} are summarized in Table 2.
The location of NVSS J195754+353513 is marked in all panels of Figure \ref{transient-image} and lies within the emission region of J195754+353513. The NVSS survey was carried out with the D configuration of the JVLA, and the source is unresolved in NVSS, with a flux density of 50.8 $\pm$ 1.6 mJy at 1400 MHz \citep{Co98}. The source was also unresolved in our own D-configuration observations.
The north-south elongation of the source visible in Figure \ref{transient-image} is most probably not intrinsic source structure but an effect of bandwidth smearing due to the large channel width of 50 MHz. The elongation is directed towards the pointing centre, which is a signature of bandwidth smearing, and the source is consistently elongated in the same direction in all images in which it is detected.
Light curves showing the variation of the source's flux density between 1983 and 2003 are presented in Figure \ref{light-curve}. The flux density was measured with the {\tt AIPS} task {\tt JMFIT}, which fits a Gaussian plus a background level. Non-detections are shown as triangles corresponding to $3\sigma$ upper limits. The error bars represent the rms noise level of each image near the location of J195754+353513 and a 10\% uncertainty in the absolute calibration of each data set, added in quadrature. There are only nineteen observations of the field between MJD 46612 (July 1986) and MJD 50326 (August 1996). The source was detected on 161 occasions and not detected on 83 occasions; for 18 days the data were not good enough to make reasonably good images. The source was thus detected on 66.0\% of all days with good data. It showed multiple short bursts and high variability, ranging from less than 0.3 mJy to 201 mJy, with a median flux density of 38.9 mJy. On 25th January 2002 (MJD 52299) J195754+353513 reached its maximum within the observational period reported in this paper, with a flux density of 201 mJy. The source showed several flares whose peaks exceeded 150 mJy: on 23rd July 1998 (186 mJy), 24th March 1999 (175 mJy) and 16th June 2000 (160 mJy).
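The quoted detection fraction follows directly from the detection counts (161 detections out of $161+83=244$ good-data days). A minimal sketch; the {\tt detection\_fraction} helper is our own illustration.

```python
def detection_fraction(n_detected, n_not_detected):
    """Percentage of good-data observing days on which the source
    was detected."""
    return 100.0 * n_detected / (n_detected + n_not_detected)
```

With the counts from the text, {\tt detection\_fraction(161, 83)} reproduces the quoted 66.0\% (to one decimal place).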
\begin{figure*}
\vbox{
\centerline{
\psfig{figure=light-curve-zoom-vs3.ps,width=9cm,angle=0}
\psfig{figure=intraday-exp-fit-vs3.ps,width=9cm,angle=0}}}
\caption{Left (inter-day variation): two flares of J195754+353513 observed at L band in March and April--May 1991, when the antennas were in D configuration. Right (intra-day variation): an example of intra-day variation of J195754+353513 within a scan, observed on 3rd February 2000 with the antennas in B configuration. An exponential fit to the points in the rising phase is also shown.}
\label{light-curve-zoom}
\end{figure*}
Due to gaps in the observations, we can study a complete flare in only a few cases. In Figure \ref{light-curve-zoom} we show two such flares, in March and April--May of 1991, whose peaks reached $\sim$85 mJy and $\sim$75 mJy respectively. The antennas were in D configuration at these times. Though there were not enough data points to determine the lifespans and rise/decay rates of these flares precisely, their approximate lifespans were in the range 10--18 days. Both flares show a sharp rise and a relatively slow decay.
In Figure \ref{image} we show the field of J195754+353513 combining 9 days of observations, all made in 1991 with the antennas in D configuration. The RMS noise of the image is 0.17 mJy beam$^{-1}$ and the resolution is 43.23$^{\prime \prime}\times$41.71$^{\prime \prime}$. J195754+353513 is clearly visible with a flux density of 60 mJy, along with Cygnus X-1 and many other background sources.
We searched for possible periodicity in J195754+353513, as has been found in some other radio transients. The light-curve data were systematically folded with different trial periods to search for any signature of periodicity in the emission, but none was found.
There are no JVLA archival data of the source in any band other than L band, so a study of the spectral index of the source was not possible with JVLA data. There is a source in the 325 MHz WENSS (Westerbork Northern Sky Survey; \citet{Re97}) catalog that is 5.11 arcsec away from the NVSS position of the source. Since the positional uncertainty in WENSS for faint sources is $\sim$5 arcsec, this is most probably the same source. The source has a flat spectral index of $\alpha=-0.19$ (assuming $S_\nu \propto \nu^{\alpha}$) between the NVSS and WENSS measurements \citep{Ma14b}. However, since the source is highly variable and there is a gap between the WENSS (1991--1993) and NVSS (1993--1996) observations, the flat spectral index measured between them is misleading.
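The two-point spectral index between surveys follows from $\alpha = \ln(S_2/S_1)/\ln(\nu_2/\nu_1)$. A short sketch of this calculation; the function name is our own, and since the WENSS flux density is not quoted in the text, the example uses self-consistent illustrative values.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined by S_nu ~ nu**alpha.
    Fluxes and frequencies must be in consistent units."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)
```

For reference, with the NVSS flux of 50.8 mJy at 1400 MHz and $\alpha=-0.19$, the implied 325 MHz flux would be $\approx$67 mJy; this is a derived number, not the cataloged WENSS value.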
\subsection{Intra-day flux density variation}
\begin{table*}
\caption{\bf Intra-day Flux Density variation of J195754+353513}
\centering
\begin{tabular}{c c c c c c}
\hline
Date & 1st Scan & 1st Scan & 2nd Scan & 2nd Scan &Time gap\\
(UT) & Time Span & Flux Density & Time Span & Flux Density& between scans \\
& (min) & (mJy) & (min) & (mJy) & (min)\\
\hline
03/07/84 & 6.5 & 50.9 $\pm$ 8.4 & 6.5 & 216 $\pm$ 7.9 & 52.5\\
04/03/90 & 29.16&194 $\pm$ 0.7 & 29 & 31.4 $\pm$ 0.7&204.5\\
16/03/91 & 19.0 & 68.1 $\pm$ 2.4 & 13.5& 55.4 $\pm$ 2.1&5.5\\
25/03/91 & 15.5& 88.1 $\pm$ 3.6 & 14.5& 37.9 $\pm$ 2.6&5.5\\
16/03/97 & 0.92& 89.8 $\pm$ 1.2 & 0.92& 127 $\pm$ 2.8 &0$^{b}$\\
25/01/99 & 1.33& 111 $\pm$ 5.0 & 1.33& 48.4 $\pm$ 3.7 &0$^{b}$\\
24/10/02 & 2.83& 37.6 $\pm$ 2.6 & 3.17& 76.7 $\pm$ 2.8&57.5\\
\hline
\multicolumn{6}{l}{$^{b}$Note: A zero time gap between the two scans means we are looking at intra-scan variations.}
\end{tabular}
\end{table*}
Based on previous studies of the bursting transients GCRT J1745--3009 \citep{Hy05, Hy07, Ro10} and GCRT J1742--3001 \citep{Hy09}, we searched for flux density variations of J195754+353513 between different scans of the same day. We also imaged each minute separately, when the source was detected, to look for minute-scale variation. Since the flux density of the source was not always adequate for imaging on shorter time scales, we could study the intra-day variation only for a limited amount of time.
Significant scan-to-scan and intra-scan flux density variation was detected whenever it was possible to image separate scans. In the right panel of Figure \ref{light-curve-zoom} we plot an example of intra-day variation of J195754+353513 within a scan, observed on 3rd February 2000 (MJD 51577) with the antennas in B configuration. The source rose from 20 to 180 mJy within 700 seconds and then returned to the $\sim$20 mJy level within $\sim$100 seconds. The rise time constant, from an exponential fit to the points in the rising phase, is $\tau=342.8 \pm 89.0$ seconds.
A power-law fit ($S_\nu \propto t^\beta$) to the points in the rising phase of the flare yields $\beta = 1.5 \pm 0.8$.
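Fits of this kind, an exponential rise constant $\tau$ and a power-law index $\beta$ for the rising phase, can be sketched as simple log-linear least-squares fits with NumPy. This is an illustration on synthetic data, not the fitting code used for the paper.

```python
import numpy as np

def fit_rise_timescales(t_s, flux_mjy):
    """Estimate an exponential rise constant tau (s) from a linear fit
    of ln(S) vs t, and a power-law index beta from a linear fit of
    ln(S) vs ln(t), for a monotonically rising flare."""
    t = np.asarray(t_s, dtype=float)
    s = np.asarray(flux_mjy, dtype=float)
    slope, _ = np.polyfit(t, np.log(s), 1)          # ln S = t/tau + const
    beta, _ = np.polyfit(np.log(t), np.log(s), 1)   # ln S = beta ln t + const
    return 1.0 / slope, beta
```

On data generated from a pure exponential with $\tau = 343$ s, the first fit recovers $\tau$ exactly; on real data the two models can be compared through their residuals.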
In Table 2 we summarize the intra-day flux density variations for different days. The flux density errors quoted in the table are the RMS noise near the location of the source. On five different days we found more than 20\% difference in flux density between two successive scans, with time gaps ranging from 5.5 to 204.5 minutes. On 4th March 1990 (MJD 47954) the flux density decayed from 194 mJy to 31.4 mJy over a separation of just 204.5 minutes, and on 3rd July 1984 (MJD 45884) it rose from 50.9 mJy to 216 mJy over a gap of 52.5 minutes between successive scans. On two days we found significant variation within a scan.
The inter- and intra-scan variations, often by more than a factor of two in flux density, suggest that the radio emission from J195754+353513 consists of many small bursts.
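The scan-to-scan changes in Table 2 can be quantified as a percent change relative to the first scan; e.g., the 3rd July 1984 pair (50.9 to 216 mJy) corresponds to a change of over 300\%. A minimal sketch; the helper is our own.

```python
def scan_change_percent(s1_mjy, s2_mjy):
    """Percent change in flux density between two successive scans,
    relative to the first scan."""
    return 100.0 * abs(s2_mjy - s1_mjy) / s1_mjy
```

Applying this to each row of Table 2 identifies the days with more than 20\% inter-scan variation discussed above.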
\subsection{Circularly polarized emission}
We looked for circularly polarized emission from J195754+353513, as was found for GCRT J1745--3009 \citep{Ro10}. The measured Stokes $V$ flux density on 19th March 1991 (MJD 48334) was 12.6 mJy with an RMS noise of 1.19 mJy beam$^{-1}$, and on 2nd May 1991 (MJD 48378) it was 11.5 mJy with an RMS noise of 1.36 mJy beam$^{-1}$.
The corresponding values of $V/I$ on 19th March and 2nd May 1991 were $0.25$ and $0.24$ respectively, with $\sim$3\% uncertainty. A relatively weak Stokes $V$ detection was made on 1st December 1999 (MJD 51513), with a flux density of 8.3 mJy and RMS noise of 0.6 mJy beam$^{-1}$, corresponding to $V/I = 0.15$.
Stokes $V$ was not detected above $5\sigma$ on any other day.
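The fractional circular polarization and a simple propagated uncertainty can be computed as follows. This is an illustrative sketch: the helper name is our own, and since the Stokes $I$ values are not explicitly quoted in the text, the example infers $I$ from the quoted $V/I$ ratios.

```python
import math

def circ_pol_fraction(v, v_rms, i, i_rms):
    """Fractional circular polarization V/I with first-order
    (quadrature) error propagation."""
    frac = v / i
    err = abs(frac) * math.sqrt((v_rms / v) ** 2 + (i_rms / i) ** 2)
    return frac, err
```

With $V = 12.6$ mJy and the quoted $V/I = 0.25$, the implied Stokes $I$ is $\sim$50 mJy.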
\subsection{Optical/IR and X-ray counterpart of the source}
No known source near the position of J195754+353513 is reported to emit X-rays, so it was not possible to correlate the radio flares with X-ray activity through follow-up observations at the times of the flares.
We searched for an optical/infrared counterpart of J195754+353513. There are two sources within 10 arcsec of the location of J195754+353513 in the Two Micron All-Sky Survey (2MASS) point source catalog \citep{Cu03, Sk06}. The nearest, J19575420+3535152, lies 1.82 arcsec away and may be the near-infrared counterpart of J195754+353513. The other 2MASS source within 10 arcsec, J19575378+3535221, lies 9.37 arcsec away. The brightnesses of J19575420+3535152 in the {\it J}, {\it H} and {\it K} bands are 15.45, 14.90 and 14.55 mag respectively.
\section{Discussion}
\subsection{The nature of the source J195754+353513}
We have searched the environment of J195754+353513 for associated discrete sources or extended structures. The source is not close to any known supernova remnant, and we found no trace of extended structure close to the source either.
We have found one 2MASS source within the $1\sigma$ positional error of J195754+353513, and it is possible that the radio emission of J195754+353513 arises from activity in a foreground flaring star. Many flaring stars are known to exhibit activity at both radio and X-ray wavelengths, such as the giant outburst from a young stellar object reported by \citet{Bo03}, but the detection of radio flares with no apparent associated X-ray emission is not uncommon. For example, radio flares from UV Ceti-type stars on time scales of seconds to minutes were detected at low frequencies by \citet{Sp74} and \citet{Ka77} for YZ Canis Minoris, where no corresponding X-ray emission was found. At higher frequencies (4.9 and 8.4 GHz), \citet{Os05} reported short-duration radio flares from the dMe flare star EV Lacertae that were not clearly related to the star's X-ray flares. The radio flares reported in these stars range from a few millijansky to a few tens of millijansky, with rise and decay times of $\sim1$ min and $\sim1$ hr respectively. We have also found small flares from J195754+353513 of $\sim700$ s duration, as featured in Figure \ref{light-curve-zoom}.
\citet{Ri03} presented results from five years of continuous monitoring of radio flares of Algol-type and RS CVn systems; many of the flares reached hundreds of millijansky, with durations of a few days to a month. Numerous short bursts within each flare were also visible. Strong periodic activity is also found in these systems; the shortest period, 48.9$\pm$1.7 days, was found in $\beta$ Per. Though we could not detect any periodicity in the emission from J195754+353513, the source could be an RS CVn-type system given the similar time scales of the flaring episodes.
Pulsars can produce highly circularly polarized emission in single pulses (e.g. \citet{Ka91}), and some pulsars, like the Crab pulsar, show non-periodic flaring events (e.g. \citet{Bu12}). No known radio pulsar is reported near the location of J195754+353513, and we have seen no sign of a nearby supernova remnant or nebula. Though the majority of pulsars are associated with supernova remnants \citep{Fr94}, some believe that these associations are coincidental and actually false \citep{Ga95a, Ga95b, Ga95c}. One should look for possible pulsar emission at the location of J195754+353513.
Variations in the light curve of J195754+353513 could also be due to some kind of extreme scattering of the incident radiation in the interstellar medium of our Galaxy (e.g. \citet{Ri90, De02}). Variations on time scales of up to a few days can occur due to interstellar scintillation at GHz frequencies \citep{Pe00}. Since J195754+353513 showed variation on many time scales, from minutes to months, even if interstellar scattering plays some role, it is unlikely that all of these variations are due to scattering.
For most black-hole X-ray binaries, a universal correlation between radio and X-ray luminosities has been reported \citep{Co03, Ga03}. Assuming the relation given in \citet{Ga03}, the $\sim 200$ mJy peak should have a corresponding X-ray flux of $\sim 0.25$ Crab in the hard state. Such a strong X-ray flare would not go unnoticed by all-sky X-ray monitors, given that the source flares quite regularly. Though some X-ray binaries are found not to follow this universal correlation, all such examples have lower radio luminosity than the universal value; none have significantly lower X-ray luminosity. So, it is unlikely that the system is a black-hole X-ray binary.
On the other hand, if we consider a two-component accretion flow (TCAF, \citet{Ch95}), in which the Keplerian disk is flanked by a transonic flow in hard states when jets are stronger, it is well known that the Keplerian disk does not have to extend to distances close to the black hole, and the X-rays could be faint. We postulate that a disk with a very low accretion rate could be located at a large distance, with its inner edge at $\sim 20000$ Schwarzschild radii or more, so that the source is presently in a quiescent stage as far as X-rays are concerned. If this is the case, occasional outbursts every few to few tens of years are possible, and one could look for them. It is also possible that the disk is shrouded by copious winds from the companion, much like SS 433 \citep{Ma84, Cl84}, where the disk X-ray emission is completely blocked. In fact, the circularly polarized emission is an indication of aligned magnetic fields. If there is an outflow with a small inclination angle to the line of sight, its radio emission can produce a polarization fraction similar to those reported in Section 3.3. However, in that case a profusely mass-losing star would be expected in the vicinity. As discussed in Section 3.4, it is possible that 2MASS J19575420+3535152 is the IR counterpart. For a compact binary, the rise and fade time scales of each micro-flare would be expected to be of much shorter duration.
\begin{figure}
\vbox{
\centerline{
\psfig{figure=TRANSIENT-VS2.eps,height=9cm,width=9cm}}}
\caption{1490 MHz JVLA image of the field of J195754+353513 combining 9 days of observations, all conducted in 1991 with the antennas in D configuration. The RMS noise of the image is 0.17 mJy beam$^{-1}$. Cygnus X-1 and J195754+353513 are indicated by arrows. The synthesized beam of the image is 43.24$^{\prime \prime}$ $\times$ 41.71$^{\prime \prime}$.}
\label{image}
\end{figure}
\subsection{Comparison with Galactic Center Transients}
There are many similarities between the temporal evolution of J195754+353513 and those of GCRT J1742--3001 and a transient radio source detected close to the Galactic Centre region (GCT). GCRT J1742--3001 was detected in a transient search program near the Galactic centre between March 2006 and May 2007 at 235 MHz \citep{Hy09}, while the GCT was detected in monitoring observations of Sgr A* from December 1990 to September 1991 at radio wavelengths from 1.3 to 22 cm \citep{Zh92}. There are also some similarities with another transient close to the Galactic centre, GCRT J1745--3009 \citep{Hy05, Hy07, Ro10}, detected at 330 MHz. The flares of both GCRT J1742--3001 and the GCT took about a month to rise and about three months to decay. For J195754+353513 we could not capture the full rising and fading profiles of many flares due to the inadequate sampling rate and data gaps; typical rise times were $\sim$5 days and typical fading times $\sim$10 days. Though the active time of individual flares of J195754+353513 was shorter than for GCRT J1742--3001 and the GCT, J195754+353513 exhibited a higher frequency of such flares than those two sources. GCRT J1742--3001 also showed a few small bursts before the main flare, and the GCT showed a significantly intense secondary burst in the 18--22 cm observations about six months after the primary burst. Though GCRT J1745--3009 was detected only three times, it showed intra-day variability on time scales of hundreds of seconds, like J195754+353513.
The peak flux density of GCRT J1742--3001 was $\sim$100 mJy at 235 MHz, and the peak flux density of the GCT was $\sim$1 Jy in the wavelength range 18--22 cm ($\sim$1.5 GHz). The peak flux densities of individual flares of J195754+353513 varied between 30 and 200 mJy. Since GCRT J1742--3001 had a very steep spectrum ($\alpha \lesssim -2$, $S_\nu \propto \nu^\alpha$), its peak flux density at 1400 MHz should be fainter than that of J195754+353513. Assuming $\alpha=-2$, we calculate the peak flux density of GCRT J1742--3001 at 1400 MHz to be $\lesssim$21 mJy. The peak flux densities for the three detections of GCRT J1745--3009 were $\sim1$ Jy, $\sim0.5$ Jy and $\sim0.06$ Jy. The peak intensity may indicate the strength of the magnetic fields.
J195754+353513 showed variable circular polarization, as did GCRT J1745--3009 \citep{Ro10}. Circular polarization was not reported for GCRT J1742--3001 or the GCT.
No X-ray counterparts were confirmed for the GCT, GCRT J1742--3001 or GCRT J1745--3009, and J195754+353513 also has no known X-ray counterpart. There is a suggestion that the GCT may be associated with an X-ray binary \citep{Zh92}, but this is yet to be established.
\section{Conclusions}
We report the transient nature of the radio source J195754+353513, located approximately 24 arcminutes from the Galactic micro-quasar Cygnus X-1. The source showed high variability on a range of time scales and evidence of variable circularly polarized emission. The system has no known X-ray counterpart; 2MASS J19575420+3535152 may be its NIR counterpart. The nature of the source is still unknown. It is not unlikely that the system is a black-hole X-ray binary with the disk obscured by a significant amount of matter from the companion; alternatively, the emission may come from a foreground flare star. Clearly, more multi-frequency monitoring is required to reach a definite conclusion.
\section*{Acknowledgments}
We acknowledge the anonymous referee, whose detailed and productive suggestions helped to improve the manuscript significantly. DP and SP acknowledge the support of a MOES fund to carry out this project. We have used data from the JVLA, which is operated by the National Radio Astronomy Observatory (NRAO). NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The flexoelectric (FxE) effect, where polarization is induced by a
strain gradient, is universal in all insulators. As devices shrink to
the micro and nano scale, large strain gradients can occur, and
therefore the FxE effect can play a significant role in the properties
of such devices, influencing the so-called dielectric dead
layer\cite{Majdoub2009}, domain walls and domain
structure\cite{Lee2011,Yudin2012,Borisevich2012}, relative
permittivity and Curie temperature\cite{Catalan2004,Catalan2005},
critical thickness of films to exhibit switchable
polarization\cite{Zhou2012}, and spontaneous polarization in the
vicinity of twin and antiphase boundaries\cite{Morozovska2012}. Also,
the FxE effect can be exploited for novel device design paradigms,
such as piezoelectric ``meta-materials'' constructed from
nonpiezoelectric constituents\cite{Zhu2006,Bhaskar2016}, or mechanical
switching of ferroelectric polarization \cite{Lu2012,Gruverman2003}.
One of the crucial limitations to understanding and exploiting the FxE
effect is the lack of a clear experimental and theoretical consensus
on the size and sign of the FxE coefficients, even in commonly studied
materials such at SrTiO$_3$ and BaTiO$_3$\cite{Zubko2013,Yudin2013}. A
key element to forming this understanding is the development of an
efficient first-principles methodology to calculate all of the
components of the bulk FxE tensor. Recently, Stengel,
\cite{Stengel2013} and Hong and Vanderbilt\cite{Hong2011,Hong2013}
(HV), developed the formalism for calculating the full bulk FxE tensor
from first principles.
\footnote{The FxE response of any finite crystal also has an important
contribution from the surface, as discussed in
Refs.~\onlinecite{Stengel2013natcom} and
\onlinecite{StengelChapter}, and calculated using density-functional
theory for SrTiO$_3$ in \onlinecite{Stengel2014}. In this work, we
will exclusively focus on bulk contribution, which poses a more
significant challenge for a computational treatment.}
Each element of the FxE tensor has a ``clamped-ion'' (CI) contribution,
arising from the effect of the strain gradient on the valence
electrons in the crystal, and a ``lattice-mediated'' (LM) contribution,
arising from internal relaxations induced by the applied strain and strain
gradient \cite{Stengel2013,Hong2013}.
In Refs.~\onlinecite{Hong2011} and \onlinecite{Hong2013}, HV described
an implementation for calculating the bulk CI and LM longitudinal FxE
coefficients (i.e., the coefficients relating the induced polarization
in direction $\alpha$ to a gradient of uniaxial strain
$\varepsilon_{\alpha\alpha}$, also in direction $\alpha$). Their
methodology involved using density functional theory (DFT) to
calculate the real-space response of the charge density to atomic
displacements in a simple $N\times 1\times 1$ bulk supercell containing
$N$ repetitions of the primitive bulk cell.
In Ref.~\onlinecite{Stengel2014}, Stengel developed a strategy
that allowed a calculation of the full FxE response for cubic SrTiO$_3$
based in part on the charge-density response to a long-wavelength
acoustic phonon, and in part on large slab supercell calculations
(repeated slabs separated by vacuum). The first part of this methodology
allowed the LM contributions to all bulk FxE tensor elements,
as well as the CI contributions to the longitudinal coefficients,
to be determined from linear-response calculation on a single unit
cell using density-functional perturbation theory (DFPT)
\cite{Baroni2001}. However, the ``transverse'' and ``shear'' CI
contributions \cite{Hong2013,Stengel2014,Stengel2013natcom} had to
be calculated indirectly by relating them to the open-circuit
electric field appearing across the slab when a long wavelength
acoustic phonon was applied to the slab supercell as a whole.
As a result, this implementation required DFPT calculations to be
performed on large slab supercells.
The implementation described in Ref.~\onlinecite{Stengel2014} thus
provides a methodology for calculating the full FxE
tensor for a given material. However, the reliance on
computationally intensive slab supercell calculations for the
transverse and shear CI coefficients represents a significant
limitation to efficient calculation, especially in complex
materials. Therefore, it is highly desirable to develop an
approach that allows the full bulk FxE tensor, including its
longitudinal, transverse, and shear components, to be obtained
from DFPT calculations on single unit cells.
The essential problem is that single-unit-cell DFPT calculations
that determine only the charge-density response to a long-wavelength
phonon, as in Ref.~\onlinecite{Stengel2014}, are incapable of
revealing the transverse and shear CI contributions, since the
induced charge is proportional to the \textit{divergence} of the
polarization, which is absent for transverse phonons. To go
further, it is necessary to compute the induced \textit{polarization
itself}. Unfortunately, the well-known Berry-phase formulation
\cite{KingSmith1993,Resta1994} of the electric polarization is
useless here, since it provides only the total polarization, which
averages to zero over a phonon wavelength. Instead, we need access
to the spatially resolved polarization on the scale of the
wavelength. The only clear path to obtaining this local
polarization is via its relation to the adiabatic current
density \cite{Hong2013,Stengel2013,StengelUNPUB}. Thus, the desired
methodology is one that computes the spatially resolved
\textit{ current density} induced by a strain gradient perturbation
\cite{Hong2013,Stengel2013,StengelUNPUB} in the context of
long-wavelength longitudinal \textit{and transverse} phonons.
The microscopic current density is, of course, just
proportional to the quantum-mechanical probability current, as
discussed in any standard textbook \cite{Sakuri}. However, this
standard formula assumes a local Hamiltonian of the form
$H=p^2/2m+V$ with a local potential $V$. Thus, it becomes problematic if
the Hamiltonian of interest contains \emph{nonlocal} potentials,
as the probability current no longer satisfies the continuity
equation\cite{Li2008}. This issue is very relevant in the context of
DFT, since most popular implementations make use of a plane-wave
basis set with a pseudopotential approximation to reduce the size
of the basis set by avoiding an explicit description of the core
electrons. Virtually all modern pseudopotential implementations
contain nonlocal potentials in the form of projectors that operate
on the wavefunctions
\cite{Vanderbilt1990,Hamann1979,Kleinman1982,Blochl1994}.
Therefore, the standard formula for the current density is not a
suitable starting point for the current-response theory that we have
in mind (we expand on these considerations in Sec.~\ref{curden}).
The definition and calculation of the microscopic current density in a
nonlocal pseudopotential context is a rather general problem that has
received considerable previous attention
\cite{Umari2001,Li2008,Vignale1991,ICL2001,Pickard2003,Mauri1996,Mauri1996_nmr}
in view of its application to the calculation of magnetic
susceptibility
\cite{Vignale1991,ICL2001,Pickard2003,Mauri1996,Mauri1996_nmr},
nuclear magnetic resonance chemical shifts \cite{Pickard2001},
electron paramagnetic resonance $g$ tensors\cite{Pickard2002}, and so
forth. Unfortunately a general, systematic solution that is
appropriate to our scope has not yet emerged. To see why this is
challenging, it is important to note that the continuity equation is
only one of the criteria that must be satisfied by a physically
meaningful definition of the current density. Two other criteria are
important. First, the formula must also reduce to the textbook
expression in regions of space that lie outside the range of the
nonlocal operators (pseudopotentials are typically confined to small
spheres surrounding the atoms). Second, it must reduce to the
well-known expressions for the macroscopic current in the
long-wavelength limit. The approaches that have been proposed so far
have either been specialized to a certain physical property (e.g.,
dielectric~\cite{Umari2001} or diamagnetic~\cite{Pickard2003}
response), or limited in scope to a subset of the above criteria. For
example, Li {\em et al.}~\cite{Li2008} proposed a strategy that
guarantees charge continuity by construction but does not satisfy the
two additional criteria, as we shall see in Sec.~\ref{curden}.
In addition to the technical challenges related to nonlocal
pseudopotentials, there is another complication associated with
the calculation of the flexoelectric coefficients using the current
density in bulk. Namely, the bulk nonlongitudinal responses contain
a contribution coming from the gradients of the local rotations in the
crystal. This ``circulating rotation-gradient'' (CRG) contribution,
derived in Ref.~\onlinecite{StengelUNPUB} (where it is referred
to as a ``dynamic'' or ``gauge-field'' term),
must be treated carefully when comparing our calculations
with previous results. We will discuss this point in Sec.~\ref{diamag}.
In this work we develop a first-principles methodology based on DFT to
calculate the full bulk CI FxE tensor from a single unit
cell. At the heart of our technique lies the introduction of a
physically sound microscopic current-density operator in the presence of nonlocal
pseudopotentials
that fulfills all criteria that we
stated in the above paragraphs: (i) it satisfies the continuity
equation; (ii) the contribution of the nonlocal pseudopotentials is
correctly confined to the atomic spheres; and (iii) it reduces to the
macroscopic velocity operator in the long-wavelength limit. We will
discuss our approach for calculating the current density in the
context of earlier works, and how it applies to the problem of
calculating bulk FxE coefficients. Finally, we will demonstrate
that the results for the CI FxE coefficients from our current-density
implementation are in excellent agreement with the previous
charge-density-based DFT implementations described above
\cite{Hong2013,Stengel2014}, confirming that it is an accurate and
efficient method for calculating the FxE response of materials.
The paper is organized as follows. In Sec.~\ref{Approach} we outline
the general approach to determining FxE coefficients; in
Sec.~\ref{Form} we give the formalism used in our calculations of the
current density; in Sec.~\ref{Imp} we provide details of the
implementation of the formalism; Sec.~\ref{Res} presents benchmark
tests for the simple case of isolated noble gas atoms, and results for
several technologically important, cubic oxide compounds; in
Sec.~\ref{Disc}, we discuss some technical issues that are associated
with the current density in the presence of nonlocal pseudopotentials;
and we conclude the paper in Sec.~\ref{Con}.
\section{Approach\label{Approach}}
The goal of this work is to calculate the bulk CI
flexoelectric tensor elements
\begin{equation}
\label{muII}
\mu^{\text{I}}_{\alpha\beta,\omega\nu}=\frac{d
P_\alpha}{d\eta_{\beta,\omega\nu}},
\end{equation}
where $P_\alpha$ is the polarization in direction $\alpha$, and
\begin{equation}
\eta_{\beta,\omega\nu}=\frac{\partial^2u_\beta}{\partial
r_\omega\partial r_\nu}
\end{equation}
is the strain gradient tensor, where $u_\beta$ is the $\beta$
component of the displacement field. The superscript ``I'' indicates
that the tensor elements are defined with respect to the unsymmetrized
displacements \cite{Nye1985}; superscripts ``II'' will be used to
indicate tensor elements defined with respect to symmetrized strain.
Calculating the polarization in Eq.~(\ref{muII}) is nontrivial from a
quantum-mechanical standpoint, as it does not correspond to the
expectation value of a well-defined operator. As mentioned above, the
Berry-phase method \cite{KingSmith1993,Resta1994} can be used to
obtain the formal macroscopic polarization averaged over the
cell. However, we require access to the local polarization
\emph{density} $P_\alpha(\textbf{r})$. Although the static
microscopic polarization density is not well defined in a quantum
mechanical context, at the linear-response level the \emph{induced}
polarization $P_{\alpha,\lambda}(\textbf{r})=\partial
P_\alpha(\textbf{r})/\partial\lambda$ resulting from a small change in
parameter $\lambda$ can be equated to the local current flow via
$\partial P_\alpha(\textbf{r})/\partial\lambda=
\partial J_\alpha(\textbf{r})/\partial\dot{\lambda}$, where
$\dot{\lambda}$ is the rate of change of the adiabatic parameter,
$\lambda$. Following the approach of Ref.~\onlinecite{Stengel2013},
we now consider an adiabatic displacement of sublattice $\kappa$
(i.e., a given atom in the unit cell along with all of its
periodic images) of a
crystal in direction $\beta$ as given by
\begin{equation}
\label{phon}
u_{\kappa\beta}(l,t)=\lambda_{\kappa\beta\textbf{q}}(t)e^{i\textbf{q}\cdot\textbf{R}_{l\kappa}},
\end{equation}
where $l$ is the cell index. In this case, the local polarization
density $P_{\alpha,\kappa\beta\textbf{q}}(\textbf{r})$ induced in
direction $\alpha$ by mode $\kappa\beta$ of wavevector
$\textbf{q}$ is
\begin{equation}
\label{Jrt-dv}
P_{\alpha,\kappa\beta\textbf{q}}(\textbf{r})=
\frac{\partial J_{\alpha}(\textbf{r})}{\partial\dot{\lambda}_{\kappa\beta\textbf{q}}} .
\end{equation}
Using the fact that the linearly induced current will be modulated by
a phase with the same wavevector as the perturbation in
Eq.~(\ref{phon}), we can define
\begin{equation}
\label{Jrt}
P^{\textbf{q}}_{\alpha,\kappa\beta}(\textbf{r})=
P_{\alpha,\kappa\beta\textbf{q}}(\textbf{r})
e^{-i\textbf{q}\cdot\textbf{r}},
\end{equation}
which is therefore a lattice-periodic function. This quantity, the
\emph{cell-periodic part of the first-order induced polarization
density}, will play a central role in our considerations. It is
also convenient to define
\begin{equation}
\label{Pbar}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}} \equiv
\frac{1}{\Omega}\int_{\text{cell}}
P_{\alpha,\kappa\beta}^{\textbf{q}}(\textbf{r}) d^3r,
\end{equation}
where $\Omega$ is the cell volume, as the cell average of this
response. In Ref.~\onlinecite{Stengel2013} it was shown that the
CI flexoelectric tensor elements are given by the second
wavevector derivatives of
$\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}$ via
\begin{equation}
\label{muI}
\begin{split}
\mu^{\text{I}}_{\alpha\beta,\omega\nu}&= -\frac{1}{2}\sum_\kappa\frac{\partial^2\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}}{\partial
q_\omega\partial q_\nu}\Bigg\vert_{\textbf{q}=0}.
\end{split}
\end{equation}
This formulation suggests that it may be possible to compute the
polarization responses
$\overline{P}^{\textbf{q}}_{\alpha,\kappa\beta}$ entirely from a
single-unit-cell calculation, similar to the way that phonon responses
are computed in DFPT. In fact, this is the case. The formalism
necessary to compute these responses at the DFT level will be
presented in the next sections, giving access to an efficient and
robust means to compute the flexoelectric coefficients through
Eq.~(\ref{muI}).
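As an illustrative aside (not part of the formal development), the second wavevector derivative in Eq.~(\ref{muI}) can be approximated by central finite differences over a small stencil of \textbf{q} points. The Python sketch below applies this to a hypothetical one-dimensional model of the cell-averaged response, with an invented quadratic coefficient standing in for the flexoelectric contribution:

```python
import numpy as np

# Hypothetical 1D model of the cell-averaged polarization response:
# P(q) = P0 + i*Z*q - mu*q**2; the coefficients are invented for
# illustration (Z plays the role of a Born-charge-like linear term).
MU_MODEL = 3.5

def p_bar(q, mu=MU_MODEL):
    return 1.0 + 2.0j * q - mu * q**2

def mu_from_stencil(p, h=1e-3):
    # Eq. (muI) for a single sublattice in 1D:
    # mu = -(1/2) d^2 P/dq^2 at q = 0, by a central finite difference.
    d2p = (p(h) - 2.0 * p(0.0) + p(-h)) / h**2
    return -0.5 * d2p

mu = mu_from_stencil(p_bar)   # recovers MU_MODEL up to stencil error
```

In an actual calculation the stencil spacing must of course be converged; the toy model merely shows that the prefactor $-1/2$ in Eq.~(\ref{muI}) recovers the quadratic coefficient of the small-\textbf{q} expansion.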
\section{Formalism\label{Form}}
Given a time-dependent Hamiltonian with a single-particle solution
$\Psi(t)$, the current density at a point \textbf{r} in Cartesian
direction $\alpha$ can be written
\begin{equation}
J_\alpha(\textbf{r})=
\langle\Psi(t)\vert\hat{\mathcal{J}}_\alpha(\textbf{r})\vert\Psi(t)\rangle
\label{Js}
\end{equation}
where $\hat{\mathcal{J}}_\alpha(\textbf{r})$ is the current-density
operator (a caret symbol over a quantity will indicate an
operator). We will first address how to treat the time-dependent
wavefunctions (Sec.~\ref{adpert}), and then discuss the form of the
current-density operator (Sec.~\ref{curden}).
\subsection{Adiabatic density-functional perturbation theory\label{adpert}}
\subsubsection{Adiabatic response}
We write the time-dependent Schr\"odinger equation as
\begin{equation}
\label{seq}
i\frac{\partial}{\partial t}\vert\Psi\rangle=\hat{H}(\lambda(t))\vert\Psi\rangle,
\end{equation}
where $\hat{H}(\lambda(t))$ is the Hamiltonian, and $\lambda$
parametrizes the time-dependent atomic motion. Since we are
interested in the current density resulting from adiabatic
displacements, we expand the wavefunction $\vert\Psi(t)\rangle$ to first order in
the velocity, $\dot{\lambda}$: \cite{Messiah1981,Thouless1983,Niu1984}
\begin{equation}
\label{psiad}
\vert\Psi(t)\rangle \simeq e^{i\gamma(t)}e^{i\phi(\lambda(t))}[\vert\psi(\lambda(t))\rangle+\dot{\lambda}(t)\vert\delta\psi(\lambda(t))\rangle],
\end{equation}
where $\vert\psi(\lambda)\rangle$ is the lowest-energy eigenfunction
of the time-independent Hamiltonian at a given $\lambda$, and
$\vert\delta\psi(\lambda)\rangle$ is the first-order adiabatic
wavefunction [defined by Eq.~(\ref{psiad})]; $\gamma(t)=-\int_0^t
E(\lambda(t^\prime))d t^\prime$ is the dynamic phase, with
$E(\lambda)$ being the eigenenergy of $\vert\psi(\lambda)\rangle$;
$\phi(\lambda(t))=\int_0^t \langle\psi(\lambda(t^\prime))\vert
i\partial_t \psi(\lambda(t^\prime))\rangle d t^\prime$ is the
geometric Berry phase \cite{Berry1984} (we have used the shorthand
$\partial_t=\partial/\partial t$). We work in the parallel-transport
gauge, $\langle\psi(\lambda)\vert i\partial_\lambda
\psi(\lambda)\rangle=0$, so the Berry phase contribution vanishes.
Equation (\ref{psiad}) is written assuming a single occupied band, but
in the multiband case we shall let the evolution be guided by
multiband parallel transport instead. In this case, the first-order
wavefunctions, $\delta\psi_n$, given by adiabatic perturbation
theory\cite{Messiah1981,Thouless1983,Niu1984}, are
\begin{equation}
\label{deltapsi}
\vert\delta\psi_n\rangle=-i\sum_{m}^\text{unocc}\vert\psi_m\rangle\frac{\langle\psi_m\vert\partial_\lambda\psi_n\rangle}{\epsilon_n-\epsilon_m},
\end{equation}
where $\epsilon_n$ is the eigenvalue of the $n$th single particle
wavefunction, and $\partial_\lambda$ is shorthand for
$\partial/\partial\lambda$. The wavefunction
$\vert\partial_\lambda\psi_n\rangle$ is the first-order wavefunction
resulting from the \emph{static} perturbation
\begin{equation}
\label{delpsi}
\vert\partial_\lambda\psi_n\rangle=\sum_m^\text{unocc}\vert\psi_m\rangle\frac{\langle\psi_m\vert\partial_\lambda\hat{H}\vert\psi_n\rangle}{\epsilon_n-\epsilon_m},
\end{equation}
which is the quantity calculated in conventional DFPT implementations
\cite{Baroni2001,Gonze1997}.
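To make the structure of Eqs.~(\ref{deltapsi}) and (\ref{delpsi}) concrete, the sum-over-states first-order wavefunction can be checked against a finite-difference derivative of the eigenvector in the parallel-transport gauge. The following sketch uses an invented four-level model Hamiltonian (a stand-in, not a Kohn-Sham problem):

```python
import numpy as np

# Toy check of the sum-over-states formula, Eq. (delpsi), for a
# 4-level model H(lam) = H0 + lam*V; all matrices are invented.
H0 = np.diag([0.0, 1.0, 2.5, 4.0])           # well-separated levels
rng = np.random.default_rng(7)
V = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V = 0.5 * (V + V.conj().T)                   # Hermitian perturbation

eps, U = np.linalg.eigh(H0)
n = 0                                        # perturb the lowest state

# |d_lam psi_n> = sum_{m != n} |psi_m><psi_m|V|psi_n>/(eps_n - eps_m)
dpsi = sum(U[:, m] * (U[:, m].conj() @ V @ U[:, n]) / (eps[n] - eps[m])
           for m in range(4) if m != n)

# Finite-difference derivative of the eigenvector, with the phase and
# gauge fixed so that <psi_n|d psi_n> = 0 (parallel transport)
h = 1e-6
_, Uh = np.linalg.eigh(H0 + h * V)
psi_h = Uh[:, n]
psi_h = psi_h * np.exp(-1j * np.angle(U[:, n].conj() @ psi_h))
fd = (psi_h - U[:, n]) / h
fd = fd - U[:, n] * (U[:, n].conj() @ fd)

err = np.linalg.norm(fd - dpsi)              # small, O(h)
```

The agreement (to the finite-difference error) illustrates that the static response of Eq.~(\ref{delpsi}) is exactly the parallel-transport derivative of the eigenvector.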
\subsubsection{Density functional theory}
We will implement the calculations of the current density in the
context of plane-wave pseudopotential DFT, so the single-particle
wavefunctions we will use in Eq.~(\ref{deltapsi}) are solutions to the
Kohn-Sham equation for a given band $n$ and wavevector \textbf{k},
\begin{equation}
\label{KSeq}
\hat{H}_{\text{KS}}\vert\psi_{n\textbf{k}}\rangle=\epsilon_{n\textbf{k}}\vert\psi_{n\textbf{k}}\rangle,
\end{equation}
where the Kohn-Sham Hamiltonian is
\begin{equation}
\label{HKS}
\hat{H}_{\text{KS}}=\hat{T}_{\text{s}}+\hat{V}_{\text{H}}+\hat{V}_{\text{XC}}+\hat{V}_{\text{ext}}^{\text{loc}}+\hat{V}_{\text{ext}}^{\text{nl}}.
\end{equation}
Here $\hat{T}_{\text{s}}$ is the single-particle kinetic energy,
$\hat{V}_{\text{H}}$ is the Hartree potential, $\hat{V}_{\text{XC}}$
is the exchange correlation potential, and the external potential
contains both a local and nonlocal part (last two terms). We will
consider norm-conserving, separable, Kleinmann-Bylander type
\cite{Kleinman1982} pseudopotentials. The form of the nonlocal
potential (henceforth referred to as $\hat{V}^{\text{nl}}$) is given
by Eq.~(\ref{VNL}). We will drop the ``KS'' subscript from here on.
Note that, although we focus on norm-conserving pseudopotentials
in this work, the issues pertaining to nonlocal potentials that will
be discussed in Sec.~\ref{curden} would apply to ultrasoft
\cite{Vanderbilt1990} and projector augmented wave (PAW) \cite{Blochl1994}
potentials as well.
\subsubsection{Polarization response}
Using the expansion in Eq.~(\ref{psiad}), the first-order one-particle
density matrix is
\begin{equation}
\label{denmat}
\delta\hat{\rho}=\dot{\lambda}\frac{2}{N_k}\sum_{n\textbf{k}}\left(\vert\delta\psi_{n\textbf{k}}\rangle\langle\psi_{n\textbf{k}}\vert+\vert\psi_{n\textbf{k}}\rangle\langle\delta\psi_{n\textbf{k}}\vert\right),
\end{equation}
where the factors $(2/N_k)\sum_{n\textbf{k}}$ take care of the spin
degeneracy, sum over occupied Bloch bands, and average over the
Brillouin zone. A monochromatic perturbation such as that of
Eq.~(\ref{phon}) always comes together with its Hermitian conjugate,
coupling states at \textbf{k} with those at $\textbf{k}\pm\textbf{q}$,
so that each perturbed wavefunction has two components, which we refer
to as $\delta\psi_{n,\textbf{k}+\textbf{q}}$ and
$\delta\psi_{n,\textbf{k}-\textbf{q}}$, respectively. We wish to
select the cross-gap response at $+\textbf{q}$, so we project onto
this component of the density matrix to obtain \cite{Adler1962}
\begin{equation}
\label{denmat2}
\delta\hat{\rho}_\textbf{q}=\dot{\lambda}\frac{2}{N_k}\sum_{n\textbf{k}}\left(
\vert\delta\psi_{n,\textbf{k}+\textbf{q}}\rangle\langle\psi_{n\textbf{k}}\vert
+
\vert\psi_{n\textbf{k}}\rangle\langle\delta\psi_{n,\textbf{k}-\textbf{q}}\vert
\right).
\end{equation}
Specializing now to the perturbation of Eq.~(\ref{phon}), the
corresponding polarization response is
\begin{equation}
\label{Plambda1}
\begin{split}
P_{\alpha,\kappa\beta\textbf{q}}(\textbf{r})
&=\frac{2}{N_k}\sum_{n\textbf{k}}\Big[\langle\psi_{n\textbf{k}}\vert\hat{\mathcal{J}}_\alpha(\textbf{r})\vert\delta\psi_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle
\\
&\phantom{=\frac{2}{N_k}\sum_{n\textbf{k}}\Big[}+\langle\delta\psi_{n\textbf{k},-\textbf{q}}^{\kappa\beta}\vert \hat{\mathcal{J}}_\alpha(\textbf{r})\vert\psi_{n\textbf{k}}\rangle\Big].
\end{split}
\end{equation}
Using Eqs.~(\ref{deltapsi}) and (\ref{delpsi}), the needed
first-order wave functions are
\begin{equation}
\label{pertwf}
\vert\delta\psi^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle=-i\sum_{m}^{\text{unocc}}\vert\psi_{m\textbf{k}+\textbf{q}}\rangle\frac{\langle\psi_{m\textbf{k}+\textbf{q}}\vert\partial_{\lambda_{\kappa\beta\textbf{q}}}\hat{H}\vert\psi_{n\textbf{k}}\rangle}{(\epsilon_{m\textbf{k}+\textbf{q}}-\epsilon_{n\textbf{k}})^2}.
\end{equation}
For Eq.~(\ref{muI}), we require the cell-average of the
$\textbf{q}$-dependent polarization response [Eq.~(\ref{Pbar})].
Defining the operator
\begin{equation}
\label{Jq0}
\hat{\mathcal{J}}_\alpha(\textbf{q})=\frac{1}{\Omega}
\int_{\rm cell} d^3r\,e^{-i\bf q\cdot r}\,\hat{\mathcal{J}}_\alpha(\textbf{r}),
\end{equation}
Eq.~(\ref{Pbar}) can be written
\begin{equation}
\begin{split}
\label{Pq}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}&=\frac{2}{N_k}\sum_{n\textbf{k}} \Big[ \langle \psi_{n\textbf{k}}\vert\hat{\mathcal{J}}_\alpha(\textbf{q})\vert\delta \psi_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle
\\
&\phantom{\frac{2}{N_k}\sum_{n\textbf{k}} \Big[}+\langle \delta \psi_{n\textbf{k},-\textbf{q}}^{\kappa\beta}\vert\hat{\mathcal{J}}_\alpha(\textbf{q})\vert \psi_{n\textbf{k}}\rangle \Big].
\end{split}
\end{equation}
The ground-state and first-order wavefunctions can be expressed in
terms of cell-periodic Bloch functions in the normal way:
\begin{equation}
\langle\textbf{s}\vert\psi_{n\textbf{k}}\rangle=u_{n\textbf{k}}(\textbf{s})e^{i\textbf{k}\cdot\textbf{s}}, \;\;\langle\textbf{s}\vert \delta \psi_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle=\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}(\textbf{s})e^{i(\textbf{k}+\textbf{q})\cdot\textbf{s}}.
\end{equation}
(Indices $\bf s$ and ${\bf s}'$ are not to be confused with the point
\textbf{r} at which the current density is evaluated.) Using this
notation, the cell-periodic first-order static wavefunction is written
$\vert\partial_{\lambda}u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle$,
which is equivalent to $\vert
u_{n\textbf{k},\textbf{q}}^{\tau_{\kappa\beta}}\rangle$ in the
notation of Gonze and Lee \cite{Gonze1997} and $\vert \Delta
u_n^{\textbf{k}+\textbf{q}}\rangle$ in the notation of Baroni
\textit{et al.} \cite{Baroni2001}.
By factoring out the phases with wavevector \textbf{k} and \textbf{q},
we can ensure that we only consider cell-periodic quantities, and
therefore all calculations can be performed on a unit
cell. \cite{Baroni2001} To this end, we define a cell-periodic
operator \footnote{Note that the definition of Eq.~(\ref{Jkqdef})
involves a choice of convention in that the exponential factor
$e^{i\textbf{q}\cdot\textbf{r}}$ is placed to the right of
$\hat{\mathcal{J}}_\alpha(\textbf{q})$ as opposed to the
left. Choosing the opposite convention would simply switch the
operators between the two terms in Eq.~(\ref{Pq2}).}
\begin{equation}
\label{Jkqdef}
\hat{\mathcal{J}}_\alpha^{\textbf{k},\textbf{q}}=
e^{-i\textbf{k}\cdot\hat{\textbf{r}}} \hat{\mathcal{J}}_\alpha(\textbf{q})
e^{i(\textbf{k}+\textbf{q})\cdot\hat{\textbf{r}}} .
\end{equation}
Using the fact that $\hat{\mathcal{J}}_\alpha(\textbf{q})=
\hat{\mathcal{J}}^\dagger_\alpha(-\textbf{q})$ it follows that
$\left(\hat{\mathcal{J}}_\alpha^{\textbf{k},-\textbf{q}}\right)^\dagger=
e^{-i(\textbf{k}-\textbf{q})\cdot\hat{\textbf{r}}}
\hat{\mathcal{J}}_\alpha(\textbf{q})
e^{i\textbf{k}\cdot\hat{\textbf{r}}}$ so that Eq.~(\ref{Pq}) can be
written as
\begin{equation}
\begin{split}
\label{Pq2}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}&=\frac{2}{N_k}\sum_{n\textbf{k}} \Big[ \langle u_{n\textbf{k}}\vert\hat{\mathcal{J}}_\alpha^{\textbf{k},\textbf{q}}\vert\delta u_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle
\\
&\phantom{=\frac{2}{N_k}\sum_{n\textbf{k}} \Big[}+\langle \delta u_{n\textbf{k},-\textbf{q}}^{\kappa\beta}\vert\left(\hat{\mathcal{J}}_\alpha^{\textbf{k},-\textbf{q}}\right)^\dagger\vert u_{n\textbf{k}}\rangle \Big].
\end{split}
\end{equation}
In this work, we shall limit our focus to materials with time-reversal symmetry (TRS);
then we have
\begin{equation}
\label{TReq}
\langle \textbf{s} \vert u_{n\textbf{k}} \rangle= \langle u_{n-\textbf{k}} \vert\textbf{s} \rangle, \;\;\langle\textbf{s}\vert \delta u_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle= -\langle \delta u_{n\,-\textbf{k},-\textbf{q}}^{\kappa\beta}\vert \textbf{s}\rangle,
\end{equation}
where the negative sign in the second expression is a result of the
$-i$ in the first-order adiabatic wavefunction [see
Eq.~(\ref{deltapsi})]. Assuming that the current operator has the
correct ``TRS odd'' nature, i.e., $\Big( \langle {\bf
s}|\hat{\mathcal{J}}^{\bf k,-q}_\alpha |{\bf s}' \rangle \Big)^* =
-\langle {\bf s}|\hat{\mathcal{J}}^{\bf -k,q}_\alpha |{\bf s}'
\rangle$, Eq.~(\ref{Pq2})
simplifies to
\begin{equation}
\begin{split}
\label{PqTR}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}
&=\frac{4}{N_k}\sum_{n\textbf{k}} \langle u_{n\textbf{k}}\vert\hat{\mathcal{J}}_\alpha^{\textbf{k},\textbf{q}}
\vert\delta u_{n\textbf{k},\textbf{q}}^{\kappa\beta}\rangle.
\end{split}
\end{equation}
\subsection{Current-density operator\label{curden}}
We now consider the form of the current-density operator. If
particle density is conserved, any physically meaningful definition
of current density must satisfy the continuity condition
\begin{equation}
\label{conteq}
\nabla\cdot\textbf{J}(\textbf{r})=-\frac{\partial \rho(\textbf{r})}{\partial t},
\end{equation}
where $\rho$ is the particle density. In a quantum-mechanical treatment\cite{Sakuri},
$\rho(\textbf{r})=\vert\Psi(\textbf{r})\vert^2$, where $\Psi$ is the
solution to the time-dependent Schr\"odinger equation. Combining
Eq.~(\ref{seq}) with its complex conjugate gives
\begin{equation}
\label{conteq2}
\frac{\partial}{\partial t}\rho(\textbf{r})=-i\langle\Psi\vert\left[\vert\textbf{r}\rangle\langle\textbf{r}\vert,\hat{H}\right]\vert\Psi\rangle=-i\langle\Psi\vert\left[\hat{\rho}(\textbf{r}),\hat{H}\right]\vert\Psi\rangle,
\end{equation}
where $\hat{\rho}(\textbf{r})$ is the particle density operator. (We
use atomic units throughout with an electron charge of $-1$.) In
terms of the first-order adiabatic expansion of Eq.~(\ref{psiad}),
we can use Eq.~(\ref{conteq2}) to write the induced density from an
adiabatic perturbation parameterized by $\lambda$ as
\begin{equation}
\label{conteqlam}
\begin{split}
\rho_\lambda(\textbf{r})=-i\Big(&\langle\psi\vert\left[\hat{\rho}(\textbf{r}),\hat{H}\right]\vert\delta\psi\rangle+\langle\delta\psi\vert\left[\hat{\rho}(\textbf{r}),\hat{H}\right]\vert\psi\rangle\Big) .
\end{split}
\end{equation}
\subsubsection{Local potentials\label{locpot}}
Consider the simplest case of a Hamiltonian of the form
$\hat{H}^{\text{loc}}=\hat{\textbf{p}}^2/2 + \hat{V}^{\text{loc}}$
where $\hat{\textbf{p}}$ is the momentum operator and
$\hat{V}^{\text{loc}}=\int\hat{\rho}(\textbf{r})V(\textbf{r})d^3r$ is
a local scalar potential. The local potential commutes with the
density operator, so the only contribution to the current is from the
momentum operator. Comparing Eqs.~(\ref{conteq}) and (\ref{conteq2})
results in the textbook form of the current-density operator
\begin{equation}
\label{jloc}
\begin{split}
\hat{\mathcal{J}}_\alpha^{\text{loc}}(\textbf{r})&=-\frac{1}{2}\left(\vert\textbf{r}\rangle\langle\textbf{r}\vert\hat{p}_\alpha+\hat{p}_\alpha\vert\textbf{r}\rangle\langle\textbf{r}\vert\right)
\\
&=-\frac{1}{2}\left\{\hat{\rho}(\textbf{r}),\hat{p}_\alpha\right\}.
\end{split}
\end{equation}
Using Eq.~(\ref{Jq0}), we have
\begin{equation}
\label{jqloc}
\begin{split}
\hat{\mathcal{J}}_\alpha^{\text{loc}}(\textbf{q})&=-\frac{1}{2}\left(e^{-i\textbf{q}\cdot\hat{\textbf{r}}}\hat{p}_\alpha+\hat{p}_\alpha e^{-i\textbf{q}\cdot\hat{\textbf{r}}}\right),
\end{split}
\end{equation}
which gives the cell-periodic operator (Appendices \ref{sepICL} and \ref{Jloc})
\begin{equation}
\label{jkqloc}
\begin{split}
\hat{\mathcal{J}}_\alpha^{\textbf{k},\textbf{q},\text{loc}}&=-\left(\hat{p}_\alpha^\textbf{k}+\frac{q_\alpha}{2}\right),
\end{split}
\end{equation}
where
$\hat{p}_\alpha^{\textbf{k}}=-i\hat{\nabla}_\alpha+\hat{k}_\alpha$ is
the cell-periodic momentum operator ($\hat{\nabla}_\alpha$ is a
spatial derivative in the $\alpha$ direction, and the overall minus
sign is from the electron charge).
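Equation (\ref{jkqloc}) can also be verified numerically on a toy one-dimensional periodic cell using a spectral (FFT-based) momentum operator; the grid, the test function, and the (cell-commensurate) values of $k$ and $q$ below are all invented for illustration:

```python
import numpy as np

# Numerical check that sandwiching the local current operator of
# Eq. (jqloc) between Bloch phases, as in Eq. (Jkqdef), yields
# -(p^k + q/2) acting on the cell-periodic part [Eq. (jkqloc)].
# Toy 1D cell of length 2*pi; k and q commensurate with the cell.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
freqs = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi

def p(f):
    # momentum operator p = -i d/dx on the periodic grid (spectral)
    return np.fft.ifft(freqs * np.fft.fft(f))

k, q = 2.0, 1.0
u = np.exp(np.sin(x)) + 0.3j * np.cos(2.0 * x)   # cell-periodic test fn

# Left side: e^{-ikx} J(q) e^{i(k+q)x} u, with
# J(q) = -(1/2)(e^{-iqx} p + p e^{-iqx})
psi = u * np.exp(1j * (k + q) * x)
lhs = np.exp(-1j * k * x) * (-0.5) * (
    np.exp(-1j * q * x) * p(psi) + p(np.exp(-1j * q * x) * psi))

# Right side: -(p^k + q/2) u, with p^k = -i d/dx + k
rhs = -(p(u) + k * u + 0.5 * q * u)

err = np.abs(lhs - rhs).max()                    # spectral accuracy
```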
\subsubsection{Continuity condition and nonlocal potentials\label{contsec}}
As mentioned above, \emph{nonlocal} potentials are ubiquitous in
modern pseudopotential implementations of DFT
\cite{Vanderbilt1990,Hamann1979,Kleinman1982,Blochl1994}. When
nonlocal potentials are present in the Hamiltonian, the current
density in Eq.~(\ref{jloc}) does not satisfy the continuity equation.
To see this, consider a Hamiltonian with a nonlocal potential:
$\hat{H}^{\text{nl}}=\hat{\textbf{p}}^2/2 +
\hat{V}^{\text{loc}}+\hat{V}^{\text{nl}}$ with
$\hat{V}^{\text{nl}}=\int d^3r\int d^3r^\prime
\hat{\rho}(\textbf{r},\textbf{r}^\prime)V(\textbf{r},\textbf{r}^\prime)$
where
$\hat{\rho}(\textbf{r},\textbf{r}^\prime)=\vert\textbf{r}\rangle\langle\textbf{r}^\prime\vert$.
In this case, there is a term in the induced density [Eq.~(\ref{conteqlam})] resulting from the nonlocal potential:
\begin{equation}
\begin{split}
\label{rhoNL}
\rho^{\text{nl}}_\lambda(\textbf{r})=-i\Big(&\langle\psi\vert\left[\hat{\rho}(\textbf{r}),\hat{V}^{\text{nl}}\right]\vert\delta\psi\rangle
\\
&+\langle\delta\psi\vert\left[\hat{\rho}(\textbf{r}),\hat{V}^{\text{nl}}\right]\vert\psi\rangle\Big).
\end{split}
\end{equation}
If we write the total induced
current as the sum of contributions from the local and nonlocal parts,
$\textbf{J}=\textbf{J}^{\text{loc}}+\textbf{J}^{\text{nl}}$, then we
have
\begin{equation}
\label{conteq3}
\nabla\cdot\textbf{J}^{\text{nl}}(\textbf{r})=-\rho^{\text{nl}}_\lambda(\textbf{r}).
\end{equation}
This ``nonlocal charge,'' $\rho^{\text{nl}}_\lambda$, measures the
degree to which the continuity equation, Eq.~(\ref{conteq}), breaks
down if Eq.~(\ref{jloc}) is used in a nonlocal pseudopotential
context.
Li \textit{et al.}\cite{Li2008} argued that such nonlocal charge could
be used to reconstruct the nonlocal contribution to the current
density via a Poisson equation. Indeed, Eq.~(\ref{conteq3}) indicates
that the irrotational part of $\textbf{J}^{\text{nl}}$ can be
determined by calculating Eq.~(\ref{rhoNL}). Their approach yields a
conserved current by construction, but there are two additional
requirements that a physically meaningful definition of the
quantum-mechanical electronic current should satisfy:
\begin{itemize}
\item The nonlocality of the Hamiltonian should be confined to small
spheres surrounding the ionic cores. In the interstitial regions,
  the nonlocal parts of the pseudopotentials vanish, and the
  Hamiltonian operator is local. Thus, the current-density
operator should reduce to the simple textbook formula outside the
atomic spheres. The corollary is that
$\textbf{J}^{\text{nl}}(\textbf{r})$ must vanish in the interstitial
regions.
\item The macroscopic average of the microscopic current should reduce
to the well-known expression
$\hat{v}_\alpha= -i[\hat{r}_\alpha,\hat{H}]$ for the electronic
velocity operator
\cite{Starace1971,Hybertsen1987,Giannozzi1991,DalCorso1994}. This is
routinely used in the context of DFPT, e.g., to calculate the
polarization response to ionic displacements needed for the Born
effective charge tensor.
\end{itemize}
The strategy proposed by Li {\em et al.}~\cite{Li2008} falls short of
fulfilling either condition. Regarding the first (spatial
confinement), note that the nonlocal charge associated with individual
spheres generally has nonzero dipole (and higher multipole)
moments. Therefore, even if the nonlocal charge is confined to the
sphere, an irrotational field whose divergence results in such a
charge density will generally have a long-ranged character and
propagate over all space.
Regarding the relation to the macroscopic particle velocity, note that
the construction proposed by Li {\em et al.}~\cite{Li2008} in practice
discards the solenoidal part of the nonlocal current and hence fails
at describing its contribution to the transverse polarization
response. This is precisely the quantity of interest in the
context of flexoelectricity, and is also crucial for obtaining other
important quantities, such as the Born charge tensor, that are part of
standard DFPT implementations.
Therefore, a calculation of Eq.~(\ref{rhoNL}) does not contain the
necessary information to determine $\textbf{J}^{\text{nl}}$, and an
alternative derivation to the textbook one outlined in
Sec.~\ref{locpot} is required.
\subsubsection{Current-density operator generalized for nonlocal potentials\label{secJ}}
In light of the previous section, we will now focus on determining an
expression for $\hat{\mathcal{J}}_\alpha$ that is applicable when
nonlocal potentials are present in the Hamiltonian. For the case of a
perturbation that is uniform over the crystal, corresponding to the
long wavelength $\textbf{q}=0$ limit of Eq.~(\ref{phon}), it is well
known that the momentum operator should be replaced with the canonical
velocity operator $\hat{v}_\alpha$
\cite{Starace1971,Hybertsen1987,Giannozzi1991,DalCorso1994} in order
to determine the \emph{macroscopic} current.
In Ref.~\onlinecite{Umari2001}, the expression for the
\emph{microscopic} current operator that was used to calculate the
current induced by a uniform electric field was Eq.~(\ref{jloc}) with
$\hat{p}_\alpha$ replaced by $\hat{v}_\alpha$. Although this treatment
will result in the correct current when averaged over a unit cell,
this operator does not satisfy the continuity condition in
Eq.~(\ref{conteq}) except in the special case of a Hamiltonian with
only local potentials, where it reduces to Eq.~(\ref{jloc}).
Since we shall be treating a long-wavelength acoustic phonon in this
study, and the polarization response must be correct at least to
second order in \textbf{q} [\textit{cf.} Eq.~(\ref{muI})], we require
a version of $\hat{\mathcal{J}}_\alpha$ that is designed to handle
spatially varying perturbations.
Therefore, for our purposes, we need an alternative starting
point for the derivation of a current-density expression,
different from the one
based on the continuity condition that led to, e.g.,
Eq.~(\ref{jloc}).
In general, for an arbitrary electronic Hamiltonian $\hat{H}^{\textbf{A}}$ coupled to a
vector potential $\textbf{A}(\textbf{r})$, the most general form for
the current-density operator is
\begin{equation}
\label{dHdA}
\hat{\mathcal{J}}_\alpha(\textbf{r})=-\frac{\partial\hat{H}^{\textbf{A}}}{\partial A_\alpha(\textbf{r})} .
\end{equation}
Our strategy will be to use a vector potential to probe the response to the strain gradient, which will give us the current density via Eq.~(\ref{dHdA}).
Since we are treating the strain gradient in terms of a long-wavelength acoustic phonon of
wavevector $\bf q$, and we are interested in the response occurring at
the same wavevector $\bf q$,
it is useful to define
\begin{align}
\hat{\mathcal{J}}_\alpha(\textbf{r})&=\sum_{\textbf{G}}\hat{\mathcal{J}}_\alpha(\textbf{G}+\textbf{q})e^{i(\textbf{G}+\textbf{q})\cdot\textbf{r}},
\label{fft-J}
\\
A_\alpha(\textbf{r})&=\sum_{\textbf{G}}A_\alpha(\textbf{G}+\textbf{q})e^{i(\textbf{G}+\textbf{q})\cdot\textbf{r}},
\label{fft-A}
\\
P_{\alpha,\kappa\beta\textbf{q}}(\textbf{r})&=\sum_{\textbf{G}}
P_{\alpha,\kappa\beta\textbf{q}}(\textbf{G}+\textbf{q})e^{i(\textbf{G}+\textbf{q})\cdot\textbf{r}}.
\label{fft-dlP}
\end{align}
With these definitions, Eq.~(\ref{dHdA}) becomes
\begin{equation}
\label{dHdAGq}
\hat{\mathcal{J}}_\alpha(\textbf{G}+\textbf{q})=-\frac{\partial\hat{H}^{\textbf{A}}}{\partial A^*_\alpha(\textbf{G}+\textbf{q})},
\end{equation}
and the desired operator for Eq.~(\ref{Pq}) is
\begin{equation}
\label{dHdAq}
\hat{\mathcal{J}}_\alpha({\textbf{q}})=
-\frac{\partial\hat{H}^{\textbf{A}}}{\partial A^*_\alpha(\textbf{q})}.
\end{equation}
Again, if the Hamiltonian of interest had the form of
$H^{\text{loc}}=(\hat{\textbf{p}}+\hat{\textbf{A}})^2/2 +
\hat{V}^{\text{loc}}$,
where the scalar potential is local and
$\hat{\textbf{A}}=\int\hat{\rho}(\textbf{r})\textbf{A}(\textbf{r})d^3r$
is a local vector potential, then
$\hat{\mathcal{J}}^{\text{loc}}_\alpha(\textbf{r})=-\frac{1}{2}
\left\{\hat{\rho}(\textbf{r}),(\hat{p}_\alpha+\hat{A}_\alpha)\right\}$.
However, for our implementation, we are considering the case where the
potential $\hat{V}$ is nonlocal, so we must determine how to couple a
generally nonlocal Hamiltonian to a spatially nonuniform vector
potential field (which will be the case for a finite \textbf{q}
perturbation).
The standard strategy for describing the coupling to the vector
potential is to multiply the nonlocal operator by a complex phase
containing the line integral of the vector potential
\textbf{A}\cite{ICL2001,Pickard2003,Essin2010}; in the real-space
representation:
\begin{equation}
\label{Aphase}
\mathcal{O}^{\textbf{A}}(\textbf{s},\textbf{s}^\prime)=\mathcal{O}(\textbf{s},\textbf{s}^\prime)e^{-i\int_{\textbf{s}^\prime\rightarrow\textbf{s}}\textbf{A}\cdot d\ell}.
\end{equation}
The different methods that have been proposed for coupling \textbf{A}
to a nonlocal Hamiltonian amount to applying the complex phase in
Eq.~(\ref{Aphase}) to either the entire Hamiltonian\cite{Essin2010} or
just the nonlocal potential\cite{ICL2001,Pickard2003}, and choosing
either a straight-line path\cite{ICL2001,Essin2010} or a path that
passes through the centers of the atoms\cite{Pickard2003} to perform
the line integral.
\subsubsection{Straight-line path \label{formICL}}
Using Feynman path integrals, Ismail-Beigi, Chang, and Louie
\cite{ICL2001} (ICL) derived the following form of a nonlocal
Hamiltonian coupled to a vector potential field:
\begin{equation}
\label{HICL}
\begin{split}
\hat{H}^{\textbf{A}}_{\text{ICL}}&=\frac{1}{2}(\hat{\textbf{p}}+\hat{\textbf{A}})^2+\hat{V}^{\text{loc}}
\\
&+\int d^3s \int d^3s^\prime\hat{\rho}(\textbf{s},\textbf{s}^\prime)V^{\text{nl}}(\textbf{s},\textbf{s}^\prime)e^{-i\int_{\textbf{s}^\prime}^{\textbf{s}}\textbf{A}\cdot d\ell},
\end{split}
\end{equation}
where the line integral is taken along a straight path from \textbf{s}
to $\textbf{s}^\prime$.
Since the procedure used in Ref.~\onlinecite{ICL2001} to perform
the minimal substitution
$\hat{\textbf{p}}\rightarrow\hat{\textbf{p}}+\hat{\textbf{A}}$ is
general, applying to both local and nonlocal Hamiltonians, the ICL
approach is equivalent to that of Essin
\textit{et al.},\cite{Essin2010} where the coupled Hamiltonian is
written as
\begin{equation}
\label{HA1}
H^{\textbf{A}}(\textbf{s},\textbf{s}^\prime)=H(\textbf{s},\textbf{s}^\prime)e^{-i\int_{\textbf{s}^\prime}^{\textbf{s}}\textbf{A}\cdot d\ell},
\end{equation}
i.e., all of the \textbf{A} dependence is contained in the complex
phase, and the line integral is also taken along a straight path from
\textbf{s} to $\textbf{s}^\prime$.
Expanding Eq.~(\ref{HA1}) to first order gives
\begin{equation}
\label{HA2}
\begin{split}
H^{\textbf{A}}(\textbf{s},\textbf{s}^\prime)&= H(\textbf{s},\textbf{s}^\prime)-iH(\textbf{s},\textbf{s}^\prime)\int_{\textbf{s}^\prime}^{\textbf{s}}\textbf{A}\cdot d\ell+\cdots.
\end{split}
\end{equation}
We would like to evaluate Eq.~(\ref{dHdAq}) for this form of the
Hamiltonian. Since $\textbf{A}(\textbf{r})$ is real we can write
Eq.~(\ref{fft-A}) as $A_\alpha(\textbf{r})=A^*_\alpha(\textbf{r})
=A^*_\alpha(\textbf{q})e^{-i\textbf{q}\cdot\textbf{r}}$ so that
the integral over \textbf{A} for the ICL\cite{ICL2001}
path is
\begin{equation}
\label{AICL}
\begin{split}
\int_{\textbf{s}^\prime}^{\textbf{s}}\textbf{A}\cdot d\ell&=\int_0^1d\tau\textbf{A}[\textbf{s}^\prime+\tau(\textbf{s}-\textbf{s}^\prime)]\cdot(\textbf{s}-\textbf{s}^\prime)
\\
&=\textbf{A}^*(\textbf{q})\cdot(\textbf{s}-\textbf{s}^\prime)\int_0^1d\tau e^{-i{\textbf{q}}\cdot[\textbf{s}^\prime+\tau(\textbf{s}-\textbf{s}^\prime)]}
\\
&=-\textbf{A}^*(\textbf{q})\cdot(\textbf{s}-\textbf{s}^\prime)\frac{e^{-i\textbf{q}\cdot\textbf{s}}-e^{-i\textbf{q}\cdot\textbf{s}^\prime}}{i\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)}.
\end{split}
\end{equation}
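As a quick numerical sanity check of the closed form in Eq.~(\ref{AICL}), the straight-line integral can be evaluated by quadrature in one dimension (the endpoints and wavevector below are arbitrary illustrative numbers):

```python
import numpy as np

# Quadrature check of the straight-line path integral in Eq. (AICL):
# int_0^1 dtau e^{-i q [s' + tau (s - s')]} versus the closed form
# -(e^{-i q s} - e^{-i q s'})/(i q (s - s')), in 1D.
q, s, sp = 0.9, 1.7, 0.2                     # wavevector, s, s'
tau = np.linspace(0.0, 1.0, 20001)
f = np.exp(-1j * q * (sp + tau * (s - sp)))
dt = tau[1] - tau[0]
numeric = np.sum(0.5 * (f[1:] + f[:-1])) * dt        # trapezoid rule
closed = -(np.exp(-1j * q * s) - np.exp(-1j * q * sp)) \
         / (1j * q * (s - sp))
err = abs(numeric - closed)
```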
Therefore, from Eqs.~(\ref{HA2}) and (\ref{dHdAq}),
\begin{equation}
\label{JqSL}
\langle\textbf{s}\vert\hat{\mathcal{J}}_{\alpha}^{\text{ICL}}(\textbf{q})\vert\textbf{s}^\prime\rangle=-iH(\textbf{s},\textbf{s}^\prime)(s_\alpha-s_\alpha^\prime)\frac{e^{-i\textbf{q}\cdot\textbf{s}}-e^{-i\textbf{q}\cdot\textbf{s}^{\prime}}}{i\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)}.
\end{equation}
In practice we shall normally work in terms of the cell-periodic
current operator of Eq.~(\ref{Jkqdef}), whose position representation
follows as
\begin{equation}
\label{JkqICL}
\langle\textbf{s}\vert\hat{\mathcal{J}}_{\alpha}^{\textbf{k},\textbf{q},\text{ICL}}\vert\textbf{s}^\prime\rangle=-iH^{\textbf{k}}(\textbf{s},\textbf{s}^\prime)(s_\alpha-s_\alpha^\prime)\frac{e^{-i\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)}-1}{i\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)}.
\end{equation}
We can see that the current operator of Eq.~(\ref{JqSL}) satisfies the
continuity condition of Eq.~(\ref{conteq}) as follows. In reciprocal
space the continuity equation becomes
$i\textbf{q}\cdot[-\hat{\mathcal{J}}^{\text{ICL}}(\textbf{q})]=
-\partial\hat{\rho}_{\textbf{q}}/\partial t$, where
$\hat{\rho}_{\textbf{q}}=e^{-i\textbf{q}\cdot\hat{\textbf{r}}}$ is the
$\textbf{G}=0$ particle density operator for a given \textbf{q}, and
the negative sign in front of the current operator reflects the sign
of the electron charge. But from Eq.~(\ref{JqSL}) it quickly follows
that
\begin{equation}
\label{JqSLp}
-i\textbf{q}\cdot\langle\textbf{s}\vert\hat{\mathcal{J}}_{\alpha}^{\text{ICL}}(\textbf{q})\vert\textbf{s}^\prime\rangle=
i\langle\textbf{s}\vert\left[\hat{\rho}_\textbf{q},\hat{H}\right]\vert\textbf{s}^\prime\rangle
\end{equation}
which, using the Ehrenfest theorem, is nothing other than $-\partial
\hat{\rho}_{\textbf{q}}/\partial t$ in the position representation.
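The identity in Eq.~(\ref{JqSLp}) holds exactly for any Hermitian kernel, and can be checked numerically. The following minimal sketch (illustrative only, not part of any implementation discussed here) uses a random Hermitian matrix on a one-dimensional grid as a stand-in for $H(\textbf{s},\textbf{s}^\prime)$; in one dimension the $(s_\alpha-s_\alpha^\prime)$ factor in Eq.~(\ref{JqSL}) cancels against the denominator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                          # number of grid points
s = np.linspace(0.0, 1.0, n)   # 1D positions (arbitrary units)
q = 0.37                       # wavevector of the perturbation

# Random Hermitian matrix standing in for H(s, s')
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = 0.5 * (M + M.conj().T)

# ICL current operator, Eq. (JqSL); in 1D the (s - s') factor cancels:
# J(s, s') = -i H(s, s') (e^{-iqs} - e^{-iqs'}) / (iq)
phase = np.exp(-1j * q * s)
J = -1j * H * (phase[:, None] - phase[None, :]) / (1j * q)

# Continuity: -iq J(s,s') should equal i [rho_q, H](s,s'), rho_q = e^{-iqs}
rho = np.diag(phase)
lhs = -1j * q * J
rhs = 1j * (rho @ H - H @ rho)
print(np.allclose(lhs, rhs))   # True: Eq. (JqSL) satisfies continuity
```

The check succeeds for any Hermitian $H$ and any $q$, reflecting that the continuity condition is an algebraic identity of the ICL construction rather than a property of a particular Hamiltonian.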
In the case that only local potentials are present, only the kinetic
term in the Hamiltonian contributes to
$\hat{\mathcal{J}}_{\alpha}^{\text{ICL}}(\textbf{q})$. We show in
Appendix \ref{sepICL} that the current operator then reduces to the
form of Eq.~(\ref{jqloc}). The fact that the local and nonlocal parts
can be separated confirms the equivalence of the ICL
[Eq.~(\ref{HICL})] and Essin \textit{et al.} [Eq.~(\ref{HA1})]
approaches.
In the case that nonlocal potentials are present, we show in Appendix
\ref{sepICL} that, for $\textbf{q}=0$, Eq.~(\ref{JqSL}) reduces to the
well-known expression for the canonical velocity
operator\cite{Starace1971,Hybertsen1987,Giannozzi1991,DalCorso1994}
$\hat{\mathcal{J}}^{\text{ICL}}_{\alpha}(\textbf{q}=0)=-\hat{v}_{\alpha}=i\left[\hat{r}_\alpha,\hat{H}\right]$,
where the $-1$ comes from the electron charge. We discuss the case of
nonlocal potentials and finite \textbf{q} perturbations in
Sec.~\ref{longwave}.
\subsubsection{Path through atom center\label{formPM}}
Subsequently, Pickard and Mauri \cite{Pickard2003} (PM) proposed using
a path from \textbf{s} to the atom center, \textbf{R}, and then to
$\textbf{s}^\prime$, which was constructed explicitly to give better
agreement for magnetic susceptibility between pseudopotential and
all-electron calculations. This approach can be regarded as a
generalization to spatially nonuniform fields of the gauge-including
projector augmented-wave (GIPAW) method
\cite{Pickard2001,Pickard2003}, where the PAW transformation is
modified with a complex phase in order to ensure that the
pseudowavefunction has the correct magnetic translational symmetry.
The coupled
Hamiltonian used in Ref.~\onlinecite{Pickard2003} is of the form
\begin{equation}
\label{HPM}
\begin{split}
\hat{H}^{\textbf{A}}_{\text{PM}}&=\frac{1}{2}(\hat{\textbf{p}}+\hat{\textbf{A}})^2+\hat{V}^{\text{loc}}+\sum_{\zeta=1}^N\int d^3s \int d^3s^\prime
\\
&\times \hat{\rho}(\textbf{s},\textbf{s}^\prime)V^{\text{nl}}_\zeta(\textbf{s},\textbf{s}^\prime)e^{-i\int_{\textbf{s}^\prime\rightarrow\textbf{R}_\zeta\rightarrow\textbf{s}}\textbf{A}\cdot d\ell},
\end{split}
\end{equation}
where $N$ is the number of atoms in the cell, $\textbf{R}_\zeta$ is
the position of atom
$\zeta$, and $V_\zeta^{\text{nl}}$ is the nonlocal
potential for that atom. The PM approach explicitly splits the
nonlocal contribution from \textbf{A} into contributions from each
atomic sphere centered at $\textbf{R}_\zeta$.~\footnote{In contrast to
the ICL straight-line path, Eq.~(\ref{HA1}) using the PM
$\textbf{s}^\prime\rightarrow\textbf{R}_\zeta\rightarrow\textbf{s}$
path [i.e., the phase in Eq.~(\ref{HPM}) multiplying the entire
Hamiltonian instead of just
$V^{\text{nl}}(\textbf{s},\textbf{s}^\prime)$] does \emph{not}
recover $\hat{\mathcal{J}}_\alpha^{\text{loc}}$ for local
potentials.} Therefore, the total current operator is
\begin{equation}
\begin{split}
\label{JkqPM}
\hat{\mathcal{J}}_{\alpha}^{\textbf{k},\textbf{q},\text{PM}}=-\left(\hat{p}^{\textbf{k}}_\alpha+\frac{q_\alpha}{2}\right)+\sum_{\zeta=1}^N \hat{\mathcal{J}}_{\alpha,\zeta}^{\textbf{k},\textbf{q},\text{PM,nl}},
\end{split}
\end{equation}
where the superscript ``nl'' and the subscript $\zeta$ emphasize that
each item in the summation describes the contribution to the current
from the nonlocal potential of atom $\zeta$; it is clear from
Eqs.~(\ref{HPM}) and (\ref{JkqPM}) that
$\hat{\mathcal{J}}_\alpha^{\text{loc}}$ will be recovered in the case
of a local potential.
\begin{widetext}
For an atom at position $\textbf{R}_\zeta$, the line integral in Eq.~(\ref{HPM}) is
\begin{equation}
\begin{split}
\int_{\textbf{s}^\prime\rightarrow\textbf{R}_\zeta \rightarrow\textbf{s}}\textbf{A}\cdot d\ell&=-\textbf{A}^*(\textbf{q})\cdot(\textbf{R}_\zeta -\textbf{s}^\prime)\frac{e^{-i\textbf{q}\cdot\textbf{R}_\zeta }-e^{-i\textbf{q}\cdot\textbf{s}^\prime}}{i\textbf{q}\cdot(\textbf{R}_\zeta -\textbf{s}^\prime)}-\textbf{A}^*(\textbf{q})\cdot(\textbf{s}-\textbf{R}_\zeta )\frac{e^{-i\textbf{q}\cdot\textbf{s}}-e^{-i\textbf{q}\cdot\textbf{R}_\zeta }}{i\textbf{q}\cdot(\textbf{s}-\textbf{R}_\zeta )}.
\end{split}
\end{equation}
Therefore we have
\begin{equation}
\label{JqPMNL}
\begin{split}
\langle\textbf{s}\vert\hat{\mathcal{J}}_{\alpha,\zeta}^{\text{PM},\text{nl}}(\textbf{q})\vert\textbf{s}^\prime\rangle&=-iV_\zeta^{\text{nl}}(\textbf{s},\textbf{s}^\prime)\bigg[(R_{\alpha,\zeta}-s^\prime_\alpha)\frac{e^{-i\textbf{q}\cdot\textbf{R}_\zeta }-e^{-i\textbf{q}\cdot\textbf{s}^\prime}}{i\textbf{q}\cdot(\textbf{R}_\zeta -\textbf{s}^\prime)}+(s_\alpha-R_{\alpha,\zeta})\frac{e^{-i\textbf{q}\cdot\textbf{s}}-e^{-i\textbf{q}\cdot\textbf{R}_\zeta }}{i\textbf{q}\cdot(\textbf{s}-\textbf{R}_\zeta )}\bigg],
\end{split}
\end{equation}
so the cell-periodic operator is
\begin{equation}
\begin{split}
\label{JkqPMNL}
\langle\textbf{s}\vert\hat{\mathcal{J}}_{\alpha,\zeta}^{\textbf{k},\textbf{q},\text{PM,nl}}\vert\textbf{s}^\prime\rangle=-iV^{\text{nl}}_\zeta(\textbf{s},\textbf{s}^\prime)&\bigg[(R_{\alpha,\zeta}- s^\prime_\alpha)\frac{e^{-i\textbf{q}\cdot(\textbf{R}_\zeta-\textbf{s}^\prime)}-1}{i\textbf{q}\cdot(\textbf{R}_\zeta-\textbf{s}^\prime)}
+(s_\alpha-R_{\alpha,\zeta})\frac{e^{-i\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)}-e^{-i\textbf{q}\cdot(\textbf{R}_\zeta-\textbf{s}^\prime)}}{i\textbf{q}\cdot(\textbf{s}-\textbf{R}_\zeta)}\bigg].
\end{split}
\end{equation}
\end{widetext}
From Eqs.~(\ref{JqPMNL}) and (\ref{rhoNL}), we see that
$i\textbf{q}\cdot[-\hat{\mathcal{J}}^{\text{PM},\text{nl}}(\textbf{q})]=i\left[e^{-i\textbf{q}\cdot\hat{\textbf{r}}},\hat{V}^{\text{nl}}\right]=-\hat{\rho}_{\lambda}^{\text{nl}}$. Therefore,
Eq.~(\ref{JkqPM}) satisfies the continuity condition. Also, in the
case of a $\textbf{q}=0$ perturbation,
$\hat{\mathcal{J}}^{\text{PM},\text{nl}}_{\alpha}(\textbf{q}=0)=i\left[\hat{r}_\alpha,\hat{V}^{\text{nl}}\right]$,
which is the nonlocal contribution to $-\hat{v}_\alpha$, as
expected. We discuss the case of nonlocal potentials and finite
\textbf{q} perturbations in the next section.
Finally, we see that for the longitudinal response (where
$\textbf{q}=q_\alpha\hat{\alpha}$), the ICL and PM approaches produce
identical operators. This is expected, since they both satisfy the
continuity equation. Only circulating currents (e.g., transverse or
shear FxE components) may exhibit path dependence.
\subsection{Long wavelength expansion \label{longwave}}
Recall that only the induced polarization up to second order in
$\textbf{q}$ is required for the FxE coefficients
[\textit{cf.}~Eq.~(\ref{muI})]. Therefore, instead of attempting to
calculate Eq.~(\ref{PqTR}) with either Eq.~(\mbox{\ref{JkqICL}}) or
(\ref{JkqPM}) directly, we will expand these expressions for the
current-density operator to second order in \textbf{q}.
Considering the Hamiltonian in Eq.~(\ref{HKS}), there are
contributions to $\hat{\mathcal{J}}_\alpha^{\textbf{q}}$ from the
kinetic energy and nonlocal part of the pseudopotential. We show in
Appendix \ref{sepICL} [Eq.~(\ref{LocNL4})] that the kinetic energy
only contributes up to first order in \textbf{q}, and for a local
Hamiltonian, the current operator reduces to the form of
Eq.~(\ref{jkqloc}).
The nonlocal potential will, however, contribute at all orders. As
mentioned in Sec.~\ref{formICL} and \ref{formPM}, for $\textbf{q}=0$,
both the ICL and PM approaches give
$\hat{\mathcal{J}}_\alpha^{\textbf{k},\textbf{q}=0}=-\hat{v}_\alpha^{\textbf{k}}=i[\hat{r}_\alpha,\hat{H}^{\textbf{k}}]=-\hat{p}_\alpha^{\textbf{k}}+\hat{\mathcal{J}}_\alpha^{\textbf{k},\text{nl}(0)}$,
where we have defined
$\hat{\mathcal{J}}_\alpha^{\textbf{k},\text{nl}(0)}\equiv
i[\hat{r}_\alpha,\hat{V}^{\textbf{k}\text{,nl}}]$. At higher orders in
\textbf{q}, and for nonlongitudinal response, the ICL and PM
approaches may no longer agree.
Up to second order in \textbf{q}, the current operator can be written as
\begin{equation}
\begin{split}
\label{JqExpand} \hat{\mathcal{J}}_{\alpha}^{\textbf{k},\textbf{q}}&\simeq-\left(\hat{p}_\alpha^{\textbf{k}}+\frac{q_\alpha}{2}\right) +\hat{\mathcal{J}}_\alpha^{\textbf{k},\text{nl}(0)}
\\
&\phantom{=}+ \frac{q_\gamma}{2} \hat{\mathcal{J}}_{\alpha,\gamma}^{\textbf{k},\text{nl}(1)} +\frac{q_\gamma q_\xi}{6}\hat{\mathcal{J}}_{\alpha,\gamma \xi}^{\textbf{k},\text{nl}(2)},
\end{split}
\end{equation}
where the higher-order terms in \textbf{q} ($\hat{\mathcal{J}}_{\alpha,\gamma}^{\textbf{k},\text{nl}(1)}$
and
$\hat{\mathcal{J}}_{\alpha,\gamma\xi}^{\textbf{k},\text{nl}(2)}$)
result from the nonlocal part of the Hamiltonian \emph{and} the
fact that the monochromatic perturbation is nonuniform (i.e., finite
\textbf{q}). Expressions for these last two terms in
Eq.~(\ref{JqExpand}) are derived in Appendix \ref{JNL} for the ICL
path [Eqs.~(\ref{ICL1}) and (\ref{ICL2})] and PM path
[Eqs.~(\ref{PM1}) and (\ref{PM2})].
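The structure of Eq.~(\ref{JqExpand}) can be traced to the scalar factor in Eq.~(\ref{JkqICL}): with $x=\textbf{q}\cdot(\textbf{s}-\textbf{s}^\prime)$, one has $(e^{-ix}-1)/(ix)=-1+ix/2+x^2/6+\mathcal{O}(x^3)$, whose coefficients $1/2$ and $1/6$ are precisely those multiplying the first- and second-order nonlocal terms. A quick numerical check of this expansion (illustrative only):

```python
import numpy as np

# Scalar factor from Eq. (JkqICL): g(x) = (exp(-ix) - 1)/(ix), x = q.(s - s').
# Its Taylor series g(x) = -1 + ix/2 + x^2/6 + O(x^3) carries the 1/2 and 1/6
# prefactors of the first- and second-order nonlocal current terms.
g = lambda x: (np.exp(-1j * x) - 1.0) / (1j * x)

x = 1e-3                                     # small q.(s - s')
approx = -1.0 + 1j * x / 2.0 + x**2 / 6.0    # expansion to second order
print(abs(g(x) - approx))                    # ~ x^3/24 ~ 4e-11
```

Note that $g(0)=-1$, consistent with Eq.~(\ref{JqSL}) reducing to $i[\hat{r}_\alpha,\hat{H}]$ at $\textbf{q}=0$.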
Plugging the current operator from Eq.~(\ref{JqExpand})
into Eq.~(\ref{PqTR}) readily yields the induced polarization,
\begin{equation}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}} = \overline{P}^{\textbf{q},\text{loc}}_{\alpha,\kappa\beta} + \overline{P}^{\textbf{q},\text{nl}}_{\alpha,\kappa\beta},
\label{PqExpand}
\end{equation}
where we have separated the contribution of the local current operator
(loc) from the nonlocal (nl) part. The exact expression for
$\overline{P}^{\textbf{q},\text{loc}}_{\alpha,\kappa\beta}$ is derived
in Appendix \ref{Jloc}, yielding Eq.~(\ref{pkq}); the approximate
(exact only up to second order in ${\bf q}$) expression for
$\overline{P}^{\textbf{q},\text{nl}}_{\alpha,\kappa\beta}$ is derived
in Appendix \ref{JNL} [see Eq.~(\ref{Pqexpand2})].
\subsection{Circulating rotation-gradient contribution and diamagnetic susceptibility \label{diamag} }
Transverse or shear strain gradients result in rigid rotations of unit
cells which must be treated carefully in order to calculate physically
meaningful values of the flexoelectric tensor. This issue can be
loosely compared to the well-known distinction between the proper and
improper piezoelectric tensor,~\cite{Martin1972,Vanderbilt2000} but,
in the case of strain gradients, it is complicated by the fact that
different parts of the sample typically rotate by different amounts.
The reader is referred to Ref.~\onlinecite{StengelUNPUB} for a complete
discussion; only the results of that work necessary for our purposes
will be reproduced here.
Larmor's theorem states that the effects of a uniform rotation and
those of a uniform magnetic field are the same to first order in the
field/angular velocity. Therefore, the local rotations of the sample
dynamically produce circulating diamagnetic currents that will
contribute to the bulk flexoelectric coefficients as defined in
Eq.~(\ref{muI}). As was shown in Ref.~\onlinecite{StengelUNPUB} (see
also Appendix \ref{Appdiamag} for an abridged derivation), this
circulating rotation-gradient (CRG) \footnote{Recall that in
Ref.~\onlinecite{StengelUNPUB}, this contribution is referred to as
the ``dynamic'' gauge-field term.} contribution only concerns the
nonlongitudinal components and is proportional to the diamagnetic
susceptibility of the material, $\chi_{\gamma\lambda}=\partial
M_\gamma/\partial H_\lambda$, where $M$ is the magnetization and $H$
the magnetic field. Specifically,
\begin{equation}
\label{pchi}
\begin{split}
\overline{P}^{(2,\omega\nu),\text{CRG}}_{\alpha,\beta}&=\sum_{\gamma\lambda}\left(\epsilon^{\alpha\omega\gamma}\epsilon^{\beta\lambda\nu}+\epsilon^{\alpha\nu\gamma}\epsilon^{\beta\lambda\omega}\right)\chi_{\gamma\lambda},
\end{split}
\end{equation}
where $\epsilon$'s are the Levi-Civita symbols.
The CRG contribution represents a physical response of the bulk
material to the rotations resulting from such nonlongitudinal strain
gradients. However, in the context of calculating FxE coefficients,
it is useful to remove this contribution. The reason is that, as shown
in Ref.~\onlinecite{StengelUNPUB}, the diamagnetic circulating
currents from the CRG contribution are divergenceless, and therefore
do not result in a buildup of charge density anywhere in the crystal.
Consequently, for the experimentally relevant case of a \emph{finite}
crystal, where the polarization response is completely determined by
the induced charge density, the CRG contribution will not produce an
electrically measurable response.
The fact that the CRG does contribute to the bulk FxE coefficients,
but not to the measurable response of a finite sample, highlights the
fact that, for flexoelectricity, the bulk and surface response are
intertwined\cite{Stengel2014,StengelUNPUB,StengelChapter}. Indeed,
it was determined in Ref.~\onlinecite{StengelUNPUB} that there is
a surface CRG contribution that will exactly cancel the bulk one
[Eq.~(\ref{pchi})]. Thus removing the CRG contribution from the bulk
coefficients simply corresponds to a different way of partitioning the
response between the bulk and the surface. In this work we are focused
on the bulk response, and are free to choose a convention for this
partition. In order to make a more direct connection with experiments,
and to be able to directly compare with charge-density-based calculations
\cite{Stengel2014}, we choose to remove the CRG contribution from our
calculated $\overline{P}^{(2,\omega\nu)}_{\alpha,\kappa\beta}$.
The calculation of $\chi_{\gamma\lambda}$ again involves a subtlety
related to the use of nonlocal pseudopotentials. Conventional
calculations of the diamagnetic susceptibility involve applying a
vector potential perturbation and calculating the current response
\cite{ICL2001,Pickard2001,Pickard2003,Vignale1991,Mauri1996}. In the
case of a local Hamiltonian the aforementioned rotational field is
indistinguishable from an electromagnetic vector potential, and the
expression for $\chi_{\gamma\lambda}$ is identical to the diamagnetic
susceptibility. However, in the case of a nonlocal Hamiltonian this is
no longer true. In that case, the perturbation remains the
\emph{local} current operator, $\hat{\mathcal{J}}^{\text{loc}}$, while
the current response is evaluated using the total (local plus
nonlocal) $\hat{\mathcal{J}}$ (\textit{cf.}~Appendix~\ref{Appdiamag}).
This difference indicates that Larmor's theorem may break
down for nonlocal potentials. This is discussed further in
Sec.~\ref{Disc}.
\section{Implementation\label{Imp}}
The procedure for calculating the FxE coefficients using the formalism
in Sec.~\ref{Form} is as follows. We first perform conventional DFPT
phonon calculations [displacing sublattice $\kappa$ in direction
$\beta$, as in Eq.~(\ref{phon})] at small but finite wavevectors
\textbf{q} to obtain the static first-order wavefunctions
$\vert\partial_{\lambda}
u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle$. We choose $\vert
q\vert < 0.04$, where here and henceforth we express $q$ in reduced
units of $2\pi/a$ ($a$ is the cubic lattice constant). To avoid the
sum over empty states in Eq.~(\ref{deltapsi}), we determine the
first-order adiabatic wavefunctions by solving the Sternheimer
equation
\begin{equation}
\label{deltastern}
(H_{\textbf{k}}-\epsilon_{n\textbf{k}})\vert\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle=-i\mathcal{Q}_{c,\textbf{k}+\textbf{q}}\vert\partial_{\lambda} u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle
\end{equation}
where $\epsilon_{n\textbf{k}}$ is the eigenvalue of band $n$ and
$k$-point \textbf{k} and $\mathcal{Q}_{c,\textbf{k}+\textbf{q}}$ is
the projector over conduction band states (implemented as one minus
the projector over valence states). Then we apply the current
operator in Eq.~(\ref{JqExpand}) to obtain
$\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}$ from Eq.~(\ref{PqTR})
(see Appendices \ref{Jloc} and \ref{JNL} for details).
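The Sternheimer problem of Eq.~(\ref{deltastern}) can be illustrated with a dense-matrix toy model, in which a random Hermitian matrix stands in for $H_{\textbf{k}}$ and the equation is solved by spectral inversion in the conduction subspace. This sketch is purely illustrative (in practice one uses iterative, variational solvers, and the $-i$ factor is absorbed into the source term here):

```python
import numpy as np

rng = np.random.default_rng(1)
nbnd, nocc = 6, 2              # toy basis size and number of valence bands

M = rng.normal(size=(nbnd, nbnd)) + 1j * rng.normal(size=(nbnd, nbnd))
H = 0.5 * (M + M.conj().T)     # stand-in for H_k
eps, U = np.linalg.eigh(H)

# Conduction-band projector: one minus the projector over valence states
Qc = np.eye(nbnd) - U[:, :nocc] @ U[:, :nocc].conj().T
n = 0                          # band index of the perturbed state
rhs = Qc @ (rng.normal(size=nbnd) + 1j * rng.normal(size=nbnd))

# Solve (H - eps_n) |du> = rhs within the conduction subspace
coeff = U.conj().T @ rhs
coeff[nocc:] /= eps[nocc:] - eps[n]
coeff[:nocc] = 0.0             # no valence component in the solution
du = U @ coeff

print(np.allclose((H - eps[n] * np.eye(nbnd)) @ du, rhs))  # True
```

Projecting the source with $\mathcal{Q}_c$ makes the singular operator $(H-\epsilon_n)$ invertible on the relevant subspace, which is what allows the sum over empty states to be avoided.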
As will be discussed in Sec.~\ref{Bench}, we will use the ICL path for
most of the calculations in this study, so the explicit expression for
this case is provided in this section. The local contribution to
$\overline{P}_{\alpha,\kappa\beta}^{\textbf{q}}$ is derived in
Appendix \ref{Jloc}, leading to Eq.~(\ref{pkq}).
The three terms in the small-${\bf q}$ expansion of the nonlocal part
are determined in Appendix \ref{ICL} by combining Eqs.~(\ref{JkqICL})
and (\ref{PqTR}), and expanding in powers of \textbf{q}, leading to
Eq.~(\ref{Pqexpand2}). Combining Eq.~(\ref{Pqexpand2}) with
Eqs.~(\ref{ICL0})-(\ref{ICL2}) and adding Eq.~(\ref{pkq}), we have
\begin{widetext}
\begin{equation}
\begin{split}
\label{ICLimp}
\overline{P}_{\alpha,\kappa\beta}^{\textbf{q},\text{ICL}}&=-\frac{4}{N_k}\sum_{n\textbf{k}}\Bigg[\langle u_{n\textbf{k}}\vert\hat{p}_\alpha^{\textbf{k}}+\frac{q_\alpha}{2}\vert\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle+\langle u_{n\textbf{k}}\vert\frac{\partial \hat{V}^{\textbf{k}{,\text{nl}}}}{\partial k_\alpha}\vert\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle
\\
&+\frac{1}{2}\sum_{\gamma=1}^3 q_\gamma\langle u_{n\textbf{k}}\vert\frac{\partial^2 \hat{V}^{\textbf{k},\text{nl}}}{\partial k_\alpha\partial k_\gamma}\vert\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle
+\frac{1}{6}\sum_{\gamma=1}^3\sum_{\xi=1}^3q_\gamma q_\xi\langle u_{n\textbf{k}}\vert\frac{\partial^3 \hat{V}^{\textbf{k},\text{nl}}}{\partial k_\alpha\partial k_\gamma\partial k_\xi}\vert\delta u^{\kappa\beta}_{n\textbf{k},\textbf{q}}\rangle\Bigg],
\end{split}
\end{equation}
\end{widetext}
where we have again assumed TRS [\textit{cf.}~Eq.~(\ref{PqTR})]. A
similar equation can be obtained for the PM path using the first-
and second-order current operators derived in Appendix~\ref{PM}
[Eqs.~(\ref{PM1}) and (\ref{PM2})].
In order to obtain
$\overline{P}^{(2,\omega\nu)}_{\alpha,\kappa\beta}$, we calculate
numerical second derivatives with respect to $q_\omega$ and $q_\nu$
yielding the needed flexoelectric coefficients
$\mu^{\text{I}}_{\alpha\beta,\omega\nu}$ via Eq.~(\ref{muI}). Note
that, in addition to the explicit factors of $q$ multiplying the last
two terms, each term has an implicit $q$ dependence through $\delta
u^{\kappa\beta}_{n\textbf{k},\textbf{q}}$, so all terms may contribute
to the second derivative.
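The numerical second derivative at $\textbf{q}=0$ can be taken with a standard central difference over the computed polarization values; a minimal sketch (the helper function and the toy coefficients below are hypothetical, for illustration only):

```python
def second_derivative(P, q):
    """Central finite-difference estimate of d^2 P / dq^2 at q = 0."""
    return (P(q) - 2.0 * P(0.0) + P(-q)) / q**2

# Toy polarization with known expansion P(q) = P0 + P1*q + (1/2)*P2*q^2,
# standing in for one component of the computed P^q (hypothetical numbers).
P0, P1, P2 = 0.3, -1.2, 4.0
P = lambda q: P0 + P1 * q + 0.5 * P2 * q**2
print(second_derivative(P, 0.02))  # ~4.0, recovering P2
```

The central difference is exact for a quadratic, so with $|q|<0.04$ the dominant error in practice comes from higher-order terms in the true $\textbf{q}$ dependence of $\overline{P}^{\textbf{q}}$.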
Since we will consider cubic materials there are three independent FxE coefficients \cite{Hong2013,Stengel2013}:
\begin{equation}
\label{mu}
\begin{split}
&\mu_{\text{L}}=\mu^{\text{II}}_{11,11}=\mu^{\text{I}}_{11,11},
\\
&\mu_{\text{S}}=\mu^{\text{II}}_{12,12}=\mu^{\text{I}}_{11,22},
\\
&\mu_{\text{T}}=\mu^{\text{II}}_{11,22}=2\mu^{\text{I}}_{12,12}-\mu^{\text{I}}_{11,22},
\end{split}
\end{equation}
where L stands for longitudinal, S for shear, and T for transverse.
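The index bookkeeping of Eq.~(\ref{mu}) can be sketched as follows (the container and numerical values are hypothetical, purely for illustration):

```python
def cubic_fxe(muI):
    """Longitudinal, shear, and transverse FxE coefficients of a cubic
    crystal from the type-I coefficients mu^I_{alpha beta, omega nu},
    keyed here by 1-based Cartesian indices as in the text."""
    mu_L = muI[(1, 1, 1, 1)]
    mu_S = muI[(1, 1, 2, 2)]
    mu_T = 2.0 * muI[(1, 2, 1, 2)] - muI[(1, 1, 2, 2)]
    return mu_L, mu_S, mu_T

# hypothetical values, for illustration only
muI = {(1, 1, 1, 1): -1.0, (1, 1, 2, 2): -0.25, (1, 2, 1, 2): -0.5}
print(cubic_fxe(muI))  # (-1.0, -0.25, -0.75)
```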
\subsection{Electrostatic boundary conditions\label{electroBC}}
The current response to a phonon perturbation, just like other
response properties, displays a strongly nonanalytic behavior in the
vicinity of the $\Gamma$ point (${\bf q}=0$), so some care is
required when taking the long-wavelength expansions
described in the previous sections.
A long-wavelength phonon naturally imposes ``mixed'' electrical (ME)
boundary conditions:~\cite{Hong2013} Along the longitudinal direction
($\hat{\bf q}$) the electric displacement field, ${\bf D}$, must
vanish (${\bf D}\cdot \hat{\bf q}=0)$; conversely, periodicity is
preserved in the planes that are normal to $\hat{\bf q}$, resulting in
a vanishing electric field therein. In general, the bulk FxE tensor
needs to be defined under isotropic ``short-circuit'' (SC) boundary
conditions, which implies that the problematic longitudinal ${\bf
E}$-fields must be suppressed. In our calculations, this goal can
be achieved using the procedure of Refs.~\onlinecite{Stengel2013} and
\onlinecite{Stengel2014}, where the $\textbf{G}=0$ component of the
self-consistent first-order potential is removed in the DFPT
calculation of
$\partial_{\lambda}u^{\kappa\beta}_{n\textbf{k},\textbf{q}}$
[Eq.~(\ref{deltastern})]. We will use this procedure for the
calculations of cubic oxides in Sec.~\ref{Cub}.
For several reasons, one may sometimes be interested in calculating
the flexoelectric coefficients under mixed electrical boundary
conditions; in such a case, of course, the $\textbf{G}=0$ component of
the self-consistent first-order potential should not be removed.
Then, however, one must keep in mind that the long-wavelength expansion of
the polarization response is only allowed along a fixed direction in
reciprocal space.
(This implies performing the calculations at points ${\bf q}= q
\hat{\bf q}$, and subsequently carrying out the Taylor expansion as a
function of the one-dimensional parameter $q$.)
In crystals where the macroscopic dielectric tensor is isotropic and
$\hat{\bf q}$ corresponds to a high-symmetry direction, the
longitudinal coefficients for mixed electrical boundary conditions
are simply related to the short circuit ones by the dielectric
constant, $\epsilon$,
\begin{equation}
\label{BCs}
\mu_{\rm L}^{\rm SC} = \epsilon \mu_{\rm L}^{\rm ME}.
\end{equation}
We will use mixed electrical boundary conditions for our benchmark
calculations of noble gas atoms in Sec.~\ref{Bench} since, in this
particular system, $\mu_{\rm L}^{\rm ME}$, rather than $\mu_{\rm
L}^{\rm SC}$, can be directly compared to the moments of the
real-space charge density \cite{Hong2013}, as discussed in
Sec.~\ref{IRCmod}.
\subsection{Magnetic susceptibility contribution\label{mag}}
In Sec.~\ref{diamag}, we explained that the diamagnetic susceptibility
is required in order to correct for the CRG contribution to the
FxE coefficients. To avoid the sum over states in Eq.~(\ref{udyn}), we
solve the Sternheimer equation
\begin{equation}
\label{diamagstern}
(\hat{H}_{\textbf{k}}-\epsilon_{n\textbf{k}})\vert\partial_{\dot{\alpha}} u^\alpha_{n\textbf{k},\textbf{q}}\rangle=\mathcal{Q}_{c,\textbf{k}+\textbf{q}}\left(\hat{p}_\alpha^{\textbf{k}}+\frac{q_\alpha}{2}\right)\vert u_{n\textbf{k}}\rangle.
\end{equation}
Recall that
$-\left(\hat{p}_\alpha^{\textbf{k}}+q_\alpha/2\right)$ is the
cell-averaged current operator in the case of a local potential. We
then apply the \emph{full} current operator [Eq.~(\ref{JqExpand})] to
obtain Eq.~(\ref{pdyn}) at several small but finite $q$ (as above,
$\vert q\vert < 0.04$) in order to perform a numerical second
derivative and obtain $\overline{P}^{(2,\omega\nu),\text{
CRG}}_{\alpha,\beta}$ from Eq.~(\ref{pchi}).
For the case of a material with cubic symmetry, where
$\chi_{\alpha\beta}=\chi_{\text{mag}}\delta_{\alpha\beta}$, we see
from Eq.~(\ref{pchi}) that there will be two nonzero elements of the
CRG contribution: $\overline{P}^{(2,22),\text{
CRG}}_{1,1}=2\chi_{\text{mag}}$ and $\overline{P}^{(2,12),\text{
CRG}}_{1,2}=-\chi_{\text{mag}}$. Therefore, the CI FxE
constants with the CRG contribution removed, $\mu^\prime$, are
given by \cite{StengelUNPUB}
\begin{equation}
\label{mucorr}
\begin{split}
&\mu_{\text{L}}^\prime=\mu_{\text{L}},
\\
&\mu_{\text{S}}^\prime=\mu_{\text{S}}-\chi_{\text{mag}},
\\
&\mu_{\text{T}}^\prime=\mu_{\text{T}}+2\chi_{\text{mag}},
\end{split}
\end{equation}
for cubic materials.
\subsection{Rigid-core correction \label{RCC}}
It was demonstrated in Ref.~\onlinecite{Hong2011} that the CI
FxE constants depend on the treatment of the core density, which will
be different for a different choice of pseudopotential. This
dependence is exactly canceled when the surface contribution is
calculated consistently with the same pseudopotentials
\cite{Stengel2013natcom,StengelChapter}. In order to report more
``portable'' values for the bulk FxE coefficients, we apply the
rigid-core correction (RCC) of Refs.~\onlinecite{Hong2011} and
\onlinecite{Hong2013}:
\begin{equation}
Q^{\text{RCC}}_\kappa=4\pi\int dr\, r^4 \left[\rho_\kappa^{\text{AE}}(r)-\rho_\kappa^{\text{PS}}(r)\right],
\end{equation}
where $\rho_\kappa^{\text{AE}}(r)$ is the all-electron density of the
free atom of type $\kappa$, and $\rho_\kappa^{\text{PS}}(r)$ is the
corresponding pseudocharge density. In Table \ref{RCCtab} we list
$Q^{\text{RCC}}$ for the various atoms that we will require for the
cubic oxides reported below (no RCC is included for the noble gas
atoms in Sec.~\ref{Bench}). Specifically, for short circuit boundary
conditions, $\epsilon\sum_\kappa Q^{\text{RCC}}_{\kappa}/6\Omega$ must
be added to $\mu_{\text{L}}$ and $\mu_{\text{T}}$
\cite{StengelChapter}.
\begin{table}
\caption{$Q^{\text{RCC}}$ for the various atoms in the materials in Sec.~\ref{Cub} in units of e Bohr$^2$.}
\begin{ruledtabular}
\label{RCCtab}
\begin{tabular}{cccc}
& $Q^{\text{RCC}}$ & &$Q^{\text{RCC}}$ \\ \hline
Sr&$-5.93$&Ba &$-13.39$\\
Ti&$-0.54$&Zr &$-4.55$ \\
O &$-0.01$&Pb &$-15.16$\\
Mg &$-4.85$\\
\end{tabular}
\end{ruledtabular}
\end{table}
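The moment $Q^{\text{RCC}}_\kappa$ is a one-dimensional radial integral, which can be sketched as follows. The Gaussian densities below are synthetic stand-ins for the all-electron and pseudo atomic densities (illustration only, not real atomic data):

```python
import numpy as np

def rcc_moment(r, rho_ae, rho_ps):
    """Q^RCC = 4*pi * integral dr r^4 [rho_AE(r) - rho_PS(r)],
    evaluated on a uniform radial grid r."""
    dr = r[1] - r[0]
    return 4.0 * np.pi * np.sum(r**4 * (rho_ae - rho_ps)) * dr

# Normalized 3D Gaussians of different widths standing in for the
# all-electron (sharper) and pseudo (more diffuse) densities.
r = np.linspace(0.0, 10.0, 4001)
gauss = lambda s: np.exp(-r**2 / (2.0 * s**2)) / (s**3 * (2.0 * np.pi) ** 1.5)
print(rcc_moment(r, gauss(0.5), gauss(1.0)))  # ~ -2.25, i.e. 3*(0.5^2 - 1.0^2)
```

For a normalized Gaussian the integral equals $\langle r^2\rangle = 3s^2$, so the negative sign here simply reflects that the more diffuse density carries the larger second moment, consistent with the negative $Q^{\text{RCC}}$ values in Table~\ref{RCCtab}.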
\subsection{Computational details}
We have implemented the procedure for calculating the FxE coefficients
in the {\sc abinit} code \cite{Abinit_1}. The PBE generalized gradient
approximation functional \cite{pbe} is used throughout. The
conventional phonon and dielectric constant calculations are carried
out using the DFPT implementation available in the code
\cite{Abinit_phonon_1,Gonze1997}. In order to solve the
nonselfconsistent Sternheimer Eqs.~(\ref{diamagstern}) and
(\ref{deltastern}), {\sc abinit}'s implementation of the variational
approach of Ref.~\onlinecite{Gonze1997} is used.
The nuclei and core electrons are described with optimized
norm-conserving Vanderbilt pseudopotentials \cite{Hamann2013} provided
by {\sc abinit}. For the cubic oxides, an $8\times8\times8$
Monkhorst-Pack \cite{Monkhorst1976} $k$-point mesh is used to sample
the Brillouin zone, and the plane-wave energy cutoff is set to 60
Ha. For the isolated atoms, a $2\times2\times2$ $k$-point mesh is used,
and the plane-wave energy cutoff is set to 70 Ha.
\section{Results\label{Res}}
\subsection{Benchmark test: Isolated noble gas atoms\label{Bench}}
\subsubsection{Isolated rigid charge model\label{IRCmod}}
In order to test the implementation described in Sec.~\ref{Imp}, we
consider the toy model of a material made of rigid noninteracting
spherical charge distributions arranged in a simple cubic lattice, as
explored in Refs.~\onlinecite{StengelUNPUB},
\onlinecite{Stengel2013natcom}, and \onlinecite{StengelChapter}. We
shall refer to this henceforth as the ``isolated rigid charge" (IRC)
model. Of course, such a material is fictitious, since it would have
no interatomic forces to hold it together; even so, it serves as an
interesting test case since its FxE properties can be determined
analytically and compared to our numerical calculations. In this
section, we will briefly summarize the expectations of the IRC model
(see Refs.~\onlinecite{StengelUNPUB} and
\onlinecite{Stengel2013natcom} for a more complete discussion).
For the IRC ``material,'' there is only one sublattice per cell. Each
``atom'' is represented by a spherically symmetric charge density
$\rho_{\text{IRC}}(r)$ that falls to zero beyond a cutoff $r_c$ chosen
small enough to ensure that the atomic spheres do not overlap. The
atoms are assumed to be neutral,
$\int_0^{r_c}\rho_{\text{IRC}}(r)\,r^2\,dr=0$. It was shown in
Ref.~\onlinecite{StengelUNPUB} that the longitudinal and shear
coefficients for the IRC model calculated from the induced current-density
are
\begin{equation}
\label{muIRC}
\mu_{\text{L,IRC}}=\mu_{\text{S,IRC}}=\frac{Q_{\text{IRC}}}{2\Omega},
\end{equation}
where $\Omega=a^3$ is the cell volume, and
\begin{equation}
\label{QIRC}
Q_{\text{IRC}}=\int d^3r \rho_{\text{IRC}}(r)x^2
\end{equation}
is the quadrupolar moment of the atomic charge density (of course the
direction $x$ is arbitrary since the charge density is spherically
symmetric).
The FxE constants in Eq.~(\ref{muIRC}) include the CRG
contribution to the current discussed in
Sec.~\ref{diamag}\cite{StengelChapter,
StengelUNPUB,Stengel2014}. Removing this contribution from our bulk
coefficients [see Eq.~(\ref{mucorr})] results in the primed
coefficients for the IRC model\cite{StengelUNPUB}
\begin{equation}
\label{muprimeIRC}
\mu_{\text{L,IRC}}^\prime=\frac{Q_{\text{IRC}}}{2\Omega},\;\;\;\mu_{\text{S,IRC}}^\prime=0,
\end{equation}
where the CRG contribution is given by
\begin{equation}
\label{XIRC}
\chi_{\text{mag,IRC}}=\mu_{\text{S,IRC}}= \frac{Q_{\text{IRC}}}{2\Omega}.
\end{equation}
If we assume that Larmor's theorem holds (i.e., that the CRG
contribution is identical to the magnetic susceptibility),
Eq.~(\ref{XIRC}) is just a statement of the Langevin theory of
diamagnetism, which relates the magnetic susceptibility to the
quadrupole moment of a spherical atomic charge (see Sec.~\ref{Disc}).
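For a spherically symmetric density, Eq.~(\ref{QIRC}) reduces to $(4\pi/3)\int dr\, r^4\rho(r)$, which is the form used in Langevin's diamagnetism. A quick numerical check of this reduction on a Gaussian test density (illustrative only; it is not a neutral IRC atom, only the angular reduction is being tested):

```python
import numpy as np

# Quadrupole moment Q = integral d^3r rho(r) x^2; for a spherical density
# this equals (4*pi/3) * integral dr r^4 rho(r). Check both on rho = e^{-r^2}.
x = np.linspace(-4.0, 4.0, 81)               # dx = 0.1; tails are negligible
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))
Q_3d = np.sum(rho * X**2) * (x[1] - x[0]) ** 3

r = np.linspace(0.0, 7.0, 2001)
Q_rad = 4.0 * np.pi / 3.0 * np.sum(np.exp(-r**2) * r**4) * (r[1] - r[0])

print(Q_3d, Q_rad)  # both ~ pi^{3/2}/2 = 2.784...
```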
\subsubsection{Noble gas atoms\label{noble}}
In the following subsections (\ref{RSmoment}, \ref{tstlong},
\ref{tstshear}), we will compare the behavior of this model with the
results of DFT calculations on isolated noble gas atoms. Several
points should be considered when comparing the results of such
calculations to the expectations of the IRC model (relations in
Sec.~\ref{IRCmod}).
Firstly, the noble gas atoms in our DFT calculations are slightly
polarizable, i.e., not perfectly described by rigid charge
densities. For this reason the longitudinal FxE coefficient will
depend on the choice of electrostatic boundary conditions (see
Sec.~\ref{electroBC}). We will use mixed electrical boundary
conditions, where we should find [analogously to Eq.~(\ref{muIRC})]
\begin{equation}
\label{muNG}
\mu_{\text{L,NG}}^{\text{ME}}=\frac{Q_{\text{NG}}}{2\Omega},
\end{equation}
where the subscript ``NG'' indicates a DFT calculation on a noble
gas atom, and $Q_{\text{NG}}$ is the quadrupole moment of the
unperturbed, ground-state charge density of the noble gas atom. If we
had used short circuit boundary conditions, there would have been a
factor of $\epsilon$ on the right-hand side of Eq.~(\ref{muNG}). Of
course, in the IRC model, the ``atoms'' are neutral, rigid, and
spherical, so $\epsilon=1$, and, from Eq.~(\ref{BCs}), short circuit
and mixed electric boundary conditions give the same FxE coefficients.
Also, since our noble-gas-atom calculations will use nonlocal
pseudopotentials, the equality of $\mu_{\text{S,NG}}$ and
$Q_{\text{NG}}/2\Omega$ is not guaranteed; in fact, we will see in
Sec.~\ref{tstshear} that they are not equal. This will
be discussed further in Sec.~\ref{Disc} in the context of the
expected symmetry of the charge response.
Similarly, we will find
that $\chi_{\text{mag}}$ does not equal $Q_{\text{NG}}/2\Omega$
[\textit{cf}. Eq.~(\ref{XIRC})], indicating that Larmor's theorem
breaks down for our form of the current in the presence of nonlocal
pseudopotentials (discussed in Sec.~\ref{Disc}).
Note that, as with the IRC model, we will drop the $\kappa$ subscript
when discussing the noble gas atoms since the ``crystals'' that we are
considering have only a single sublattice. Also, as all calculations
will use mixed electrical boundary conditions, we will drop the
explicit ``ME'' labels.
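As a concrete numerical reference for the rigid-density relation in Eq.~(\ref{muNG}), the quadrupole moment of a spherical charge density can be evaluated on a real-space grid. The sketch below does this for a model Gaussian ``atom''; the box size, width parameter, and charge are invented for illustration and do not correspond to any pseudopotential used in this work.

```python
import numpy as np

# Model "atom": a spherical Gaussian charge density in a cubic box.
# All parameters are illustrative only (not a real pseudo-atom).
L = 16.0     # box edge, mirroring the 16x16x16 Bohr cells used in the text
n = 96       # grid points per direction
alpha = 1.5  # Gaussian width parameter
q = -8.0     # total charge of the model density

x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r2 = X**2 + Y**2 + Z**2
rho = q * (alpha / np.pi) ** 1.5 * np.exp(-alpha * r2)
dV = (L / n) ** 3
Omega = L**3

# Quadrupole moment Q = \int rho(r) x^2 d^3r; for a spherical density
# this equals (1/3) \int rho(r) r^2 d^3r.
Q_x = np.sum(rho * X**2) * dV
Q_r = np.sum(rho * r2) * dV / 3.0
Q_analytic = q / (2.0 * alpha)   # exact result for the Gaussian model

mu_L_model = Q_x / (2.0 * Omega)  # rigid-density mu_L, cf. Eq. (muNG)
```

For a converged grid the two evaluations of $Q$ coincide, and the longitudinal coefficient of the rigid model is simply $Q/2\Omega$.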
\subsubsection{Computational strategy: Real-space moments of the charge density\label{RSmoment}}
In addition to the relations in Eqs.~(\ref{muIRC}),
(\ref{muprimeIRC}), and (\ref{XIRC}) of Sec.~\ref{IRCmod} and
Eq.~(\ref{muNG}) of Sec.~\ref{noble}, we can perform specific tests of
the components of our implementation by exploiting the correspondence
between two methods of calculating the FxE coefficients: (i) the
long-wavelength expansion in reciprocal space of the polarization
induced by a phonon [i.e., Eq.~(\ref{muI})] that we have described so
far in this work, and (ii) the computation of the real-space moments
of the induced microscopic polarization or charge density from the
displacement of an isolated atom in a crystal
\cite{Stengel2013,Hong2013}. For the case of the isolated noble gas
atoms, displacing the entire sublattice (i.e., applying a $\textbf{q}=0$
acoustic phonon perturbation) is equivalent to displacing a single
atom.
It is particularly useful to compare our methodology to the real-space
moments of the induced charge density, since they can be readily
calculated from a conventional, DFPT phonon calculation (with
$\textbf{q}=0$). Specifically, the longitudinal noble-gas response in
direction $\alpha$ is \cite{Stengel2013,Hong2013}
\begin{equation}
\begin{split}
\label{comp}
\mu_{\text{L,NG}}&=-\frac{1}{2}\frac{\partial^2\overline{P}_{\alpha,\alpha}^{\textbf{q},\text{NG}}}{\partial q_\alpha^2}\Bigg\vert_{\textbf{q}=0}
=\frac{1}{6\Omega}\int_{\text{cell}}d^3r\rho^{\text{NG}}_{\alpha\textbf{q}=0}(\textbf{r})r_\alpha^3,
\end{split}
\end{equation}
where $\rho^{\text{NG}}_{\alpha\textbf{q}}(\textbf{r})\equiv \partial
\rho^{\text{NG}}(\textbf{r})/\partial \lambda_{\alpha\textbf{q}}$ is
the first-order induced charge density from a phonon with wavevector
$\textbf{q}$ and noble gas atoms displaced in the $\alpha$
direction. $\overline{P}_{\alpha,\alpha}^{\textbf{q}}$ is calculated
with mixed electrical boundary conditions. As mentioned in
Sec.~\ref{noble}, the right-hand side of Eq.~(\ref{comp}) equals
$Q_{\text{NG}}/2\Omega$. Recall that, since the charge density is
related to the divergence of the polarization, it only gives the
longitudinal FxE coefficient. Therefore, we can only use an expression
like the one in Eq.~(\ref{comp}) to test our implementation of
$\mu_{\text{L}}$.
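The real-space route of Eq.~(\ref{comp}) can be illustrated with a rigid-density toy model: displacing a frozen spherical density gives the induced charge $\rho_{\rm ind}=-\partial\rho_0/\partial x$, whose first moment recovers the total (rigid-ion Born) charge, and whose third moment over $6\Omega$ recovers $Q/2\Omega$, by integration by parts. The sketch below verifies this on a grid for an invented Gaussian density (parameters are illustrative only, not from the paper's calculations).

```python
import numpy as np

# Induced charge from rigidly displacing a spherical model density:
# rho_ind = -d(rho0)/dx per unit displacement. Its real-space moments
# give the Born charge and the longitudinal FxE coefficient of the
# rigid-density model. All parameters are invented for illustration.
L, n, alpha, q = 16.0, 96, 1.5, -8.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho0 = q * (alpha / np.pi) ** 1.5 * np.exp(-alpha * (X**2 + Y**2 + Z**2))
rho_ind = 2.0 * alpha * X * rho0          # analytic -d(rho0)/dx
dV = (L / n) ** 3
Omega = L**3

Zstar = np.sum(rho_ind * X) * dV                    # first moment
mu_L = np.sum(rho_ind * X**3) * dV / (6.0 * Omega)  # cf. Eq. (comp)
Q = np.sum(rho0 * X**2) * dV                        # static quadrupole

# Integration by parts: Zstar = q and mu_L = Q / (2 * Omega).
```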
In general (i.e., not specific to the case of the isolated noble gas
atoms), the induced charge density can be split into contributions
from the local and nonlocal parts of the Hamiltonian, as we did for
the polarization in Eq.~(\ref{PqExpand}). Using the continuity
condition, we can write the first-order charge
as
\begin{equation}
\label{delchg}
\rho_{\alpha\textbf{q}}(\textbf{G}+\textbf{q})=-i(\textbf{G}+\textbf{q})\cdot\textbf{P}^{\text{loc}}_{\alpha\textbf{q}}(\textbf{G}+\textbf{q})+\rho_{\alpha\textbf{q}}^{\text{nl}}(\textbf{G}+\textbf{q}) .
\end{equation}
Here $\textbf{P}^{\text{loc}}_{\alpha\textbf{q}}$ is the ``local''
part of the induced polarization and
$\rho_{\alpha\textbf{q}}^{\text{nl}}$ is the nonlocal charge
introduced in Sec.~\ref{contsec}. Using the reciprocal-space version
of Eq.~(\ref{jloc}), the local induced polarization is (assuming TRS)
\begin{equation}
\label{Ploc}
\begin{split}
P^{\text{loc}}_{\alpha,\alpha\textbf{q}}(\textbf{G}+\textbf{q})=-\frac{2}{N_k}\sum_{n\textbf{k}}\langle \psi_{n\textbf{k}}\vert \left\{ e^{-i(\textbf{G}+\textbf{q})\cdot\hat{\textbf{r}}}, \hat{p}_\alpha\right\}\vert\delta \psi^{\alpha}_{n\textbf{k},\textbf{q}}\rangle
\end{split}
\end{equation}
and the nonlocal charge density from Eq.~(\ref{rhoNL}) is given (in
reciprocal space) by
\begin{equation}
\label{rhonl}
\begin{split}
\rho^{\text{nl}}_{\alpha\textbf{q}}(\textbf{G}+\textbf{q})=-\frac{4i}{N_k}\sum_{n\textbf{k}}\langle \psi_{n\textbf{k}}\vert \left[ e^{-i(\textbf{G}+\textbf{q})\cdot\hat{\textbf{r}}}, \hat{V}^{\text{nl}}\right]\vert\delta \psi^{\alpha}_{n\textbf{k},\textbf{q}}\rangle .
\end{split}
\end{equation}
The first-order charge on the left-hand side of Eq.~(\ref{delchg}) can
be obtained from a conventional DFPT phonon calculation, and thus
Eq.~(\ref{delchg}) allows for several tests of our methodology.
A simple test of the nonlocal contribution at $\textbf{q}=0$ is to
compare the dipole moment of the nonlocal charge with
$\overline{P}_{\alpha,\alpha}^{\textbf{q}\text{,nl}(0)}$ [i.e., the
second term in Eq.~(\ref{ICLimp})], which should give the nonlocal
contribution to the Born effective charge
\begin{equation}
\label{Ztst}
Z^*_{\alpha\beta,\text{nl}}=\overline{P}_{\alpha,\beta}^{\textbf{q}=0,\text{nl}}=\int_{\text{cell}}d^3r\rho^{\text{nl}}_{\beta\textbf{q}=0}(\textbf{r})r_\alpha.
\end{equation}
Again, this relation is generally applicable. For cubic symmetry, the
Born effective charge tensor has only one independent element, which
we write as $Z^*\equiv Z^{*\text{NG}}_{\alpha\alpha}$. Of course, for
the case of the noble gas atom ``material,'' there is only one
sublattice, so the sum of the nonlocal contribution with the local part
(including the ionic charge) will vanish due to the acoustic sum rule
(ASR) \cite{Pick1970}.
For the case of the isolated noble gas atoms, we can use
Eqs.~(\ref{comp}) and (\ref{delchg}) to relate the real-space octupole
moment of $\rho^{\text{nl}}_{\alpha\textbf{q}=0}(\textbf{r})$ [Fourier
transform of Eq.~(\ref{rhonl})] averaged over the cell, to the second
\textbf{q} derivative of
$\overline{P}_{\alpha,\alpha}^{\textbf{q}\text{,nl}}$ [see
Eq.~(\ref{Pqexpand2})] evaluated at
$\textbf{q}=0$. Specifically, we should find that
\cite{Hong2013,Stengel2013}
\begin{equation}
\begin{split}
\label{NLcomp}
-\frac{1}{2}\frac{\partial^2\overline{P}_{\alpha,\alpha}^{\textbf{q},\text{nl,NG}}}{\partial q_\alpha^2}\Bigg\vert_{\textbf{q}=0}=\frac{1}{6\Omega}\int_{\text{cell}}d^3r\rho^{\text{nl,NG}}_{\alpha\textbf{q}=0}(\textbf{r})r_\alpha^3,
\end{split}
\end{equation}
and similarly for the local part,
\begin{equation}
\label{loccomp}
-\frac{1}{2}\frac{\partial^2\overline{P}_{\alpha,\alpha}^{\textbf{q},\text{loc,NG}}}{\partial q_\alpha^2}\Bigg\vert_{\textbf{q}=0}=\frac{1}{6\Omega}\int_{\text{cell}}d^3r\left[-\nabla\cdot\textbf{P}^{\text{loc,NG}}_{\alpha\textbf{q}=0}(\textbf{r})\right]r_\alpha^3,
\end{equation}
where we again perform the reciprocal space calculations using mixed electrical
boundary conditions.
The comparisons in Eqs.~(\ref{NLcomp}) and (\ref{loccomp}) test both
the long-wavelength expansion of the current operator (local and
nonlocal), and the accuracy of the adiabatic first-order wavefunction
at finite \textbf{q}.
\subsubsection{Test of implementation: Longitudinal response\label{tstlong}}
To test $P^{\text{loc}}_{\alpha,\alpha\textbf{q}=0}$ and $\delta
\psi^{\alpha}_{n\textbf{k},\textbf{q}=0}$, we calculate the
first-order charge [left-hand side of Eq.~(\ref{delchg})] from a
$\textbf{q}=0$ phonon by conventional DFPT, and compare to what we
obtain for the right-hand side of Eq.~(\ref{delchg}) calculated using
Eqs.~(\ref{Ploc}) and (\ref{rhonl}) (with $\textbf{q}=0$). We Fourier
transform the quantities in Eq.~(\ref{delchg}) to real space and plot
their planar averages in Fig.~\ref{NLchg} for He, Ne, Ar, and Kr
atoms in $16\times16\times16$ Bohr cells. Summing the contributions
from the nonlocal charge (blue dashed curves) and the gradients of the
local induced polarization (green dot-dashed) gives the red solid
curves in Fig.~\ref{NLchg}. As expected from Eq.~(\ref{delchg}), the
red curve lies on top of the black circles, which correspond to the
first-order charge from the $\textbf{q}=0$ DFPT phonon calculations.
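The planar averages shown in Fig.~\ref{NLchg} are obtained by averaging the gridded first-order charge over the two directions perpendicular to the displacement. A minimal sketch of the operation (using a made-up, odd-in-$x$ model field in place of the DFPT output):

```python
import numpy as np

# Planar average along x of a 3D field sampled on a regular grid.
# The field is a made-up Gaussian stand-in for a first-order charge
# density (odd in x, like a dipolar response to a displacement along x).
L, n = 16.0, 64
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = 3.0 * X * np.exp(-1.5 * (X**2 + Y**2 + Z**2))

rho_bar = rho.mean(axis=(1, 2))   # average over y-z planes -> function of x
```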
\begin{figure}
\includegraphics[width=\columnwidth]{./16x16x16.pdf}
\caption{\label{NLchg} (Color online) Planar average of the local
[Eq.~(\ref{Ploc}), green dot-dashed curve], nonlocal
[Eq.~(\ref{rhonl}), blue dashed], and total [Eq.~(\ref{delchg}), red
solid] first-order charge for noble gas atoms displaced in the $x$ direction by a $\textbf{q}=0$ phonon. The
black circles correspond to the first-order charge calculated using
a conventional, static, DFPT calculation. The box size is
$16\times16\times16$ Bohr, but zoomed in to only show $\pm 5$
Bohr.}
\end{figure}
Now we can take the real-space moments of the curves in
Fig.~\ref{NLchg} and compare them with the results of our reciprocal
space expansion.
As discussed in Sec.~\ref{RSmoment}, the first moment of the blue
dashed curves gives the nonlocal contribution to the Born effective
charge, which should correspond to
$\overline{P}_{\alpha,\alpha}^{\textbf{q}=0,\text{nl}}$
[Eq.~(\ref{Ztst})]. In Table \ref{NLchgtab} we give the nonlocal
contribution to $Z^*$ for the noble gas atoms in $14\times14\times14$
Bohr boxes. The ASR requires that the total $Z^*$ vanishes; for our
noble gas atoms, we calculate the magnitude of the total $Z^*$ to be
less than $10^{-4}$ e, so the ``local'' part (including the
contribution from the ionic charge) is the same magnitude but opposite
sign as the numbers in the second and third columns of Table
\ref{NLchgtab}.
The second column of Table \ref{NLchgtab}, labeled $P^{\text{nl}}$, is
calculated using the reciprocal space current and the third column
(labeled $\rho^{\text{nl}}$) is from the real-space dipole moment of
the charge density. We see that there is excellent agreement between
the two methods, indicating that
$\overline{P}_{\alpha,\alpha}^{\textbf{q}=0,\text{nl}}$ is accurately
calculated.
\begin{table}
\caption{Calculation of the Born effective charge and $\mu_{\text{L}}$
using the moments of the local and nonlocal charge (columns labeled
$\rho$) compared to the current-density implementation (columns
labeled $P$) for atoms in a $14\times14\times14$ Bohr box. Mixed electrical boundary conditions are used.}
\label{NLchgtab}
\begin{ruledtabular}
\begin{tabular}{c|cc|cccc}
& \multicolumn{2}{c|}{$Z^*$ (e)} &\multicolumn{4}{c}{$\mu_{\text{L}}$ (pC/m)}\\
& $P^\textrm{nl}$ &$\rho^\textrm{nl}$ &$P^\textrm{loc}$ &$\rho^\textrm{loc}$ &$P^\textrm{nl}$ &$\rho^\textrm{nl}$ \\
\hline
He &$-0.027$ &$-0.027$ &$-0.470$ &$-0.470$ &$0.004$ &$0.004$ \\
Ne &$-0.155$ &$-0.155$ &$-1.872$ &$-1.872$ &$0.028$ &$0.028$ \\
Ar &$1.556$ &$1.556$ &$-4.620$ &$-4.623$ &$0.073$ &$0.072$ \\
Kr &$-0.214$ &$-0.214$ &$-5.878$ &$-5.874$ &$-0.099$ &$-0.099$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
It is also clear from Fig.~\ref{NLchg} and Table \ref{NLchgtab}
that the nonlocal correction to the Born effective charge can be very
large, on the order of one electron for Ar. We see a similarly large
contribution for atoms with empty $3d$ shells (but projectors in this
channel) such as a Ca atom or Ti$^{4+}$ ion (not shown).
Now we would like to test the accuracy of our long-wavelength
expansion of the current operator (Sec.~\ref{longwave}) for
calculating $\mu_{\text{L}}$. In Table \ref{NLchgtab} we give both the
local and nonlocal contributions to $\mu_{\text{L}}$ using the
right-hand side of Eqs.~(\ref{NLcomp}) and (\ref{loccomp}) (labeled as
$\rho^{\text{loc}}$ and $\rho^{\text{nl}}$), compared to those
calculated from our current-density implementation [left-hand side of
Eqs.~(\ref{NLcomp}) and (\ref{loccomp}), labeled as $P^{\text{loc}}$
and $P^{\text{nl}}$]. The agreement between the real-space moments and
reciprocal-space derivatives of the expansion in Eq.~(\ref{ICLimp}) is
excellent. Also, we can see that even though the nonlocal contribution
to the Born effective charge is large for Ar, the first-order nonlocal
charge is almost purely dipolar, with the third moment being
almost two orders of magnitude smaller than the contribution of the
local part.
Also, from Table \ref{IRCtab} and Fig.~\ref{IRCplot}, we see that
$\mu_{\text{L}}= Q_{\text{NG}}/2\Omega$ holds quite accurately
[consistent with Eq.~(\ref{muNG})] for sufficiently large simulation
cells.
\subsubsection{Test of implementation: Shear response\label{tstshear}}
In Table \ref{IRCtab} we give the longitudinal and shear FxE
coefficients, as well as $\chi_{\text{mag}}$ and
$Q_{\text{NG}}/2\Omega$, for noble gas atoms in $14\times14\times14$
Bohr boxes. For $\mu_{\text{S}}$ and $\chi_{\text{mag}}$, we give
values using the ICL and PM paths for the nonlocal correction. In
Fig.~\ref{IRCplot}, we show the dependence of these quantities on the
box size.
\begin{table}
\caption{Longitudinal and shear (ICL and PM path) FxE coefficients for noble gas atoms in $14\times14\times14$ Bohr boxes, as well as the
diamagnetic susceptibility correction, $\chi_{\text{mag}}$ (ICL and PM path), and the quadrupole
moment of the unperturbed charge density divided by two times the
volume [\textit{cf.}~Eqs.~(\ref{muIRC}) and (\ref{QIRC})]. All quantities are in units of pC/m, and mixed electrical boundary conditions are used.}
\begin{ruledtabular}
\label{IRCtab}
\begin{tabular}{ccccccc}
&$\mu_{\text{L}}$ &$\mu_{\text{S}}^{\text{ICL}}$&$\mu_{\text{S}}^{\text{PM}}$ & $\chi_{\text{mag}}^{\text{ICL}}$ &$\chi_{\text{mag}}^{\text{PM}}$ &$Q_{\text{NG}}/2\Omega$\\
\hline
He &$-0.468$&$-0.467$&$-0.464$ &$-0.468$&$-0.464$&$-0.466$ \\
Ne &$-1.840$ &$-1.693$ &$-1.655$ &$-1.692$&$-1.655$&$-1.845$ \\
Ar &$-4.545$ &$-5.008$&$-5.086$ &$-5.013$&$-5.081$&$-4.554$ \\
Kr &$-5.968$ &$-5.901$ &$-5.917$ &$-5.903$&$-5.921$&$-5.990$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth]{./AtDiamag.pdf}
\caption{\label{IRCplot} (Color online) The longitudinal (red squares)
and shear (blue diamonds) FxE coefficients, as well as the
diamagnetic susceptibility correction (black circles) and
$Q_{\text{NG}}/2\Omega$, for (a) He, (b) Ne, (c) Ar, and (d) Kr
atoms in cells with various lattice constants. All quantities are
multiplied by the cell volume, $\Omega$. }
\end{figure}
From Table \ref{IRCtab} and Fig.~\ref{IRCplot}, we see that
$\mu_{\text{S}}=\chi_{\text{mag}}$ (consistent with the isotropic
symmetry of the atoms) for sufficiently large simulation
cells. However, for atoms other than He, $\chi_{\text{mag}}$ is
noticeably different from $Q_{\text{NG}}/2\Omega$, even for large box
sizes. This discrepancy demonstrates that either Larmor's theorem or
the Langevin theory of diamagnetism breaks down when nonlocal
pseudopotentials are present (see Sec.~\ref{Disc} for further
discussion).
When we compare the two path choices, PM (Sec.~\ref{formPM}) and ICL
(Sec.~\ref{formICL}), we find slight quantitative differences for the
shear component and the diamagnetic correction. However, the differences
between the paths vanish for $\mu_{\text{S}}^\prime$ [see
Eq.~(\ref{mucorr})], indicating that although the CRG contribution
is path-dependent, the ``true'' shear response (which vanishes for
spherical symmetry) is not, for this system. This result is an
excellent test that our implementation is sound. Indeed, for a cubic
solid, all three components of the electronic flexoelectric tensor
$\bm{\mu}'$ can be related to the surface charge accumulated via the
mechanical deformation of a finite crystallite; thus, they should not
depend on the aforementioned path choice.
As the path choice is irrelevant in our context, in the next Section
we shall perform our calculations on cubic oxides using the ICL path.
In Sec.~\ref{Disc} we shall provide a critical discussion of the ICL
and PM prescriptions from a more general perspective, and leave a
detailed comparison of the two approaches for a future work.
\subsection{Cubic oxides\label{Cub}}
We now apply our methodology to calculate the bulk, CI FxE
coefficients for several technologically important cubic oxides. As
mentioned before, we will be using short circuit boundary conditions
and the ICL path for the nonlocal contribution.
As an example of a typical calculation, in Fig.~\ref{STOPq} we plot
the induced polarization [Eq.~(\ref{ICLimp})] versus
$\textbf{q}=(q_x,0,0)$ for cubic SrTiO$_3$, both for polarization
direction and atomic displacement $\alpha=\beta=x$ and
$\alpha=\beta=y$. As expected, the dependence on $q$ is quadratic
(there is no linear term since cubic SrTiO$_3$ is not piezoelectric
\cite{Hong2013,Stengel2013}), and $\overline{P}^{\textbf{q}}=0$ at
$\textbf{q}=0$, which is required by the ASR condition that the sum of
the Born effective charges should vanish \cite{Pick1970}. By taking the
second derivative of the black (red) dashed curves in
Fig.~\ref{STOPq}, we can obtain $\mu_{11,11}^{\text{I}}$
($\mu_{11,22}^{\text{I}}$). The remaining coefficient
$\mu_{12,12}^{\text{I}}$ is obtained by calculating
$\overline{P}^{\textbf{q}}_{12}$ at various $\textbf{q}=(q_x,q_y,0)$,
and performing a numerical mixed derivative $\partial^2/\partial
q_x\partial q_y$ (not shown).
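In practice, the second derivatives are extracted numerically: a quadratic fit in $q$ for the diagonal components, and a central-difference stencil for the mixed derivative. The sketch below illustrates both steps on synthetic quadratic data (the numerical coefficients are invented, not the SrTiO$_3$ results).

```python
import numpy as np

# Extracting flexoelectric coefficients from P(q) data. The model
# P(q) = -mu * q**2 stands in for a DFPT data set; mu and the q grid
# are invented numbers used only for illustration.
mu_true = 0.87
qs = np.linspace(-0.15, 0.15, 7)
P = -mu_true * qs**2                 # no linear term: not piezoelectric

c2, c1, c0 = np.polyfit(qs, P, 2)    # fit P = c2 q^2 + c1 q + c0
mu_fit = -c2                         # mu = -(1/2) d^2P/dq^2 = -c2

# Mixed derivative d^2P/(dqx dqy) by central finite differences, for a
# mu_12,12-type component, again on a synthetic bilinear model.
h = 0.05
Pxy = lambda qx, qy: -0.08 * qx * qy
d2P = (Pxy(h, h) - Pxy(h, -h) - Pxy(-h, h) + Pxy(-h, -h)) / (4.0 * h**2)
```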
\begin{figure}
\includegraphics[width=\columnwidth]{./STOPq.pdf}
\caption{\label{STOPq} (Color online) Induced polarization vs
$\textbf{q}=(q_x,0,0)$ for cubic SrTiO$_3$. The black (red) points
correspond to the $x$ ($y$) component of the polarization for atomic
displacements of the atoms in the $x$ ($y$) direction. Dashed curves
are quadratic fits. Units are with respect to the calculated SrTiO$_3$ lattice constant $a=7.435$ Bohr.}
\end{figure}
In Table \ref{cubtab}, we give the FxE coefficients corrected for the
CRG contribution [\textit{cf.} Eq.~(\ref{mucorr})] and the RCC
(Sec.~\ref{RCC}). As discussed above, the RCC is added to the
longitudinal and transverse coefficients \cite{StengelChapter}. Note
that the reported $\chi_{\text{mag}}$ is given in pC/m, whereas other
quantities are in nC/m, so this correction is quite small for the
materials calculated. The contribution of the nonlocal potentials to
the FxE coefficients in Table \ref{cubtab}, which are computed using
the ICL path of Appendix \ref{ICL}, represents a more significant
correction than was the case in Sec.~\ref{Bench}: they are in the
range of $0.03$ to $0.12$ nC/m for the longitudinal and transverse
coefficients, and in the range of $-0.02$ to $0.008$ nC/m for the
shear coefficients.
\begin{table*}
\caption{ Lattice constant, CI dielectric constant, rigid-core correction, and longitudinal, transverse, and shear CI FxE
coefficients (under short circuit boundary conditions), as well as the diamagnetic susceptibility in units
of nC/m. The FxE constants include the CRG correction (Sec.~\ref{diamag}) and RCC (Sec.~\ref{RCC}).}
\begin{ruledtabular}
\label{cubtab}
\begin{tabular}{cccccccc}
& $a$ (Bohr) &$\epsilon$&RCC&$\mu^\prime_{\text{L}}$ &$\mu^\prime_{\text{T}}$ &$\mu^\prime_{\text{S}}$&$\chi_{\text{mag}}\times 10^{3}$ \\
\hline
SrTiO$_3$ &7.435&6.191 &$-0.049$&$-0.87$ ($-0.9$\footnotemark[1],$-0.88$\footnotemark[2]) & $-0.84$ ($-0.83$\footnotemark[2]) &$-0.08$($-0.08$\footnotemark[2])&$-7.3$ \\
BaTiO$_3$ &7.601& 6.657 &$-0.107$& $-1.01$ ($-1.1$\footnotemark[1])&$-0.99$ &$-0.08$& $-1.7$ \\
SrZrO$_3$ &7.882 &4.558 & $-0.049$& $-0.63$ &$-0.58$ & $-0.05$&$-36.0$ \\
PbTiO$_3$ &7.496 & 8.370&$-0.158$ &$-1.39$ ($-1.5$\footnotemark[1]) &$-1.35$ &$-0.09$ & $-22.4$\\
MgO &8.058 &3.148&$-0.015$ &$-0.28$ ($-0.3$\footnotemark[1]) & $-0.30$ &$-0.07$ & $-66.1$\\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Reference~\onlinecite{Hong2013}}
\footnotetext[2]{Reference~\onlinecite{Stengel2014}}
\end{table*}
The only material for which first-principles calculations of the
transverse and shear coefficients are available (in parentheses in
Table \ref{cubtab}) is SrTiO$_3$, and our values are in excellent
agreement with those previous calculations \cite{Stengel2014}.
For all of the materials, the longitudinal and transverse responses
are of similar magnitude, and the shear response is significantly
smaller. This is a similar trend to that of the isolated noble gas
atoms and of the IRC model [\textit{cf.}~Eq.~(\ref{muprimeIRC})],
suggesting that the response is dominated by the ``spherical''
contribution. The behavior of the cubic oxides differs significantly
from that of the IRC model, however, when it comes to the contribution of the
CRG correction $\chi_{\text{mag}}$. For isolated atoms,
$\chi_{\text{mag}}$ is equal to $\mu_{\text{IRC,S}}$, and is of the
same order as $\mu^\prime_{\text{IRC,L}}$; therefore, a vanishing
value of $\mu^\prime_{\text{IRC,S}}$ is only obtained after removing
the CRG contribution [Eq.~(\ref{mucorr})]. In the case of the
cubic oxides, the CRG correction is only a minor contribution to
$\mu^\prime_{\text{S}}$, and $\chi_{\text{mag}}$ is two orders of
magnitude smaller than $\mu^\prime_{\text{L}}$. In fact,
$\chi_{\text{mag}}$ for the cubic oxides is comparable to that of the
isolated atoms, while the FxE coefficients for the cubic oxides are
two orders of magnitude larger. This indicates that although the
bonding of atoms in the cubic compounds significantly enhances the FxE
coefficients, it does not have a large effect on the CRG
correction.
It should be noted that the value of $\chi_{\text{mag}}$ for SrTiO$_3$
($-2.28\times10^{-7}$ cm$^3$/g after unit conversion) is in fair
agreement with the measured diamagnetic susceptibility of around
$-1\times10^{-7}$ cm$^3$/g from Ref.~\onlinecite{Frederikse1966}.
\section{Discussion\label{Disc}}
Before closing, it is useful to recap the technical issues
that are associated with the calculation of the current density
response in a nonlocal pseudopotential context, and critically
discuss them in light of the results presented in this work.
In particular, it is important to clarify whether our proposed
approach matches the expectations, especially regarding the
known transformation properties of the current density upon
rototranslations, or whether there is any deviation that needs to
be kept in mind when computing flexoelectric coefficients and
other current-related linear-response properties.
As we have already discussed at length in the earlier Sections,
our definition of the current density (i) satisfies the continuity
equation by construction, (ii) correctly reduces to the textbook formula
in the region of space where the Hamiltonian is local, and (iii) is consistent
with the known formula for the macroscopic current operator.
However, we have not yet discussed some additional properties of the
current density that were established in earlier works and that might be
used as ``sanity checks'' of our implementation:
\begin{itemize}
\item Translational invariance of the charge-density response:
As established by Martin \cite{Martin1972}, simultaneous uniform
translation of all atoms in the crystal must yield the same variation in charge
density at every point as if the static charge density were
rigidly shifted. Therefore, if the whole crystal undergoes
a translation with uniform velocity ${\bf v}$, the
current density in the laboratory frame must be
\begin{equation}
{\bf J}({\bf r}) = {\bf v} \rho({\bf r}),
\label{transl}
\end{equation}
where $\rho({\bf r})$ is the static charge density.
\item Larmor's theorem: The circulating currents generated in a
crystallite by a uniform rotation with constant angular velocity
$\bm{\omega}$ (as observed in the frame of the rotating
material) are, in the linear limit of small velocities, identical
to the orbital currents that would be generated by an applied (and
constant in time) ${\bf B}$-field. As a corollary, the rotational
$g$-factor of closed-shell molecules corresponds to their
paramagnetic susceptibility.
\item Langevin's diamagnetism: The magnetic susceptibility of a
spherically symmetric atom is proportional to the quadrupolar moment
of its ground-state charge density.
\end{itemize}
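The first of these criteria, Eq.~(\ref{transl}), simply states that the current of a rigidly translating density must satisfy the continuity equation pointwise. A one-dimensional numerical sketch (with an arbitrary Gaussian density and an invented velocity) makes this explicit:

```python
import numpy as np

# Continuity check for a rigidly translating density in one dimension:
# for rho(x, t) = rho0(x - v*t), the current J = v * rho satisfies
# d(rho)/dt + dJ/dx = 0. The Gaussian profile and velocity are arbitrary.
v = 0.3
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
dt = 1e-5

rho = lambda xx, t: np.exp(-(xx - v * t) ** 2)

drho_dt = (rho(x, dt) - rho(x, -dt)) / (2.0 * dt)   # central difference in t
J = v * rho(x, 0.0)
dJ_dx = np.gradient(J, dx)

residual = np.max(np.abs(drho_dt + dJ_dx))   # vanishes on grid refinement
```

Any candidate current operator that fails this check cannot describe a rigid translation correctly.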
In the following, we shall analyze how our formalism stands in
relation to these latter ``weak'' [compared to the ``strong''
conditions (i-iii) above] criteria of validity.
(By ``weak'' we mean not required for a
physically sound calculation of the flexoelectric tensor, but possibly
necessary for a wider range of physical properties.)
\subsection{Translational invariance of the charge-density response}
Based on our results in Table~\ref{IRCtab}, we can safely conclude that both
flavors of the current-density operator (ICL and PM) break translational
invariance, Eq.~(\ref{transl}).
To see this, consider the shear flexoelectric coefficient of an
isolated atom in a box, (e.g., $\mu_{\rm S,NG}$). This quantity can be
defined in real space as the second moment of the microscopic
current-density response to the displacement of an isolated atom,
\begin{equation}
\mu_{\rm S} = \frac{1}{2\Omega} \int d^3 r \frac{\partial J_y({\bf r})}{\partial \dot{\lambda}_y} x^2,
\label{mus-jr}
\end{equation}
where $\dot{\lambda}_y$ stands for the velocity of the atom along $y$.
This formula, as it stands, is not very practical for calculations:
our implementation does not allow for a fully microscopic
calculation of ${\bf J}({\bf r})$, and therefore we had to
replace Eq.~(\ref{mus-jr}) with computationally more tractable
small-${\bf q}$ expansions.
Still, Eq.~(\ref{mus-jr}) is quite useful for our purposes, as it
allows us to draw general conclusions about ${\bf J}({\bf r})$ without
the need for calculating it explicitly.
In particular, if translational invariance [Eq.~(\ref{transl})] were
satisfied, then we could plug Eq.~(\ref{transl}) into
Eq.~(\ref{mus-jr}) and use Eq.~(\ref{QIRC}) to obtain $\mu_{\rm S,NG}
= \frac{1}{2\Omega}\int d^3r \rho(\textbf{r}) x^2
=Q_{\text{NG}}/2\Omega$. [This equality is a necessary but not
sufficient condition for the validity of Eq.~(\ref{transl}).]
As we can see from Table~\ref{IRCtab}, $\mu_{\rm S,NG}$ is only approximately
equal to $Q_{\text{NG}}/2\Omega$ for both the ICL and PM flavors of
the current-density operator.
This implies that neither approach is able to guarantee translational
invariance.
Similarly, the data we have in hand
do not allow us to establish a clear preference between the PM and
ICL recipes, as the discrepancies between the two are typically much
smaller (and devoid of a systematic trend) than their respective
failures to satisfy $\mu_{\rm S,NG} = Q_{\text{NG}}/2\Omega$. Note that the discrepancy
consists strictly of \emph{solenoidal} (i.e., divergenceless)
contributions to the current response; the longitudinal components are
treated exactly, as one can verify from the excellent match between
the longitudinal coefficient, $\mu_{\text{L}}$, and the quadrupolar
estimate in Table~\ref{IRCtab}.
\subsection{Langevin diamagnetism and Larmor's theorem}
We come now to the assessment of the Larmor and Langevin results.
One of the virtues of the PM recipe resides in its superior accuracy
when comparing the orbital magnetic response to all-electron data.
Indeed, in the context of our discussion, one can verify that it
exactly complies with Langevin's theory of diamagnetism in
the case of isolated spherical atoms.
\footnote{This can be deduced from Eqs.~(12) and (13) of Ref.~\onlinecite{Pickard2003}:
By placing a single spherical pseudoatom at the gauge origin, all nonlocal
contributions vanish by construction as they are multiplied by ${\bf R}$;
thus, the applied magnetic field enters the Hamiltonian via the usual
substitution ${\bf p} \rightarrow {\bf p} + {\bf A}$. Then, the first order
Hamiltonian is the angular momentum operator, which commutes with the
ground-state density matrix and yields a vanishing linear response, and
the second-order piece picks up the quadrupolar moment of the ground-state density,
as in the local case.}
The situation, however, is not so bright regarding Larmor's theorem.
If the latter were satisfied, then the ``rotational orbital susceptibility''
$\chi_{\rm mag}$ would match Langevin's quadrupolar expression, as we know that
Langevin's result holds in the case of a ``true'' ${\bf B}$-field.
Looking again at Table~\ref{IRCtab}, we clearly see that this is not the
case: there is a discrepancy between the last column (based on the static
quadrupole) and the calculated values of $\chi_{\rm mag}$.
Since the deviations in $\chi_{\rm mag}$ and $\mu_{\rm S}$ are essentially
identical in the limit of an isolated atom in a box, it is reasonable to
assume that the underlying factors are similar.
It should be noted that our value for Ne (after unit conversion, ICL path) is
$\chi_{\text{mag}}^{\text{ICL}}=-7.29\times10^{-6}$ cm$^3$/mole, which is fairly
close in magnitude to previously calculated values of the diamagnetic
susceptibility of Ne: $-7.75\times10^{-6}$ cm$^3$/mole\cite{ICL2001}
and $-7.79\times10^{-6}$ cm$^3$/mole\cite{Mauri1996}.
\subsection{Unphysical spatial transfer resulting from nonlocal pseudopotentials}
The reason why the current density violates both translational
invariance and Larmor's theorem has to be sought in the unphysical
transfer of density that can result from the presence of a
nonlocal potential. That is, a nonlocal operator may project the
wavefunction (and therefore the particle amplitude) from a point ${\bf
r}$ to a distant point ${\bf r}'$ in a discontinuous manner, such
that no current flows through a given surface surrounding ${\bf
r}$ even though the charge density within that surface changes. Of course,
this is just a conceptual way of describing the violation of the
continuity equation, discussed in Sec.~\ref{curden}.
Taking the example of a single atom placed at $\textbf{R}=0$ and using
the PM approach, it is shown in Appendix \ref{appDiv} that the current
density can be written as
\begin{equation}
{\bf J}^{\rm nl}({\bf r}) \sim \frac{\hat{\bf r} C(\hat{\bf r})}{r^2} ,
\end{equation}
where $C(\hat{\textbf{r}})$ is a direction-dependent constant that
depends on the nonlocal charge [Eq.~(\ref{Cr})]. Therefore, the
current-density field diverges near the atomic site,
$\textbf{r}\rightarrow0$, and such a divergence can have a different
prefactor and sign depending on the direction.
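The pathology of such a field can be made concrete by computing its flux through spheres centered on the atom: the $r^{-2}$ decay exactly cancels the $r^2$ growth of the surface area, so the flux is the same nonzero constant at every radius, as if charge were being created at the origin. A small numerical sketch (with an invented angular prefactor in place of the actual $C(\hat{\bf r})$) illustrates this:

```python
import numpy as np

# Flux of a current field J = rhat * C(rhat) / r^2 through spheres of
# different radii. The angular prefactor C is invented for illustration.
C = lambda theta: 1.0 + 0.5 * np.cos(theta) ** 2

def flux(radius, ntheta=2000):
    """Surface integral of J . dA over a sphere (midpoint rule in theta).

    J . rhat = C(theta) / radius**2 and dA = radius**2 sin(theta) dtheta
    dphi, so the radius cancels and the flux is independent of it.
    """
    dtheta = np.pi / ntheta
    theta = (np.arange(ntheta) + 0.5) * dtheta
    return 2.0 * np.pi * np.sum(C(theta) * np.sin(theta)) * dtheta

f_small, f_large = flux(0.1), flux(10.0)
```

For the physical current of a rigidly moving, everywhere-finite density, by contrast, the flux through a sphere scales with its area and vanishes as the sphere shrinks.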
A diverging ${\bf J}$-field is problematic to deal with and
unphysical. One can easily realize that this characteristic is
incompatible, for example, with the correct transformation laws of
${\bf J}$ under rigid translations.
In particular, the electronic charge density is always finite in a
vicinity of the nucleus, even in the all-electron case where the
corresponding potential does, in fact, diverge.
This implies that Eq.~(\ref{transl}) cannot be satisfied by a
diverging ${\bf J}$-field.
For the ICL path, the nonlocal current does not have such a simple
relation to the nonlocal charge as in the case of the PM path [Eq.~(E4)];
therefore a derivation similar to that of Appendix~\ref{appDiv} may not be
possible for the ICL case. However, our numerical results in
Table~\ref{IRCtab} are sufficient to conclude that the ICL path violates
translational symmetry as well. The extent of the violation can be
quantified by the discrepancy between $\mu_{\text{L}}$ and
$\mu_{\text{S}}$, which is comparably large in the PM and ICL
cases---recall that these two values should, in principle, coincide for
the isolated spherical-atom model.
At present it is difficult to predict whether it might be possible to
cure the above drawbacks by simply choosing a different path for the
definition of the current operator, or whether these difficulties may
require a deeper revision of the nonlocal pseudopotential theory in
contexts where the microscopic current density is needed.
In any case, the flexoelectric coefficients we calculated in this work
for cubic materials are unaffected by these issues: Once the
``diamagnetic'' contribution has been removed, the three independent
coefficients are all well defined in terms of the charge-density
response.
Nonetheless, the above caveats should be kept in mind when using the
present current-density implementation to access flexoelectric
coefficients in less symmetric materials, or other response properties
that depend on the microscopic current response.
\section{Conclusions\label{Con}}
We have developed a DFPT implementation for calculating the bulk
CI flexoelectric tensor from a single unit cell. Therefore,
we have overcome the limitations of previous implementations
(Refs.~\onlinecite{Hong2013} and \onlinecite{Stengel2014}), which
required supercells to calculate the transverse and shear CI
FxE coefficients.
Our implementation is based on calculating the microscopic current
density resulting from the adiabatic atomic displacements of a
long-wavelength acoustic phonon. We have determined a form for the
current-density operator that satisfies the continuity condition in
the presence of nonlocal, norm-conserving pseudopotentials, and
reduces to the correct form in the limit of a uniform, macroscopic
perturbation, and/or when only local potentials are present.
In order to benchmark our methodology, we have used noble gas atoms to
model systems of noninteracting spherical charge densities. The tests
demonstrate the accuracy of our nonlocal correction to the current
operator, as well as the calculated CRG corrections derived in
Ref.~\onlinecite{StengelUNPUB}. For our form of the current density,
we demonstrate that nonlocal pseudopotentials result in a violation of
translational invariance and Larmor's theorem, though this does not
affect our FxE coefficients after the CRG contribution has been
removed. Finally, we have applied our methodology to several cubic
oxides, all of which show similar trends in that the longitudinal and
transverse responses are similar ($\sim1$ nC/m), and the shear
response is an order of magnitude smaller.
Combining the methodology of this paper with DFPT implementations for
calculating the lattice-mediated contribution to the bulk FxE
coefficients \cite{Stengel2013,Stengel2014}, and the surface
contribution \cite{Stengel2014}, will allow for efficient calculation
of the full FxE response for a variety of materials.
\begin{acknowledgements}
We are grateful to K.~M.~Rabe, D.~R.~Hamann, B.~Monserrat,
H.~S.~Kim, A.~A.~Schiaffino, C.~J.~Pickard, and S.~Y.~Park for useful
discussions. CED and DV were supported by ONR Grant
N00014-16-1-2951. MS acknowledges funding from the European
Research Council (ERC) under the European Union's Horizon 2020
research and innovation programme (Grant Agreement No. 724529), and
from Ministerio de Econom\'{i}a, Industria y Competitividad
(MINECO-Spain) through Grants No. MAT2016-77100-C2-2-P and
SEV-2015-0496, and from Generalitat de Catalunya through Grant
No. 2017 SGR1506.
\end{acknowledgements}
\clearpage
\begin{widetext}
\section{Introduction}
\label{sec:intro}
In data centers, applications are replicated on multiple instances running \textit{e.g.}, \ in containers or virtual machines (VMs) to provide scalable services~\cite{dragoni2017microservices}.
One of the main components for optimal resource utilization in such data centers is the \emph{network load balancer}, whose role is to distribute network traffic \textit{fairly} among application instances.
As ML-based approaches achieve performance gains on various networking problems~\cite{usama2017unsupervised, xie2018survey}, this paper investigates whether ML can help improve network load balancing performance.
The challenges of applying ML to the network load balancing problem, especially in real-world systems, are $3$-fold.
First, feature collection requires domain knowledge.
Unlike task schedulers~\cite{auto2018sigcomm} or application-level load balancers~\cite{yoda}, network load balancers have limited observations and are not aware of task size and duration before distributing workloads.
They can only extract features from packet headers below the transport layer.
Second, networking systems favor low latency, high throughput, and scalability.
Adding ML algorithms in the system incurs additional computational overhead for collecting and processing features, making predictions, and online training, which degrades data plane performance and scalability~\cite{taurus2020}.
Third, networking environments are dynamic and heterogeneous~\cite{kumar2020fast, fu2021use}.
Asynchronous closed-loop design is required to bring ML algorithms online so that the models can be adapted over time without blocking the system.
This paper proposes Aquarius\ to bridge the different requirements of networking systems and ML.
Aquarius\ is an asynchronous and scalable data collection and exploitation mechanism that enables ML-based decisions based on fine-grained observations.
This paper implements Aquarius\ in Vector Packet Processing (VPP)~\cite{vpp}, which is easy to deploy in real-world systems.
Using Aquarius, the potential benefits and challenges of ML for the network load balancing problem are investigated.
\section{Related Work}
\label{sec:related}
ML techniques (\textit{e.g.}, \ graph neural networks~\cite{decima2018}, and convolutional neural networks~\cite{naseer2018enhanced}) help optimize and classify network traffic in data centers.
However, applying these techniques alongside the data plane on the fly is computationally intractable~\cite{ahmed2016survey, taurus2020}.
In~\cite{mvfst-rl}, it is shown that asynchronous design helps achieve performance gain without degrading networking performance on emulators.
This paper implements Aquarius\ on a platform compatible with commodity CPUs, so that it can be deployed in real-world systems.
In~\cite{fu2021use}, the challenges of applying ML algorithms to networking systems are studied using system configurations as features.
In the context of network load balancing\cite{maglev}, ridge regression~\cite{lbas-2020} is used to improve workload distribution fairness using actively probed server load information (CPU and memory usage) as features.
With Aquarius, the same problem can be investigated using a wide range of runtime networking features extracted from packet headers, which removes the need for load balancers to maintain an active probing channel with all servers.
\section{Overview}
\label{sec:overview}
\begin{figure}[t]
\centering
\begin{minipage}{.31\textwidth}
\centering
\includegraphics[height=2.35in]{figures/overview.pdf}
\caption{Overview.}
\label{fig:overview}
\end{minipage}%
\hspace{.15in}
\begin{minipage}{.65\textwidth}
\centering
\includegraphics[height=2.35in]{figures/shm-layout.pdf}
\caption{Aquarius\ \texttt{shm} layout and data flow pipeline.}
\label{fig:design}
\end{minipage}
\vskip -.15in
\end{figure}
Aquarius\ has a $3$-layer architecture (figure~\ref{fig:overview}a).
It extracts network features from the data plane (parser layer) and makes the features available via shared memory (partitioner layer) for the control plane (processor layer).
In the context of network load balancing problem, Aquarius\ is deployed on load balancers.
As depicted in figure~\ref{fig:overview}b, cloud services provided in data centers are identified by virtual IPs (VIPs).
Each VIP corresponds to a cluster of virtualized application servers, each identified by a unique direct IP (DIP).
Within each VIP, Aquarius\ needs to track the state of each server to distinguish the overloaded or malfunctioning ones and make more-informed load balancing decisions.
\section{Design}
\label{sec:design}
In order to apply ML in an asynchronous closed-loop load balancing framework with high scalability and low latency, communication between the load balancer data plane and the ML application is implemented via POSIX shared memory (\texttt{shm}).
The design of Aquarius\ allows features to be extracted from the data plane and conveyed to the ML application, and allows data-driven decisions generated by the ML application to be updated asynchronously on the load balancer.
The pipeline of the data flow over the lifetime of a TCP connection is depicted in figure \ref{fig:design}.
On receipt of different networking packets, networking features are gathered as counters or samples.
To avoid I/O conflicts, sampled features are collected using reservoir sampling over the latest time window, while counters are updated atomically and made available to the data processing agent using multi-buffering.
The bit-index binary header helps efficiently identify active application servers.
Gathered features are organized by the packets' corresponding VIP and DIP in \texttt{shm} files identified by VIP (\textit{e.g.}, \ \texttt{shm\_vip0}).
With no disruption to the data plane, these features are periodically fetched by the ML application into \texttt{shm} files identified by DIP (\textit{e.g.}, \ \texttt{shm\_dip0}), which serve as a database for the ML application.
Only the features with the highest sequence ID are fetched and sequence ID $0$ is used as a writing ``lock''.
Using the same multi-buffering scheme, action buffers and registers allow the policies generated by the ML application to be applied.
This design is asynchronous and has no blocking steps.
This design also accommodates the discrete arrivals of networking packets, and allows gathering $21$ networking features\footnote{The whole list of networking features is given in figure \ref{fig:app-feature} in appendix \ref{app:feature}.}.
This design separates the gathered networking features by VIP and DIP, allowing them to be aggregated at different levels and used for predictions for different purposes.
Updating (adding or removing) services (VIPs) and their associated servers (DIPs) can be achieved by managing the different \texttt{shm} files in a scalable way, incurring no disruption to the data plane.
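The reservoir-sampling step used for the sampled feature channels can be sketched as follows. This is a minimal pure-Python illustration of the classic algorithm (not the VPP implementation); the buffer size and per-window reset are assumptions:

```python
import random

class Reservoir:
    """Fixed-size uniform sample of a feature stream over one time window."""
    def __init__(self, size, seed=0):
        self.size = size
        self.buf = []
        self.n = 0                           # items seen in the current window
        self.rng = random.Random(seed)

    def offer(self, x):
        self.n += 1
        if len(self.buf) < self.size:
            self.buf.append(x)               # fill the reservoir first
        else:
            j = self.rng.randrange(self.n)   # uniform index in [0, n)
            if j < self.size:
                self.buf[j] = x              # keep x with probability size/n

    def reset(self):                         # called at each window boundary
        self.buf, self.n = [], 0
```

Every item seen in the window ends up in the buffer with equal probability, so per-window statistics computed on the buffer are unbiased regardless of packet arrival rate.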
\section{Experiments}
\label{sec:experiments}
Using the same topology as in figure \ref{fig:overview}b, $3$ different network traces\footnote{Wiki, Poisson \texttt{for}-loop and file traces. See appendix \ref{app:testbed} for details.} are applied as network traffic over $2$ groups of ($2$-CPU and $4$-CPU) servers with different processing capacities.
Throughout the set of experiments, network features ($8$ counters and $13$ sampled features) are collected as input data for ML models to predict $3$ ground truth values, \textit{i.e.}, \ number of provisioned CPUs (\texttt{\#cpu}), CPU usage, and number of busy worker threads (\texttt{\#thread}).
Each sampled feature channel is reduced to $5$ scalars, \textit{i.e.}, \ average, $90$th-percentile, standard deviation, exponential moving average (\texttt{decay}) of average and $90$th-percentile.
This section illustrates both offline (section \ref{sec:experiments-offline}) and online (section \ref{sec:experiments-online}) application of Aquarius\ for developing an ML-based load balancer.
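The per-channel reduction to five scalars described above can be sketched as below (pure Python; the decay factor and the nearest-rank percentile convention are assumptions, since the paper does not specify them):

```python
import statistics

def reduce_channel(samples, prev=None, decay=0.9):
    """Reduce one sampled feature channel to five scalars: average,
    90th percentile, standard deviation, and exponential moving
    averages of the average and the 90th percentile."""
    xs = sorted(samples)
    avg = statistics.fmean(xs)
    p90 = xs[min(len(xs) - 1, int(0.9 * len(xs)))]   # nearest-rank percentile
    std = statistics.pstdev(xs)
    if prev is None:                                  # first window
        ema_avg, ema_p90 = avg, p90
    else:
        ema_avg = decay * prev[0] + (1 - decay) * avg
        ema_p90 = decay * prev[1] + (1 - decay) * p90
    return avg, p90, std, ema_avg, ema_p90
```

Called once per $50$ms collection window, the two moving averages carry state across windows, smoothing out short-term fluctuations in the sampled features.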
\subsection{Offline ML Applications}
\label{sec:experiments-offline}
An ECMP load balancer is implemented with Aquarius.
Features and ground truth values are collected every $50$ms for the different types of input network traffic at different traffic rates.
\textbf{Feature process pipeline}: The collected dataset is standardized to zero mean and unit standard deviation: each feature is mean-subtracted and divided by its standard deviation computed across the entire training set.
Outlier data points (any feature or ground truth value beyond the $99$th percentile) are dropped.
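A minimal sketch of this pipeline (pure Python; computing the percentile cut-off per column and dropping rows that exceed it is our reading of the paper, not its exact code):

```python
import statistics

def standardize(column):
    """Zero-mean, unit-std scaling using training-set statistics."""
    mu = statistics.fmean(column)
    sd = statistics.pstdev(column) or 1.0    # guard against constant columns
    return [(x - mu) / sd for x in column]

def drop_outliers(rows, q=0.99):
    """Drop any row whose value exceeds its column's q-th percentile."""
    cols = list(zip(*rows))
    cuts = [sorted(c)[min(len(c) - 1, int(q * len(c)))] for c in cols]
    return [r for r in rows if all(v <= cut for v, cut in zip(r, cuts))]
```

In practice the mean and standard deviation are fit on the training split only, then reused unchanged on the test split to avoid information leakage.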
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{figures/corr-wiki}}
\caption{Feature correlation obtained from Wikipedia replay traces applied on a $20$-CPU Apache server clusters where $2$ groups of servers have different provisioned capacities ($2$-CPU and $4$-CPU).}
\label{fig:offline-corr}
\end{center}
\vskip -0.25in
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[height=1.4in]{figures/experiment/pca-3trace.pdf}
\caption{Network traces clustering with PCA.}
\label{fig:offline-pca}
\end{minipage}%
\begin{minipage}{.47\textwidth}
\centering
\includegraphics[height=1.4in]{figures/experiment/overhead.pdf}
\caption{Aquarius\ overhead analysis.}
\label{fig:offline-overhead}
\end{minipage}
\vskip -.3in
\end{figure}
\textbf{Data analysis}: Correlations between the networking features and the $3$ ground truth values for the Wiki trace are plotted in figure \ref{fig:offline-corr}.
It can be observed that under higher traffic rate, the flow completion time (\texttt{fct}) and flow duration have higher positive correlation with the actual server load states (CPU usage and \texttt{\#thread}) and negative correlation with the provisioned processing capacities.
This makes sense since, under heavy workloads, server processing speed decreases, and more powerful servers finish tasks faster.
Conducting principal component analysis (PCA) on the collected networking features using $3$ types of traffic gives clustering results as depicted in figure \ref{fig:offline-pca}.
Projected on the two principal components (PCs), which account for $41\%$ and $30\%$ of the overall variability, $3$ clusters can be clearly observed.
This is a promising result for potential ML-based traffic classifiers, which distinguish traffic types and allocate different computational resources to meet corresponding requirements of quality of service (QoS).
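The direction captured by the first PC can be illustrated with a small power-iteration sketch (pure Python; the analysis in the paper presumably used a standard PCA library, so this is only to show the mechanics):

```python
import math, random

def first_pc(rows, iters=200, seed=0):
    """First principal component via power iteration on the sample
    covariance matrix (no external linear-algebra dependency)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[a] * x[b] for x in X) / n for b in range(d)]
           for a in range(d)]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]        # random start vector
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]                     # re-normalize each step
    return v
```

Repeated multiplication by the covariance matrix amplifies the dominant eigendirection, which is exactly the axis of maximal variance shown in the PCA projection.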
\textbf{Overhead analysis}: The performance of Aquarius\ is compared with the state-of-the-art Maglev load balancer~\cite{maglev}.
As depicted in figure \ref{fig:offline-overhead}a, Aquarius\ does not induce notable degradation of QoS.
Aquarius\ introduces an additional $1$k per-packet processing CPU cycles ($0.385\mu$s on a $2.6$GHz CPU) on average (figure \ref{fig:offline-overhead}b), which is negligible compared with the typical round-trip time (above $200\mu$s) between network equipment~\cite{guo2015pingmesh}.
\textbf{Training}: $8$ ML models are trained to predict \texttt{\#thread} as server load estimators to make load-aware load balancing decisions.
To adapt the dataset to sequential models, sequences of length $64$ with stride $32$ are extracted, which gives $160$k datapoints in total.
These sequential datapoints are randomly split $80:20$ into training and testing datasets.
Tensorflow~\cite{tensorflow2015-whitepaper} is used for model training.
The hyperparameters for different models are described in appendix \ref{app:model}.
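The sequential-dataset construction (length $64$, stride $32$) amounts to a sliding window over each feature time series, e.g.:

```python
def make_sequences(series, seq_len=64, stride=32):
    """Slice a time series into overlapping fixed-length windows for
    sequential models (RNN/LSTM/GRU)."""
    return [series[i:i + seq_len]
            for i in range(0, len(series) - seq_len + 1, stride)]
```

With a stride of half the sequence length, consecutive windows overlap by $50\%$, which roughly doubles the number of training datapoints compared with non-overlapping windows.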
\textbf{Results}: As shown in table \ref{tab:ml-score}, recurrent models have better performance in general (\textit{i.e.}, \ LSTM and RNN).
Applying convolutional layers helps reduce inference delay (\textit{i.e.}, \ 1DConv-GRU1).
More complicated models do not necessarily improve model performance (\textit{i.e.}, \ Wavenet models).
\begin{table}[t]
\caption{Accumulated score board for different models and different tasks executed w/ 1 CPU core.}
\label{tab:ml-score}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccccc}
\toprule
Task & Metrics & Dense1 & RNN2 & LSTM2 & GRU2 & \begin{tabular}[c]{@{}c@{}}1DConv-\\ GRU1\end{tabular} & \begin{tabular}[c]{@{}c@{}}Wavenet-\\ GRU1\end{tabular} & \begin{tabular}[c]{@{}c@{}}Wavenet-\\ Reconst.\end{tabular} \\ \midrule
\multirow{7}{*}{\rotatebox[origin=c]{90}{\hspace{3em}Wiki}} & MSE & $253.203$ & $2.557$ & $\mathbf{1.553}$ & $1.660$ & $1.878$ & $1.923$ & $2.421$ \\
& RMSE & $15.912$ & $1.599$ & $\mathbf{1.245}$ & $1.288$ & $1.371$ & $1.387$ & $1.556$ \\
& MAE & $1.804$ & $1.099$ & $\mathbf{0.916}$ & $0.931$ & $0.988$ & $0.996$ & $1.117$ \\
& Delay (ms) & $50.5\pm1.0$ & $52.1\pm1.7$ & $53.4\pm3.0$ & $53.7\pm1.3$ & $\mathbf{44.8\pm2.8}$ & $54.9\pm0.8$ & $54.6\pm0.5$ \\ \midrule
\multirow{7}{*}{\rotatebox[origin=c]{90}{\hspace{3.2em}Poisson}} & MSE & $1520.888$ & $2.804$ & $0.801$ & $\mathbf{0.774}$ & $0.874$ & $0.965$ & $0.946$ \\
& RMSE & $38.999$ & $1.675$ & $0.895$ & $\mathbf{0.880}$ & $0.935$ & $0.982$ & $0.973$ \\
& MAE & $3.176$ & $1.162$ & $0.602$ & $\mathbf{0.600}$ & $0.635$ & $0.648$ & $0.649$ \\
& Delay (ms) & $\mathbf{55.4\pm0.7}$ & $91.4\pm6.4$ & $69.3\pm2.2$ & $70.9\pm1.3$ & $61.5\pm2.5$ & $65.6\pm1.7$ & $70.2\pm0.4$ \\ \bottomrule
\end{tabular}
}
\vskip -.2in
\end{table}
\subsection{Online ML Applications}
\label{sec:experiments-online}
\begin{figure}[t]
\centering
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/plt_avg_compare}}
\caption{Page load time}
\label{fig:online-plt-avg}
\end{subfigure}%
\hspace{.1in}
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/apache_fair_avg_compare}}
\caption{Jain's fairness of \texttt{\#thread}.}
\label{fig:online-apache-fair}
\end{subfigure}%
\hspace{.1in}
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/apache_over_avg_compare}}
\caption{\texttt{\#thread} overprovision.}
\label{fig:online-apache-over}
\end{subfigure}%
\vspace{.1in}
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/score_compare}}
\caption{ML score.}
\label{fig:online-ml-score}
\end{subfigure}%
\hspace{.1in}
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/cpu_mean_avg_compare}}
\caption{Avg. CPU.}
\label{fig:online-cpu-avg}
\end{subfigure}%
\hspace{.1in}
\begin{subfigure}{.31\columnwidth}
\centerline{\includegraphics[height=.45\columnwidth]{figures/experiment/compare/cpu_over_avg_compare}}
\caption{CPU overprovision.}
\label{fig:online-cpu-over}
\end{subfigure}%
\caption{Online load balancing with trained LSTM2 model on Poisson \texttt{for}-loop trace.}
\label{fig:online}
\vskip -.2in
\end{figure}
\textbf{Online performance}: Aquarius\ enables open and closed-loop control.
Based on the results from the previous section, the LSTM2 model is brought online to make load balancing decisions (changing the server weights so that more tasks are assigned to servers with higher weights) based on the latest observations every $250$ms.
Two Poisson \texttt{for}-loop traces are applied as input traffic.
The average completion times for each query are $140$ms and $160$ms for the two traces (both with \texttt{query\_len=4}).
The load balancing performance is compared with Maglev in figure \ref{fig:online} across a wide range of traffic rates.
It is shown that with trained ML models, load balancers allow the same server cluster to serve heavier workloads with reduced page load time (\texttt{fct}), by optimizing resource utilization (improving workload distribution fairness and reducing the overprovisioning factor).
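An illustrative sketch of the online decision step and of the fairness metric reported in figure \ref{fig:online-apache-fair} follows; the inverse-load weighting rule is an assumption for illustration, not necessarily the exact policy used:

```python
def weights_from_load(pred_threads):
    """Map predicted #thread per server to normalized weights, so that
    less-loaded servers receive proportionally more new connections."""
    inv = [1.0 / (p + 1.0) for p in pred_threads]
    total = sum(inv)
    return [w / total for w in inv]

def jain_fairness(loads):
    """Jain's fairness index of the per-server load; 1.0 = perfectly fair."""
    n, s, s2 = len(loads), sum(loads), sum(x * x for x in loads)
    return (s * s) / (n * s2) if s2 else 1.0
```

The load balancer recomputes the weights from the latest model predictions at each $250$ms step, while Jain's index quantifies how evenly the resulting busy-thread counts are spread across servers.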
\begin{figure}[t]
\centering
\begin{subfigure}{.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/experiment/gen-match.pdf}
\caption{Poisson \texttt{for}-loop traffic.}
\label{fig:general-architecture}
\end{subfigure}%
\hspace{.2in}
\begin{subfigure}{.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/experiment/gen-mismatch.pdf}
\caption{Wiki traffic.}
\label{fig:general-lb}
\end{subfigure}
\caption{Predicting $2$ types of network traffic with LSTM2 model trained only with \texttt{for}-loop trace.}
\vskip -.2in
\label{fig:general}
\end{figure}
\textbf{Generality}: To study whether the trained ML models are able to generalize, the LSTM2 model is trained only on Poisson \texttt{for}-loop traffic and brought online to serve both \texttt{for}-loop and Wiki traces.
Figure \ref{fig:general} shows that the trained model generalizes poorly to traffic it has not seen before, which is consistent with~\cite{fu2021use}.
\section{Conclusion and Future Work}
\label{sec:conclusion}
ML algorithms show promising results on many problems, yet they remain challenging to apply to realistic networking problems and in real-world systems.
This paper proposes Aquarius\ to bridge ML and distributed networking systems, and takes a preliminary step towards integrating ML approaches in the networking field.
It allows gathering a wide range of networking features and feeding them to offline data analysis pipelines or online ML models to make decisions on the fly.
The results demonstrate the potential of Aquarius\ for feature engineering, traffic classification, model selection, and online model deployment.
The models applied in this paper show the ability to learn and infer server load states from networking features.
The results also show that networking environments are dynamic and heterogeneous, which makes it challenging to train a model that generalizes well.
Reinforcement learning will be studied in future work to improve model generality in the interactive real-world system.
This work has several limitations.
ML models and their hyperparameters are not sufficiently explored.
Asynchronous decisions are inherently delayed, and the impact of these delays, along with the action update frequency, is not fully investigated.
\bibliographystyle{unsrt}
\section*{IEEE Copyright Notice}
© IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
For more details, see the IEEE Copyright Policy\\
\footnotesize \verb+http://www.ieee.org/publications_standards/publications/rights/copyrightpolicy.html+
\normalsize
\section{Introduction}
{M}{arkov} jump linear systems (MJLS) have been widely studied and
disseminated during the last decades. MJLS have a relatively simple structure
that allows for useful, strong properties
\cite{CostaFragosoMarques05,CostaFragosoTodorov,Dragan2009,Dragan2013},
and
provide suitable models for applications \cite{doVal99JE,Sworder83,Siqueira04},
with a booming field in web/internet based control \cite{Geromel09,Huang13}.
One limitation of MJLS is that the sojourn times between jumps are time-homogeneous
exponential random variables, thus motivating the study of a wider class of systems with
general sojourn-time distributions, the so-called semi-Markov jump linear
systems (sMJLS) or sojourn-time-dependent MJLS \cite{Huang13,Campo91,Schwartz03,Hou06,Huang13IJRNC}.
In this paper, we consider continuous-time sMJLS with instantaneous
(or close to instantaneous) observation of the state of the semi-Markov
chain at time instant $t$, denoted here by $\theta(t)$.
The state space of the semi-Markov chain may be infinite.
We seek for an approximate optimal filter for the variable $x(t)$
that composes the state of the sMJLS jointly with $\theta(t)$.
Of course, estimating the state component $x(t)$ is highly relevant
and allows the use of standard control strategies like linear state feedback.
It is well known that the optimal estimator for $x(t)$ is given by the
standard Kalman--Bucy filter (KBF) \cite{Anderson79,Jazwinski70,Kalman60,Kalman61,Kumar86}
because, given the observation of the past values of $\theta$, the
distribution of the random variable $x(t)$ is exactly the same as
in a time varying system.
The main problem faced when implementing the KBF for MJLS or sMJLS, particularly
in continuous time, is the pre-computation.
Pre-computation refers to the computation of the relevant parameters of the KBF
and storage in the controller/computer memory prior to the system operation,
which makes the implementation of the filter
fast enough to couple with a wide range of applications.
Unfortunately, pre-computation is not viable for (s)MJLS in continuous time,
as it involves solving a Riccati differential equation that branches at every
jump time $T_k$,
and the jumps can occur at any time instant according to an exponential distribution,
so that pre-computation would involve computation of an infinite number of branches.
Another way to explain this drawback of the KBF is to say that the KBF is not a
Markovian linear estimator because the gain at time $t$ does not depend only on $\theta(t)$
but on the whole trajectory $\{\theta(s),0\leq s\leq t\}$.
This drawback of the KBF has motivated the development of other
filters for MJLS, and one of the most successful ones is the Markovian linear minimum
mean squares estimator (LMMSE) that has been derived in \cite{FC10},
whose parameters can be pre-computed,
see also \cite{CostaFragosoTodorov,Costa11autom_filter}.
To the best of our knowledge, there is no pre-computable filter for sMJLS.
The filter proposed here is built in several steps. The first step is the discretization
by quantization of the Markov chain, providing a finite number of typical trajectories.
The second step consists in solving the Riccati differential equation on each of these trajectories
and store the results. To compute the filter in real time, one just needs to select the appropriate
pre-computed branch at each jump time and follow it until the next jump time.
This selection step is made by looking up the projection of the real jump time in the quantization
grid and choosing the corresponding Riccati branch.
In case the real jump time is observed with some delay (non-instantaneous observation of $\theta$),
then the observed jump time is projected in the quantization grid instead, see Remarks \ref{rem-byproduct}, \ref{rem-delayed-observation}.
The quantization technique selects optimized typical trajectories of the semi-Markov chain.
Optimal quantization methods have been developed recently in numerical probability,
nonlinear filtering or optimal stochastic control for diffusion processes with applications in finance \cite{bally03,bally05,pages98,pages05,pages04b,pages04} or for piecewise deterministic Markov processes with applications in reliability \cite{brandejsky12,brandejsky13,saporta10,saporta12}. To our best knowledge, this technique has not been applied to MJLS or sMJLS yet. The optimal quantization of a random variable $X$ consists in finding a finite grid such that the projection $\widehat{X}$ of $X$ on this grid minimizes some $L^{p}$ norm of the difference $X-\widehat{X}$. Roughly speaking, such a grid will have more points in the areas of high density of $X$.
One interesting feature of this procedure is that the construction of the optimized grids using
the CLVQ algorithm (competitive learning vector quantization) \cite{pages98,gray98}
only requires a simulator of the process and no special knowledge about the distribution of $X$.
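A one-dimensional CLVQ sketch is given below for intuition (illustrative only; the grid size, the step-size schedule and the exponential sojourn-time sampler are assumptions, and in the paper the quantized object is the sequence of jump times of the semi-Markov chain rather than a single scalar):

```python
import random

def clvq(sampler, n_points, n_iter=20000, seed=0):
    """Competitive learning vector quantization in 1-D: for each simulated
    sample, move the closest grid point slightly towards it."""
    rng = random.Random(seed)
    grid = [sampler(rng) for _ in range(n_points)]            # random init
    for k in range(1, n_iter + 1):
        x = sampler(rng)
        i = min(range(n_points), key=lambda j: abs(grid[j] - x))  # winner
        grid[i] += (x - grid[i]) / (k ** 0.5 + 10.0)          # decaying step
    return sorted(grid)
```

As the text notes, only a simulator of the distribution is needed: the grid points drift towards regions of high sample density, so the quantizer ends up with more points where the distribution puts more mass.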
As explained for instance in \cite{pages04}, for the convergence of the quantized
process towards the original process, some Lipschitz-continuity conditions are needed,
hence we start by investigating the Lipschitz continuity of solutions of Riccati equations.
Of course, this involves evaluating the difference of two Riccati solutions,
which is neither a positive semi-definite nor a negative-definite matrix,
preventing us from directly using the positive invariance property of Riccati equations
and thus introducing some complication in the analysis given in Theorem \ref{lem-1-Ric}.
A by-product of our procedure is a general result on the convergence of perturbed
solutions of semi-Markov switching Riccati equations,
when the perturbation comes from the driving semi-Markov chain and can be either a random
perturbation of the jump times or a deterministic delay, or both, see Remark \ref{rem-byproduct}.
Regarding the proposed filter, we obtain an error bound w.r.t. the exact KBF depending on the quantization error and time discretization step.
It goes to zero when the number of points in the grids goes to infinity.
The approximation results are illustrated and compared with the exact KBF and the
LMMSE in the Markovian framework for a numerical example of a magnetic suspension system,
confirming via Monte Carlo simulation
that the proposed filter is effective for state estimation
even when a comparatively low number of points in the discretization grids is considered.
The paper is organized as follows. Section \ref{sec-problem} presents the
KBF and the sMJLS setup. The KBF approximation scheme is explained
in Section \ref{sec-KBF}, and its convergence is studied in Section \ref{sec-convergence}.
The results are illustrated in a magnetic suspension system, see Section \ref{sec-example},
and some concluding remarks are presented in Section \ref{sec-conclusion}.
\section{Problem setting}
\label{sec-problem}
We start with some general notation.
For $z,\hat z\in\mathbb{R}$, $z\wedge \hat z=\min\{z,\hat z\}$ is the minimum between $z$ and $\hat z$.
For a vector $X=(x_1,\ldots,x_n)\in\mathbb{R}^n$, $|X|$ denotes its Euclidean norm $|X|^2=\sum x_i^2$ and $X'$ denotes its transpose.
Let $\mathcal{C}(n)$ be the set of $n\times n$ symmetric positive definite matrices and $I_{n}$ (or $I$ when there is no ambiguity) the identity matrix of size $n\times n$.
For any two symmetric positive semi-definite matrices $M$ and $\widehat{M}$, $M\geq \widehat{M}$ means that $M-\widehat{M}$ is
positive semidefinite and $M >\widehat{M}$ means that $M-\widehat{M}\in\mathcal{C}(n)$.
Let $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ denote the lowest and highest eigenvalue of matrix $M\in\mathcal{C}(n)$ respectively.
For a matrix $M\in\mathbb{R}^{n\times n}$, $M'$ is the transpose of $M$ and $\|M\|$ stands for its $L^2$ matrix norm $\|M\|^2=\lambda_{\max}(M'M)$.
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, $\mathbb{E}$ denote the expectation with respect to $\mathbb{P}$, and $Var(X)$ is the variance-covariance matrix of the random vector $X$.
Let $\{\theta(t), t\geq 0\}$ be a semi-Markov jump process on the countable state space $\mathcal{S}$.
We denote by $F_i$ the cumulative distribution function of the sojourn time of $\theta$ in state $i$.
For a family $\{M_i, i\in\mathcal{S}\}$ of square matrices indexed by $\mathcal{S}$, we set $\|M\|_{\mathcal{S}}=\sup_{i\in\mathcal{S}}\|M_i\|\leq \infty$.
We consider a sMJLS satisfying
\begin{equation*}
\left\{\begin{array}{rcl}
dx(t)&=&A_{\theta(t)}x(t)dt+E_{\theta(t)}dw(t),\\
dy(t)&=&C_{\theta(t)}x(t)dt+D_{\theta(t)}dv(t),\label{mjls}
\end{array}\right.
\end{equation*}
for $0\leq t\leq T$, where $T$ is a given time horizon,
$\big(x(t),\theta(t)\big)\in\mathbb{R}^{n_1}\times \mathcal S$ is the state process,
$y(t)\in\mathbb{R}^{n_2}$ is the measurement process,
$\{w(t), 0\leq t\leq T\}$ and $\{v(t), 0\leq t\leq T\}$ are independent standard Wiener processes with respective
dimensions $n_3$ and $n_4$, independent from $\{\theta(t), t\geq 0\}$, and
$\{A_i, i\in\mathcal{S}\}$, $\{C_i, i\in\mathcal{S}\}$, $\{D_i, i\in\mathcal{S}\}$ and $\{E_i, i\in\mathcal{S}\}$
are families of matrices with respective size $n_1\times n_1$, $n_2\times n_1$, $n_2\times n_4$ and $n_1\times n_3$, such that $D_iD_i'>0$ for all $i$ (nonsingular measurement noise).
We use two different sets of assumptions for the parameters of our problems. The first one is more restrictive but relevant for applications, and the second more general one will be used in the convergence proofs.
\begin{hyp}\label{hyp:finite}
The state space $\mathcal{S}$ is finite, $\mathcal{S}=\{1,2,\ldots,N\}$ and the cumulative distribution functions of the sojourn times $F_i$ are Lipschitz continuous with Lipschitz constant $\lambda_i$, $i\in\mathcal{S}$.
\end{hyp}
\begin{hyp}\label{hyp:infinite}
The state space $\mathcal{S}$ is countable, the quantities $\|A\|_{\mathcal{S}}$, $\|C\|_{\mathcal{S}}$, $\|D\|_{\mathcal{S}}$, $\|DD'\|_{\mathcal{S}}$ and $\|E\|_{\mathcal{S}}$ are finite. The cumulative distribution functions of the sojourn times $F_i$ are Lipschitz continuous with Lipschitz constant $\lambda_i$, $i\in\mathcal{S}$ and
\begin{equation*}
\overline{\lambda}=\sup_{i\in\mathcal{S}}\{\lambda_{i}\}<\infty.
\end{equation*}
\end{hyp}
Note that the extra assumptions in the infinite case hold true automatically in the finite case,
and that the Lipschitz assumptions hold true automatically for MJLS (i.e., when
the sojourn-time distributions $F_i$ are exponential).
We address the filtering problem of estimating the value of $x(t)$ given the observations $\{y(s),\theta(s), 0\le s\le t\}$ for $0\leq t\leq T$.
It is well known that the KBF is the optimal estimator because
the problem is equivalent to estimating the state of a linear time-varying system (with no jumps),
taking into account that the past values of $\theta$ are available.
The KBF satisfies the following equation
\begin{equation*}
d\hat{x}_{KB}(t)\!=\!A_{\theta(t)}\hat{x}_{KB}(t)dt+K_{KB}(t)(dy(t)-C_{\theta(t)}\hat{x}_{KB}(t)dt),
\end{equation*}
for $0\leq t\leq T$, with initial condition $\hat{x}_{KB}(0)=\mathbb{E}[x(0)]$ and gain matrix
\begin{equation}\label{eq Ricc gain}
K_{KB}(t)=P_{KB}(t)C_{\theta(t)}'(D_{\theta(t)}D_{\theta(t)}')^{-1},
\end{equation}
for $0\leq t\leq T$, where $P_{KB}(t)$ is an $n_1\times n_1$ matrix-valued process satisfying the Riccati matrix differential equation
\begin{equation}
\label{eq Ric}
\left\{\begin{array}{rcl}
dP_{KB}(t)&=&R(P_{KB}(t),\theta(t))dt,\\
P_{KB}(0)&=&Var(x(0)),
\end{array}\right.
\end{equation}
for $0\leq t\leq T$, where $R:\mathbb{R}^{n_1\times n_1}\times \mathcal S \rightarrow \mathbb{R}^{n_1\times n_1}$
is defined for any $M\in\mathbb{R}^{n_1\times n_1}$ and $i\in\mathcal{S}$ by
\begin{equation}\label{eq:def:R}
R(M,i)=A_iM+MA_i'+E_iE_i'-MC_i'(D_iD_i')^{-1}C_iM.
\end{equation}
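To fix ideas, the map $R(M,i)$ and the mode-wise integration of the Riccati equation can be sketched numerically. The following is a minimal Python sketch (the function names, the forward-Euler scheme and the step size are our own illustrative choices, not part of the method):

```python
import numpy as np

def riccati_rhs(M, A, C, DDt, EEt):
    """R(M, i) = A M + M A' + E E' - M C' (D D')^{-1} C M for a fixed mode i."""
    gain_term = M @ C.T @ np.linalg.solve(DDt, C @ M)
    return A @ M + M @ A.T + EEt - gain_term

def integrate_riccati(P0, A, C, D, E, t_span, dt=1e-3):
    """Forward-Euler integration of dP/dt = R(P, i) on a single mode."""
    DDt, EEt = D @ D.T, E @ E.T
    P, t = P0.copy(), 0.0
    while t < t_span:
        P = P + dt * riccati_rhs(P, A, C, DDt, EEt)
        t += dt
    return P
```

In the scalar case $A=0$, $C=D=1$, $E=0$, the equation reduces to $dP/dt=-P^2$, whose solution from $P(0)=1$ is $P(t)=1/(1+t)$; the sketch reproduces it up to the Euler discretization error.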
It is usually not possible to pre-compute a solution of this system
(prior to the observation of $\theta(s)$, $0\le s\le t$).
Moreover, solving it in real time after observing $\theta$ would require
instantaneous computation of $P(t)$; one can obtain a delayed solution
$P(t-\delta)$, where $\delta$ is the time required to solve the system, but
using this solution as if it were the actual $P(t)$ in the filter
may introduce considerable error into the estimate, depending on $\delta$ and on the
system parameters (e.g., many jumps may occur between $t-\delta$ and $t$).
The aim of this paper is to propose a new filter based on suitably chosen pre-computed solutions of Eq.~(\ref{eq Ric})
under the finiteness assumption~\ref{hyp:finite}
and to show convergence of our estimate to the optimal KBF when the number of discretization points goes to infinity
under the more general countable assumption~\ref{hyp:infinite}.
We also compare its performance with the Fragoso-Costa LMMSE filter \cite{FC10} on a real-world application.
\section{Approximate Kalman--Bucy filter}
\label{sec-KBF}
The estimator is constructed as follows.
We first select an optimized finite set of typical possible trajectories of $\{\theta(t), 0\le t\le T\}$ by
discretizing the {semi-}Markov chain, and for each such trajectory we solve Eqs.~\eqref{eq Ric} and \eqref{eq Ricc gain}
and store the results.
In real time, the estimate is obtained by looking up the pre-computed solutions
and selecting the suitable gain given the current value of $\theta(t)$.
\subsection{Discretization of the {semi-}Markov chain}
The approach relies on the construction of optimized typical trajectories of the {semi-}Markov chain $\{\theta(t), 0\leq t\leq T\}$.
First we need to rewrite this {semi-}Markov chain in terms of its jump times and post-jump locations.
Let $T_0=0$ and $T_k$ be the $k$-th jump time of $\{\theta(t), 0\leq t\leq T\}$ for $k\geq 1$,
\begin{equation*}
T_{k}=\inf\{t\geq T_{k-1};\ \theta(t)\neq\theta(T_{k-1})\}.
\end{equation*}
For $k\geq 0$ let $Z_k=\theta(T_k)$ be the post-jump locations of the chain.
Let $S_0=0$ and for $k\geq 1$, $S_k=T_k-T_{k-1}$ be the inter-arrival times of
the {semi-}Markov process $\{\theta(t), 0\leq t\leq T\}$. Using this notation, $\theta(t)$ can be rewritten as
\begin{equation}\label{eq:theta}
\theta(t)=\sum_{k=0}^\infty Z_k\ind{T_k\leq t<T_{k+1}}=\sum_{k=0}^\infty Z_k\ind{0\leq t-T_k<S_{k+1}}.
\end{equation}
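The pair representation $(Z_k, S_k)$ also makes simulation straightforward. The following Python sketch generates a trajectory of $\theta$ from a hypothetical embedded transition matrix `P_trans` and per-state sojourn samplers; both are illustrative assumptions on our part, and self-transitions are assumed excluded by `P_trans`:

```python
import numpy as np

def simulate_theta(P_trans, sojourn_samplers, z0, T, rng):
    """Simulate one trajectory of the semi-Markov chain through its
    post-jump locations Z_k and inter-arrival times S_k.
    The last recorded jump is the first one past the horizon T."""
    jumps, states = [0.0], [z0]
    t, z = 0.0, z0
    while t < T:
        s = sojourn_samplers[z](rng)                 # S_{k+1} drawn from F_z
        t += s
        z = rng.choice(len(P_trans), p=P_trans[z])   # next post-jump location
        jumps.append(t)
        states.append(z)
    return np.array(jumps), np.array(states)

def theta_at(jumps, states, t):
    """theta(t) = sum_k Z_k 1{T_k <= t < T_{k+1}}."""
    k = np.searchsorted(jumps, t, side="right") - 1
    return states[k]
```

With deterministic unit sojourns and a two-state alternating embedded chain, the trajectory simply flips state at every integer time.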
Under the finiteness assumption~\ref{hyp:finite},
as the state space $\mathcal{S}$ of $\{\theta(t), 0\leq t\leq T\}$ (and hence of $\{Z_k\}$) is finite, a fully discretized approximation of $\{\theta(t), 0\leq t\leq T\}$ only requires discretizing the inter-arrival times $\{S_k\}$ on a finite state space. One thus constructs a finite set of typical possible trajectories of $\{\theta(t), 0\leq t\leq T\}$ up to a given jump time horizon $T_n$, selected such that $T_n\geq T$ with high enough probability.
To discretize the inter-arrival times $\{S_k\}$, we choose a quantization approach that has been recently developed in numerical probability.
Its main advantage is that the discretization is optimal in a sense made precise below.
There exists an extensive literature on quantization methods for random variables and processes. The interested reader may, for instance, consult \cite{bally03,gray98,pages98} and the references therein.
Consider $X$ an $\mathbb{R}^m$-valued random variable such that $\mathbb{E}[| X |^2] < \infty$
and $\nu$ a fixed integer;
the optimal $L^{2}$-quantization of the random variable $X$ consists in finding the best possible $L^{2}$-approximation of $X$ by a random vector $\widehat{X}$ taking at most $\nu$ different values, which can be carried out in two steps.
First, find a finite weighted grid $\Gamma\subset \mathbb{R}^{m}$ with $\Gamma= \{\gamma^{1},\ldots,\gamma^{\nu}\}$.
Second, set $\widehat{X}=\widehat{X}^{\Gamma}$ where $\widehat{X}^{\Gamma}=proj_{\Gamma}(X)$ with $proj_{\Gamma}$ denoting the closest neighbor projection on $\Gamma$.
The asymptotic properties of the $L^{2}$-quantization are given in e.g. \cite{pages98}.
\begin{theorem}\label{th:quantize}
If $\mathbb{E}[|X|^{2+\epsilon}]<+\infty$ for some $\epsilon>0$ then one has
\begin{eqnarray*}
\lim_{\nu\rightarrow \infty} \nu^{1/m} \min_{|\Gamma|\leq \nu} \mathbb{E}[| X-\widehat{X}^{\Gamma}|^{2}]^{1/2}& =& C,
\end{eqnarray*}
for some constant $C$ depending only on $m$ and the law of $X$, and where $|\Gamma|$ denotes the cardinality of $\Gamma$.
\end{theorem}
Therefore the $L^2$ norm of the difference between $X$ and its quantized approximation $\widehat{X}$ goes to zero with rate $\nu^{-1/m}$ as the number of points $\nu$ in the quantization grid goes to infinity. The competitive learning vector quantization algorithm (CLVQ) provides the optimal grid based on a random simulator of the law of $X$ and a stochastic gradient method.
In the following, we will denote by $\widehat{S}_k$ the quantized approximation of the random variable $S_k$ and $\widehat{T}_k=\widehat{S}_1+\cdots+\widehat{S}_k$ for all $k$.
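As an illustration of the closest-neighbor projection and of a CLVQ-type stochastic gradient iteration, here is a simplified one-dimensional Python sketch; the initialization and the $1/n$ step size are our own choices, and the cited references should be consulted for the actual algorithm:

```python
import numpy as np

def clvq_grid(sampler, nu, n_iter=20000, seed=0):
    """Stochastic-gradient (CLVQ-style) approximation of an L^2
    quantization grid with nu points for a scalar law."""
    rng = np.random.default_rng(seed)
    grid = np.sort(sampler(rng, nu))          # initialize from the law itself
    for n in range(1, n_iter + 1):
        x = sampler(rng, 1)[0]                # fresh sample of the law
        j = np.argmin(np.abs(grid - x))       # closest-neighbor cell
        grid[j] += (x - grid[j]) / n          # decreasing step size
    return np.sort(grid)

def quantize(grid, x):
    """proj_Gamma(x): replace each entry of the 1-D array x by its
    closest grid point."""
    return grid[np.argmin(np.abs(grid - np.asarray(x)[:, None]), axis=1)]
```

For an exponential sojourn law, a grid of a few points already yields a small mean quadratic quantization error, in line with the $\nu^{-1/m}$ rate above.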
\subsection{Pre-computation of a family of solutions to Riccati equation}
We start by rewriting the Riccati equation~(\ref{eq Ric}) in order to have a similar
expression to Eq.~(\ref{eq:theta}).
As operator $R$ does not depend on time, the solution $\{P(t), 0\leq t\leq T\}$ to Eq.~(\ref{eq Ric}) corresponding to a given trajectory $\{\theta(t), 0\leq t\leq T\}$ can be rewritten as
\begin{equation*}
P(t)=\sum_{k=0}^\infty P_k(t-T_k)\ind{0\leq t-T_k<S_{k+1}},
\end{equation*}
for $0\leq t\leq T$, where $\{P_0(t), 0\leq t\leq T\}$ is the solution of the system
\begin{equation*}
\left\{\begin{array}{rcl}
d{P}_0(t)&=&R(P_0(t),Z_0)dt,\\
P_0(0)&=&p_0,
\end{array}\right.
\end{equation*}
for $0\leq t\leq T$, with $p_0=Var(x(0))$,
and for $k\geq 1$, $\{P_k(t), 0\leq t\leq T\}$ is recursively defined as the solution of
\begin{equation*}
\left\{\begin{array}{rcl}
d{P}_k(t)&=&R(P_k(t),Z_k)dt,\\
P_k(0)&=&P_{k-1}(S_k).
\end{array}\right.
\end{equation*}
Given the quantized approximation $\{\widehat{S}_k\}$ of the sequence $\{{S}_k\}$, we propose the following approximations $\{\widehat{P}_k(t), 0\leq t\leq T\}$ of $\{P_k(t), 0\leq t\leq T\}$ for all $k$.
First, $\{\widehat{P}_0(t), 0\leq t\leq T\}$ is the solution of
\begin{equation*}
\left\{\begin{array}{rcl}
d{\widehat{P}}_0(t)&=&R(\widehat{P}_0(t),Z_0)dt,\\
\widehat{P}_0(0)&=&p_0,
\end{array}\right.
\end{equation*}
and for $k\geq 1$, $\{\widehat{P}_k(t), 0\leq t\leq T\}$ is recursively defined as the solution of
\begin{equation*}
\left\{\begin{array}{rcl}
d{\widehat{P}}_k(t)&=&R(\widehat{P}_k(t),Z_k)dt,\\
\widehat{P}_k(0)&=&\widehat{P}_{k-1}(\widehat{S}_k).
\end{array}\right.
\end{equation*}
Hence $P_k$ and $\widehat{P}_k$ are defined with the same dynamics, the same horizon $T$, but different starting values, and all the $\widehat{P}_k$ can be computed off-line for each of the finitely many possible values of $(Z_k,\widehat{S}_k)$
(under the finiteness assumption~\ref{hyp:finite} and for a finite number of jumps)
and stored.
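The off-line recursion $\widehat{P}_k(0)=\widehat{P}_{k-1}(\widehat{S}_k)$ along one quantized trajectory $(Z_k,\widehat{S}_k)$ can be sketched as follows; this is a minimal Python sketch with a crude Euler solver, and the function names and discretization are ours:

```python
import numpy as np

def solve_riccati(P0, A, C, D, E, horizon, dt=1e-3):
    """Euler solution of dP = R(P, i) dt for a single mode over [0, horizon]."""
    DDt, EEt = D @ D.T, E @ E.T
    P, t = P0.copy(), 0.0
    while t < horizon:
        P = P + dt * (A @ P + P @ A.T + EEt
                      - P @ C.T @ np.linalg.solve(DDt, C @ P))
        t += dt
    return P

def precompute_P_hat(p0, modes, Z, S_hat, T):
    """Starting values along one quantized trajectory:
    P_hat_0(0) = p0 and P_hat_k(0) = P_hat_{k-1}(S_hat_k),
    where P_hat_{k-1} evolves under mode Z_{k-1}."""
    starts = [p0]
    for k in range(1, len(Z)):
        A, C, D, E = modes[Z[k - 1]]
        starts.append(solve_riccati(starts[-1], A, C, D, E, min(S_hat[k], T)))
    return starts
```

In practice one such recursion is run and stored for each of the finitely many quantized trajectories.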
\subsection{On line approximation}
We suppose that on-line computations are made on a regular time grid with constant step $\delta t$. Note that in most applications $\delta t$ is small compared to the time $\delta$ required to compute $P(t)$. The state of the {semi-}Markov chain $\{\theta(t), 0\leq t\leq T\}$ is observed,
but the jumps can only be considered, in the filter operation,
at the next point in the time grid. Set $\widetilde{T}_0=0$, and for $k\geq 1$ define $\widetilde{T}_k$ as
\begin{equation*}
\widetilde{T}_{k}=\inf\{j;\ T_k< j\delta t\}\delta t,
\end{equation*}
hence $\widetilde{T}_{k}$ is the effective time at which the $k$-th jump is taken into account. One has $\widetilde{T}_{k}>T_k$ and the difference between $\widetilde{T}_{k}$ and $T_k$ is at most $\delta t$. We also set $\widetilde{S}_k=\widetilde{T}_k-\widetilde{T}_{k-1}$ for $k\geq 1$.
Now we construct our approximation $\{\widetilde P(t), 0\leq t\leq T\}$ of $\{P(t), 0\leq t\leq T\}$ as follows
\begin{equation*}
\widetilde{P}(t)=\sum_{k=0}^\infty \widehat{P}_k(t-\widetilde{T}_k)\ind{0\leq t-\widetilde{T}_k<\widetilde{S}_{k+1}}\ind{t\leq T}.
\end{equation*}
Thus we just select the appropriate pre-computed solutions and paste them at the approximate jump times $\{\widetilde{T}_{k}\}$, which can be done on-line. The approximate gain matrices are simply defined by
\begin{equation*}
\widetilde{K}(t)=\widetilde{P}(t)C_{\theta(t)}'(D_{\theta(t)}D_{\theta(t)}')^{-1},
\end{equation*}
and the estimated trajectory satisfies
\begin{equation*}
d\widetilde{x}(t)=A_{\theta(t)}\widetilde{x}(t)dt+\widetilde{K}(t)(dy(t)-C_{\theta(t)}\widetilde{x}(t)dt),
\end{equation*}
for $0\leq t\leq T$, with initial condition $\widetilde{x}(0)=\mathbb{E}[x(0)]$.
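The on-line stage then amounts to cheap operations at each grid point: rounding each observed jump time up to the next grid point and performing one explicit filter step with the looked-up covariance. A hedged Python sketch (the Euler–Maruyama discretization of the filter equation and all names are ours):

```python
import numpy as np

def effective_jump_time(T_k, dt):
    """T~_k = inf{j : T_k < j*dt} * dt, the first grid point strictly after T_k."""
    return (np.floor(T_k / dt) + 1.0) * dt

def online_filter_step(x_tilde, y_incr, P_tilde, A, C, DDt, dt):
    """One Euler step of d x~ = A x~ dt + K~ (dy - C x~ dt),
    with gain K~ = P~ C' (D D')^{-1} built from the looked-up P~."""
    K = P_tilde @ C.T @ np.linalg.inv(DDt)
    innovation = y_incr - C @ x_tilde * dt
    return x_tilde + A @ x_tilde * dt + K @ innovation
```

Only the gain lookup and one matrix-vector update are performed per grid point, which is what makes the scheme real-time feasible.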
\section{Convergence of the approximation procedure}
\label{sec-convergence}
The convergence of our approximation scheme
under the general Assumption~\ref{hyp:infinite}
is investigated in several steps. The first is the evaluation of the error between $P(t)$ and $\widetilde{P}(t)$ up to the time horizon $T$, and requires some Lipschitz regularity properties of the solutions of the Riccati equations. First, we establish these regularity properties. Then we derive the error between $P$ and $\widetilde{P}$, and finally we evaluate the error between the exact KBF $\widehat{x}_{KB}$ and its quantized approximation $\widetilde{x}$.
\subsection{Regularity of the solutions of Riccati equations}
For all $t\geq 0$, suitable matrix $p\in\mathcal{C}(n_1)$ and $i\in\mathcal{S}$ denote by $\phi_i(p,t)$ the solution at time $t$ of the following Riccati equation starting from $p$ at time $0$,
\begin{equation*}\label{eqi Riccati}
\left\{\begin{array}{rcl}
d{P}(t)&=&R(P(t),i)dt,\\
P(0)&=&p,
\end{array}\right.
\end{equation*}
for $t\geq 0$.
We start with a boundedness result.
\begin{lemma}\label{lem00:Lip}
Under Assumption~\ref{hyp:infinite},
for all $\bar p_0\in\mathcal{C}(n_1)$, there exists a matrix $\bar p_1\in\mathcal{C}(n_1)$ such that $\bar p_1\geq \bar p_0$ and, for all $p\leq \bar p_0$, $i\in\mathcal{S}$ and times $0\leq t\leq T$, one has
$\phi_i(p,t)\leq \bar{p}_1$.
\end{lemma}
\textit{Proof.}
The Riccati equation can be rearranged in the following form
\begin{eqnarray*}
\frac{d P(t)}{dt} &=& A_{aux}(t)P(t) + P(t) A_{aux}(t)' + E_iE_i' \\
&&+ K_i(t)D_iD_i'K_i(t)',
\end{eqnarray*}
where $K_i(t)= P(t)C_i'(D_iD_i')^{-1}$ and $A_{aux}(t)=A_i - K_i(t)C_i$.
For any matrix $L$ with suitable dimensions, from the optimality of the KBF we have that
$\phi_i(p,t)\leq \phi_L(p,t)$ where $\phi_L(p,t)$ is the covariance
of a linear state observer with gain $L$, so that $\phi_L(p,t)$ is the solution of
\begin{eqnarray*}
\frac{d P(t)}{dt} &= & (A_i-LC_i)P(t) + P(t) (A_i-LC_i)' \\
&&+ E_iE_i' + LD_iD_i'L',\\
P(0)&=&p.
\end{eqnarray*}
In particular, we can set $L=0$, and $\phi_L(p,t)$ is now the solution of the linear differential equation
\begin{equation}\label{eq:covariance:of:trivial:filter}
\frac{d P(t)}{dt} = A_iP(t) + P(t) A_i' + E_iE_i', \qquad P(0)=p,
\end{equation}
which can be expressed in the form $\phi_L(p,t)=\Phi_1(t)p+\Phi_2(t)$ where
$\Phi_1(t)p\leq \beta e^{\alpha {\|A_i\|}t} \|p\| I$ and
$\Phi_2(t)\leq \int_0^t \beta e^{\alpha {\|A_i\|}\tau} \|E_iE_i'\| I d\tau $
for some scalars $\alpha,\beta$ that do not depend on $p,i$.
Set $\bar p_1=\beta e^{\alpha T{\|A\|_{\mathcal{S}}}}(\|\bar p_0\|+T \|E\|_{\mathcal{S}}^2) I$, thus completing the proof.
\qquad\hspace{\stretch{1}}$ \Box$
\begin{theorem}\label{lem-1-Ric}
Under Assumption~\ref{hyp:infinite}, for each $\widetilde p\in\mathcal{C}(n_1)$
there exist $\ell,\eta>0$ such that for all $i\in\mathcal{S}$
and $0\leq t,\widehat{t}\leq T$ and $p,\widehat{p}\leq \widetilde p$ one has
\begin{equation*}
\|\phi_i(p,t)-\phi_i(\widehat p,\widehat t)\|\leq \eta |t-\widehat{t}|+\ell\|p-\widehat{p}\|.
\end{equation*}
\end{theorem}
\textit{Proof.}
It follows directly from the definition of $R$ in Eq.~\eqref{eq:def:R} that
one has
\begin{eqnarray*}
\lefteqn{\frac{d}{dt}\big(\phi_i(p,t)-\phi_i(\widehat{p},t)\big)}\\
&=& A_i \phi_i(p,t) + \phi_i(p,t) A_i'+ E_iE_i' \\
&&- \phi_i(p,t) C_i'(D_iD_i')^{-1}C_i \phi_i(p,t)\\
&&- \big( A_i \phi_i(\widehat{p},t) + \phi_i(\widehat{p},t) A_i'+ E_iE_i' \\
&&- \phi_i(\widehat{p},t) C_i'(D_iD_i')^{-1}C_i \phi_i(\widehat{p},t) \big)\\
&=& A_i (\phi_i(p,t)-\phi_i(\widehat{p},t)) + (\phi_i(p,t)-\phi_i(\widehat{p},t)) A_i'\\
&&- \phi_i(\widehat{p},t) C_i'(D_iD_i')^{-1}C_i (\phi_i(p,t)-\phi_i(\widehat{p},t))\\
&&- (\phi_i(p,t)-\phi_i(\widehat{p},t)) C_i'(D_iD_i')^{-1}C_i \phi_i(\widehat{p},t)\\
&& - (\phi_i(p,t)-\phi_i(\widehat{p},t)) C_i'(D_iD_i')^{-1}C_i\\
&&\times (\phi_i(p,t)-\phi_i(\widehat{p},t)) \\
&=& (A_i - \phi_i(\widehat{p},t) C_i'(D_iD_i')^{-1}C_i) (\phi_i(p,t)-\phi_i(\widehat{p},t)) \\
&&+ (\phi_i(p,t)-\phi_i(\widehat{p},t)) (A_i'-C_i'(D_iD_i')^{-1}C_i \phi_i(\widehat{p},t))\\
&& - (\phi_i(p,t)-\phi_i(\widehat{p},t)) C_i'(D_iD_i')^{-1}C_i \\
&&\times(\phi_i(p,t)-\phi_i(\widehat{p},t)),
\end{eqnarray*}
or, by denoting $X(t)=\phi_i(p,t)-\phi_i(\widehat{p},t)$, one has $X(0)=p-\widehat{p}$ and
\begin{eqnarray}
\frac{d X(t)}{dt} &=& A_{aux}(t)X(t) + X(t) A_{aux}(t)' \nonumber\\
&&- X(t) C_i'(D_iD_i')^{-1}C_i X(t), \label{eq:aux:X}
\end{eqnarray}
where we write $A_{aux}(t)=(A_i - \phi_i(\widehat{p},t) C_i'(D_iD_i')^{-1}C_i)$
for ease of notation.
By setting $Y(0)=\|p-\widehat{p}\|I\geq X(0)$ and using the order preserving property of
the Riccati equation \eqref{eq:aux:X} it follows that $\{Y(t), 0\leq t\leq T\}$ defined as the solution of
\begin{eqnarray}
\frac{d Y(t)}{dt} &=& A_{aux}(t) Y(t) + Y(t) A_{aux}(t)' \nonumber\\
&& - Y(t) C_i'(D_iD_i')^{-1}C_i Y(t), \label{eq:aux:Y}
\end{eqnarray}
satisfies $Y(t)\geq X(t)$ for all $t\geq 0$.
The process $\{Y(t), 0\leq t\leq T\}$ can be interpreted as the error covariance of a filtering problem%
\footnote{Note that this does not hold true for the process $\{X(t), 0\leq t\leq T\}$ as it may not be positive semidefinite.},
more precisely the covariance of the error $\widehat{x}_{aux}(t)-x_{aux}(t)$ where $\{\widehat{x}_{aux}(t), 0\leq t\leq T\}$ satisfies
\begin{equation*}
d \widehat{x}_{aux} = A_{aux}(t) \widehat{x}_{aux} dt + K(t) (dy_{aux} - C_{aux} \widehat{x}_{aux} dt),
\end{equation*}
with $A_{aux}$ defined above,
$C_{aux}=(C_i'(D_iD_i')^{-1}C_i)^{1/2}$, $\{K(t), 0\leq t\leq T\}$ is the Kalman gain, and
\begin{equation*}
\left\{\begin{array}{rcl}
d x_{aux}(t) &=& A_{aux}(t) x_{aux}(t)dt ,\\
d y_{aux}(t) &=& C_{aux} x_{aux}(t)dt + dv_{aux}(t),
\end{array}\right.
\end{equation*}
where $\{v_{aux}(t), 0\leq t\leq T\}$ is a standard Wiener process with incremental covariance $Idt$,
and $x_{aux}(0)$ is a Gaussian random variable with covariance $p-\widehat{p}$.
Now, if we replace $K$ with the (suboptimal) gain $L=0$
we obtain a larger error covariance $Y_{L} (t)\geq Y(t)$.
With the trivial gain $L=0$ we also have
\begin{equation*}
d\widehat{x}_{aux}- dx_{aux} = A_{aux}(t)(\widehat{x}_{aux}-x_{aux}) dt,
\end{equation*}
so that direct calculation yields
\begin{equation}\label{eq:aux:YL}
\frac{d Y_L(t)}{dt} = A_{aux}(t) Y_L(t) + Y_L(t) A_{aux}(t)',
\end{equation}
with $Y_L(0)=\|p-\widehat{p}\|I$. Recall that $\widehat p\leq \widetilde p$ by hypothesis,
so that from Lemma \ref{lem00:Lip} we get a uniform bound $\bar p_1$ for
$\phi_i(\widehat p,t)$, which in turn yields
that $\|A_{aux}\|_{\mathcal{S}}$ is bounded on the time interval $0\leq t\leq T$
and for all $\widehat p\leq \widetilde p$. This allows us to write
\begin{equation*}
Y(t) \leq \ell_1 \|p-\widehat{p}\| I,\qquad 0\leq t\leq T,
\end{equation*}
for some $\ell_1\geq 0$ (uniform on $t$, $p$, $\widehat p$ and $i$).
Gathering some of the above inequalities together, one gets
\begin{equation}\label{eq-aux-main-eval01}
\phi_i(p,t)-\phi_i(\widehat{p},t)=X(t)\leq Y(t)\leq Y_L(t)\leq \ell_1 \|p-\widehat{p}\| I,
\end{equation}
$0\leq t\leq T$. Similarly, one can obtain
\begin{equation}\label{eq-aux-main-eval02}
\phi_i(\widehat p,t)-\phi_i(p,t)\leq \ell_2 \|p-\widehat{p}\| I,\qquad 0\leq t\leq T,
\end{equation}
where, again, $\ell_2$ is uniform on $t$, $p$, $\widehat p$ and $i$.
Eqs.~\eqref{eq-aux-main-eval01}, \eqref{eq-aux-main-eval02}
and the fact that
$\phi_i(\widehat p,t)-\phi_i(p,t)$ is symmetric lead to
\begin{eqnarray*}
-\max(\ell_1,\ell_2)\|p-\widehat{p}\|&\leq& \lambda_{\min}(\phi_i(\widehat p,t)-\phi_i(p,t))\\
&\leq &\lambda_{\max}(\phi_i(\widehat p,t)-\phi_i(p,t)) \\
&\leq &\max(\ell_1,\ell_2)\|p-\widehat{p}\|.
\end{eqnarray*}
Hence, one has
\begin{equation*}
\|\phi_i(\widehat p,t)-\phi_i(p,t)\|\leq \max(\ell_1,\ell_2) \|p-\widehat{p}\|,
\end{equation*}
completing the first part of the proof.
For the second part, similarly to the proof of the preceding lemma,
we have that $\phi_i(p,t)$ is bounded from above by $X(t)$,
the solution of the linear differential equation
Eq.~\eqref{eq:covariance:of:trivial:filter} with initial condition $X(0)=p$,
and it is then simple to find scalars $\eta_1,\eta_2>0$ irrespective of
$i$ such that, for the entire time interval $0\leq t\leq T$,
\begin{equation*}
\|X(t)-p\| \leq \|\Phi_2(t)\| + \|(\Phi_1(t)-I)p\|
\leq \eta_1 t + \eta_2 t \|p\|.
\end{equation*}
Hence, one has
\begin{equation}\label{eq:maj phi}
\phi_i(p,t)-p\leq X(t)-p \leq \|X(t)-p\| I \leq (\eta_1 t + \eta_2 t \|p\|) I,
\end{equation}
for all $t\geq 0$, leading to
\begin{equation*}
\|\phi_i(p,t)-p\| \leq \eta_1 t + \eta_2 t \|p\|.
\end{equation*}
As $p\leq \widetilde p$ by hypothesis, we have $\|p\|\leq n_1\|\widetilde p\|$ and
it follows immediately from the above inequality that
\begin{equation}\label{eq:aux:XLips2}
\|\phi_i(p,t)-p\| \leq (\eta_1 + \eta_2 n_1 \|\widetilde p\|) t .
\end{equation}
As operator $R$ does not depend on time,
we have $\phi_i(p,t_1+t_2)=\phi_i(\phi_i(p,t_1),t_2)$, $t_1,t_2\geq 0$,
and defining $\bar p=\phi_i(p,t_1)$, one has
\begin{equation*}
\|\phi_i(p,t_1+t_2)-\phi_i(p,t_1)\| = \|\phi_i(\bar p,t_2)-\bar p\|
\end{equation*}
and Eq.~\eqref{eq:aux:XLips2} allows us to write
\begin{equation*}
\|\phi_i(p,t_1+t_2)-\phi_i(p,t_1)\| \leq (\eta_1 + \eta_2 n_1 \|\widetilde p\|) t_2.
\end{equation*}
The result then follows by setting $t_1=\widehat t$ and $t_2=t-\widehat t$ if $t> \widehat t$
or with $t_1=t$ and $t_2=\widehat t - t$ otherwise.
\qquad\hspace{\stretch{1}}$ \Box$
\subsection{Error derivation for gain matrices}
We proceed in three steps. The first is to study the error between $P_k(t)$ and $\widehat{P}_k(t)$, the second is to study the error between $P(t)$ and $\widetilde{P}(t)$, and the last is to compare the gain matrices $K_{KB}(t)$ and $\widetilde{K}(t)$, for $0\leq t\leq T$. We start with an important preliminary result that will enable us to use Theorem~\ref{lem-1-Ric}
throughout the sequel.
\begin{lemma}\label{lem0:Lip}
Under Assumption~\ref{hyp:infinite}, there exists a matrix $\bar p\in\mathcal{C}(n_1)$ such that for all integers $0\leq k\leq n$ and times $0\leq t\leq T$, one has
\begin{equation*}
P_k(t)\leq \bar p,\qquad \widehat{P}_k(t)\leq \bar p.
\end{equation*}
\end{lemma}
\textit{Proof.} We prove the result by induction on $k$. For $k=0$, one has $p_0\in\mathcal{C}(n_1)$ and $P_0(t)=\widehat{P}_0(t)=\phi_{Z_0}(p_0,t)$ for all $t\leq T$. Lemma~\ref{lem00:Lip} thus yields the existence of a matrix $\bar p_0\in\mathcal{C}(n_1)$ such that $P_0(t)\leq \bar p_0$ for all $t\leq T$.
Suppose that for a given $k\leq n-1$, there exists a matrix $\bar p_k\in\mathcal{C}(n_1)$ such that $P_k(t)\leq \bar p_k$ and $\widehat{P}_k(t)\leq \bar p_k$ for all $t\leq T$. Then in particular, if $S_k\leq T$ and $\widehat{S}_k\leq T$, one has $P_{k+1}(0)=P_k(S_k)\leq \bar p_k$ and $\widehat{P}_{k+1}(0)=\widehat{P}_k(\widehat{S}_k)\leq \bar p_k$. Hence, Lemma~\ref{lem00:Lip} gives the existence of a matrix $\bar p_{k+1}\in\mathcal{C}(n_1)$ such that $P_{k+1}(t)\leq \bar p_{k+1}$ and $\widehat{P}_{k+1}(t)\leq \bar p_{k+1}$ for all $t\leq T$. One thus obtains an increasing sequence $(\bar p_k)$ of matrices in $\mathcal{C}(n_1)$ and the result follows by setting $\bar p=\bar p_n$.
\qquad \hspace{\stretch{1}}$ \Box$
In the following, for $\bar p$ given by Lemma~\ref{lem0:Lip} we set $\widetilde p= \bar p$
in Theorem~\ref{lem-1-Ric}
and denote by $\bar \ell$ and $\bar \eta$ the
corresponding Lipschitz constants.
We now turn to the investigation of the error between the processes $P_k(t)$ and $\widehat{P}_k(t)$.
\begin{lemma}\label{lem1:Lip}
Under Assumption~\ref{hyp:infinite}, for all integers $1\leq k\leq n$ and times $0\leq t\leq T$, one has
\begin{equation*}
\|P_k(t)-\widehat{P}_k(t)\|\leq\bar\ell\|P_{k-1}(S_k)- \widehat{P}_{k-1}(\widehat{S}_k)\|.
\end{equation*}
\end{lemma}
\textit{Proof.} One has $P_k(t)=\phi_{Z_k}(P_{k-1}(S_k),t)$ and $\widehat{P}_k(t)=\phi_{Z_k}(\widehat{P}_{k-1}(\widehat{S}_k),t)$. Hence, Lemma~\ref{lem0:Lip} and Theorem~\ref{lem-1-Ric}
yield
\begin{eqnarray*}
\lefteqn{\|P_k(t)-\widehat{P}_k(t)\|}\\
&=&\|\phi_{Z_k}(P_{k-1}(S_k),t)-\phi_{Z_k}(\widehat{P}_{k-1}(\widehat{S}_k),t)\|\\
&\leq&\bar\ell\|P_{k-1}(S_k)- \widehat{P}_{k-1}(\widehat{S}_k)\|,
\end{eqnarray*}
if $S_k,\widehat{S}_k\leq T$, hence the result.
\qquad \hspace{\stretch{1}}$ \Box$
\begin{lemma}\label{lem2:Lip}
Under Assumption~\ref{hyp:infinite}, for all integers $0\leq k\leq n$ satisfying $S_k,\widehat{S}_k\leq T$, one has
\begin{equation*}
\|P_{k}(S_{k+1})- \widehat{P}_{k}(\widehat{S}_{k+1})\|\leq \sum_{j=0}^{k}\bar\ell^{k-j}\bar\eta|S_{j+1}-\widehat{S}_{j+1}|.
\end{equation*}
\end{lemma}
\textit{Proof.} By definition, one has $P_k(S_{k+1})=\phi_{Z_k}(P_{k-1}(S_k),S_{k+1})$ and $\widehat{P}_k(\widehat{S}_{k+1})=\phi_{Z_k}(\widehat{P}_{k-1}(\widehat{S}_k),\widehat{S}_{k+1})$. Hence as above, one has
\begin{eqnarray*}
\lefteqn{\|P_{k}(S_{k+1})- \widehat{P}_{k}(\widehat{S}_{k+1})\|}\\
&=&\|\phi_{Z_k}(P_{k-1}(S_k),S_{k+1})-\phi_{Z_k}(\widehat{P}_{k-1}(\widehat{S}_k),\widehat{S}_{k+1})\|\\
&\leq&\bar\ell\|P_{k-1}(S_k)- \widehat{P}_{k-1}(\widehat{S}_k)\|+\bar\eta|S_{k+1}-\widehat{S}_{k+1}|.
\end{eqnarray*}
Then notice that one also has
\begin{eqnarray*}
\lefteqn{\|P_0(S_1)-\widehat{P}_0(\widehat{S}_1)\|}\\
&=&\|\phi_{Z_0}(p_0,S_1)-\phi_{Z_0}(p_0,\widehat{S}_1)\|\leq\bar\eta|S_1-\widehat{S}_1|,
\end{eqnarray*}
and the result is obtained by recursion.
\qquad \hspace{\stretch{1}}$ \Box$
We can now turn to the error between the processes $P(t)$ and $\widetilde{P}(t)$.
\begin{theorem}\label{th:Lip}
Under Assumption~\ref{hyp:infinite}, for all $0\leq t< T\wedge T_{n+1}$, one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}[\|P(t)-\widetilde{P}(t)\|^2\ind{0\leq t\leq T\wedge T_{n+1}}]^{1/2}}\\
&\leq&\sum_{j=0}^{n-1}\bar\ell^{n-j}\bar\eta\mathbb{E}[|S_{j+1}-\widehat{S}_{j+1}|^2]^{1/2}\\
&&+\bar\eta \delta t+n\|\bar p\|(\overline{\lambda}\delta t)^{1/2},
\end{eqnarray*}
where $\bar p$ is defined in Lemma~\ref{lem0:Lip}.
\end{theorem}
\begin{rem}\label{rem-byproduct}
Note that the above result is very general. Indeed, we do not use in its proof that $\widehat{S}_k$ is the quantized approximation of $S_k$. We have established that, given a {semi-}Markov chain $\{\theta(t), 0\leq t\leq T\}$ and a process $\{\widehat{\theta}(t),0\leq t\leq T\}$ obtained by a perturbation of the jump times of $\{\theta(t), 0\leq t\leq T\}$, the two solutions of the Riccati equations driven by these two processes respectively are not far away from each other, as long as the real and perturbed jump times are not far away from each other.
We allow two kinds of perturbations, a random one, given by the replacement of $S_k$ by $\widehat{S}_k$
and a deterministic one given by $\delta t$ corresponding to a delay in the jumps.
In the case of non-instantaneous observation of $\theta(t)$ (i.e., imperfect observation $\widetilde{S}_k$ of
$S_k$), the error $\mathbb{E}[|\widetilde{S}_{j+1}-\widehat{S}_{j+1}|^2]$ may not converge to zero, but the corresponding upper bound on the approximation error of the Riccati solution remains valid and can reasonably be supposed small enough.
Note also that the result is still valid for any $L^q$ norm instead of the $L^2$ norm as the initial value of the Riccati solution is deterministic, as long as the distributions $F_i$ have moments of order greater than $q$.
\end{rem}
\textit{Proof.} By definition, one has for all $0\leq t< T\wedge T_{n+1}$
\begin{eqnarray*}
\lefteqn{P(t)-\widetilde{P}(t)}\\
&=&\sum_{k=0}^n\big(P_k(t-T_k)\ind{0\leq t-T_k<S_{k+1}}\\
&&-\widehat{P}_k(t-\widetilde{T}_k)\ind{0\leq t-\widetilde{T}_k<\widetilde{S}_{k+1}}\big)\\
&=&\sum_{k=0}^n\big(P_k(t-T_k)-\widehat{P}_k(t-{T}_k)\big)\ind{0\leq t-T_k<S_{k+1}}\\
&&+\sum_{k=0}^n\big(\widehat{P}_k(t-{T}_k)-\widehat{P}_k(t-\widetilde{T}_k)\big)\ind{0\leq t-T_k<S_{k+1}}\\
&&+\sum_{k=0}^n\!\widehat{P}_k(t-\widetilde{T}_k)(\ind{0\leq t-T_k<S_{k+1}}\!-\!\ind{0\leq t-\widetilde{T}_k<\widetilde{S}_{k+1}})\\
&=&\epsilon_1(t)+\epsilon_2(t)+\epsilon_3(t).
\end{eqnarray*}
From Lemmas \ref{lem1:Lip} and \ref{lem2:Lip}, the first term $\epsilon_1$ can be bounded by
\begin{eqnarray*}
\lefteqn{\|\epsilon_1(t)\|}\\
&\leq&\big\|\sum_{k=0}^n \big(P_k(t-T_k)-\widehat{P}_k(t-T_k)\big)\ind{0\leq t-T_k<S_{k+1}}\big\|\\
&\leq& \sum_{k=0}^n \|P_k(t-T_k)-\widehat{P}_k(t-T_k)\|\ind{0\leq t-T_k<S_{k+1}}\\
&\leq& \sum_{k=0}^n \bar\ell\|P_{k-1}(S_k)- \widehat{P}_{k-1}(\widehat{S}_k)\|\ind{0\leq t-T_k<S_{k+1}}\\
&\leq& \sum_{k=0}^n \sum_{j=0}^{k-1}\bar\ell^{k-j}\bar\eta|S_{j+1}-\widehat{S}_{j+1}|\ind{T_k\leq t<T_{k+1}}\\
&\leq& \sum_{j=0}^{n-1}\bar\ell^{n-j}\bar\eta|S_{j+1}-\widehat{S}_{j+1}|.
\end{eqnarray*}
The second term $\epsilon_2$ is bounded by Lemma~\ref{lem0:Lip} and Theorem~\ref{lem-1-Ric}
as follows
\begin{eqnarray*}
\lefteqn{\|\epsilon_2(t)\|}\\
&\leq&\big\|\sum_{k=0}^n \big(\widehat{P}_k(t-{T}_k)-\widehat{P}_k(t-\widetilde{T}_k)\big)\ind{0\leq t-T_k<S_{k+1}}\big\|\\
&\leq& \sum_{k=0}^n \|\widehat{P}_k(t-T_k)-\widehat{P}_k(t-\widetilde{T}_k)\|\ind{0\leq t-T_k<S_{k+1}}\\
&\leq& \sum_{k=0}^n\bar\eta |T_k-\widetilde{T}_k|\ind{0\leq t-T_k<S_{k+1}}\\
&\leq& \bar\eta\delta t,
\end{eqnarray*}
using the fact that the difference between $T_k$ and $\widetilde{T}_k$ is less than $\delta t$ by construction. Finally, the last term $\epsilon_3$ is bounded by using Lemma~\ref{lem0:Lip} and the fact that $0\leq {T}_k\leq \widetilde{T}_k$ for all $k$. Indeed, one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}[\|\epsilon_3(t)\|^2]^{1/2}}\\
&\leq&\mathbb{E}\big[\big\|\sum_{k=0}^n\widehat{P}_k(t-\widetilde{T}_k)(\ind{0\leq t-T_k<S_{k+1}}\\
&&-\ind{0\leq t-\widetilde{T}_k<\widetilde{S}_{k+1}})\big\|^2\big]^{1/2}\\
&\leq&\|\bar p\| \sum_{k=0}^n \mathbb{E}[|\ind{0\leq t-T_k<S_{k+1}}-\ind{0\leq t-\widetilde{T}_k<\widetilde{S}_{k+1}}|^2]^{1/2}\\
&\leq&\|\bar p\| \sum_{k=0}^n \mathbb{P}(t-\delta t\leq T_k\leq t)^{1/2}\\
&\leq& n\|\bar p\|\sum_{i\in\mathcal{S}}\big({\lambda_i \delta t}\big)^{1/2}\mathbb{P}(Z_k=i)\\
&\leq& n\|\bar p\|\big({\overline{\lambda} \delta t}\big)^{1/2}.
\end{eqnarray*}
One obtains the result by taking the $L^2$ expectation norm on both sides of the inequalities involving $\epsilon_1$ and $\epsilon_2$ as well.
\qquad\hspace{\stretch{1}}$ \Box$
Therefore, as the errors $\mathbb{E}[|S_{j+1}-\widehat{S}_{j+1}|^2]$ go to $0$
as the number of points in the discretization grids goes to infinity, we have the convergence of
$\widetilde{P}(t)$ to ${P}(t)$ as long as the time grid step $\delta t$ also goes to $0$.
Theorem \ref{th:Lip} also gives a convergence rate for $\|P(t)-\widetilde{P}(t)\|$,
provided that $0\leq t< T\wedge T_{n+1}$.
The convergence rate for the gain matrices is now straightforward from their definitions.
\begin{corollary}\label{cor:ErrK}
Under Assumption~\ref{hyp:infinite}, for all $0\leq t< T\wedge T_{n+1}$, one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}[\|K_{KB}(t)-\widetilde{K}(t)\|^2\ind{0\leq t\leq T\wedge T_{n+1}}]^{1/2}}\\
&\leq&\|C'(DD')^{-1}\|_\mathcal{S}\Big(\sum_{j=0}^{n-1}\bar\ell^{n-j}\bar\eta\mathbb{E}[|S_{j+1}-\widehat{S}_{j+1}|^2]^{1/2}\\
&&+\bar\eta \delta t+n\|\bar p\|{(\overline{\lambda}\delta t)}^{1/2}\Big).
\end{eqnarray*}
\end{corollary}
\subsection{Error derivation for the filtered trajectories}
We now turn to the estimation of the error between the exact KBF trajectory and our approximate one. We start by introducing some new notation. Let $b: \mathbb{R}\times\mathbb{R}^{2n_1}\rightarrow\mathbb{R}^{2n_1}$ and $\widetilde{b}: \mathbb{R}\times\mathbb{R}^{2n_1}\rightarrow\mathbb{R}^{2n_1}$ be defined by
\begin{eqnarray*}
b(t,z)&=&\left(
\begin{array}{cc}
A_{\theta(t)}&0\\
K_{KB}(t)C_{\theta(t)}&A_{\theta(t)}-K_{KB}(t)C_{\theta(t)}
\end{array}\right)z,\\
\widetilde{b}(t,z)&=&\left(
\begin{array}{cc}
A_{\theta(t)}&0\\
\widetilde K(t)C_{\theta(t)}&A_{\theta(t)}-\widetilde K(t)C_{\theta(t)}
\end{array}\right)z.
\end{eqnarray*}
Let also $\sigma: \mathbb{R}\rightarrow\mathbb{R}^{2n_1\times(n_3+n_4)}$ and $\widetilde\sigma: \mathbb{R}\rightarrow\mathbb{R}^{2n_1\times(n_3+n_4)}$ be defined by
\begin{equation*}
\sigma(t)=\left(
\begin{array}{cc}
E_{\theta(t)}&0\\
0&K_{KB}(t)D_{\theta(t)}
\end{array}\right),
\end{equation*}
\begin{equation*}
\widetilde\sigma(t)=\left(
\begin{array}{cc}
E_{\theta(t)}&0\\
0&\widetilde K(t)D_{\theta(t)}
\end{array}\right).
\end{equation*}
Finally, set $W(t)=(w(t)',v(t)')'$, $X(t)=(x(t)',\widehat{x}_{KB}(t)')'$ and $\widetilde{X}(t)=(x(t)',\widetilde{x}(t)')'$, so that the two processes $\{X(t), 0\leq t\leq T\}$ and $\{\widetilde{X}(t), 0\leq t\leq T\}$ have the following dynamics
\begin{equation*}
\left\{\begin{array}{l}
dX(t)=b(t,X_t)dt+\sigma(t)dW(t),\\
X(0)=(x(0)',\mathbb{E}[x(0)]')',
\end{array}\right.
\end{equation*}
\begin{equation*}
\left\{\begin{array}{l}
d\widetilde{X}(t)=\widetilde{b}(t,\widetilde{X}_t)dt+\widetilde\sigma(t)dW(t),\\
\widetilde{X}(0)=(x(0)',\mathbb{E}[x(0)]')'.
\end{array}\right.
\end{equation*}
The regularity properties of functions $b$, $\widetilde{b}$, $\sigma$ and $\widetilde{\sigma}$ are quite straightforward from their definition.
\begin{lemma}\label{lem:Lipbsig}
Under Assumption~\ref{hyp:infinite}, for all $0\leq t\leq T$ and $z,\widehat{z}\in\mathbb{R}^{2n_1}$, one has
\begin{eqnarray*}
|b(t,z)|\!\!&\!\!\leq\!\!&\!\!(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})|z|,\\
|\widetilde{b}(t,z)|\!\!&\!\!\leq\!\!&\!\!(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})|z|,\\
\|\sigma(t)\|_2\!\!&\!\!\leq\!\!&\!\!\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S},\\
\|\widetilde\sigma(t)\|_2\!\!&\!\!\leq\!\!&\!\!\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S},\\
|b(t,z)-b(t,\widehat{z})|\!\!&\!\!\leq\!\!&\!\!(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})|z-\widehat{z}|,\\
|\widetilde{b}(t,z)-\widetilde{b}(t,\widehat{z})|\!\!&\!\!\leq\!\!&\!\!(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})|z-\widehat{z}|,
\end{eqnarray*}
where $\bar p$ is the matrix defined in Lemma~\ref{lem0:Lip}.
\end{lemma}
\textit{Proof.}
Upper bounds for $\|K_{KB}(t)\|_2$ and $\|\widetilde K(t)\|_2$ come from the upper bounds for $P_k(t)$ and $\widehat{P}_k(t)$ given in Lemma~\ref{lem0:Lip}.
\qquad\hspace{\stretch{1}}$ \Box$
In particular, the processes $\{X(t), 0\leq t\leq T\}$ and $\{\widetilde{X}(t), 0\leq t\leq T\}$ are well defined and $\mathbb{E}[\sup_{t\leq T}|X(t)|^2]$ and $\mathbb{E}[\sup_{t\leq T}|\widetilde X(t)|^2]$ are finite, see e.g. \cite{KS91}. Set also $\Delta(t)=K_{KB}(t)-\widetilde K(t)$. In order to compare $X(t)$ and $\widetilde X(t)$, one needs first to be able to compare $b$ with $\widetilde{b}$ and $\sigma$ with $\widetilde{\sigma}$. The following result is straightforward from their definition.
\begin{lemma}\label{lem:Lipbsigt}
Under Assumption~\ref{hyp:infinite}, for all $0\leq t\leq T$ and $z\in\mathbb{R}^{2n_1}$, one has
\begin{eqnarray*}
|b(t,z)-\widetilde b(t,z)|&\leq&2\|C\|_\mathcal{S}\|\Delta(t)\||z|,\\
\|\sigma(t)-\widetilde \sigma(t)\|_\mathcal{S}&\leq&\|D\|_\mathcal{S}\|\Delta(t)\|.
\end{eqnarray*}
\end{lemma}
We also need some bounds on the conditional moments of $\{X(t), 0\leq t\leq T\}$. Let $\{\mathcal{F}_t, 0\leq t\leq T\}$ be the filtration generated by the semi-Markov process $\{\theta(t), 0\leq t\leq T\}$, and $\mathbb{E}_t[\cdot]=\mathbb{E}[\cdot\ |\ \mathcal{F}_t]$.
\begin{lemma}\label{lem:X4}
Under Assumption~\ref{hyp:infinite}, there exists a constant $c_2$ independent of the parameters of the system such that for $0\leq t \leq T$ one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[\sup_{t\leq T\wedge T_{n+1}}|X(t)|^2]}\\
&\leq& 2c_2T(\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S})^2\\
&&\times\exp(2T^2(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2).
\end{eqnarray*}
\end{lemma}
\textit{Proof.}
As $\{\theta(t), 0\leq t\leq T\}$ and the noise process $\{W(t), 0\leq t\leq T\}$ are independent, and the process $\{K_{KB}(t), 0\leq t\leq T\}$ only depends on $\{\theta(t), 0\leq t\leq T\}$ by construction, one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[\sup_{u\leq t\wedge T\wedge T_{n+1}}|X(u)|^2]}\\
&\leq & 2\mathbb{E}_T\Big[\sup_{u\leq t\wedge T\wedge T_{n+1}}\Big|\int_0^{u}\sigma(s)dW(s)\Big|^2\Big]\\
&&+2\mathbb{E}_T\Big[\sup_{u\leq t\wedge T\wedge T_{n+1}}\Big|\int_0^{u}b(s,X(s))ds\Big|^2\Big]\\
&\leq&2c_2\mathbb{E}_T\Big[\int_0^{T\wedge T_{n+1}}\big\|\sigma(s)\big\|^2ds\Big]\\
&&+2T\mathbb{E}_T\Big[\int_0^{t\wedge T\wedge T_{n+1}}\big|b(s,X(s))\big|^2ds\Big],\\
\end{eqnarray*}
from convexity and Burkholder--Davis--Gundy inequalities, see e.g. \cite{KS91}, where $c_2$ is a constant independent of the parameters of the problem. From Lemma~\ref{lem:Lipbsig} one gets
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[\sup_{u\leq t\wedge T\wedge T_{n+1}}|X(u)|^2]}\\
&\leq&2c_2T(\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S})^2\\
&&+2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2\\
&&\times\int_0^{t}{\mathbb{E}_T[\sup_{u\leq s\wedge T\wedge T_{n+1}}|X(u)|^2]ds}.
\end{eqnarray*}
Finally, we use Gronwall's lemma to obtain
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[\sup_{t\leq T\wedge T_{n+1}}|X(t)|^2]}\\
&\leq & 2c_2T(\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S})^2\\
&&\times\exp(2T^2(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2)
\end{eqnarray*}
which proves the result.
\qquad\hspace{\stretch{1}}$ \Box$
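For the reader's convenience, we recall the integral form of Gronwall's lemma invoked in the proof above and again below (a standard statement, not a result specific to this paper): if $\phi$ is nonnegative, bounded and satisfies

```latex
\begin{equation*}
\phi(t)\leq a+b\int_0^t\phi(s)\,ds,\qquad 0\leq t\leq T,
\end{equation*}
```

for constants $a,b\geq 0$, then $\phi(t)\leq a\exp(bt)$ for all $0\leq t\leq T$. In the proof above it is applied with $a=2c_2T(\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S})^2$ and $b=2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2$, evaluated at $t=T$.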
In the sequel, let $\overline{X}$ be the upper bound given by Lemma~\ref{lem:X4}:
\begin{eqnarray*}
\overline{X}&=&2c_2T(\|E\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}\|(DD')^{-1}\|_\mathcal{S}\|D\|_\mathcal{S})^2\\
&&\times\exp(2T^2(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2).
\end{eqnarray*}
We can now state and prove our convergence result.
\begin{theorem}\label{th:cv filter}
Under Assumption~\ref{hyp:infinite}, for $0\leq t \leq T$ one has
\begin{equation*}
\mathbb{E}[|X(t)-\widetilde{X}(t)|^2\ind{0\leq t\leq T\wedge T_{n+1}}]\leq \overline{c}_1\exp(T\overline{c}_2),
\end{equation*}
with
\begin{eqnarray*}
\overline{c}_1&=&(2\|D\|_\mathcal{S}+8T\|C\|_\mathcal{S}^2\overline{X})\|C_{i}'(D_{i}D_{i}')^{-1}\|_\mathcal{S}\\
&&\times\Big(\sum_{j=0}^{n-1}\bar\ell^{n-j}\bar\eta\mathbb{E}[|S_{j+1}-\widehat{S}_{j+1}|^2]^{1/2}\\
&&+\bar\eta \delta t+n\|\bar p\|{(\overline{\lambda}\delta t)}^{1/2}\Big)^2,\\
\overline{c}_2&=&2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2.
\end{eqnarray*}
\end{theorem}
\textit{Proof.}
We follow the same lines as in the previous proof. As $\{\theta(t), 0\leq t\leq T\}$ and the noise process $\{W(t), 0\leq t\leq T\}$ are independent, and the processes $\{K_{KB}(t), 0\leq t\leq T\}$ and $\{\widetilde K(t), 0\leq t\leq T\}$ only depend on $\{\theta(t), 0\leq t\leq T\}$ by construction, one has
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[|X(t)-\widetilde{X}(t)|^2\ind{0\leq t\leq T\wedge T_{n+1}}]}\\
&\leq & 2\mathbb{E}_T\Big[\Big|\int_0^{t\wedge T\wedge T_{n+1}}\big(\sigma(s)-\widetilde\sigma(s)\big)dW(s)\Big|^2\Big]\\
&&+2\mathbb{E}_T\Big[\Big|\int_0^{t\wedge T\wedge T_{n+1}}\big(b(s,X(s))-\widetilde b(s,\widetilde X(s))\big)ds\Big|^2\Big]\\
&\leq&2\mathbb{E}_T\Big[\int_0^{t\wedge T\wedge T_{n+1}}\big\|\sigma(s)-\widetilde\sigma(s)\big\|^2ds\Big]\\
&&+2T\mathbb{E}_T\Big[\int_0^{t\wedge T\wedge T_{n+1}}\big|b(s,X(s))-\widetilde b(s,\widetilde X(s))\big|^2ds\Big],\\
\end{eqnarray*}
from the isometry property of It\^o integrals and the Cauchy--Schwarz inequality.
From Lemmas~\ref{lem:Lipbsig}, \ref{lem:Lipbsigt} and Fubini one gets
\begin{eqnarray*}
\lefteqn{\mathbb{E}_T[|X(t)-\widetilde{X}(t)|^2\ind{0\leq t\leq T\wedge T_{n+1}}]}\\
&\leq&2\|D\|_\mathcal{S}\int_0^{t\wedge T\wedge T_{n+1}}\big\|\Delta(s)\big\|^2ds\\
&&+2T\|C\|_\mathcal{S}^2\int_0^{t\wedge T\wedge T_{n+1}}\big\|\Delta(s)\big\|^2\mathbb{E}_T[|X(s)|^2]ds\\
&&+2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2\\
&&\times\mathbb{E}_T\Big[\int_0^{t\wedge T\wedge T_{n+1}}\big|X(s)-\widetilde X(s)\big|^2ds\Big]\\
&\leq&(2\|D\|_\mathcal{S}+8T\|C\|_\mathcal{S}^2\overline{X})\int_0^{t\wedge T\wedge T_{n+1}}\big\|\Delta(s)\big\|^2ds\\
&&+2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2\\
&&\times\int_0^{t}\mathbb{E}_T[\big|X(s)-\widetilde X(s)\big|^2\ind{0\leq s\leq T\wedge T_{n+1}}]ds\\
&\leq&\widetilde{c}_1+\widetilde{c}_2\int_0^{t}\mathbb{E}_T[\big|X(s)-\widetilde X(s)\big|^2\ind{0\leq s\leq T\wedge T_{n+1}}]ds,
\end{eqnarray*}
from Lemma~\ref{lem:X4}, with
\begin{eqnarray*}
\widetilde c_1&=&(2\|D\|_\mathcal{S}+8T\|C\|_\mathcal{S}^2\overline{X})\int_0^{t\wedge T\wedge T_{n+1}}\big\|\Delta(s)\big\|^2ds,\\
\widetilde c_2&=&2T(\|A\|_\mathcal{S}+\|\bar p\|\|C\|_\mathcal{S}^2\|(DD')^{-1}\|_\mathcal{S})^2.
\end{eqnarray*}
We use Gronwall's lemma to obtain
\begin{eqnarray*}
\mathbb{E}_T[|X(t)-\widetilde{X}(t)|^2\ind{0\leq t\leq T\wedge T_{n+1}}]
&\leq &\widetilde c_1\exp(T\widetilde c_2),
\end{eqnarray*}
and conclude by taking the expectation on both sides and using Corollary~\ref{cor:ErrK} to bound $\mathbb{E}[\widetilde c_1]$.
\qquad\hspace{\stretch{1}}$ \Box$
As a consequence of the previous result, $|\widehat{x}_{KB}(t)-\widetilde{x}(t)|$ goes to $0$ almost surely as the number of points in the discretization grids goes to infinity.
\begin{rem}\label{rem-delayed-observation}
As noted in Remark \ref{rem-byproduct}, in the case of imperfect observation $\widetilde{S}_k$ of
$S_k$, the errors $\mathbb{E}[|\widetilde S_{j+1}-\widehat{S}_{j+1}|^2]$
do not necessarily go to $0$ if $\theta$ is not instantaneously observed;
however, the errors are small when the time delays are small.
The previous result implies that the filter performance deterioration
is proportional to these errors. Acceptable performance can still be
achieved in applications where $\theta$ is not instantaneously observed.
\end{rem}
\section{Numerical example}
\label{sec-example}
We now illustrate our results on a magnetic suspension system presented in \cite{ECosta99ieee}.
The system is a laboratory device that consists of a coil whose voltage is
controlled by a rather simple (non-reliable) pulse-width modulation system,
and sensors for position of a suspended metallic sphere and for the
coil current. The model around the origin without jumps and noise is
in the form $\dot x(t)=A x(t)+B u(t)$, $y(t)=C x(t)$, with
\begin{equation*}%
A=\left(\begin{array}{ccc}
0&1&0\\
1750&0&-34.1\\
0& 0 & -0.0383
\end{array}\right),\qquad
B=\left(\begin{array}{c}
0\\
0\\
1.9231
\end{array}\right),
\end{equation*}
\begin{equation*}
C=\left(\begin{array}{ccc}
1&0&0\\
0&0&1
\end{array}\right).
\end{equation*}
The components of vector $x(t)$ are the position of the sphere, its
speed and the coil current. The coil voltage $u(t)$ is controlled
using a stabilizing state feedback control,
leading to the closed loop dynamics $\dot x(t)=A_1 x(t)$,
\begin{equation*}%
A_1=\left(\begin{array}{ccc}
0&1&0\\
1750&0&-34.1\\
4360.2&104.2&-84.3
\end{array}\right).
\end{equation*}
We consider the realistic scenario where the system may be operating in
normal mode $\theta=1$ or in critical failure $\theta=2$
due e.g. to faults in the pulse-width modulation system, which is included in the model
by making $B_2=0$, leading to the closed loop dynamics $\dot x(t)=A_2 x(t)$
with $A_2=A$.
Although it is natural to consider that the system starts in normal
mode a.s. and never recovers from a failure,
we want to compare the performance of the proposed filter with
the LMMSE \cite{FC10}, which requires a true Markov chain with positive probabilities
for all modes at all times. We therefore relax the problem by setting
the initial distribution $\pi(0)=(0.999, 0.001)$ and the transition rates matrix
\begin{equation*}
\Lambda=\left(\begin{array}{cc}
-20&20\\
0.1&-0.1
\end{array}\right)
\end{equation*}
with the interpretation that the recovery from failure mode is
relatively slow.
In the overall model Eq.~\eqref{mjls} we set $C_1=C_2=C$ and we also consider that $x(0)$ is
normally distributed with mean
$\mathbb{E}[x(0)]=(0.001,0,0)'$ and variance $Var(x(0))=I_3$,
\begin{equation*}
E_1\!=\!E_2\!=\!\left(\begin{array}{ccc}
1&0.2&-1.9\\
-0.1&1.4&-0.3\\
0.1&0.5&1
\end{array}\right)\!\!,\
D_1\!=\!D_2\!=\!\left(\begin{array}{cc}
1&0\\
0&1
\end{array}\right)\!\!,
\end{equation*}
so that only the position of the sphere and the coil current are
measured through some noise. Speed is not observed.
It is worth mentioning that the system is not mean square
stable, so that the time horizon $T$ must be kept short for the
trajectory to stay close to the origin and keep the linearized model valid;
we slightly increase the horizons during simulations for academic purposes only.
\subsection{Markovian linear minimum mean squares estimator}
Fragoso and Costa proposed in \cite{FC10} the so-called Markovian linear minimum
mean squares estimator (LMMSE) for MJLS with finite state space Markov chains. Under Assumption~\ref{hyp:finite},
the equation of the filter is
\begin{eqnarray*}
d\hat{x}_{FC}(t)&=&A_{\theta(t)}\hat{x}_{FC}(t)dt\\
&&+K_{FC}(\theta(t),t)(dy(t)-C_{\theta(t)}\hat{x}_{FC}(t)dt),
\end{eqnarray*}
for $0\leq t\leq T$, with initial condition $\hat{x}_{FC}(0)=\mathbb{E}[x(0)]$ and gain matrices
\begin{equation*}
K_{FC}(i,t)=P_{FC}(i,t)C_{i}'(D_{i}D_{i}'\pi_i(t))^{-1},
\end{equation*}
where $\pi_i(t)=\mathbb{P}(\theta(t)=i)=(\pi(0)\exp(t\Lambda))_i$ and
$\{P_{FC}(i,t), 0\leq t\leq T\}$ satisfies the system of matrix differential equations
\begin{eqnarray*}
dP_{FC}(i,t)&=&\big(A_{i}P_{FC}(i,t)+P_{FC}(i,t)A_{i}'\\
&&+\sum_{j=1}^N P_{FC}(j,t)\Lambda_{ji}+E_{i}E_{i}'\pi_i(t)\\
&&-P_{FC}(i,t)C_{i}'(D_{i}D_{i}'\pi_i(t))^{-1}\\
&&\times C_{i}P_{FC}(i,t)\big)dt,\\
P_{FC}(i,0)&=&Var(x(0))\pi_i(0).
\end{eqnarray*}
The matrices $\{P_{FC}(i,t), 0\leq t\leq T, i\in\mathcal{S}\}$ and $\{K_{FC}(i,t), 0\leq t\leq T, i\in\mathcal{S}\}$ depend only on
the law of $\{\theta(t), 0\leq t\leq T\}$ and not on its current value.
Therefore they can be computed off line on a discrete time grid and
stored, but the LMMSE remains sub-optimal compared to the KBF.
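To make the off-line computation concrete, the following sketch integrates the coupled LMMSE Riccati equations above with an explicit Euler scheme, using the matrices of the magnetic suspension example ($A_1$, $A_2=A$, $C$, $E$, $D$, $\Lambda$, $\pi(0)$). The choice of an Euler scheme and of the step size is ours, not prescribed by the paper; a step finer than the $\delta t=10^{-4}$ grid is used here for stability of the explicit scheme.

```python
import numpy as np

# Matrices of the magnetic suspension example (A1: closed loop, A2 = A: failure mode)
A1 = np.array([[0.0, 1.0, 0.0], [1750.0, 0.0, -34.1], [4360.2, 104.2, -84.3]])
A2 = np.array([[0.0, 1.0, 0.0], [1750.0, 0.0, -34.1], [0.0, 0.0, -0.0383]])
A = [A1, A2]
C = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
E = np.array([[1.0, 0.2, -1.9], [-0.1, 1.4, -0.3], [0.1, 0.5, 1.0]])
D = np.eye(2)
Lam = np.array([[-20.0, 20.0], [0.1, -0.1]])

T, dt = 0.02, 1e-5
pi = np.array([0.999, 0.001])              # pi(0)
P = [pi[i] * np.eye(3) for i in range(2)]  # P_FC(i,0) = Var(x(0)) * pi_i(0)
DDinv = np.linalg.inv(D @ D.T)

for _ in range(int(T / dt)):
    new_P = []
    for i in range(2):
        # coupling term sum_j P_FC(j,t) Lambda_{ji}
        coupling = sum(P[j] * Lam[j, i] for j in range(2))
        dP = (A[i] @ P[i] + P[i] @ A[i].T + coupling + E @ E.T * pi[i]
              - P[i] @ C.T @ (DDinv / pi[i]) @ C @ P[i])
        new_P.append(P[i] + dt * dP)
    P = new_P
    pi = pi + dt * (pi @ Lam)              # d pi/dt = pi Lambda

# off-line gains K_FC(i, T) at the horizon
K = [P[i] @ C.T @ DDinv / pi[i] for i in range(2)]
```

The gains $K_{FC}(i,t)$ can be stored on the whole time grid within the same loop; only the final values are kept here for brevity.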
\subsection{Approximate filter by quantization}
We start with the quantized discretization of the inter-jump times $\{S_n\}$ of the Markov chain $\{\theta(t), 0\leq t\leq T\}$. We use the CLVQ algorithm described for instance in \cite{pages98}. Table~\ref{tab:quant error JT} gives the error $\mathbb{E}[|S_1-\widehat{S}_1|^2\ |\ \theta(0)=i]^{1/2}$ for $i=1,2$ computed with $10^6$ Monte Carlo simulations for an increasing number of discretization points. This illustrates the convergence of Theorem~\ref{th:quantize}: the error decreases as the number of points increases. The variance of the first jump time in mode $2$ is much higher than in mode $1$ which accounts for the different scales in the errors.
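To illustrate this first step, here is a batch (Lloyd-type) stand-in of ours for the online CLVQ algorithm of \cite{pages98}: it builds an $n$-point grid for an exponential inter-jump time and estimates the quantization error by Monte Carlo. The rate $20$ corresponds to the sojourn time in mode $1$ of the example; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 20.0                                  # jump rate out of mode 1 in the example
samples = rng.exponential(1.0 / lam, size=100_000)

def assign(x, grid):
    """Nearest-neighbor (Voronoi) assignment on a sorted 1D grid."""
    mid = 0.5 * (grid[1:] + grid[:-1])
    return np.searchsorted(mid, x)

def lloyd_quantize(x, n_points, n_iter=50):
    """Batch Lloyd iterations minimizing E[|S - S_hat|^2] over n-point grids."""
    grid = np.quantile(x, (np.arange(n_points) + 0.5) / n_points)
    for _ in range(n_iter):
        idx = assign(x, grid)
        counts = np.bincount(idx, minlength=n_points)
        sums = np.bincount(idx, weights=x, minlength=n_points)
        nonempty = counts > 0
        grid[nonempty] = sums[nonempty] / counts[nonempty]  # centroid update
        grid.sort()
    return grid

def quant_error(x, grid):
    """Monte Carlo estimate of E[|S - S_hat|^2]^(1/2)."""
    return np.sqrt(np.mean((x - grid[assign(x, grid)]) ** 2))

err10 = quant_error(samples, lloyd_quantize(samples, 10))
err50 = quant_error(samples, lloyd_quantize(samples, 50))
```

As in Table~\ref{tab:quant error JT}, the error decreases as the number of grid points increases.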
\begin{figure}[t]
\centerline{\includegraphics[height=5cm]{prcomp10bis.pdf}}
\caption{Pre-computed tree of solutions with $10$ grid points.}
\label{tree10}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[height=5cm]{prcomp50bis.pdf}}
\caption{Pre-computed tree of solutions with $50$ grid points.}
\label{tree50}
\end{figure}
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{ccc}
\hline
Number of grid points&Error for $\theta(0)=1$&Error for $\theta(0)=2$\\
\hline
10& 5.441$\times 10^{-3}$&1017$\times 10^{-3}$\\
50&1.585$\times 10^{-3}$&357.5$\times 10^{-3}$\\
100&0.753$\times 10^{-3}$&175.2$\times 10^{-3}$\\
500&0.173$\times 10^{-3}$&36.22$\times 10^{-3}$\\
1000&0.100$\times 10^{-3}$& 23.35$\times 10^{-3}$\\
\hline
\end{tabular}
\caption{Quantization error for the first jump time depending on the number of points in the discretization grid and the value of the starting point of the Markov chain.}
\label{tab:quant error JT}
\end{center}
\end{table}
The second step consists in solving the Riccati equation (\ref{eq Ric}) for all possible trajectories of $\{\theta(t), 0\leq t\leq T\}$ with inter-jump times in the quantization grids and up to the computation horizon $T=0.02$. Namely, we compute the trajectories $\{\widehat{P}_k(t), 0\leq t\leq T\}$. We chose a regular time grid with time step $\delta t=10^{-4}$. For technical reasons related to the selection of branches, the time horizon $T$ is added to each grid. One thus obtains a tree of pre-computed branches that are solutions of Eq.~(\ref{eq Ric}), the branching times being the quantized jump times.
Figures~\ref{tree10} and \ref{tree50} show the pre-computed trees of solutions component-wise for $10$ and $50$ points respectively in the quantization grids. Note the very different scales of the coordinates.
The number of grid points that are actually used (quantized points below the horizon $T$) are given in Table~\ref{tab:points below horizon} for each original quantization grid size, together with the resulting number of pre-computed branches.
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{cccr}
\hline
Number of &Points below &Points below&Number of\\
grid points&horizon&horizon&branches\\
&for $\theta(0)=1$& for $\theta(0)=2$&\\
\hline
10& 4&1&7\\
50&14&1&17\\
100&33&1&36\\
500&161&2&7763\\
1000&319&3&603784\\
\hline
\end{tabular}
\caption{Number of grid points actually used and corresponding number of pre-computed branches depending on the initial number of points in the discretization grid.}
\label{tab:points below horizon}
\end{center}
\end{table}
The number of pre-computed branches grows exponentially fast as more grid points are taken into account, and the time needed to pre-compute the branches grows accordingly. In this example, the number of points used in mode $2$ is low, therefore the number of branches remains tractable.
To compute the filtered trajectory in real time, one starts with the approximation of the solution of Eq. (\ref{eq Ric}). The first branch corresponds to the pre-computed branch starting at time $0$ from $\theta(0)$. When the first jump occurs, one selects the nearest neighbor of the jump time in the quantization grid and the corresponding pre-computed branch, and so on for the following jumps.
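The online selection step can be sketched as follows (a schematic of ours, with a toy dictionary of pre-computed branches keyed by tuples of quantized jump times):

```python
import numpy as np

def nearest_grid_point(grid, s):
    """Project an observed inter-jump time s onto the quantization grid."""
    grid = np.asarray(grid, dtype=float)
    return float(grid[np.argmin(np.abs(grid - s))])

def select_branch(grid, branches, path, s):
    """Quantize the new jump time and look up the pre-computed branch.

    path     : tuple of previously quantized jump times
    branches : dict mapping such tuples to pre-computed Riccati solutions
    """
    new_path = path + (nearest_grid_point(grid, s),)
    return new_path, branches[new_path]

# toy example: grid of quantized jump times and dummy branch payloads
grid = [0.002, 0.005, 0.010]
branches = {(0.005,): "P-branch after a jump near t=0.005",
            (0.005, 0.010): "P-branch after a second jump near t=0.010"}
path, branch = select_branch(grid, branches, (), 0.0047)   # first observed jump
path, branch = select_branch(grid, branches, path, 0.0092) # second observed jump
```

In the actual filter the payloads are the pre-computed solutions of Eq. (\ref{eq Ric}) on the regular time grid, and the lookup is performed at each observed jump of $\theta$.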
Figure \ref{ric-error} shows the mean of the relative error between the solution of Eq.~(\ref{eq Ric}) and its approximation (in matrix $2$-norm) for given numbers of points in the quantization grids and $10^5$ Monte Carlo simulations. Again, it illustrates how the accuracy of the approximation increases with the number of points in the quantization grids.
\begin{figure}[t]
\centerline{\includegraphics[height=4cm]{ErrP.pdf}}
\caption{Average relative error between the solution of Riccati equation and its approximation,
from top to bottom: blue: 50 points, red: 100 points, green: 500 points, black: 1000 points in the quantization grids.}
\label{ric-error}
\end{figure}
Finally, the real-time approximation of Eq.~(\ref{eq Ric}) is plugged into the filtering equations to obtain an approximate KBF.
Figure \ref{filter-error} shows the mean $L^2$ distance between the real KBF $\{\widehat{x}_{KB}(t), 0\leq t\leq T\}$ and its approximation $\{\widetilde{x}(t), 0\leq t\leq T\}$ following our procedure for an increasing number of points in the quantization grids and for $10^5$ Monte Carlo simulations.
\begin{figure}[t]
\centerline{\includegraphics[height=4cm]{ErrKA.pdf}}
\caption{$L^2$ norm of the difference between $\widehat{x}_{KB}$ and its quantized approximation $\widetilde{x}$, from top to bottom: blue: 50 points, red: 100 points, green: 500 points, black: 1000 points in the quantization grids.}
\label{filter-error}
\end{figure}
\subsection{Comparison of the filters}
For each filter, we ran $10^5$ Monte Carlo simulations and computed the mean of the following error between the real trajectory $\{x(t), 0\leq t\leq T\}$ and the filtered trajectory $\{\hat{x}(t), 0\leq t\leq T\}$ for each of the three filters presented above, the exact Kalman--Bucy filter being the reference:
\begin{equation*}
\int_0^T\Big(\big(x_1(t)-\hat{x}_1(t)\big)^2+\big(x_2(t)-\hat{x}_2(t)\big)^2+\big(x_3(t)-\hat{x}_3(t)\big)^2\Big) dt.
\end{equation*}
Table~\ref{tab:filter_error} gives this error for given numbers of points in the quantization grids. Of course only the error for the approximate filter changes with the quantization grids. Note that our approximate filter is very close to the KBF and performs better than the LMMSE with as few as $10$ points in the quantization grids, corresponding to $7$ pre-computed branches.
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{cccc}
\hline
Number of grid points&Error for &Error for &Error for \\
&KBF&approximate filter&LMMSE\\
\hline
10&3.9244&3.9634&3.9850\\
50&3.9244&3.9254&3.9850\\
100&3.9244&3.9246&3.9850\\
500&3.9244&3.9244&3.9850\\
1000&3.9244&3.9244&3.9850\\
\hline
\end{tabular}
\caption{Average error for the different filters depending on the number of points in the quantization grids,
considering horizon $T=0.02$.}
\label{tab:filter_error}
\end{center}
\end{table}
We also ran our simulations with longer horizons. The performance of the filters is given
in Table~\ref{tab:filter_errorLong} and illustrates that our filter still performs well with a longer horizon.
Note that the computation of the LMMSE becomes impossible from a horizon of $0.4$ onwards because the estimated
state reaches very high values very fast, and these are treated as infinity numerically.
From a horizon of $0.8$ onwards, all computations are impossible because the system is not mean square
stable, as explained before.
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{crrrrr}
\hline
$T$&Grid &Branches&Error for &Error for &Error for \\
& points&&KBF&approx. filter&LMMSE\\
\hline
0.1&10&12&376.3&425.6&812.5\\
0.1&50&110&376.3&379.1&812.5\\
0.1&100&3519&376.3&376.6&812.5\\
\hline
0.2&10&14&8597&10610&13260\\
0.2&50&2832&8597&9715&13260\\
\hline
0.3&10&14&2.325$\times 10^4$&4.893$\times 10^6$&3.023$\times 10^5$\\
0.3&50&11248&2.325$\times 10^4$&4.141$\times 10^6$&3.023$\times 10^5$\\
\hline
0.4&10&14&4.913$\times 10^4$&4.663$\times 10^{10}$&NaN\\
0.4&50&50049&4.913$\times 10^4$&2.102$\times 10^{10}$&NaN\\
\hline
\end{tabular}
\caption{Average error for the different filters depending on the horizon, the number of points in the quantization grids and the number of branches.}
\label{tab:filter_errorLong}
\end{center}
\end{table}
\section{Conclusion}
\label{sec-conclusion}
We have presented a filter for state estimation of sMJLS relying on
discretization
by quantization of the semi-Markov chain and solving a finite number of filtering
Riccati equations.
The difference between the approximated Riccati solution $\widetilde P(t)$
and the actual Riccati solution $P(t)$ has been studied
and we have shown in Theorem \ref{th:Lip} that it converges to zero on average
when the number of points in the discretization grid goes to infinity;
a convergence rate is also provided, allowing us to derive a convergence
rate for the gain matrices, see Corollary \ref{cor:ErrK}.
Based on this result, and on an upper bound for the conditional second moment of the KBF
that is derived in Lemma \ref{lem:X4}, we have obtained the main
convergence result in Theorem \ref{th:cv filter}, which implies
convergence to zero of $\mathbb{E}[|\widehat{x}_{KB}(t)-\widetilde x(t)|^2]$, so that
$\widetilde x(t)$ approaches $\widehat{x}_{KB}(t)$ almost surely as the number of
grid points goes to infinity.
Applications in which $\theta$ is not instantaneously observed can
also benefit from the proposed filter; however, it may not completely recover the
performance of the KBF, as explained in Remark \ref{rem-delayed-observation}.
The algorithm has been applied to a real-world system and performed
almost as well as the KBF with a small grid of $10$ points.
Although the proposed filter can be pre-computed, the number of
branches of the Riccati equation grows exponentially with the time horizon $T$,
making the pre-computation time too high in some cases. One exception comprises
systems with no more than one fast mode (high transition rates),
because in such a situation the slow modes do not branch much and the
number of branches grows in an almost linear fashion with $T$ as long as
the probability that the slow modes jump before $T$ remains small.
Examples of applications fitting this setup,
which can benefit from the proposed filter, are systems with
small probability of failure and quick recovery (the failure mode is fast),
or a variable number of permanent failures (the normal mode is fast),
with web-based control as a fertile field of applications.
For general systems, one possible way out of this cardinality issue
is to use a rolling-horizon scheme where the approximate gains are
pre-computed in small batches during the system operation and sent
to the controller memory. Another approach could be to quantize directly the sequence $\{S_k, P_k(S_k)\}$, thus keeping the number of branches fixed and allowing for general transition rate matrices and longer horizons in terms of the number of jumps. However this approach suffers from a curse of dimensionality, as the quantization error goes to zero at an increasingly slow rate as the dimension of the process increases, see Theorem~\ref{th:quantize}.
Future work will look into a rolling-horizon implementation scheme,
implementation issues and different compositions of the
KBF/LMMSE, for instance using time-delayed solutions of the KBF that can be computed during
the system operation as a measure for discarding unnecessary branches.
Alternative schemes for discretization/quantization and selection of
the appropriate
pre-computed solutions can be pursued, seeking to reduce the computational load
of the current algorithm while preserving the quality of the estimate.
\section*{Acknowledgment}
This work was supported by FAPESP Grant 13/19380-8,
CNPq Grants 306466/2010 and 311290/2013-2, USP-COFECUB
FAPESP/FAPs/INRIA/INS2i-CNRS Grant 13/50759-3, Inria Associate team CDSS and ANR Grant Piece ANR-12-JS01-0006.
\bibliographystyle{acm}
\section{Guiding principles}
\label{sec:principles}
This section elaborates on the main design goals, which are portability, versatility, safety and autonomy, and on the implications that they entail for an RPC-based detection setup. This section also summarizes the main technical choices of our current prototype; for more details, the reader is invited to read Refs.~\cite{Wuyckens2018,Basnet2020,Gamage:2021dqd}.
Motivated by the requirement of portability, we are using small scale (active area of $16\times 16~cm^2$) RPCs. This small size is an unconventional choice by the standards of (astro)particle physics and muography, as typical motivations for using RPCs include, in fact, their relative ease of construction for large-area detectors and their relatively low cost per area~\cite{MuographyBook}. However, there are precedents for small-area RPCs in medical physics~\cite{Blanco2006}, although those are not intended to be portable.
Portability also sets constraints on the weight of the complete setup. Currently, each RPC is hosted in a thick aluminum casing that weighs 6.5 kg. A complete standalone setup consisting of four identical RPCs is shown in Fig.~\ref{fig:detector} (A). The data acquisition system (DAQ) is integrated with the HV supply to the RPCs.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{our_muoscope.png}
\caption{\label{fig:detector} (A) Muoscope set-up consisting of four glass RPC layers and DAQ. (B) One of the RPCs inside its casing; it consists of 16 sensitive strips, hosted in an air-tight aluminum box.}
\end{figure}
Our project is not intended for a single type of application, therefore our setup must be versatile. In order to easily repurpose the setup for different types of measurements, we chose a modular design where the overall geometry is not fixed.
Each RPC is housed in an individual aluminium casing and the different casings are separated by spacers, in arrangements that can be adapted to various use cases, as elaborated in Refs~\cite{Moussawi2021,Gamage:2021dqd}.
For example, four RPCs can be arranged in two adjacent pairs, alternating in orthogonal orientations, such that even and odd RPCs measure, respectively, the X and Y coordinates of the muon's passage.
Ensuring safety in confined environments imposes additional constraints on gaseous detectors, which are typically operated with continuous gas flow from an external gas supply (mainly through attached gas bottles); and autonomy means that the detectors should run for extended periods without human intervention, so that gas refilling has to be as infrequent as possible (ideally only once, in the lab, before moving the setup to the place where it has to take data).
The problem of reducing the necessary gas flow has been studied, for example, in Refs.~\cite{procureur2020we,Assis:2020mvp,nyitrai2021toward}.
The aluminum casings that host our RPCs, shown open in Fig.~\ref{fig:detector} (B), are designed to be air-tight, allowing stable operation for several weeks. The rate of gas leakage in vacuum conditions was measured using helium to be 10$^{-9}$ mbar~l~s$^{-1}$~\cite{Wuyckens2018}.
Not needing gas bottles or gas flow allows us to use the muoscope in confined areas and also facilitates portability, by reducing weight and size of the overall setup.
\section{Current RPC prototype}
\label{sec:techdescription}
The gas mixture that constitutes the active detecting medium in our RPCs consists of R134a Freon (95.2\%), isobutane (4.5\%) and SF$_{6}$ (0.3\%), kept at a pressure slightly above (by $\sim$0.1 atm) the atmospheric one.
The use of other mixtures is a possibility for the future, also taking into account that R-134a and SF$_6$ are environmentally unfriendly, and intense R\&D is being devoted in the RPC community to the search for new mixtures with a better trade-off between detector performance and global warming potential~\cite{Abbrescia:2016xdh,Guida_2020}.
Glass sheets, 1~mm thick, are used as high-resistance parallel plates, and their exterior sides are painted with a semi-conductive coating that allows the high voltage (HV) to spread throughout the plate and produces a uniform electric field across the gas volume.
A uniform distance of 1.1~mm between the glass plates is obtained with nine round edge spacers made of polyether ether ketone (PEEK).
The uniformity of the semi-conductive coating is important for the performances of RPCs, and in particular when they are used for muography~\cite{MuographyBook}.
Therefore, a significant effort has been invested on this front.
At the beginning of this project, we spread the paint manually with a paint-roller~\cite{Wuyckens2018}, a cheap procedure that has two major drawbacks: it does not scale well, and it cannot ensure excellent uniformity (we observed variations of up to 200\% in surface resistivity).
Therefore, we produced a batch of glass plates where the paint was spread by serigraphy, whose variations in surface resistivity are now below 20\%~\cite{Basnet2020}. We have been monitoring their surface resistivity regularly for almost two years, finding so far a slow drift in time but no variation in uniformity, and no visible correlation with environmental parameters~\cite{Gamage:2021dqd}.
RPC signals are picked up by 16 copper strips, 0.9~cm wide and separated by a 0.1~cm gap, meaning a pitch of 1~cm.
Data have been so far acquired via two front-end boards (FEBs) originally developed for the RPCs of the CMS experiment~\cite{FEB1, FEB2}, which can handle 32 analog inputs channels each. It is to be noted that at the start of the project the choice of only 16 strips for our four planes was dictated by the availability of only two FEBs, and not by the intrinsic position resolution (which is known to be potentially better by large factors for RPCs~\cite{Blanco2006}).
Each channel consists of an amplifier with a charge sensitivity of 2 mV/fC, a discriminator, a monostable and a LVDS driver. The LVDS outputs of all the FEBs are connected to a System-on-Chip (SoC) module, which is installed on a carrier board with a wireless connection, to ensure autonomy also from the point of view of data transfer.
\section{Towards an improved, high-resolution next prototype}
\label{sec:nxtprtyp}
While the purpose of this first prototype was to gain construction and operating experience, the next prototype will have to pave the way towards our main aim, i.e., performing high-resolution muography. Reachable goals for RPCs are $O(1~mm)$ for intrinsic spatial resolution and $O(1~ns)$ for timing~\cite{MuographyBook}. In order to achieve these resolution goals, a significant increase in the number of sensitive units, and thus of electronic channels, will be required.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.75\textwidth]{Strip_pixels.png}
\caption{\label{fig:multiplex}An illustrative setup with 4 layers of strips (left) and one with 2 layers of pixels (right) providing the same x, y information at the same two positions in z.}
\end{figure}
On the hardware side, we are planning to switch from our current CMS chip to the MAROC chip~\cite{Barrillon:1091460}. The MAROC chip is much smaller than the CMS chip ($O(1~cm)$ vs $O(10~cm)$, respectively), which fits well with our overall design goal of easy portability; most importantly, the MAROC chip can also handle eight times as many electronic channels: 64 channels per MAROC chip compared with 8 channels per CMS chip. Furthermore, the new chip will enable us to exploit the timing information of the muon hits in our analysis of the muoscope data, so that our next prototype will have an opportunity to use timing information for better muon track selection as well as for background rejection. We already have access to the MAROC chip and are currently in the testing phase with the development kit, with the aim of designing a board dedicated and optimized for our muoscope.
Another important consideration for the new prototype is a possible switch from strips as sensitive units, used in the current version, to pixels (see Figure \ref{fig:multiplex}). The obvious advantage of this switch, keeping in mind the design goals discussed earlier, is that both the number of layers and the total weight of the overall setup are halved. Additionally, the total detector efficiency improves, since $\epsilon^{n/2} > \epsilon^{n}$ for $\epsilon < 1$, where $\epsilon$ is the efficiency of a single detector layer and $n$ the number of layers needed with strips. On the other hand, the total power consumption and the cost of the electronics could be negatively impacted by the switch: they scale linearly with the segmentation for the strip option, but quadratically for the pixel option. For the same spatial resolution, the total number of electronic channels required for the muoscope is $4\times N$ for strips compared to $2\times N^2$ for pixels, where $N$ is the segmentation along the X and Y directions (hence the number of strips in a 1D detector). For our current resolution and size, this means increasing the total number of electronic channels from 64 (strips) to 512 (pixels).
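The channel-count scaling above can be checked with a short script (an illustrative sketch; the function names are ours, and the layer counts of 4 strip layers vs 2 pixel layers follow Figure \ref{fig:multiplex}):

```python
# Compare readout-channel counts for strip vs pixel readout at equal
# spatial segmentation N. For the current prototype, N = 16.

def channels_strips(n, layers=4):
    """4 layers of 1D strips: N channels per layer."""
    return layers * n

def channels_pixels(n, layers=2):
    """2 layers of N x N pixels: N^2 channels per layer."""
    return layers * n ** 2

for n in (16, 32, 64):
    s, p = channels_strips(n), channels_pixels(n)
    print(f"N={n:3d}: strips={s:5d}  pixels={p:6d}  ratio={p / s:.1f}")
```

For $N=16$ this reproduces the 64 (strips) vs 512 (pixels) channel counts quoted in the text, and shows that the pixel-to-strip ratio grows linearly with $N$, which is why multiplexing becomes increasingly attractive at higher resolution.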
It is clear that the only major counter-argument to the pixel solution is the overall cost of the electronics. One way to address this issue is to read out several sensitive units (pixels, in this case) with a single electronic channel, a method known as multiplexing~\cite{procureur2013genetic}. To make the cost of the strip and pixel options equal, one has to multiplex 10 pixels per electronic channel. A brief feasibility study of a 2D (i.e., pixel) multiplexing scheme, with a worked-out example, is presented in the following section.
\section{Multiplexing}
\label{sec:mult}
In the context of our muoscope, multiplexing is to be done with a combination of hardware (i.e., how the pixels are connected) and software (i.e., optimal clustering of adjacent pixel signals). Even though multiplexing depends equally on both of these aspects, we are currently studying how the pixels should be connected while a thorough study of an optimal de-multiplexing technique will be performed in the near future.
Our current approach is to randomly generate possible multiplexing mapping schemes that fulfill relatively simple rules for the pixels' connections~\cite{ourPatentApp}. The only rule we impose is that the sets of multiplexed pixels that are connected together are evenly distributed in the plane and are never adjacent (including the edges). Further optimization of the hardware connections is to be carried out by studying multiple mapping schemes and selecting the one that generates the fewest spurious clusters after multiplexing.
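As an illustration, a rejection-sampling generator of such mappings for the $M=4$ case could look as follows (our own sketch, not the procedure of Ref.~\cite{ourPatentApp}; it enforces only the non-adjacency rule, interpreting adjacency as 8-connectivity including diagonals, and omits the even-distribution requirement):

```python
import random

def random_mapping(nx=16, ny=16, rng=None, max_tries=10000):
    """Randomly label a nx x ny pixel grid split into 2x2 sub-grids
    (multiplexing factor M = 4, hardcoded here), so that same-labelled
    pixels -- those wired to one readout channel -- are never adjacent,
    diagonals included. Each sub-grid carries every label exactly once,
    so violations can only occur across sub-grid borders."""
    rng = rng or random.Random(0)
    sx, sy = nx // 2, ny // 2
    labels = list(range(sx * sy))
    for _ in range(max_tries):
        grid = [[None] * nx for _ in range(ny)]
        for gy in (0, 1):
            for gx in (0, 1):
                perm = labels[:]
                rng.shuffle(perm)
                for k, lab in enumerate(perm):
                    grid[gy * sy + k // sx][gx * sx + k % sx] = lab
        if no_adjacent_duplicates(grid):
            return grid
    raise RuntimeError("no valid mapping found")

def no_adjacent_duplicates(grid):
    """Check that no two equal labels touch, edge- or corner-wise."""
    ny, nx = len(grid), len(grid[0])
    for y in range(ny):
        for x in range(nx):
            # check each unordered neighbour pair once: E, SW, S, SE
            for dy, dx in ((0, 1), (1, -1), (1, 0), (1, 1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx and grid[yy][xx] == grid[y][x]:
                    return False
    return True
```

Since border collisions are rare for random permutations, only a handful of rejection-sampling attempts are typically needed for this grid size.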
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{multiplexing.png}
\caption{\label{fig:gridmap} An example multiplexing scheme where a bigger grid of 16$\times$16 pixels is sub-divided, with double lines, into 4 grids each with dimensions 8$\times$8 pixels (i.e., multiplexing factor of 4). The numbers in the matrix represent the pixel labels. The pixels with the same labels from these 4 sub-matrices are to be connected together.}
\end{figure}
Generally, a multiplexing scheme is defined by a few main parameters. For our case, these parameters are listed below.
\begin{itemize}
\item Total number of pixels in the matrix ($N_{x}\times N_{y} = 16\times 16$);
\item Expected Cluster Size ($C_{x} \times C_{y} = 3\times3$);
\item Multiplexing factor ($M$): number of pixels connected to each other, hence number of sub-grids;
\item Total number of readout channels (N$_{r}$).
\end{itemize}
However, not all of these parameters are independent of each other. In particular, the total number of readout channels is simply the total number of pixels in the matrix divided by the multiplexing factor, $N_r = N_x N_y / M$.
As an example of how the pixel-connection part of the multiplexing process would look, we first studied a simple case with a matrix of 16 $\times$ 16 pixels (256 pixels in total) and a relatively mild multiplexing factor, $M=4$, which means 64 readout channels ($N_{r}$) in total. We follow a "divide and conquer" approach, dividing the main 16 $\times$ 16 matrix into 4 sub-matrices (each with 8 $\times$ 8 pixels). Following the adjacency rule above, we created a multiplexing mapping scheme such that the pixels with the same labels in the four sub-matrices are connected together. A representative example of such a mapping scheme for $M=4$ is shown in Figure \ref{fig:gridmap}. Since the arguments of cost and power consumption of the electronics demand $M\ge 8$ for pixels to become more convenient than strips, we have also studied the case $M=8$, where the main grid is divided into 8 sub-matrices, each with 4 $\times$ 8 pixels, and $N_{r}=32$.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{multplex_results_final.png}
\caption{\label{fig:multiplex2} Cluster size distributions comparing three different randomly-generated grid maps as well as various levels of noise for a single grid map (no noise, 1\% and 3\%). Top panels (A and B) and bottom panels (C and D) show results for multiplexing factors $M=4$ and $M=8$, respectively.}
\end{figure}
For de-multiplexing, we exploit the spill-over of the "real" signal over neighboring channels (as in e.g. Ref.~\cite{procureur2013genetic}). Our adjacency rule is meant to ensure that a set of adjacent pixels in the true hit location cannot correspond directly to another set of adjacent pixels in other sub-grids. However, accidental clusters of adjacent pixels may still be formed, especially with the help of stochastic noise.
It should also be noted that we do not take into account cases with multiple muons per event; however, these are a rare occurrence when the detector area is small as is our case.
We expect to be able to tune the geometrical parameters, and therefore the signal spill-over, such that a typical "real" signal spreads over the immediately adjacent pixels, so that a typical cluster consists of 3$\times$3 pixels. All possible single-muon signals (14$\times$14 = 196) were generated in the main grid and the resulting cluster-size data were produced. In Figure \ref{fig:gridmap}, an example 3$\times$3 signal in the first sub-matrix is shown in red cells, and the corresponding pixels with the same labels in the other sub-matrices are highlighted in green. For this signal, multiplexing generates 1 "accidental" cluster of size 3, 1 "accidental" cluster of size 2, and 22 "accidental" clusters of size 1, in addition to the 1 "real" cluster of size 9. The cluster sizes for all 196 possible signals, for three different grid maps, are compared for $M=4$ and $M=8$ in Figures \ref{fig:multiplex2}(A) and \ref{fig:multiplex2}(C), respectively.
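The cluster-size bookkeeping used above can be sketched with a simple flood fill over the fired pixels (our own illustrative code, not taken from our analysis software; it assumes 8-connectivity for adjacency, and the example coordinates are hypothetical):

```python
def cluster_sizes(fired, nx=16, ny=16):
    """Sizes (descending) of clusters of fired pixels, where pixels
    touching along an edge or a corner belong to the same cluster
    (8-connectivity; the exact adjacency convention is our assumption).
    `fired` is a set of (x, y) pixel coordinates."""
    fired = set(fired)          # work on a copy
    sizes = []
    while fired:
        stack = [fired.pop()]   # flood-fill one cluster
        size = 0
        while stack:
            x, y = stack.pop()
            size += 1
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (x + dx, y + dy)
                    if nb in fired:
                        fired.remove(nb)
                        stack.append(nb)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# A hypothetical event: one 3x3 "real" signal plus three ghost pixels,
# two of which happen to touch and form an "accidental" size-2 cluster.
real = {(x, y) for x in (2, 3, 4) for y in (2, 3, 4)}
ghosts = {(8, 8), (8, 9), (12, 1)}
print(cluster_sizes(real | ghosts))   # -> [9, 2, 1]
```

A size-based cut (e.g. keeping only the largest cluster) would select the "real" 3$\times$3 hit here; the noise studies below probe when this simple criterion breaks down.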
In order to examine the effect of stochastic noise on our multiplexing approach, we have also simulated, along with the 3 $\times$ 3 "real" signals, 1\% and 3\% (the latter being very pessimistic) noisy pixels, which for this grid size means roughly 3 and 8 extra noisy pixels, respectively, on average.
The resultant cluster size distributions for a specific grid map ("grid-map 01"), comparing 0\%, 1\% and 3\% noise cases for $M$ of both 4 and 8, are shown in Figures \ref{fig:multiplex2}(B) and \ref{fig:multiplex2}(D), respectively.
From Figures \ref{fig:multiplex2}(A) and (C), it can be clearly seen that the "accidental" clusters rapidly die out as the cluster size increases, dividing the cluster-size distributions into two distinct regions: "real" and "accidental" (as labelled in the figure). Although only three distinct grid maps are compared in the present study, this trend appears to be largely universal, as we observed it in all of the O(100) grid maps generated for this study.
However, increasing levels of stochastic noise would eventually lead to a situation where the "real" and "accidental" cluster regions begin to be indistinguishable.
To find out what level of noise can be tolerated by our method, we show the cluster-size distributions corresponding to 3\% (1\%) noise as blank magenta (shaded teal) histograms in Figures \ref{fig:multiplex2}(B) and \ref{fig:multiplex2}(D). The blurring between the "real" and "accidental" regions due to noise is more pronounced for $M=8$ than for $M=4$, as qualitatively expected. In comparison with the simpler no-noise cases, where no events with cluster size greater than 9 were seen, the tails of the cluster-size distributions after the addition of stochastic noise are longer, in some cases extending to size 16. These features show that a simple discrimination based on cluster size alone is not robust enough for offline de-multiplexing and begins to fail with the introduction of stochastic noise. Therefore, as a future development, we intend to make our de-multiplexing procedure exploit expectations about the cluster shape; this will demand a detailed simulation of the signal formation and of the cross-talk between channels.
Another important parameter that has not been explored in this study, but might be worth studying, is the ratio of the total grid size to the $M$ factor. Moreover, our current study uses an arbitrary "real" signal size. More grounded results require a realistic estimate of the "real" signal size, as well as of the level of noise expected with pixels, which will only be possible after the fabrication of a pixel-based readout board, already in the pipeline. A more thorough and systematic study with more aggressive multiplexing (factor $> 8$), also taking the cluster shape into account, will follow soon.
\section{Conclusion \& Outlook}
We reported on the current status of our project for the development of a portable, compact and versatile muon detection system for muography.
Our technological choices take into account the possibility that such a setup needs to be operated in a confined space, possibly with challenging logistics.
Our system, based on mini glass-RPC detectors, is intended to be low cost and portable, not only in terms of size and weight but also with respect to gas tightness and ease of transportation of the full setup, including electronics.
After gaining experience with our first prototype, and having addressed the problem of the uniformity of the resistive coating by the usage of serigraphy, we are laying the groundwork towards a second generation detector with state-of-the-art electronics. For our next prototype, we aim at better spatial resolution and at the introduction of 2D reading (i.e., switching from strips to pixels).
To prevent these new developments from leading to an explosion in the cost and power consumption of the electronics, we are developing a dedicated method for 2D multiplexing.
\section{Introduction}
\input{intro}
\input{bulk_text}
\section*{Acknowledgements}
\input{acknowledgements}
\bibliographystyle{unsrt}
\section{\label{sec:intro}Introduction}
Magnetic materials on lattices comprised of equilateral triangles continue to attract attention from both the experimental and theoretical viewpoint due to the richness of physical properties provided by geometric frustration~\cite{sadoc_mosseri_frustration_1999,diep_frustration_2005,moessner_ramirez-frustration_2006,lacroix_frustration_2011}.
Frustrated antiferromagnets are often characterized by non-collinear spin configurations, since the topology of the lattice forbids a conventional N\'eel order.
Among the many realizations of frustrated systems, kagome magnets with antiferromagnetic nearest-neighbor (nn) interactions have become a staple example of systems with macroscopic degeneracy in the ground state, giving rise to high sensitivity of various symmetry-breaking perturbations~\cite{Chalker_Holdsworth_Shender_kagome_1992_prl,Harris_Kallin_Berlinsky_kagome_1992_prb,Huse_Rutenberg_kagome_1992_prb,Balents_Fisher_Girvin_kagome_2002_prb,Fu_kagome_2015_science,Fujihala_kagome_2020_nature}.
Recently, a family of magnetic materials with AB-stacked kagome layer structure and a general formula $\mathrm{Mn}_3X$ have been experimentally shown to host the anomalous Hall effect (AHE) as well as anomalous Nernst effect (ANE)~\cite{Chen_Niu_MacDonald_ahe_2014_prl,Nakatsuji_Kiyohara_Higo_ahe_2015_nature,Kiyohara_ahe_2016_prap,Nayak_ahe_2016_science,Ikhlas_ane_2017_nature,Hong_ane_2020_prm}.
These discoveries prompted recent theoretical and experimental studies of the magnetic properties in $\mathrm{Mn}_3X$ magnets~\cite{Liu_Balents_gs_2017_prl,Park_gs_2018_nature_pub,Li_gs_2019_nature_comm,Reichlova_gs_2019_nature_comm,Soh_gs_2020_prb,Chen_gs_2020_prb,Zelenskiy_Monchesky_Plumer_Southern_2021_prb}.
Over the last decade, magnets with strong spin-orbit coupling (SOC) have been under intense investigation, motivated by an ongoing search for unconventional magnetic phases.
On the one hand, these include strongly correlated disordered states with large degeneracy in the ground states, such as various types of spin liquids~\cite{kitaev_spinliquid_2006,Ran_spinliquid_2007_prl,Balents_spinliquid_2010_nature,Yan_spinliquid_2011_science,Messio_spinliquid_2012_prl,Bauer_spinliquid_2014_nature_comm,Savary_Balents_2016_RPP,Takagi_spinliquid_2019_nature}.
On the other hand, ordered magnetic textures such as skyrmion lattices~\cite{Bogdanov_Hubert_skyrmions_1994_JMMM,Rossler_Bogdanov_Pfleiderer_skyrmions_2006_nature}, and multi-$\mathbf{Q}$ structures consisting of linear superpositions of non-collinear spin density waves~\cite{Hayami_frustrated_skyrmions_2021_prb,Leonov_Mostovoy_skyrmions_2015_nature}, have attracted increasing interest in the literature. These non-trivial magnetic orders often have nonzero scalar spin chirality, which serves as a source of emergent electromagnetic fields within the Berry phase formalism~\cite{Schulz_skyrmions_emergent_2012_nature,Everschor-Sitte_Sitte_the_2014_jap}, giving rise to important transport properties, such as the topological Hall effect (THE)~\cite{Neubauer_the_2009_prl,Bruno_the_2004_prl,Everschor-Sitte_Sitte_the_2014_jap,Kurumaji_the_2019_science,He_the_2022_acta_mater} and the spin Hall effect (SHE)~\cite{Hirsch_she_1999_prl,Chen_Byrnes_she_2019_prb}.
More recently, magnetic frustration was identified as one of the stabilizing factors for multi-$\mathbf{Q}$ spin configurations, leading to magnetic orders beyond those typically observed in chiral ferromagnets~\cite{Hayami_frustrated_skyrmions_2021_prb}.
In the case of $\mathrm{Mn}_3X$ compounds, both the THE and SHE have been observed experimentally, and studies have also established that both the AHE, and ANE, as well as the magnetic structure are strongly anisotropic, implying that the spin couplings beyond isotropic exchange are crucial for understanding the magnetic properties of these systems~\cite{Kiyohara_ahe_2016_prap,Nayak_ahe_2016_science,Zhang_ahe_anis_2017_prb}.
Previous studies have also established that different types of anisotropic interactions compete with each other leading to additional frustration~\cite{Liu_Balents_gs_2017_prl,Soh_gs_2020_prb,Chen_gs_2020_prb,Zelenskiy_Monchesky_Plumer_Southern_2021_prb}.
These facts motivate a systematic study of the magnetic ground state properties in an extended parameter space.
In our previous work in Ref.~\cite{Zelenskiy_Monchesky_Plumer_Southern_2021_prb} (hereafter referred to as Ref.~I), we derived a magnetic model for these magnetic compounds using general symmetry principles.
Apart from the typical exchange couplings, the symmetry-allowed terms consist of various anisotropic couplings, including Dzyaloshinskii-Moriya (DM), bond-dependent anisotropic exchange, and single-ion anisotropy (SIA).
These anisotropic interactions arise from the coupling of spins to the underlying lattice via SOC.
In Ref.~I, we have studied the ground state properties and spin wave excitations of this model relevant to the magnetism of $\mathrm{Mn}_3X$ systems.
In particular, this work provided a detailed analysis of the interplay between various anisotropic interactions and their effect on the static and dynamic properties of $\mathrm{Mn}_3X$ compounds.
We determined that both SIA and bond-dependent anisotropy compete with the DM interactions, leading to an induced magnetic moment and an excitation spectrum with broken six-fold symmetry.
However, we also found that these properties are extremely sensitive to both the signs and relative magnitudes of the anisotropic interactions.
In the present work, we investigate semi-classical ground state properties of hexagonal AB-stacked kagome systems for an extended range of magnetic interactions using a combination of analytical Luttinger-Tisza (LT) and numerical Monte-Carlo (MC) techniques.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{crystal_structure_mn3x.pdf}
\caption{(a) A sketch of the AB-stacked kagome lattice. The conventional unit cell defined by the lattice vectors $\mathbf{a}_1$ and $\mathbf{a}_2$ is shaded in blue, while the hexagon formed by the four sublattices is shaded in pink. (b) The three types of nn and nnn interactions appearing in (a). Solid black lines indicate the nn in-plane interactions, while the dashed red and green lines represent the nn and nnn out-of-plane interactions respectively ($\boldsymbol{\mathcal{A}}_i = \{J_i,D_i,A_i^{(z)},A_i^{(xy)}\}$). (c) An enlarged diagram of the unit cell convention used in this work. $\mathbf{S}_i$ label the spins on the six sublattices. Spins 1, 2, and 3 reside in layer A, while spins 4, 5, and 6 are in layer B.}
\label{fig:structure}
\end{figure}
To tackle the large parameter space of the magnetic model, we group the magnetic interactions based on the effective spin symmetry imposed by the relative strength of the SOC.
We determine three SOC symmetry regimes and present the corresponding group structure, along with the relevant irreducible representations (irreps).
Furthermore, for each of these symmetry regimes, we identify a set of self-duality transformations that reduce the number of independent points in the parameter space.
We find that in the weak SOC limit the magnetic Hamiltonian has the largest number of dualities which comprise a group with non-Abelian structure.
The numerical and analytical calculations reveal a variety of magnetic phases, including single-$\mathbf{Q}$, multi-$\mathbf{Q}$, as well as more complicated structures with delocalized structure factors in the Brillouin zone.
We parameterize the magnetic phases and study the elementary spin wave excitations.
Finally, we analyze the effects of bond-dependent exchange and SIA on the magnetic phases stabilized by exchange and DM interactions, and discuss the implications for the case of $\mathrm{Mn}_3X$ systems.
The rest of this paper is organized as follows.
In Sec.~\ref{sec:model_methods} we introduce the magnetic model and briefly outline the analytical and numerical methods.
In Sec.~\ref{sec:symmetry} we identify the connection between the strength of the SOC and the effective symmetry of the magnetic system.
Next, in Sec.~\ref{sec:self_duality}, the self-duality transformations are introduced and derived for each SOC symmetry case.
The magnetic ground state phase diagrams for models with exchange and DM interactions are presented in Sec.~\ref{sec:phase_diagram}.
The distinct types of magnetic order are described in Sec.~\ref{sec:phases_structure}, and the corresponding spin-wave excitation spectra are given in Sec.~\ref{sec:spin_waves}.
The effects of anisotropic interactions are discussed in Sec.~\ref{sec:anisotropy}.
Finally, Section~\ref{sec:conclusions} is devoted to concluding remarks and a summary of the results.
\section{\label{sec:model_methods}Model and Methods}
\subsection{\label{subsec:model} Model}
The spin Hamiltonian for $\mathrm{Mn}_3\mathrm{X}$-type AB-stacked kagome lattice systems has been derived in Ref.~I from symmetry principles and contains four different types of interactions:
\begin{align}
&\mathcal{H} = \mathcal{H}_{J} + \mathcal{H}_{D} + \mathcal{H}_{A} + \mathcal{H}_{K} \label{eq:magnetic_hamiltonian}\\
&\mathcal{H}_{J} = \frac{1}{2}\sum_{\mathbf{r}\mathbf{r'}}\sum_{ij} J_{ij}(\mathbf{r}-\mathbf{r}') \mathbf{S}_{i}(\mathbf{r})\cdot\mathbf{S}_{j}(\mathbf{r'})\notag\\
&\mathcal{H}_{D} = \frac{1}{2}\sum_{\mathbf{r}\mathbf{r'}}\sum_{ij} D_{ij}(\mathbf{r}-\mathbf{r}')\mathbf{\hat{z}}\cdot \left(\mathbf{S}_{i}(\mathbf{r})\times\mathbf{S}_{j}(\mathbf{r'})\right)\notag\\
&\mathcal{H}_{A} = \frac{1}{2}\sum_{\mathbf{r}\mathbf{r'}}\sum_{ij}\sum_\alpha A_{ij\alpha}(\mathbf{r}-\mathbf{r'})\left(\mathbf{\hat{n}}_{i\alpha}\cdot\mathbf{S}_{i}(\mathbf{r})\right)\left(\mathbf{\hat{n}}_{j\alpha}\cdot\mathbf{S}_{j}(\mathbf{r'})\right),\notag\\
&\mathcal{H}_{K} = \sum_\mathbf{r}\sum_{i}\sum_\alpha K_{\alpha} \left(\mathbf{\hat{n}}_{i\alpha}\cdot \mathbf{S}_{i}(\mathbf{r})\right)^2.\notag
\end{align} Here, $\mathcal{H}_{J}$ is the isotropic Heisenberg exchange, $\mathcal{H}_{D}$ is the DM interaction, $\mathcal{H}_{K}$ is the SIA, and $\mathcal{H}_{A}$ is the symmetric anisotropic exchange interaction.
Sum indices $\mathbf{r}$, $\mathbf{r'}$ label unit cells, $i, j\in\{1,...,6\}$ label atoms in each unit cell (Fig.~\ref{fig:structure} (c)), and $\alpha \in \{x,y,z\}$ labels the spin vector components.
Vectors $\mathbf{n}_{i\alpha}$ represent local anisotropy axes and can be written as
\begin{equation}
\mathbf{\hat{n}}_{ix} = \begin{bmatrix} \cos{\alpha_i} \\ \sin{\alpha_i} \\ 0\end{bmatrix},
\mathbf{\hat{n}}_{iy} = \begin{bmatrix}-\sin{\alpha_i} \\ \cos{\alpha_i} \\ 0\end{bmatrix},
\mathbf{\hat{n}}_{iz} = \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix},
\end{equation}
where $\alpha_i$ give the angle of the anisotropy axes with respect to the global $x$-direction.
In this paper, we will restrict our attention to in- and out-of-plane nearest-neighbor (nn) and out-of-plane next-nearest-neighbor (nnn) interactions shown in Fig.~\ref{fig:structure} (a).
The interaction labels are based on $\mathrm{Mn}_3X$ bond distances, with index 1 labeling out-of-plane nn interactions, 2 and 3 labeling the in-plane nn, and 4 and 5 labeling the out-of-plane nnn.
In this work, we will ignore the breathing anisotropy~\cite{Chen_gs_2020_prb}, in order to simplify the analysis, in which case $\boldsymbol{\mathcal{A}}_2=\boldsymbol{\mathcal{A}}_3$, and $\boldsymbol{\mathcal{A}}_4=\boldsymbol{\mathcal{A}}_5$ (Fig.~\ref{fig:structure}).
It has recently been shown that effective spin Hamiltonians of the form~(\ref{eq:magnetic_hamiltonian}) can be derived through perturbation theory from a lattice Kondo model with SOC~\cite{Hayami_Yukitoshi_kondo_2018_prl,Yutaka_Ugadawa_Motome_kondo_2012_prl,Ghosh_kondo_2016_prb}.
The isotropic exchange terms in this case correspond to the Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions, while the remaining anisotropic spin interactions originate from the SOC.
However, while the DM interactions depend linearly on the strength of the SOC, the SIA and anisotropic exchange depend on it quadratically~\cite{Moriya_dmi_1960_pr,Hayami_Yukitoshi_kondo_2018_prl}.
Therefore, within the perturbation theory, we generally expect the magnitudes of the latter two types of anisotropic interactions to be smaller than that of the DM interaction.
It is sometimes useful to write the Hamiltonian in the general quadratic form:
\begin{equation}
\mathcal{H} = \frac{1}{2}\sum_{\mathbf{r}\mathbf{r}'} \sum_{ij} \mathbf{S}^T_{i}(\mathbf{r}) \boldsymbol{\mathcal{A}}_{ij} (\mathbf{r}-\mathbf{r}') \mathbf{S}_{j}(\mathbf{r}'),
\label{eq:hamiltonian_quadratic}
\end{equation}
where the coupling matrix $\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})$ between two inequivalent spins is given by
\begin{equation}
\boldsymbol{\mathcal{A}}_{ij} (\boldsymbol{\delta}) = \begin{bmatrix}V_{ij}^{+}(\boldsymbol{\delta})& W_{ij}^{+}(\boldsymbol{\delta})& 0 \\ W_{ij}^{-}(\boldsymbol{\delta})& V_{ij}^{-}(\boldsymbol{\delta}) & 0 \\ 0 & 0 & X_{ij}(\boldsymbol{\delta})\end{bmatrix},
\label{eq:general_coupling_matrix}
\end{equation}
with
\begin{align}
V_{ij}^{\pm}(\boldsymbol{\delta}) &= J_{ij}(\boldsymbol{\delta}) \pm A^{(xy)}_{ij}(\boldsymbol{\delta})\cos(\bar{\alpha}_{ij}),\notag\\
W_{ij}^{\pm}(\boldsymbol{\delta}) &= \pm D_{ij}(\boldsymbol{\delta}) + A^{(xy)}_{ij}(\boldsymbol{\delta})\sin(\bar{\alpha}_{ij}),\notag\\
X_{ij}(\boldsymbol{\delta}) &= J_{ij}(\boldsymbol{\delta}) + A^{(z)}_{ij}(\boldsymbol{\delta}).\notag
\end{align}
Here, $\boldsymbol{\delta}=\mathbf{r}-\mathbf{r}'$, $\bar{\alpha}_{ij} = \alpha_i+\alpha_j$, $A_{ij}^{(xy)}$ and $A_{ij}^{(z)}$ are the two types of anisotropic exchange parameters allowed by symmetry.
The part of the coupling matrix that corresponds to the single-ion anisotropy is given by
\begin{equation}
\boldsymbol{\mathcal{A}}_{ii} (0) = K^+\boldsymbol{\mathbb{I}} + \begin{bmatrix}K^- \cos 2\alpha_i & K^- \sin 2\alpha_i & 0 \\ K^- \sin 2\alpha_i & -K^- \cos 2\alpha_i & 0 \\ 0 & 0 & K_Z\end{bmatrix},
\label{eq:SIA_coupling_matrix}
\end{equation}
where $\boldsymbol{\mathbb{I}}$ is a $3\times 3$ identity matrix, $K^+ = K_x+K_y$, $K^- = K_x-K_y$, and $K_Z = 2K_z - K^+$.
Note that the first term in this expression is just a constant energy shift, and therefore $K^+$ can be ignored in further calculations.
\subsection{\label{subsec:methods_LT}Luttinger-Tisza method}
To provide an initial characterization of the classical ground states, we define lattice Fourier transforms of the spin vectors as
\begin{equation}
\mathbf{S}_i(\mathbf{r}) = \frac{1}{\sqrt{N}}\sum_\mathbf{q} \mathbf{S}_i(\mathbf{q}) e^{-i\mathbf{q}\cdot\mathbf{r}},
\label{eq:spin_FT}
\end{equation}
where $\mathbf{q}$ are the wave-vectors restricted to the first Brillouin zone, and $\mathbf{S}(\mathbf{q})$ are the Fourier amplitudes.
From Eq.~(\ref{eq:magnetic_hamiltonian}) and (\ref{eq:hamiltonian_quadratic}), the total energy of the system is
\begin{equation}
\mathcal{H} = \frac{1}{2}\sum_{\mathbf{q}} \sum_{ij} \mathbf{S}^T_{i}(\mathbf{q}) \boldsymbol{\mathcal{A}}_{ij} (\mathbf{q}) \mathbf{S}_{j}(-\mathbf{q}),
\label{eq:hamiltonian_q}
\end{equation}
where the Fourier transform of the magnetic interactions are given by
\begin{equation}
\boldsymbol{\mathcal{A}}_{ij} (\mathbf{q}) = \sum_{\boldsymbol{\delta}} \boldsymbol{\mathcal{A}}_{ij} (\boldsymbol{\delta}) e^{-i\mathbf{q}\cdot\boldsymbol{\delta}}.
\end{equation}
The true ground state is obtained by minimizing Eq.~(\ref{eq:hamiltonian_q}), subject to the local normalization constraints $|\mathbf{S}_i(\mathbf{r})| = 1$ for all spins in the system.
This strong constraint significantly complicates the problem and often makes it impossible to solve.
Instead, the Luttinger-Tisza (LT) method~\cite{Luttinger_Tisza_1946_pr} replaces the local constraints by a global constraint, whereby the sum of all spin magnitudes is set to be equal to the number of spins.
This simplification allows one to recast the energy minimization in the form of an eigenvalue problem,
\begin{equation}
\sum_{j} \boldsymbol{\mathcal{A}}_{ij} (\mathbf{q}) \mathbf{S}_{j}(\mathbf{q}) = \varepsilon(\mathbf{q})\mathbf{S}_{i}(\mathbf{q})
\end{equation}
Under ideal circumstances, the smallest eigenvalue $\varepsilon_\text{LT}(\mathbf{q})$ and the corresponding eigenvector give the ground state of the magnetic system.
However, this method often produces unphysical solutions for systems with strong anisotropic interactions and multiple sublattices~\cite{Zaliznyak_Zhitomirsky_LT_2003_arxiv,Maximov_Chernyshev_2019_prx}.
As a result, we used the LT method to determine the approximate locations of the phase boundaries, as well as the lower bounds on the ground state energies to guide the numerical calculations.
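As a minimal illustration of the method (a toy 1D chain with a single sublattice, not the kagome model of this paper; the function names are ours), the LT "eigenvalue" reduces to the scalar Fourier transform of the couplings, and its minimum reproduces the well-known spiral condition $\cos q^{*} = -J_1/(4J_2)$ for $J_2 > J_1/4$:

```python
import numpy as np

def lt_minimum(j1, j2, nq=4001):
    """Luttinger-Tisza sketch for a 1D chain with nearest-neighbor
    coupling j1 and next-nearest-neighbor coupling j2. With a single
    sublattice the coupling 'matrix' A(q) is the scalar
    eps(q) = 2 j1 cos(q) + 2 j2 cos(2q); return its minimizing q and
    the minimum value (the lower bound on the ground-state energy
    per spin)."""
    q = np.linspace(-np.pi, np.pi, nq)
    eps = 2 * j1 * np.cos(q) + 2 * j2 * np.cos(2 * q)
    k = np.argmin(eps)
    return q[k], eps[k]

q_star, e_min = lt_minimum(1.0, 0.5)
print(q_star)   # close to +-2*pi/3, i.e. cos(q*) = -j1/(4 j2)
```

For a single sublattice the optimal LT eigenvector can always be realized by a normalized spiral, so here the bound is tight; it is precisely the multi-sublattice, anisotropic case of the text where the LT solution can become unphysical.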
\subsection{\label{subsec:methods_MC}Monte Carlo}
To identify the ground state magnetic configurations, we utilized classical Monte Carlo (MC) simulations, which were carried out using standard local heat-bath updates.
To resolve the individual phases, we have used system sizes ranging from $6^3$ to $24^3$ unit cells (1296 to 82944 spins respectively) and between $10^4$ and $10^6$ MC steps.
In each simulation, the temperature is reduced down to $T\approx 10^{-6}$, to ensure energy convergence.
All phases presented in this work have $Q_z=0$, meaning that every unit cell along the $c$-axis has exactly the same magnetic structure.
This fact allowed us to determine the phase boundaries using smaller system sizes (between $6\times 6\times 2$ and $18\times 18\times 2$ unit cells).
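A single heat-bath move for a classical Heisenberg spin can be sketched as follows (an illustrative implementation of the standard algorithm, not code from our simulations; with the conventions of Eq.~(\ref{eq:hamiltonian_quadratic}), the per-site energy is $\mathbf{S}_i\cdot\mathbf{H}_i$, so the spin is drawn from $P(\mathbf{S})\propto e^{-\beta\,\mathbf{S}\cdot\mathbf{H}}$):

```python
import numpy as np

def heat_bath_spin(h_field, beta, rng):
    """Draw a unit spin from P(S) ~ exp(-beta S.H) for a classical
    Heisenberg spin in the local field H (numpy Generator `rng`).
    The polar angle relative to the preferred axis -H/|H| is sampled
    by inverting the CDF of P(cos t) ~ exp(beta |H| cos t)."""
    h = np.linalg.norm(h_field)
    if h < 1e-12:                       # no field: uniform direction
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)
    a = beta * h
    u = rng.random()
    # numerically stable inverse-CDF sample of cos(theta)
    cos_t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * a)) / a
    cos_t = max(-1.0, cos_t)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * rng.random()
    # orthonormal frame (e1, e2, axis) around the preferred axis
    axis = -h_field / h
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(axis, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    return cos_t * axis + sin_t * (np.cos(phi) * e1 + np.sin(phi) * e2)
```

Because the new direction is drawn directly from the local Boltzmann distribution, every proposal is accepted, which is what makes heat-bath updates efficient at the very low temperatures used here.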
The MC data is Fourier transformed to obtain the spin structure factor
\begin{equation}
S(\mathbf{q}) = \frac{1}{6}\sum_{ij} \big\langle \mathbf{S}_{i}(\mathbf{q})\cdot\mathbf{S}_{j}(-\mathbf{q})\big\rangle e^{i\mathbf{q}\cdot(\mathbf{r}_i-\mathbf{r}_j)},
\end{equation}
where $\mathbf{r}_i$ are the positions of atoms inside of the unit cell.
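As an illustration of this step, the structure factor of a single-sublattice layer can be computed with an FFT (a simplified sketch that drops the six-sublattice sum and basis phase factors of the formula above; the function name is ours):

```python
import numpy as np

def structure_factor(spins):
    """S(q) for a single-sublattice L x L layer of 3-component unit
    spins (array of shape (L, L, 3)). The FFT is normalized by L so
    that sum_q S(q) equals the number of spins for unit-length spins,
    and a fully ordered pattern gives a Bragg peak of height L^2."""
    L = spins.shape[0]
    sq = np.fft.fft2(spins, axes=(0, 1)) / L
    return np.sum(np.abs(sq) ** 2, axis=-1)

# Example: a Neel-like pattern S_z = (-1)^(x+y) produces a single
# Bragg peak at q = (pi, pi), i.e. array index (L/2, L/2).
L = 8
spins = np.zeros((L, L, 3))
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
spins[..., 2] = (-1.0) ** (x + y)
sf = structure_factor(spins)
print(sf[L // 2, L // 2])   # -> 64.0 (= L^2)
```

The same logic, extended to the six-site basis and the phase factors $e^{i\mathbf{q}\cdot(\mathbf{r}_i-\mathbf{r}_j)}$, is what underlies the single-$\mathbf{Q}$ versus multi-$\mathbf{Q}$ identification discussed later.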
\subsection{\label{subsec:methods_dynamics}Spin waves}
The dynamics of the magnetic system can be described by the Landau-Lifshitz equation
\begin{equation}
\frac{\text{d}\mathbf{S}_i(\mathbf{r},t)}{\text{d}t} = \mathbf{H}_{i}(\mathbf{r},t)\times \mathbf{S}_i(\mathbf{r},t),
\label{eq:LL_equation}
\end{equation}
where $\mathbf{H}_{i}(\mathbf{r})$ is the effective field at each site
\begin{equation}
\mathbf{H}_{i}(\mathbf{r},t) = \sum_{\boldsymbol{\delta}} \sum_j \boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta}) \mathbf{S}_{j}(\mathbf{r}+\boldsymbol{\delta},t).
\end{equation}
In this work, we will look for solutions of the linearized form of Eq.~(\ref{eq:LL_equation}) which correspond to the low-energy spin wave excitations.
This is done by first changing into a local coordinate system where the local $z$-components are aligned with the ground state spin configuration.
Note that as long as this coordinate transformation is described by a local rotation $\mathbf{U}_i(\mathbf{r})$, the dynamic evolution of the local spin components can be written in the same form as Eq.~(\ref{eq:LL_equation}), replacing $\mathbf{S}_i(\mathbf{r},t)$ and $\mathbf{H}_i(\mathbf{r},t)$ with $\widetilde{\mathbf{S}}_i(\mathbf{r},t) = \mathbf{U}_i(\mathbf{r})\mathbf{S}_i(\mathbf{r},t)$ and $\widetilde{\mathbf{H}}_i(\mathbf{r},t) = \mathbf{U}_i(\mathbf{r})\mathbf{H}_i(\mathbf{r},t)$ respectively.
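Schematically (in our notation; the full treatment may differ in detail), the linearization amounts to expanding the local-frame spins around the ground state and keeping only terms linear in the transverse deviations,
\begin{equation}
\widetilde{\mathbf{S}}_i(\mathbf{r},t) \approx \begin{bmatrix} \delta S^{x}_i(\mathbf{r},t) \\ \delta S^{y}_i(\mathbf{r},t) \\ 1 \end{bmatrix},
\qquad
\delta S^{x,y}_i(\mathbf{r},t) \propto e^{i\left(\mathbf{q}\cdot\mathbf{r} - \omega t\right)},
\end{equation}
so that the local-frame form of Eq.~(\ref{eq:LL_equation}) reduces to a linear eigenvalue problem for the spin-wave frequencies $\omega(\mathbf{q})$.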
\section{\label{sec:symmetry}Spin symmetry}
When studying the properties of a magnetic model, it is important to take a proper account of the symmetries that leave the Hamiltonian invariant.
Apart from the space group transformations imposed by the underlying crystalline lattice, magnetic systems also include symmetries associated with spin rotations and reflections.
The most common spin symmetry is the time reversal, $T$, which must be broken in order to establish magnetic order.
A combination of crystallographic group operations with the time reversal operator leads to \textit{magnetic point and space groups}.
The addition of this one operator significantly extends the number of symmetrically distinct systems: in three dimensions there are 1651 magnetic space groups compared to 230 ``regular'' crystal space groups~\cite{Glazer_Burns_space_groups_book_2013,bradley_cracknell_space_groups_book_2010}.
In materials with large SOC, the spins are typically pinned to the lattice, meaning that for each lattice transformation there is a corresponding spin transformation.
The symmetry group of the Hamiltonian is then the \textit{paramagnetic group} which is a direct product
\begin{equation}
\mathcal{G}_\text{SOC} = \mathcal{G}_L\otimes Z_2^{(T)},
\end{equation}
where $ \mathcal{G}_L$ is the space group and $Z_2^{(T)} = \{E,T\}$.
However, in the limit of decoupled spin and orbital degrees of freedom one has a magnetic system with isotropic Heisenberg exchange interactions which are invariant under all global spin rotations.
In fact, as was noted by Brinkman and Elliott, there are many instances of magnetic systems where the spins are at least partially decoupled from the lattice~\cite{Brinkman_Elliot_Roger_Peierls_spin_groups_1966_prsl,Brinkman_Elliot_spin_groups_1966_jap,Brinkman_spin_groups_1967_jap}.
The symmetry of these systems is then described by the \textit{spin point and space groups}~\cite{Litvin_Opechowski_spin_groups_1974_physica,Litvin_spin_groups_1977_acsa} which are typically much larger than the corresponding magnetic groups.
Although the effects of the extended spin symmetry operations have previously been considered almost exclusively in the context of either isotropic spins or single-ion anisotropy, their importance in the intermediate cases, which include DM and anisotropic exchange interactions, remains relatively unexplored and has only begun to attract interest in recent years~\cite{Corticelli_Moessner_McClarty_spin_groups_2022_prb,Liu_Liu_spin_groups_2022_prx}.
For more information about the spin space groups we refer the reader to reference~\cite{Corticelli_Moessner_McClarty_spin_groups_2022_prb} which provides an excellent review of the subject.
In compounds with $\mathrm{Mn}_3X$-like structure, depending on the strength of the SOC, one can identify three distinct cases for magnetic models depicted in Fig.~\ref{fig:SOC_symmetry_diagram}.
Each case corresponds to a different group of spin symmetries.
In this section, we present the symmetry analysis of these three cases by deriving the corresponding spin groups and determining the resulting irreps for $\mathbf{Q}=0$.
Note that the analysis presented here is for classical magnetic moments (rotations in SO(3)), but could be readily extended to quantum spin operators (rotations in SU(2)).
The details of some derivations can be found in the [\textbf{Supplemental Material}].
\subsection{\label{subsec:symmetry_decoupled}Decoupled case}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.95\textwidth]{SOC_symmetry_diagram_V2.pdf}
\caption{A diagram illustrating the effects of SOC on the symmetry of the spin Hamiltonian. In $\mathrm{Mn}_3X$ systems, we identify three SOC limits resulting in distinct symmetry groups: decoupled, weak SOC, and intermediate SOC. These correspond, in the same order, to isotropic exchange, XY anisotropy, and Ising anisotropy (top panel). The total spin group of the Hamiltonian and the corresponding irreps are indicated by the colored blocks. The positive and negative $z$-components of the spins are indicated by the filled circles and pluses respectively, and the blue and red colors of the spin components indicate parallel and antiparallel out-of-plane nnn (labels ``g'' and ``u'' respectively). There are four irreps in the decoupled limit, which are first split into ten irreps in the weak SOC limit and then further split into twelve irreps in the intermediate SOC limit. The irreps with the out-of-plane spin components (bottom row) are unchanged going from the weak to the intermediate SOC limit, and therefore have the same labels. The remaining planar irreps are labeled using two labels corresponding to the weak and intermediate limits.}
\label{fig:SOC_symmetry_diagram}
\end{figure*}
As mentioned previously, when the spin and orbital degrees of freedom are completely decoupled, the magnetic interactions correspond to the isotropic exchange, $\mathcal{H} = \mathcal{H}_J$, meaning that the spin Hamiltonian is invariant with respect to pure space transformations (lattice site permutations) and global spin rotations by an arbitrary angle.
The crystal symmetries pertaining to this work form a space group P6$_3/mmc$, which we denote as $\mathrm{D}_{6h}^{(L)}$ (indicating also the point group), and the spin rotations combined with the time reversal symmetry form a group $\text{SO}^{(S)}(3)\otimes Z_2^{(T)}$.
Since the elements in these two groups commute, the full group of the isotropic exchange Hamiltonian is simply
\begin{equation}
\mathcal{G}_J = \mathrm{D}_{6h}^{(L)}\otimes \mathrm{SO}^{(S)}(3)\otimes Z_2^{(T)}.
\end{equation}
As a result, the irreps of $\mathcal{G}_J$ are obtained by taking a direct product of the irreps of spin and lattice degrees of freedom.
The former is related to the set of spherical harmonics with $l=1$ and has dimension 3, while the latter depends on the periodicity of the magnetic structure.
For $\mathbf{Q}=0$, the symmetries of $\mathrm{D}_{6h}^{(L)}$ reduce to the point group, and the irrep decomposition becomes $A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}$.
Therefore, there are four irreps in the decomposition of magnetic states: two of dimension 3 ($T_g$, $T_u$), and two of dimension 6 ($Q_g$, $Q_u$), as illustrated in Fig.~\ref{fig:SOC_symmetry_diagram}.
The two triplets correspond to the collinear configurations, while the two six-dimensional irreps are related to the 120 degree configurations.
The labels ``g'' and ``u'' are used to indicate the parity of the irreps under the spatial inversion, as per usual group theory notation.
\begin{table*}[t]
\centering
\resizebox{0.8\textwidth}{!}{\begin{tabular}{c|r r r r r r r r r r r r}
$\mathcal{G}_D$ & $C_1(\phi)$ & $C_6(\phi)$ & $C_3(\phi)$ & $C_2(\phi)$ & $C_2'$ & $C_2''$ & $I(\phi)$ & $IC_6(\phi)$ & $IC_3(\phi)$ & $IC_2(\phi)$ & $IC_2'$ & $IC_2''$ \\
\hline
$E_g^{(nm)}$ & $c^{(0)}_n$ & $c^{(m)}_n$ & $c^{(2m)}_n$ & $c^{(3m)}_n$ & 0 & 0 & $c^{(0)}_n$ & $c^{(m)}_n$ & $c^{(2m)}_n$ & $c^{(3m)}_n$ & 0 & 0 \\
$E_u^{(nm)}$ & $c^{(0)}_n$ & $c^{(m)}_n$ & $c^{(2m)}_n$ & $c^{(3m)}_n$ & 0 & 0 &-$c^{(0)}_n$ &-$c^{(m)}_n$ & -$c^{(2m)}_n$ & -$c^{(3m)}_n$ & 0 & 0
\end{tabular}}
\caption{Characters of the two-dimensional irreps of group $\mathcal{G}_D$ that do not also appear in group $\mathrm{D}_{6h}\otimes Z_2^{(T)}$. For simplicity, the elements containing the time reversal operator are not included in the table. Here, each class $C_k(\phi)$ has two elements, $\{C_z(\pm \phi,\pm \phi_k)\}$, corresponding to simultaneous rotations around the $z$-axis of the spins by an angle $\phi$ and of the lattice by an angle $\phi_k = \frac{2\pi}{k}$. $C_2'$ and $C_2''$ include simultaneous 180 degree rotations of the lattice (same as in the $\mathrm{D}_{6h}$ point group) and of the spins about axes in the $xy$-plane. The second half of the classes is obtained by combining the first half with the spatial inversion operator. The non-zero characters are defined as $c_n^{(m)} = 2\cos\left(n\phi+\frac{m\pi}{3}\right)$, where $n \in \mathbb{Z}\setminus\{0\}$, $m \in \mathbb{Z}_6$.}
\label{tab:GD_character_table}
\end{table*}
\subsection{\label{subsec:symmetry_weakly_coupled}Weak coupling case}
As discussed in Sec.~\ref{sec:model_methods}, when the SOC is small but non-zero, it is reasonable to assume that the DM interactions are the dominant type of anisotropy in the system, giving $\mathcal{H}\approx\mathcal{H}_J + \mathcal{H}_D$.
The addition of DM coupling to the spin Hamiltonian significantly complicates the symmetry analysis of the model.
However, it is still possible to determine the structure of the corresponding spin group as well as all irreps.
We also note that the SIA and bond-dependent interactions that couple the $z$-components of spins do not change the symmetry of this spin group.
It can be shown that a spin rotation applied to a DM coupling between two spins gives
\begin{align}
D\mathbf{\hat{z}}\cdot[\mathbf{S}_i'(\mathbf{r})\times\mathbf{S}_j'(\mathbf{r}')] &= D\mathbf{\hat{z}}\cdot[(\mathbf{M}\mathbf{S}_i(\mathbf{r}))\times(\mathbf{M}\mathbf{S}_j(\mathbf{r}'))]\notag\\
&= D(\mathbf{M}^T\mathbf{\hat{z}})\cdot[\mathbf{S}_i(\mathbf{r})\times\mathbf{S}_j(\mathbf{r}')],
\end{align}
where $\mathbf{M}$ is the rotation matrix.
Therefore, all rotations that leave $\mathbf{\hat{z}}$ invariant belong to the symmetry group of DMI.
This constitutes a group of axial rotations in spin space $\mathrm{SO}^{(S)}(2)$, implying XY-anisotropy.
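A quick numerical illustration of this constraint (NumPy assumed, with $D=1$ for simplicity): an axial rotation leaves the DM term unchanged, while a rotation that tilts $\mathbf{\hat{z}}$ does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot(axis, phi):
    """Rotation matrix about a unit axis (Rodrigues formula)."""
    axis = np.asarray(axis, dtype=float)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * (K @ K)

z = np.array([0.0, 0.0, 1.0])
Si, Sj = rng.normal(size=3), rng.normal(size=3)
dm = lambda a, b: z @ np.cross(a, b)   # D z . (S_i x S_j), with D = 1

Mz = rot([0, 0, 1], 0.9)               # axial rotation: a symmetry
assert np.isclose(dm(Mz @ Si, Mz @ Sj), dm(Si, Sj))

Mx = rot([1, 0, 0], 0.9)               # tilts the z-axis: not a symmetry
assert not np.isclose(dm(Mx @ Si, Mx @ Sj), dm(Si, Sj))

# Generic identity for det(M) = 1: z.(M S_i x M S_j) = (M^T z).(S_i x S_j)
assert np.isclose(dm(Mx @ Si, Mx @ Sj), (Mx.T @ z) @ np.cross(Si, Sj))
```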
The complications arise from the fact that not all lattice permutations leave the DM interaction invariant.
In particular, transformations that include $C_2$ rotations around axes parallel to the kagome layers, as well as the corresponding reflections, reverse the direction of the bonds and thereby flip the sign of the DM vector.
In order for these to become proper symmetry operations, they must be combined with the corresponding spin rotations/reflections.
Since the spin and lattice operations are now coupled and do not necessarily commute with each other, we can no longer write the total group of the Hamiltonian as a direct product of spin and lattice symmetry groups.
Nevertheless, the group of DM coupling can then be written as a \textit{semidirect} product
\begin{equation}
\mathcal{G}_D = \left(\mathrm{C}_{6h}^{(L)}\otimes \mathrm{SO}^{(S)}(2)\right)\rtimes \mathrm{C}_2^{(SL)}\otimes Z_2^{(T)},
\end{equation}
where $\mathrm{C}_{6h}^{(L)}$ is the group of lattice symmetries that leave DM interaction invariant, and $\mathrm{C}_2^{(SL)}$ has one non-trivial element $C_2^S C_2^L$ that rotates both the lattice and spin components around the $x$-axis.
The derivation of the group structure and irreps of $\mathcal{G}_D$ is given in the [\textbf{Supplemental Material}].
The irreps of $\mathcal{G}_D$ include all irreps of a regular $\mathrm{D}_{6h}\otimes Z_2^{(T)}$ magnetic group, and an infinite number of two-dimensional irreps, as presented in table~\ref{tab:GD_character_table}.
Intuitively, one can expect that the continuous axial rotational symmetry would separate the $z$-components of the spins from the planar components, while leaving the latter degenerate.
The decomposition of a $\mathbf{Q}=0$ magnetic structure consists of ten irreps, four of which involve only the $z$-components of spins ($A_{2g}\oplus E_{2g}\oplus B_{1u}\oplus E_{1u}$), and six two-dimensional planar irreps ($E_g^{(10)}\oplus E_g^{(12)}\oplus E_g^{(14)}\oplus E_u^{(11)}\oplus E_u^{(13)}\oplus E_u^{(15)}$), as shown in Fig.~\ref{fig:SOC_symmetry_diagram}.
Note that the order parameters corresponding to the two-dimensional out-of-plane irreps ($E_{2g}$ and $E_{1u}$) do not have fixed norms and therefore by themselves cannot be observed in a classical system~\cite{Essafi_Benton_Jaubert_duality_2017_prb}.
The labels $E_a^{(nm)}$ encode the parity of the magnetic order parameters (subscript $a$), the transformation properties under spin rotations (index $n$), and the coupling between the spin and spatial transformations (index $m$).
In the case of the planar irreps, the $m$ index can also be viewed as the ``winding'' of the spins around the hexagon.
\subsection{\label{subsec:symmetry_strongly_coupled}Intermediate coupling case}
Finally, when the SOC is sufficiently strong, the SIA and bond-dependent anisotropies become important, effectively reducing the symmetry of the spin Hamiltonian to the magnetic group $\mathrm{D}_{6h}\otimes Z_2^{(T)}$.
As noted before, only those anisotropic interactions involving the planar spin components explicitly break the global axial rotation symmetry, forcing the spins to align with the local anisotropy axes $\mathbf{\hat{n}}_{i\alpha}$.
Therefore, these interactions are inherently Ising-like.
This anisotropy splits the $E_g^{(14)}$ and $E_u^{(11)}$ configurations each into two singlets, while leaving the degeneracy of the remaining irreps unchanged (Fig.~\ref{fig:SOC_symmetry_diagram}).
\section{\label{sec:self_duality}Self-duality transformations}
In order to systematically describe the classical ground state properties of a magnetic model, one must address the problem of the dimensionality of the parameter space.
In the general case considered in this work, there are more than a dozen independent parameters, making a complete computational analysis of the phase diagram prohibitively expensive.
As a result, it is necessary to determine the means of reducing the parameter space.
The most common approach is to focus on a particular physical example (\textit{e.g.} a family of compounds) where the ranges of the coupling constants are approximately known from either the experimental data or from \textit{ab initio} calculations.
This approach allows one to bound the values of the parameters, and potentially even ignore some of them.
Although this method is extremely useful for explaining the properties of known compounds, it may provide very limited information for describing the functionality of novel compounds, since the relevant parts of the parameter space may fall far beyond the explored subspace.
Moreover, even after the relevant parameter ranges are identified, the dimensionality of the search space may be quite large and still require extensive computations.
For example, in the weak SOC limit, the magnetic model considered in this work still contains 4--5 independent interactions.
Another approach, often neglected in the literature, is to determine hidden relationships between the models with different coupling constants.
It is often true that the parameters in a given model are not completely independent, and one can determine a set of transformations that map the Hamiltonian onto itself, while changing the values of the coupling constants.
Such transformations are referred to as the \textit{self-duality} transformations and are the subject of this section.
We note that while the two methods described here are different in nature, one can and should use them together to achieve a systematic yet physically relevant description of a spin model.
In this work, we constrain the values of the coupling constants, in particular those originating from the SOC, by referring to the experimental and numerical results for the $\mathrm{Mn}_3X$ compounds~\cite{Soh_gs_2020_prb,Chen_gs_2020_prb,Park_gs_2018_nature_pub}.
Self-duality transformations have played an important role in statistical physics, an important example being a Kramers-Wannier duality that relates the ordered and paramagnetic phases in the two-dimensional Ising model on a square lattice~\cite{Kramers_Wannier_duality_1941_pr,Onsager_ising_1944_pr,Wannier_duality_1945_rmp,Wegner_duality_ising_1971_jmp,Wu_Wang_duality_1976_jmp,Savit_duality_1980_rmp}.
Self-duality maps provide a natural formulation of a renormalization flow and have therefore been used in studies of critical phenomena.
More recently, a different class of self-dual transformations has been derived for Heisenberg-Kitaev models on honeycomb and triangular lattices~\cite{Chaloupka_Khaliullin_duality_honeycomb_2015_prb,Kimchi_Vishwanath_duality_lattices,Maximov_Chernyshev_2019_prx}.
These transformations have been referred to in the literature as the Klein duality, since they form a group isomorphic to the Klein group.
The main interest in the self-duality maps has been the search for accidental degeneracy points, where strongly anisotropic systems at times display full rotational symmetry~\cite{Chaloupka_Khaliullin_duality_honeycomb_2015_prb,Maximov_Chernyshev_2019_prx}.
Self-duality in two-dimensional kagome layers has been considered before in~\cite{Essafi_Benton_Jaubert_duality_2016_nature_comm,Essafi_Benton_Jaubert_duality_2017_prb}.
There, it was used to draw connections between different models that support spin liquid phases.
In this section, we determine the relevant self-duality transformations for Hamiltonian~(\ref{eq:magnetic_hamiltonian}).
Since the number of possible self-dualities depends on the types of magnetic interactions in the model, we separate the discussion into three parts corresponding to the three types of spin Hamiltonians discussed in Sec.~\ref{sec:symmetry} (see Fig.~\ref{fig:SOC_symmetry_diagram}).
\subsection{\label{subsec:duality_derivation} Self-duality as a permutation of spin invariants}
For the purposes of this work, we define a self-duality transformation as a simultaneous transformation of the spin variables and model parameters that leaves the Hamiltonian unchanged:
\begin{equation}
\begin{split}
\mu: \{\mathbf{S}_{i}(\mathbf{r});\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})\} \longrightarrow \{\widetilde{\mathbf{S}}_{i};\boldsymbol{\widetilde{\mathcal{A}}}_{ij}(\boldsymbol{\delta})\}, \\
\mathcal{H}\big(\{\mathbf{S}_{i};\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})\}\big) = \mathcal{H}\big(\{\widetilde{\mathbf{S}}_{i};\boldsymbol{\widetilde{\mathcal{A}}}_{ij}(\boldsymbol{\delta})\}\big).
\end{split}
\label{eq:duality_definition}
\end{equation}
The transformation in Eq.~(\ref{eq:duality_definition}) is implied to include all spins in the system, as well as all symmetry-allowed coupling constants.
We assume that the spin transformations can be expressed as a site-dependent linear operation $\widetilde{\mathbf{S}}_i(\mathbf{r}) = \mathbf{M}_i^T(\mathbf{r})\mathbf{S}_i(\mathbf{r})$, where $\mathbf{M}_i(\mathbf{r})$ is an orthogonal matrix.
Then,
\begin{equation}
\boldsymbol{\widetilde{\mathcal{A}}}_{ij}(\mathbf{r}-\mathbf{r}') = \mathbf{M}_i^T(\mathbf{r})\boldsymbol{\mathcal{A}}_{ij}(\mathbf{r}-\mathbf{r}') \mathbf{M}_j(\mathbf{r}').
\end{equation}
We require that the matrices $\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})$ and $\boldsymbol{\widetilde{\mathcal{A}}}_{ij}(\boldsymbol{\delta})$ only differ in the values of the coupling parameters.
Note that matrices $\mathbf{M}_i(\mathbf{r})$ are not necessarily unique: we can define $\widetilde{\mathbf{M}}_i(\mathbf{r}) = \mathbf{R}\mathbf{M}_i(\mathbf{r})$ where $\mathbf{R}$ is a symmetry operation that leaves $\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})$ invariant.
To demonstrate the relationship between the symmetry of the system and the number of self-duality transformations, we first note that any quadratic spin Hamiltonian of the form (\ref{eq:hamiltonian_quadratic}) can be written as a sum of bilinear spin invariants:
\begin{equation}
\mathcal{H} = \sum_\lambda \mathcal{A}^{(\lambda)} B^{(\lambda)},
\end{equation}
where $B^{(\lambda)}$ are the invariants, and $\mathcal{A}^{(\lambda)}$ are the corresponding coupling constants.
One can show~[\textbf{Supplemental Material}] that the bilinear spin invariants can be calculated by squaring the symmetry-adapted order parameters corresponding to the irreps in the decomposition of the spin structure (see Sec.~\ref{sec:symmetry}).
The result can be written as
\begin{equation}
B^{(\lambda)} = B_{kl}^{(\Gamma)} = \mathbf{S}_k^{(\Gamma)}\cdot\mathbf{S}_l^{(\Gamma)},
\end{equation}
where $\mathbf{S}_k^{(\Gamma)}$ is the symmetry-adapted order parameter corresponding to irrep $\Gamma$, and $k, l$ label different order parameters belonging to $\Gamma$.
The number of components in this vector is equal to the dimensionality of the $\Gamma$.
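As a toy illustration of this construction (a two-site example, not the $\mathrm{Mn}_3X$ cell): with the even/odd order parameters $\mathbf{F}=\mathbf{S}_1+\mathbf{S}_2$ and $\mathbf{L}=\mathbf{S}_1-\mathbf{S}_2$, the exchange invariant $\mathbf{S}_1\cdot\mathbf{S}_2$ is a combination of their squares.

```python
import numpy as np

rng = np.random.default_rng(2)
S1, S2 = rng.normal(size=3), rng.normal(size=3)

# Symmetry-adapted combinations for a two-site toy model:
# F is even and L is odd under the permutation S1 <-> S2.
F = S1 + S2
L = S1 - S2

# The exchange invariant S1.S2 is a combination of squared order
# parameters, so a quadratic Hamiltonian is a sum of |OP|^2 terms.
assert np.isclose(S1 @ S2, 0.25 * (F @ F - L @ L))
```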
It can be shown that a transformation of the spins that results in a permutation of the invariants,
\begin{equation}
\mu\left(B^{(\lambda)}\right) = \sum_{\lambda'} P_{\lambda\lambda'}B^{(\lambda')} = B^{(\mu(\lambda))},
\end{equation}
satisfies our definition of self-duality since
\begin{equation}
\mu\left(\mathcal{H}\right) = \sum_\lambda \mathcal{A}^{(\lambda)} B^{(\mu(\lambda))} = \sum_\lambda \widetilde{\mathcal{A}}^{(\lambda)} B^{(\lambda)}.
\end{equation}
Here, $P$ is the permutation matrix that exchanges the indices $\lambda$ and $\mu(\lambda)$, and
\begin{equation}
\widetilde{\mathcal{A}}^{(\lambda)} = \mu^{-1}\left(\mathcal{A}^{(\lambda)}\right).
\end{equation}
At the same time, a permutation of invariants occurs when we permute the order parameters, and possibly change the sign of some of the components:
\begin{equation}
\mu:S_{k\alpha}^{(\Gamma)} \longrightarrow \pm S_{k'\alpha}^{(\Gamma')},
\end{equation}
where $\alpha$ labels the components of the order parameters.
We therefore look for the spin transformations $\mathbf{M}_i(\mathbf{r})$ that correspond to permutations of the order parameters.
Although this is by no means a rigorous derivation, the self-dualities determined this way are sufficient to describe most of the interesting properties observed in this paper \footnote{We note that up to now the discussion has been completely general and is applicable to any spin system described by a set of symmetries. The derivation of the general conditions for self-duality goes beyond the scope of this work and is the subject of an ongoing investigation.}.
In the remainder of this section we present the relevant duality transformations in the context of the three SOC cases discussed in the last section.
As a final note, we point out that the set of all self-duality maps resulting from the permutations of the symmetry-adapted order parameters forms a group.
This can be deduced from the fact that we only allow orthogonal matrices as the local transformations of spins.
This fact significantly simplifies our search, since the new transformations can be obtained by combining together those already identified in the analysis.
\subsection{\label{subsec:duality_J}Decoupled case}
In the decoupled limit, the coupling matrices are diagonal in the spin components:
\begin{equation}
\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta}) = J_{ij}(\boldsymbol{\delta}) \boldsymbol{\mathbb{I}},
\end{equation}
where $\boldsymbol{\mathbb{I}}$ is an identity matrix.
As a result, there is a single non-trivial self-duality transformation, which exchanges the order parameters that are symmetric and antisymmetric under the spatial inversion:
\begin{equation}
\gamma^{(-1)} : \begin{cases} T_g \longleftrightarrow T_u,\\ Q_g \longleftrightarrow Q_u. \end{cases}
\end{equation}
The corresponding group of spin transformations can be written as:
\begin{equation}
\mathbf{M}_i^{(g)} = \begin{cases}g\boldsymbol{\mathbb{I}}& \text{if } i\in\{1,2,3\},\\ \phantom{-}\boldsymbol{\mathbb{I}}& \text{if } i\in\{4,5,6\},\end{cases}
\end{equation}
where $g=\pm 1$.
$\mathbf{M}_i^{(+1)}(\mathbf{r})$ is the identity, and $\mathbf{M}_i^{(-1)}(\mathbf{r})$ flips all spins in layer A while keeping layer B unchanged.
While this transformation does not affect the in-plane interactions, the out-of-plane couplings change sign:
\begin{equation}
\gamma^{(-1)} : \begin{cases} J_1 \longrightarrow -J_1,\\ J_2 \longrightarrow \phantom{-}J_2,\\ J_4 \longrightarrow -J_4. \end{cases}
\end{equation}
Thus, $\gamma^{(-1)}$ provides a map between models with ferro- and antiferromagnetic out-of-plane interactions.
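A one-bond numerical check of this map (the coupling values are illustrative; $J_{AB}$ denotes a generic inter-layer bond and $J_{AA}$ an intra-layer one):

```python
import numpy as np

rng = np.random.default_rng(3)
Sa, Sa2, Sb = rng.normal(size=(3, 3))   # two layer-A spins, one layer-B spin

J_AB, J_AA = -0.7, 1.2                  # generic inter- and intra-layer couplings

E = J_AB * (Sa @ Sb) + J_AA * (Sa @ Sa2)

# gamma^(-1): flip every layer-A spin and negate the inter-layer coupling;
# intra-layer terms pick up two minus signs and are therefore unaffected
E_dual = (-J_AB) * ((-Sa) @ Sb) + J_AA * ((-Sa) @ (-Sa2))
assert np.isclose(E, E_dual)
```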
\subsection{\label{subsec:duality_JD}Weak coupling case}
Next, consider a model with weak SOC.
The coupling matrix between two spins~(\ref{eq:general_coupling_matrix}) can then be simply written as
\begin{equation}
\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta}) = \begin{bmatrix} J_{ij}(\boldsymbol{\delta}) & D_{ij}(\boldsymbol{\delta}) & 0\\
-D_{ij}(\boldsymbol{\delta}) & J_{ij}(\boldsymbol{\delta}) & 0\\
0 & 0 & J_{ij}(\boldsymbol{\delta}) + A_{ij}^{(z)}(\boldsymbol{\delta})\end{bmatrix}.
\end{equation}
In some cases, the anisotropic interactions $A_{ij}^{(z)}(\boldsymbol{\delta})$ must be formally included in order to define proper self-duality transformations.
However, as will become clear in the next sections, this formality is of little practical significance, since the majority of the phases observed in this work are restricted to the $xy$-plane.
Many useful properties of self-duality relating to the spin structures and phase diagrams therefore remain exact even if $A_{ij}^{(z)}(\boldsymbol{\delta})$ is ignored: the $z$-components of the spins are decoupled from the planar components (Fig.~\ref{fig:SOC_symmetry_diagram}), and so the duality transformations decouple as well.
We first focus on the permutations of the four out-of-plane order parameters.
There is a single non-trivial duality transformation corresponding to flipping the $z$-components in the A layer.
The group of transformations is then written as
\begin{equation}
\mathbf{M}_i^{(\eta)} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \eta_i \end{bmatrix},
\end{equation}
where
\begin{equation}
\eta_i = \begin{cases}\eta & \text{if } i\in\{1,2,3\},\\ 1& \text{if } i\in\{4,5,6\},\end{cases}
\end{equation}
and $\eta = \pm 1$.
The parameters are mapped according to
\begin{equation}
\mu^{(-1)} : \begin{cases} J_i &\longrightarrow \phantom{-}J_i,\\ A_1^{(z)} &\longrightarrow -A_1^{(z)}-2J_1,\\ A_2^{(z)} &\longrightarrow \phantom{-}A_2^{(z)},\\ A_4^{(z)} &\longrightarrow -A_4^{(z)}-2J_4. \end{cases}
\end{equation}
Next, we identify the transformations that permute the six planar order parameters.
As discussed in the previous section, these are labeled by an integer $m$ which represents the six-fold winding number.
Therefore, the first natural choice for a spin transformation is a local rotation written as
\begin{equation}
\mathbf{M}^{(m)}_i = \begin{bmatrix} \cos\theta_i^{(m)} &-\sin\theta_i^{(m)} & 0\\
\sin\theta_i^{(m)} & \phantom{-}\cos\theta_i^{(m)} & 0\\
0 & 0 & 1\end{bmatrix},
\label{eq:matrix_dual_m}
\end{equation}
where
\begin{equation}
\theta_i^{(m)} = \frac{\pi ml_i}{3},
\label{eq:duality_angles}
\end{equation}
$m\in\mathbb{Z}_6$, and $l_i$ labels the position of atom $i$ on the hexagon of the unit cell, counted in the counterclockwise direction:
\begin{equation}
\{l_1,l_2,l_3,l_4,l_5,l_6\} = \{3,1,5,0,4,2\}.
\end{equation}
However, there is another group of global transformations that leads to a distinct duality:
\begin{equation}
\mathbf{M}^{(\varepsilon)} = \begin{bmatrix} \varepsilon & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\end{bmatrix},
\end{equation}
with $\varepsilon=\pm 1$.
Note that the elements of $\mathbf{M}_i^{(m)}$ and $\mathbf{M}^{(\varepsilon)}$ in general do not commute.
We use elements $\mathbf{M}^{(\varepsilon)}\mathbf{M}_i^{(m)}$ to define the self-duality transformations $\mu_m^{(\varepsilon)}$, which transform the coupling constants according to
\begin{equation}
\mu_m^{(\varepsilon)}:
\begin{dcases}
J_1 \longrightarrow J_1\cos{\left(\frac{\pi m}{3}\right)} - D_1\varepsilon\sin{\left(\frac{\pi m}{3}\right)},\\
D_1 \longrightarrow J_1\sin{\left(\frac{\pi m}{3}\right)} + D_1\varepsilon\cos{\left(\frac{\pi m}{3}\right)},\\
J_2 \longrightarrow J_2\cos{\left(\frac{2\pi m}{3}\right)} - D_2\varepsilon\sin{\left(\frac{2\pi m}{3}\right)},\\
D_2 \longrightarrow J_2\sin{\left(\frac{2\pi m}{3}\right)} + D_2\varepsilon\cos{\left(\frac{2\pi m}{3}\right)},\\
J_4 \longrightarrow J_4(-1)^m.
\end{dcases}
\label{eq:duality_weak_SOC}
\end{equation}
In addition, if $\widetilde{J}_i$ is the new exchange coupling constant, then
\begin{equation}
\mu_m^{(\varepsilon)}: A_i^{(z)}\longrightarrow A_i^{(z)} + J_i -\widetilde{J}_i.
\end{equation}
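As a consistency check (not from the original text), the coupling matrix can be transformed directly with the local rotations and compared against the parameter map. For a bond whose hexagon labels differ by one step the relative angle is $\pm\pi m/3$; the sign choice below reproduces the $\varepsilon=+1$ branch (the opposite orientation gives $\varepsilon=-1$).

```python
import numpy as np

def coupling(J, D, Az):
    """Coupling matrix with a planar J/D block plus a zz part J + Az."""
    return np.array([[J, D, 0.0], [-D, J, 0.0], [0.0, 0.0, J + Az]])

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

J, D, Az, m = 0.9, 0.4, -0.2, 2
phi = np.pi * m / 3                  # relative angle for a one-step bond

# Transform with the local rotations (theta_i = 0 here; the relative
# angle theta_j - theta_i = -phi reproduces the eps = +1 branch)
A_dual = Rz(0.0).T @ coupling(J, D, Az) @ Rz(-phi)

J_new = J * np.cos(phi) - D * np.sin(phi)
D_new = J * np.sin(phi) + D * np.cos(phi)
Az_new = Az + J - J_new              # the zz element J + Az is untouched

assert np.allclose(A_dual, coupling(J_new, D_new, Az_new))
```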
One can then prove the following relations:
\begin{align}
&\mu_0^{(-1)} \mu_m^{(+1)} \mu_0^{(-1)} = \mu_{-m}^{(+1)},\\
&\mu_m^{(+1)}\mu_n^{(+1)} = \mu_{m+n}^{(+1)},\\
&\mu_m^{(-1)}\mu_m^{(-1)} = \mu_0^{(+1)},
\end{align}
where the index $m$ is understood to be cyclic (defined modulo 6).
These relations define the structure of a dihedral group $\mathrm{D}_6$ with $\mu_0^{(+1)}$ serving as the identity element.
The $\mu_m^{(+1)}$ transformations in this case are equivalent to rotations and $\mu_0^{(-1)}$ is the reflection operation.
Since $\mu^{(\eta)}$ and $\mu_m^{(\varepsilon)}$ commute, the most general duality transformation is written as $\mu^{(\eta)}\mu_m^{(\varepsilon)}$, leading to a group structure equivalent to $\mathrm{D}_{6h}$.
Thus, we obtain a very interesting situation where the group of self-dualities is non-Abelian and is isomorphic to the point group of the underlying lattice.
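The dihedral relations above can be checked numerically by composing the parameter maps of Eq.~(\ref{eq:duality_weak_SOC}) (a sketch assuming NumPy; the accompanying $A^{(z)}$ map is omitted for brevity):

```python
import numpy as np

def mu(m, eps, params):
    """Action of mu_m^(eps) on (J1, D1, J2, D2, J4), Eq. (duality_weak_SOC)."""
    J1, D1, J2, D2, J4 = params
    c1, s1 = np.cos(np.pi * m / 3), np.sin(np.pi * m / 3)
    c2, s2 = np.cos(2 * np.pi * m / 3), np.sin(2 * np.pi * m / 3)
    return np.array([J1 * c1 - D1 * eps * s1, J1 * s1 + D1 * eps * c1,
                     J2 * c2 - D2 * eps * s2, J2 * s2 + D2 * eps * c2,
                     J4 * (-1) ** m])

p = np.array([0.3, -0.8, 1.1, 0.5, -0.4])   # arbitrary test parameters

for m in range(6):
    # mu_0^(-1) mu_m^(+1) mu_0^(-1) = mu_{-m}^(+1): reflection conjugates rotation
    assert np.allclose(mu(0, -1, mu(m, +1, mu(0, -1, p))), mu(-m, +1, p))
    # mu_m^(-1) mu_m^(-1) = identity: reflections are involutions
    assert np.allclose(mu(m, -1, mu(m, -1, p)), p)
    for n in range(6):
        # mu_m^(+1) mu_n^(+1) = mu_{m+n}^(+1): rotations compose additively
        assert np.allclose(mu(m, +1, mu(n, +1, p)), mu(m + n, +1, p))
```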
\subsection{\label{subsec:duality_anis}Intermediate coupling case}
When the SOC is sufficiently strong, the coupling matrix $\boldsymbol{\mathcal{A}}_{ij}(\boldsymbol{\delta})$ takes the form of Eqs.~(\ref{eq:general_coupling_matrix}) and (\ref{eq:SIA_coupling_matrix}).
The symmetries of the Hamiltonian are reduced to the paramagnetic group, as seen in Fig.~\ref{fig:SOC_symmetry_diagram}.
This has no effect on the out-of-plane order parameters, and the $\mu^{(\eta)}$ are still valid self-duality maps.
However, the two-dimensional $E_g^{(14)}$ and $E_u^{(11)}$ planar irreps each split into two one-dimensional irreps ($\{B_{1g}, B_{2g}\}$ and $\{A_{1u},A_{2u}\}$ respectively).
As a result, the dualities obtained from permutations of these irreps with other two-dimensional irreps are no longer exact.
Out of the twelve elements of $\mu_m^{(\varepsilon)}$, only four remain: $\mu_0^{(+1)}$, $\mu_1^{(-1)}$, $\mu_3^{(+1)}$, and $\mu_4^{(-1)}$.
It is straightforward to check that $\mu_0^{(+1)}$ and $\mu_4^{(-1)}$ leave the anisotropic parameters unchanged,
\begin{equation}
\mu_0^{(+1)},\mu_4^{(-1)}: \begin{cases}K_\alpha\phantom{-}\longrightarrow K_\alpha,\\ A_1^{(xy)}\longrightarrow A_1^{(xy)},\\A_2^{(xy)}\longrightarrow A_2^{(xy)},\\A_4^{(xy)}\longrightarrow A_4^{(xy)},\end{cases}
\end{equation}
while $\mu_3^{(+1)}$ and $\mu_1^{(-1)}$ change the sign of the out-of-plane bond-dependent interactions,
\begin{equation}
\mu_3^{(+1)},\mu_1^{(-1)}: \begin{cases}K_\alpha\phantom{-}\longrightarrow \phantom{-}K_\alpha,\\ A_1^{(xy)}\longrightarrow -A_1^{(xy)},\\A_2^{(xy)}\longrightarrow\phantom{-}A_2^{(xy)},\\ A_4^{(xy)}\longrightarrow-A_4^{(xy)}.\end{cases}
\end{equation}
In addition to these transformations, global $C_4$ rotations around the $z$-axis now lead to a valid self-duality map $\zeta$, since they interchange the one-dimensional planar irreps, thus flipping the sign of the anisotropic interactions:
\begin{equation}
\zeta: \begin{cases}K_\alpha\phantom{-}\longrightarrow -K_\alpha,\\ A_1^{(xy)}\longrightarrow -A_1^{(xy)},\\A_2^{(xy)}\longrightarrow -A_2^{(xy)},\\ A_4^{(xy)}\longrightarrow -A_4^{(xy)}.\end{cases}
\end{equation}
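A single-site caricature of the map $\zeta$ (not the full bond-dependent model, and the local axes below are illustrative): for unit-length planar spins, $(\mathbf{S}\cdot\mathbf{\hat{n}})^2+(\mathbf{S}\cdot\mathbf{\hat{n}}_\perp)^2=1$, so a global $C_4$ rotation flips the sign of a planar Ising-like anisotropy up to a configuration-independent constant.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites = 6
angles = rng.uniform(0, 2 * np.pi, n_sites)
spins = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # planar unit spins

axes_angles = np.pi * np.arange(n_sites) / 3                # illustrative local easy axes
axes = np.stack([np.cos(axes_angles), np.sin(axes_angles)], axis=1)

def ising_energy(S, K):
    # -K sum_i (S_i . n_i)^2 : planar Ising-like single-site anisotropy
    return -K * np.sum(np.einsum('ij,ij->i', S, axes) ** 2)

# zeta: a global C4 rotation about z maps each S.n to S.n_perp, and
# (S.n)^2 + (S.n_perp)^2 = 1 for planar unit spins, so K -> -K up to a constant
R = np.array([[0.0, -1.0], [1.0, 0.0]])
K = 0.6
shift = -K * n_sites   # the configuration-independent constant
assert np.isclose(ising_energy(spins @ R.T, K), ising_energy(spins, -K) + shift)
```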
The elements $\{\mu_0^{(+1)},\mu_3^{(+1)},\mu_1^{(-1)},\mu_4^{(-1)}\}$ and $\zeta$ commute (even though the corresponding spin transformation matrices generally do not), and together they form the Burnside group $B(3,2)\cong\mathbb{Z}_2\otimes \mathbb{Z}_2\otimes \mathbb{Z}_2$, which is Abelian.
Combining these with the $\mu^{(\eta)}$, the group of self-duality transformations in the intermediate SOC limit becomes the Burnside group $B(4,2)$.
\subsection{\label{subsec:duality_effects}Consequences of self-duality}
The consequences of self-duality are far-reaching.
In general, self-dual transformations can be thought of as the symmetries of the parameter space that do not change the energy of the system.
As a result, dual points in the parameter space (self-dual images) share many physical properties.
These include the ground-state energy and most thermodynamic properties, since the self-duality applies to the partition function as well.
Nor are these results limited to classical systems: the self-dual images must share most quantum properties, up to a redefinition of the quantization axes.
Therefore, given a description of a single phase, one can immediately describe all phases that relate to it via self-duality transformations.
In the following, we discuss a few important properties of self-duality relevant to this work.
From the properties of the duality relations it follows that in order to prove that a certain spin structure is stable for some choice of physical parameters, it is sufficient to show that one of its images is stable somewhere in the parameter space.
Given that even in the simple case of the weak SOC limit the parameter space is four-dimensional, the self-duality maps can significantly reduce the computational cost: for every phase boundary $f(\boldsymbol{\mathcal{A}})$, the image $f\left(\mu\left(\boldsymbol{\mathcal{A}}\right)\right)$ must also correspond to a phase boundary.
At the phase boundaries, self-dualities become symmetry operations, and the Hamiltonian remains invariant after the transformation.
\begin{table}[!t]
\centering
\resizebox{0.5\textwidth}{!}{\begin{tabular}{c|r|r|r|r|r|r|r|r}
$m$ &$\widetilde{J}_1$ & $\widetilde{D}_1$ & $\widetilde{A}^{(z)}_1$ & $\widetilde{J}_2$ & $\widetilde{D}_2$ & $\widetilde{A}^{(z)}_2$ & $\widetilde{J}_4$ & $\widetilde{A}^{(z)}_4$ \\
\hline
0 & $J_1$ & 0 & 0 & $J_2$ & 0 & 0 & $J_4$ & 0 \\
1 & $\frac{1}{2}J_1$ & $\frac{\sqrt{3}}{2}J_1$ & $-\frac{1}{2}J_1$ & $-\frac{1}{2}J_2$ & $\frac{\sqrt{3}}{2}J_2$ & $-\frac{3}{2}J_2$ & $-J_4$ & $-2J_4$\\
2 & $-\frac{1}{2}J_1$ & $\frac{\sqrt{3}}{2}J_1$ & $-\frac{3}{2}J_1$ & $-\frac{1}{2}J_2$ & $-\frac{\sqrt{3}}{2}J_2$ & $-\frac{3}{2}J_2$ & $J_4$ & 0\\
3 & $-J_1$ & 0 & $-2J_1$ & $J_2$ & 0 & 0 & $-J_4$ & $-2J_4$ \\
4 & $-\frac{1}{2}J_1$ & $-\frac{\sqrt{3}}{2}J_1$ & $-\frac{3}{2}J_1$ & $-\frac{1}{2}J_2$ & $\frac{\sqrt{3}}{2}J_2$ & $-\frac{3}{2}J_2$ & $J_4$ & 0\\
5 & $\frac{1}{2}J_1$ & $-\frac{\sqrt{3}}{2}J_1$ & $-\frac{1}{2}J_1$ & $-\frac{1}{2}J_2$ & $-\frac{\sqrt{3}}{2}J_2$ & $-\frac{3}{2}J_2$ & $-J_4$ & $-2J_4$
\end{tabular}}
\caption{Points of accidental $\mathcal{G}_J$ symmetry obtained via the self-duality maps $\mu_m^{(\varepsilon)}$. Note that the value of $\varepsilon$ does not change the number of points.}
\label{tab:accidental_degeneracy}
\end{table}
Self-dualities also allow one to quickly find the points of accidental degeneracy.
Consider, for example, Eq.~(\ref{eq:duality_weak_SOC}) where the original DM parameters are set to zero $D_1=D_2=0$.
We get a set of new parameters, where $\widetilde{D}_1$ and $\widetilde{D}_2$ are generally non-zero.
However, since the self-duality preserves the transformation properties and therefore the symmetry group of the Hamiltonian, all self-dual images in this case must be completely isotropic in spin space (\textit{i.e.} have spin group $\mathcal{G}_J$).
Note that as stated before, in addition to the $\widetilde{J}_k$ and $\widetilde{D}_k$ one must also specify $\widetilde{A}^{(z)}_k = \widetilde{J}_k-J_k$ to ensure that the full rotational symmetry is exact.
With this, we get a set of points with accidental $\mathcal{G}_J$ degeneracy, listed in table~\ref{tab:accidental_degeneracy}.
At these points we expect the spin-wave dispersion to obtain additional zero-energy \textit{pseudo}-Goldstone modes.
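As a quick consistency check (an illustrative Python sketch, not part of the derivation), one can encode the $\widetilde{J}_k$ columns of table~\ref{tab:accidental_degeneracy} and verify that every row satisfies the relation $\widetilde{A}^{(z)}_k=\widetilde{J}_k-J_k$ stated above; the coefficients below are written in units of the corresponding bare couplings $J_k$.

```python
# Rows of the accidental-degeneracy table, as coefficients of (J1, J2, J4):
# each entry maps m -> exchange images Jt_k and anisotropies A_k (D columns
# are not needed for this check).
rows = {
    0: dict(J1t=1.0,  A1=0.0,  J2t=1.0,  A2=0.0,  J4t=1.0,  A4=0.0),
    1: dict(J1t=0.5,  A1=-0.5, J2t=-0.5, A2=-1.5, J4t=-1.0, A4=-2.0),
    2: dict(J1t=-0.5, A1=-1.5, J2t=-0.5, A2=-1.5, J4t=1.0,  A4=0.0),
    3: dict(J1t=-1.0, A1=-2.0, J2t=1.0,  A2=0.0,  J4t=-1.0, A4=-2.0),
    4: dict(J1t=-0.5, A1=-1.5, J2t=-0.5, A2=-1.5, J4t=1.0,  A4=0.0),
    5: dict(J1t=0.5,  A1=-0.5, J2t=-0.5, A2=-1.5, J4t=-1.0, A4=-2.0),
}
# A^(z)_k = Jt_k - J_k; in units of the bare couplings, J_k -> 1.
for m, r in rows.items():
    assert r["A1"] == r["J1t"] - 1.0, m
    assert r["A2"] == r["J2t"] - 1.0, m
    assert r["A4"] == r["J4t"] - 1.0, m
```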
It is important to note, however, that the properties which depend directly on the spin structure (\textit{e.g.} average magnetization) are not always the same, since the self-duality maps generally do not conserve the order parameter.
An important consequence of this is that the self-dual images will have the same spin wave excitation spectra only when the corresponding spin transformations produce proper local axes (\textit{i.e.} when the local transformations are rotations).
This will be discussed in more detail in Sec.~\ref{sec:spin_waves}.
\section{\label{sec:phase_diagram}Classical Phase Diagrams for Decoupled and Weakly Coupled Cases}
\subsection{\label{subsec:pd_overview}Brief overview of the results}
In this section we present the classical magnetic phases for models with either negligible or weak SOC.
The effects of intermediate SOC are discussed in Sec.~\ref{sec:anisotropy}.
Before starting a detailed discussion of magnetic ground states, we summarize the main results of this section.
Based on our findings, we group the magnetic structures into three categories: single-$\mathbf{Q}$ configurations, multi-$\mathbf{Q}$ structures, and incommensurate phases with Ising-like ordering.
We label these as $\boldsymbol{\Phi}_m^{(\mathbf{Q})}$, $\boldsymbol{\Psi}_m^{(\mathbf{M})}$, and $\boldsymbol{\Lambda}_m$ respectively.
The integer label $m$ here refers to the different self-dual images produced by the $\mu_m^{(\varepsilon)}$ elements in the weak SOC limit, as discussed in the previous section.
In addition to these states, in the decoupled limit, some phase boundaries correspond to structures with degenerate wavevectors in the Brillouin zone.
The magnetic ordering wavevectors of all observed states are shown in Fig.~\ref{fig:orders_BZ}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.93\textwidth]{pd_j1_j4_h.pdf}
\caption{Ground state phase diagram for exchange-coupled spins on an AB-stacked kagome lattice. Here, the dashed and dash-dotted boundaries correspond to the $\mathcal{M}_\Sigma$ and $\mathcal{M}_{[\mathbf{q}\mathbf{q}0]}$ manifold states. Since the exchange interactions do not differentiate between the different chirality values, the $m=1,5$ ($m=2,4$) states are written as linear combinations.}
\label{fig:pd_j1_j4_h}
\end{figure*}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{orders_BZ.pdf}
\caption{Magnetic ordering wavevectors corresponding to phases studied in this paper shown in the first Brillouin zone of an AB-stacked kagome lattice. $\mathcal{M}_{[\mathbf{q}\mathbf{q}0]}$ and $\mathcal{M}_\Sigma$ represent degenerate ground state manifolds. $\boldsymbol{\Phi}_m^{(\mathbf{Q})}$ and $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ label the single- and multi-$\mathbf{Q}$ states, and $\boldsymbol{\Lambda}_m$ labels the incommensurate phases with Ising-type order, as described in the main text.}
\label{fig:orders_BZ}
\end{figure}
LT calculations were found to give correct spin configurations and energies for the single-$\mathbf{Q}$ phases.
For the remaining types of phases, the analytical results gave non-normalized spin configurations and, as a result, lower energies compared to the MC simulations.
However, in most cases, LT provided a good estimate of the phase-boundary locations, allowing us to optimize our numerical boundary search.
Therefore, the exact phase boundaries between $\boldsymbol{\Phi}_m^{(\mathbf{Q})}$-type states were calculated using the LT method, and the locations of the remaining boundaries were determined via numerical MC simulations.
We show that Heisenberg exchange interactions are sufficient to stabilize both single- and multi-$\mathbf{Q}$ phases.
The inclusion of DM interactions lifts the degeneracy of the non-collinear phases by stabilizing configurations with particular chirality, which further enriches the magnetic phases.
For most values of exchange constants, the $D_1 - D_2$ phase diagrams display multiple non-trivial ($\mathbf{Q}\neq 0$) structures, including unconventional states with Ising-like ordering.
\subsection{\label{subsec:pd_exchange}Decoupled case}
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{pds_half.pdf}
\caption{Ground state phase diagrams on an AB-stacked kagome lattice for selected values of the exchange couplings and a range of DM values. In the middle diagram, the brightened strips between the $\boldsymbol{\Lambda}_{2(4)}$ and $\boldsymbol{\Phi}_{2(4)}^{(\boldsymbol{\Gamma})}$ phases are coexistence regions with many stable multi-$\mathbf{Q}$ configurations.}
\label{fig:pds}
\end{figure*}
First, we present phase diagrams for the model in Eq.~(\ref{eq:magnetic_hamiltonian}) with $\mathcal{H} = \mathcal{H}_J$ in Fig.~\ref{fig:pd_j1_j4_h}.
Both diagrams ($J_2>0$ and $J_2<0$) display a clear ``inversion'' symmetry corresponding to flipping the sign of both $J_1$ and $J_4$, as a consequence of the self-duality map $\gamma^{(-1)}$, discussed in Sec.~\ref{subsec:duality_J}.
The results yield eight distinct phases, including single- and multi-$\mathbf{Q}$ structures.
When $J_1$ and $J_4$ have the same sign and sufficiently large magnitudes, the ground state spin configurations become collinear, with spins in the A and B layers either parallel ($J_1<0$, $J_4<0$) or antiparallel ($J_1>0$, $J_4>0$) to each other.
On the other hand, opposing signs of the interactions introduce frustration, which for large coupling magnitudes leads to uniform 120-degree states.
Notably, the exchange interactions alone do not differentiate between states with different chirality, leaving the structures with $m=1,5$ ($m=2,4$) degenerate.
Interestingly, for sufficiently large magnitudes of $J_1$ there is a range of $J_4$ values that stabilizes commensurate cycloidal configurations regardless of the sign of $J_2$.
These structures are characterized by a period-three modulation ($\mathbf{Q}=K$) with collinear spins in each unit cell (Fig.~\ref{fig:qK_phases}).
The stability of these phases extends indefinitely for large magnitudes of $J_1$.
In the case of antiferromagnetic nn in-plane interactions, one can also stabilize multi-$\mathbf{Q}$ configurations for intermediate values of $J_1$ and $J_4$.
Similar to the uniform 120-degree phases, the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ structures in Fig.~\ref{fig:pd_j1_j4_h} correspond to degenerate configurations with opposite chiralities.
We also note that at certain phase boundaries structures with degenerate wavevectors can be stabilized.
In particular, the boundaries between $\boldsymbol{\Phi}_0^{(\boldsymbol{\Gamma})}$ ($\boldsymbol{\Phi}_3^{(\boldsymbol{\Gamma})}$) and $\boldsymbol{\Phi}_0^{(\mathbf{K})}$ ($\boldsymbol{\Phi}_3^{(\mathbf{K})}$) states stabilize structures where all wavevectors with $Q_z=0$ are degenerate, leading to a two-dimensional ground state manifold $\mathcal{M}_{[\mathbf{q}\mathbf{q}0]}$ (dash-dotted lines in Fig.~\ref{fig:pd_j1_j4_h}).
Also, the boundaries between $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ and $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ are degenerate along the $\Sigma$ lines in the Brillouin zone and are labeled as $\mathcal{M}_\Sigma$ (dashed lines in Fig.~\ref{fig:pd_j1_j4_h}).
Here we focus on the ordered states and leave the description of the degenerate manifold states to future work.
\subsection{\label{subsec:pd_dmi}Weak coupling case}
Next, we consider the effects of DM interactions.
To reduce the number of independent parameters, we focus on the case where $J_4=0$, and present the diagrams that display the majority of the magnetic phases observed in this study.
The remaining phases can be obtained from the self-duality operations $\mu_m^{(\varepsilon)}$ presented in Sec.~\ref{subsec:duality_JD} (see Supplemental Material).
The phase diagrams are shown in Fig.~\ref{fig:pds}.
We see that DM interactions break the degeneracy associated with the different chirality values of the non-collinear spin structures.
For large magnitudes of both $D_1$ and $D_2$, the ground state eventually becomes one of the $\mathbf{Q}=\Gamma$ 120-degree structures.
For the intermediate values of these constants, the competition between the exchange and DM interactions introduces a plethora of unconventional magnetic phases.
These include single-$\mathbf{Q}$ phases and multi-$\mathbf{Q}$ configurations already discussed, as well as incommensurate phases that manifest themselves as Ising-like stripes.
The latter are stable for a wide range of the DM parameters and extend far beyond the ranges shown in Fig.~\ref{fig:pds}.
These states were found to be hard to resolve in the simulations and required a careful choice of parameters, as well as many MC steps in order to obtain ordered stripe configurations.
Furthermore, for the $J_2>0$, $J_1/|J_2|>\frac{3}{2}$ diagram, we identified thin coexistence regions between the $\boldsymbol{\Lambda}_{2(4)}$ and $\boldsymbol{\Phi}_{2(4)}^{(\boldsymbol{\Gamma})}$ phases where various multi-$\mathbf{Q}$ configurations are stable.
\section{\label{sec:phases_structure}Structure of the magnetic phases}
\subsection{\label{subsec:structure_single_q}Single-$\mathbf{Q}$ phases}
\begin{figure}[!th]
\centering
\includegraphics[width=0.45\textwidth]{q0_table_horizontal.pdf}
\caption{The $\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_m$ spin configurations. Blue and red arrows are used to differentiate the structures that are symmetric and antisymmetric under the spatial inversion.}
\label{fig:q0_phases}
\end{figure}
In this work, we encounter two types of single-$\mathbf{Q}$ phases: those with $\mathbf{Q}=\Gamma$ and $\mathbf{Q}=K$, which we label as $\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_m$ and $\boldsymbol{\Phi}^{(\mathbf{K})}_m$ respectively.
The spin configurations of these phases are shown in Figs.~\ref{fig:q0_phases} and \ref{fig:qK_phases}.
Note that for exchange-only interactions, the spins generally also possess out-of-plane components, since the order parameters in this case belong to one of the four irreps in Sec.~\ref{subsec:symmetry_decoupled}.
When a weak SOC is turned on in the system, the in-plane and out-of-plane components are no longer equivalent and correspond to different irreps (Sec.~\ref{subsec:symmetry_weakly_coupled}).
Thus, in the decoupled limit, these structures can be parameterized as $\mathbf{S}_i(\mathbf{r}) = R(\phi_z,\phi)R^{(\mathbf{Q})}(\mathbf{r})M_i^{(m)} \mathbf{\hat{S}}$, where $\mathbf{\hat{S}}$ is an arbitrary in-plane unit vector, $M_i^{(m)}$ is defined in Eq.~(\ref{eq:matrix_dual_m}), and the remaining two rotation matrices are defined as
\begin{align}
R(\phi_z,\phi) &= \begin{bmatrix} \cos(\phi) & -\sin(\phi) & 0\\\cos(\phi_z)\sin(\phi) & \cos(\phi_z)\cos(\phi) &-\sin(\phi_z)\\
\sin(\phi_z)\sin(\phi) & \sin(\phi_z)\cos(\phi) & \cos(\phi_z)\end{bmatrix},
\label{eq:single_Q_param_z}\\
R^{(\mathbf{Q})}(\mathbf{r}) &= \begin{bmatrix} \cos(\mathbf{Q}\cdot\mathbf{r}) &-\sin(\mathbf{Q}\cdot\mathbf{r}) & 0\\
\sin(\mathbf{Q}\cdot\mathbf{r}) & \cos(\mathbf{Q}\cdot\mathbf{r}) & 0\\
0 & 0 & 1\end{bmatrix}.
\label{eq:single_Q_param_r}
\end{align}
The matrix $R(\phi_z,\phi)$ sets the global phase of the spin configurations.
In the weakly-coupled limit, the parameterization is changed to $\mathbf{S}_i(\mathbf{r}) = R(0,\phi)R^{(\mathbf{Q})}(\mathbf{r})M_i^{(m)} \mathbf{\hat{S}}$.
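For concreteness, the parameterization above can be checked numerically; the short Python sketch below (an illustration, with arbitrarily chosen angles and wavevectors) builds $R(\phi_z,\phi)$ and $R^{(\mathbf{Q})}(\mathbf{r})$ from Eqs.~(\ref{eq:single_Q_param_z}) and (\ref{eq:single_Q_param_r}) and verifies that both are proper rotations, so the parameterized spins remain unit vectors.

```python
import numpy as np

def R_global(phi_z, phi):
    """Global-phase matrix of Eq. (single_Q_param_z): R_x(phi_z) @ R_z(phi)."""
    c, s = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    return np.array([[c,    -s,    0.0],
                     [cz*s,  cz*c, -sz],
                     [sz*s,  sz*c,  cz]])

def R_Q(Q, r):
    """In-plane propagation matrix of Eq. (single_Q_param_r)."""
    a = np.dot(Q, r)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

rng = np.random.default_rng(0)
for _ in range(10):
    phi_z, phi = rng.uniform(0, 2*np.pi, size=2)
    M = R_global(phi_z, phi) @ R_Q(rng.uniform(-np.pi, np.pi, 2),
                                   rng.uniform(-3, 3, 2))
    # proper rotation: orthogonal with unit determinant
    assert np.allclose(M @ M.T, np.eye(3))
    assert np.isclose(np.linalg.det(M), 1.0)
    # hence any unit in-plane spin stays normalized
    S = M @ np.array([np.cos(0.3), np.sin(0.3), 0.0])
    assert np.isclose(np.linalg.norm(S), 1.0)
```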
The corresponding classical energies of these spin configurations are presented in table~\ref{tab:single_Q_energies}.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c}
$m$ & $\mathcal{H}^{0}\left(\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}\right)$ & $\mathcal{H}^{0}\left(\boldsymbol{\Phi}_m^{(\mathbf{K})}\right)$ \\
\hline
0 & $2J_1+2J_2+3J_4$ & $2J_1+\frac{J_2}{2}$ \\
1 & $J_1-J_2-3J_4-\sqrt{3}(D_1+D_2)$ & $J_1-\frac{J_2}{4}-\sqrt{3}(D_1+\frac{D_2}{4})$ \\
2 & $-J_1-J_2+3J_4-\sqrt{3}(D_1-D_2)$ & $-J_1-\frac{J_2}{4}-\sqrt{3}(D_1-\frac{D_2}{4})$ \\
3 & $-2J_1+2J_2-3J_4$ & $-2J_1+\frac{J_2}{2}$ \\
4 & $-J_1-J_2+3J_4+\sqrt{3}(D_1-D_2)$ & $-J_1-\frac{J_2}{4}+\sqrt{3}(D_1-\frac{D_2}{4})$ \\
5 & $J_1-J_2-3J_4+\sqrt{3}(D_1+D_2)$ & $J_1-\frac{J_2}{4}+\sqrt{3}(D_1+\frac{D_2}{4})$
\end{tabular}
\caption{Classical energies of the single-$\mathbf{Q}$ configurations.}
\label{tab:single_Q_energies}
\end{table}
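The energies in table~\ref{tab:single_Q_energies} can be cross-checked against the self-duality relations of Sec.~\ref{sec:self_duality}; the Python sketch below (illustrative only, with arbitrary parameter values) encodes the $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ column and verifies that opposite-chirality partners map into each other under $D_k\to-D_k$, while $m=0$ and $m=3$ are exchanged by flipping the signs of $J_1$ and $J_4$.

```python
import math

S3 = math.sqrt(3.0)

def E_gamma(m, J1, J2, J4, D1, D2):
    """Classical energies H0(Phi_m^Gamma) transcribed from the table."""
    return {
        0:  2*J1 + 2*J2 + 3*J4,
        1:  J1 - J2 - 3*J4 - S3*(D1 + D2),
        2: -J1 - J2 + 3*J4 - S3*(D1 - D2),
        3: -2*J1 + 2*J2 - 3*J4,
        4: -J1 - J2 + 3*J4 + S3*(D1 - D2),
        5:  J1 - J2 - 3*J4 + S3*(D1 + D2),
    }[m]

p = (0.7, -0.4, 0.2, 0.9, -1.1)          # arbitrary (J1, J2, J4, D1, D2)
J1, J2, J4, D1, D2 = p
# chirality partners: m=1 <-> m=5 and m=2 <-> m=4 under D -> -D
assert math.isclose(E_gamma(1, J1, J2, J4, -D1, -D2), E_gamma(5, *p))
assert math.isclose(E_gamma(2, J1, J2, J4, -D1, -D2), E_gamma(4, *p))
# exchange-only "inversion": m=0 <-> m=3 under J1 -> -J1, J4 -> -J4
assert math.isclose(E_gamma(0, -J1, J2, -J4, D1, D2), E_gamma(3, *p))
```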
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{qK_table_vertical.pdf}
\caption{The $\boldsymbol{\Phi}^{(\mathbf{K})}_m$ spin configurations.}
\label{fig:qK_phases}
\end{figure}
We note that while formally the $z$-components of the spins are decoupled from the planar ones in the weak SOC limit, the energy of the $\boldsymbol{\Phi}_0^{(\mathbf{Q})}$ and $\boldsymbol{\Phi}_3^{(\mathbf{Q})}$ structures is independent of the DM parameters, leading to a full rotational degeneracy at the classical level.
This degeneracy originates from the collinear sublattice structure in these phases.
As will be discussed in Sec.~\ref{sec:spin_waves}, thermal fluctuations break this classical degeneracy and orient the structures in the out-of-plane direction.
\subsection{\label{subsec:structure_multi_q}Multi-$\mathbf{Q}$ phases}
The most prominent type of multi-$\mathbf{Q}$ states in the phase diagrams in Fig.~\ref{fig:pd_j1_j4_h} and~\ref{fig:pds} is $\boldsymbol{\Psi}_m^{(\mathbf{M})}$.
The structure factor of these phases displays one peak at the $\Gamma$ point, as well as three peaks at the $M$ points of the first Brillouin zone (forming the star of $M$), as shown in Fig.~\ref{fig:orders_BZ}.
The spin configurations are shown in Fig.~\ref{fig:4q_phases}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.41\textwidth]{4q_table_vertical.pdf}
\caption{Magnetic unit cells of the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ configurations. Colors indicate spins that belong to the same orbit (see main text).}
\label{fig:4q_phases}
\end{figure}
The parameterization of these structures is not trivial.
To make progress, we consider the symmetry of these configurations in order to reduce the number of independent variables.
The analysis below is performed assuming the weak coupling case and then generalized to also describe the decoupled case.
We first note that the peaks of the structure factor in Fig.~\ref{fig:orders_BZ} are completely symmetric with respect to the $\mathrm{D}_{6h}$ point group operations.
Therefore, even though these spin structures break planar translations by a single unit cell, the point group lattice transformations around the (0,0) site are preserved, provided that we combine them with the necessary spin rotations.
Thus, we can still express the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ configurations in terms of the two-dimensional $E_g^{(1m)}$ and $E_u^{(1m)}$ irreps of $\mathcal{G}_D$ in table~\ref{tab:GD_character_table}.
The general representation of the spin structures in Fig.~\ref{fig:4q_phases} decomposes as
\begin{equation}
\boldsymbol{\Psi}^{(\mathbf{M})} = \bigoplus_m 4E_{a_m}^{(1m)},
\end{equation}
where $a_m = g$ if $m$ is even and $a_m = u$ otherwise.
Because all of the symmetry operations transform spins globally (\textit{i.e.} there are no site-dependent transformations), it is impossible for a spin configuration to be described by irreps with more than one value of the winding $m$ without breaking the $\mathrm{D}_{6h}$ lattice symmetry.
Therefore, a given spin configuration must decompose into four irreps with the same winding number $m$, $\boldsymbol{\Psi}_m^{(\mathbf{M})} = 4E_{a_m}^{(1m)}$.
As a result, the parameterization of $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ can be written in terms of four planar vectors, which describe the spins belonging to the same \textit{orbit}.
Here, we define an orbit as a set of all spin components related to each other via lattice point group transformations (Fig.~\ref{fig:4q_phases}).
The four orbits are given below.
\begin{alignat}{2}
\mathbf{O}^{(1)}_i &: \big\{&&\mathbf{S}_1(0,0),\mathbf{S}_2(0,0),\mathbf{S}_3(0,0),\notag\\
& &&\mathbf{S}_4(0,0),\mathbf{S}_5(0,0),\mathbf{S}_6(0,0)\big\},\\[5pt]
\mathbf{O}^{(2)}_i &: \big\{&&\mathbf{S}_1(1,0),\mathbf{S}_2(1,1),\mathbf{S}_3(0,1),\notag\\
& &&\mathbf{S}_4(1,0),\mathbf{S}_5(1,1),\mathbf{S}_6(0,1)\big\},
\end{alignat}
\begin{alignat}{2}
\mathbf{O}^{(3)}_i,\mathbf{O}^{(4)}_i &: \big\{&&\{\mathbf{S}_1(1,1), \mathbf{S}_2(0,1), \mathbf{S}_3(1,0), \notag\\
& &&\mathbf{S}_4(1,1), \mathbf{S}_5(0,1), \mathbf{S}_6(1,0)\}, \notag\\
& &&\{\mathbf{S}_1(0,1), \mathbf{S}_2(1,0), \mathbf{S}_3(1,1), \notag\\
& &&\mathbf{S}_4(0,1), \mathbf{S}_5(1,0), \mathbf{S}_6(1,1)\big\}\big\}.
\end{alignat}
The positions of the spins inside of the magnetic unit cell are given by two integers $(k,l)$, which correspond to displacement vector $\mathbf{r} = k\mathbf{a}_1+ l\mathbf{a}_2$.
The last two orbits involve different components of spins on the same set of sites.
Thus, the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ order parameters can generally be written in terms of the four vectors $\mathbf{O}_i^{(k)}$.
However, due to the constraint on the norm of the spin vectors, the orbits $\mathbf{O}^{(3)}_i$ and $\mathbf{O}^{(4)}_i$ are not independent and share their parameterization.
We can therefore write
\begin{equation}
\boldsymbol{\Psi}_m^{(\mathbf{M})} = \min_{\phi_k}\sum_{k=1}^4 R(0,\phi_k)M_i^{(m)}\mathbf{O}_i^{(k)},
\end{equation} where $M_i^{(m)}$ is given in Eq.~(\ref{eq:matrix_dual_m}), $R(\phi_z,\phi)$ is defined in Eq.~(\ref{eq:single_Q_param_z}), and $\phi_k$ are the angles that parameterize the spin structure.
The exact spin configurations are then obtained by minimizing the energy with respect to the $\phi_k$.
To simplify the minimization, we can remove the in-plane rotations that do not change the energy by fixing the value of $\phi_1$, and also use the equivalence of $\mathbf{O}_i^{(3)}$ and $\mathbf{O}_i^{(4)}$ to deduce that $\phi_3=-\phi_4$.
This leaves us with two independent variables which can be determined using numerical minimization.
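Schematically, this final minimization over the two angles can be done with a brute-force grid search; the Python sketch below uses a stand-in energy function (hypothetical, chosen only for illustration) in place of the actual classical energy of the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ state.

```python
import numpy as np

def energy(phi2, phi3):
    """Stand-in for the classical energy E(phi_2, phi_3); the true function
    follows from inserting the orbit parameterization into the Hamiltonian."""
    return -np.cos(phi2 - 0.7) - np.cos(phi3 + 0.3)

# brute-force search over the two independent angles (phi_1 fixed, phi_4 = -phi_3)
grid = np.linspace(0.0, 2.0*np.pi, 400, endpoint=False)
P2, P3 = np.meshgrid(grid, grid, indexing="ij")
E = energy(P2, P3)
i, j = np.unravel_index(np.argmin(E), E.shape)
phi2_min, phi3_min = grid[i], grid[j]

# for this stand-in the minimum is at phi2 = 0.7, phi3 = -0.3 (mod 2*pi)
assert E.min() <= -2.0 + 1e-3
assert abs(phi2_min - 0.7) < 0.02
assert abs((phi3_min - (2.0*np.pi - 0.3) + np.pi) % (2.0*np.pi) - np.pi) < 0.02
```

In practice, a coarse grid of this kind can seed a gradient-based refinement if higher accuracy is needed.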
Finally, we note that in the decoupled limit the above analysis still holds, although the spin configurations may also possess out-of-plane components, which can be parameterized by introducing a non-zero angle $\phi_z$.
\subsection{\label{subsec:structure_ising} Ising-like phases}
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{ising_order.pdf}
\caption{(a) A snapshot of the low-temperature $\boldsymbol{\Lambda}_4$ spin configuration on a lattice with $18^3\times 6$ spins. This state was obtained using $J_1 = 1$, $J_2 = 1$, $D_1 = -1$, and $D_2 = -0.5$. The order parameters that define the Ising variables are shown in the top right corner. (b) Structure factor calculated for a system with $30\times 30\times 2\times 6$ spins. The values were normalized with respect to the maxima. The parameters used in the simulation are the same as in (a). (c) Fourier transform of the effective Ising Hamiltonian in Eq.~(\ref{eq:corrected_energy}) obtained using the LT approximation.}
\label{fig:ising_phases}
\end{figure*}
Lastly, we consider the $\boldsymbol{\Lambda}_m$ phases.
The complete description of these structures is complicated by the fact that there appears to be a large number of local minima that are very close in energy to the ground state configuration.
Furthermore, the spin configurations depend strongly on the size of the system, indicating an incommensurate nature of the magnetic order.
As a result, in order to resolve a single structure, it is necessary to perform long ($\sim 10^5$ MC steps) simulations on large ($> 3\cdot 10^4$ spins) systems.
The resulting $\boldsymbol{\Lambda}_m$ spin configurations often manifest in non-trivial patterns, where in each unit cell, the order parameter is approximately equal to $\pm\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$.
The sign of this effective order parameter alternates rapidly throughout the system, giving rise to discrete domains.
For this reason, we refer to these states as Ising-like.
In the following, we focus on the description of this Ising behavior by considering the $\boldsymbol{\Lambda}_4$ phase, although the discussion applies equally to all $\boldsymbol{\Lambda}_m$ states by virtue of the self-duality transformations (Sec.~\ref{sec:self_duality}).
Since these phases do not appear in the decoupled limit, the discussion below assumes weak SOC.
Fig.~\ref{fig:ising_phases}~(a) shows a typical low-temperature $\boldsymbol{\Lambda}_4$ spin configuration.
It is important to note that each kagome layer is identical, corresponding to $Q_z=0$, but within the plane the pattern often appears seemingly random.
The structure factor (Fig.~\ref{fig:ising_phases}~(b)) displays a characteristic ring in the Brillouin zone with a relatively well-defined radius.
Within this ring there are typically a few peaks with larger intensity, corresponding to the dominant orientations of the stripy domains.
In our simulations, the spin deviations from the $\boldsymbol{\Phi}_4^{(\boldsymbol{\Gamma})}$ state in each unit cell were found to be restricted to the plane of the kagome (the out-of-plane deviations are typically four orders of magnitude smaller).
Therefore, for simplicity, we express the spins as two-dimensional vectors
\begin{equation}
\mathbf{S}_i(\mathbf{r}) = \begin{bmatrix}\cos\left( \theta_i^{(4)} + \delta\tilde{\theta}_i(\mathbf{r}) \right)\\\sin\left( \theta_i^{(4)} + \delta\tilde{\theta}_i(\mathbf{r}) \right)\end{bmatrix},
\end{equation} where $\theta_i^{(m)}$ are the angles in Eq.~(\ref{eq:duality_angles}) that correspond to the $\boldsymbol{\Phi}_4^{(\boldsymbol{\Gamma})}$ order parameter, and $\delta\tilde{\theta}_i(\mathbf{r})$ are the deviations from the order parameter.
In order to account for the alternating sign of the $\boldsymbol{\Phi}_4^{(\boldsymbol{\Gamma})}$ order parameter, we further rewrite these deviations as
\begin{equation}
\delta\tilde{\theta}_i(\mathbf{r}) = -\frac{v(\mathbf{r})-1}{2}\pi + \delta\theta_i(\mathbf{r}),
\end{equation} where $v(\mathbf{r})$ are Ising variables that take values $\pm 1$, and $0\leq\delta\theta_i(\mathbf{r})<\pi$ are the residual angle deviations.
Substituting this parameterization into the Hamiltonian with exchange and DM interactions is equivalent to performing a $\mu_4^{(+1)}$ gauge transformation, from which we obtain
\begin{align}
\mathcal{H} = \sum_{\mathbf{r}\mathbf{r}'}\Bigg[\sum_{\langle ij\rangle} &\widetilde{J}_{ij}(\mathbf{r}-\mathbf{r}')\cos\Big(\delta\theta_{ij}(\mathbf{r},\mathbf{r}')\Big) \label{eq:JD_ising_local_coord}\\
+ &\widetilde{D}_{ij}(\mathbf{r}-\mathbf{r}')\sin\Big(\delta\theta_{ij}(\mathbf{r},\mathbf{r}')\Big)\Bigg]v(\mathbf{r})v(\mathbf{r}'),\notag
\end{align} where $\widetilde{J}_{ij}(\boldsymbol{\delta})$ and $\widetilde{D}_{ij}(\boldsymbol{\delta})$ are given in Eq.~(\ref{eq:duality_weak_SOC}) ($m=4$ and $\varepsilon=+1$), and $\delta\theta_{ij}(\mathbf{r},\mathbf{r}')=\delta\theta_i(\mathbf{r})-\delta\theta_j(\mathbf{r}')$.
Eq.~(\ref{eq:JD_ising_local_coord}) can be recast into an Ising model on a triangular lattice with non-local nn interactions:
\begin{equation}
\mathcal{H} = \sum_{\mathbf{r}\mathbf{r}'}\mathcal{J}(\mathbf{r},\mathbf{r}')v(\mathbf{r})v(\mathbf{r}') + \mathcal{B}(\mathbf{r},\mathbf{r}').
\end{equation} Here, both the Ising exchange coupling $\mathcal{J}(\mathbf{r},\mathbf{r}')$, and constant $\mathcal{B}(\mathbf{r},\mathbf{r}')$ depend on $\delta\theta_i(\mathbf{r})$.
It is worth discussing the values of $\widetilde{J}_{ij}(\boldsymbol{\delta})$ and $\widetilde{D}_{ij}(\boldsymbol{\delta})$ for which the $\boldsymbol{\Lambda}_4$ phase is stable.
In Fig.~\ref{fig:ising_phases}, the parameters are $J_1=J_2 = 1$, $D_1=-1$, and $D_2=-0.5$, which corresponds to $\widetilde{J}_1 \approx -1.37$, $\widetilde{J}_2 \approx -0.07$, $\widetilde{D}_1 \approx -0.37$, and $\widetilde{D}_2 \approx 1.12$.
Therefore, in the local coordinate frame, the inter-plane interactions are dominated by the ferromagnetic exchange, whereas the in-plane interactions almost exclusively come from the DM term.
Notably, the $\boldsymbol{\Lambda}_4$ phase in this case (Fig.~\ref{fig:pds}) appears to be stable for arbitrarily large values of $D_1$, which corresponds to large negative values of $\widetilde{J}_1$.
Using this information, we can expand Eq.~(\ref{eq:JD_ising_local_coord}) to second order in $\delta\theta_i(\mathbf{r})$, while assuming that $\widetilde{J}_2$ is of the same order of magnitude as the deviations.
We also impose ferromagnetic order along the $z$-direction by setting $\delta\theta_i(\mathbf{r}\pm \mathbf{a}_3) = \delta\theta_i(\mathbf{r})$ and $v(\mathbf{r}\pm \mathbf{a}_3) = v(\mathbf{r})$. Under these assumptions, Eq.~(\ref{eq:JD_ising_local_coord}) simplifies to the quadratic form
\begin{equation}
\mathcal{H} \approx \mathcal{H}^{(0)} + \sum_\mathbf{r}\sum_i g_i(\mathbf{r})\delta\theta_i(\mathbf{r}) + \frac{1}{2}\sum_\mathbf{r}\sum_{ij}H_{ij}\delta\theta_i(\mathbf{r})\delta\theta_j(\mathbf{r}),
\label{eq:quadratic_form}
\end{equation} where $\mathcal{H}^{(0)}$ is independent of $\delta\theta_i(\mathbf{r})$:
\begin{widetext}
\begin{align}
\mathcal{H}^{(0)} = \sum_{\mathbf{r}} 6\widetilde{J}_1\Big[&1 + v(\mathbf{r})v(\mathbf{r}+\mathbf{a}_3) \Big] \notag\\+ 2\widetilde{J}_2\Big[&3 + v(\mathbf{r})v(\mathbf{r}+\mathbf{a}_1) + v(\mathbf{r})v(\mathbf{r}+\mathbf{a}_2) + v(\mathbf{r})v(\mathbf{r}+\mathbf{a}_1+\mathbf{a}_2)\Big],
\end{align}
and $g_i(\mathbf{r})$ and $H_{ij}$ are the gradient vector and Hessian matrix respectively:
\begin{equation}
g_i(\mathbf{r}) = -\widetilde{D}_2v(\mathbf{r})\begin{bmatrix} v(\mathbf{r}-\mathbf{a}_2) - v(\mathbf{r}-\mathbf{a}_1-\mathbf{a}_2)\\
v(\mathbf{r}-\mathbf{a}_1) - v(\mathbf{r}+\mathbf{a}_2)\\
v(\mathbf{r}+\mathbf{a}_1+\mathbf{a}_2) - v(\mathbf{r}+\mathbf{a}_1)\\
v(\mathbf{r}+\mathbf{a}_2) - v(\mathbf{r}+\mathbf{a}_1+\mathbf{a}_2)\\
v(\mathbf{r}+\mathbf{a}_1) - v(\mathbf{r}-\mathbf{a}_2)\\
v(\mathbf{r}-\mathbf{a}_1-\mathbf{a}_2) - v(\mathbf{r}-\mathbf{a}_1) \end{bmatrix},
H_{ij} = 2\widetilde{J}_1\begin{bmatrix}-2 & 0 & 0 & 0 & 1 & 1\\
0 &-2 & 0 & 1 & 0 & 1\\
0 & 0 &-2 & 1 & 1 & 0\\
0 & 1 & 1 &-2 & 0 & 0\\
1 & 0 & 1 & 0 &-2 & 0\\
1 & 1 & 0 & 0 & 0 &-2\end{bmatrix}.
\label{eq:grad_hessian}
\end{equation}
\end{widetext}
The Hessian matrix in Eq.~(\ref{eq:grad_hessian}) is positive semi-definite, with a single zero in the eigenvalue spectrum corresponding to global rotations.
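This can be checked directly; the Python sketch below (using the example value $\widetilde{J}_1\approx-1.37$ quoted earlier) builds the Hessian block of Eq.~(\ref{eq:grad_hessian}), confirms that it is positive semi-definite with a single zero mode given by a uniform rotation of all six angles (the explicit spectrum $2|\widetilde{J}_1|\{0,1,1,3,3,4\}$ is our own computation), and checks that the gradient $g_i(\mathbf{r})$ has no overlap with that mode for any Ising configuration.

```python
import numpy as np

J1t = -1.37  # example local-frame value from the text

M = np.array([[-2, 0, 0, 0, 1, 1],
              [ 0,-2, 0, 1, 0, 1],
              [ 0, 0,-2, 1, 1, 0],
              [ 0, 1, 1,-2, 0, 0],
              [ 1, 0, 1, 0,-2, 0],
              [ 1, 1, 0, 0, 0,-2]], dtype=float)
H = 2.0 * J1t * M  # Hessian of Eq. (grad_hessian)

w, V = np.linalg.eigh(H)  # ascending eigenvalues
# positive semi-definite, single zero mode: spectrum 2|J1t| * {0, 1, 1, 3, 3, 4}
assert np.allclose(w, 2.0*abs(J1t)*np.array([0.0, 1.0, 1.0, 3.0, 3.0, 4.0]))
# the zero mode is the uniform rotation of all six sublattice angles
assert np.allclose(np.abs(V[:, 0]), 1.0/np.sqrt(6.0))

# the gradient g_i(r) of Eq. (grad_hessian) sums to zero over the sublattices,
# i.e. it is orthogonal to the zero mode, for any Ising configuration
rng = np.random.default_rng(1)
v = rng.choice([-1.0, 1.0], size=7)  # v(r) and its six in-plane neighbors
vr, n1, n2, n3, n4, n5, n6 = v       # n1..n6: v(r-a2), v(r-a1-a2), v(r-a1),
g = -1.12 * vr * np.array([          #         v(r+a2), v(r+a1+a2), v(r+a1)
    n1 - n2, n3 - n4, n5 - n6,
    n4 - n5, n6 - n1, n2 - n3])      # D2t ~ 1.12 from the example parameters
assert abs(g.sum()) < 1e-12
```

Since the gradient is orthogonal to the null space, the inverse $H^{-1}$ can be taken as a pseudo-inverse on the non-degenerate subspace.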
After this mode is integrated out by fixing the planar orientation of the spins, a straightforward minimization of the quadratic form gives
\begin{equation}
\delta\theta_i^\text{min}(\mathbf{r}) = -\sum_j H^{-1}_{ij}g_j(\mathbf{r}).
\label{eq:solution_normal_quadratic_form}
\end{equation}
Thus, the minimized energy can be written as
\begin{equation}
\mathcal{H}_\text{min} = \mathcal{H}^{(0)} +dE,
\label{eq:corrected_energy}
\end{equation}
where the energy contribution of the spin deviations is given by
\begin{widetext}
\begin{alignat}{2}
dE &= -\frac{1}{2}\sum_\mathbf{r}\sum_{ij} H&&^{-1}_{ij}g_i(\mathbf{r})g_j(\mathbf{r}) \notag\\
&= \phantom{-}\mathcal{K}\sum_\mathbf{r}v(\mathbf{r})\Bigg[ &&\phantom{-}2v(\mathbf{r}+\mathbf{a}_1) + 2v(\mathbf{r}+\mathbf{a}_2) + 2v(\mathbf{r}+\mathbf{a}_1+\mathbf{a}_2) \notag\\
& &&+ v(\mathbf{r}+2\mathbf{a}_1) + v(\mathbf{r}+2\mathbf{a}_2) + v(\mathbf{r}+2\mathbf{a}_1+2\mathbf{a}_2)\notag\\
& &&+ 2v(\mathbf{r}+\mathbf{a}_1-\mathbf{a}_2) + 2v(\mathbf{r}+2\mathbf{a}_1+\mathbf{a}_2) + 2v(\mathbf{r}+\mathbf{a}_1+2\mathbf{a}_2) -15 \Bigg],
\label{eq:energy_contribution}
\end{alignat}
\end{widetext}
and the constant $\mathcal{K}$ is defined as
\begin{equation}
\mathcal{K} = -\frac{\widetilde{D}_2^2}{12\widetilde{J}_1}.
\end{equation}
Note that the positions of the Ising variables in Eq.~(\ref{eq:energy_contribution}) have been shifted to better indicate the nature of the effective interactions.
We see that the spin deviations renormalize nearest-neighbor interactions between Ising variables and also induce second- and third-neighbor in-plane antiferromagnetic interactions (since $\widetilde{J}_1$ is negative).
These values of the coupling constants stabilize incommensurate phases in the triangular Ising antiferromagnets~\cite{Plumer_triangular_nnn_ising_1993_prb}.
Qualitatively \footnote{We note that a more systematic study of these phases is currently underway.}, we can understand the nature of these unusual phases in the following way.
The $\widetilde{J}_1$ interactions establish ferromagnetic ordering along the $z$-axis, ensuring $Q_z=0$.
The $\widetilde{D}_2$ interactions couple small spin deviations to stabilize incommensurate in-plane wavevectors with a fixed magnitude $Q_c$.
An LT analysis of Eq.~(\ref{eq:corrected_energy}) gives a dispersion with a degenerate ring of radius $Q_c$, in agreement with the MC results (Fig.~\ref{fig:ising_phases}~(c)).
However, due to the Ising nature of the ordering, we expect there to be a larger degeneracy in the ground state than what is predicted by the LT method.
Although it is not evident from the analysis above, the phase diagrams in Fig.~\ref{fig:pds} indicate that the larger values of $\widetilde{J}_2$ serve to tune $Q_c$, until it either becomes zero ($\mathbf{Q}=\Gamma$) or reaches the zone boundary ($\mathbf{Q}=K,M$).
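To illustrate this picture, the Python sketch below (assuming in-plane lattice vectors $\mathbf{a}_1=(1,0)$ and $\mathbf{a}_2=(-1/2,\sqrt{3}/2)$, and the example couplings quoted above) Fourier transforms the effective in-plane Ising couplings of Eqs.~(\ref{eq:corrected_energy})--(\ref{eq:energy_contribution}) and checks that the resulting dispersion is maximal at $\Gamma$ and minimal at a nonzero wavevector.

```python
import numpy as np

J1t, J2t, D2t = -1.37, -0.07, 1.12        # example local-frame couplings
K = -D2t**2 / (12.0 * J1t)                # the constant K (> 0 here)

a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0)/2.0])   # assumed in-plane lattice vectors

# effective Ising couplings from H^(0) and dE (one entry per bond direction)
bonds = ([(2.0*J2t + 2.0*K, d) for d in (a1, a2, a1 + a2)] +          # nn
         [(2.0*K, d) for d in (a1 - a2, 2*a1 + a2, a1 + 2*a2)] +      # 2nd
         [(K, d) for d in (2*a1, 2*a2, 2*(a1 + a2))])                 # 3rd

def J_of_q(qx, qy):
    return sum(c*np.cos(qx*d[0] + qy*d[1]) for c, d in bonds)

q = np.linspace(-3.0, 3.0, 121)           # grid containing q = 0
QX, QY = np.meshgrid(q, q, indexing="ij")
Jq = J_of_q(QX, QY)

# all effective couplings are antiferromagnetic, so J(q) peaks at Gamma...
assert np.isclose(Jq.max(), J_of_q(0.0, 0.0))
# ...while the minimum sits at a nonzero in-plane wavevector
i, j = np.unravel_index(np.argmin(Jq), Jq.shape)
assert np.hypot(q[i], q[j]) > 0.5
```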
\section{\label{sec:spin_waves} Spin waves}
\subsection{\label{subsec:spin_waves_dynamic_matrix}Dynamical matrix}
In this section, we consider the elementary spin excitations in certain magnetic phases presented in Sec.~\ref{sec:phase_diagram}.
To do this, we solve the linearized torque equation~(\ref{eq:LL_equation}), as described in Sec.~\ref{subsec:methods_dynamics}.
Once the appropriate local coordinates are selected, the equations of motion can be generally written in the matrix form as
\begin{widetext}
\begin{equation}
\frac{\text{d}}{\text{d}t}\begin{bmatrix}
\widetilde{S}_{ix}(\mathbf{r},t)\\\widetilde{S}_{iy}(\mathbf{r},t)
\end{bmatrix} = \sum_{\mathbf{r}'}\sum_j \begin{bmatrix}
\widetilde{\mathcal{A}}_{yxij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{jx}(\mathbf{r}',t) + \widetilde{\mathcal{A}}_{yyij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{jy}(\mathbf{r}',t) - \widetilde{\mathcal{A}}_{zzij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{iy}(\mathbf{r},t)\\ \widetilde{\mathcal{A}}_{zzij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{ix}(\mathbf{r},t) - \widetilde{\mathcal{A}}_{xyij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{jy}(\mathbf{r}',t) - \widetilde{\mathcal{A}}_{xxij}(\mathbf{r}-\mathbf{r}')\widetilde{S}_{jx}(\mathbf{r}',t)
\end{bmatrix}
\label{eq:linearized_sw_time_space}
\end{equation}
\end{widetext}
where $\widetilde{\mathcal{A}}_{\alpha\beta ij}(\mathbf{r}-\mathbf{r}')$ are the elements of the coupling matrix in the local coordinates, $\mathbf{r}$ and $\mathbf{r}'$ determine the positions of the magnetic unit cells, and $i$ and $j$ generally label the non-equivalent magnetic sublattices.
Eq.~(\ref{eq:linearized_sw_time_space}) is solved by first defining the spatial Fourier transforms of the spin components as in Eq.~(\ref{eq:spin_FT}), and then Fourier transforming in the time domain ($\mathbf{S}_i(\mathbf{q},t) = \sum_{\omega} \mathbf{S}_i(\mathbf{q},\omega)e^{-i\omega t}$).
The result is written as an eigenvalue equation:
\begin{equation}
-i\omega(\mathbf{q})\begin{bmatrix}
\mathbf{S}_{x}\\ \mathbf{S}_{y}
\end{bmatrix} = \begin{bmatrix} \mathbf{u}^{(xx)}(\mathbf{q}) & \mathbf{u}^{(xy)}(\mathbf{q})\\ \mathbf{u}^{(yx)}(\mathbf{q}) & \mathbf{u}^{(yy)}(\mathbf{q}) \end{bmatrix}\begin{bmatrix}
\mathbf{S}_{x}\\ \mathbf{S}_{y}
\end{bmatrix}
\label{eq:dynamical_matrix}
\end{equation}
where
\begin{align}
u^{(xx)}_{ij}(\mathbf{q}) &= \widetilde{\mathcal{A}}_{yxij}(\mathbf{q})\\
u^{(xy)}_{ij}(\mathbf{q}) &= \widetilde{\mathcal{A}}_{yyij}(\mathbf{q}) - \sum_k\widetilde{\mathcal{A}}_{zzjk}(\mathbf{q})\Delta_{ij}\\
u^{(yx)}_{ij}(\mathbf{q}) &= \sum_k\widetilde{\mathcal{A}}_{zzjk}(\mathbf{q})\Delta_{ij} - \widetilde{\mathcal{A}}_{xxij}(\mathbf{q})\\
u^{(yy)}_{ij}(\mathbf{q}) &= -\widetilde{\mathcal{A}}_{xyij}(\mathbf{q})
\end{align}
Here, $\Delta_{ij}$ is the Kronecker delta, and $u^{(ab)}_{ij}(\mathbf{q})$ are the elements of the dynamical spin wave matrix.
For planar spin configurations, it is possible to choose local coordinates such that $u^{(xx)}_{ij}(\mathbf{q}) = u^{(yy)}_{ij}(\mathbf{q}) = 0$.
The frequencies $\omega(\mathbf{q})$ must generally be calculated numerically.
However, for some high symmetry points in the Brillouin zone, the dynamical matrix can be diagonalized analytically.
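The numerical diagonalization can be illustrated with a minimal sketch. The following Python snippet is a toy one-sublattice ferromagnetic chain, not the actual kagome model; the structure factor $\gamma(q)$ and the parameter values are purely illustrative assumptions. It builds the planar $2\times 2$ block of the dynamical matrix (with $u^{(xx)}=u^{(yy)}=0$) and extracts $\omega(q)$ from the imaginary parts of the eigenvalues:

```python
import numpy as np

# Toy illustration (NOT the paper's kagome model): for a single-sublattice
# ferromagnetic chain the planar dynamical matrix reduces to a 2x2
# antisymmetric block, and omega(q) is the imaginary part of its
# eigenvalues.  J, S, and gamma(q) below are illustrative assumptions.

def dynamical_matrix(q, J=1.0, S=1.0):
    gamma = 2.0 * np.cos(q)            # nearest-neighbour structure factor
    a = S * J * (gamma - 2.0)          # u^(xy) entry; u^(xx) = u^(yy) = 0
    return np.array([[0.0,  a],
                     [-a, 0.0]])

def magnon_frequency(q):
    # eigenvalues come in pairs +/- i*omega(q)
    return np.max(np.linalg.eigvals(dynamical_matrix(q)).imag)

print(magnon_frequency(0.0))       # Goldstone mode: 0.0
print(magnon_frequency(np.pi))     # band maximum, omega = 4JS
```

In the full calculation each block $\mathbf{u}^{(ab)}(\mathbf{q})$ is a matrix over all magnetic sublattices, and the diagonalization is carried out on a grid of wavevectors.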
\subsection{\label{subsec:spin_waves_single_q} Symmetry properties in the decoupled and weak SOC limit}
Before discussing the spin wave dispersions for the single-$\mathbf{Q}$ structures, it is useful to analyze the symmetry properties of the spin configurations.
We are interested in determining the groups of symmetries that leave the magnetic structures unchanged, \textit{i.e.} the \textit{stabilizer subgroups}.
The stabilizers allow one to quickly determine the number and degeneracy of modes at a given wavevector $\mathbf{q}$ in the Brillouin zone.
In the decoupled limit, the stabilizers are subgroups of $\mathcal{G}_J$, which was derived in Sec.~\ref{subsec:symmetry_decoupled}.
Out of all structures listed in Sec.~\ref{subsec:structure_single_q}, $\boldsymbol{\Phi}_0^{(\boldsymbol{\Gamma})}$ (ferromagnetic) and $\boldsymbol{\Phi}_3^{(\boldsymbol{\Gamma})}$ (collinear antiferromagnetic) configurations deserve special attention, since any spin rotation around an axis collinear to the spins leaves the spin configurations unchanged.
Although these configurations are very similar, their symmetry properties turn out to be fundamentally different.
The stabilizer group of the ferromagnetic state contains all lattice symmetries and can therefore be written as a direct product
\begin{equation}
\mathcal{S}\left(\boldsymbol{\Phi}_0^{(\boldsymbol{\Gamma})}\right) = \mathrm{D}_{6h}^{(L)}\otimes \mathrm{SO}^{(S)}(2).
\end{equation}
On the other hand, the collinear antiferromagnetic state is not invariant under lattice transformations that interchange A and B layers.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{sw_exchange.pdf}
\caption{Spin wave dispersions for (a) collinear $\mathbf{Q}=\Gamma$, (b) non-collinear $\mathbf{Q}=\Gamma$, and (c) $\mathbf{Q}=K$ single-$\mathbf{Q}$ structures stabilized in the decoupled limit. In (b), the labels $\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{1,5}$ and $\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{2,4}$ are used to indicate that the phases with $m=1,5$ (2,4) are degenerate and therefore have the same spectra. The parameters used to calculate the dispersions are $J_2<0,J_1/|J_2|=-1,J_4/|J_2|=-0.1$ ($\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{0}$), $J_2<0,J_1/|J_2|=1,J_4/|J_2|=0.1$ ($\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{3}$), $J_2>0,J_1/|J_2|=-1,J_4/|J_2|= 0.1$ ($\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{1,5}$), $J_2>0,J_1/|J_2|=1,J_4/|J_2|=-0.1$ ($\boldsymbol{\Phi}^{(\boldsymbol{\Gamma})}_{2,4}$), $J_2<0,J_1/|J_2|=-1.5,J_4/|J_2|=1$ ($\boldsymbol{\Phi}^{(\mathbf{K})}_{0}$), and $J_2<0,J_1/|J_2|= 1.5,J_4/|J_2|=-1$ ($\boldsymbol{\Phi}^{(\mathbf{K})}_{3}$).}
\label{fig:sw_exchange}
\end{figure*}
The remaining symmetry elements form a group $\mathrm{D}_{3h}^{(L)}$, and the stabilizer can therefore be written as a semidirect product:
\begin{equation}
\mathcal{S}\left(\boldsymbol{\Phi}_3^{(\boldsymbol{\Gamma})}\right) = \mathrm{D}_{3h}^{(L)}\otimes \left[\mathrm{SO}^{(S)}(2)\ltimes \mathrm{C}_2^{(SL)}\right],
\end{equation}
where $\mathrm{C}_2^{(SL)}$ contains a simultaneous $C_2$ rotation of the lattice around the $z$-axis and of the spins around an axis perpendicular to the spins.
This subtle difference in the structure of the stabilizers leads to significant differences in the corresponding spin wave dispersion spectra.
Fig.~\ref{fig:sw_exchange}~(a) demonstrates the differences between the dynamics of these two spin configurations.
The most notable difference is that in the case of $\boldsymbol{\Phi}_3^{(\boldsymbol{\Gamma})}$, all branches are at least doubly degenerate, leading to three distinct modes.
However, as seen in the figure, in the case of a ferromagnetic state, there are generally six non-degenerate modes.
The existence of the two-fold degeneracy in the antiferromagnetic case can be attributed to the fact that at an arbitrary wavevector $\mathbf{q}$ the dynamical matrix in Eq.~(\ref{eq:dynamical_matrix}) is invariant under a simultaneous lattice inversion and reversal of the spin direction.
This situation is equivalent to the spin waves on a linear antiferromagnetic chain where two linearly-polarized magnon modes have the same dispersion.
As a result, the collinear antiferromagnetic state has two linear Goldstone modes at the $\Gamma$ point, corresponding to the in- and out-of-phase fluctuations of spins on A and B sublattices.
In contrast, the ferromagnetic state only has one (quadratic) Goldstone mode.
In the case of the non-collinear single-$\mathbf{Q}$ configurations, the symmetry analysis is simplified by the fact that the stabilizer subgroups are now finite groups, since there are no spin rotations that leave these structures invariant.
One can see from Fig.~\ref{fig:q0_phases} that for each non-collinear $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ configuration there is exactly one spin rotation that can be combined with a given lattice symmetry transformation to leave the spin configuration unchanged.
Therefore, the stabilizer subgroups for $\mathbf{Q}=\Gamma$ non-collinear phases are all isomorphic to $\mathrm{D}_{6h}$.
The situation is similar in the case of $\boldsymbol{\Phi}_m^{(\mathbf{K})}$ structures.
However, because certain lattice symmetries are broken as a result of the cycloidal spin order, we are left with the symmetry operations that belong to the group of the ordering wavevector $\mathbf{Q}=K$.
Thus, the stabilizers of $\boldsymbol{\Phi}_m^{(\mathbf{K})}$ are all isomorphic to group $\mathrm{D}_{3h}$.
Figures~\ref{fig:sw_exchange}~(b-c) show the dispersions for the non-collinear single-$\mathbf{Q}$ spin configurations in phase diagrams in Fig.~\ref{fig:pd_j1_j4_h}.
Note that the non-collinear nature of these phases implies that the Goldstone modes are three-fold degenerate, since there are now three non-equivalent fluctuation axes.
In the weakly-coupled limit, the stabilizers must be the subgroups of $\mathcal{G}_D$.
This does not change the symmetry of the non-collinear single-$\mathbf{Q}$ states, although DM interactions lift the three-fold degeneracy of the Goldstone modes, leaving only one zero-energy mode.
Since the out-of-plane spin rotations are no longer valid symmetry operations, the stabilizers of the two collinear phases discussed above formally become isomorphic to $\mathrm{D}_{6h}$, similar to the rest of the $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ configurations.
We note that when one considers the symmetry of the local spin components used for spin-wave calculations, the stabilizer groups become the same (\textit{i.e.} not just isomorphic).
Therefore, the spin wave eigenvectors in the local frame are exactly the same for all six $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ ($\boldsymbol{\Phi}_m^{(\mathbf{K})}$) phases.
This important property holds true for all magnetic structures in the weak SOC limit and is one of the consequences of the self-duality discussed in Sec.~\ref{sec:self_duality}.
\subsection{\label{subsec:spin_waves_obdo_norec} Order by disorder and non-reciprocity}
As discussed in Sec.~\ref{subsec:structure_single_q}, the classical energy of the collinear phases does not depend on the DM constants, meaning that these spin configurations can be rotated out-of-plane without any energy cost.
Nevertheless, the energy of the spin waves does, in fact, depend on the DM interactions.
As a result, we expect that the fluctuations would break the effective rotational symmetry by selecting a particular orientation of the collinear structures via the order-by-disorder mechanism~\cite{Henley_obdo_1989_prl,Chalker_Holdsworth_Shender_kagome_1992_prl}.
To demonstrate this prediction, we calculate the magnon free energy as a function of the out-of-plane angle $\theta$:
\begin{equation}
F(\theta) = \frac{1}{\beta}\sum_\mathbf{q} \ln\left(1-e^{-\beta\omega(\mathbf{q},\theta)}\right).
\end{equation}
The calculations of the frequencies $\omega(\mathbf{q},\theta)$ were performed numerically using a $50\times 50\times 50$ discretized reciprocal lattice grid.
The result is shown in Fig.~\ref{fig:obdo_nonrec}~(a).
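The structure of this free-energy minimization can be sketched numerically. In the following Python snippet the dispersion $\omega(q,\theta)$ is a hypothetical toy band (not the actual spin-wave spectrum of the collinear phases), chosen so that the magnons soften when the spins tilt out of the plane; the Brillouin-zone sum is then evaluated on a one-dimensional grid:

```python
import numpy as np

# Hedged sketch of the order-by-disorder free energy
#   F(theta) = (1/beta) * sum_q ln(1 - exp(-beta * omega(q, theta))).
# omega_toy is a hypothetical dispersion, not the paper's spectrum.

def omega_toy(q, theta, J=1.0, D=0.5):
    # toy band that softens when the spins point out of the plane
    return 2.0 * J * (1.0 - np.cos(q)) * (1.0 + D * np.cos(theta) ** 2)

def free_energy(theta, beta=1.0, n_q=200):
    qs = np.linspace(-np.pi, np.pi, n_q, endpoint=False)
    w = omega_toy(qs, theta)
    w = w[w > 1e-12]          # drop the q = 0 Goldstone mode (log divergence)
    return np.sum(np.log(1.0 - np.exp(-beta * w))) / beta

thetas = np.linspace(0.0, np.pi, 101)
F = np.array([free_energy(t) for t in thetas])
print(thetas[np.argmin(F)])   # minimized at theta = pi/2: out-of-plane order
```

Since each term of $F$ decreases monotonically with $\omega$, the softest band, here at $\theta=\pi/2$, is selected by the thermal fluctuations, mirroring the behaviour found numerically for the collinear phases.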
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{OBDO_nonrec.pdf}
\caption{(a) Magnon free energy as a function of the out-of-plane angle $\theta$ calculated numerically for the two collinear phases ($\boldsymbol{\Phi}_{0,3}^{(\boldsymbol{\Gamma})}$) with $J_2<0$, $J_1/|J_2|=0.5$, $D_1=D_2=0.5$. The minima of the free energy occur at $\theta=\frac{\pi}{2}$, indicating order by disorder. (b) Non-reciprocity in the spin wave spectrum of the antiferromagnetic collinear state in the weak SOC limit.}
\label{fig:obdo_nonrec}
\end{figure}
We can see that in both cases, thermal fluctuations rotate the spins perpendicular to the kagome layers, thus preserving the
continuous rotational symmetry in these phases.
We note that in the case of the collinear antiferromagnet, the stabilization of the out-of-plane order through thermal fluctuations introduces an interesting new property in the spin wave spectrum, namely non-reciprocity~\cite{Rikken_Strohm_Wyder_nonrec_2002_prl,Zakeri_nonrec_2010_prl,Garcia-Sanchez_nonrec_2014_prb,Cheon_Lee_Cheong_norec_2018_prb,Santos_nonrec_2020_prb,Borys_Monchesky_nonrec_2021_prb}.
Non-reciprocity implies that there is no unitary transformation that could make the dynamical matrix~(\ref{eq:dynamical_matrix}) Hermitian.
As a result, the spectrum is no longer symmetric under $\mathbf{q}\to-\mathbf{q}$, \textit{i.e.} $\omega(\mathbf{q})\neq\omega(-\mathbf{q})$, as seen in Fig.~\ref{fig:obdo_nonrec}~(b).
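A minimal numerical illustration of this asymmetry (with a hypothetical toy band, not the actual spectrum of the collinear antiferromagnet) adds a DM-like term that is odd in $q$ to an otherwise even dispersion:

```python
import numpy as np

# Toy illustration of non-reciprocity: a DM-like term odd in q tilts an
# otherwise even magnon band, so omega(+q) != omega(-q).  The band shape
# and the parameters J, D are illustrative assumptions.

def omega_nonrec(q, J=1.0, D=0.3):
    return 2.0 * J * np.abs(np.sin(q)) + D * np.sin(q)

q = np.pi / 3
print(omega_nonrec(q) - omega_nonrec(-q))   # nonzero: omega(q) != omega(-q)
```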
Finally, we note that the $\boldsymbol{\Phi}_{0,3}^{(\mathbf{K})}$ configurations also possess the same accidental rotational degeneracy.
This is evident from the fact that the classical energy of these configurations is independent of the DM constants.
However, because the stabilizer groups of these phases are significantly smaller than those of the collinear phases, the out-of-plane rotations in this case generally lower the symmetry of the spin configurations.
Instead, the breaking of the continuous degeneracy occurs as a result of small spin deviations that establish finite DM coupling.
\subsection{\label{subsec:spin_waves_multi_q}Excitations in the multi-$\mathbf{Q}$ phases}
Next, we calculate the excitation spectra for the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ phases.
Unlike in the case of the $\mathbf{Q}=K$ single-$\mathbf{Q}$ states, it is not possible to obtain all of the spin wave modes by simply shifting the branches by $\pm K$, and one has to construct a dynamical matrix where each of the four elements $u_{ij}^{(ab)}$ is a $24\times 24$ matrix (as the magnetic unit cell consists of 4 crystallographic unit cells).
As a result, the excitation spectra for these states consist of 24 modes.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{sw_4q.pdf}
\caption{Spin wave dispersion for the four out of six $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ configurations. The parameters used for the calculation are $J_2>0$, $J_1/|J_2|=1$, $D_1/|J_2|=\mp 0.5$, $D_2/|J_2|=\mp 0.5$ for $\boldsymbol{\Psi}_{2,4}^{(\mathbf{M})}$, and $J_2>0$, $J_1/|J_2|=-1$, $D_1/|J_2|=\pm 0.5$, $D_2/|J_2|=\mp 0.5$ for $\boldsymbol{\Psi}_{1,5}^{(\mathbf{M})}$.}
\label{fig:sw_4q}
\end{figure}
Fig.~\ref{fig:sw_4q} shows the dispersions for the $\boldsymbol{\Psi}_m^{(\mathbf{M})}$ phases stabilized for $J_2>0$, $|J_1|=|J_2|$ in the weak SOC limit.
As in the case of the single-$\mathbf{Q}$ states, the DM interactions lead to a single Goldstone mode, which in this case occurs at $\Gamma$ and M points, since the period of the magnetic texture is double that of the lattice.
Certain branches in the spectra also exhibit other interesting features, such as flat bands; however, further studies are needed for a full description of band topology in these phases.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{q0_anis.pdf}
\caption{(a) A diagram illustrating the splitting of the classical energies of the six planar irreps in the weak SOC limit. Here, the values of the exchange and DM constants are chosen using the self-duality relations to give the same energies in the zero anisotropy limit. Note that the energy scales are not exact. (b) Distortion of the spin structures as a result of small SIA (orange arrows) and bond-dependent anisotropy (grey arrows). The $\boldsymbol{\Phi}_{1,4}^{(\boldsymbol{\Gamma})}$ remain unchanged since the spins are collinear with the anisotropy axes. (c) Phase diagram in Fig.~\ref{fig:pds} (left) with shifted phase boundaries (grey dashed lines) introduced by a small SIA ($K^-/|J_2|= 0.05$).}
\label{fig:anis_effects}
\end{figure*}
\section{\label{sec:anisotropy}Effects of anisotropic interactions}
\subsection{\label{subsec:anisotropy_structure} Phase stability}
Having described the important properties of the magnetic phases in the decoupled and weak SOC limits, we now consider the case of the intermediate SOC.
We assume that in this limit, the SIA and bond-dependent anisotropy are small but non-negligible.
As discussed in Sec.~\ref{sec:symmetry}, the anisotropic interactions break the axial rotational symmetry, reducing the symmetry group of the Hamiltonian to the paramagnetic group.
As a result, the $\boldsymbol{\Phi}_0^{(\boldsymbol{\Gamma})}$ ($\boldsymbol{\Phi}_3^{(\boldsymbol{\Gamma})}$) and $\boldsymbol{\Phi}_2^{(\boldsymbol{\Gamma})}$ ($\boldsymbol{\Phi}_5^{(\boldsymbol{\Gamma})}$) states now belong to the same two-dimensional irreps, while the remaining $\boldsymbol{\Phi}_1^{(\boldsymbol{\Gamma})}$ and $\boldsymbol{\Phi}_4^{(\boldsymbol{\Gamma})}$ split into one-dimensional irreps (see Fig.~\ref{fig:SOC_symmetry_diagram}).
In general, the anisotropic interactions stabilize the spin configurations that belong to the one-dimensional planar irreps, since in these structures the spins are aligned collinear to the anisotropy axes.
At the same time, the configurations belonging to the same two-dimensional planar irreps remain almost degenerate for small values of the anisotropy constants.
This is illustrated in Fig.~\ref{fig:anis_effects}~(a).
We select the exchange and DM parameters using self-duality relations in Sec.~\ref{subsec:duality_JD} such that in the absence of anisotropy the energies of the six planar $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ configurations are the same.
The SIA and the in-plane bond-dependent interactions split the energies by stabilizing $\boldsymbol{\Phi}_1^{(\boldsymbol{\Gamma})}$ and $\boldsymbol{\Phi}_4^{(\boldsymbol{\Gamma})}$ (the sign of $K^-$ and $A_2^{(xy)}$ determines the orientations of spins).
The out-of-plane bond anisotropy further splits the energies based on the parity of the configurations (in the figure, we choose $A_1^{(xy)}>0$, which stabilizes the structures that are antisymmetric under space inversion).
Apart from the energy of the spin configurations, anisotropic interactions have a significant impact on the spin structures themselves.
Since some of the planar phases now belong to the same two-dimensional irreps, the anisotropic interactions couple these states, introducing small deviations from the irreps in the weak SOC limit, as shown in Fig.~\ref{fig:anis_effects}~(b).
This has been shown to have an important effect on the physical properties of $\mathrm{Mn}_3X$ compounds, where the anisotropic interactions couple the $\boldsymbol{\Phi}_2^{(\boldsymbol{\Gamma})}$ 120-degree state and the ferromagnetic state, inducing a small magnetic moment at zero field~\cite{Soh_gs_2020_prb,Chen_gs_2020_prb,Zelenskiy_Monchesky_Plumer_Southern_2021_prb}.
Our previous work in Ref.~I has established that the relative strengths of the SIA and the in-plane bond-anisotropy fix the overall orientation of the spins as well as the direction of the spin deviations, consequently determining both the magnitude and direction of the induced magnetic moment in $\mathrm{Mn}_3X$ magnets.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{SOC_spin_waves.pdf}
\caption{(a) Diagram demonstrating the $\Gamma$-point splitting of the spin wave modes in $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ phases in the three SOC limits. Examples of broken symmetry in the excitations of (b) $\boldsymbol{\Phi}_2^{(\boldsymbol{\Gamma})}$ and (c) $\boldsymbol{\Phi}_5^{(\boldsymbol{\Gamma})}$. The solid blue and dashed red lines correspond to dispersions along the two paths illustrated in the inset of figure (b). In both cases, we set $K^-/|J_2|=0.1$, $J_2>0$, $J_1/|J_2|=1$, $D_2/|J_2|=-0.8$, and set $D_1/|J_2|=0.8$ for $\boldsymbol{\Phi}_2^{(\boldsymbol{\Gamma})}$, and $D_1/|J_2|=-0.8$ for $\boldsymbol{\Phi}_5^{(\boldsymbol{\Gamma})}$.}
\label{fig:SOC_spin_waves}
\end{figure*}
We note that the distortions of the spin structures introduced by the anisotropic couplings may also provide a way for the experimental quantification of these interactions.
Techniques such as elastic neutron scattering would allow one to determine the canting angles, which could in turn be related to the values of the anisotropic parameters.
Moreover, in the case of the coupling between the ferromagnetic $\boldsymbol{\Phi}_0^{(\boldsymbol{\Gamma})}$ state and $\boldsymbol{\Phi}_2^{(\boldsymbol{\Gamma})}$, the induced magnetic moment would serve as an excellent probe of the anisotropy in the system.
Although the discussion in this section so far has been in the context of the $\mathbf{Q}=\Gamma$ phases, the general trends in the stabilization energy and spin configurations mostly apply to the remaining types of phases, since every spin structure discussed in this paper can be approximately constructed from the rotated $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ states.
As the deviations from the $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ local order parameters are small, we are still able to make qualitative predictions about the stability of different phases under small anisotropy.
Therefore, we expect the anisotropic interactions to stabilize those magnetic phases characterized by $m=1,4$, thus extending their stability regions as compared to the weak SOC limit.
This is clearly demonstrated in Fig.~\ref{fig:anis_effects}~(c), where the phase boundaries are calculated using $J_2>0$, $J_1/|J_2|=1$, $K^-/|J_2|=0.05$.
The symmetry of the phase diagram is broken, since $\mu_0^{(-1)}$ is no longer a valid self-duality transformation.
Notably, the in-plane bond-dependent anisotropy with $A_2^{(xy)}=0.05$ produces a nearly identical shift of the phase boundaries.
The only exception to the established stability trend appears to be the $\boldsymbol{\Lambda}_4$ phase which shrinks under the applied anisotropy.
However, this is not surprising given the result of Sec.~\ref{subsec:structure_ising} where the spin deviations were shown to be the primary stabilizing factor of the Ising patterns.
\subsection{\label{subsec:anisotropy_spin_waves}Spin wave spectra}
\begin{table}[!t]
\centering
\begin{tabular}{c|c|c}
m & Stabilizer & irrep decomposition at $\Gamma$\\
\hline
0 & C$_{2h}^{(SL)}$ & $A_g\oplus A_u\oplus 2B_g\oplus 2B_u$\\
1 & D$_{6 }^{(SL)}$ & $A_2\oplus B_1\oplus E_1\oplus E_2$\\
2 & C$_{2h}^{(SL)}$ & $A_g\oplus A_u\oplus 2B_g\oplus 2B_u$\\
3 & D$_{2 }^{(SL)}$ & $A_1\oplus 2B_1\oplus B_2\oplus 2B_3$\\
4 & D$_{3d}^{(SL)}$ & $2A_2''\oplus 2E''$\\
5 & D$_{2 }^{(SL)}$ & $A_1\oplus 2B_1\oplus B_2\oplus 2B_3$\\
\end{tabular}
\caption{Stabilizer groups and irrep decomposition at the $\Gamma$ point for the six $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ states in the intermediate anisotropy limit.}
\label{tab:anisotropy_splitting}
\end{table}
The effects of anisotropic interactions are strongly manifested in the excitations of the magnetic structures.
Broken rotational symmetry implies that the spin wave spectra are now completely gapped.
Since the symmetry group of the Hamiltonian is reduced down to the paramagnetic group, the stabilizer subgroups of the spin configurations in the intermediate SOC limit are typically small.
In the case of the six $\mathbf{Q}=\Gamma$ phases, the stabilizer groups and the corresponding irrep decompositions at the $\Gamma$ point are presented in table~\ref{tab:anisotropy_splitting}.
We see that the stabilizers of states with $m=0,2$ ($\mathrm{C}_{2h}$) and $m=3,5$ ($\mathrm{D}_2$) contain only one-dimensional irreps, meaning that apart from the accidental mode crossings, the spin wave spectra will generally only contain non-degenerate modes.
However, the remaining two configurations retain most of their symmetry from the weak SOC limit and therefore have doubly-degenerate modes.
We now briefly summarize the effects of the SOC strength on the spin wave spectra.
Fig.~\ref{fig:SOC_spin_waves}~(a) demonstrates the evolution of the $\Gamma$ point splitting of excitations in the six $\boldsymbol{\Phi}_m^{(\boldsymbol{\Gamma})}$ phases.
The transitions between the different SOC limits are accompanied by a qualitative change in the spectra, associated with either a change in the splitting or an opening of a gap.
In addition, since the spin structures break most of the symmetries of the Hamiltonian, the spectra in general will not be symmetric in the Brillouin zone.
This is demonstrated in Fig.~\ref{fig:SOC_spin_waves}~(b) and (c), where two different paths in the Brillouin zone, related by a reflection, yield different values of $\omega(\mathbf{q})$.
These qualitative changes in the spin wave spectra provide a good probe for the SOC strength in the AB-stacked kagome magnets.
By comparing these results to, for example, the inelastic neutron scattering or Raman scattering data, one would be able to determine the appropriate SOC limit, and thus identify the relevant spin interactions.
\section{\label{sec:conclusions} Summary and Conclusions}
In this work, we provide an extensive overview of the properties of the general magnetic Hamiltonian~(\ref{eq:magnetic_hamiltonian}) on hexagonal AB-stacked lattices.
By studying the symmetry of the model, we have determined the connection between the strength of the SOC and the allowed spin symmetries in the system.
We have further identified three cases corresponding to decoupled, weak, and intermediate SOC limits that yield different Hamiltonian symmetry groups.
In addition to the symmetries, we found a large number of self-duality transformations that define the structure of the parameter space of the model.
Since these transformations directly depend on the symmetry of the physical system, we identify three sets of duality transformations corresponding to the three SOC limits.
The fundamental properties of the Hamiltonian allowed us to devise a strategy for efficient exploration of the parameter space.
We studied the ordered phases in the decoupled and weak SOC limits by constructing parameter-space phase diagrams, using a combination of analytical LT and numerical MC methods.
We analyzed the structures of the resulting spin configurations, including single- and multi-$\mathbf{Q}$ phases, and gave exact or nearly exact parameterizations.
Among the most interesting structures identified in this work are the Ising-like patterns found in large pockets of the parameter space in the weak and intermediate SOC limits.
We determine that these states are stabilized by small deviations from the idealized order parameters, and derive the second order solution for the optimal spin canting, as well as the stabilization energy.
Next, we calculated the spin-wave spectra of some of the single- and multi-$\mathbf{Q}$ phases and analyzed the symmetry of the spin fluctuations.
We found that in the weak SOC limit the fluctuations drive the collinear configurations out of the kagome plane, signifying order-by-disorder.
As a consequence of this rotation, the DM interactions make the excitations in the collinear antiferromagnet phase non-reciprocal.
Finally, we study the effects of the intermediate SOC, manifested by the additional SIA and bond-dependent interactions.
We find that small amounts of anisotropy can produce traceable changes in the structure and excitations of the spin configurations, which opens the doors for potential experimental quantification of these interactions.
As discussed throughout the paper, the results of this work are of direct relevance to the $\mathrm{Mn}_3X$ family of compounds.
For example, the interplay of the anisotropic interactions has already been shown in our previous work (Ref.~I) to produce detectable changes in the static and dynamic properties of these compounds.
Another promising magnetic compound with the AB-layered structure is $\mathrm{Gd}_3\mathrm{Ru}_4\mathrm{Al}_{12}$~\cite{Chandragiri_Gd_2016_IOP,Matsumura_Gd_2019_jpsj,Hirschberger_Gd_2019_nature_comm}.
Although the magnetic parameters for these materials may not fall in any of the non-trivial phases (such as multi-$\mathbf{Q}$ structures), one may still be able to realize these states in systems where the SOC can be tuned experimentally.
For example, in thin-layer systems, the SOC close to the surface can be enhanced by an addition of a heavy-metal capping layer~\cite{Gradmann:1986jmmm,Heinrich:1998}.
Importantly, this study has shown that the unconventional phases often lie within the typical physical range of magnetic parameters, including the DM and anisotropic couplings.
In addition, the existence of self-duality maps significantly increases the chances that a compound with AB-layered kagome structure would exhibit non-trivial magnetic phenomena.
The last statement can be justified by recalling that in Sec.~\ref{subsec:structure_ising} the $\boldsymbol{\Lambda}$ phase was related via duality to a model with large ferromagnetic out-of-plane exchange ($J_1\ll 0$), strong in-plane DM interactions ($D_2\neq 0$) and negligible in-plane exchange ($J_2\approx 0$).
Under normal circumstances, this situation would be extremely difficult to realize, rendering the model unphysical.
However, the duality provides us with dual images that lie within a reasonable range of parameters.
In conclusion, we would like to point out that some of the assumptions made in the beginning of this article could have an important impact on some of the magnetic properties of $\mathrm{Mn}_3X$ systems.
Since the interactions are predominantly governed by the itinerant electrons, long-range interactions may actually play an important role in stabilizing certain experimentally observed phases.
For example, further neighbor interactions along the $\mathbf{\hat{z}}$-direction would be required in order to stabilize out-of-plane spatial modulations through competition with the nn couplings~\cite{Zelenskiy_Rb_2021_prb}.
This means that the breathing anisotropy may also affect certain properties of the magnetic phases.
Furthermore, some recent works have been dedicated to the importance of the coupling between the elastic and magnetic degrees of freedom~\cite{Theuss_strain_2022_prb,Reis_strain_2020_prm,Sukhanov_strain_2018_prb,Sukhanov_strain_2019_prb}.
These are known to introduce effective biquadratic spin interactions, which would further complicate the theoretical analysis.
As it stands however, this work provides an invitation for further investigations of the rich magnetic phenomena offered by the AB-stacked kagome compounds.
\section{\label{sec:acknowledgements} Acknowledgements}
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
The authors would like to thank J. S. R. McCoombs, C. Rudderham, D. Kalliecharan, and B. D. MacNeil for helpful discussions.
We acknowledge D. Kalliecharan's suggestions for the visual presentation of this paper.
Finally, we thank S. H. Curnoe and J. P. F. LeBlanc for their useful comments in regards to the symmetry and self-duality sections.
\section{An atlas of the Richelot isogeny graph}
\label{sec:atlas}
We are now ready to compute the neighbourhoods of each type of vertex
and edge in the Richelot isogeny graph.
We begin with general (\textsf{Type-A}\xspace) vertices,
before considering each type with an involution,
in order of increasing speciality,
and ending with \textsf{Type-II}\xspace (which has no involution).
\subsection{The algorithm}
We compute each vertex neighbourhood in the same way:
\begin{enumerate}
\item
Take the generic curve or product for the \ensuremath{\mathrm{RA}}-type.
We use the normal forms of Bolza~\cite{1887/Bolza} for the curves
with special reduced automorphism groups, reparametrizing
to force full rational \(2\)-torsion in the Jacobians.
\item
Enumerate the \((2,2)\)-isogeny kernels.
\item
Compute the action of the reduced automorphism group.
\item
For each orbit,
choose a representative kernel,
compute the codomain
using the formul\ae{} of~\S\ref{sec:Richelot}
and~\S\ref{sec:elliptic-product-isogenies},
and identify the \ensuremath{\mathrm{RA}}-type of the codomain
using the criteria of Tables~\ref{tab:RAut-from-Clebsch}
and~\ref{tab:RAut-from-j-invariants}.
The orbit sizes give edge weights.
\end{enumerate}
For subsequent isogenies,
we repeat Steps (2), (3), and (4)
from the current vertex.
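Steps (2)--(4) amount to a standard orbit computation. The following Python sketch shows only the combinatorial bookkeeping: the fifteen integer ``kernels'' and the negation action are placeholders, not actual $(2,2)$-isogeny data, which would be produced in a computer algebra system:

```python
# Combinatorial sketch of Steps (2)-(4): partition kernels into orbits
# under the reduced automorphism group; orbit sizes give edge weights.
# The integer "kernels" and the negation action are placeholders.

def orbits(kernels, group, act):
    """Return one representative per orbit of act(g, k), with orbit size."""
    remaining = set(kernels)
    result = []
    while remaining:
        k = min(remaining)                  # deterministic representative
        orbit = {act(g, k) for g in group}
        remaining -= orbit
        result.append((k, len(orbit)))
    return result

# toy stand-in: 15 "kernels" 0..14 acted on by an order-2 group
reps = orbits(range(15), [1, -1], lambda g, k: (g * k) % 15)
print(len(reps), sum(w for _, w in reps))   # 8 orbits; weights sum to 15
```

For a \textsf{Type-A}\xspace vertex the reduced automorphism group acts trivially, so all fifteen kernels form singleton orbits and yield fifteen weight-1 edges as in Figure~\ref{fig:Type-A}; larger automorphism groups merge kernels into fewer, heavier edges, with the weights always summing to fifteen.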
\subsection{Diagram notation}
In all of our diagrams,
\textbf{solid vertices} have definite types,
and \textbf{solid edges} have definite weights.
The \textbf{dotted vertices} have an indicative type,
but may change type under specialization,
acquiring more automorphisms,
with the weight of \textbf{dotted edges} increasing proportionally
according to Eq.~\eqref{eq:ratio-principle}.
For example: in Figure~\ref{fig:Type-A},
if one of the dotted neighbours specializes to a \textsf{Type-I}\xspace
vertex, then the returning dotted arrow will become a weight-2 arrow.
All edges from solid vertices are shown;
some edges from dotted vertices, especially to vertices outside the diagram,
are omitted for clarity.
\subsection{General vertices and edges}
Figure~\ref{fig:Type-A} shows the neighbourhood of a \textsf{Type-A}\xspace vertex:
there are weight-1 edges to fifteen neighbouring vertices, generally all \textsf{Type-A}\xspace,
and a weight-1 dual edge returning from each of them.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, inner sep=0.5mm, minimum size=7mm}
]
%
\node (s) [
regular polygon,
regular polygon sides=15,
minimum size=42mm,
above]
at (0,0) {};
\node (c) [vertex] at (s.center) {$A$};
%
\foreach \i in {1,...,15}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$A$} ;
\draw[->] (c) edge[bend right=6] (s\i) ;
\draw[->] (s\i) edge[bend right=6, dotted] (c) ;
}
\end{tikzpicture}
\caption{The neighbourhood of a \protect{\textsf{Type-A}\xspace} vertex.}
\label{fig:Type-A}
\end{figure}
The Richelot isogeny graph is 15-regular (counting weights),
and it is tempting to imagine that locally, the graph looks
like an assembly of copies of the star in Figure~\ref{fig:Type-A},
with each outer vertex becoming the centre of its own star.
However,
the reality is more complicated.
If we look at a pair of neighbouring \textsf{Type-A}\xspace vertices,
then six of the neighbours of one are connected to neighbours of the
other.
Figure~\ref{fig:general-edge} shows this configuration.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=6mm}
]
\node (domain) [vertex, ultra thick] at (0,0) {$A$} ;
\node (codomain) [vertex, ultra thick] at (7,0) {$A$} ;
\draw[->, ultra thick] (codomain) edge[bend right=8] (domain) ;
\draw[->, ultra thick] (domain) edge[bend right=8] (codomain) ;
\node (Xtop) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, 3.3) {};
\node (Xtopcod1) [vertex, dotted] at (Xtop.corner 1) {$A$} ;
\node (Xtopdom1) [vertex, dotted] at (Xtop.corner 2) {$A$} ;
\node (Xtopdom2) [vertex, dotted] at (Xtop.corner 3) {$A$} ;
\node (Xtopcod2) [vertex, dotted] at (Xtop.corner 4) {$A$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=-48] (Xtopdom\i) ;
\draw[->] (Xtopdom\i) edge[bend right=40, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=48] (Xtopcod\i) ;
\draw[->] (Xtopcod\i) edge[bend right=-40, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xtopdom\i) edge[bend right=6, dotted] (Xtopcod\j) ;
\draw[->] (Xtopcod\j) edge[bend right=6, dotted] (Xtopdom\i) ;
}
}
\node (Xmid) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, 1.4) {};
\node (Xmidcod1) [vertex, dotted] at (Xmid.corner 1) {$A$} ;
\node (Xmiddom1) [vertex, dotted] at (Xmid.corner 2) {$A$} ;
\node (Xmiddom2) [vertex, dotted] at (Xmid.corner 3) {$A$} ;
\node (Xmidcod2) [vertex, dotted] at (Xmid.corner 4) {$A$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=-24] (Xmiddom\i) ;
\draw[->] (Xmiddom\i) edge[bend right=16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=24] (Xmidcod\i) ;
\draw[->] (Xmidcod\i) edge[bend right=-16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xmiddom\i) edge[bend right=6, dotted] (Xmidcod\j) ;
\draw[->] (Xmidcod\j) edge[bend right=6, dotted] (Xmiddom\i) ;
}
}
\node (Xbot) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, -1.4) {};
\node (Xbotcod1) [vertex, dotted] at (Xbot.corner 1) {$A$} ;
\node (Xbotdom1) [vertex, dotted] at (Xbot.corner 2) {$A$} ;
\node (Xbotdom2) [vertex, dotted] at (Xbot.corner 3) {$A$} ;
\node (Xbotcod2) [vertex, dotted] at (Xbot.corner 4) {$A$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=24] (Xbotdom\i) ;
\draw[->] (Xbotdom\i) edge[bend right=-16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=-24] (Xbotcod\i) ;
\draw[->] (Xbotcod\i) edge[bend right=16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xbotdom\i) edge[bend right=6, dotted] (Xbotcod\j) ;
\draw[->] (Xbotcod\j) edge[bend right=6, dotted] (Xbotdom\i) ;
}
}
\node (domainin) [
regular polygon,
regular polygon sides=18,
minimum size=42mm,
] at (domain) {};
\foreach \i in {3,4,5,6,7,8,9,10} {
\node (domA\i) [vertex, dotted] at (domainin.corner \i) {$A$} ;
\draw[->] (domain) edge[bend right=4] (domA\i) ;
\draw[->] (domA\i) edge[bend right=4, dotted] (domain) ;
}
\node (codomainout) [
regular polygon,
regular polygon sides=18,
minimum size=42mm,
] at (codomain) {};
\foreach \i in {11,12,13,14,15,16,17,18} {
\node (codA\i) [vertex, dotted] at (codomainout.corner \i) {$A$} ;
\draw[->] (codomain) edge[bend right=4] (codA\i) ;
\draw[->] (codA\i) edge[bend right=4, dotted] (codomain) ;
}
\end{tikzpicture}
\caption{The neighbourhood of a general edge and its dual.}
\label{fig:general-edge}
\end{figure}
The interconnections in Figure~\ref{fig:general-edge}
are explained as follows.
For each \((2,2)\)-isogeny \(\phi: \ensuremath{\mathcal{A}}_1 \to \ensuremath{\mathcal{A}}_{2}\),
there are \emph{twelve} \((4,4,2,2)\)-isogenies (each a composition of
three \((2,2)\)-isogenies) from \(\ensuremath{\mathcal{A}}_{1}\) to \(\ensuremath{\mathcal{A}}_{2}\);
composing any of these with \(\dualof{\phi}\)
defines a cycle of length 4 in the graph,
which is isomorphic to multiplication-by-4 on \(\ensuremath{\mathcal{A}}_1\).
These cycles of length 4 are the ``small cycles''
exploited
by Flynn and Ti in~\cite[\S2.3]{2019/Flynn--Ti}.
In contrast, composing a central isogeny
with one of the eight isogenies from the far left
or the eight from the far right
of Figure~\ref{fig:general-edge}
yields a \((4,4)\)-isogeny,
and composing with one of each yields an \((8,8)\)-isogeny.
In the terminology of~\cite{2020/Castryck--Decru--Smith},
the isogenies at the far left and far right are ``good'' extensions of
the central pair, while those forming the adjacent edges of squares
are ``bad'' extensions of each other.
This pattern is replicated throughout the Richelot isogeny graph:
each edge is common to twelve of these 4-cycles (counting weights as
multiplicities).
\subsection{General elliptic products:
\texorpdfstring{{\textsf{Type-$\Pi$}}\xspace}{Type-Pi} vertices}
The general {\textsf{Type-$\Pi$}}\xspace vertex
is an elliptic product vertex \(\classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'}\)
where \(\ensuremath{\mathcal{E}}' \not\cong \ensuremath{\mathcal{E}}\),
and neither \(\ensuremath{\mathcal{E}}\) nor \(\ensuremath{\mathcal{E}}'\)
has special automorphisms.
In this case
\(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}') = \subgrp{\sigma} \cong C_2\),
which fixes every \((2,2)\)-isogeny kernel,
so we have a subgroup isomorphic to \(C_2\) in the reduced automorphism group
of every \((2,2)\)-isogeny codomain.
The nine elliptic product neighbours are generally {\textsf{Type-$\Pi$}}\xspace;
the six Jacobian neighbours are generally \textsf{Type-I}\xspace,
the most general type with a reduced involution.
The situation is illustrated at the left of Figure~\ref{fig:TypeExE-TypeI}.
\begin{figure}[ht]
\centering
\begin{tabular}{c@{\qquad}c}
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm}
]
\node (s) [
regular polygon,
regular polygon sides=15,
minimum size=42mm,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {\ensuremath{\Pi}};
\foreach \i in {1,2,5,6,7,10,11,12,15}{
\node (s\i) [vertex, dotted] at (s.corner \i)
{\ensuremath{\Pi}} ;
\draw[->] (c) edge[bend right=8] (s\i) ;
\draw[->] (s\i) edge[bend right=8, dotted] (c) ;
}
\foreach \i in {3,4,8,9,13,14}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$I$} ;
\draw[->] (c) edge[bend right=8] (s\i) ;
\draw[->] (s\i) edge[bend right=8, dotted] (c) ;
}
\end{tikzpicture}
&
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=11,
minimum size=42mm,
above
] at (0,0) {};
%
\node (c) [vertex] at (s.center) {$I$};
%
\node (s1) [vertex,dotted] at (s.corner 1) {\ensuremath{\Pi}};
\draw[->] (c) edge[bend right=8] (s1) ;
\draw[->] (s1) edge[bend right=8, dotted] (c) ;
\foreach \i in {2,3,4,9,10,11}{
\node (s\i) [vertex,dotted] at (s.corner \i) {$I$};
\draw[->] (c) edge[bend right=8] (s\i) ;
\draw[->] (s\i) edge[bend right=8, dotted] (c) ;
}
\foreach \i in {5,6,7,8}{
\node (s\i) [vertex,dotted] at (s.corner \i) {$A$};
\draw[->] (c) edge[bend right=8] node[wt]{2}(s\i) ;
\draw[->] (s\i) edge[bend right=8, dotted] (c) ;
}
\end{tikzpicture}
\end{tabular}
\caption{Neighbourhoods of the general {\textsf{Type-$\Pi$}}\xspace and \textsf{Type-I}\xspace vertices.}
\label{fig:TypeExE-TypeI}
\end{figure}
\begin{remark}
Looking at Figure~\ref{fig:TypeExE-TypeI},
we see that
{\textsf{Type-$\Pi$}}\xspace vertices cannot have \textsf{Type-A}\xspace or \textsf{Type-II}\xspace neighbours:
any walk in the graph from a \textsf{Type-A}\xspace vertex to an elliptic product
must have already passed through a vertex with an involution
in its reduced automorphism group.
We will see below that the same applies
to any elliptic product or square vertex,
as well as to \textsf{Type-IV}\xspace, \textsf{Type-V}\xspace, and \textsf{Type-VI}\xspace vertices.
\end{remark}
\subsection{\texorpdfstring{\textsf{Type-I}\xspace}{Type-I} vertices}
\label{sec:TypeI}
The generic \textsf{Type-I}\xspace vertex is \(\classof{\Jac{\ensuremath{\mathcal{C}}_{I}}}\),
where \(\ensuremath{\mathcal{C}}_{I}\)
is defined by
\[
\ensuremath{\mathcal{C}}_{I}: y^2 = F_{I}(x) := (x^2 - 1)(x^2 - s^2)(x^2 - t^2)
\]
with parameters \(s\) and \(t\).
Any Jacobian \(\ensuremath{\mathcal{A}}_{0}\) with \(C_2 \subseteq \ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}}_{0})\)
(that is, \textsf{Type-I}\xspace, \textsf{Type-III}\xspace, \textsf{Type-IV}\xspace, \textsf{Type-V}\xspace, or \textsf{Type-VI}\xspace)
is isomorphic to \(\Jac{\ensuremath{\mathcal{C}}_I}\)
for some \((s,t)\)
such that
\(
st(s^2-1)(t^2-1)(s^2-t^2)
\not=
0
\).
There are fifteen maximal \(2\)-Weil isotropic subgroups
\(K_{1},\ldots,K_{15}\) of \(\Jac{\ensuremath{\mathcal{C}}_I}[2]\);
each is the kernel of a \((2,2)\)-isogeny \(\Jac{\ensuremath{\mathcal{C}}_I} = \ensuremath{\mathcal{A}}_{0} \to \ensuremath{\mathcal{A}}_i = \ensuremath{\mathcal{A}}_{0}/K_i\).
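Before listing the kernels explicitly, note that their count and intersection pattern already follow from linear algebra: the fifteen kernels are the Lagrangian planes of \(\Jac{\ensuremath{\mathcal{C}}_I}[2] \cong \mathbb{F}_2^4\) with respect to the Weil pairing. The following sketch (our own toy model in Python, independent of any curve arithmetic) verifies this count, and also the fact, used below, that a fixed kernel meets six of the others nontrivially and the remaining eight trivially.

```python
from itertools import product

# Model Jac(C_I)[2] as F_2^4 with the Weil pairing as a symplectic form.
def pair(u, v):
    return (u[0]*v[2] + u[1]*v[3] + u[2]*v[0] + u[3]*v[1]) % 2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

zero = (0, 0, 0, 0)
vecs = [v for v in product((0, 1), repeat=4) if any(v)]

# Maximal Weil-isotropic subgroups = 2-dimensional subspaces on which
# the pairing vanishes (the kernels of (2,2)-isogenies).
lagrangians = set()
for u in vecs:
    for v in vecs:
        if v != u and pair(u, v) == 0:
            lagrangians.add(frozenset([zero, u, v, add(u, v)]))

assert len(lagrangians) == 15  # the fifteen kernels K_1, ..., K_15

# Fix one kernel: six others meet it nontrivially, eight meet it trivially.
K1 = next(iter(lagrangians))
meets = [len(K1 & L) > 1 for L in lagrangians if L != K1]
assert sum(meets) == 6 and len(meets) - sum(meets) == 8
```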
The kernels \(K_{i}\) correspond to the following quadratic splittings.
First:
\begin{align*}
K_{1} \leftrightarrow \{ x^2-1, x^2-s^2, x^2-t^2 \}
\,.
\end{align*}
These three quadratics are linearly dependent,
so \(\ensuremath{\mathcal{A}}_{1} \cong \ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\)
with factors
\(\ensuremath{\mathcal{E}}: y^2 = (x-1)(x-s^2)(x-t^2)\)
and
\(\ensuremath{\mathcal{E}}': y^2 = (x-1)(x-1/s^2)(x-1/t^2)\).
Six of the kernels share a nontrivial element with \(K_{1}\),
namely
\begin{align*}
K_{2} &\leftrightarrow \{ x^2 - 1, x^2 \pm (s + t)x + st \}
\,,
&
K_{3} &\leftrightarrow \{ x^2 - 1, x^2 \pm (s - t)x - st \}
\,,
\\
K_{4} &\leftrightarrow \{ x^2 - s^2, x^2 \pm (t+1)x + t \}
\,,
&
K_{5} &\leftrightarrow \{ x^2 - s^2, x^2 \pm (t-1)x - t \}
\,,
\\
K_{6} &\leftrightarrow \{ x^2 - t^2, x^2 \pm (s+1)x + s \}
\,,
&
K_{7} &\leftrightarrow \{ x^2 - t^2, x^2 \pm (s-1)x - s \}
\,.
\end{align*}
The last eight kernels do not share any nontrivial
elements with \(K_{1}\), namely
\begin{align*}
K_{8} &\leftrightarrow
\{
x^2 + (s - 1)x - s,
x^2 - (t - 1)x - t,
x^2 - (s - t)x - st
\}
\,,
\\
K_{9} &\leftrightarrow
\{
x^2 - (s - 1)x - s,
x^2 + (t - 1)x - t,
x^2 + (s - t)x - st
\}
\,,
\\
K_{10} &\leftrightarrow
\{
x^2 - (s - 1)x - s,
x^2 - (t + 1)x + t,
x^2 + (s + t)x + st
\}
\,,
\\
K_{11} &\leftrightarrow
\{
x^2 + (s - 1)x - s,
x^2 + (t + 1)x + t,
x^2 - (s + t)x + st
\}
\,,
\\
K_{12} &\leftrightarrow
\{
x^2 + (s + 1)x + s,
x^2 + (t - 1)x - t,
x^2 - (s + t)x + st
\}
\,,
\\
K_{13} &\leftrightarrow
\{
x^2 - (s + 1)x + s,
x^2 - (t - 1)x - t,
x^2 + (s + t)x + st
\}
\,,
\\
K_{14} &\leftrightarrow
\{
x^2 + (s + 1)x + s,
x^2 - (t + 1)x + t,
x^2 - (s - t)x - st
\}
\,,
\\
K_{15} &\leftrightarrow
\{
x^2 - (s + 1)x + s,
x^2 + (t + 1)x + t,
x^2 + (s - t)x - st
\}
\,.
\end{align*}
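These splittings can be checked mechanically. The sketch below (our own Python transcription of the fifteen splittings, evaluated at the arbitrary generic values \(s = 2\), \(t = 3\)) computes the determinant \(\delta\) of the coefficient matrix of each splitting, confirming that \(K_{1}\) is the only kernel whose quadratics are linearly dependent, and hence the only elliptic-product neighbour of the generic \textsf{Type-I}\xspace vertex.

```python
s, t = 2, 3   # arbitrary generic values of the Type-I parameters

# The fifteen quadratic splittings, as coefficient triples (a, b, c)
# for a x^2 + b x + c, transcribed from the list above.
splittings = {
    1:  [(1, 0, -1),      (1, 0, -s*s),    (1, 0, -t*t)],
    2:  [(1, 0, -1),      (1, s+t, s*t),   (1, -(s+t), s*t)],
    3:  [(1, 0, -1),      (1, s-t, -s*t),  (1, -(s-t), -s*t)],
    4:  [(1, 0, -s*s),    (1, t+1, t),     (1, -(t+1), t)],
    5:  [(1, 0, -s*s),    (1, t-1, -t),    (1, -(t-1), -t)],
    6:  [(1, 0, -t*t),    (1, s+1, s),     (1, -(s+1), s)],
    7:  [(1, 0, -t*t),    (1, s-1, -s),    (1, -(s-1), -s)],
    8:  [(1, s-1, -s),    (1, -(t-1), -t), (1, -(s-t), -s*t)],
    9:  [(1, -(s-1), -s), (1, t-1, -t),    (1, s-t, -s*t)],
    10: [(1, -(s-1), -s), (1, -(t+1), t),  (1, s+t, s*t)],
    11: [(1, s-1, -s),    (1, t+1, t),     (1, -(s+t), s*t)],
    12: [(1, s+1, s),     (1, t-1, -t),    (1, -(s+t), s*t)],
    13: [(1, -(s+1), s),  (1, -(t-1), -t), (1, s+t, s*t)],
    14: [(1, s+1, s),     (1, -(t+1), t),  (1, -(s-t), -s*t)],
    15: [(1, -(s+1), s),  (1, t+1, t),     (1, s-t, -s*t)],
}

def delta(splitting):
    """Determinant of the 3x3 matrix of quadratic coefficients."""
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = splitting
    return a1*(b2*c3 - b3*c2) - b1*(a2*c3 - a3*c2) + c1*(a2*b3 - a3*b2)

degenerate = [i for i, S in splittings.items() if delta(S) == 0]
assert degenerate == [1]   # only K_1 yields an elliptic product
```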
The reduced automorphism group is
\(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{C}}_{I}) = \subgrp{\sigma} \cong C_2\),
where \(\sigma\) acts as
\[
\sigma_*: x \longleftrightarrow -x
\]
on \(x\)-coordinates, and
on (the indices of) the set of kernels \(\{K_{1},\ldots,K_{15}\}\)
via
\begin{align*}
\sigma_*
=
(1)(2)(3)(4)(5)(6)(7)(8,9)(10,11)(12,13)(14,15)
\,.
\end{align*}
The orbits of the kernel subgroups under \(\sigma\)
and the types of the corresponding neighbours
are listed in Table~\ref{tab:TypeI}.
The situation is illustrated on the right of
Figure~\ref{fig:TypeExE-TypeI}.
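Step (3) of the algorithm is an elementary orbit computation here. As a sketch in plain Python (our own bookkeeping, with \(\sigma_*\) encoded as a dictionary on the indices \(1,\ldots,15\)):

```python
# sigma_* as a permutation of the kernel indices 1..15: it fixes
# K_1, ..., K_7 and swaps the remaining kernels in pairs.
sigma = {i: i for i in range(1, 8)}
sigma.update({8: 9, 9: 8, 10: 11, 11: 10, 12: 13, 13: 12, 14: 15, 15: 14})

orbits, seen = [], set()
for i in range(1, 16):
    if i not in seen:
        orb = sorted({i, sigma[i]})   # sigma is an involution
        seen.update(orb)
        orbits.append(orb)

# Each orbit gives one edge whose weight is the orbit size: seven
# weight-1 edges and four weight-2 edges, matching the table.
assert sorted(len(o) for o in orbits) == [1]*7 + [2]*4
assert sum(len(o) for o in orbits) == 15
```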
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c||c|c|c}
Kernel orbit
& Stabilizer
& Codomain
& Kernel orbit
& Stabilizer
& Codomain
\\
\hline
\(\{K_{1}\}\) & \(\subgrp{\sigma}\) & {\textsf{Type-$\Pi$}}\xspace
&
\(\{K_{7}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
\\
\(\{K_{2}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
&
\(\{K_{8,9}\}\) & \(1\) & \textsf{Type-A}\xspace
\\
\(\{K_{3}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
&
\(\{K_{10,11}\}\) & \(1\) & \textsf{Type-A}\xspace
\\
\(\{K_{4}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
&
\(\{K_{12,13}\}\) & \(1\) & \textsf{Type-A}\xspace
\\
\(\{K_{5}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
&
\(\{K_{14,15}\}\) & \(1\) & \textsf{Type-A}\xspace
\\
\(\{K_{6}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
\end{tabular}
\caption{%
Edge data for the generic \textsf{Type-I}\xspace vertex.
}
\label{tab:TypeI}
\end{table}
Computing one isogeny step beyond each \textsf{Type-I}\xspace neighbour
of \(\classof{\Jac{\ensuremath{\mathcal{C}}_{I}}}\),
we find six neighbours of \(\classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'}\);
thus we complete Figure~\ref{fig:TypeI-TypeExE-connect},
which shows the neighbourhood of the edge \(\classof{\phi_{1}}\)
and its dual, \(\classof{\dualof{\phi_{1}}} = \classof{\phi_{Id}}\).
This should be compared with Figure~\ref{fig:general-edge}.
Note that \(\phi_{i}\circ\dualof{\phi_{1}}\)
is a \((4,2,2)\)-isogeny for \(2 \le i \le 7\),
and a \((4,4)\)-isogeny for \(8 \le i \le 15\).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=6mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
\node (domain) [vertex, ultra thick] at (0,0) {$I$} ;
\node (codomain) [vertex, ultra thick] at (7,0) {\ensuremath{\Pi}} ;
\draw[->, ultra thick] (codomain) edge[bend right=8] (domain) ;
\draw[->, ultra thick] (domain) edge[bend right=8] (codomain) ;
\node (Xtop) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, 3.5) {};
\node (Xtopcod1) [vertex, dotted] at (Xtop.corner 1) {\ensuremath{\Pi}} ;
\node (Xtopdom1) [vertex, dotted] at (Xtop.corner 2) {$I$} ;
\node (Xtopdom2) [vertex, dotted] at (Xtop.corner 3) {$I$} ;
\node (Xtopcod2) [vertex, dotted] at (Xtop.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=-48] (Xtopdom\i) ;
\draw[->] (Xtopdom\i) edge[bend right=40, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=48] (Xtopcod\i) ;
\draw[->] (Xtopcod\i) edge[bend right=-40, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xtopdom\i) edge[bend right=6, dotted] (Xtopcod\j) ;
\draw[->] (Xtopcod\j) edge[bend right=6, dotted] (Xtopdom\i) ;
}
}
\node (Xmid) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, 1.5) {};
\node (Xmidcod1) [vertex, dotted] at (Xmid.corner 1) {\ensuremath{\Pi}} ;
\node (Xmiddom1) [vertex, dotted] at (Xmid.corner 2) {$I$} ;
\node (Xmiddom2) [vertex, dotted] at (Xmid.corner 3) {$I$} ;
\node (Xmidcod2) [vertex, dotted] at (Xmid.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=-24] (Xmiddom\i) ;
\draw[->] (Xmiddom\i) edge[bend right=16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=24] (Xmidcod\i) ;
\draw[->] (Xmidcod\i) edge[bend right=-16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xmiddom\i) edge[bend right=6, dotted] (Xmidcod\j) ;
\draw[->] (Xmidcod\j) edge[bend right=6, dotted] (Xmiddom\i) ;
}
}
\node (Xbot) [
regular polygon,
regular polygon sides=4,
minimum size=16mm,
] at (3.5, -1.5) {};
\node (Xbotcod1) [vertex, dotted] at (Xbot.corner 1) {\ensuremath{\Pi}} ;
\node (Xbotdom1) [vertex, dotted] at (Xbot.corner 2) {$I$} ;
\node (Xbotdom2) [vertex, dotted] at (Xbot.corner 3) {$I$} ;
\node (Xbotcod2) [vertex, dotted] at (Xbot.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (domain) edge[bend right=24] (Xbotdom\i) ;
\draw[->] (Xbotdom\i) edge[bend right=-16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=-24] (Xbotcod\i) ;
\draw[->] (Xbotcod\i) edge[bend right=16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xbotdom\i) edge[bend right=6, dotted] (Xbotcod\j) ;
\draw[->] (Xbotcod\j) edge[bend right=6, dotted] (Xbotdom\i) ;
}
}
\node (domainin) [
regular polygon,
regular polygon sides=13,
minimum size=42mm,
] at (domain) {};
\foreach \i in {3,4,5,6} {
\node (domA\i) [vertex, dotted] at (domainin.corner \i) {$A$} ;
\draw[->] (domain) edge[bend right=8] node[wt]{2} (domA\i) ;
\draw[->] (domA\i) edge[bend right=8, dotted] (domain) ;
}
\node (codomainout) [
regular polygon,
regular polygon sides=20,
minimum size=42mm,
] at (codomain) {};
\foreach \i in {13,20} {
\node (codA\i) [vertex, dotted] at (codomainout.corner \i) {$I$} ;
\draw[->] (codomain) edge[bend right=4] (codA\i) ;
\draw[->] (codA\i) edge[bend right=4, dotted] (codomain) ;
}
\foreach \i in {14,15,16,17,18,19} {
\node (codA\i) [vertex, dotted] at (codomainout.corner \i) {\ensuremath{\Pi}} ;
\draw[->] (codomain) edge[bend right=4] (codA\i) ;
\draw[->] (codA\i) edge[bend right=4, dotted] (codomain) ;
}
\end{tikzpicture}
\caption{The neighbourhood of a
general \textsf{Type-I}\xspace vertex and its {\textsf{Type-$\Pi$}}\xspace neighbour.}
\label{fig:TypeI-TypeExE-connect}
\end{figure}
\subsection{General elliptic squares:
\texorpdfstring{{\textsf{Type-$\Sigma$}}\xspace}{Type-Sigma} vertices}
The general {\textsf{Type-$\Sigma$}}\xspace vertex is \(\classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}}\)
where \(\ensuremath{\mathcal{E}}\) has no special automorphisms,
so
\(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}^2) = \subgrp{\sigma,\tau} \cong C_2^2\).
The orbits of the kernel subgroups
under \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}^2)\)
(with respect to an arbitrary labelling of \(\ensuremath{\mathcal{E}}[2]\))
and the types of the corresponding neighbours
are described by Table~\ref{tab:TypeEsquared},
and
the neighbourhood of the generic {\textsf{Type-$\Sigma$}}\xspace vertex
is shown on the left of Figure~\ref{fig:TypeEsquared-TypeIII}.
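The orbit structure of Table~\ref{tab:TypeEsquared} is easy to reproduce with a toy model of the fifteen kernels (our own labelling, following the table: nine product kernels \(K_{i,j}\) built from one \(2\)-torsion point on each factor, and six graph kernels \(K_{g}\) indexed by isomorphisms \(g: \ensuremath{\mathcal{E}}[2] \to \ensuremath{\mathcal{E}}[2]\), viewed as permutations of the three nonzero \(2\)-torsion points):

```python
from itertools import permutations

# A hypothetical labelling of the fifteen (2,2)-kernels of E x E.
prods  = [('prod', i, j) for i in (1, 2, 3) for j in (1, 2, 3)]
graphs = [('graph', g) for g in permutations((1, 2, 3))]
kernels = prods + graphs

def tau(k):
    """Factor swap: transposes product indices, inverts graph kernels."""
    if k[0] == 'prod':
        return ('prod', k[2], k[1])
    g = k[1]
    ginv = tuple(sorted(range(1, 4), key=lambda i: g[i - 1]))
    return ('graph', ginv)

# sigma = (-1, 1) acts trivially on 2-torsion, so it fixes every kernel;
# orbits are computed under tau alone -- consistent with every stabilizer
# in the table containing sigma.
orbits = {frozenset({k, tau(k)}) for k in kernels}
assert len(orbits) == 11                             # 11 edges leave the vertex
assert sum(len(o) for o in orbits) == 15             # weights sum to 15
assert sorted(len(o) for o in orbits) == [1]*7 + [2]*4
```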
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c||c|c|c}
Kernel orbit & Stab. & Codomain
&
Kernel orbit & Stab. & Codomain
\\
\hline
\(\{K_{1,1}\}\) & \(\subgrp{\sigma,\tau}\) & {\textsf{Type-$\Sigma$}}\xspace
&
\(\{K_{\text{Id}}\}\) & \(\subgrp{\sigma,\tau}\) & (loop)
\\
\(\{K_{2,2}\}\) & \(\subgrp{\sigma,\tau}\) & {\textsf{Type-$\Sigma$}}\xspace
&
\(\{K_{(1,2)(3)}\}\) & \(\subgrp{\sigma,\tau}\) & \textsf{Type-III}\xspace
\\
\(\{K_{3,3}\}\) & \(\subgrp{\sigma,\tau}\) & {\textsf{Type-$\Sigma$}}\xspace
&
\(\{K_{(1,3)(2)}\}\) & \(\subgrp{\sigma,\tau}\) & \textsf{Type-III}\xspace
\\
\(\{K_{1,2}, K_{2,1}\}\) & \(\subgrp{\sigma}\) & {\textsf{Type-$\Pi$}}\xspace
&
\(\{K_{(2,3)(1)}\}\) & \(\subgrp{\sigma,\tau}\) & \textsf{Type-III}\xspace
\\
\(\{K_{1,3}, K_{3,1}\}\) & \(\subgrp{\sigma}\) & {\textsf{Type-$\Pi$}}\xspace
&
\(\{K_{(1,2,3)}, K_{(1,3,2)}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
\\
\(\{K_{2,3}, K_{3,2}\}\) & \(\subgrp{\sigma}\) & {\textsf{Type-$\Pi$}}\xspace
\\
\end{tabular}
\caption{Edge data for the generic {\textsf{Type-$\Sigma$}}\xspace vertex.}
\label{tab:TypeEsquared}
\end{table}
\begin{figure}[ht]
\centering
\begin{tabular}{c@{\qquad}c}
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=11,
minimum size=42mm,
above] at (0,0) {};
\node (c) [vertex] at (s.center) {\ensuremath{\Sigma}};
%
\foreach \i in {1,4,7}{
\node (s\i) [vertex, dotted] at (s.corner \i) {\ensuremath{\Pi}} ;
\draw[->] (c) edge[bend right=12] node[wt]{2} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {2,5,8}{
\node (s\i) [vertex, dotted] at (s.corner \i) {\ensuremath{\Sigma}} ;
\draw[->] (c) edge[bend right=12] (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {3,6,9}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$III$} ;
\draw[->] (c) edge[bend right=12] (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\draw[->] (s2) edge[bend right=48, dotted] (s3) ;
\draw[->] (s3) edge[bend right=-24, dotted] (s2) ;
\draw[->] (s5) edge[bend right=48, dotted] (s6) ;
\draw[->] (s6) edge[bend right=-24, dotted] (s5) ;
\draw[->] (s8) edge[bend right=48, dotted] (s9) ;
\draw[->] (s9) edge[bend right=-24, dotted] (s8) ;
\node (s10) [vertex, dotted] at (s.corner 10) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{2} (s10) ;
\draw[->] (s10) edge[bend right=12, dotted] (c) ;
\draw[->] (c) edge[out=45,in=65,loop, looseness=30] (c) ;
\end{tikzpicture}
&
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=8,
minimum size=42mm,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {$III$};
%
\foreach \i in {1,3}{
\node (s\i) [vertex, dotted] at (s.corner \i)
{\ensuremath{\Sigma}};
\draw[->] (c) edge[bend right=8] (s\i) ;
\draw[->] (s\i) edge[bend right=8, dotted] (c) ;
}
\draw[->] (s1) edge[out=-30,in=30, dotted, loop, looseness=5] (s1) ;
\draw[->] (s3) edge[out=75,in=135, dotted, loop, looseness=5] (s3) ;
\node (s6) [vertex, dotted] at (s.corner 6) {$A$};
\draw[->] (c) edge[bend right=12] node[wt]{4} (s6) ;
\draw[->] (s6) edge[bend right=12, dotted] (c) ;
\foreach \i in {4,5,7,8}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{2} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\draw[->] (c) edge[out=90,in=135,loop, looseness=10] (c) ;
\draw[->] (s1) edge[bend right=30, dotted] (s3) ;
\draw[->] (s3) edge[bend left=10, dotted] (s1) ;
\end{tikzpicture}
\end{tabular}
\caption{%
Neighbourhoods of the general {\textsf{Type-$\Sigma$}}\xspace and \textsf{Type-III}\xspace vertices.
}
\label{fig:TypeEsquared-TypeIII}
\end{figure}
\subsection{\texorpdfstring{\textsf{Type-III}\xspace}{Type-III} vertices}
The generic \textsf{Type-III}\xspace vertex is
\(\classof{\Jac{\ensuremath{\mathcal{C}}_{III}}}\),
where
\[
\ensuremath{\mathcal{C}}_{III}: y^2 = (x^2-1)(x^2-u^2)(x^2-1/u^2)
\]
with \(u\) a free parameter;
note that \(\ensuremath{\mathcal{C}}_{III}(u) = \ensuremath{\mathcal{C}}_{I}(s,t)\)
with \((s,t) = (u,u^{-1})\).
We have
\(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{III}}) = \subgrp{\sigma,\tau} \cong C_2^2\),
where \(\sigma\) is inherited from \textsf{Type-I}\xspace
and \(\tau\) acts on \(x\)-coordinates via
\[
\tau_*: x \longmapsto 1/x \,.
\]
Specializing the kernels and quadratic splittings
of~\S\ref{sec:TypeI}
at \((s,t) = (u,u^{-1})\),
we see that \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{III}})\)
acts on the kernel indices by
\begin{align*}
\sigma_*
& =
(1)(2)(3)(4)(5)(6)(7)(8,9)(10,11)(12,13)(14,15)
\,,
\\
\tau_*
& =
(1)(2)(3)(4,6)(5,7)(8)(9)(10,13)(11,12)(14)(15)
\,.
\end{align*}
The kernel orbits and the edges leaving \(\classof{\Jac{\ensuremath{\mathcal{C}}_{III}}}\)
are described in Table~\ref{tab:TypeIII}.
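The bookkeeping behind Table~\ref{tab:TypeIII} can be verified directly. The sketch below (plain Python, with the two permutations above) recomputes the orbits, and the stabilizer orders via orbit--stabilizer in the order-\(4\) group \(\subgrp{\sigma,\tau}\):

```python
def perm(cycles):
    """Permutation of 1..15 given by disjoint cycles."""
    p = {i: i for i in range(1, 16)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

sigma = perm([(8, 9), (10, 11), (12, 13), (14, 15)])
tau   = perm([(4, 6), (5, 7), (10, 13), (11, 12)])

seen, orbs = set(), []
for i in range(1, 16):
    if i in seen:
        continue
    orb, frontier = {i}, [i]
    while frontier:              # closure of {i} under sigma and tau
        j = frontier.pop()
        for g in (sigma, tau):
            if g[j] not in orb:
                orb.add(g[j]); frontier.append(g[j])
    seen |= orb; orbs.append(sorted(orb))

assert orbs == [[1], [2], [3], [4, 6], [5, 7], [8, 9],
                [10, 11, 12, 13], [14, 15]]
# Orbit-stabilizer: |stabilizer| = 4 / |orbit|, as in the table
# (three full stabilizers, four of order 2, one trivial).
assert sorted(4 // len(o) for o in orbs) == [1, 2, 2, 2, 2, 4, 4, 4]
```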
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c||c|c|c}
Orbit & Stab. & Codomain
&
Orbit & Stab. & Codomain
\\
\hline
\(\{K_{1}\}\) & \(\subgrp{\sigma,\tau}\) & {\textsf{Type-$\Sigma$}}\xspace
&
\(\{K_{5},K_{7}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
\\
\(\{K_{2}\}\) & \(\subgrp{\sigma,\tau}\) & (loop)
&
\(\{K_{8},K_{9}\}\) & \(\subgrp{\tau}\) & \textsf{Type-I}\xspace
\\
\(\{K_{3}\}\) & \(\subgrp{\sigma,\tau}\) & {\textsf{Type-$\Sigma$}}\xspace
&
\(\{K_{i}: 10\le i\le 13\}\) & \(1\) & \textsf{Type-A}\xspace
\\
\(\{K_{4},K_{6}\}\) & \(\subgrp{\sigma}\) & \textsf{Type-I}\xspace
&
\(\{K_{14},K_{15}\}\) & \(\subgrp{\tau}\) & \textsf{Type-I}\xspace
\end{tabular}
\caption{Edge data for the generic \textsf{Type-III}\xspace vertex.}
\label{tab:TypeIII}
\end{table}
We observe that
\(\Jac{\ensuremath{\mathcal{C}}_{III}}/K_2 \cong \Jac{\ensuremath{\mathcal{C}}_{III}}\):
that is, \(\phi_{2}\) is a \((2,2)\)-endomorphism
of \(\Jac{\ensuremath{\mathcal{C}}_{III}}\),
so \(\classof{\phi_{2}}\) is a weight-1 loop.
The kernels \(K_{1}\) and \(K_{3}\)
are stabilised by \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{III}})\)
and \(\delta(K_{1}) = \delta(K_{3}) = 0\),
so \(\classof{\phi_{1}}\)
and \(\classof{\phi_{3}}\)
are weight-1 edges to {\textsf{Type-$\Sigma$}}\xspace
vertices
\(\classof{\ensuremath{\mathcal{E}}^2}\)
and
\(\classof{(\ensuremath{\mathcal{E}}')^2}\),
respectively,
where
\(\ensuremath{\mathcal{E}}\) and \(\ensuremath{\mathcal{E}}'\) are the elliptic curves
\begin{align*}
\ensuremath{\mathcal{E}} : y^2 & = (x - 1)(x - u^2)(x - 1/u^2)
\,,
\\
\ensuremath{\mathcal{E}}' :
y^2
& =
-2\left(x - 1\right)
\Big(x^2 + 2\frac{u^4 - 6u^2 + 1}{(u^2+1)^2}x + 1\Big)
\,.
\end{align*}
There is a \(2\)-isogeny
\(\varphi: \ensuremath{\mathcal{E}} \to \ensuremath{\mathcal{E}}'\),
as predicted in~\cite[\S4]{2001/Gaudry--Schost}
(in fact
\( \ker\varphi = \subgrp{(1,0)}\)
and
\( \ker\dualof{\varphi} = \subgrp{(1,0)}\)),
so there are edges \(\classof{\varphi\times\varphi}\)
and \(\classof{\dualof{\varphi}\times\dualof{\varphi}}\)
between \(\classof{\ensuremath{\mathcal{E}}^2}\)
and \(\classof{(\ensuremath{\mathcal{E}}')^2}\).
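This can be corroborated numerically. The sketch below (plain Python over \(\mathbb{Q}\), at the arbitrary generic value \(u = 2\)) quotients \(\ensuremath{\mathcal{E}}\) by \(\subgrp{(1,0)}\) using the textbook formula for the \(2\)-isogeny with kernel \(\subgrp{(0,0)}\) on \(y^2 = x^3 + Ax^2 + Bx\), and checks that the result has the same \(j\)-invariant as the model of \(\ensuremath{\mathcal{E}}'\) given above:

```python
from fractions import Fraction as Q

def j_invariant(a, b, c):
    """j-invariant of the curve y^2 = x^3 + a x^2 + b x + c."""
    b2, b4, b6 = 4*a, 2*b, 4*c
    b8 = 4*a*c - b*b
    c4 = b2*b2 - 24*b4
    disc = -b2*b2*b8 - 8*b4**3 - 27*b6*b6 + 9*b2*b4*b6
    return c4**3 / disc

u = Q(2)  # an arbitrary generic value of the parameter

# E: y^2 = (x - 1)(x - u^2)(x - 1/u^2), written as x^3 + a x^2 + b x + c.
r2, r3 = u*u, 1/(u*u)
a, b, c = -(1 + r2 + r3), r2 + r3 + r2*r3, -r2*r3

# Move the kernel point (1, 0) to (0, 0): y^2 = x^3 + A x^2 + B x, whose
# quotient by <(0, 0)> is the curve y^2 = x^3 - 2A x^2 + (A^2 - 4B) x.
A, B = a + 3, b + 2*a + 3
j_quotient = j_invariant(-2*A, A*A - 4*B, Q(0))

# E' as above: y^2 = -2(x - 1)(x^2 + p x + 1); rescale the non-monic
# cubic l x^3 + c2 x^2 + c1 x + c0 to a monic model via (x, y) -> (lx, ly).
p = 2*(u**4 - 6*u**2 + 1) / (u**2 + 1)**2
l, c2, c1, c0 = Q(-2), -2*(p - 1), 2*(p - 1), Q(2)
j_Eprime = j_invariant(c2, c1*l, c0*l*l)

assert j_quotient == j_Eprime   # E' is indeed 2-isogenous to E
```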
The neighbourhood of the general \textsf{Type-III}\xspace vertex
is shown on the right of Figure~\ref{fig:TypeEsquared-TypeIII}.
Combining with the {\textsf{Type-$\Sigma$}}\xspace neighbourhood
and extending to include shared adjacent vertices
yields Figure~\ref{fig:TypeEsquared-TypeIII-connect}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=6mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=11,
minimum size=60mm,
rotate=-20,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {\ensuremath{\Sigma}} ;
%
\draw[->] (c) edge[out=54, in=82, loop, looseness=24] (c) ;
\node (I) [vertex, dotted] at (s.corner 2) {$I$} ;
\draw[->] (c) edge[bend right=6] node[wt]{2} (I) ;
\draw[->] (I) edge[bend right=6,dotted] (c) ;
\node (Sigma1) [vertex, dotted] at (s.corner 3) {\ensuremath{\Sigma}} ;
\node (III1) [vertex] at (s.corner 4) {$III$} ;
\node (Pi1) [vertex, dotted] at (s.corner 5) {\ensuremath{\Pi}} ;
\node (Sigma2) [vertex, dotted] at (s.corner 6) {\ensuremath{\Sigma}} ;
\node (III2) [vertex] at (s.corner 7) {$III$} ;
\node (Pi2) [vertex, dotted] at (s.corner 8) {\ensuremath{\Pi}} ;
\node (Sigma3) [vertex, dotted] at (s.corner 9) {\ensuremath{\Sigma}} ;
\node (III3) [vertex] at (s.corner 10) {$III$} ;
\node (Pi3) [vertex, dotted] at (s.corner 11) {\ensuremath{\Pi}} ;
\foreach \i in {1,2,3} {
\draw[->] (c) edge[bend right=5] (Sigma\i) ;
\draw[->] (Sigma\i) edge[bend right=5, dotted] (c) ;
\draw[->] (c) edge[bend right=5] (III\i) ;
\draw[->] (III\i) edge[bend right=5] (c) ;
\draw[->] (Sigma\i) edge[bend right=12] (III\i) ;
\draw[->] (III\i) edge[bend right=12,dotted] (Sigma\i) ;
\draw[->] (Sigma\i) edge[out=100*\i-60, in=100*\i, dotted, loop, looseness=8] (Sigma\i) ;
\draw[->] (III\i) edge[out=100*\i+190, in=100*\i+235, loop, looseness=8] (III\i) ;
\node (IIIouter) [
regular polygon,
regular polygon sides=18,
minimum size=48mm,
shape border rotate=100*\i-40
] at (III\i) {};
\foreach \j in {1,2,5,6}{
\node (IIII\i-\j) [vertex, dotted] at (IIIouter.corner \j) {$I$} ;
\draw[->] (III\i) edge[bend right=8] node[wt]{2} (IIII\i-\j) ;
\draw[->] (IIII\i-\j) edge [bend right=8, dotted] (III\i) ;
}
\node (IIIA\i) [vertex, dotted] at (IIIouter.side 3) {$A$} ;
\draw[->] (III\i) edge[bend right=8] node[wt]{4} (IIIA\i) ;
\draw[->] (IIIA\i) edge [bend right=8, dotted] (III\i) ;
\draw[->] (c) edge[bend right=6] node[wt]{2} (Pi\i) ;
\draw[->] (Pi\i) edge[bend right=6, dotted] (c) ;
\draw[->] (Pi\i) edge[bend right=6, dotted] (IIII\i-5) ;
\draw[->] (IIII\i-5) edge[bend right=6, dotted] (Pi\i) ;
\draw[->] (Pi\i) edge[bend right=6, dotted] (IIII\i-6) ;
\draw[->] (IIII\i-6) edge[bend right=6, dotted] (Pi\i) ;
\node (SigmaPi\i) [vertex, dotted] at (IIIouter.corner 17) {\ensuremath{\Pi}} ;
\draw[->] (Sigma\i) edge[bend right=16, dotted] node[wt]{2} (SigmaPi\i) ;
\draw[->] (SigmaPi\i) edge[bend right=16, dotted] (Sigma\i) ;
\draw[->] (SigmaPi\i) edge[bend right=6, dotted] (IIII\i-1) ;
\draw[->] (IIII\i-1) edge[bend right=6, dotted] (SigmaPi\i) ;
\draw[->] (SigmaPi\i) edge[bend right=72, dotted] (IIII\i-2) ;
\draw[->] (IIII\i-2) edge[bend right=-60, dotted] (SigmaPi\i) ;
}
\end{tikzpicture}
\caption{%
The neighbourhood of a generic \protect{{\textsf{Type-$\Sigma$}}\xspace}
vertex and its \protect{\textsf{Type-III}\xspace} neighbours.
}
\label{fig:TypeEsquared-TypeIII-connect}
\end{figure}
\subsection{Elliptic 3-isogenies: \texorpdfstring{\textsf{Type-IV}\xspace}{Type-IV} vertices}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=7,
minimum size=42mm,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {$IV$};
%
\node (s1) [vertex, dotted] at (s.corner 1)
{\ensuremath{\Phi}} ;
\draw[->] (c) edge[bend right=12] node[wt]{3} (s1) ;
\draw[->] (s1) edge[bend right=12, dotted] (c) ;
\foreach \i in {2,4,6}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{3} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {3,5,7}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$IV$} ;
\draw[->] (c) edge[bend right=12] (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\end{tikzpicture}
\caption{The neighbourhood of the general \textsf{Type-IV}\xspace vertex.}
\label{fig:TypeIV}
\end{figure}
The generic \textsf{Type-IV}\xspace vertex is represented by
\(\Jac{\ensuremath{\mathcal{C}}_{IV}(v)}\),
where \(\ensuremath{\mathcal{C}}_{IV}(v) := \ensuremath{\mathcal{C}}_{I}(s_{IV}(v),t_{IV}(v))\)
with
\[
s_{IV}(v)
:=
\frac{
(v+1)(v-\zeta_3)
}{
(v-1)(v+\zeta_3)
}
\qquad\text{and}\qquad
t_{IV}(v)
:=
\frac{(v+1)(v-\zeta_3^2)}{(v-1)(v+\zeta_3^2)}
\]
where \(\zeta_3\) is a primitive third root of unity
and \(v\) is a free parameter.
We have
\(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{C}}_{IV}(v)) = \subgrp{\sigma,\rho}\cong S_3\),
where \(\sigma\) is inherited from \textsf{Type-I}\xspace
and \(\rho\) is the order-3 automorphism acting
on \(x\)-coordinates via
\[
\rho_*:
x
\longmapsto
\big( (2\zeta_3+1)(v^2-1)x + 3(v+1)^2 \big)
/
\big( 3(v-1)^2x + (2\zeta_3+1)(v^2-1) \big)
\,.
\]
Specializing the kernels and quadratic splittings
from~\S\ref{sec:TypeI},
we see that the action of \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{IV}})\)
on (the indices of) the \(K_i\)
is given by
\begin{align*}
\rho_*
& =
(1,9,8)
(2,15,14)
(3)
(4,12,13)
(5)
(6,10,11)
(7)
\,,
\\
\sigma_*
& =
(1)(2)(3)(4)(5)(6)(7)(8,9)(10,11)(12,13)(14,15)
\,.
\end{align*}
The kernel orbits
and the edges leaving \(\classof{\Jac{\ensuremath{\mathcal{C}}_{IV}}}\)
are described in Table~\ref{tab:TypeIV}
and illustrated in Figure~\ref{fig:TypeIV}.
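As for the previous types, the orbits can be recomputed mechanically; the sketch below (plain Python) also confirms that \(\sigma_*\) and \(\rho_*\) satisfy the \(S_3\) relations:

```python
def perm(cycles):
    """Permutation of 1..15 given by disjoint cycles."""
    p = {i: i for i in range(1, 16)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

rho   = perm([(1, 9, 8), (2, 15, 14), (4, 12, 13), (6, 10, 11)])
sigma = perm([(8, 9), (10, 11), (12, 13), (14, 15)])

def compose(f, g):          # (f o g)(i) = f(g(i))
    return {i: f[g[i]] for i in range(1, 16)}

# <sigma, rho> is S_3: rho has order 3 and sigma inverts it.
rho2 = compose(rho, rho)
assert compose(rho, rho2) == perm([])                 # rho^3 = id
assert compose(sigma, compose(rho, sigma)) == rho2    # sigma rho sigma = rho^-1

seen, orbs = set(), []
for i in range(1, 16):
    if i in seen:
        continue
    orb, frontier = {i}, [i]
    while frontier:
        j = frontier.pop()
        for g in (rho, sigma):
            if g[j] not in orb:
                orb.add(g[j]); frontier.append(g[j])
    seen |= orb; orbs.append(sorted(orb))

# Four orbits of size 3 (stabilizers of order 2) and three fixed
# kernels (stabilizer all of S_3), matching the table.
assert orbs == [[1, 8, 9], [2, 14, 15], [3], [4, 12, 13],
                [5], [6, 10, 11], [7]]
```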
We find that
\(\classof{\ensuremath{\mathcal{A}}_{1}} = \classof{\ensuremath{\mathcal{A}}_{8}} = \classof{\ensuremath{\mathcal{A}}_{9}}
= \classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'}\),
where
\begin{align*}
\ensuremath{\mathcal{E}}: y^2 & = (x-1)(x-s_{IV}(v)^2)(x - t_{IV}(v)^2)
\shortintertext{and}
\ensuremath{\mathcal{E}}': y^2 & = (x-1)(x-1/s_{IV}(v)^2)(x - 1/t_{IV}(v)^2)
\,.
\end{align*}
There is a \(3\)-isogeny
\(\Phi: \ensuremath{\mathcal{E}} \to \ensuremath{\mathcal{E}}'\),
as predicted in~\cite[\S3]{2001/Gaudry--Schost}:
the kernel of \(\Phi\)
is cut out by \(x - (v+1)^2/(v-1)^2\),
and the kernel of \(\dualof{\Phi}\)
is cut out by \(x - (v-1)^2/(v+1)^2\).
Elliptic products with a 3-isogeny between the factors
therefore play a special role in the Richelot isogeny graph;
we will represent these special {\textsf{Type-$\Pi$}}\xspace vertices
using the symbol \(\Phi\).
We remark that the presence of the 3-isogeny
severely constrains the possible specializations
of a \(\Phi\)-vertex.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c||c|c|c}
Kernel orbit
& Stabilizer
& Codomain
& Kernel orbit
& Stabilizer
& Codomain
\\
\hline
\(\{K_{1}, K_{8}, K_{9}\}\)
& \(\subgrp{\sigma}\)
& {\textsf{Type-$\Pi$}}\xspace (\ensuremath{\Phi})
&
\(\{K_{3}\}\)
& \(\subgrp{\sigma,\rho}\)
& \textsf{Type-IV}\xspace
\\
\(\{K_{2}, K_{14}, K_{15}\}\)
& \(\subgrp{\sigma}\)
& \textsf{Type-I}\xspace
&
\(\{K_{5}\}\)
& \(\subgrp{\sigma,\rho}\)
& \textsf{Type-IV}\xspace
\\
\(\{K_{4}, K_{12}, K_{13}\}\)
& \(\subgrp{\sigma}\)
& \textsf{Type-I}\xspace
&
\(\{K_{7}\}\)
& \(\subgrp{\sigma,\rho}\)
& \textsf{Type-IV}\xspace
\\
\(\{K_{6}, K_{10}, K_{11}\}\)
& \(\subgrp{\sigma}\)
& \textsf{Type-I}\xspace
\\
\end{tabular}
\caption{Edge data for the generic \textsf{Type-IV}\xspace vertex.}
\label{tab:TypeIV}
\end{table}
Figure~\ref{fig:TypeIV-TypeExE-connect}
shows the neighbourhood of the edges between a general \textsf{Type-IV}\xspace vertex
and its \ensuremath{\Phi}-neighbour;
it should be compared with Figures~\ref{fig:general-edge}
and~\ref{fig:TypeI-TypeExE-connect}.
Each \textsf{Type-IV}\xspace vertex corresponds to a \ensuremath{\Phi}-vertex,
and edges between \textsf{Type-IV}\xspace vertices
correspond to edges between \ensuremath{\Phi}-vertices.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=6mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
\node (domain) [vertex, ultra thick] at (0,0) {$IV$} ;
\node (codomain) [vertex, ultra thick] at (7,0) {\ensuremath{\Phi}} ;
\draw[->, ultra thick] (codomain) edge[bend right=6] (domain) ;
\draw[->, ultra thick] (domain) edge[bend right=6] node[wt]{3} (codomain) ;
\node (Xtop) [
regular polygon,
regular polygon sides=4,
minimum size=22mm,
] at (3.5, 3.8) {};
\node (Xtopcod1) [vertex, dotted] at (Xtop.corner 1) {\ensuremath{\Phi}} ;
\node (Xtopdom1) [vertex, dotted] at (Xtop.corner 2) {$IV$} ;
\node (Xtopdom2) [vertex, dotted] at (Xtop.corner 3) {$I$} ;
\node (Xtopcod2) [vertex, dotted] at (Xtop.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (Xtopdom\i) edge[bend right=40, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=48] (Xtopcod\i) ;
\draw[->] (Xtopcod\i) edge[bend right=-40, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xtopcod\j) edge[bend right=14, dotted] (Xtopdom\i) ;
}
}
\draw[->] (Xtopdom1) edge[bend right=14, dotted] node[wt]{3} (Xtopcod1) ;
\draw[->] (Xtopdom1) edge[bend right=14, dotted] node[wt]{3} (Xtopcod2) ;
\draw[->] (Xtopdom2) edge[bend right=14, dotted] (Xtopcod1) ;
\draw[->] (Xtopdom2) edge[bend right=14, dotted] (Xtopcod2) ;
\draw[->] (domain) edge[bend right=-48] (Xtopdom1) ;
\draw[->] (domain) edge[bend right=-52] node[wt]{3} (Xtopdom2) ;
\node (Xmid) [
regular polygon,
regular polygon sides=4,
minimum size=22mm,
] at (3.5, 1.5) {};
\node (Xmidcod1) [vertex, dotted] at (Xmid.corner 1) {\ensuremath{\Phi}} ;
\node (Xmiddom1) [vertex, dotted] at (Xmid.corner 2) {$IV$} ;
\node (Xmiddom2) [vertex, dotted] at (Xmid.corner 3) {$I$} ;
\node (Xmidcod2) [vertex, dotted] at (Xmid.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (Xmiddom\i) edge[bend right=16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=24] (Xmidcod\i) ;
\draw[->] (Xmidcod\i) edge[bend right=-16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xmidcod\j) edge[bend right=14, dotted] (Xmiddom\i) ;
}
}
\draw[->] (Xmiddom1) edge[bend right=14, dotted] node[wt]{3} (Xmidcod1) ;
\draw[->] (Xmiddom1) edge[bend right=14, dotted] node[wt]{3} (Xmidcod2) ;
\draw[->] (Xmiddom2) edge[bend right=14, dotted] (Xmidcod1) ;
\draw[->] (Xmiddom2) edge[bend right=14, dotted] (Xmidcod2) ;
\draw[->] (domain) edge[bend right=-24] (Xmiddom1) ;
\draw[->] (domain) edge[bend right=-28] node[wt]{3} (Xmiddom2) ;
\node (Xbot) [
regular polygon,
regular polygon sides=4,
minimum size=22mm,
] at (3.5, -1.5) {};
\node (Xbotcod1) [vertex, dotted] at (Xbot.corner 1) {\ensuremath{\Phi}} ;
\node (Xbotdom1) [vertex, dotted] at (Xbot.corner 2) {$IV$} ;
\node (Xbotdom2) [vertex, dotted] at (Xbot.corner 3) {$I$} ;
\node (Xbotcod2) [vertex, dotted] at (Xbot.corner 4) {$I$} ;
\foreach \i in {1,2} {
\draw[->] (Xbotdom\i) edge[bend right=-16, dotted] (domain) ;
\draw[->] (codomain) edge[bend right=-24] (Xbotcod\i) ;
\draw[->] (Xbotcod\i) edge[bend right=16, dotted] (codomain) ;
\foreach \j in {1,2} {
\draw[->] (Xbotcod\j) edge[bend right=14, dotted] (Xbotdom\i) ;
}
}
\draw[->] (Xbotdom1) edge[bend right=14, dotted] node[wt]{3} (Xbotcod1) ;
\draw[->] (Xbotdom1) edge[bend right=14, dotted] node[wt]{3} (Xbotcod2) ;
\draw[->] (Xbotdom2) edge[bend right=14, dotted] (Xbotcod1) ;
\draw[->] (Xbotdom2) edge[bend right=14, dotted] (Xbotcod2) ;
\draw[->] (domain) edge[bend right=24] (Xbotdom1) ;
\draw[->] (domain) edge[bend right=28] node[wt]{3} (Xbotdom2) ;
\node (codomainout) [
regular polygon,
regular polygon sides=20,
minimum size=42mm,
] at (codomain) {};
\foreach \i in {13,20} {
\node (codA\i) [vertex, dotted] at (codomainout.corner \i) {$I$} ;
\draw[->] (codomain) edge[bend right=4] (codA\i) ;
\draw[->] (codA\i) edge[bend right=4, dotted] (codomain) ;
}
\foreach \i in {14,15,16,17,18,19} {
\node (codA\i) [vertex, dotted] at (codomainout.corner \i) {\ensuremath{\Pi}} ;
\draw[->] (codomain) edge[bend right=4] (codA\i) ;
\draw[->] (codA\i) edge[bend right=4, dotted] (codomain) ;
}
\end{tikzpicture}
\caption{%
The neighbourhood of a
\textsf{Type-IV}\xspace vertex and its {\textsf{Type-$\Pi$}}\xspace neighbour.
}
\label{fig:TypeIV-TypeExE-connect}
\end{figure}
\subsection{The \texorpdfstring{{\textsf{Type-$\Pi_0$}}\xspace}{Type-Pi-0} family}
\label{sec:TypeExEzero}
The {\textsf{Type-$\Pi_0$}}\xspace vertices
are \(\classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}_{0}}\)
for elliptic curves \(\ensuremath{\mathcal{E}}\not\cong\ensuremath{\mathcal{E}}_{0}\).
We have
\(
\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}_{0})
=
\subgrp{\sigma,[1]\times\zeta}
\cong
C_6
\).
The automorphism \(\zeta\) of \(\ensuremath{\mathcal{E}}_0\)
cycles the points of order 2 on \(\ensuremath{\mathcal{E}}_0\),
so \([1]\times\zeta\) fixes no \((2,2)\)-isogeny kernels.
Instead, the fifteen kernel subgroups of \((\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}_0)[2]\)
form five orbits of three,
and so we see the five neighbours with weight-3 edges
in Figure~\ref{fig:TypeExEzero}
(which should be compared with Figure~\ref{fig:TypeExE-TypeI}).
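To see where the five weight-3 edges come from, note that the fifteen \((2,2)\)-kernels split into nine products of order-2 subgroups and six kernels given by isomorphisms \(\ensuremath{\mathcal{E}}[2]\to\ensuremath{\mathcal{E}}_0[2]\). The enumeration below is our own illustration of the orbit count; labelling the 2-torsion points by \(0,1,2\) and the encoding of kernels are assumptions of the sketch, not notation from the text.

```python
from itertools import permutations

# Label the order-2 points of E and of E_0 by 0, 1, 2.
# Nine kernels are products <P_i> x <Q_j>; the remaining six are graphs
# of isomorphisms E[2] -> E_0[2], encoded as permutations of {0, 1, 2}.
kernels = {('prod', i, j) for i in range(3) for j in range(3)}
kernels |= {('graph',) + pi for pi in permutations(range(3))}
assert len(kernels) == 15

zeta = {0: 1, 1: 2, 2: 0}  # [1] x zeta 3-cycles the order-2 points of E_0

def act(k):
    if k[0] == 'prod':
        return ('prod', k[1], zeta[k[2]])
    return ('graph',) + tuple(zeta[x] for x in k[1:])

orbits = set()
for k in kernels:
    orbit, x = set(), k
    for _ in range(3):  # zeta has order 3, so three steps close the orbit
        orbit.add(x)
        x = act(x)
    orbits.add(frozenset(orbit))
assert len(orbits) == 5                  # five distinct neighbours
assert all(len(o) == 3 for o in orbits)  # each reached by a weight-3 edge
```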
\begin{figure}[ht]
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
\node (s) [
regular polygon,
regular polygon sides=5,
minimum size=42mm,
above] at (0,0) {};
\node (c) [vertex] at (s.center) {\ensuremath{\Pi_0}};
\foreach \i in {1,2,3}{
\node (s\i) [vertex, dotted] at (s.corner \i)
{\ensuremath{\Pi}} ;
\draw[->] (c) edge[bend right=12] node[wt]{3} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {4,5}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{3} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\end{tikzpicture}
\caption{The neighbourhood of a generic {\textsf{Type-$\Pi_0$}}\xspace vertex.}
\label{fig:TypeExEzero}
\end{figure}
\subsection{The \texorpdfstring{{\textsf{Type-$\Pi_{12^3}$}}\xspace}{Type-Pi-1728} family}
\label{sec:TypeExEtwelvecubed}
The {\textsf{Type-$\Pi_{12^3}$}}\xspace vertices
are \(\classof{\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}_{12^3}}\)
for elliptic curves \(\ensuremath{\mathcal{E}}\not\cong\ensuremath{\mathcal{E}}_{12^3}\).
The curve \(\ensuremath{\mathcal{E}}_{12^3}\) has an order-4 automorphism \(\iota\)
which fixes one point \(P_3\) of order 2,
and exchanges \(P_1\) and \(P_2\).
We therefore have
an order-4 element \(\alpha = [1]\times\iota\)
generating \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}_{12^3}) \cong C_4\),
and \(\alpha^2 = [1]\times[-1] = \sigma\) (which fixes all the kernels).
Hence, with respect to \(\subgrp{\alpha}\),
the isometries form three orbits of size two,
as do the six product kernels not involving \(P_3\);
on the other hand, the kernels \(\subgrp{P}\times\subgrp{P_3}\)
are fixed by \(\alpha\),
and since \(\ensuremath{\mathcal{E}}_{12^3}/\subgrp{P_3}\cong\ensuremath{\mathcal{E}}_{12^3}\)
we get three weight-1 edges
to {\textsf{Type-$\Pi_{12^3}$}}\xspace vertices.
The situation is illustrated in Figure~\ref{fig:TypeExEtwelvecubed}
(which should be compared with Figure~\ref{fig:TypeExE-TypeI}).
\begin{figure}[ht]
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=9,
minimum size=42mm,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {\ensuremath{\Pi_{12^3}}};
\foreach \i in {1,4,7}{
\node (s\i) [vertex, dotted] at (s.corner \i) {\ensuremath{\Pi_{12^3}}} ;
\draw[->] (c) edge[bend right=12] (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {2,5,8}{
\node (s\i) [vertex, dotted] at (s.corner \i)
{\ensuremath{\Pi}} ;
\draw[->] (c) edge[bend right=12] node[wt]{2} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\foreach \i in {3,6,9}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{2} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\end{tikzpicture}
\caption{%
The neighbourhood of the generic {\textsf{Type-$\Pi_{12^3}$}}\xspace vertex.
}
\label{fig:TypeExEtwelvecubed}
\end{figure}
\subsection{The \texorpdfstring{{\textsf{Type-$\Pi_{0,12^3}$}}\xspace}{Type-Pi-0-1728} vertex}
The unique {\textsf{Type-$\Pi_{0,12^3}$}}\xspace vertex is
\(\classof{\ensuremath{\mathcal{E}}_0\times\ensuremath{\mathcal{E}}_{12^3}}\).
Its reduced automorphism group is
\(
\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}_0\times\ensuremath{\mathcal{E}}_{12^3})
=
\subgrp{\zeta\times[1], [1]\times\iota}
\cong C_{12}
\).
The kernel orbits and edges
can be derived using a combination of the analyses
in~\S\ref{sec:TypeExEzero}
and~\S\ref{sec:TypeExEtwelvecubed};
the results
are described in Table~\ref{tab:TypeEzeroxEtwelvecubed}.
The neighbourhood of the {\textsf{Type-$\Pi_{0,12^3}$}}\xspace vertex,
illustrated in Figure~\ref{fig:TypeEzeroxEtwelvecubed},
is a combination of the {\textsf{Type-$\Pi_0$}}\xspace
and {\textsf{Type-$\Pi_{12^3}$}}\xspace neighbourhoods
of Figures~\ref{fig:TypeExEzero} and~\ref{fig:TypeExEtwelvecubed}.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|ccc}
\multirow{2}{*}{Kernel orbit}
& Stabilizer
& \multicolumn{3}{c}{Codomain type}
\\
& (conjugate)
& General \(p\)
& \(p = 7\)
& \(p = 11\)
\\
\hline
\(\{K_{1,3},K_{2,3},K_{3,3}\}\)
& \(\subgrp{[1]\times\iota}\)
& {\textsf{Type-$\Pi_{12^3}$}}\xspace
& {\textsf{Type-$\Pi_{12^3}$}}\xspace
& {\textsf{Type-$\Sigma_{12^3}$}}\xspace
\\
\(\{K_{i,j} : 1 \le i \le 3, 1 \le j \le 2\}\)
& \(\subgrp{\sigma}\)
& {\textsf{Type-$\Pi$}}\xspace
& {\textsf{Type-$\Pi_{12^3}$}}\xspace
& (loops)
\\
\(\{K_{\pi}: \pi \in S_3\}\)
& \(\subgrp{\sigma}\)
& \textsf{Type-I}\xspace
& \textsf{Type-I}\xspace
& \textsf{Type-IV}\xspace
\\
\end{tabular}
\caption{Edge data for the unique {\textsf{Type-$\Pi_{0,12^3}$}}\xspace vertex.}
\label{tab:TypeEzeroxEtwelvecubed}
\end{table}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (c) [vertex] at (0,0) {\ensuremath{\Pi_{0,12^3}}};
%
\node (s1) [vertex, dotted] at (0,2) {\ensuremath{\Pi_{12^3}}} ;
\draw[->] (c) edge[bend right=12] node[wt]{3} (s1) ;
\draw[->] (s1) edge[bend right=12, dotted] (c) ;
\node (s2) [vertex, dotted] at (-2,0)
{\ensuremath{\Pi}} ;
\draw[->] (c) edge[bend right=12] node[wt]{6} (s2) ;
\draw[->] (s2) edge[bend right=12, dotted] (c) ;
\node (s3) [vertex, dotted] at (2,0) {$I$} ;
\draw[->] (c) edge[bend right=12] node[wt]{6} (s3) ;
\draw[->] (s3) edge[bend right=12, dotted] (c) ;
\end{tikzpicture}
\caption{%
The neighbourhood of the {\textsf{Type-$\Pi_{0,12^3}$}}\xspace vertex.
}
\label{fig:TypeEzeroxEtwelvecubed}
\end{figure}
\subsection{The \texorpdfstring{{\textsf{Type-$\Sigma_{12^3}$}}\xspace}{Type-Sigma-1728} vertex}
The unique {\textsf{Type-$\Sigma_{12^3}$}}\xspace vertex is
\(\classof{\ensuremath{\mathcal{E}}_{12^3}^2}\).
We have
\(
\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}_{12^3}^2)
=
\subgrp{\sigma,\tau,[1]\times\iota}
\cong C_2^2\rtimes C_4
\).
The kernel orbits and edges are described in
Table~\ref{tab:TypeEtwelvecubedsquared}.
Figure~\ref{fig:TypeEtwelvecubedsquared},
illustrating the neighbourhood of \(\classof{\ensuremath{\mathcal{E}}_{12^3}^2}\),
should be compared with Figure~\ref{fig:TypeEsquared-TypeIII}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
\node (c) [vertex] at (0,0) {\ensuremath{\Sigma_{12^3}}};
\draw[->] (c) edge[out=-10,in=40,loop, looseness=6] (c) ;
\draw[->] (c) edge[out=250,in=300,loop, looseness=6] node[wt]{2} (c) ;
%
\node (s2) [vertex, dotted] at (-4,0) {\ensuremath{\Sigma}} ;
\draw[->] (c) edge[bend right=8] node[wt]{4} (s2) ;
\draw[->] (s2) edge[bend right=8, dotted] (c) ;
\draw[->] (s2) edge[loop, out=150, in=210, dotted, looseness=6] (s2) ;
%
\node (s3) [vertex, dotted] at (-2,-2) {$III$} ;
\draw[->] (c) edge[bend right=8] node[wt]{4} (s3) ;
\draw[->] (s3) edge[bend right=8, dotted] (c) ;
\draw[->] (s3) edge[loop, out=160, in=220, dotted, looseness=6] (s3) ;
\draw[->] (s2) edge[bend right=8, dotted] (s3) ;
\draw[->] (s3) edge[bend right=8, dotted] (s2) ;
\node (s4) [vertex, dotted] at (3,-2) {\ensuremath{\Pi_{12^3}}} ;
\draw[->] (c) edge[bend right=8] node[wt]{4} (s4) ;
\draw[->] (s4) edge[bend right=8, dotted] (c) ;
\end{tikzpicture}
\caption{%
The neighbourhood of the {\textsf{Type-$\Sigma_{12^3}$}}\xspace vertex.
The dotted neighbour types change for \(p = 7\) and \(11\)
(see Table~\ref{tab:TypeEtwelvecubedsquared}).
}
\label{fig:TypeEtwelvecubedsquared}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c@{\;}c@{\;}c}
\multirow{2}{*}{Kernel orbit}
& Stabilizer
& \multicolumn{3}{c}{Codomain type}
\\
& (conjugate)
& General \(p\)
& \(p = 7\)
& \(p = 11\)
\\
\hline
\(\{K_{1,1},K_{1,2},K_{2,1},K_{2,2}\}\)
& \(\subgrp{\sigma,\tau}\)
& {\textsf{Type-$\Sigma$}}\xspace
& (loops)
& {\textsf{Type-$\Sigma_0$}}\xspace
\\
\(\{K_{1,3},K_{2,3},K_{3,1},K_{3,2}\}\)
& \(\subgrp{[1]\times\iota}\)
& {\textsf{Type-$\Pi_{12^3}$}}\xspace
& (loops)
& {\textsf{Type-$\Pi_{0,12^3}$}}\xspace
\\
\(\{K_{3,3}\}\)
& \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}_{12^3})\)
& (loop)
& (loop)
& (loop)
\\
\(\{K_{\text{Id}},K_{(1,2)(3)}\}\)
& \(\subgrp{\tau,\iota\times\iota}\)
& (loops)
& (loops)
& (loops)
\\
\(\left\{\!\!\!\!\begin{array}{c}K_{(1,2,3)}, K_{(1,3,2)},\\
K_{(1,3)(2)},
K_{(1)(2,3)}\end{array}\!\!\!\!\right\}\)
& \(\subgrp{\sigma,\tau}\)
& \textsf{Type-III}\xspace
& \textsf{Type-III}\xspace
& \textsf{Type-V}\xspace
\\
\end{tabular}
\caption{Edge data for the unique {\textsf{Type-$\Sigma_{12^3}$}}\xspace vertex.}
\label{tab:TypeEtwelvecubedsquared}
\end{table}
\subsection{The \texorpdfstring{\textsf{Type-V}\xspace}{Type-V} and \texorpdfstring{{\textsf{Type-$\Sigma_0$}}\xspace}{Type-Sigma-0} vertices}
The \textsf{Type-V}\xspace and {\textsf{Type-$\Sigma_0$}}\xspace vertices are always neighbours,
so we treat them simultaneously.
The unique {\textsf{Type-$\Sigma_0$}}\xspace vertex is \(\classof{\ensuremath{\mathcal{E}}_{0}^2}\),
and \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}_{0}^2) =
\subgrp{\tau,[1]\times\zeta,\zeta\times[1]}/\subgrp{-1} \cong C_6\times S_3\).
The kernel orbits and edges are described in
Table~\ref{tab:TypeEzerosquared}.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|cc}
\multirow{2}{*}{Kernel orbit}
& Stabilizer
& \multicolumn{2}{c}{Codomain type}
\\
& (conjugate)
& General \(p\)
& \(p = 11\)
\\
\hline
\(\{K_{i,j} : 1 \le i, j \le 3\}\)
& \(\subgrp{\sigma,\tau}\)
& {\textsf{Type-$\Sigma$}}\xspace
& {\textsf{Type-$\Sigma_{12^3}$}}\xspace
\\
\(\{K_{\text{Id}}, K_{(1,2,3)}, K_{(1,3,2)}\}\)
& \(\subgrp{\tau,\zeta\times(-\zeta)}\)
& (loop)
& (loop)
\\
\(\{K_{(1,2)(3)}, K_{(1,3)(2)}, K_{(2,3)(1)}\}\)
& \(\subgrp{\tau, \zeta\times\zeta^2}\)
& \textsf{Type-V}\xspace
& \textsf{Type-V}\xspace
\\
\end{tabular}
\caption{Edge data for the unique {\textsf{Type-$\Sigma_0$}}\xspace vertex.}
\label{tab:TypeEzerosquared}
\end{table}
The unique \textsf{Type-V}\xspace vertex is \(\classof{\Jac{\ensuremath{\mathcal{C}}_V}}\),
where \(\ensuremath{\mathcal{C}}_V: y^2 = x^6 + 1\);
note that \(\ensuremath{\mathcal{C}}_{V} = \ensuremath{\mathcal{C}}_{III}(\zeta_6) = \ensuremath{\mathcal{C}}_{I}(\zeta_6,1/\zeta_6)\),
where \(\zeta_6\) is a primitive sixth root of unity.
We have \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{C}}_{V}) = \subgrp{\sigma,\tau,\zeta}\),
where \(\sigma\) and \(\tau\) are inherited from \(\ensuremath{\mathcal{C}}_{III}\),
and \(\zeta\) is a new automorphism of order \(6\)
such that \(\zeta^3 = \sigma\).
Specializing the kernels and quadratic splittings
from~\S\ref{sec:TypeI},
these automorphisms act on (the indices of) the \(K_i\)
via
\begin{align*}
\tau_*
& =
(1)(2)(3)(4,6)(5,7)(8,9)(10,13)(11,12)(14,15)
\,,
\\
\zeta_*
& =
(1)(2,4,6)(3,5,7)(8,9)(10,14,12,11,15,13)
\,,
\\
\sigma_* = \zeta_*^3
& =
(1)(2)(3)(4)(5)(6)(7)(8,9)(10,11)(12,13)(14,15)
\,.
\end{align*}
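Again these permutations can be checked by machine. The sketch below (ours, not from the text) verifies \(\zeta_*^3 = \sigma_*\), that \(\subgrp{\tau_*,\zeta_*}\) has order 12, and that the orbits agree with the kernel orbits of Table~\ref{tab:TypeV}.

```python
def perm(cycles, n=15):
    # permutation of {1..n} from a list of cycles, as a dict
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def compose(p, q):  # (p o q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

tau = perm([(4, 6), (5, 7), (8, 9), (10, 13), (11, 12), (14, 15)])
zeta = perm([(2, 4, 6), (3, 5, 7), (8, 9), (10, 14, 12, 11, 15, 13)])
sigma = perm([(8, 9), (10, 11), (12, 13), (14, 15)])

assert compose(zeta, compose(zeta, zeta)) == sigma  # zeta_*^3 = sigma_*

group = {tuple(sorted(g.items())) for g in (tau, zeta)}
while True:
    new = {tuple(sorted(compose(dict(g), dict(h)).items()))
           for g in group for h in group}
    if new <= group:
        break
    group |= new
assert len(group) == 12  # <tau_*, zeta_*> is dihedral of order 12

orbits = {frozenset(dict(g)[i] for g in group) for i in range(1, 16)}
assert sorted(sorted(o) for o in orbits) == [
    [1], [2, 4, 6], [3, 5, 7], [8, 9], [10, 11, 12, 13, 14, 15]]
```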
The kernel orbits and edges are described in Table~\ref{tab:TypeV}.
Figure~\ref{fig:TypeV} illustrates the shared neighbourhood
of the \textsf{Type-V}\xspace and {\textsf{Type-$\Sigma_0$}}\xspace vertices
for general \(p\);
it should be compared with Figure~\ref{fig:TypeEsquared-TypeIII}.
The \textsf{Type-I}\xspace neighbour of the \textsf{Type-V}\xspace vertex always has four \((2,2)\)-endomorphisms;
it is included here for completeness,
as are the \textsf{Type-I}\xspace and {\textsf{Type-$\Pi$}}\xspace neighbours of the \textsf{Type-IV}\xspace vertex,
since these are also connected to the {\textsf{Type-$\Sigma$}}\xspace and \textsf{Type-I}\xspace
neighbours.
Dotted neighbour types may change for \(p = 11\), \(17\), \(29\), and \(41\)
(see Table~\ref{tab:TypeV}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (cV) [vertex] at (0,0) {$V$};
\node (ezerosquared) [vertex] at (-4,0) {\ensuremath{\Sigma_0}};
\node (cI) [vertex, dotted] at (4,0) {$I$};
\node (esquared) [vertex, dotted] at (-2,-2) {\ensuremath{\Sigma}};
\node (cIV) [vertex, dotted] at (1,-1.3) {$IV$};
\node (ee) [vertex, dotted] at (3,-1.3) {\ensuremath{\Phi}} ;
\node (cInew) [vertex, dotted] at (4,-2) {$I$} ;
\draw[->] (cV) edge[bend right=8] (ezerosquared) ;
\draw[->] (ezerosquared) edge[bend right=8] node[wt]{3} (cV) ;
\draw[->] (ezerosquared) edge[out=140,in=220,loop, looseness=5] node[wt]{3} (ezerosquared) ;
\draw[->] (cV) edge[out=50,in=130,loop, looseness=5] node[wt]{3} (cV) ;
\draw[->] (cV) edge[bend right=8] node[wt]{3} (esquared) ;
\draw[->] (esquared) edge[bend right=8, dotted] (cV) ;
\draw[->] (esquared) edge[out=-190,in=-110,loop, looseness=5] (esquared) ;
\draw[->] (cV) edge[bend right=12] node[wt]{2} (cIV) ;
\draw[->] (cIV) edge[bend right=12, dotted] (cV) ;
\draw[->] (cV) edge[bend right=8] node[wt]{6} (cI) ;
\draw[->] (cI) edge[bend right=8, dotted] (cV) ;
\draw[->] (cI) edge[out=-40,in=40,loop, looseness=5] node[wt]{4} (cI) ;
\draw[->] (ezerosquared) edge[bend right=8] node[wt]{9} (esquared) ;
\draw[->] (esquared) edge[bend right=8, dotted] (ezerosquared) ;
\draw[->] (esquared) edge[bend right=16, dotted] node[wt]{2} (ee) ;
\draw[->] (ee) edge[bend right=-8, dotted] (esquared) ;
\draw[->] (cIV) edge[bend right=-8, dotted] node[wt]{3} (ee) ;
\draw[->] (ee) edge[bend right=20, dotted] (cIV) ;
\draw[->] (cI) edge[bend right=12, dotted] (ee) ;
\draw[->] (ee) edge[bend right=12, dotted] (cI) ;
\draw[->] (esquared) edge[bend right=20, dotted] node[wt]{3} (cInew) ;
\draw[->] (cInew) edge[bend right=-12, dotted] (esquared) ;
\draw[->] (cIV) edge[bend right=20, dotted] node[wt]{3} (cInew) ;
\draw[->] (cInew) edge[bend right=-8, dotted] (cIV) ;
\draw[->] (cI) edge[bend right=8, dotted] (cInew) ;
\draw[->] (cInew) edge[bend right=8, dotted] (cI) ;
\end{tikzpicture}
\caption{%
The neighbourhood of the \protect{\textsf{Type-V}\xspace}
and \protect{{\textsf{Type-$\Sigma_0$}}\xspace} vertices.
}
\label{fig:TypeV}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c@{\;}c@{\;}c@{\;}c@{\;}c}
\multirow{2}{*}{Kernel orbit}
& Stabilizer
& \multicolumn{5}{c}{Codomain type}
\\
& (conj.)
& General \(p\)
& \(p = 11\)
& \(p = 17\)
& \(p = 29\)
& \(p = 41\)
\\
\hline
\(\{K_{1}\}\)
& \(\subgrp{\tau,\zeta}\)
& {\textsf{Type-$\Sigma_0$}}\xspace
& {\textsf{Type-$\Sigma_0$}}\xspace
& {\textsf{Type-$\Sigma_0$}}\xspace
& {\textsf{Type-$\Sigma_0$}}\xspace
& {\textsf{Type-$\Sigma_0$}}\xspace
\\
\(\{K_{2}, K_{4}, K_{6}\}\)
& \(\subgrp{\sigma,\tau}\)
& (loops)
& (loops)
& (loops)
& (loops)
& (loops)
\\
\(\{K_{3}, K_{5}, K_{7}\}\)
& \(\subgrp{\sigma,\tau}\)
& {\textsf{Type-$\Sigma$}}\xspace
& {\textsf{Type-$\Sigma_{12^3}$}}\xspace
& {\textsf{Type-$\Sigma$}}\xspace
& {\textsf{Type-$\Sigma$}}\xspace
& {\textsf{Type-$\Sigma$}}\xspace
\\
\(\{K_{8}, K_{9}\}\)
& \(\subgrp{\tau\zeta, \zeta^2}\)
& \textsf{Type-IV}\xspace
& \textsf{Type-IV}\xspace
& \textsf{Type-IV}\xspace
& \textsf{Type-VI}\xspace
& \textsf{Type-IV}\xspace
\\
\(\{K_{i} : 10\le i\le 15\}\)
& \(\subgrp{\sigma\tau}\)
& \textsf{Type-I}\xspace
& \textsf{Type-IV}\xspace
& (loops)
& \textsf{Type-I}\xspace
& \textsf{Type-III}\xspace
\\
\hline
\end{tabular}
\caption{Edge data for the \textsf{Type-V}\xspace vertex.}
\label{tab:TypeV}
\end{table}
\begin{remark}
To see the \textsf{Type-V}\xspace neighbourhood in Figure~\ref{fig:TypeV}
as a specialization of the \textsf{Type-IV}\xspace diagram (Figure~\ref{fig:TypeIV}):
\begin{itemize}
\item
the {\textsf{Type-$\Pi$}}\xspace neighbour specializes to \(\classof{\ensuremath{\mathcal{E}}^2}\),
where \(\ensuremath{\mathcal{E}}\) has \(j\)-invariant 54000
and an endomorphism of degree 3;
\item
one of the \textsf{Type-IV}\xspace neighbours degenerates to {\textsf{Type-$\Sigma_0$}}\xspace;
\item
the other two \textsf{Type-IV}\xspace neighbours merge, yielding a weight-2 edge;
\item
one of the \textsf{Type-I}\xspace neighbours specializes
to \textsf{Type-V}\xspace, yielding a loop;
\item
the other two \textsf{Type-I}\xspace neighbours merge,
yielding a weight-6 edge.
\end{itemize}
\end{remark}
\subsection{The \texorpdfstring{\textsf{Type-VI}\xspace}{Type-VI} vertex}
The unique \textsf{Type-VI}\xspace vertex is \(\classof{\Jac{\ensuremath{\mathcal{C}}_{VI}}}\),
where \(\ensuremath{\mathcal{C}}_{VI} = \ensuremath{\mathcal{C}}_{IV}(v_{VI})\)
with \(v_{VI} = (\zeta_{12}^2 + \zeta_{12} + 1)/\sqrt{2}\),
where \(\zeta_{12}\) is a primitive twelfth root of unity such that \(\zeta_{12}^2 = \zeta_{6}\).
This curve is isomorphic to Bolza's \textsf{Type-VI}\xspace normal form \(y^2 = x(x^4+1)\).
We have \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{VI}}) = \subgrp{\sigma,\rho,\omega} \cong S_4\),
where \(\sigma\) and \(\rho\) are inherited from \(\ensuremath{\mathcal{C}}_{IV}\)
and \(\omega\) is an order-4 automorphism acting as
\[
\omega_*:
x
\longmapsto
(x - (\sqrt{2}+1))
/
((\sqrt{2} - 1)x + 1)
\]
on \(x\)-coordinates.
Specializing the splittings of~\S\ref{sec:TypeI}
at \(s = s_{IV}(v_{VI}) = -\zeta_{12}^3\sqrt{2} - \zeta_{12}^3\)
and \(t = t_{IV}(v_{VI}) = 2\sqrt{2}+3\),
we see that \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{VI}})\) acts as
\begin{align*}
\sigma_* & = (1)(2)(3)(4)(5)(6)(7)(8,9)(10,11)(12,13)(14,15)
\,,
\\
\rho_* & = (1,9,8)(2,15,14)(3)(4,12,13)(5)(6,10,11)(7)
\,,
\\
\omega_* & = (1,4)(2,14,7,15)(3,10,6,11)(5)(8,9,13,12)
\end{align*}
on kernel indices.
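A mechanical check of these cycle decompositions (our own aid, not part of the text) confirms that \(\sigma_*\), \(\rho_*\), and \(\omega_*\) generate a permutation group of order 24 — so the \(S_4\)-action on kernel indices is faithful — with orbits matching Table~\ref{tab:TypeVI}.

```python
def perm(cycles, n=15):
    # permutation of {1..n} from a list of cycles, as a dict
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def compose(p, q):  # (p o q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

sigma = perm([(8, 9), (10, 11), (12, 13), (14, 15)])
rho = perm([(1, 9, 8), (2, 15, 14), (4, 12, 13), (6, 10, 11)])
omega = perm([(1, 4), (2, 14, 7, 15), (3, 10, 6, 11), (8, 9, 13, 12)])

group = {tuple(sorted(g.items())) for g in (sigma, rho, omega)}
while True:
    new = {tuple(sorted(compose(dict(g), dict(h)).items()))
           for g in group for h in group}
    if new <= group:
        break
    group |= new
assert len(group) == 24  # faithful copy of S_4

orbits = {frozenset(dict(g)[i] for g in group) for i in range(1, 16)}
assert sorted(sorted(o) for o in orbits) == [
    [1, 4, 8, 9, 12, 13], [2, 7, 14, 15], [3, 6, 10, 11], [5]]
```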
Table~\ref{tab:TypeVI} describes the kernel orbits and edges.
It is interesting to compare this with Table~\ref{tab:TypeIV},
to see how the various neighbours degenerate, specialize, and merge.
The {\textsf{Type-$\Sigma$}}\xspace neighbour is special:
it is \(\classof{\ensuremath{\mathcal{E}}^2}\) where \(\ensuremath{\mathcal{E}}\) is an elliptic curve of
\(j\)-invariant \(8000\);
it is also a \ensuremath{\Phi}-vertex,
because \(\ensuremath{\mathcal{E}}\) has a degree-3 endomorphism.
Pushing one step beyond the \textsf{Type-IV}\xspace neighbours,
we find new \textsf{Type-I}\xspace and \ensuremath{\Phi}{} vertices connected to
\(\classof{\ensuremath{\mathcal{E}}^2}\),
and we thus complete the neighbourhood
shown in Figure~\ref{fig:TypeVI}.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c@{\;}c@{\;}c}
\multirow{2}{*}{Kernel orbit}
& Stabilizer
& \multicolumn{3}{c}{Codomain type}
\\
& (conjugate)
& General \(p\) & \(p = 7\) & \(p = 13, 29\)
\\
\hline
\(\{K_{1}, K_{4}, K_{8}, K_{9}, K_{12}, K_{13}\}\)
& \(\subgrp{\sigma,\omega^2}\)
& {\textsf{Type-$\Sigma$}}\xspace
& {\textsf{Type-$\Sigma_{12^3}$}}\xspace
& {\textsf{Type-$\Sigma$}}\xspace
\\
\(\{K_{3}, K_{6}, K_{10}, K_{11}\}\)
& \(\subgrp{\sigma,\rho}\)
& \textsf{Type-IV}\xspace
& (loops)
& \textsf{Type-IV}\xspace/\textsf{V}
\\
\(\{K_{2}, K_{7}, K_{14}, K_{15}\}\)
& \(\subgrp{\sigma,\rho}\)
& \textsf{Type-IV}\xspace
& (loops)
& \textsf{Type-V}\xspace/\textsf{IV}
\\
\(\{K_{5}\}\)
& \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{VI}})\)
& (loop)
& (loop)
& (loop)
\end{tabular}
\caption{%
Edge data for the \textsf{Type-VI}\xspace vertex.
For \(p = 13\) and \(p = 29\),
one of the two \textsf{Type-IV}\xspace
neighbours specializes to \textsf{Type-V}\xspace,
depending on the choice of \(\zeta_{12}\).
}
\label{tab:TypeVI}
\end{table}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=6,
minimum size=52mm,
above
] at (0,0) {};
\node (VI) [vertex] at (s.corner 3) {$VI$};
\node (eephi) [vertex, dotted] at (s.corner 6) {\ensuremath{\Sigma} \ensuremath{\Phi}};
\node (IVa) [vertex, dotted] at (s.corner 2) {$IV$};
\node (IVb) [vertex, dotted] at (s.corner 4) {$IV$};
\node (I) [vertex, dotted] at (s.corner 5) {$I$};
\node (newphi) [vertex, dotted] at (s.corner 1) {\ensuremath{\Phi}} ;
\draw[->] (VI) edge[out=150,in=210,loop, looseness=6] (VI) ;
\draw[->] (eephi) edge[out=330,in=30,loop, looseness=6] (eephi) ;
\draw[->] (VI) edge[bend right=-24] node[wt]{4} (IVa) ;
\draw[->] (IVa) edge[bend right=8, dotted] (VI) ;
\draw[->] (VI) edge[bend right=24] node[wt]{4} (IVb) ;
\draw[->] (IVb) edge[bend right=-8, dotted] (VI) ;
\draw[->] (IVa) edge[bend right=-24, dotted] node[wt]{3} (newphi) ;
\draw[->] (newphi) edge[bend right=8, dotted] (IVa) ;
\draw[->] (IVa) edge[bend right=36, dotted] node[wt]{3} (I) ;
\draw[->] (I) edge[bend right=-18, dotted] (IVa) ;
\draw[->] (IVb) edge[bend right=24, dotted] node[wt]{3} (I) ;
\draw[->] (I) edge[bend right=-8, dotted] (IVb) ;
\draw[->] (IVb) edge[bend right=36, dotted] node[wt]{3} (newphi) ;
\draw[->] (newphi) edge[bend right=-18, dotted] (IVb) ;
\draw[->] (eephi) edge[bend right=8, dotted] node[wt]{2} (newphi) ;
\draw[->] (newphi) edge[bend right=-24, dotted] (eephi) ;
\draw[->] (eephi) edge[bend right=-8, dotted] node[wt]{2} (I) ;
\draw[->] (I) edge[bend right=24, dotted] (eephi) ;
\draw[->] (VI) edge[bend right=-8] node[wt]{6} (eephi) ;
\draw[->] (eephi) edge[bend right=-8, dotted] (VI) ;
\end{tikzpicture}
\caption{The neighbourhood of the \textsf{Type-VI}\xspace vertex.
The dotted neighbours change type for \(p = 7\), \(13\), and \(29\)
(see Table~\ref{tab:TypeVI}).
}
\label{fig:TypeVI}
\end{figure}
\subsection{The \texorpdfstring{\textsf{Type-II}\xspace}{Type-II} vertex}
The unique \textsf{Type-II}\xspace vertex
is \(\classof{\Jac{\ensuremath{\mathcal{C}}_{II}}}\)
where \(\ensuremath{\mathcal{C}}_{II}\) is defined by \(y^2 = x^5 - 1\);
we have
\(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}_{II}}) = \subgrp{\zeta} \cong C_5\),
where \(\zeta\) acts on \(x\)-coordinates as
\[
\zeta_*: x \longmapsto \zeta_5 x
\,,
\]
with \(\zeta_5\) a primitive fifth root of unity.
The 15 kernel subgroups of \(\Jac{\ensuremath{\mathcal{C}}_{II}}[2]\)
form three orbits of five under the action of \(\zeta\).
We fix orbit representatives
\begin{align*}
K_1 & =
\{ x-1, (x-\zeta_5)(x-\zeta_5^2), (x - \zeta_5^3)(x - \zeta_5^4)\}
\,,
\\
K_2 & =
\{ x-1, (x-\zeta_5)(x-\zeta_5^3), (x - \zeta_5^2)(x - \zeta_5^4)\}
\,,
\\
K_3 & =
\{ x-1, (x-\zeta_5)(x-\zeta_5^4), (x - \zeta_5^2)(x - \zeta_5^3)\}
\,,
\end{align*}
and let \(\phi_{i}: \Jac{\ensuremath{\mathcal{C}}_{II}} \to \ensuremath{\mathcal{A}}_{i} := \Jac{\ensuremath{\mathcal{C}}_{II}}/K_i\)
be the quotient isogenies for \(1 \le i \le 3\).
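The orbit count can be reproduced by brute force: the fifteen \((2,2)\)-kernels correspond to the fifteen partitions of the six Weierstrass points \(\{1,\zeta_5,\zeta_5^2,\zeta_5^3,\zeta_5^4,\infty\}\) into three pairs, and \(\zeta\) rotates the five finite roots while fixing \(\infty\). The sketch below is our own illustration; labelling the root \(\zeta_5^i\) by \(i\) and \(\infty\) by \(5\) is an assumption of the code.

```python
INF = 5  # index for the Weierstrass point at infinity; 0..4 encode zeta_5^i

def matchings(points):
    # enumerate all partitions of `points` into unordered pairs
    pts = sorted(points)
    if not pts:
        yield frozenset()
        return
    a, rest = pts[0], pts[1:]
    for b in rest:
        remaining = [x for x in rest if x != b]
        for m in matchings(remaining):
            yield m | {frozenset({a, b})}

kernels = set(matchings(range(6)))
assert len(kernels) == 15  # the fifteen (2,2)-kernel subgroups

def act(m):
    # x -> zeta_5 * x rotates the finite roots and fixes infinity
    rot = lambda i: i if i == INF else (i + 1) % 5
    return frozenset(frozenset(rot(i) for i in pair) for pair in m)

orbits = set()
for m in kernels:
    orbit, x = set(), m
    for _ in range(5):  # zeta has order 5, so five steps close the orbit
        orbit.add(x)
        x = act(x)
    orbits.add(frozenset(orbit))
assert len(orbits) == 3
assert all(len(o) == 5 for o in orbits)
```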
Equation~\eqref{eq:ratio-principle} tells us that
\(w(\classof{\phi_{i}}) = 5\) for each \(i\).
The neighbourhood of \(\classof{\Jac{\ensuremath{\mathcal{C}}_{II}}}\)
is shown in Figure~\ref{fig:TypeII}.
Generally, the \(\classof{\ensuremath{\mathcal{A}}_{i}}\) are \textsf{Type-A}\xspace
(because the stabilizer of each orbit is trivial),
but for \(p = 19\), \(29\), \(59\), \(79\), and \(89\)
the codomain types change (see Table~\ref{tab:TypeII-codomains}).
Note that at \(p = 19\),
the codomain \(\ensuremath{\mathcal{A}}_{2}\) becomes isomorphic to \(\Jac{\ensuremath{\mathcal{C}}_{II}}\) itself,
so \(\classof{\phi_{2}}\) becomes a weight-5 loop.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[
>={angle 60},
thick,
vertex/.style = {circle, draw, fill=white, inner sep=0.5mm, minimum size=7mm},
wt/.style = {fill=white, anchor=center, pos=0.5, minimum size=1mm, inner sep=1pt}
]
%
\node (s) [
regular polygon,
regular polygon sides=3,
minimum size=42mm,
above
] at (0,0) {};
\node (c) [vertex] at (s.center) {$II$};
\foreach \i in {1,2,3}{
\node (s\i) [vertex, dotted] at (s.corner \i) {$A$};
\draw[->] (c) edge[bend right=12] node[wt]{5} (s\i) ;
\draw[->] (s\i) edge[bend right=12, dotted] (c) ;
}
\end{tikzpicture}
\caption{%
The neighbourhood of the (unique) \textsf{Type-II}\xspace vertex.
}
\label{fig:TypeII}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{r|cccccc}
{Characteristic \(p\)} & \(19\) & \(29\) & \(59\) & \(79\) & \(89\) & Other
\\
\hline
\hline
Type of \(\classof{\ensuremath{\mathcal{A}}_{1}}\)
&
\textsf{Type-I}\xspace & \textsf{Type-I}\xspace & \textsf{Type-I}\xspace & \textsf{Type-I}\xspace & \textsf{Type-A}\xspace & \textsf{Type-A}\xspace
\\
\hline
Type of \(\classof{\ensuremath{\mathcal{A}}_{2}}\)
&
\textsf{Type-II}\xspace & \textsf{Type-I}\xspace & \textsf{Type-A}\xspace & \textsf{Type-A}\xspace & \textsf{Type-I}\xspace & \textsf{Type-A}\xspace
\\
\hline
Type of \(\classof{\ensuremath{\mathcal{A}}_{3}}\)
&
\textsf{Type-III}\xspace & \textsf{Type-A}\xspace & \textsf{Type-I}\xspace & \textsf{Type-A}\xspace & \textsf{Type-A}\xspace & \textsf{Type-A}\xspace
\\
\hline
\end{tabular}
\caption{Types of neighbours of the \textsf{Type-II}\xspace vertex.}
\label{tab:TypeII-codomains}
\end{table}
\section{Automorphism groups of abelian surfaces}
\label{sec:ppas}
We now consider the impact of automorphisms on edge weights in the
isogeny graph,
following Katsura and Takashima~\cite{2020/Katsura--Takashima},
and recall the explicit classification
of reduced automorphism groups of PPASes.
In contrast with elliptic curves,
where (up to isomorphism) only two curves
have nontrivial reduced automorphism group,
with PPASes we see much richer structures
involving many more vertices.
Proofs for all of the results in this section
can be found
in~\cite{1986/Ibukiyama--Katsura--Oort},
\cite{2020/Katsura--Takashima},
and~\cite{2020/Florit--Smith}.
\subsection{Automorphisms and isogenies}
Let \(\phi : \ensuremath{\mathcal{A}} \to \ensuremath{\mathcal{A}}/K\) be a \((2,2)\)-isogeny
with kernel \(K\).
Let \(\alpha\) be an automorphism of \(\ensuremath{\mathcal{A}}\),
and let \(\phi': \ensuremath{\mathcal{A}} \to \ensuremath{\mathcal{A}}/\alpha(K)\)
be the quotient isogeny;
then \(\alpha\)
induces an isomorphism
\(\alpha_*: \ensuremath{\mathcal{A}}/K \to \ensuremath{\mathcal{A}}/\alpha(K)\)
such that
\(\alpha_*\circ\phi = \phi'\circ\alpha\).
If \(\alpha(K) = K\),
then \(\ensuremath{\mathcal{A}}/K = \ensuremath{\mathcal{A}}/\alpha(K)\),
so \(\alpha_*\) is an automorphism of \(\ensuremath{\mathcal{A}}/K\).
Going further, if \(S\) is the stabiliser
of \(K\) in \(\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{A}})\),
then \(S\) induces an isomorphic subgroup \(S'\) of \(\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{A}}/K)\),
and in fact \(S'\) is the stabiliser of \(\ker(\dualof{\phi})\) in
\(\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{A}}/K)\).
If \(\alpha(K) \not= K\) then
the quotients \(\ensuremath{\mathcal{A}}/K\) and \(\ensuremath{\mathcal{A}}/\alpha(K)\)
are different,
so \(\alpha_*\) is an isomorphism but not an automorphism.
The isogenies \(\phi\) and \(\phi_\alpha := \alpha_*^{-1}\circ\phi'\)
have identical domains and codomains, but distinct kernels;
thus, they both represent the same edge in the isogeny graph,
and \(w(\classof{\phi}) > 1\).
Every PPAS has \([-1]\) in its automorphism group,
but \([-1]\) fixes every kernel and commutes with
every isogeny---so it has no impact on edges or weights in the isogeny
graph.
We can therefore simplify by quotienting
\([-1]\) out of the picture.
\begin{definition}
If \(\ensuremath{\mathcal{A}}\) is a PPAS,
then its \textbf{reduced automorphism group} is
\[
\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}}) := \ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{A}})/\subgrp{[-1]}
\,.
\]
\end{definition}
Since \(\subgrp{[-1]}\) is contained in the centre of \(\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{A}})\),
the quotient \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}})\) acts on the set of kernel subgroups of
\(\ensuremath{\mathcal{A}}[2]\).
We have two useful results
for \((2,2)\)-isogenies \(\phi: \ensuremath{\mathcal{A}} \to \ensuremath{\mathcal{A}}/K\).
First,
if \(O_K\) is the orbit of \(K\) under \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}})\),
then there are \(\#O_K\) distinct kernels
of isogenies representing~\(\classof{\phi}\):
that is,
\[
w(\classof{\phi}) = \#O_K\,.
\]
Second,
we have the ``ratio principle''
from~\cite[Lemma~1]{2020/Florit--Smith}:
\begin{equation}
\label{eq:ratio-principle}
\#\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}})\cdot w(\classof{\dualof{\phi}})
=
\#\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{A}}')\cdot w(\classof{\phi})
\,.
\end{equation}
\subsection{Reduced automorphism groups of Jacobians}
There are
seven possible reduced automorphism groups
for Jacobian surfaces
(provided \(p > 5\); see~\cite{1887/Bolza}).
Figure~\ref{fig:RA-Jacobian}
gives the taxonomy of Jacobian surfaces by reduced automorphism group,
using Bolza's names (``types'') for the classes of Jacobian surfaces
with each of the reduced automorphism groups
(we add \textsf{Type-A}\xspace for the Jacobians with trivial reduced automorphism
group).
We will give normal forms for each type in~\S\ref{sec:atlas}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node (TypeA) at (0,0) {\textsf{Type-A}\xspace: \(1\)} ;
\node (TypeII) at (4.5,-3) {\textsf{Type-II}\xspace: \(C_5\)} ;
\node (TypeI) at (-1.5,-1) {\textsf{Type-I}\xspace: \(C_2\)} ;
\node (TypeIII) at (-3,-2) {\textsf{Type-III}\xspace: \(C_{2}^2\)} ;
\node (TypeIV) at (0,-2) {\textsf{Type-IV}\xspace: \(S_3\)} ;
\node (TypeV) at (-1.5,-3) {\textsf{Type-V}\xspace: \(D_{2\times 6}\)} ;
\node (TypeVI) at (1.5,-3) {\textsf{Type-VI}\xspace: \(S_4\)} ;
\path (TypeII) edge (TypeA) ;
\path (TypeI) edge (TypeA) ;
\path (TypeIII) edge (TypeI) ;
\path (TypeIV) edge (TypeI) ;
\path (TypeV) edge (TypeIII) ;
\path (TypeV) edge (TypeIV) ;
\path (TypeVI) edge (TypeIV) ;
\node (Mzero) at (-5,-3) {dim \(= 0\)} ;
\node (Mone) at (-5,-2) {dim \(= 1\)} ;
\node (Mtwo) at (-5,-1) {dim \(= 2\)} ;
\node (Mthree) at (-5,0) {dim \(=3\)} ;
\end{tikzpicture}
\caption{Reduced automorphism groups for genus-2 Jacobians.
Dimensions are of the corresponding loci
in the 3-dimensional moduli space of PPASes.
Lines connect sub- and super-types.
}
\label{fig:RA-Jacobian}
\end{figure}
We can identify the isomorphism class of a Jacobian
using the Clebsch invariants:
\[
\classof{\Jac{\ensuremath{\mathcal{C}}}}
\longleftrightarrow
(A:B:C:D)
\in \ensuremath{\mathbb{P}}(2,4,6,10)(\ensuremath{\Bbbk}\xspace)
\,,
\]
where \(A\), \(B\), \(C\), and \(D\)
are homogeneous polynomials of degree 2, 4, 6, and 10
in the coefficients of the sextic defining~\(\ensuremath{\mathcal{C}}\)
(see~\cite[\S1]{1991/Mestre}).
They should be seen as coordinates on the weighted projective space
\(\ensuremath{\mathbb{P}}(2,4,6,10)\):
that is,
\[
(A:B:C:D) = (\lambda^2 A: \lambda^4 B: \lambda^6 C: \lambda^{10} D)
\quad
\text{for all }
\lambda \not= 0 \in \ensuremath{\overline{\Bbbk}}\xspace
\,.
\]
We will not define \((A:B:C:D)\) explicitly here;
in practice, we compute them using
(e.g.) \texttt{ClebschInvariants} in Magma~\cite{2020/Magma}
or \texttt{clebsch\_invariants} in Sage~\cite{2020/SageMath}.
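To make this equivalence concrete, here is a minimal Python sketch (our own illustration, not taken from the sources cited) testing equality of two Clebsch tuples as points of \(\ensuremath{\mathbb{P}}(2,4,6,10)\). Since all the weights are even we may substitute \(\mu = \lambda^2\) and work with the halved weights \((1,2,3,5)\); over an algebraically closed field the pairwise cross-multiplication test below is then equivalent to the existence of such a \(\mu\).

```python
# Weights of (A, B, C, D) are (2, 4, 6, 10); since they are all even,
# substitute mu = lambda^2 and work with the halved weights instead.
HALF_WEIGHTS = (1, 2, 3, 5)

def wp_equal(v, w, weights=HALF_WEIGHTS):
    """Test whether v and w define the same point of P(2,4,6,10).

    Equality holds iff w[i] = mu**weights[i] * v[i] for some nonzero mu.
    Over an algebraically closed field this is equivalent to: the zero
    coordinates occur in the same positions, and all pairwise
    cross-multiplications agree.
    """
    if not any(v) or not any(w):
        raise ValueError("(0:0:0:0) is not a point of projective space")
    if [x == 0 for x in v] != [x == 0 for x in w]:
        return False
    idx = [i for i, x in enumerate(v) if x != 0]
    return all(
        w[a]**weights[b] * v[b]**weights[a] == w[b]**weights[a] * v[a]**weights[b]
        for a in idx for b in idx
    )
```

For example, scaling \((1:2:3:4)\) by \(\lambda = 2\) gives \((4:32:192:4096)\), which the test recognises as the same point; the Type-II point \((0:0:0:1)\) is equal to \((0:0:0:d)\) for any \(d \not= 0\).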
To determine \(\ensuremath{\mathrm{RA}}(\Jac{\ensuremath{\mathcal{C}}})\) for a given \(\ensuremath{\mathcal{C}}\),
we use Bolza's criteria on Clebsch invariants
given in Table~\ref{tab:RAut-from-Clebsch}.
We will need some derived invariants
(see~\cite{1991/Mestre}):
let
\begin{align*}
A_{11} & = 2C + \frac{1}{3}AB
\,,
&
A_{12} & = \frac{2}{3}(B^2 + AC)
\,,
&
A_{23} & = \frac{1}{2}B\cdot A_{12} + \frac{1}{3}C\cdot A_{11}
\,,
\\
A_{22} & = D
\,,
&
A_{31} & = D
\,,
&
A_{33} & = \frac{1}{2}B\cdot A_{22} + \frac{1}{3}C\cdot A_{12}
\,,
\end{align*}
and let \(R\) be defined by \(2R^2 = \det (A_{ij})\)
(we will only need to know
whether \(R = 0\)).
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|}
\hline
Type
& Conditions on Clebsch invariants
\\
\hline
\hline
\textsf{Type-A}\xspace
& \(R \not= 0\), \((A:B:C:D) \not= (0:0:0:1)\)
\\
\hline
\textsf{Type-I}\xspace
& \(R = 0\), \(A_{11}A_{22} \not= A_{12}^2\)
\\
\hline
\textsf{Type-II}\xspace
& \((A:B:C:D) = (0:0:0:1)\)
\\
\hline
\textsf{Type-III}\xspace
& \(BA_{11} - 2AA_{12} = -6D\),
\(CA_{11} + 2BA_{12} = AD\),
\(6C^2 \not= B^3\),
\(D \not= 0\)
\\
\hline
\textsf{Type-IV}\xspace
& \(6C^2 = B^3\),
\(3D = 2BA_{11}\),
\(2AB \not= 15C\),
\(D \not= 0\)
\\
\hline
\textsf{Type-V}\xspace
& \(6B = A^2\),
\(D = 0\),
\(A_{11} = 0\),
\(A \not= 0\)
\\
\hline
\textsf{Type-VI}\xspace
& \((A:B:C:D) = (1:0:0:0)\)
\\
\hline
\end{tabular}
\caption{%
Determining the \ensuremath{\mathrm{RA}}-type of \(\Jac{\ensuremath{\mathcal{C}}}\)
from its Clebsch invariants.
}
\label{tab:RAut-from-Clebsch}
\end{table}
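The criteria of Table~\ref{tab:RAut-from-Clebsch} translate directly into code. The sketch below (exact rational arithmetic; the function name is ours) tests the most special types first, following the lattice of Figure~\ref{fig:RA-Jacobian}; we read the \textsf{Type-I}\xspace inequality as \(A_{11}A_{22} \not= A_{12}^2\), squaring \(A_{12}\) so that both sides are homogeneous of degree 16 in the sextic's coefficients.

```python
from fractions import Fraction

def bolza_type(A, B, C, D):
    """RA-type of Jac(C) from its Clebsch invariants, following Bolza's
    criteria in the table above; the most special types are tested first.
    Exact (rational) inputs are assumed."""
    A11 = 2*C + Fraction(1, 3)*A*B
    A12 = Fraction(2, 3)*(B**2 + A*C)
    A22 = A31 = D
    A23 = Fraction(1, 2)*B*A12 + Fraction(1, 3)*C*A11
    A33 = Fraction(1, 2)*B*A22 + Fraction(1, 3)*C*A12
    # 2*R^2 = det(A_ij), for the symmetric matrix with the entries above.
    det = (A11*(A22*A33 - A23**2)
           - A12*(A12*A33 - A23*A31)
           + A31*(A12*A23 - A22*A31))
    if (A, B, C) == (0, 0, 0) and D != 0:     # (A:B:C:D) = (0:0:0:1)
        return "Type-II"
    if (B, C, D) == (0, 0, 0) and A != 0:     # (A:B:C:D) = (1:0:0:0)
        return "Type-VI"
    if 6*B == A**2 and D == 0 and A11 == 0 and A != 0:
        return "Type-V"
    if 6*C**2 == B**3 and 3*D == 2*B*A11 and 2*A*B != 15*C and D != 0:
        return "Type-IV"
    if (B*A11 - 2*A*A12 == -6*D and C*A11 + 2*B*A12 == A*D
            and 6*C**2 != B**3 and D != 0):
        return "Type-III"
    if det == 0 and A11*A22 != A12**2:        # R = 0
        return "Type-I"
    return "Type-A"
```

Over \(\ensuremath{\mathbb{F}}_{p^2}\) one would replace the rationals by field elements (inverting \(2\) and \(3\), which is possible since \(p > 5\)).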
\subsection{Reduced automorphism groups of elliptic products}
There are seven
possible reduced automorphism groups
for elliptic product surfaces~\cite[Prop.~3]{2020/Florit--Smith}.
Figure~\ref{fig:RA-elliptic}
shows the taxonomy of elliptic product
surfaces by reduced automorphism group.
The names (``types'') for the classes of surfaces
are taken from~\cite{2020/Florit--Smith}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node (TypeExE) at (0,0) {{\textsf{Type-$\Pi$}}\xspace: \(C_2\)} ;
\node (TypeE2) at (0,-1) {{\textsf{Type-$\Sigma$}}\xspace: \(C_2^2\)} ;
\node (TypeExE0) at (-3.5,-1) {{\textsf{Type-$\Pi_0$}}\xspace: \(C_6\)} ;
\node (TypeExE1728) at (3.5,-1) {{\textsf{Type-$\Pi_{12^3}$}}\xspace: \(C_4\)} ;
\node (TypeE02) at (-3.5,-2) {{\textsf{Type-$\Sigma_0$}}\xspace: \(C_6\times S_3\)} ;
\node (TypeE0xE1728) at (0,-2) {{\textsf{Type-$\Pi_{0,12^3}$}}\xspace: \(C_{12}\)} ;
\node (TypeE17282) at (3.5,-2) {{\textsf{Type-$\Sigma_{12^3}$}}\xspace: \(C_2^2\rtimes C_4\)} ;
\path (TypeExE0) edge (TypeExE) ;
\path (TypeE2) edge (TypeExE) ;
\path (TypeExE1728) edge (TypeExE) ;
\path (TypeE02) edge (TypeExE0) ;
\path (TypeE02) edge (TypeE2) ;
\path (TypeE17282) edge (TypeExE1728) ;
\path (TypeE17282) edge (TypeE2) ;
\path (TypeE0xE1728) edge (TypeExE0) ;
\path (TypeE0xE1728) edge (TypeExE1728) ;
\node (Mzero) at (-6,-2) {dim \(= 0\)} ;
\node (Mone) at (-6,-1) {dim \(= 1\)} ;
\node (Mtwo) at (-6,0) {dim \(= 2\)} ;
\end{tikzpicture}
\caption{Reduced automorphism groups of elliptic products.
Dimensions are of the corresponding loci
in the 3-dimensional moduli space of PPASes.
Lines connect sub- and super-types.
}
\label{fig:RA-elliptic}
\end{figure}
Every elliptic product \(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\)
has an involution \(\sigma = [1]\times[-1]\)
in \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}')\).
If \(\ensuremath{\mathcal{E}} \cong \ensuremath{\mathcal{E}}'\)
then there is also the involution \(\tau\)
exchanging the factors of the product.
The situation is more complicated
if either or both factors are isomorphic to
one of
\begin{align*}
\ensuremath{\mathcal{E}}_0: y^2 & = x^3 - 1
\qquad \text{with}
&
\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{E}}_{0})
& = \subgrp{
\zeta: (x,y) \mapsto (\zeta_3x,-y)
}
\cong
C_6
\intertext{where \(\zeta_3\) is a primitive 3rd root of unity, or}
\ensuremath{\mathcal{E}}_{12^3}: y^2 & = x^3 - x
\qquad \text{with}
&
\ensuremath{\mathrm{Aut}}(\ensuremath{\mathcal{E}}_{12^3})
& = \subgrp{
\iota: (x,y) \mapsto (-x,\sqrt{-1}y)
}
\cong
C_4
\,.
\end{align*}
When constructing isogenies, we label the \(2\)-torsion of \(\ensuremath{\mathcal{E}}_0\) and
\(\ensuremath{\mathcal{E}}_{12^3}\) as follows:
\begin{align*}
\ensuremath{\mathcal{E}}_{0}[2]
& =
\{
0
,
P_1 = (1,0)
,
P_2 = (\zeta_3,0)
,
P_3 = (\zeta_3^2,0)
\}
\,,
\\
\ensuremath{\mathcal{E}}_{12^3}[2]
& =
\{
0
,
P_1 = (1,0)
,
P_2 = (-1,0)
,
P_3 = (0,0)
\}
\,.
\end{align*}
When navigating isogeny graphs,
we can identify the isomorphism class
of an elliptic product
using the pair of \(j\)-invariants of the factors:
\[
\classof{\ensuremath{\mathcal{E}}_1\times\ensuremath{\mathcal{E}}_2}
\longleftrightarrow
\{j(\ensuremath{\mathcal{E}}_1),j(\ensuremath{\mathcal{E}}_2)\}
\,.
\]
To determine \(\ensuremath{\mathrm{RA}}(\ensuremath{\mathcal{E}}_1\times\ensuremath{\mathcal{E}}_2)\),
we can use the criteria on \(j\)-invariants
given in Table~\ref{tab:RAut-from-j-invariants}.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c||c|c|}
\hline
Type
& Conditions
&
Type
& Conditions
\\
\hline
\hline
{\textsf{Type-$\Pi$}}\xspace
& \(\{j(\ensuremath{\mathcal{E}}_1),j(\ensuremath{\mathcal{E}}_2)\}\cap\{0,1728\} = \emptyset\)
&
\multirow{2}{*}{{\textsf{Type-$\Sigma$}}\xspace}
& \(j(\ensuremath{\mathcal{E}}_1) = j(\ensuremath{\mathcal{E}}_2)\),
\\
\cline{1-2}
{\textsf{Type-$\Pi_0$}}\xspace
& \(j(\ensuremath{\mathcal{E}}_1) = 0\) or \(j(\ensuremath{\mathcal{E}}_2) = 0\)
&
{}
& \(j(\ensuremath{\mathcal{E}}_i) \not\in \{0,1728\}\)
\\
\hline
{\textsf{Type-$\Pi_{12^3}$}}\xspace
& \(j(\ensuremath{\mathcal{E}}_1) = 1728\) or \(j(\ensuremath{\mathcal{E}}_2) = 1728\)
&
{\textsf{Type-$\Sigma_0$}}\xspace
& \(j(\ensuremath{\mathcal{E}}_1) = j(\ensuremath{\mathcal{E}}_2) = 0\)
\\
\hline
{\textsf{Type-$\Pi_{0,12^3}$}}\xspace
& \(\{j(\ensuremath{\mathcal{E}}_1),j(\ensuremath{\mathcal{E}}_2)\} = \{0,1728\}\)
&
{\textsf{Type-$\Sigma_{12^3}$}}\xspace
& \(j(\ensuremath{\mathcal{E}}_1) = j(\ensuremath{\mathcal{E}}_2) = 1728\)
\\
\hline
\end{tabular}
\caption{%
Determining the \ensuremath{\mathrm{RA}}-type of an elliptic product
\(\ensuremath{\mathcal{E}}_1\times\ensuremath{\mathcal{E}}_2\).
}
\label{tab:RAut-from-j-invariants}
\end{table}
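Likewise, Table~\ref{tab:RAut-from-j-invariants} is immediate to implement; in the sketch below (type names are our ASCII renderings) the most special types are tested first, and over \(\ensuremath{\mathbb{F}}_{p^2}\) the values \(0\) and \(1728\) mean the images of those integers.

```python
def elliptic_product_type(j1, j2):
    """RA-type of E1 x E2 from the j-invariants of its factors,
    following the table above; most special types first."""
    if j1 == j2 == 0:
        return "Sigma_0"
    if j1 == j2 == 1728:
        return "Sigma_{12^3}"
    if {j1, j2} == {0, 1728}:
        return "Pi_{0,12^3}"
    if j1 == j2:
        return "Sigma"
    if 0 in (j1, j2):
        return "Pi_0"
    if 1728 in (j1, j2):
        return "Pi_{12^3}"
    return "Pi"
```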
\subsection{Superspecial vertices}
\label{sec:superspecial-count}
Ibukiyama, Katsura, and Oort have computed the precise number of superspecial
genus-2 Jacobians (up to isomorphism) of each reduced automorphism
type~\cite[Theorem 3.3]{1986/Ibukiyama--Katsura--Oort}.
We reproduce their results for \(p > 5\),
completing them with the number of superspecial elliptic products
of each automorphism type
(which can be easily derived from the well-known formula
for the number of supersingular elliptic curves over~\(\ensuremath{\mathbb{F}}_{p^2}\))
in Table~\ref{tab:number-of-superspecial-vertices}.
\begin{definition}
\label{def:epsilons}
For each prime \(p > 5\), we define the following quantities:
\begin{itemize}
\item
\(\epsilon_{1,p} = 1\) if \(p \equiv 3\pmod{4}\), 0 otherwise;
\item
\(\epsilon_{2,p} = 1\) if \(p \equiv 5,7\pmod{8}\), 0 otherwise;
\item
\(\epsilon_{3,p} = 1\) if \(p \equiv 2\pmod{3}\), 0 otherwise;
\item
\(\epsilon_{5,p} = 1\) if \(p \equiv 4 \pmod{5}\), 0 otherwise;
\item
\(N_p = (p-1)/12 - \epsilon_{1,p}/2 - \epsilon_{3,p}/3\).
\end{itemize}
Note that \(N_p\), \(\epsilon_{1,p}\), and \(\epsilon_{3,p}\)
count the isomorphism classes of supersingular elliptic curves
over \(\ensuremath{\mathbb{F}}_{p^2}\)
with reduced automorphism group of order \(1\), \(2\), and \(3\),
respectively.
\end{definition}
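These quantities are straightforward to compute; a small Python sketch (names ours):

```python
from fractions import Fraction

def epsilons_and_Np(p):
    """The quantities of the definition above, for a prime p > 5."""
    e1 = 1 if p % 4 == 3 else 0
    e2 = 1 if p % 8 in (5, 7) else 0
    e3 = 1 if p % 3 == 2 else 0
    e5 = 1 if p % 5 == 4 else 0
    Np = Fraction(p - 1, 12) - Fraction(e1, 2) - Fraction(e3, 3)
    assert Np.denominator == 1  # N_p is an integer for every prime p > 5
    return e1, e2, e3, e5, int(Np)
```

For instance, \(p = 23\) gives \(N_p = 1\) and \(\epsilon_{1,p} = \epsilon_{3,p} = 1\), matching the three supersingular \(j\)-invariants over \(\ensuremath{\mathbb{F}}_{23^2}\).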
\begin{table}[ht]
\centering
\begin{tabular}{|r|l||r|l|}
\hline
Type & Vertices in \(\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace\)
&
Type & Vertices in \(\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace\)
\\
\hline
\hline
\multirow{2}{*}{\textsf{Type-I}\xspace}
& \( \frac{1}{48}(p-1)(p-17) \)
&
{\textsf{Type-$\Pi$}}\xspace & \( \frac{1}{2}N_p(N_p - 1) \)
\\
\cline{3-4}
{}
& \qquad \quad \(
+ \frac{1}{4}\epsilon_{1,p}
+ \epsilon_{2,p}
+ \epsilon_{3,p}
\)
&
{\textsf{Type-$\Pi_0$}}\xspace & \( \epsilon_{3,p}N_p \)
\\
\hline
\textsf{Type-II}\xspace & \(\epsilon_{5,p}\)
&
{\textsf{Type-$\Pi_{12^3}$}}\xspace & \( \epsilon_{1,p}N_p \)
\\
\hline
\textsf{Type-III}\xspace & \(
\frac{3}{2}N_p
+ \frac{1}{2}\epsilon_{1,p}
- \frac{1}{2}\epsilon_{2,p}
- \frac{1}{2}\epsilon_{3,p}
\)
&
{\textsf{Type-$\Pi_{0,12^3}$}}\xspace & \(\epsilon_{1,p}\cdot\epsilon_{3,p}\)
\\
\hline
\textsf{Type-IV}\xspace & \(2N_p + \epsilon_{1,p} - \epsilon_{2,p}\)
&
{\textsf{Type-$\Sigma$}}\xspace & \(N_p\)
\\
\hline
\textsf{Type-V}\xspace & \(\epsilon_{3,p}\)
&
{\textsf{Type-$\Sigma_0$}}\xspace & \(\epsilon_{3,p}\)
\\
\hline
\textsf{Type-VI}\xspace & \(\epsilon_{2,p}\)
&
{\textsf{Type-$\Sigma_{12^3}$}}\xspace & \(\epsilon_{1,p}\)
\\
\hline
\textsf{Type-A}\xspace & \multicolumn{3}{l|}{\(
\frac{1}{2880}(p-1)(p^2-35p+346)
- \frac{1}{16}\epsilon_{1,p}
- \frac{1}{4}\epsilon_{2,p}
- \frac{2}{9}\epsilon_{3,p}
- \frac{1}{5}\epsilon_{5,p}
\)}
\\
\hline
\end{tabular}
\caption{%
The number of superspecial vertices
of each \(\ensuremath{\mathrm{RA}}\)-type.
}
\label{tab:number-of-superspecial-vertices}
\end{table}
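As an internal consistency check on Table~\ref{tab:number-of-superspecial-vertices}, all fourteen expressions must evaluate to nonnegative integers at every prime \(p > 5\); the sketch below (our own transcription of the table, with exact rational arithmetic) makes this easy to verify for small primes.

```python
from fractions import Fraction

def superspecial_vertex_counts(p):
    """Evaluate every entry of the table above at a prime p > 5.
    Each value should come out a nonnegative integer."""
    F = Fraction
    e1 = int(p % 4 == 3); e2 = int(p % 8 in (5, 7))
    e3 = int(p % 3 == 2); e5 = int(p % 5 == 4)
    Np = F(p - 1, 12) - F(e1, 2) - F(e3, 3)
    return {
        "I":    F((p - 1)*(p - 17), 48) + F(e1, 4) + e2 + e3,
        "II":   F(e5),
        "III":  F(3, 2)*Np + F(e1, 2) - F(e2, 2) - F(e3, 2),
        "IV":   2*Np + e1 - e2,
        "V":    F(e3),
        "VI":   F(e2),
        "A":    F((p - 1)*(p**2 - 35*p + 346), 2880)
                - F(e1, 16) - F(e2, 4) - F(2*e3, 9) - F(e5, 5),
        "Pi":          Np*(Np - 1)/2,
        "Pi_0":        e3*Np,
        "Pi_{12^3}":   e1*Np,
        "Pi_{0,12^3}": F(e1*e3),
        "Sigma":         Np,
        "Sigma_0":       F(e3),
        "Sigma_{12^3}":  F(e1),
    }
```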
If the reader chooses suitable values of \(p\)
and computes \(\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace\),
then they will find graphs built from overlapping
copies of the neighbourhoods described in~\S\ref{sec:atlas}.
We will see that \(\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace\)
is much more complicated
than the elliptic \(2\)-isogeny graph.
\section{Introduction}
This article is an illustrated guide to the Richelot isogeny graph.
Following Katsura and Takashima~\cite{2020/Katsura--Takashima},
we present diagrams of the neighbourhoods of general vertices of each type.
Going further,
we also compute diagrams of neighbourhoods of general edges,
which can be used to glue the vertex neighbourhoods together.
Our aim is to build intuition on the various combinatorial
structures in the graph,
providing concrete examples for some of the more pathological cases.
The authors have used the results presented here
to verify computations and form conjectures
when investigating the behaviour of random walks
in superspecial isogeny graphs~\cite{2020/Florit--Smith}.
We work over a ground field \ensuremath{\Bbbk}\xspace
of characteristic not \(2\), \(3\), or \(5\).
In our application to superspecial PPASes,
\(\ensuremath{\Bbbk}\xspace = \ensuremath{\mathbb{F}}_{p^2}\),
though our computations were mostly done over function fields over cyclotomic
fields.
Let \(\ensuremath{\mathcal{A}}/\ensuremath{\Bbbk}\xspace\) be a principally polarized abelian surface (PPAS).
A \emph{\((2,2)\)-isogeny}, or \emph{Richelot isogeny},
is an isogeny \(\phi: \ensuremath{\mathcal{A}} \to \ensuremath{\mathcal{A}}'\) of PPASes
whose kernel is a maximal \(2\)-Weil isotropic subgroup of \(\ensuremath{\mathcal{A}}[2]\).
Such a \(\phi\)
has kernel isomorphic to \((\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2\);
it respects the principal polarizations
\(\lambda\) and \(\lambda'\) on \(\ensuremath{\mathcal{A}}\) and \(\ensuremath{\mathcal{A}}'\), respectively,
in the sense that
\(\phi^*(\lambda') = 2\lambda\);
and its (Rosati) \emph{dual isogeny} \(\dualof{\phi}: \ensuremath{\mathcal{A}}' \to \ensuremath{\mathcal{A}}\)
satisfies \(\dualof{\phi}\circ\phi = [2]_{\ensuremath{\mathcal{A}}}\).
The \emph{\((2,2)\)-isogeny} or \emph{Richelot isogeny graph}
is the directed weighted multigraph defined as follows.
The \emph{vertices} are isomorphism classes of
PPASes over \(\ensuremath{\Bbbk}\xspace\).
If \(\ensuremath{\mathcal{A}}\) is a PPAS,
then \(\classof{\ensuremath{\mathcal{A}}}\) denotes the corresponding vertex.
The \emph{edges} are isomorphism classes of
\((2,2)\)-isogenies
(\(\phi_1: \ensuremath{\mathcal{A}}_1 \to \ensuremath{\mathcal{A}}_1'\)
and \(\phi_2: \ensuremath{\mathcal{A}}_2 \to \ensuremath{\mathcal{A}}_2'\)
are isomorphic
if there are isomorphisms
of PPASes
$\alpha \colon \ensuremath{\mathcal{A}}_1 \to \ensuremath{\mathcal{A}}_2$
and
$\beta\colon \ensuremath{\mathcal{A}}_1' \to \ensuremath{\mathcal{A}}_2'$
such that
\(\phi_2\circ\alpha = \beta\circ\phi_1\)).
The edges are \emph{weighted} by the number of distinct kernels yielding
isogenies in their class.
The weight of an edge \(\classof{\phi}\)
is denoted by \(w(\classof{\phi})\).
If \(\classof{\phi}: \classof{\ensuremath{\mathcal{A}}}\to \classof{\ensuremath{\mathcal{A}}'}\) is an edge,
then $w(\classof{\phi}) = n$
if and only if
there are \(n\) kernel subgroups \(K \subset \ensuremath{\mathcal{A}}[2]\)
such that \(\ensuremath{\mathcal{A}}' \cong \ensuremath{\mathcal{A}}/K\)
(this is independent of the choice of
representative isogeny \(\phi\)).
There are fifteen maximal \(2\)-Weil-isotropic subgroups in \(\ensuremath{\mathcal{A}}[2]\),
though some (or all) might not be defined over \(\ensuremath{\Bbbk}\xspace\).
The sum of the weights of the edges leaving any vertex
is therefore at most 15.
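The count of fifteen can be verified by brute force: kernels of \((2,2)\)-isogenies are the \(2\)-dimensional subspaces of \(\ensuremath{\mathcal{A}}[2] \cong (\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^4\) that are isotropic for the \(2\)-Weil pairing, which we may model by the standard symplectic form on \(\ensuremath{\mathbb{F}}_2^4\). A short enumeration (our own sketch):

```python
from itertools import product

def weil_pairing_2(u, v):
    """Standard symplectic form on (Z/2)^4, modelling the 2-Weil
    pairing on A[2] (with additive values in Z/2)."""
    return (u[0]*v[2] + u[2]*v[0] + u[1]*v[3] + u[3]*v[1]) % 2

nonzero = [v for v in product((0, 1), repeat=4) if any(v)]

def span(u, v):
    return frozenset({(0, 0, 0, 0), u, v,
                      tuple((a + b) % 2 for a, b in zip(u, v))})

# A maximal 2-Weil-isotropic subgroup is a 2-dimensional isotropic
# subspace; since the form is alternating, isotropy of the span of
# u and v reduces to the single condition <u, v> = 0.
lagrangians = {span(u, v)
               for u in nonzero for v in nonzero
               if u != v and weil_pairing_2(u, v) == 0}
```

The set `lagrangians` has exactly 15 elements.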
The isogeny graph breaks up into connected components within isogeny classes.
We are particularly interested in the superspecial isogeny class.
Recall that a PPAS \(\ensuremath{\mathcal{A}}/\ensuremath{\overline{\mathbb{F}}}_p\) is \emph{superspecial}
if its Hasse--Witt matrix vanishes identically.
Equivalently, \(\ensuremath{\mathcal{A}}\) is superspecial
if it is isomorphic \emph{as an unpolarized abelian variety}
to a product of supersingular elliptic curves.
For background on superspecial and supersingular
abelian varieties in low dimension,
we refer to
Ibukiyama, Katsura, and Oort~\cite{1986/Ibukiyama--Katsura--Oort}
and Brock's thesis~\cite{1993/Brock}.
For more general results, we refer to Li and Oort~\cite{1998/Li--Oort}.
\begin{definition}
The superspecial Richelot isogeny graph
is the subgraph \(\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace\) of
the Richelot isogeny graph over \(\ensuremath{\mathbb{F}}_{p^2}\)
supported on the superspecial vertices.
\end{definition}
Recall that \ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace has
\(p^3/2880 + O(p)\) vertices
(see~\S\ref{sec:superspecial-count} for a more precise statement),
and is connected~\cite{2020/Jordan--Zaytman}.
If \(\ensuremath{\mathcal{A}}/\ensuremath{\overline{\mathbb{F}}}_p\) represents a vertex in \ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace,
then the invariants corresponding to \(\classof{\ensuremath{\mathcal{A}}}\)
are defined over \(\ensuremath{\mathbb{F}}_{p^2}\),
as are all 15 of the \((2,2)\)-isogeny kernels---so
\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace is a \(15\)-regular graph.
It has interesting number-theoretic properties and applications
(such as Mestre's \emph{méthode des graphes}~\cite{1986/Mestre}),
and potential cryptographic applications
(including~\cite{2009/CGLgenusg,
2018/Takashima,
2019/Flynn--Ti,
2020/Castryck--Decru--Smith,
2020/Costello--Smith}).
All of these applications
depend on a clear understanding of the structure of~\ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace:
for example,
the local neighbourhoods of vertices with extra automorphisms
(and their inter-relations)
affect
the expansion properties
and random-walk behaviour of \ensuremath{\Gamma^{SS}_{2}(2;p)}\xspace,
as we see in~\cite{2020/Florit--Smith}.
\section{Richelot isogenies and isogeny graphs}
\label{sec:basics}
There are two kinds of PPASes:
products of elliptic curves (with the product polarization)
and Jacobians of genus-2 curves.
The algorithmic construction of isogenies
depends fundamentally on whether the PPASes are Jacobians or elliptic products.
We recall the Jacobian case
in~\S\ref{sec:Richelot},
and the elliptic product case
in~\S\ref{sec:elliptic-product-isogenies}.
\subsection{Richelot isogenies}
\label{sec:Richelot}
Let \(\ensuremath{\mathcal{C}}: y^2 = F(x)\) be a genus-2 curve,
with \(F\) squarefree of degree \(5\) or \(6\).
The kernels of \((2,2)\)-isogenies from \(\Jac{\ensuremath{\mathcal{C}}}\)
correspond to factorizations of \(F\) into quadratics
(of which one may be linear, if \(\deg(F) = 5\)):
\[
\ensuremath{\mathcal{C}}: y^2 = F(x) = F_1(x)F_2(x)F_3(x)
\,,
\]
up to permutation of the \(F_i\) and constant multiples.
We call such factorizations \emph{quadratic splittings}.
The kernel (and isogeny) is defined over \(\ensuremath{\Bbbk}\xspace\)
if the splitting is.
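Counting splittings is a matter of pairing up the six roots of \(F\) over \(\ensuremath{\overline{\Bbbk}}\xspace\) (counting a root at infinity when \(\deg(F) = 5\)): there are \(6!/(2^3\cdot 3!) = 15\) partitions into three unordered pairs, matching the fifteen \((2,2)\)-kernels. A quick enumeration (our own sketch):

```python
def pair_partitions(items):
    """All partitions of a list of distinct roots into unordered pairs."""
    items = list(items)
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    out = []
    for mate in rest:  # the first root must be paired with someone
        remaining = [r for r in rest if r != mate]
        for sub in pair_partitions(remaining):
            out.append([(first, mate)] + sub)
    return out
```

Applied to six roots this returns 15 partitions (and 3 for the four roots of an elliptic curve's \(2\)-torsion polynomial).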
Fix one such
quadratic splitting \(\{F_1,F_2,F_3\}\);
then the corresponding subgroup \(K\subset\Jac{\ensuremath{\mathcal{C}}}[2]\)
is the kernel of a \((2,2)\)-isogeny
\(\phi: \Jac{\ensuremath{\mathcal{C}}} \to \Jac{\ensuremath{\mathcal{C}}}/K\).
For each \(1 \le i \le 3\),
we write \(F_i(x) = F_{i,2}x^2 + F_{i,1}x + F_{i,0}\).
Now let
\[
\delta
=
\delta(F_1,F_2,F_3)
:=
\begin{vmatrix}
F_{1,0} & F_{1,1} & F_{1,2}
\\
F_{2,0} & F_{2,1} & F_{2,2}
\\
F_{3,0} & F_{3,1} & F_{3,2}
\end{vmatrix}
\,.
\]
If \(\delta(F_1,F_2,F_3) \not= 0\),
then
\(\Jac{\ensuremath{\mathcal{C}}}/K\) is isomorphic to a Jacobian \(\Jac{\ensuremath{\mathcal{C}}'}\),
which we can compute using Richelot's algorithm
(see~\cite{1988/Bost--Mestre} and~\cite[\S8]{2005/Smith}):
\(\ensuremath{\mathcal{C}}'\) is defined by
\[
\ensuremath{\mathcal{C}}': y^2 = G_1(x)G_2(x)G_3(x)
\quad
\text{where}
\quad
G_i(x) := \frac{1}{\delta}(F_j'(x)F_k(x) - F_k'(x)F_j(x))
\]
for each cyclic permutation \((i,j,k)\) of \((1,2,3)\).
The quadratic splitting \(\{G_1,G_2,G_3\}\)
corresponds to the kernel of the dual isogeny
\(\dualof{\phi}: \Jac{\ensuremath{\mathcal{C}}'}\to\Jac{\ensuremath{\mathcal{C}}}\).
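For concreteness, here is a small Python sketch of this step (our own encoding of quadratics as coefficient triples, with exact rational arithmetic):

```python
from fractions import Fraction

def richelot_step(F1, F2, F3):
    """One Richelot step, with quadratics encoded as coefficient triples
    (c0, c1, c2) for c0 + c1*x + c2*x^2.  Returns (delta, [G1, G2, G3]),
    with Gs = None in the split case delta = 0."""
    Fs = (F1, F2, F3)
    delta = (F1[0]*(F2[1]*F3[2] - F2[2]*F3[1])
             - F1[1]*(F2[0]*F3[2] - F2[2]*F3[0])
             + F1[2]*(F2[0]*F3[1] - F2[1]*F3[0]))
    if delta == 0:
        return 0, None
    def G(j, k):
        Fj, Fk = Fs[j], Fs[k]
        dFj, dFk = (Fj[1], 2*Fj[2]), (Fk[1], 2*Fk[2])  # F', as linear polys
        # (Fj' * Fk - Fk' * Fj) / delta; the cubic terms cancel.
        c0 = dFj[0]*Fk[0] - dFk[0]*Fj[0]
        c1 = dFj[0]*Fk[1] + dFj[1]*Fk[0] - dFk[0]*Fj[1] - dFk[1]*Fj[0]
        c2 = dFj[0]*Fk[2] + dFj[1]*Fk[1] - dFk[0]*Fj[2] - dFk[1]*Fj[1]
        return tuple(Fraction(c, delta) for c in (c0, c1, c2))
    return delta, [G(1, 2), G(2, 0), G(0, 1)]
```

For example, splitting \(F = x(x-1)\cdot(x-2)(x-3)\cdot(x-4)(x-5)\) this way gives \(\delta = 32\), while regrouping the same roots as \(\{0,5\}\), \(\{1,4\}\), \(\{2,3\}\) gives \(\delta = 0\), signalling the split case.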
If \(\delta(F_1,F_2,F_3) = 0\),
then \(\Jac{\ensuremath{\mathcal{C}}}/K\) is isomorphic to
an elliptic product \(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\),
which we can compute as follows.
There exist linear polynomials \(U\) and \(V\)
such that
\(F_1 = \alpha_1 U^2 + \beta_1 V^2\)
and \(F_2 = \alpha_2 U^2 + \beta_2 V^2\)
for some \(\alpha_1\), \(\beta_1\), \(\alpha_2\), and \(\beta_2\);
and since in this case
\(F_3\) is a linear combination of \(F_1\) and \(F_2\),
we must have
\(F_3 = \alpha_3 U^2 + \beta_3 V^2\)
for some \(\alpha_3\) and \(\beta_3\).
The elliptic factors are defined by
\[
\ensuremath{\mathcal{E}}: y^2 = \prod_{i=1}^3(\alpha_i x + \beta_i)
\qquad
\text{and}
\qquad
\ensuremath{\mathcal{E}}': y^2 = \prod_{i=1}^3(\beta_i x + \alpha_i)
\,,
\]
and
the isogeny \(\phi: \Jac{\ensuremath{\mathcal{C}}} \to \ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\)
is induced by the product of the
double covers
\(\pi: \ensuremath{\mathcal{C}} \to \ensuremath{\mathcal{E}}\) and \(\pi': \ensuremath{\mathcal{C}} \to \ensuremath{\mathcal{E}}'\)
mapping \((x,y)\) to
\((U^2/V^2,y/V^3)\) and \((V^2/U^2,y/U^3)\), respectively.
\subsection{Isogenies from elliptic products}
\label{sec:elliptic-product-isogenies}
Consider a generic pair of elliptic curves
\begin{align*}
\ensuremath{\mathcal{E}}: y^2 & = (x - s_1)(x - s_2)(x - s_3)
\,,
\\
\ensuremath{\mathcal{E}}': y^2 & = (x - s_1')(x - s_2')(x - s_3')
\,.
\end{align*}
We have
\( \ensuremath{\mathcal{E}}[2] = \{ 0_{\ensuremath{\mathcal{E}}}, P_1, P_2, P_3 \} \)
and
\( \ensuremath{\mathcal{E}}'[2] = \{ 0_{\ensuremath{\mathcal{E}}'}, P_1', P_2', P_3' \} \)
where \(P_i := (s_i,0)\)
and \(P_i' := (s_i',0)\).
For each \(1 \le i \le 3\),
we let
\[
\psi_i: \ensuremath{\mathcal{E}} \longrightarrow \ensuremath{\mathcal{E}}_i := \ensuremath{\mathcal{E}}/\subgrp{P_i}
\quad
\text{and}
\quad
\psi_i': \ensuremath{\mathcal{E}}' \to \ensuremath{\mathcal{E}}_i' := \ensuremath{\mathcal{E}}'/\subgrp{P_i'}
\]
be the quotient \(2\)-isogenies.
These can be computed using Vélu's formul\ae{}~\cite{1971/Velu}.
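Specialised to a \(2\)-torsion point \(Q = (x_0, 0)\) on \(E: y^2 = x^3 + ax + b\), Vélu's formul\ae{} take a particularly simple shape: with \(t = 3x_0^2 + a\) and \(w = x_0 t\), the codomain is \(y^2 = x^3 + (a - 5t)x + (b - 7w)\), and the isogeny sends \((x, y)\) to \((x + t/(x - x_0),\; y(1 - t/(x - x_0)^2))\). A sketch (names ours):

```python
from fractions import Fraction

def velu_2isogeny(a, b, x0):
    """2-isogeny from E: y^2 = x^3 + a*x + b with kernel <(x0, 0)>,
    via Velu's formulae specialised to a 2-torsion point.  Returns
    the codomain coefficients (a', b') and the map phi; phi(x, y)
    treats y formally (the map is linear in y)."""
    assert x0**3 + a*x0 + b == 0, "(x0, 0) must be a 2-torsion point of E"
    t = 3*x0**2 + a
    w = x0*t
    a2, b2 = a - 5*t, b - 7*w   # codomain E': y^2 = x^3 + a2*x + b2
    def phi(x, y):
        return x + Fraction(t, x - x0), y*(1 - Fraction(t, (x - x0)**2))
    return (a2, b2), phi
```

For \(E: y^2 = x^3 - x\) with kernel \(\subgrp{(0,0)}\) this gives the codomain \(y^2 = x^3 + 4x\), again of \(j\)-invariant \(1728\).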
Nine of the fifteen kernel subgroups of \((\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}')[2]\)
correspond to products of elliptic \(2\)-isogeny kernels.
Namely,
for each \( 1 \le i, j \le 3 \)
we have the kernel
\[
K_{i,j}
:=
\subgrp{
(P_i,0_{\ensuremath{\mathcal{E}}'})
,
(0_{\ensuremath{\mathcal{E}}},P_j')
}
\subset
(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}')[2]
\]
of the product isogeny
\[
\phi_{i,j}
:=
\psi_i\times\psi_j:
\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'
\longrightarrow
\ensuremath{\mathcal{E}}_i\times\ensuremath{\mathcal{E}}_j'
\cong
(\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}')/K_{i,j}
\,.
\]
The other six kernels correspond to \(2\)-Weil anti-isometries
\(\ensuremath{\mathcal{E}}[2]\cong\ensuremath{\mathcal{E}}'[2]\):
they are
\[
K_{\pi}
:=
\{
(0_{\ensuremath{\mathcal{E}}},0_{\ensuremath{\mathcal{E}}'}),
(P_{1},P_{\pi(1)}'),
(P_{2},P_{\pi(2)}'),
(P_{3},P_{\pi(3)}')
\}
\quad
\text{for }
\pi \in \operatorname{Sym}(\{1,2,3\})
\,,
\]
with quotient isogenies
\[
\phi_{\pi}: \ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}' \longrightarrow \ensuremath{\mathcal{A}}_{\pi} := (\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}')/K_{\pi}
\,.
\]
If the anti-isometry \(P_i \mapsto P_{\pi(i)}'\)
is induced by an isomorphism \(\ensuremath{\mathcal{E}} \cong \ensuremath{\mathcal{E}}'\),
then \(\ensuremath{\mathcal{A}}_{\pi}\cong\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\).
Otherwise,
following~\cite[Prop.~4]{2000/Howe--Leprevost--Poonen},
\(\ensuremath{\mathcal{A}}_{\pi}\) is the Jacobian of a genus-2 curve
\[
\ensuremath{\mathcal{C}}_{\pi}: y^2 = -F_1(x)F_2(x)F_3(x)
\]
where
\[
F_i(x) := A(s_j-s_i)(s_i-s_k)x^2
+ B(s'_{j}-s'_{i})(s'_{i}-s'_{k})
\]
for each cyclic permutation \((i,j,k)\) of \((1,2,3)\),
with
\begin{align*}
A & := \frac{a_1}{a_2}\prod (s'_i - s'_j)^2
\text{ where }
a_1 := \sum \frac{(s_j - s_i)^2}{s'_j - s'_i}
\text{ and }
a_2 := \sum s_i(s'_k - s'_j)
\,,
\\
B & := \frac{b_1}{b_2}\prod (s_i - s_j)^2
\text{ where }
b_1 := \sum \frac{(s'_j - s'_i)^2}{s_j - s_i}
\text{ and }
b_2 := \sum s'_i(s_k - s_j)
\,,
\end{align*}
where the sums and products
are over cyclic permutations \((i,j,k)\) of \((1,2,3)\).
The dual isogeny \(\dualof{\phi_{\pi}}: \Jac{\ensuremath{\mathcal{C}}_{\pi}} \to
\ensuremath{\mathcal{E}}\times\ensuremath{\mathcal{E}}'\) corresponds
to the splitting \(\{F_1,F_2,F_3\}\).
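As a sanity check on these formul\ae{}, the following sketch (our own computation) evaluates them for \(\ensuremath{\mathcal{E}}: y^2 = x(x-1)(x-2)\) and \(\ensuremath{\mathcal{E}}': y^2 = x(x-1)(x-3)\) with \(\pi\) the identity (not induced by an isomorphism \(\ensuremath{\mathcal{E}} \cong \ensuremath{\mathcal{E}}'\), since the \(j\)-invariants differ). By the displayed formula each \(F_i\) has no linear term, so \(U = x\) and \(V = 1\) exhibit the splitting \(F_i = \alpha_i U^2 + \beta_i V^2\), and the two elliptic quotients of \(\Jac{\ensuremath{\mathcal{C}}_{\pi}}\) should recover \(j(\ensuremath{\mathcal{E}})\) and \(j(\ensuremath{\mathcal{E}}')\).

```python
from fractions import Fraction as Fr

def j_from_roots(r1, r2, r3):
    """j-invariant of y^2 = (x - r1)(x - r2)(x - r3), via lambda."""
    lam = (Fr(r3) - r1) / (r2 - r1)
    return 256*(lam**2 - lam + 1)**3 / (lam**2 * (lam - 1)**2)

s, t = (0, 1, 2), (0, 1, 3)          # roots of E and E'; pi = identity
cyc = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

a1 = sum(Fr((s[j] - s[i])**2, t[j] - t[i]) for i, j, k in cyc)
a2 = sum(s[i]*(t[k] - t[j]) for i, j, k in cyc)
b1 = sum(Fr((t[j] - t[i])**2, s[j] - s[i]) for i, j, k in cyc)
b2 = sum(t[i]*(s[k] - s[j]) for i, j, k in cyc)
prod_s = prod_t = 1
for i, j, k in cyc:
    prod_s *= (s[i] - s[j])**2
    prod_t *= (t[i] - t[j])**2
A = (a1/a2)*prod_t
B = (b1/b2)*prod_s

# F_i = alpha_i x^2 + beta_i; the two elliptic quotients of Jac(C_pi)
# have roots -beta_i/alpha_i and -alpha_i/beta_i respectively.
alpha = [A*(s[j] - s[i])*(s[i] - s[k]) for i, j, k in cyc]
beta = [B*(t[j] - t[i])*(t[i] - t[k]) for i, j, k in cyc]
j_quot = {j_from_roots(*[-b/a for a, b in zip(alpha, beta)]),
          j_from_roots(*[-a/b for a, b in zip(alpha, beta)])}
```

Here `j_quot` indeed equals \(\{j(\ensuremath{\mathcal{E}}), j(\ensuremath{\mathcal{E}}')\}\).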
|
1,108,101,566,495 | arxiv | \section{Introduction}
The {\it IJCAI--23 Proceedings} will be printed from electronic
manuscripts submitted by the authors. These must be PDF ({\em Portable
Document Format}) files formatted for 8-1/2$''$ $\times$ 11$''$ paper.
\subsection{Length of Papers}
All paper {\em submissions} to the main track must have a maximum of seven pages, plus at most two for references / acknowledgements / contribution statement / ethics statement.
The length rules may change for final camera-ready versions of accepted papers and
differ between tracks. Some tracks may disallow any contents other than the references in the last two pages, whereas others allow for any content in all pages. Similarly, some tracks allow you to buy a few extra pages should you want to, whereas others don't.
If your paper is accepted, please carefully read the notifications you receive, and check the proceedings submission information website\footnote{\url{https://proceedings.ijcai.org/info}} to know how many pages you can use for your final version. That website holds the most up-to-date information regarding paper length limits at all times.
\subsection{Word Processing Software}
As detailed below, IJCAI has prepared and made available a set of
\LaTeX{} macros and a Microsoft Word template for use in formatting
your paper. If you are using some other word processing software, please follow the format instructions given below and ensure that your final paper looks as much like this sample as possible.
\section{Style and Format}
\LaTeX{} and Word style files that implement these instructions
can be retrieved electronically. (See Section~\ref{stylefiles} for
instructions on how to obtain these files.)
\subsection{Layout}
Print manuscripts two columns to a page, in the manner in which these
instructions are printed. The exact dimensions for pages are:
\begin{itemize}
\item left and right margins: .75$''$
\item column width: 3.375$''$
\item gap between columns: .25$''$
\item top margin---first page: 1.375$''$
\item top margin---other pages: .75$''$
\item bottom margin: 1.25$''$
\item column height---first page: 6.625$''$
\item column height---other pages: 9$''$
\end{itemize}
All measurements assume an 8-1/2$''$ $\times$ 11$''$ page size. For
A4-size paper, use the given top and left margins, column width,
height, and gap, and modify the bottom and right margins as necessary.
\subsection{Format of Electronic Manuscript}
For the production of the electronic manuscript, you must use Adobe's
{\em Portable Document Format} (PDF). A PDF file can be generated, for
instance, on Unix systems using {\tt ps2pdf} or on Windows systems
using Adobe's Distiller. There is also a website with free software
and conversion services: \url{http://www.ps2pdf.com}. For reasons of
uniformity, use of Adobe's {\em Times Roman} font is strongly suggested.
In \LaTeX2e{} this is accomplished by writing
\begin{quote}
\mbox{\tt $\backslash$usepackage\{times\}}
\end{quote}
in the preamble.\footnote{You may want to also use the package {\tt
latexsym}, which defines all symbols known from the old \LaTeX{}
version.}
Additionally, it is of utmost importance to specify the {\bf
letter} format (corresponding to 8-1/2$''$ $\times$ 11$''$) when
formatting the paper. When working with {\tt dvips}, for instance, one
should specify {\tt -t letter}.
\subsection{Papers Submitted for Review vs. Camera-ready Papers}
In this document, we distinguish between papers submitted for review (henceforth, submissions) and camera-ready versions, i.e., accepted papers that will be included in the conference proceedings. The present document provides information to be used by both types of papers (submissions / camera-ready). There are relevant differences between the two versions. Find them next.
\subsubsection{Anonymity}
For the main track and some of the special tracks, submissions must be anonymous; for other special tracks they must be non-anonymous. The camera-ready versions for all tracks are non-anonymous. When preparing your submission, please check the track-specific instructions regarding anonymity.
\subsubsection{Submissions}
The following instructions apply to submissions:
\begin{itemize}
\item If your track requires submissions to be anonymous, they must be fully anonymized as discussed in the Modifications for Blind Review subsection below; in this case, Acknowledgements and Contribution Statement sections are not allowed.
\item If your track requires non-anonymous submissions, you should provide all author information at the time of submission, just as for camera-ready papers (see below); Acknowledgements and Contribution Statement sections are allowed, but optional.
\item Submissions must include line numbers to facilitate feedback in the review process. Enable line numbers by uncommenting the command {\tt \textbackslash{}linenumbers} in the preamble.\footnote{New in IJCAI--23}
\item The limit on the number of content pages is \emph{strict}. All papers exceeding the limits will be desk rejected.
\end{itemize}
\subsubsection{Camera-Ready Papers}
The following instructions apply to camera-ready papers:
\begin{itemize}
\item Authors and affiliations are mandatory. Explicit self-references are allowed. It is strictly forbidden to add authors not declared at submission time.
\item Acknowledgements and Contribution Statement sections are allowed, but optional.
\item Line numbering must be disabled. To achieve this, comment or disable {\tt \textbackslash{}linenumbers} in the preamble.
\item For some of the tracks, you can exceed the page limit by purchasing extra pages.
\end{itemize}
\subsection{Title and Author Information}
Center the title on the entire width of the page in a 14-point bold
font. The title must be capitalized using Title Case. For non-anonymous papers, author names and affiliations should appear below the title. Center author name(s) in 12-point bold font. On the following line(s) place the affiliations.
\subsubsection{Author Names}
Each author name must be followed by:
\begin{itemize}
\item A newline {\tt \textbackslash{}\textbackslash{}} command for the last author.
\item An {\tt \textbackslash{}And} command for the second to last author.
\item An {\tt \textbackslash{}and} command for the other authors.
\end{itemize}
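For instance, a hedged sketch of these separators for three placeholder authors (this fragment goes inside the template's author block; names are illustrative only):

\begin{quote}
\begin{verbatim}
First Author
\and      % for the other authors
Second Author
\And      % for the second-to-last author
Third Author
\\        % newline after the last author
\end{verbatim}
\end{quote}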
\subsubsection{Affiliations}
After all authors, start the affiliations section by using the {\tt \textbackslash{}affiliations} command.
Each affiliation must be terminated by a newline {\tt \textbackslash{}\textbackslash{}} command. Make sure that you include the newline after the last affiliation, too.
\subsubsection{Mapping Authors to Affiliations}
In some scenarios, the affiliation of each author is clear without any further indication (\emph{e.g.}, all authors share the same affiliation, or each author has a single, distinct affiliation). In these situations you don't need to do anything special.
In more complex scenarios you will have to clearly indicate the affiliation(s) for each author. This is done by using numeric math superscripts {\tt \$\{\^{}$i,j, \ldots$\}\$}. You must use numbers, not symbols, because those are reserved for footnotes in this section (should you need them). Check the authors definition in this example for reference.
\subsubsection{Emails}
This section is optional, and can be omitted entirely if you prefer. If you want to include e-mails, you should either include all authors' e-mails or just the contact author(s)' ones.
Start the e-mails section with the {\tt \textbackslash{}emails} command. After that, write all emails you want to include separated by a comma and a space, following the order used for the authors (\emph{i.e.}, the first e-mail should correspond to the first author, the second e-mail to the second author and so on).
You may ``contract'' consecutive e-mails on the same domain as shown in this example (write the users' part within curly brackets, followed by the domain name). Only e-mails of the exact same domain may be contracted. For instance, you cannot contract ``[email protected]'' and ``[email protected]'' because the domains are different.
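Putting the pieces together, here is a hedged sketch of a complete title block for three placeholder authors with superscript affiliation mapping and one contracted e-mail pair (all names and domains are illustrative, and the exact brace escaping may depend on the style file's macro definitions):

\begin{quote}
\begin{verbatim}
\author{
Ada Lovelace$^1$
\and
Alan Turing$^{1,2}$
\And
Grace Hopper$^2$
\\
\affiliations
$^1$Example University\\
$^2$Example Research Lab\\
\emails
\{ada, alan\}@example.edu,  % same domain: contracted
[email protected]           % different domain: separate
}
\end{verbatim}
\end{quote}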
\subsubsection{Modifications for Blind Review}
When submitting to a track that requires anonymous submissions,
in order to make blind reviewing possible, authors must omit their
names, affiliations and e-mails. In place
of names, affiliations and e-mails, you can optionally provide the submission number and/or
a list of content areas. When referring to one's own work,
use the third person rather than the
first person. For example, say, ``Previously,
Gottlob~\shortcite{gottlob:nonmon} has shown that\ldots'', rather
than, ``In our previous work~\cite{gottlob:nonmon}, we have shown
that\ldots'' Try to avoid including any information in the body of the
paper or references that would identify the authors or their
institutions, such as acknowledgements. Such information can be added post-acceptance to be included in the camera-ready
version.
Please also make sure that your paper metadata does not reveal
the authors' identities.
\subsection{Abstract}
Place the abstract at the beginning of the first column 3$''$ from the
top of the page, unless that does not leave enough room for the title
and author information. Use a slightly smaller width than in the body
of the paper. Head the abstract with ``Abstract'' centered above the
body of the abstract in a 12-point bold font. The body of the abstract
should be in the same font as the body of the paper.
The abstract should be a concise, one-paragraph summary describing the
general thesis and conclusion of your paper. A reader should be able
to learn the purpose of the paper and the reason for its importance
from the abstract. The abstract should be no more than 200 words long.
\subsection{Text}
The main body of the text immediately follows the abstract. Use
10-point type in a clear, readable font with 1-point leading (10 on
11).
Indent when starting a new paragraph, except after major headings.
\subsection{Headings and Sections}
When necessary, headings should be used to separate major sections of
your paper. (These instructions use many headings to demonstrate their
appearance; your paper should have fewer headings.) All headings should be capitalized using Title Case.
\subsubsection{Section Headings}
Print section headings in 12-point bold type in the style shown in
these instructions. Leave a blank space of approximately 10 points
above and 4 points below section headings. Number sections with
Arabic numerals.
\subsubsection{Subsection Headings}
Print subsection headings in 11-point bold type. Leave a blank space
of approximately 8 points above and 3 points below subsection
headings. Number subsections with the section number and the
subsection number (in Arabic numerals) separated by a
period.
\subsubsection{Subsubsection Headings}
Print subsubsection headings in 10-point bold type. Leave a blank
space of approximately 6 points above subsubsection headings. Do not
number subsubsections.
\paragraph{Titled paragraphs.} You should use titled paragraphs if and
only if the title covers exactly one paragraph. Such paragraphs should be
separated from the preceding content by at least 3pt, and no more than
6pt. The title should be in 10pt bold font and end with a period.
After that, a 1em horizontal space should follow the title before
the paragraph's text.
In \LaTeX{} titled paragraphs should be typeset using
\begin{quote}
{\tt \textbackslash{}paragraph\{Title.\} text}.
\end{quote}
\subsection{Special Sections}
\subsubsection{Appendices}
You may move some of the contents of the paper into one or more appendices that appear after the main content, but before references. These appendices count towards the page limit and are distinct from the supplementary material that can be submitted separately through CMT. Such appendices are useful if you would like to include highly technical material (such as a lengthy calculation) that will disrupt the flow of the paper. They can be included both in papers submitted for review and in camera-ready versions; in the latter case, they will be included in the proceedings (whereas the supplementary materials will not be included in the proceedings).
Appendices are optional. Appendices must appear after the main content.
Appendix sections must use letters instead of Arabic numerals. In \LaTeX, you can use the {\tt \textbackslash{}appendix} command to achieve this followed by {\tt \textbackslash section\{Appendix\}} for your appendix sections.
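As a minimal sketch (section titles are placeholders), the commands just described produce lettered appendix sections:

\begin{quote}
\begin{verbatim}
\appendix
\section{Proof Details}   % typeset as Appendix A
\section{Extra Figures}   % typeset as Appendix B
\end{verbatim}
\end{quote}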
\subsubsection{Ethical Statement}
Ethical Statement is optional. You may include an Ethical Statement to discuss the ethical aspects and implications of your research. The section should be titled \emph{Ethical Statement} and be typeset like any regular section but without being numbered. This section may be placed on the References pages.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Ethical Statement\}}
\end{quote}
\subsubsection{Acknowledgements}
Acknowledgements are optional. In the camera-ready version you may include an unnumbered acknowledgments section, including acknowledgments of help from colleagues, financial support, and permission to publish. This is not allowed in the anonymous submission. If present, acknowledgements must be in a dedicated, unnumbered section appearing after all regular sections but before references. This section may be placed on the References pages.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Acknowledgements\}}
\end{quote}
to typeset the acknowledgements section in \LaTeX{}.
\subsubsection{Contribution Statement}
Contribution Statement is optional. In the camera-ready version you may include an unnumbered Contribution Statement section\footnote{New in IJCAI--23}, explicitly describing the contribution of each of the co-authors to the paper. This is not allowed in the anonymous submission. If present, Contribution Statement must be in a dedicated, unnumbered section appearing after all regular sections but before references. This section may be placed on the References pages.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Contribution Statement\}}
\end{quote}
to typeset the Contribution Statement section in \LaTeX{}.
\subsubsection{References}
The references section is headed ``References'', printed in the same
style as a section heading but without a number. A sample list of
references is given at the end of these instructions. Use a consistent
format for references. The reference list should not include publicly unavailable work.
\subsubsection{Order of Sections}
Sections should be arranged in the following order:
\begin{enumerate}
\item Main content sections (numbered)
\item Appendices (optional, numbered using capital letters)
\item Ethical statement (optional, unnumbered)
\item Acknowledgements (optional, unnumbered)
\item Contribution statement (optional, unnumbered)
\item References (required, unnumbered)
\end{enumerate}
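As a sketch in \LaTeX{} (section titles are placeholders), this ordering translates to the following document skeleton, using the {\tt named.bst} and {\tt ijcai23.bib} files distributed with the author kit:

\begin{quote}
\begin{verbatim}
\section{Introduction}    % numbered main content
% ... more numbered sections ...
\appendix
\section{Derivations}     % Appendix A (optional)
\section*{Ethical Statement}       % optional
\section*{Acknowledgements}        % optional
\section*{Contribution Statement}  % optional
\bibliographystyle{named}  % references (required)
\bibliography{ijcai23}
\end{verbatim}
\end{quote}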
\subsection{Citations}
Citations within the text should include the author's last name and
the year of publication, for example~\cite{gottlob:nonmon}. Append
lowercase letters to the year in cases of ambiguity. Treat multiple
authors as in the following examples:~\cite{abelson-et-al:scheme}
or~\cite{bgf:Lixto} (for more than two authors) and
\cite{brachman-schmolze:kl-one} (for two authors). If the author
portion of a citation is obvious, omit it, e.g.,
Nebel~\shortcite{nebel:jair-2000}. Collapse multiple citations as
follows:~\cite{gls:hypertrees,levesque:functional-foundations}.
\nocite{abelson-et-al:scheme}
\nocite{bgf:Lixto}
\nocite{brachman-schmolze:kl-one}
\nocite{gottlob:nonmon}
\nocite{gls:hypertrees}
\nocite{levesque:functional-foundations}
\nocite{levesque:belief}
\nocite{nebel:jair-2000}
\subsection{Footnotes}
Place footnotes at the bottom of the page in a 9-point font. Refer to
them with superscript numbers.\footnote{This is how your footnotes
should appear.} Separate them from the text by a short
line.\footnote{Note the line separating these footnotes from the
text.} Avoid footnotes as much as possible; they interrupt the flow of
the text.
\section{Illustrations}
Place all illustrations (figures, drawings, tables, and photographs)
throughout the paper at the places where they are first discussed,
rather than at the end of the paper.
They should be floated to the top (preferred) or bottom of the page,
unless they are an integral part
of your narrative flow. When placed at the bottom or top of
a page, illustrations may run across both columns, but not when they
appear inline.
Illustrations must be rendered electronically or scanned and placed
directly in your document. They should be cropped outside \LaTeX{},
otherwise portions of the image could reappear during the post-processing of your paper.
When possible, generate your illustrations in a vector format.
When using bitmaps, please use 300dpi resolution at least.
All illustrations should be understandable when printed in black and
white, although you can use colors to enhance them. Line weights should
be 1/2-point or thicker. Avoid screens and superimposing type on
patterns, as these effects may not reproduce well.
Number illustrations sequentially. Use references of the following
form: Figure 1, Table 2, etc. Place illustration numbers and captions
under illustrations. Leave a margin of 1/4-inch around the area
covered by the illustration and caption. Use 9-point type for
captions, labels, and other text in illustrations. Captions should always appear below the illustration.
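A hedged sketch of a figure float following these rules (the graphics file name is a placeholder):

\begin{quote}
\begin{verbatim}
\begin{figure}[t]  % float to the top of the page
  \centering
  \includegraphics[width=.9\linewidth]{example-image}
  \caption{A 9-point caption below the illustration.}
  \label{fig:example}
\end{figure}
\end{verbatim}
\end{quote}

In the text, the float is then referenced as Figure~\ref{fig:example}.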
\section{Tables}
Tables are treated as illustrations containing data. Therefore, they should also appear floated to the top (preferably) or bottom of the page, and with the captions below them.
\begin{table}
\centering
\begin{tabular}{lll}
\hline
Scenario & $\delta$ & Runtime \\
\hline
Paris & 0.1s & 13.65ms \\
Paris & 0.2s & 0.01ms \\
New York & 0.1s & 92.50ms \\
Singapore & 0.1s & 33.33ms \\
Singapore & 0.2s & 23.01ms \\
\hline
\end{tabular}
\caption{Latex default table}
\label{tab:plain}
\end{table}
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Scenario & $\delta$ (s) & Runtime (ms) \\
\midrule
Paris & 0.1 & 13.65 \\
& 0.2 & 0.01 \\
New York & 0.1 & 92.50 \\
Singapore & 0.1 & 33.33 \\
& 0.2 & 23.01 \\
\bottomrule
\end{tabular}
\caption{Booktabs table}
\label{tab:booktabs}
\end{table}
If you are using \LaTeX, you should use the {\tt booktabs} package, because it produces tables that are better than the standard ones. Compare Tables~\ref{tab:plain} and~\ref{tab:booktabs}. The latter is clearly more readable for three reasons:
\begin{enumerate}
\item The styling is better thanks to using the {\tt booktabs} rulers instead of the default ones.
\item Numeric columns are right-aligned, making it easier to compare the numbers. Make sure to also right-align the corresponding headers, and to use the same precision for all numbers.
\item We avoid unnecessary repetition, both between lines (no need to repeat the scenario name in this case) as well as in the content (units can be shown in the column header).
\end{enumerate}
\section{Formulas}
IJCAI's two-column format makes it difficult to typeset long formulas. A usual temptation is to reduce the size of the formula by using the {\tt small} or {\tt tiny} sizes. This doesn't work correctly with the current \LaTeX{} versions, breaking the line spacing of the preceding paragraphs and title, as well as the equation number sizes. The following equation demonstrates the effects (notice that this entire paragraph looks badly formatted, and the line numbers no longer match the text):
\begin{tiny}
\begin{equation}
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
\end{equation}
\end{tiny}%
Reducing formula sizes this way is strictly forbidden. We {\bf strongly} recommend that authors split formulas into multiple lines when they don't fit in a single line. This is the easiest approach to typeset those formulas and provides the most readable output:%
\begin{align}
x = & \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \nonumber \\
+ & \prod_{i=1}^n \sum_{j=1}^n j_i.
\end{align}%
If a line is just slightly longer than the column width, you may use the {\tt resizebox} environment on that equation. The result looks better and doesn't interfere with the paragraph's line spacing: %
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
$}.
\end{equation}%
This last solution may have to be adapted if you use different equation environments, but it can generally be made to work. Please notice that in any case:
\begin{itemize}
\item Equation numbers must be in the same font and size as the main text (10pt).
\item Your formula's main symbols should not be smaller than {\small small} text (9pt).
\end{itemize}
For instance, the formula
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j
$}
\end{equation}
would not be acceptable because the text is too small.
\section{Examples, Definitions, Theorems and Similar}
Examples, definitions, theorems, corollaries and similar must be written in their own paragraph. The paragraph must be separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. They must begin with the kind of item written in 10pt bold font followed by their number (e.g.: {\bf Theorem 1}),
optionally followed by a title/summary between parentheses in non-bold font and ended with a period (in bold).
After that the main body of the item follows, written in 10pt italic font (see below for examples).
In \LaTeX{} we strongly recommend that you define environments for your examples, definitions, propositions, lemmas, corollaries and similar. This can be done in your \LaTeX{} preamble using \texttt{\textbackslash{}newtheorem} -- see the source of this document for examples. Numbering for these items must be global, not per-section (e.g.: Theorem 1 instead of Theorem 6.1).
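As a preamble sketch, globally numbered environments of this kind can be defined as follows (the environment names are illustrative):

\begin{quote}
\begin{verbatim}
\newtheorem{theorem}{Theorem}  % Theorem 1, 2, ...
\newtheorem{example}{Example}
\newtheorem{definition}{Definition}
\end{verbatim}
\end{quote}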
\begin{example}[How to write an example]
Examples should be written using the example environment defined in this template.
\end{example}
\begin{theorem}
This is an example of an untitled theorem.
\end{theorem}
You may also include a title or description using these environments as shown in the following theorem.
\begin{theorem}[A titled theorem]
This is an example of a titled theorem.
\end{theorem}
\section{Proofs}
Proofs must be written in their own paragraph(s) separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. Proof paragraphs should start with the keyword ``Proof." in 10pt italics font. After that the proof follows in regular 10pt font. At the end of the proof, an unfilled square symbol (qed) marks the end of the proof.
In \LaTeX{} proofs should be typeset using the \texttt{\textbackslash{}proof} environment.
\begin{proof}
This paragraph is an example of what a proof looks like using the \texttt{\textbackslash{}proof} environment.
\end{proof}
\section{Algorithms and Listings}
Algorithms and listings are a special kind of figure. Like all illustrations, they should appear floated to the top (preferably) or bottom of the page. However, their caption should appear in the header, left-justified and enclosed between horizontal lines, as shown in Algorithm~\ref{alg:algorithm}. The algorithm body should be terminated with another horizontal line. It is up to the authors to decide whether to show line numbers or not, how to format comments, etc.
In \LaTeX{} algorithms may be typeset using the {\tt algorithm} and {\tt algorithmic} packages, but you can also use one of the many other packages for the task.
\begin{algorithm}[tb]
\caption{Example algorithm}
\label{alg:algorithm}
\textbf{Input}: Your algorithm's input\\
\textbf{Parameter}: Optional list of parameters\\
\textbf{Output}: Your algorithm's output
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{condition}
\STATE Do some action.
\IF {conditional}
\STATE Perform task A.
\ELSE
\STATE Perform task B.
\ENDIF
\ENDWHILE
\STATE \textbf{return} solution
\end{algorithmic}
\end{algorithm}
\section{\LaTeX{} and Word Style Files}\label{stylefiles}
The \LaTeX{} and Word style files are available on the IJCAI--23
website, \url{https://ijcai-23.org/}.
These style files implement the formatting instructions in this
document.
The \LaTeX{} files are {\tt ijcai23.sty} and {\tt ijcai23.tex}, and
the Bib\TeX{} files are {\tt named.bst} and {\tt ijcai23.bib}. The
\LaTeX{} style file is for version 2e of \LaTeX{}, and the Bib\TeX{}
style file is for version 0.99c of Bib\TeX{} ({\em not} version
0.98i). The {\tt ijcai23.sty} style differs from the {\tt
ijcai22.sty} file used for IJCAI--22.
The Microsoft Word style file consists of a single file, {\tt
ijcai23.docx}. This template differs from the one used for
IJCAI--22.
These Microsoft Word and \LaTeX{} files contain the source of the
present document and may serve as a formatting sample.
Further information on using these styles for the preparation of
papers for IJCAI--23 can be obtained by contacting {\tt
[email protected]}.
\section{Explainability Through Attention Maps}
Generalizing beyond single-task solutions using large-scale transformer-based language models has gained increasing attention from the community.
In particular, the switch to open-vocabulary predictions promises AI systems capable of adapting beyond previously seen training objectives.
Arguably, transformers are the state-of-the-art method in Natural Language Processing (NLP) and Computer Vision. Most recently, they demonstrated remarkable performance on multi-modal tasks, e.g., bridging Computer Vision (CV) capabilities with text understanding to solve Visual Question Answering (VQA) scenarios~\cite{magma,blip,ofa,git}. The increasing adoption of transformers, however, also raises the necessity to better understand the reasons behind their otherwise black-box predictions.
Unfortunately, the ``scale is all you need'' assumption behind transformers results in severely large and complex architectures, making their training, inference, deployment, and understanding resource-intensive tasks that require multiple enterprise-grade GPUs or even entire computing nodes, along with prolonged runtimes.
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{images/atman_fancy_title2.pdf}
\caption{\textbf{``What am I looking at?''} The proposed explainability method {\textsc{AtMan}}\ visualizes the most important aspects of the given image while completing the sequence (displayed above the relevance maps).
The generative multi-modal model MAGMA is prompted to describe the shown image with: ``$<$Image$>$ This is a painting of ''. (Best viewed in color.)
}
\label{fig:artistic_explainability}
\end{figure}
Most, if not all, explainable AI (XAI) methods---making the decision-making processes and internal workings of AI models transparent and understandable to humans---for transformers work by propagating (some form of) gradients back through the model. This backpropagation allows for the accumulation of information about how each input feature contributes to output tokens~\cite{Chefer_2021_CVPR,rollout}, utilizing stored activations during the forward pass.
Unfortunately, this leads to a significant memory consumption overhead, which renders production deployment uneconomical, if not impossible. Often, half of the available GPU memory has to remain empty during inference, or an entirely separate deployment of the XAI pipeline is required.
Fortunately, another popular XAI idea, namely \textit{perturbation} \cite{shap,lime}, is much more memory-efficient. However, it has not yet proven beneficial for explaining the predictions of transformers, most likely because of the immense number of forward passes required, which accumulates unreasonable computation time.
To tackle these issues and, in turn, scale explanations with the size of transformers, we propose to bridge
relevance propagation and perturbations.
In contrast to existing perturbation methods, which execute perturbations directly in the input space, we push them into the latent space, allowing, as we will show, state interpolation and token-based similarity measures.
Specifically, inspired by \cite{elhage2021mathematical} and backpropagation approaches, we introduce \textbf{at}tention \textbf{man}ipulations throughout latent layers of the transformer during the forward pass as a method to steer model predictions. Our explanation method, called {\textsc{AtMan}}, then leverages these predictions to compute relevance values for transformer networks.
Our experimental evidence demonstrates that {\textsc{AtMan}}~significantly reduces the number of required perturbations, making them applicable at deployment time, and requires no memory beyond that of the forward passes. In short, {\textsc{AtMan}}~can scale with transformers.
Our exhaustive experiments on text and image-text benchmarks also demonstrate that {\textsc{AtMan}}~outperforms the current gradient-based state of the art while being computationally efficient. In fact, for the first time, {\textsc{AtMan}}~allows one to study generative model predictions, as visualized in Fig.~\ref{fig:artistic_explainability}. During sequence generation with large multi-modal models, {\textsc{AtMan}}~is additionally able to highlight the input features relevant to each generated token, providing novel insights into the generation process.
\paragraph{Contributions.}
In summary, our contributions are:
(i) An examination of the effects of token-based attention score manipulation on generative transformer models.
(ii) The introduction of a novel and memory-efficient XAI perturbation method for large-scale transformer models, called {\textsc{AtMan}}, which reduces the number of required iterations to a computable amount by correlating tokens in the embedding space.
(iii) Exhaustive multi-modal evaluations of XAI methods on several text and image-text benchmarks and autoregressive (AR) transformers.
We release the source code of the proposed method and all evaluation scripts\ifthenelse{\boolean{subvar}}{ at \url{https://github.com/Mayukhdeb/atman-magma}}{}.
We proceed as follows. We start off by discussing related work.
Then, we derive {\textsc{AtMan}}~and explain its attention manipulation as a perturbation technique. Before concluding and discussing the benefits as well as limitations, we touch upon our experimental evaluation, showing that {\textsc{AtMan}}~not only nullifies memory overhead but also outperforms competitors on several visual and textual reasoning benchmarks.
\section{Related Work}
\paragraph{Explainability in CV and NLP.}
Explainability of AI systems is still an ambiguously defined term~\cite{survey:nlp}. XAI methods are expected to show some level of relevance of the input with respect to the computed result of an algorithm.
This task is usually tackled by constructing an input relevance map given the model's prediction.
The nature of relevance can be class-specific, e.g., depending on specific target instances of a task and showing a local solution~\cite{method:gradtimesinput,method:integrgrads}, or class-agnostic, i.e., depending only on the global behavior of the model~\cite{rollout,survey_sml}. The granularity of the achieved explanation therefore depends on the chosen method, the model, and the actual evaluation benchmark.
Explainability in CV is usually evaluated by mapping the relevance maps to the pixel level and regarding the evaluation as a weak segmentation task~\cite{method:gradcam,lrp,method:integrgrads}.
On the other hand, NLP explanations are much more vaguely defined and usually mixed with more complex philosophical interpretations, such as labeling a given text to a certain sentiment category~\cite{survey:nlp}.
The majority of XAI methods can be divided into the classes of perturbation and gradient analysis. Perturbation methods treat the model as a black box and attempt to derive knowledge of its behavior by studying changes in input-output pairs only. Gradient-based methods, on the other hand, execute a backpropagation step towards a target and aggregate the model's parameter adaptations to derive insights.
Most of these XAI methods are not motivated by a specific discipline, e.g., neither by NLP nor CV. They are generic enough to be applied to both disciplines, to some extent. However, architecture-specific XAI methods exist, such as GradCAM~\cite{method:gradcam}, which leverages the spatial input aggregation of convolutional neural networks in their deepest layers to increase efficiency.
\paragraph{Explainability in Transformers.}
Due to their increasing size, transformers are particularly challenging for explainability methods, especially architecture-agnostic ones.
Transformers' core components, in particular, include an embedding layer followed by multiple layers of alternating attention and feed-forward blocks.
The attention blocks map the input into separate ``query'', ``key'', and ``value'' matrices and are split into an array of ``heads''. As with convolutions in CNNs, separate heads are believed to relate to specific learned features or tasks~\cite{head-will-role}. Further, the attention matrix dimensions match the input sequence dimension, which makes the attention mechanism particularly well suited for deriving input explanations.
Consequently, most explainability adaptations to transformers focus on the attention mechanism. \citeauthor{rollout}~(\citeyear{rollout}) assume that
activations in attention layers are combined linearly and consider paths along the pairwise attention graph. However, while efficient, this approach often emphasizes irrelevant tokens, in particular due to its class-agnostic nature. Therefore, the authors also propose attention flow~\cite{rollout}, which is infeasible to use due to the high computational demands of constructing its graphs.
More recently, \citeauthor{Chefer_2021_CVPR}~(\citeyear{Chefer_2021_CVPR}) proposed to aggregate backward gradients and LRP~\cite{lrp} throughout all layers and heads of the attention modules in order to derive explanation relevancy. Their introduced method outperforms previous transformer-specific and unspecific XAI methods on several benchmarks and transformer models. This method is extended to multimodal transformers \cite{Chefer_2021_ICCV} by studying other variations of attention. However, the evaluated benchmarks only include classification tasks, despite transformers' remarkable performance on open-vocabulary tasks, e.g., utilizing InstructGPT~\cite{instructgpt} or multimodal autoregressive transformers such as MAGMA~\cite{magma}, BLIP~\cite{blip} and OFA~\cite{ofa}.
\paragraph{Multimodal Transformers.}
In contrast to these explainability studies, which evaluate on models like DETR and ViT~\cite{detr,vit}, we study explainability on the generated text tokens of a language model, not on specifically trained classifiers.
Due to the multimodality, the XAI method should produce output relevancy either on the input text or the input image as depicted in Fig.~\ref{fig:artistic_explainability}.
To this end, we study the explainability of multimodal transformer architectures such as MAGMA~\cite{magma}.\footnote{An open-source version can be found at \url{https://github.com/Aleph-Alpha/magma}\ .}
Specifically, to incorporate the image modality, \citeauthor{magma}~(\citeyear{magma}) propose to fine-tune a frozen pre-trained language model by adding sequential adapters to the layers, leaving the attention mechanism untouched. A CLIP~\cite{clip} vision encoder produces image embeddings, which are afterward treated like any other input tokens during model execution. This methodology has shown competitive performance compared to single-task solutions (cf.~\cite{magma}).
\section{{\textsc{AtMan}}: Attention Manipulation}
We formulate finding the best explainability estimator of a model as solving the following question:
\emph{What is the most important part on the input, annotated by the explanator, to produce the model's output?}
In the following, we derive our perturbation probe mathematically through studies of influence functions and embedding layer updates on autoregressive (AR) models~\cite{influence,muchcommon}.
Then we show how attention manipulation on single tokens can be used in NLP tasks to steer the prediction of a model in directions found within the prompt.
Finally, we derive our multi-modal XAI method {\textsc{AtMan}}~by extending this concept to the cosine neighborhood in the embedding space.
\subsection{Influence Functions as Explainability Estimators}
\label{sec:influence}
Transformer-based language models are probability distribution estimators.
They map from some input space ${\cal X}$ (e.g. text or image embeddings) to an output space ${\cal Y}$ (e.g. language token probabilities).
Let ${\cal E}$ be the space of all explanations (i.e. binary labels) over ${\cal X}$.
An explanator function can then be defined as
\begin{linenomath*}
\begin{equation}
e: ({\cal X} \to {\cal Y}) \times ({\cal X} \times {\cal Y}) \to {\cal E}\ , \nonumber
\end{equation}
\end{linenomath*}
i.e. given a model, an input, and a \emph{target}, derive a label on the input.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{images/diagrams-fig-2.pdf}
\caption{\textbf{Transformer decoder architecture.} The left-hand side shows the general components: The token embeddings pass through $n$ transformer blocks to produce output logits, e.g., taken for the next token prediction in the setting of a generative language model.
The middle shows in detail a masked attention block, consisting of MatMul, Mask, and SoftMax steps.
The right-hand side shows our proposed Attention Manipulation method.
We multiply the modifier factors and the attention scores, before applying the diagonal causal attention mask.
Red hollow boxes (\redhollow) indicate values of one, and green ones (\greenhollow) values of $-\infty$. (Best viewed in color.)}
\label{fig:decoder_architecture}
\end{figure}
Given a sequence $\textbf{w}=[w_1,\ldots,w_N]\in{\cal X}^N$, an AR language model assigns a probability to that sequence $p(\textbf{w})$ by applying factorization
$p(\textbf{w})=\prod_t p(w_t|w_{<t})$. The loss optimization during training can then be formalized by solving:
\begin{linenomath*}
\begin{align}
\max_\theta \log p_\theta(\textbf{w}) &= \sum_t \log p_\theta(w_t|w_{<t}) \nonumber \\
&= \sum_t \log \textrm{softmax}(h_\theta(w_{<t})W_\theta^T)_{\emph{target}^t} \label{eqn:loss}\\
&=: -\sum_t L^\emph{target}(\textbf{w}_{<t},\theta) \nonumber \\
&=: -L^\emph{target}(\textbf{w},\theta)\ . \label{eqn:loss2}
\end{align}
\end{linenomath*}
Here $h_\theta$ denotes the model, $W_\theta$ the learned embedding matrix, and $\emph{target}^t$ the vocabulary index of the $t$-th target token, with $|\emph{target}| = N$.
Eq.~\ref{eqn:loss} is derived by integrating the cross-entropy loss, commonly used during language model training with $\emph{target}=\textbf{w}$. Finally, $L$ denotes our loss function.
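As an illustration, the summed cross-entropy loss $L^\emph{target}$ of Eq.~\ref{eqn:loss} can be sketched in a few lines of NumPy. This is a minimal stand-in, not the MAGMA implementation; the hidden states and embedding matrix below are random placeholders:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def target_loss(hidden, W, target_ids):
    """L^target(w, theta): summed cross-entropy of the target tokens.

    hidden:     (T, d) hidden states h_theta(w_{<t}) per position
    W:          (V, d) output embedding matrix; logits = hidden @ W.T
    target_ids: (T,)   vocabulary index target^t per position
    """
    logp = np.log(softmax(hidden @ W.T))  # (T, V) log-probabilities
    return -logp[np.arange(len(target_ids)), target_ids].sum()

# toy check: 2 positions, vocabulary of 3
rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 4))
W = rng.normal(size=(3, 4))
loss = target_loss(hidden, W, np.array([1, 2]))
```

With uniform logits the loss reduces to $T \log V$, which provides a quick sanity check.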
Perturbation methods study the influence of the model's predictions by adding small noise $\epsilon$ to the input and measuring the prediction change.
We follow the results of the studies~\cite{influence,muchcommon} to approximate the perturbation effect directly through the model's parameters when executing Leaving-One-Out experiments on the input. The influence function estimating the perturbation $\epsilon$ of an input $z$ is then derived as:
\begin{linenomath*}
\begin{align}
{\cal I}^\emph{target}(z_\epsilon,z) &= {dL^\emph{target}(z,\theta_\epsilon) \over d\epsilon}|_{\epsilon=0} \nonumber\\
&\approx L^\emph{target}(z,\theta_{-z_{\epsilon}})-L^\emph{target}(z,\theta)\ .\label{eq:influence}
\end{align}
\end{linenomath*}
Here $\theta_{-z_{\epsilon}}$ denotes the set of model parameters in which $z_{\epsilon}$ would not have been seen during training.
In the following, we further show how to approximate $\theta_{-z_{\epsilon}}$.
\subsection{Single Token Attention Manipulation}
The core idea of {\textsc{AtMan}}~is the shift of the perturbation space from the raw input space to the embedded token space. This allows us to reduce the dimensionality of possible perturbations down to a single scaling factor per token. Moreover, we do not manipulate the value matrix of attention blocks and therewith do not introduce the otherwise inherent input-distribution shift of obfuscation methods. By manipulating the attention entries at the positions of the corresponding input sequence tokens, we are able to interpolate the focus of the prediction distribution of the model---amplifying or suppressing concepts of the prompt.
The following shows that this procedure indeed derives a well-performing XAI method.
Attention was introduced in~\cite{attn_all_need} as:
\mbox{$\textbf{O} = \textrm{softmax}( \textbf{H} )\cdot \textbf{V}$}, where $\cdot$ denotes matrix multiplication.
The pre-softmax query-key attention scores are defined as:
\begin{linenomath*}
\begin{equation}
\textbf{H} = \textbf{Q} \cdot \textbf{K}^T/ \sqrt{d_h}\ .\nonumber
\end{equation}
\end{linenomath*}
In the case of autoregression, a lower left triangular unit mask $\textbf{M}$ is applied to these scores as $\textbf{H}_M = \textbf{H} \circ \textbf{M},$ with $\circ$ the Hadamard product.
The output of the self-attention module is $\textbf{O}\in {\mathbb R}^{h\times s\times d_h}$, the query matrix is $\textbf{Q}\in {\mathbb R}^{h\times s \times d_h}$ and $\textbf{K},\textbf{V} \in {\mathbb R}^{h\times s \times d_h}$ the keys and values matrices.
Finally $\textbf{H},\textbf{M},\textbf{H}_M\in {\mathbb R}^{h \times s \times s}$.
The number of heads is denoted as $h$, and $d_h$ is the embedding dimension of the model.
Finally, there are $s$ query and key tokens that coincide here with the dimension of input-sequence tokens.
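A minimal single-head NumPy sketch of this computation; as in most practical implementations, the causal mask is realized by setting future positions to $-\infty$ before the softmax:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def causal_attention(Q, K, V):
    """O = softmax(H_M) V with H = Q K^T / sqrt(d_h); positions above the
    diagonal are masked so queries cannot attend to the future."""
    s, d_h = Q.shape
    H = Q @ K.T / np.sqrt(d_h)
    mask = np.triu(np.ones((s, s), dtype=bool), k=1)  # strictly upper triangle
    H_M = np.where(mask, -np.inf, H)
    return softmax(H_M) @ V

rng = np.random.default_rng(1)
Q, K, V = rng.normal(size=(3, 4, 8))  # unpack three (4, 8) matrices
O = causal_attention(Q, K, V)
```

The first query position can only attend to itself, so its output row equals the first value row.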
\begin{figure}[t]
\centering
\includegraphics[width=.7\linewidth]{images/diagrams-fig-4.pdf}
\caption{\textbf{Illustration of the proposed explainability method.}
First, we collect the original cross-entropy score of the target tokens (1). Then we iterate and suppress one token at a time, indicated by the red box (\redsolid), and track changes in the cross-entropy score of the target token (2).
(Best viewed in color.)}
\label{fig:histogram_method}
\end{figure}
The perturbation approximation $\theta_{-z_\epsilon}$ required by Sec.~\ref{sec:influence} can now be approximated through attention score manipulation as follows:
Let $\textbf{w}$ be an input token sequence of length $|\textbf{w}|=n$. Let $i$ be a token index within this sequence to be perturbed by a factor $f$.
For all layers and all heads $\textbf{H}_u$ we modify the pre-softmax query-key attention scores as:
\begin{linenomath*}
\begin{align}
\widetilde{\textbf{H}}_{u,*,*} = \textbf{H}_{u,*,*} \circ (\textbf{1}-\textbf{f}^i),\label{eq:supr}
\end{align}
\end{linenomath*}
where $\textbf{1} \in [1]^{s\times s}$ denotes the matrix containing only ones and $\textbf{f}^i$ the suppression factor matrix for token $i$. In this section we set $\textbf{f}^i_{k,*}=f$, for $k=i$ and $f\in\mathbb{R}$, and $0$ elsewhere. As depicted in Fig.~\ref{fig:decoder_architecture}, we thus only scale the $i$-th column of the attention scores of $\textbf{H}$ by a factor $(1-f)$.
This is, however, applied to all heads equally.\footnote{We follow the common assumption that all relevant entropy of the $i$-th input token is processed primarily at that position within the attention module due to the sequence-to-sequence nature of the transformer.
A different variant of this approach is discussed in Appendix~\ref{app:variant}.}
Let us denote this modification to the model by $\theta_{-i}$ and assume a fixed factor $f$.\footnote{We ran a parameter sweep once to fix this parameter.}
We define for a class label $\emph{target}$ the explanation as the vector of the influence functions to all positions:
\begin{linenomath*}
\begin{align}
{\cal E}(\textbf{w},\emph{target}) &:= ({\cal I}^\emph{target}(\textbf{w}_1,\textbf{w}),\ldots,{\cal I}^\emph{target}(\textbf{w}_n,\textbf{w})) \ ,
\label{eq:atman}
\end{align}
\end{linenomath*}
with ${\cal I}^\emph{target}$ derived by Eq.~\ref{eqn:loss2} and Eq.~\ref{eq:influence} as
\begin{linenomath*}
\begin{align*}
{\cal I}^\emph{target}(\textbf{w}_i,\textbf{w}):=L^\emph{target}(\textbf{w},\theta_{-i})-L^\emph{target}(\textbf{w},\theta)\ .
\end{align*}
\end{linenomath*}
In words, we measure the change in the summed cross-entropy of the AR sequence w.r.t.\ all target tokens between the run with token $i$ suppressed and the unmodified run. The explanation becomes this difference vector over all possible sequence position perturbations and thus requires $n$ iterations.
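The whole procedure can be sketched on a toy one-layer attention model. Everything here is an illustrative assumption rather than the MAGMA architecture: the random embeddings \texttt{E} double as queries, keys, and values, \texttt{W\_out} is a random readout matrix, and suppression scales the attention-score column of the chosen token:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def target_loss(E, W_out, target, suppress=None, f=0.9):
    """Cross-entropy of `target` under a toy one-layer causal-attention
    model. If `suppress` is given, that token's attention-score column
    is scaled by (1 - f) before masking, cf. Eq. (5)."""
    s, d = E.shape
    H = (E @ E.T) / np.sqrt(d)                 # pre-softmax scores (Q = K = E)
    if suppress is not None:
        H[:, suppress] *= (1.0 - f)
    mask = np.triu(np.ones((s, s), dtype=bool), k=1)
    H = np.where(mask, -np.inf, H)             # causal mask
    out = softmax(H) @ E                       # values = embeddings here
    logits = out[-1] @ W_out                   # readout from the last position
    return -np.log(softmax(logits)[target])

def atman_explain(E, W_out, target, f=0.9):
    """Explanation vector: loss change per suppressed input position."""
    base = target_loss(E, W_out, target)
    return np.array([target_loss(E, W_out, target, suppress=i, f=f) - base
                     for i in range(len(E))])

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))        # 5 input tokens, embedding dim 8
W_out = rng.normal(size=(8, 10))   # vocabulary of 10 tokens
scores = atman_explain(E, W_out, target=3)
```

Setting $f=0$ leaves the model untouched, so the explanation vector collapses to zeros; larger $f$ yields nonzero per-token influence scores.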
\begin{figure}[t]
\centering
\includegraphics[width= .7\linewidth]{images/diagrams-fig-3.pdf}
\caption{
\textbf{Manipulating the attention scores} of a single token (highlighted in blue) inside a transformer block \textbf{steers the model's prediction} into a different contextual direction (amplifications highlighted in green, suppression in red). (Best viewed in color.)}
\label{fig:example_text_manipulation}
\end{figure}
Fig.~\ref{fig:histogram_method} illustrates this algorithm. The original input prompt is the text ``Ben likes to eat burgers and '' for which we want to identify the most valuable input token for the completion with the target token ``fries''.
Initially, the model predicts the target token with a cross-entropy score of $1.72$.
We now iterate through the input tokens, suppressing them one by one, and track the changes in the cross-entropy of the target token, as depicted in the right-most column.
In this example, it can be observed that ``burgers'' was the most-influential input token to complete the sentence with ``fries'', with the highest score of $14.2$.
Next, we give a more descriptive intuition about the effects of such modifications on the model's generative nature.
\paragraph{Token attention suppression steers the model's prediction.}
Intuitively, for factors $0 < f < 1$, we call the modifications ``suppression'', as we find the model's output now relatively less influenced by the token at the position of the respective manipulated attention scores.
Contrarily, $f < 0$ ``amplifies'' the influence of the manipulated input token on the output.
An example of the varying continuations when a single token manipulation is applied can be seen in Fig.~\ref{fig:example_text_manipulation}.
We provide the model with a prompt in which the focus of continuation largely depends on two tokens, namely ``soccer'' and ``math''. We show how suppressing and amplifying them alters the prediction distribution away from or towards those concepts. It is precisely this distribution shift that we measure and visualize as our explanation.
\begin{figure}[t]
\centering
\includegraphics[trim={0 1cm 0 0.5cm},clip,width= .75\linewidth]{images/diagrams-mesh-3-figs.pdf}
\caption{\textbf{Correlated token suppression of {\textsc{AtMan}}~enhances explainability in the image domain.}
i) Shows an input image along with three perturbation examples ($A,B,C$). In $A$ we only suppress a single image token (blue), in $B$ the same token with its relative cosine neighborhood (yellow). In $C$ a non-related token with its neighborhood. Below are depicted the changes in cross-entropy loss.
$c_0$ is the original score for the target token ``panda''.
$\Delta$ denotes the loss change.
ii) Shows the label, the resulting explanation without Cosine Similarity (CS) and with CS.
(Best viewed in color.)
}
\label{fig:img_conc_supp}
\end{figure}
\subsection{Correlated Token Attention Manipulation}
\label{sec:method}
Suppressing single tokens works well when the entire entropy responsible for producing the target token occurs only once. However, for inputs with redundant information, this approach often fails.
This issue is particularly prominent in the field of CV, as information, e.g., about objects in an image, is often spread across several embeddings due to the splitting of the image into parts and the separate application of the embedding function.
It is a common finding that cosine similarity in the embedding space, e.g., right after the embedding layer, provides a good correlation-distance estimator~\cite{retrieval,muchcommon}.
We integrate this finding into {\textsc{AtMan}}~in order to suppress all redundant information corresponding to a particular input token at once, which we refer to as correlated token suppression.
Fig.~\ref{fig:img_conc_supp} summarizes the correlated token suppression visually.
For $n$ input tokens and embedding dimension $d$, the embedded tokens result in a matrix $T = (t_i) \in \mathbb{R}^{n \times d}$. The cosine similarity, in turn, is computed from the normalized embeddings $\widetilde{T} = (\widetilde{t}_i)$, with $\widetilde{t}_i = t_i/||t_i||$, for $i \in 1,\dots n$, as
$S = (s_i) = \widetilde{T} \cdot \widetilde{T}^T \in [-1,1]^{n \times n}$.
Note that the index $i$ denotes a column corresponding to the respective input token index.
Intuitively, the vector $s_i$ then contains similarity scores to all (other) input tokens.
To suppress the correlated neighborhood of a specific token with index $i$, we therefore adjust the suppression factor matrix for Eq.~\ref{eq:supr} as
\begin{equation}
\begin{array}{l}
{\textbf{f}}^{i}_{k,*} =
\begin{cases}
s_{i,k}, & \text{if}\ \kappa \leq s_{i,k}\leq 1, \\
0, & \text{otherwise.}
\end{cases}
\end{array}\label{eq:cosine}
\end{equation}
As we only want to suppress tokens, we restrict the range of factor values to be greater than $0$. The parameter $\kappa$ is to ensure a lower bound, and in particular, prevents a sign flip. We empirically fixed $\kappa$ through a parameter sweep (Appendix~\ref{sec:parametersweep}).
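A short sketch of Eq.~\ref{eq:cosine}: the thresholded cosine-similarity vector that fills the suppression factor matrix for token $i$. The toy embeddings below are placeholders; in the real setting $T$ comes from the model's embedding layer:

```python
import numpy as np

def correlated_factors(T, i, kappa=0.7):
    """Thresholded cosine similarities s_{i,k} of Eq. (6): the values that
    fill the suppression factor matrix f^i for token i (0 below kappa)."""
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)  # row-normalize
    s_i = np.clip(Tn @ Tn[i], -1.0, 1.0)               # cosine similarities
    return np.where(s_i >= kappa, s_i, 0.0)

# tokens 0 and 1 share an embedding, token 2 is orthogonal
T = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
factors = correlated_factors(T, i=0)
```

The token itself receives factor $1$, its duplicate likewise, and the unrelated token is left untouched.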
With that, we arrived at our final version of {\textsc{AtMan}}.
As a final remark, note that this form of explanation ${\cal E}$ is local, as $\emph{target}$ refers to our target class.
We can, however, straightforwardly derive a global explanation by setting $\emph{target}=\textbf{y}$, for $\textbf{y}$ a model completion of certain length to input $\textbf{w}$.
It can then be interpreted, rather abstractly, as the model's general focus \cite{survey_sml}.
\begin{figure}[t]
\centering
\includegraphics[width=.88\linewidth]{images/diagrams-fig-8.pdf}
\caption{
An example of a single instance of \textbf{the SQuAD dataset with {\textsc{AtMan}}~Explanations}. An instance contains three questions for a given context, each with a labeled answer pointing to a fragment of the context. {\textsc{AtMan}}~is used to explain, i.e., highlight, the corresponding fragments. It can be observed that the green example is fully, the blue partially, and the yellow not at all recovered according to the labels. However, the yellow highlight seems at least related to the label.
(Best viewed in color.)}
\label{fig:squadv2_dataset_instance}
\end{figure}
\section{Empirical Evaluation}
\label{sec:results}
We ran empirical evaluations on text and image corpora to address the following questions:
\textbf{(Q1)} Does {\textsc{AtMan}}~achieve competitive results compared to previous XAI for transformers in the language as well as vision domain?
\textbf{(Q2)} Does {\textsc{AtMan}}~scale efficiently and, therefore, can be applied to current large-scale AR models?
To answer these questions, we conducted empirical studies on textual and visual XAI benchmarks and compared {\textsc{AtMan}}~to standard approaches such as IxG \cite{method:gradtimesinput}, IG \cite{method:integrgrads}, GradCAM \cite{method:gradcam} and the transformer-specific XAI method of \cite{Chefer_2021_CVPR}, referred to as {Chefer}~in the following. Note that all these methods utilize gradients and therefore fall into the category of propagation methods, leading to memory inefficiency. We also applied existing perturbation methods such as LIME~\cite{lime} and SHAP~\cite{shap}. However, they failed due to the extremely large number of required trials and, in turn, prohibitive computation time.
We adopt common metrics, namely mean average precision (mAP) and recall (mAR), and state their interquartile statistics in all experiments. While its memory efficiency allows {\textsc{AtMan}}~to be utilized on larger models, to provide a fair comparison between XAI methods we ran the corresponding experiments on MAGMA-6B\footnote{Available at \url{https://github.com/aleph-alpha/magma}\ .} if not stated otherwise.
\subsection{{\textsc{AtMan}}~can do language reasoning}
\paragraph{Protocol.}
Since with {\textsc{AtMan}}~we aim to study large-scale generative models, we formulate XAI on generative tasks as described in Sec.~\ref{sec:method}. To this end, we used the Stanford Question Answering (QA) Dataset (SQuAD)~\cite{squad}.
The QA dataset is structured as follows: Given a single paragraph of information, there are multiple questions, each with a corresponding answer referring to a position in the paragraph.
A visualization of an instance of this dataset can be found in Fig.~\ref{fig:squadv2_dataset_instance}.
In total, SQuAD contains 536 unique paragraphs and 107,785 question/explanation pairs.
The average context sequence length is $152.7$ tokens, and the
average label (explanation) length is $3.4$.
\begin{figure*}[t]
\centering
\includegraphics[width=.95\textwidth]{images/atman_vs_chefer_wide.pdf}
\caption{\textbf{{\textsc{AtMan}}~produces less noisy and more focused explanations when prompted with multi-class weak segmentation} compared to {Chefer}. The three shown figures are prompted to explain the target classes above and below separately. It can be observed that both methods produce reasonable, and even similar, output, though more sensitivity and more noise are observed for the method of {Chefer}. In particular, on the last example, for the \emph{target}~``birthday'', {Chefer}~highlights more details, such as the decoration.
However, the same details are also highlighted to some extent when prompting just ``bear''.
(Best viewed in color.)}
\label{fig:weak_segment_image}
\end{figure*}
\begin{table}[t]
\centering
\small
\begin{tabular}{l|cccc}
& IxG & IG & {Chefer} & {\textsc{AtMan}} \\
\hline
mAP & 51.7 & 49.5 & $\circ$72.7 & $\bullet$73.7 \\
mAP$_{IQ}$ & 61.4 & 49.5 & $\circ$77.5 & $\bullet$81.8 \\
mAR & 91.8 & 87.1 & $\bullet$96.6 & $\circ$93.4 \\
mAR$_{IQ}$ & $\bullet$100 & 98.6 & $\bullet$100 & $\bullet$100 \\
\end{tabular}
\caption{\textbf{{\textsc{AtMan}}~outperforms XAI methods on the QA dataset SQuAD.} Shown are (interquartile) mean average precision and mean average recall (the higher, the better). Best and second best values are highlighted with $\bullet$ and $\circ$.
}
\label{tab:text_performance}
\end{table}
The model was prompted with the template: ``\{Context\} Q: \{Question\} A:'', and the explainability methods were executed to derive scores for the tokens inside the given context, c.f.~Fig.~\ref{fig:squadv2_dataset_instance}.
If there were multiple tokens in the target label, we computed the average of the scores over the target tokens.
Similar to weak segmentation tasks in computer vision, we regarded the annotated explanations as binary labels and determined precision and recall over all these target tokens.
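The paper does not spell out the exact mAP computation; the following shows one standard way to compute ranked average precision of token relevancy scores against binary explanation labels, as a plausible sketch:

```python
import numpy as np

def average_precision(scores, labels):
    """Ranked average precision of token relevancy scores against binary
    explanation labels (assumes at least one positive label)."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # rank by score, descending
    rel = np.asarray(labels, dtype=float)[order]
    prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)  # precision at each rank
    return float((prec_at_k * rel).sum() / rel.sum())

# two relevant tokens, ranked first and last: AP = (1 + 2/3) / 2 = 5/6
ap = average_precision([0.9, 0.8, 0.1], [1, 0, 1])
```

The mean over all question/explanation pairs then yields the reported mAP.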
\paragraph{Results.}
The results are shown in Tab.~\ref{tab:text_performance}.
It can be observed that the proposed ${\textsc{AtMan}}$ method thoroughly outperforms all previous approaches in terms of mean average precision. This statement holds as well for the interquartile mean average recall. However, on the mean average recall, {Chefer}~slightly outperforms ${\textsc{AtMan}}$.
Furthermore, it is noteworthy that the small average explanation length (such as depicted in Fig.~\ref{fig:squadv2_dataset_instance}) results in
high values for recall scores in all methods.
Further details and some qualitative examples can be found in Appendix~\ref{sec:squadeval}.
\paragraph{Paragraph Chunking.} {\textsc{AtMan}}~can naturally be lifted to the explanation of paragraphs. We ran experiments in which ${\textsc{AtMan}}$ splits the input text into a few paragraphs at common delimiters and evaluates the resulting chunks simultaneously, instead of token-wise. This significantly decreases the total number of required forward passes and, on top, produces more human-readable explanations than the otherwise heterogeneously highlighted word parts.
Results are shown in Appendix~\ref{sec:qadoc}.
\subsection{{\textsc{AtMan}}~can do visual reasoning}
\paragraph{Protocol.} Similar to language reasoning, we again perform XAI on generative models. We evaluated the OpenImages~\cite{openimages} dataset as a VQA task and generated open-vocabulary predictions with the autoregressive model. Specifically, the model is prompted with the template: ``\{Image\} This is a picture of '', and the explainability methods are executed to derive scores for the pixels of the image with respect to the target label.
If there were multiple tokens in the target label, we take the average of the generated scores for each target token.
For evaluation, we considered the segmentation annotations of the dataset as ground truth explanations.
The segmentation subset contains 2.7M annotated images for 350 different classes. In order to ensure a good performance of the large-scale model at hand and, in turn, adequate explanations of the XAI methods, we filtered the images for a minimum dimension of $200$ pixels and a maximal proportional deviation between width and height of $20\%$.
Moreover, we randomly sampled $200$ images per class to avoid overweighting classes. This filtering leads to a dataset of $27{,}871$ samples. The average context sequence length is $144$ tokens and the average label coverage is $56\%$ of the input image.
\paragraph{Quantitative Results.} The results are shown in Tab.~\ref{tab:visual_performance}.
It can be observed that {\textsc{AtMan}}~thoroughly outperforms all other XAI approaches on the visual reasoning task for all metrics.
Note how explicit transformer XAI methods ({\textsc{AtMan}}, {Chefer}) in particular outperform generic methods (GradCAM, IG, IxG)
in recall. Moreover, while being memory-efficient (see next section), {\textsc{AtMan}}~also generates more accurate explanations compared to {Chefer}. Through the memory efficiency of {\textsc{AtMan}}, we were able to evaluate an intermediate version of a 30B upscaling trial of MAGMA (c.f.~Tab.~\ref{tab:visual_performance}). Interestingly, the general explanation performance slightly decreases compared to the 6B model variant. This could be attributed to the increased complexity of the model and, subsequently, the complexity of the explanation at hand.
Hence, it is not expected that the ``human'' alignment with the model's explanations scales with their size.
\begin{table}[t]
\centering
\small
\def\arraystretch{1}\tabcolsep=1.6pt
\begin{tabular}{l|ccccc|c|c}
& IxG & IG & GradCAM & {Chefer} & {\textsc{AtMan}} & & {\textsc{AtMan}}$_{\text{30B}}$ \\
\cline{1-6}
\cline{8-8}
mAP & 38.0 & 46.1 & 56.7 & 49.9 & $\bullet$65.5 && $\circ$61.2 \\
mAP$_{IQ}$ & 34.1 & 45.2 & 60.4 & 50.2 & $\bullet$70.2 && $\circ$65.1 \\
mAR & 0.2 & 0.3 & 0.1 & 11.1 & $\bullet$13.7 && $\circ$12.2 \\
mAR$_{IQ}$ & 0.1 & 0.1 & 0.1 & 10.1 & $\bullet$19.7 && $\circ$14.5 \\
\end{tabular}
\caption{\textbf{{\textsc{AtMan}}~outperforms XAI methods on the VQA benchmark of OpenImages.} Shown are (interquartile) mean average precision and mean average recall (the higher, the better). Best and second best values are highlighted with $\bullet$ and $\circ$. XAI methods are evaluated on a 6B model, except the last column, in which case only {\textsc{AtMan}}~succeeds in generating explanations.
\label{tab:visual_performance}}
\end{table}
\paragraph{Qualitative Illustration.}
Fig.~\ref{fig:weak_segment_image} shows several generated image explanations of {\textsc{AtMan}}~and {Chefer}~for different concepts. More examples of all methods can be found in Appendix~\ref{sec:qualitativecomparison}.
We generally observe more noise in gradient-based methods, in particular around the edges.
Note that as VQA only changes the target tokens, we do not need to evaluate the prompt more than once with the ${\textsc{AtMan}}$ method for different object classes.
In general, the results clearly provide an affirmative
answer to \textbf{(Q1)}: {\textsc{AtMan}}~ is competitive with previous XAI methods, including transformer-specific ones.
Next, we will analyze the computational efficiency of {\textsc{AtMan}}.
\subsection{{\textsc{AtMan}}~can do large scale}
While {\textsc{AtMan}}~shows competitive performance, it computes, unlike previous approaches, explanations at almost no extra memory cost. Fig.~\ref{fig:performance} illustrates the runtime and memory consumption on a single NVIDIA A100 80GB GPU. We evaluated the gradient-based transformer XAI method \cite{Chefer_2021_CVPR} and {\textsc{AtMan}}. The statistics vary in sequence lengths (colors) from 128 to 1024 tokens, and all experiments are executed with batch size 1 for better comparison.
One can observe that the memory consumption of {\textsc{AtMan}}~is around that of the forward pass (Baseline; green) and increases only marginally with the sequence length. In comparison, the method of \cite{Chefer_2021_CVPR}---and other gradient-based methods---more than doubles the memory consumption and exceeds the memory limit. Therefore, they fail on larger sequence lengths.
Whereas the memory consumption of {\textsc{AtMan}}~stays almost constant, the execution time significantly increases over sequence length when no further token aggregation is applied upfront. However, note that the exhaustive search loop of {\textsc{AtMan}}~can be run in parallel to decrease its runtime. In particular, this can be achieved by increasing the batch size and naturally by a pipeline-parallel\footnote{\url{https://pytorch.org/docs/stable/pipeline.html}} execution. For instance, since large models beyond 100B are scattered among nodes and thus many GPUs, the effective runtime is reduced by orders of magnitude, approaching the scale of a single forward pass.
Overall, these results clearly provide an affirmative answer to \textbf{(Q2)}: Through the memory efficiency of {\textsc{AtMan}}, it can be applied to large-scale transformer-based models.
\begin{figure}[t]
\centering
\includegraphics[width= .95\linewidth]{appendix_images/performance.pdf}
\caption{\textbf{{\textsc{AtMan}}~scales efficiently.} Performance comparison of the explainability methods {\textsc{AtMan}}~and Chefer~\textit{et al.} over various model sizes (x-axis) executed on a single 80GB memory GPU. Current gradient-based approaches do not scale; only {\textsc{AtMan}}~can be utilized on large-scale models.
Solid lines refer to the GPU memory consumption in GB (left y-axis). Dashed lines refer to the runtime in seconds (right y-axis). Colors indicate experiments on varying input sequence lengths. As baseline (green) a plain forward pass with a sequence length of 1024 is measured.
(Best viewed in color.)
\label{fig:performance}}
\end{figure}
\section{Conclusion}
\label{sec:outlook}
We proposed {\textsc{AtMan}}, a modality-agnostic perturbation-based XAI method for transformer networks.
In particular, {\textsc{AtMan}}~reduces the complex issue of finding proper perturbations to a single scaling factor per token.
As our experiments demonstrate, {\textsc{AtMan}}~outperforms current approaches relying on gradient computation. {\textsc{AtMan}}~is memory-efficient and requires forward passes only, enabling its utilization for deployed large models.
However, some limitations remain unresolved. Whereas {\textsc{AtMan}}~reduces the overall noise on the generated explanation, when compared to gradient-based methods, undesirable artifacts still remain. It is unclear to what extent this is due to the method or the underlying transformer architecture.
Through {\textsc{AtMan}}'s memory efficiency, one is able to evaluate whether models' explanatory capabilities scale with their size.
The extent to which larger models produce explanations that are more difficult to understand is a question that arises when comparing performance scores.
Consequently, scaling explainability with model size should be further studied. Besides this, our paper provides several avenues for future work, including explanatory studies of current generative models impacting our society. Furthermore, it could lay the foundation for not only instructing and, in turn, improving the predictive outcome of autoregressive models based on human feedback \cite{instructgpt} but also their explanations \cite{friedrich2022xiltypology}.
\ifsubvar
\section*{Acknowledgments}
This research has benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) cluster projects ``The Third Wave of AI'' and hessian.AI as well as from the German Center for Artificial Intelligence (DFKI) project ``SAINT''. Further, we thank Manuel Brack, Felix Friedrich, Marco Bellagente and Constantin Eichenberg for their valuable feedback.
\fi
\bibliographystyle{named}
\label{sn:introduction}
In 1965 Intel's co-founder Gordon Moore presented the now famous \emph{Moore's
law}, stating that \emph{the number of transistors on a chip will roughly
double every 18 months}. Later, in 2000, the 18 months were changed to two years,
but the law is still followed by electrical engineers. To quote Prof. Yale N.
Patt: \emph{Moore's law is not a law of physics, it is merely the psychology
of electrical engineers about what they can do with silicon. The law is alive
only because it is used by industries to compete with each
other}~\cite{yale_patt}.
There is consensus among computer architects that increasing the performance of
a processor will likely require more transistors on the chip to support
ever-increasing software needs. While Moore's law seems to enable the increase in
the number of transistors, it is not yet clear how these transistors can be
used to achieve the desired improvements. In this paper we summarize this
argument and promote multi-core design as a promising approach. We also present
some technological competition to the Microgrid.
The rest of the paper is organized as follows.
In Section~\ref{sn:performance_frequency_scaling} we describe the performance
improvement in architecture by frequency scaling. We explain the automatic
exploitation of ILP to increase performance
in Section~\ref{sn:parallel_instr_stream} and the use of concurrency in hardware to
improve performance in Section~\ref{sn:performance_concurrency_hardware}. We show the
explicit concurrency in software to achieve performance
in Section~\ref{sn:performance_concurrency_software}. We give the details of modern
many-core systems in Section~\ref{sn:modern_many-cores} and conclude the paper
in Section~\ref{sn:conclusion}.
\section{Performance via frequency scaling}
\label{sn:performance_frequency_scaling}
Since the advent of microprocessors, computer architects have been on a
quest to achieve the best performance by increasing the frequency of a
single chip. The main focus is to achieve the lowest possible latency in the
execution of individual instructions and the highest possible throughput in the
execution of the program. There are two ways to improve performance via
frequency scaling: reduce the line width of
the technology and hence increase the switching speed of the transistors, or
reduce the amount of work done in a cycle by pipelining instruction
execution.
\subsection{Power consumption}
In order to achieve a higher frequency in a given technology, a higher voltage
is required, which means more power is consumed. However, power consumption is
at odds with ``Green computing''~\cite{Kurp:2008:GC:1400181.1400186}. Also, as
power consumption increases, all kinds of problems come up: atom migration in
gates, changes in electronic properties, wear and tear due to thermal expansion
of the material itself, etc. Furthermore, devices are becoming portable, and
high voltage means constant access to power, which is simply not practical in
a portable environment.
\subsection{Dissipation of heat}
The power consumed by a transistor is dissipated from the chip in the
form of heat, which must be removed to avoid damaging the chip, as
silicon cannot tolerate excessive heat. Increasing the frequency of the transistors
increases the quantity of heat, and decreasing the size of the transistors
(which decreases the energy required to switch a transistor) generally increases
the density of heat. It is expensive to cool a heated processor,
and installing fans requires extra space. Portable devices generating this much heat
can burn the user. There is also the fact that batteries store
limited energy, and hence we want to minimise power consumption in order to
maximise battery lifetime.
\subsection{Delay in wires}
The switching speed of transistors can easily be improved, but wires are slow, and
wire delay worsens for a given amount of power dissipation. A wire is
passive and can be made fast, but then it requires more energy. One has to
consider the relative speed when a wire is driven by a minimum-sized
transistor. Communication from one part of the chip to another
is therefore limited by wires, i.e. performance is not limited by the
number of transistors that can be manufactured on a chip, but by
the number of gates reachable in a
cycle~\cite{Matzke:1997:PSS:619022.620798,Agarwal:2000:CRV:342001.339691}.
Increasing the frequency of the transistors means that the
communication delay becomes higher than the computation delay.
In past years the frequency of processors increased at a rate of
75\% per year (though not any more), while the frequency of DRAM increased at a
rate of just 7\% per year~\cite{McKee:2004:RMW:977091.977115}. Because of this
difference, DRAM latency has become increasingly large compared to the
processor's cycle time. This
divergence in latency is also known as the ``memory
wall''~\cite{Wulf:1995:HMW:216585.216588}. To avoid the delay of DRAM accesses,
concurrency is the next logical step: while the processor sends a
request to DRAM and waits for it to complete, it
should be able to perform other activities concurrently with the DRAM
access.
\section{Performance via automatic exploitation of ILP}
\label{sn:parallel_instr_stream}
An instruction stream has a combination of memory and computational operations.
When the memory operations are issued, there can be some independent
computational instructions that can be executed. But additional hardware is
required to find these independent instructions and schedule them so that no
data hazard occurs. This technique is called automatic exploitation of
Instruction Level Parallelism (ILP)~\cite{Wall:1991:LIP:106973.106991}; and a
number of approaches are tried in this direction such as pipelining,
out-of-order execution, branch prediction and cache prefetching.
\subsection{Pipelining}
The processing of each instruction (i.e. fetch, decode, execute, etc.) can be
broken down into multiple sub-functions. Like a conveyor belt in a factory,
these sub-functions can operate in parallel on multiple instructions
arriving one after another. Pipelining thus enables the simultaneous execution of
multiple instructions. The execution time of an individual instruction
remains the same, but the throughput of a stream of instructions is improved,
ideally by a factor of $x$, where $x$ is the length of the pipeline; e.g. in Pentium
machines the throughput of instructions was increased up to 20 times. In
addition, pipelining enables the processor to run at a higher clock
frequency, because the operation executed at each pipeline stage is simpler and
therefore takes a shorter cycle time.
\subsection{Out-of-order execution}
An instruction stream has a limited number of independent instructions that
can be executed in parallel. Often an instruction depends on a previous
operation (not only loads but also long-latency operations such as multiplies,
floating-point operations, etc.) and must therefore stall the pipeline until that operation
completes. Out-of-order execution allows instructions in the
pipeline to overtake those issued earlier in order to avoid stalling the
pipeline completely. However, hardware logic and energy are required for the
dependency analysis that determines whether instructions can overtake each other. Also,
since instructions execute out of order, a reorder buffer is required for
in-order completion to provide the expected
results~\cite{Vandierendonck:2007:BOE:1242531.1242548}. Out-of-order execution
introduces additional dependencies, e.g. write-after-read and write-after-write,
which are resolved by register renaming, i.e. allowing the hardware to use more
registers than fit in the instruction format. However, this increases the
size of the register file, lengthens the pipeline and therefore increases the
branch penalties~\cite{Jesshope03}.
\subsection{Branch prediction}
The number of instructions between branches is typically very small (on average
fewer than 6 instructions~\cite{Sechler:1995:DSL:203133.203135}). Branch
prediction is required to keep the pipeline full: while the branch and the
instructions before it are executing, we can fetch and decode the instructions
after the branch. If the branch is predicted accurately, the next instructions
are already in the pipeline; if it is mispredicted, the result is a pipeline
bubble, or many pipeline bubbles in more
complex processors. In addition, the effects of the wrongly fetched instructions must be
cancelled, which might involve rolling back side effects. This means that a
large number of cycles and a lot of energy are spent computing something that does
not actually contribute to the required computation of the program. A further issue
with branch prediction is that with multiple tasks/processes the branch
predictor has to predict branches not in one program but in multiple programs
interleaved over time. To keep accuracy high across heterogeneous codes,
the size of the branch predictor must grow. Because of multitasking, branch
prediction is an expensive approach.
\subsection{Cache prefetching}
To avoid the delay of a cache miss, a processor can look at the memory access
pattern of the program, guess which cache line will be accessed next, and
prefetch that line. If the guess is correct, the large delay of a memory access
is avoided; if not, a large amount of energy
and memory bandwidth is wasted fetching data from off-chip memory that
will not be used. The Intel Itanium, the Pentium 4 and the Transmeta Crusoe are some
examples where cache prefetching is used.
\subsection{Discussion}
Traditional and super-scalar machines~\cite{Johnson:1989:SPD:891364} implement
most of the techniques described above in order to achieve the best performance
through the automatic exploitation of ILP. However, because of the additional
hardware, microprocessors are becoming complex, energy-inefficient and not
scalable~\cite{Cotofana:1998:DCI:945405.938236}. Next to super-scalar machines,
VLIW~\cite{Fisher:1983:VLI:1067651.801649} machines were introduced, which in
general are more scalable than super-scalar machines but have a complicated
design and a complex code generation process. In addition, binary code for a
VLIW machine cannot be used on the same VLIW machine with a different issue
width, i.e. binary compatibility is not achieved~\cite{Jesshope03}.
Computer architects have tried hard to improve the performance of programs
through implicit ILP without changing anything in the software. Because of
implicit ILP, programmers were not required to have detailed knowledge of
the hardware: they simply waited until a new, more powerful machine became
available on the market, and then the software magically got faster. However,
the industry is reaching the limits of increasing the frequency of processors
and of executing more instructions in one cycle~\cite{bell.06.jpp}.
It seems the free lunch is over~\cite{citeulike:6643735}: programmers have
to take responsibility for parallelization in order to get more performance,
and therefore they need to become familiar with the architecture and the concurrent
execution model. Some automatic parallelization techniques take the
approach of abstracting away the architectural
details~\cite{Grelck:2007:SOS:1248648.1248654}, but cannot exploit the
architectural resources as fully as programmers can manually with
knowledge of the architecture. The introduction of concurrency in software
engineering increases the complexity of the software and decreases the
productivity of the engineer, but increases the performance of the software.
\section{Performance via concurrency in hardware}
\label{sn:performance_concurrency_hardware}
The increased number of transistors on a chip has enabled computer architects
to provide concurrent execution of independent instructions on multiple
execution units. However, the implicit concurrency that can be extracted
automatically in hardware is limited~\cite{bell.06.jpp} i.e. parallelism is not
possible unless concurrency is introduced explicitly in programs.
\section{Performance via explicit concurrency in software}
\label{sn:performance_concurrency_software}
Instead of using a single instruction stream and trying to improve its
performance through implicit parallelism, we can have multiple
instruction streams which execute concurrently. These streams are called
``threads'' and provide independent flows of control within a process. A
thread is also called a ``light-weight'' process, as it has its own stack, local
variables and program counter. Context switching between threads is much cheaper
than between processes. Industry has realized that writing parallel programs to exploit
the concurrency of the hardware cannot be avoided in the future.
Introducing concurrency into programs is difficult, and debugging these programs
requires a lot of effort from programmers. The order of execution within a thread is
known, but the order of execution across threads is not, which makes the behavior of
multiple threads in a parallel program difficult to predict. The
synchronization between threads needed to ensure that shared state is accessed
atomically can make the concurrency difficult to define and can significantly
affect the performance of the program on multi-core systems. It is true that
threads in some form have been quite common in networking since the 1980s, e.g. for
multiple connections to HTTP servers. In networking, however, threads represent
independent transactions, whereas in applications threads interact with each other in
producing results. Because of this inherent difficulty,
parallel programming never became a mainstream programming
paradigm~\cite{Talia:1997:PCS:256175.256193} in the software engineering
community in the past.
Despite being difficult, parallel programming is the most desirable technology
for current state-of-the-art multi-core processors: on contemporary hardware, the
performance of an application can only be improved through parallelization. A
number of programming libraries have been developed to provide
constructs for exposing concurrency in programs. POSIX
threads~\cite{Garcia:2000:PTL:348120.348381}, OpenMP~\cite{openmp},
SystemC~\cite{SystemC}, etc. are some of the libraries that provide constructs
for the creation, termination and synchronization of, and communication between, threads.
While parallelization is desirable, the management of threads in software on
contemporary hardware is expensive: typically 10--100 thousand cycles are consumed
in the creation or synchronization of threads, and there is also the cost of context
switching. Therefore fine-grained parallelism cannot be achieved by explicit
concurrency in software alone. The next logical step is to introduce concurrency at
multiple levels: applications, operating systems and hardware. We need threads
in software, we need threads in hardware, and we need the management of threads
in hardware. Concurrency at all levels will exploit the maximum possible
parallelism.
\section{Modern multi-core and many-core systems} \label{sn:modern_many-cores}
Since parallelization is the only practical solution in current technology and
the number of transistors on a single chip keeps
growing~\cite{Sodan:2010:PVM:1749398.1749434}, we claim that in the
future there will be a large number of cores on a single chip, and programmers
will have to put effort into writing parallel programs with knowledge of the
underlying architecture. In this section we describe some state-of-the-art
multi- and many-core systems. Some of these are available commercially
while others are mainly in the research domain and are comparable to the
Microgrid. Other many-core systems may exist but are not
discussed in this section.
\subsection{Nvidia's GPGPU}
Nvidia's GPGPU (General Purpose Graphical Processing Unit)~\cite{nvidia} makes
use of a GPU as a co-processor to accelerate the execution in CPUs for
general-purpose computing. The acceleration happens by offloading some of the
computationally intensive and time consuming portions of the program from CPU
to GPU. The rest of the application still runs on the CPU. Therefore it is also
known as "heterogeneous" or "hybrid" computing. A CPU generally consists of
two, four or eight cores, while the GPU consists of hundreds/thousands of
smaller cores. All these cores work together to crunch through the data in
the application. From the user's point of view, the application runs faster
because it is using the massively parallel processing power of the GPU to boost
performance. CUDA (Compute Unified Device Architecture) is a parallel
programming model for GPUs. It enables dramatic increases in computing
performance by harnessing the power of GPUs.
\subsubsection*{Discussion}
A GPU is actually a specialized accelerator that is connected to a traditional
single- or multi-core processor. Therefore a large amount of work is required
for the transfer of data between the processor and the GPU. Programmers have to
divide the problem into coarse sub-problems that can be solved independently.
In addition, programmers have to manage memory and
concurrency explicitly, i.e. a complex model of the architecture is forced into the mind of
the programmer.
The GPU architecture is based on the SIMD model and therefore can efficiently
execute SIMD-based applications. SIMD architectures are very inefficient at
branching, as each branch path must be executed sequentially. GPUs can achieve
very high performance on embarrassingly parallel applications. However,
applications with dense communication between threads (e.g. FFTs, compression
algorithms, sorting, etc.) cannot achieve very high performance compared to
other multi-core processors~\cite{karel2012}. GPUs also become really slow on
function calls; e.g. FFT is an embarrassingly parallel application, but it
does not scale very well on GPUs because of function calls from the outer loop.
\subsection{Sun/Oracle's UltraSPARC Tx}
Sun Microsystems introduced a RISC architecture named SPARC (Scalable
Processor ARChitecture) in 1987 and the UltraSPARC T1 microprocessor (code name
Niagara) in 2005. The series continued with the UltraSPARC T2, the UltraSPARC T3
and, after Oracle acquired Sun Microsystems, the UltraSPARC T4 in 2011.
The UltraSPARC T4~\cite{UltraSparcT4} has a 16-stage integer pipeline, an 11-stage
floating point pipeline and an issue width of 2. It has a thread priority mechanism
where one thread can get preferential access to a core's hardware for
increased performance. It has 4 processors on a single die and each processor
consists of 8 cores, giving a total of 32 cores on a single
chip. Each core has 8 hardware threads, i.e. 64 hardware threads per processor
and 256 hardware threads on the chip. Each core can switch between its eight
threads using a modified LRU (Least Recently Used) algorithm for thread
selection. Each core has 16KB of L1 I-cache and D-cache and
128KB of L2-cache. Eight cores share a 4MB L3-cache, and up to 1TB of DDR memory
is supported. The total transistor count is approximately 855 million. The frequency of each core
can be varied in the range of 2.85 to 3.0 GHz. The technology used is 40nm
CMOS and the total die size is $403mm^2$.
The UltraSPARC T4 processor has increased single-thread performance while
maintaining high multi-thread throughput, so
single-threaded applications can execute efficiently. It automatically
switches to single-thread mode when only a single thread is active,
dedicating all resources to that thread's execution. While software can
activate up to eight threads on each core at a time, the hardware dynamically and
seamlessly allocates core resources such as the instruction cache, data cache, L2-cache and
TLBs, as well as out-of-order execution resources such as the 128-entry
reorder buffer in the core. The cores provide sophisticated branch prediction
and have features for prefetching instructions and data.
\subsubsection*{Discussion}
The UltraSPARC addresses internet servers and other large-scale systems. It
generally addresses coarse-grained or embarrassingly parallel applications and
therefore disregards desktop computing and fine-grained parallelism. A library
is used to map software threads to hardware threads, which incurs creation
and synchronization overhead~\cite{T3-2011}. It is based on the SMP model and has
increased single-thread performance, but it suffers from the limited scalability of
the bandwidth and the power consumption of the interconnect between processor
and memory.
The UltraSPARC T4 cores are complex (16-stage pipeline) and have out-of-order
execution, branch prediction and cache prefetching, which are energy-inefficient
features. Some inefficiency also comes from the use of a huge
shared L2-cache, which is necessary for server applications (with large numbers of
synchronizations around data) but has a cost in silicon, i.e. heat, energy and
wiring complexity.
\subsection{Tilera's TILE64}
Tilera has introduced the TILE64~\cite{tile64}, based on the MIMD model. It has 64
cores on a chip using 90nm CMOS. These cores are fully functional and programmable,
and each is capable of running its own operating system. A group of cores can
also be used to run a symmetric multi-processing operating system. Every
core has a frequency range of 600 to 900 MHz. The cores are in-order with a three-way
VLIW issue width and implement a MIPS-derived VLIW instruction set. The
pipeline has many (more than 6) stages. Each core has 32 general-purpose
registers and three functional units: two integer arithmetic logic units and a
load-store unit. Every core has an L1-cache and an L2-cache, as well as a distributed
L3-cache.
Tilera's architecture eliminates the on-chip bus interconnect by placing a
communication switch on each core and arranging the cores in a grid on the
chip, creating an efficient two-dimensional traffic system for packets. This
technology is named the intelligent mesh (iMesh). iMesh is similar to the mesh
network used in Intel's SCC or NoCs in embedded systems, with the innovation
that the flow of messages in the mesh network can be adapted dynamically
based on the load of the network.
Tilera's Multicore Development Environment (MDE) provides a
programming framework that allows developers to scale
their applications to large-scale multicore systems. It makes
standard tools such as gcc, gdb and oprofile multi-core aware, so that
developers can easily debug, profile and optimize code running across dozens
of cores.
\subsubsection*{Discussion}
Tilera's architecture does not include hardware support for floating-point
operations and is therefore not suitable for scientific computing. It mainly
targets embedded applications, e.g. video encoding/decoding and network packet
processing. Its programming model requires the programmer to use registers for
communication, which means more responsibility lies with
the programmer or compiler in creating threads; in effect, programmers
need to understand the architecture in detail in order to program it. Also,
mapping the threads in the program to the hardware requires a software
library, and therefore the cost of creation, synchronization and mapping of
software threads to hardware cannot be avoided.
\subsection{Intel's SCC}
Intel's SCC (Single-chip Cloud Computer)~\cite{intelscc} is an experimental
many-core research platform designed to address hardware and software
challenges faced by industry and academic institutions in the tera-scale
project~\cite{terascale}. It consists of 48 Pentium 1 cores connected in a
mesh network, with an on-chip message-passing network for inter-thread
communication. The cores are relatively simple but fully functional
general-purpose cores. There is no hardware cache coherency protocol, which
allowed Intel to place 48 cores on a chip using 45nm CMOS technology. It does
not come as a stand-alone computer; a management PC (MCPC) is used to run
applications on the chip.
Intel's SCC has fine-grained power management where software applications are
given control to turn cores on and off or to change their performance levels,
continuously adapting to use the minimum energy needed at a given moment. It
can run all 48 cores at one time over a range of 25W to 125W and selectively
vary the voltage and frequency of the mesh network as well as a group of cores.
Each tile (2 cores) can have its own frequency, and groupings of four tiles (8
cores) can each run at their own voltage. Every core uses the mainstream x86
(CISC) instruction set. The Linux operating system is available for the chip,
as well as gcc and Fortran compilers. A small library RCC is used for the
communication between cores.
\subsubsection*{Discussion}
Intel's SCC is a prototype designed for studying parallel
programming paradigms in general-purpose computers, and it is therefore not a
commercial product for mainstream computing. It mainly addresses
coarse-grained parallelism and may not achieve high performance improvements
on fine-grained applications. The absence of a hardware cache coherency protocol
places more responsibility on the programmer, who must manage the coherency of
the caches. The Pentium 1 core is a single-threaded machine and therefore
cannot achieve latency tolerance for long-latency operations.
\subsection{Microgrid}
The
Microgrid~\cite{conf:hpc:Jesshope04,Bernard:2010:RPM:2031978.2031994,JesshopeAPC08}
is a general-purpose, many-core architecture developed at the University of
Amsterdam which implements hardware multi-threading using data flow scheduling
and a concurrency management protocol in hardware to create and synchronize
threads within and across the cores on chip. The suggested concurrent
programming model for this chip is based on fork-join constructs, where each
created thread can define further concurrency hierarchically. This model is
called the microthreading model and is also applicable to current multi-core
architectures using a library of the concurrency constructs called
\emph{svp-ptl} ~\cite{SVP-PTL2009} built on top of pThreads. In our work, we
focus on a specific implementation of the microthreaded architecture where each
core contains a single issue, in-order RISC pipeline with an ISA similar to
DEC/Alpha, and all cores are connected to an on-chip distributed memory
network~\cite{Jesshope:2009:ISM:1577129.1577136,Bousias:2009:IEM:1517865.1518255}.
Each core implements the concurrency constructs in its instruction set and is
able to support hundreds of threads and their contexts, called microthreads,
and tens of families (i.e. ordered collections of identical microthreads)
simultaneously.
A number of tools and simulators have been added to the designer's toolbox and are used
for the evaluation of the Microgrid from different perspectives. The compiler
for the Microgrid~\cite{poss.12.sl} can generate binaries for different
implementations of the Microgrid. We have software libraries that provide the
run-time systems for the microthreading model on shared-memory SMP machines,
referred to as \emph{svp-ptl}~\cite{SVP-PTL2009}, and on distributed memory for
clusters/grids, referred to as Hydra~\cite{Andrei:msc_hydra:2010} and
\emph{dsvp-ptl}~\cite{DSVP-PTL2011}. The SL compiler can generate binaries for
UTLEON3~\cite{5491777,danek.12},
MGSim~\cite{Bousias:2009:IEM:1517865.1518255,poss.13.MGSim.SAMOS} and
HLSim~\cite{Irfan:multipe_levels_hlsim:2013, Irfan:oneipc_hlsim:2013,
Irfan.12.2013.signatures, Irfan.12.2013.CacheBased, Irfan.01.2014.analytical,
Irfan:hl_sim_ptl:2011, Irfan:msc_hlsim:2009, Uddin:2012:CSM:2162131.2162132}.
\section{Conclusion}
\label{sn:conclusion}
The psychology of electrical engineers is that they can double the number of
transistors on a single chip every second year, which has enabled computer
architects to design ever more complex microprocessors. A number of approaches
have been tried to achieve improvements in the throughput of programs implicitly. But
industry has reached the limits of implicit performance improvement, and
multi-core designs seem to be a promising approach to achieving performance
explicitly. However, concurrency in hardware alone cannot improve the
performance of a program unless concurrency is also exposed in the software.
\bibliographystyle{plain}
\section{Introduction}
The dimensional reduction of ten-dimensional IIB supergravity on Calabi-Yau orientifolds yields four-dimensional $\mathcal{N}=1$ supergravity theories \cite{Grimm:2004uq}, which are of particular phenomenological interest. The resulting couplings are given by topological quantities of the internal space which are computable for explicit backgrounds, and thus provide a fruitful environment for string model building \cite{Blumenhagen:2005mu,Grana:2005jc,Douglas:2006es,Blumenhagen:2006ci,Akerblom:2007np,McAllister:2008hb}. The compactification on a Calabi-Yau threefold preserves a quarter of the supersymmetry of ten dimensions and thus results in a $\mathcal{N}=2$ theory in four dimensions, which is then broken to $\mathcal{N}=1$ by the presence of orientifold planes. Incorporating gauge fields by adding D-branes in the Calabi-Yau background one is led to introduce extended objects with negative tension to cancel gravitational and electro/magnetic tadpoles, given by the orientifold planes, which however carry no physical degrees of freedom by themselves \cite{Giddings:2001yu}. String theory provides an infinite series in $\alpha'$ of higher-derivative corrections to the leading order two-derivative IIB supergravity action. However, even the next to leading order $\alpha'^3$-correction to the four-dimensional action arising in Calabi-Yau (orientifold) compactifications are only marginally understood, but have proven to be of high relevance to string phenomenology \cite{Denef:2004ze,Balasubramanian:2005zx}.
In this work, we discuss a set of four-derivative couplings that arise in four-dimensional $\mathcal{N}=2$ and $\mathcal{N}=1$ supergravity theories resulting from purely gravitational eight-derivative $\alpha'^3$-corrections to ten-dimensional IIB supergravity \cite{Kiritsis:1997em,Antoniadis:1997eg,Liu:2013dna}, upon compactification on a Calabi-Yau threefold and orientifold, respectively.
Such corrections are of conceptual as well as phenomenological importance. Four-dimensional $\mathcal{N}=1$ and $\mathcal{N}=2$ supergravity theories with four-derivative interaction terms are only marginally understood \cite{Katmadas:2013mma,Koehn:2012ar,Horndeski:1974wa,Alvarez-Gaume:2015rwa}, and knowledge of the relevant couplings is desirable. A recent advance is the classification of $4d,\,\mathcal{N}=1$ four-derivative superspace operators for ungauged chiral multiplets \cite{Ciupke:2016agp}. On the other hand, higher-derivative couplings play a prominent role in phenomenological models such as inflation \cite{Amendola:1993uh,Bielleman:2016grv, Aoki:2015eba, Dalianis:2014sqa} and have recently been used in the context of moduli stabilization \cite{Ciupke:2015msa}.
Dimensionally reducing ten-dimensional IIB supergravity on a supersymmetric background must yield an effective four-dimensional $\mathcal{N}=1$ or $\mathcal{N}=2$ supergravity theory, depending on how much supersymmetry is preserved by the background. However, the supersymmetric completion of IIB supergravity at order $\alpha'^3$ is not known; thus an exhaustive study of the four-derivative effective action at order $\alpha'^3$ in four dimensions is out of reach.
Hence our strategy will be to focus on a complete subset of the ten-dimensional IIB supergravity theory at order $\alpha'^3$ and argue that the resulting couplings in the four-dimensional theory cannot be altered by any other sector of the higher-dimensional theory.
More concretely, the terms we analyze in ten dimensions carry four Riemann tensors, are thus schematically of the form $\mathcal{R}^4$, and are shown to be complete \cite{Gross:1986iv,Rajaraman:2005up,Hyakutake:2006aq}. In other words, all other possible $\mathcal{R}^4$-terms are related to this sector via a higher-derivative field redefinition of the metric. We hence restrict our analysis to a subset of four-dimensional couplings which can only originate from the $\mathcal{R}^4$-sector and thus must also be complete in the above sense. In particular, we focus on K\"{a}hler deformations of the internal space, which give rise to a set of real scalar fields in the external space. We do not allow for background fluxes or localized sources such as D-branes in this work; furthermore, we neglect higher-derivative corrections arising due to D-branes and O-planes.
It is well known that the classical Einstein-Hilbert term gives rise to the kinetic terms for the K\"{a}hler moduli. The $\mathcal{R}^4$-sector generically corrects the couplings of the kinetic terms at order $\alpha'^3$ by some expression carrying six internal space derivatives \cite{Bonetti:2016dqh}, which was also discussed in the context of M-theory/F-theory in \cite{Grimm:2013bha,Grimm:2013gma,Grimm:2014efa,Grimm:2014xva,Grimm:2015mua}. However, these $\alpha'$-corrections will not be addressed in this work. Furthermore, note that the two-derivative kinetic terms generically receive backreaction effects at order $\alpha'^3$ from the modified supersymmetric background at this order in the string length. However, the four-derivative external terms arising from $\mathcal{R}^4$ do not receive corrections from the modified background, since these would be of even higher order in $\alpha'$. Moreover, the interaction terms of the K\"{a}hler moduli fields with the four-dimensional metric can only arise from purely gravitational terms in ten dimensions, given at order $\alpha'^3$ solely by the $\mathcal{R}^4$-sector. We restrict ourselves to the study of four-derivative couplings at most quadratic in the infinitesimal K\"{a}hler moduli deformations. A complete analysis would also need to take into account cubic and quartic infinitesimal K\"{a}hler moduli deformations, which will be discussed in a forthcoming work \cite{MCY3}.
This paper is organized as follows. In section \ref{4dLagr} we review the relevant $\mathcal{R}^4$-terms in ten dimensions, comment on the supersymmetric background, and discuss the four-derivative couplings quadratic in the K\"{a}hler moduli deformations, arising upon dimensional reduction on a Calabi-Yau threefold. In section \ref{4dN1} we then perform the orientifold projection to obtain the $\mathcal{N}=1$ couplings at fourth order in derivatives.
\section{The 4d four-derivative Lagrangian}\label{4dLagr}
This section discusses the dimensional reduction of IIB supergravity including purely gravitational eight-derivative corrections on a Calabi-Yau threefold to four dimensions. We fluctuate the background metric by K\"{a}hler deformations and focus on couplings which carry four external space derivatives and are at most quadratic in the infinitesimal K\"{a}hler deformations.
We first review the relevant $\alpha'^3$ $R^4$-corrections to ten-dimensional IIB supergravity and the supersymmetric background.
\subsection{IIB higher-derivative action}\label{IIBaction}
The IIB higher-derivative action at order $\alpha '^3$ has various contributions \cite{Schwarz:1982jn,Grisaru:1986kw,Gross:1986mw,Abe:1987ud,Kehagias:1997jg,Kehagias:1997cq,Minasian:2015bxa,Policastro:2006vt,Policastro:2008hg}. For the discussion at hand only the $\mathcal{R}^4$-sector containing four ten-dimensional Riemann tensors will be relevant. This subsector of the IIB supergravity action at order $\alpha'^3$ in the Einstein-frame is given by
\begin{equation}\label{RR4action}
S_{\text{grav}} = S_{EH} + \alpha \; S_{\wh R^4} \;\; , \;\ \; \text{with} \;\;\; \alpha = \frac{\zeta(3) \alpha'^3}{3 \cdot 2^{10}} \;\; ,
\end{equation}
and
\begin{equation}
S_{EH} = \frac{1}{2 \kappa_{10}^2} \int \wh R \wh\ast 1 \;\; ,
\end{equation}
where $2\kappa_{10}^2 = (2\pi)^7 \alpha'^4$. The higher-derivative contribution can be schematically written as
\begin{equation}\label{R4}
S_{\wh R^4} = \frac{1}{2 \kappa_{10}^2} \int e^{-\frac{3}{2} \wh\phi}
\big( t_8 t_8 + \tfrac{1}{8} \epsilon_{10} \epsilon_{10} \big) \wh R^4 \wh * 1 \;\; ,
\end{equation}
where the explicit tensor contractions are given by
\ba\label{deft8t8R4}
\epsilon_{10} \epsilon_{10} \wh R^4 &= \epsilon^{R_1 R_2 M_1\ldots M_{8} } \epsilon_{R_1 R_2 N_1 \ldots N_{8}} \wh R^{N_1 N_2}{}_{M_1 M_2} \wh R^{N_3 N_4}{}_{M_3 M_4} \wh R^{N_5 N_6}{}_{M_5 M_6} \wh R^{N_7 N_8}{}_{M_7 M_8} \ , \nonumber \\[.1cm]
t_8 t_8 \wh R^4 & = t_{8}^{ M_1 \dots M_8} t_{8 \, N_1 \dots N_8} \wh R^{N_1 N_2}{}_{M_1 M_2} \wh R^{N_3 N_4}{}_{M_3 M_4} \wh R^{N_5 N_6}{}_{M_5 M_6} \wh R^{N_7 N_8}{}_{M_7 M_8} \ .
\ea
Here $\epsilon_{10}$ is the ten-dimensional Levi-Civita tensor; the explicit definition of the tensor $t_8$ can be found in \cite{Freeman:1986zh}.
Let us note that we do not discuss higher-derivative terms of the dilaton, since the ten-dimensional action is not known in complete form at this order. However, the complete axio-dilaton dependence of the $R^4$-terms is known to be
\begin{equation}\label{R42}
S^{\text{\tiny{(2)}}}_{\wh R^4} = \frac{1}{2 \kappa_{10}^2}\int E(\tau,\bar \tau)^{3/2}
\Big( t_8 t_8 + \tfrac{1}{8} \epsilon_{10} \epsilon_{10} \Big) \wh R^4 \wh * 1 \;\; ,
\end{equation}
where $E(\tau,\bar \tau)^{3/2}$ is the $SL(2,\mathbb Z)$-invariant Eisenstein Series given by
\begin{equation}\label{Eisen32}
E(\tau,\bar \tau)^{3/2} =
\sum_{(m,n) \neq(0,0)} \frac{\tau_2^{3/2}}{|m + n \, \tau|^3} \ ,
\end{equation}
with $\tau = \wh C_0 + i e^{ - \wh \phi} := \tau_1 + i \tau_2$ the axio-dilaton.
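Let us note that the $SL(2,\mathbb Z)$-invariance of \eqref{Eisen32} can be verified directly: under $\tau \to \frac{a \tau + b}{c \tau + d}$ with $ad - bc = 1$ one has $\tau_2 \to \tau_2/|c\tau + d|^2$, while
\begin{equation}
m + n \, \frac{a \tau + b}{c \tau + d} = \frac{(m d + n b) + (m c + n a)\, \tau}{c \tau + d} \;\; ,
\end{equation}
such that every summand in \eqref{Eisen32} is mapped to the summand labeled by $(m',n') = (m d + n b, \, m c + n a)$, and the sum over $(m,n) \neq (0,0)$ is merely relabeled.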
In the large $\tau_2$ limit, which corresponds to the small string coupling limit, \eqref{Eisen32} reduces to
\begin{equation} \label{f0expansion}
E(\tau,\bar \tau)^{3/2} = 2 \zeta(3) \, \tau_2^{3/2} + \tfrac{2\pi^2}{3} \tau_2^{-1/2}
+ {{\mathcal O}}(e^{-2\pi \tau_2}) \ .
\end{equation}
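The leading term in \eqref{f0expansion} is easily checked: the $n=0$ contributions to \eqref{Eisen32} give
\begin{equation}
\sum_{m \neq 0} \frac{\tau_2^{3/2}}{|m|^3} = 2 \, \tau_2^{3/2} \sum_{m=1}^{\infty} \frac{1}{m^3} = 2 \, \zeta(3) \, \tau_2^{3/2} \;\; ,
\end{equation}
while the $\tau_2^{-1/2}$ term and the exponentially suppressed remainder arise from Poisson resummation of the $n \neq 0$ contributions.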
We will use this approximation in \eqref{R42} in the following discussion, and consider only the leading-order contribution in the string coupling $g_s$, given by \eqref{R4}.
\subsection{Supersymmetric background}\label{susyback}
The supersymmetric background of ten-dimensional IIB supergravity at the two-derivative level, and thus at leading order in $\alpha'$, is given by a Calabi-Yau threefold $Y_3$. For simplicity we do not consider localized sources and background fluxes, such that the line element is given by
\ba\label{metricbackg}
ds^2 = \eta_{\mu \nu} dx^\mu dx^\nu + 2 g^{\text{\tiny{(0)}}}_{m \bar n} dy^m d\bar y^{\bar n} \;\; ,
\ea
with $\eta_{\mu \nu}$ the Minkowski metric, where $\mu = 0,1,2,3$ is a $4d$ external space world index and $m, \bar m = 1,\ldots, 3$ are the holomorphic and anti-holomorphic indices of the complex three-dimensional internal Calabi-Yau manifold with metric $g^{\text{\tiny{(0)}}}_{m \bar n}$. Taking into account the higher-curvature corrections \eqref{R4} in ten dimensions, \eqref{metricbackg} is no longer a supersymmetric background, but needs to be modified such that the internal manifold is no longer Ricci flat.
It was shown that the internal space metric is modified as
\begin{equation}
g^{\text{\tiny{(0)}}}_{m \bar n} \longrightarrow g^{\text{\tiny{(0)}}}_{m \bar n} + \alpha'^3 g^{\text{\tiny{(1)}}}_{m \bar n} \;\; ,
\end{equation}
where $g^{\text{\tiny{(1)}}}_{m \bar n}$ solves the modified Einstein equation $R_{m \bar n} = \alpha'^3\partial_m \partial_{\bar n} Q$, with $Q$ the six-dimensional Euler density \eqref{Euler} \cite{Freeman:1986br}.
However, we can restrict our analysis to the case of the leading order metric \eqref{metricbackg}, since at order $\alpha'^3$ the four-derivative couplings only receive corrections from the $\mathcal{R}^4$-terms evaluated on the zeroth order Calabi-Yau background.
We do not incorporate internal background flux in this work, since the sector considered here decouples from it, and we also do not allow for localized D-brane sources, which would give rise to a warp factor in \eqref{metricbackg}.
In the following we freeze the complex structure moduli and allow solely for the K\"{a}hler deformations, given by the harmonic $(1,1)$-forms $\{\omega_{i}\}$ with $i=1,\ldots,h^{1,1}$, where $h^{1,1} = \text{dim}\, H^{1,1}$ is the dimension of the $(1,1)$-cohomology group. Harmonicity is understood w.r.t.\,the zeroth order Calabi-Yau metric. These give rise to the massless K\"{a}hler moduli fields by varying the background metric as
\begin{equation}\label{Kaehlerfluc}
g^{\text{\tiny{(0)}}}_{m \bar n } \rightarrow g^{\text{\tiny{(0)}}}_{m\bar n} - i \, \delta v^i \, \omega_{i \, m \bar n}\;\;,
\end{equation}
where $\delta v^i$ are the real scalar infinitesimal K\"{a}hler deformations.\footnote{Note that we choose the fluctuation to be $- i \, \delta v^i \, \omega_{i \, m \bar n}$. The sign is chosen such that, combined with the convention $J_{m \bar n} = i g_{m \bar n}$, one obtains $\delta J = \delta v^i \omega_i$ with positive sign.} Let us emphasize that \eqref{Kaehlerfluc} also receives $\alpha'^3$-corrections \cite{Grimm:2014efa}; however, these do not affect the four-derivative couplings at the relevant order in $\alpha'$. A preliminary study allowing for both complex structure and K\"{a}hler deformations simultaneously at the higher-derivative level arising from the $\mathcal{R}^4$-sector in the context of M-theory can be found in \cite{mythesis}.
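Indeed, with the convention $J_{m \bar n} = i \, g_{m \bar n}$ the fluctuation \eqref{Kaehlerfluc} induces
\begin{equation}
\delta J_{m \bar n} = i \, \delta g_{m \bar n} = i \, \big( - i \, \delta v^i \, \omega_{i \, m \bar n} \big) = \delta v^i \, \omega_{i \, m \bar n} \;\; ,
\end{equation}
such that $\delta J = \delta v^i \omega_i$ indeed carries a positive sign.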
In this work we consider four-derivative couplings which are up to quadratic order in the infinitesimal K\"{a}hler deformations $\delta v^i$.
\subsection{Reduction results}\label{redres}
Compactifying the action \eqref{RR4action} on the Calabi-Yau background \eqref{metricbackg}, we expand the result at the four-external-derivative level up to quadratic order in the infinitesimal K\"{a}hler deformations \eqref{Kaehlerfluc}.
The reduction result may be expressed entirely in terms of the second Chern form $c_2$, see \eqref{Chernclasses}, the K\"{a}hler form \eqref{eq:Kform}, and a higher-derivative object $Z_{m \bar m n \bar n}$ \cite{Katmadas:2013mma}
given by
\begin{equation}\label{higherder}
Z_{m \bar m n \bar n} = \tfrac{1}{(2 \pi)^2}\epsilon _{ m \bar m m_1 \bar m_1 m_2 \bar m_2 } \epsilon _{ n \bar n n_1 \bar n_1 n_2 \bar n_2 } R {}^{\bar m_1 m_1 \bar n_1 n_1} R {}^{\bar m_2 m_2 \bar n_2 n_2}\; .
\end{equation}
Its analog for a Calabi-Yau fourfold has been encountered in the context of M-theory/F-theory in \cite{Grimm:2014efa}. The object $Z_{m \bar m n \bar n}$ in \eqref{higherder} obeys the following relations
\ba\label{Zrel}
&Z_{m \bar m n \bar n}= -Z_{m \bar n n \bar m} = Z_{n \bar m m \bar n} &\;\;\;\;\; &Z_{m \bar m} =Z_{m \bar m n}{}^{ n} = -2 i (\ast c_2)_{m \bar m}& \;\; & Z_{m\bar m} \omega_i^{\bar m m} = 2 i\ast (c_2\wedge \omega_i)& \nonumber \\ \nonumber \\
& Z_{m \bar m} g^{\bar m m}= Z_{m}{}^m{}_n {}^{ n} = 2 \ast(c_2 \wedge J) & \;\;\; & Z_{m \bar m n \bar n}R^{m \bar m n \bar n} = - 3! \, 2 \pi \ast c_3 \;\; .
\ea
Note that $Z_{m \bar m n \bar n}$ has the symmetry properties of the Riemann tensor built from a K\"{a}hler metric. It is itself not topological, but is related to the second and third Chern forms of a Calabi-Yau manifold of dimension $n \geq 3$. In the following we dress objects constructed from the background Calabi-Yau metric with a superscript ${}^{\text{\tiny{(0)}}}$, as e.g.\,$Z^{\text{\tiny{(0)}}}_{m \bar m n \bar n}$.
We have now set the stage to discuss the reduction results.
By fluctuating the Calabi-Yau metric with the K\"ahler deformations, the higher-derivative $\alpha'^3$-terms \eqref{R4} at the two-derivative level give rise to an $\alpha'^3$-modified four-dimensional Einstein-Hilbert term \cite{Becker:2002nn} and to $\alpha'^3$-corrections to the kinetic terms of the K\"{a}hler moduli fields \cite{Bonetti:2016dqh}. The explicit form of these corrections has also been worked out in the context of M-theory on Calabi-Yau fourfolds in \cite{Grimm:2013bha,Grimm:2013gma,Grimm:2014efa,Grimm:2015mua}. The four-dimensional dilaton $\phi$ arises as $\wh\phi \to \phi$; its internal profile is constant at leading order, but is given by $\phi \propto \alpha'^3 Q$ at the order of consideration. For the discussion at hand, however, only the leading order constant part is relevant. The focus of this work is to derive the four-derivative corrections to the leading order two-derivative $4d$ Lagrangian, as discussed next.
The reduction of the classical Einstein-Hilbert term gives
\begin{equation}\label{classR}
\frac{1}{2 \kappa_{10}^2} \int \wh R \wh\ast 1 \longrightarrow \frac{1}{2 \kappa_{10}^2} \int_{\cM_4} \Big[ \Omega R + \nabla_\mu \delta v^i \nabla^\mu \delta v^j \int_{Y_3} \Big( \tfrac{1}{2} \omega_{i m \bar n} \omega_{j }{}^{\bar n m} - \omega_{i m}{}^m \omega_{j n}{}^n \Big) \Big] \ast_4 1 + {{\mathcal O}} (\alpha) \;\; ,
\end{equation}
with
\begin{equation}\label{Weyl0n}
\Omega = \int_{Y_3} \Big[ 1 - i \delta v^i \, \omega _{i m}{}^{m} + \tfrac{1}{2} \delta v^i \delta v^j ( \omega _{i m \bar n} \omega _{j }{}^{\bar n m} - \omega _{i m}{}^m \omega _{j n}{}^n ) \Big] *_6 1\ ,
\end{equation}
where the ${{\mathcal O}}(\alpha)$ corrections in \eqref{classR} arise due to the aforementioned $\alpha'^3$-modification of the background. These terms, however, do not interfere with our analysis. It is necessary to consider the Weyl rescaling factor \eqref{Weyl0n} up to order $(\delta v)^2$.
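Let us sketch how the expansion \eqref{Weyl0n} arises. For a hermitian metric the volume form is proportional to $\det (g_{m \bar n})$, and writing the fluctuated metric as $g_{m \bar n} = g^{\text{\tiny{(0)}}}_{m \bar n} + h_{m \bar n}$ with $h_{m \bar n} = - i \, \delta v^i \, \omega_{i \, m \bar n}$ one has
\begin{equation}
\frac{\det g}{\det g^{\text{\tiny{(0)}}}} = e^{\mathrm{tr} \ln \left( 1 + g^{\text{\tiny{(0)}}}{}^{-1} h \right)} = 1 + h_m{}^m + \tfrac{1}{2} \big( h_m{}^m h_n{}^n - h_m{}^n h_n{}^m \big) + {{\mathcal O}}(h^3) \;\; ,
\end{equation}
which upon inserting $h_m{}^n = - i \, \delta v^i \, \omega_{i \, m}{}^n$ reproduces the terms linear and quadratic in $\delta v^i$ in \eqref{Weyl0n}.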
The four-derivative corrections arising from the ten-dimensional $\mathcal{R}^4$-terms result in
\ba\label{R4Red}
& \tfrac{1}{2 \kappa_{10}^2} \int e^{-\frac{3}{2} \wh\phi}
\Big( t_8 t_8 + \tfrac{1}{8} \epsilon_{10} \epsilon_{10} \Big) \wh R^4 \wh * 1 \quad \quad \quad \longrightarrow
\ea\vspace{-0.5cm}
\ba
\tfrac{192 (2 \pi)^2}{2 \kappa_{10}^2} \int_{M_4} e^{-\frac{3}{2} \phi} \bigg [ \, \;\; &\Big[ 4 \, R_{\mu \nu} R^{\mu \nu} - R^2 \Big]\Big( \int_{Y_3} c_2^{\text{\tiny{(0)}}}\wedge J ^{\text{\tiny{(0)}}} + \delta v^i \int_{Y_3} c_2^{\text{\tiny{(0)}}}\wedge \omega_i + \delta v^i \delta v^j \int_{Y_3} \delta_j (c_2^{\text{\tiny{(0)}}}\wedge \omega_i ) \Big) & \nonumber \\
\;\; + &\Big[ \big( - 2 R_{\mu \nu } + \tfrac{1}{2} R g_{\mu \nu} \big)\nabla^\mu \delta v^i \nabla^\nu \delta v^j \ + \nabla_\mu \nabla^\mu \delta v^i \; \nabla_\nu \nabla^\nu \delta v^j \ \Big]\, \int_{Y_3} \, Z^{\text{\tiny{(0)}}}_{m\bar m n \bar n} \omega_i{}^{\bar m m} \omega_j{}^{\bar n n} \ast 1 & \nonumber \\
& \;\; -2 \nabla_\mu \nabla_\nu \delta v^i \; \nabla^\mu \nabla^\nu \delta v^j \ \, \int_{Y_3} \, Z^{\text{\tiny{(0)}}}_{m\bar m n \bar n} \omega_i{}^{\bar m m} \omega_j{}^{\bar n n} \ast 1 \;\;\; \bigg ]\ast_4 1 \;\; ,& \nonumber
\ea
where $\delta_i$ denotes the variation resulting from the metric shift \eqref{Kaehlerfluc}. Note that
\begin{equation}
\int_{Y_3} \delta_j (c_2^{\text{\tiny{(0)}}} \wedge \omega_i ) =0 \;\; ,
\end{equation}
since $c_2\wedge \omega_i $ is a topological quantity and hence its variation results in a total derivative. Furthermore, let us note that the four-dimensional Euler-density is given by
\begin{equation}\label{euler}
e(\nabla) = R^2 -4 R_{\mu \nu} R^{\mu \nu} + R_{\mu \nu \rho \sigma } R^{\mu \nu \rho \sigma} \; , \;\; \text{with} \;\; \int_{M_4} e(\nabla) \ast_4 1= \chi(M_4) \;\; ,
\end{equation}
where $\chi(M_4)$ is the Euler-characteristic of the external space $M_4$. Comparing \eqref{euler} to \eqref{R4Red} one infers that one may express the reduction result at zeroth order in $\delta v^i$ in terms of $R_{\mu \nu \rho \sigma } R^{\mu \nu \rho \sigma} $, plus the topological term dependent on $\chi(M_4)$. However, we will not perform this substitution since there is a more intuitive way of expressing the result as we will discuss in the next section.
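For completeness, the rearrangement in question reads
\begin{equation}
4 \, R_{\mu \nu} R^{\mu \nu} - R^2 = R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma} - e(\nabla) \;\; ,
\end{equation}
such that, up to the topological contribution proportional to $\chi(M_4)$, the curvature-squared structure in \eqref{R4Red} at zeroth order in $\delta v^i$ is equivalent to a Riemann-squared term.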
Let us stress that \eqref{R4Red} is not the complete reduction result at the four-derivative level arising from the $\mathcal{R}^4$-sector: we have neglected terms cubic and quartic in the fluctuations $\delta v^i$. Their derivation is crucial for a complete understanding, and we refer the reader to future work.
\subsubsection{Weyl rescaling }
In this section we perform the Weyl rescaling of the four-dimensional action composed of \eqref{classR} and \eqref{R4Red} to the canonical Einstein-frame. Furthermore, we discuss the extension of the infinitesimal K\"{a}hler deformations to finite fields.
The Weyl rescaling of the classical Einstein-Hilbert term gives
\begin{equation}\label{Weyl2}
\tfrac{1}{2 \kappa_{10}^2} \int_{\cM_4} \Omega R *1 \stackrel{\text{Weyl}}{\rightarrow} \tfrac{1}{ (2 \pi)^4 \alpha'} \int_{\cM_4} \Big[ R - \tfrac{3 }{2} \nabla_\mu \delta v^i \nabla^\mu \delta v^j \; \tfrac{1}{ \mathcal{V}^{\text{\tiny{(0)}}}{}^2} \mathcal{K}^{\text{\tiny{(0)}}}_{i} \mathcal{K}^{\text{\tiny{(0)}}}_{j} \Big] *1 \;\; .
\end{equation}
Here we have used the identities \eqref{IN2} for the intersection numbers $\mathcal{K}^{\text{\tiny{(0)}}}_i, \, \mathcal{K}^{\text{\tiny{(0)}}}_{ij}, \, \mathcal{K}^{\text{\tiny{(0)}}}_{ijk}$, whose definitions are given in \eqref{IN1}. Moreover, note that from the definition \eqref{IN1} it is manifest that the volume $\mathcal{V}^{\text{\tiny{(0)}}}$ and the intersection numbers $\mathcal{K}^{\text{\tiny{(0)}}}_i,\mathcal{K}^{\text{\tiny{(0)}}}_{ij},\mathcal{K}^{\text{\tiny{(0)}}}_{ijk}$ are dimensionless, being expressed in terms of the length scale $\alpha'$. In these conventions the fields $\delta v^i$ are dimensionless as well.
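The structure of \eqref{Weyl2} follows from the standard four-dimensional Weyl rescaling identity: for $g_{\mu \nu} \to \Omega^{-1} g_{\mu \nu}$ one finds, up to a total derivative,
\begin{equation}
\Omega \, R \ast_4 1 \longrightarrow \Big( R - \tfrac{3}{2} \, \Omega^{-2} \, \nabla_\mu \Omega \nabla^\mu \Omega \Big) \ast_4 1 \;\; ,
\end{equation}
such that inserting the expansion \eqref{Weyl0n} for $\Omega$ and using the identities \eqref{IN2} reproduces the term quadratic in $\mathcal{K}^{\text{\tiny{(0)}}}_i$ in \eqref{Weyl2}.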
Due to the appearance of the four-derivative terms, the Weyl rescaling of the action is more involved. Using \eqref{Weyl1} and \eqref{Weyl2}, one may show that, up to total derivative contributions at order $\alpha'^3$, one finds
\begin{eqnarray}\label{Weyln}
\int_{M_4} e^{-\frac{3}{2} \phi} \bigg [ & \Big[ 4 \, R_{\mu \nu} R^{\mu \nu}- R^2 \Big] \Big( \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge J^{\text{\tiny{(0)}}} + \delta v^i \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge \omega_i \Big) \quad \quad \bigg ] \ast 1\quad \quad \quad \quad& \\
& \stackrel{\text{Weyl}}{\rightarrow}& \nonumber \\
\int_{M_4} e^{-\frac{3}{2} \phi} \bigg [ & + \;\Big[ 4 \, R_{\mu \nu} R^{\mu \nu} - R^2 \Big] \;\; \Big( \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge J^{\text{\tiny{(0)}}} + \delta v^i \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge \omega_i \Big) \quad \quad \quad \quad \quad \quad \quad \quad &\nonumber \\
& - \;\; \Big[4R_{\mu \nu} - R g_{\mu \nu} \Big] \nabla^\mu \nabla^\nu \delta v^i \;\; \frac{2}{\mathcal{V}^{\text{\tiny{(0)}}}} \mathcal{K}^{\text{\tiny{(0)}}}_{i} \Big( \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge J^{\text{\tiny{(0)}}} + \delta v^i \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge \omega_i \Big) \quad \quad &\nonumber \\
& \,- \;\; \Big[4R_{\mu \nu} - R g_{\mu \nu} \Big] \nabla^\mu \delta v^i \nabla^\nu \delta v^j \; \; \frac{1}{\mathcal{V}^{\text{\tiny{(0)}}}{}^2} \mathcal{K}^{\text{\tiny{(0)}}}_{j} \mathcal{K}^{\text{\tiny{(0)}}}_{i} \,\int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge J^{\text{\tiny{(0)}}} \quad \;\; \quad \quad \quad \quad\;\;\; \;\;& \nonumber \\
& \;\;+ \;\; \Big[ 4 \nabla_\mu\nabla_\nu \delta v^i \;\; \nabla^\nu \nabla^\mu \delta v^j - \nabla_\mu\nabla^\mu \delta v^i \;\; \nabla_\nu \nabla^\nu \delta v^j \Big] \;\; \frac{1}{\mathcal{V}^{\text{\tiny{(0)}}}{}^2} \mathcal{K}^{\text{\tiny{(0)}}}_{i} \mathcal{K}^{\text{\tiny{(0)}}}_{j} \int_{Y_3} c^{\text{\tiny{(0)}}}_2\wedge J^{\text{\tiny{(0)}}} & \bigg ]\ast_4 1 + \dots \nonumber
\end{eqnarray}
The ellipses denote terms in which more than two fields $\delta v^i$ carry derivatives, as well as terms with derivatives acting on the dilaton. An exhaustive derivation of the four-derivative dilaton action would require knowledge of the ten-dimensional higher-derivative dilaton action \cite{Gross:1986mw,Kehagias:1997jg,Kehagias:1997cq,Minasian:2015bxa,Policastro:2006vt,Policastro:2008hg}, which is not known in complete form and is hence beyond the scope of our study.
Before collecting the contributions arising from the Weyl rescaling \eqref{Weyl1}, \eqref{Weyl2} and combining them with the reduction results \eqref{classR} and \eqref{R4Red}, let us first lift the infinitesimal K\"ahler fluctuations around the background metric to full fields. We proceed by making the naive replacement $v^i = v^{\text{\tiny{(0)}}}{}^i + \delta v^i$, where $J^{\text{\tiny{(0)}}} = v^{\text{\tiny{(0)}}}{}^i \omega_{i}$ is the background K\"{a}hler form.
This substitution is straightforward when the couplings are given by topological quantities, as in the case of the intersection numbers, where one simply infers e.g.\,$\mathcal{K}^{\text{\tiny{(0)}}}_i \to \mathcal{K}_i$.
Analogously, one infers in the case of the topological higher-derivative coupling that
\begin{equation}
\int_{Y_3} c^{\text{\tiny{(0)}}} _2\wedge J^{\text{\tiny{(0)}}} + \delta v^i \int_{Y_3} c^{\text{\tiny{(0)}}} _2\wedge \omega_i \longrightarrow \int_{Y_3} c_2\wedge J \;\; ,
\end{equation}
where $J = v^i \omega_{i}$, and $c_2$ is constructed from the metric $g_{m\bar n} = - i v^i \omega_{i \, m \bar n}$. However, the uplift of the coupling $\int_{Y_3} \, Z^{\text{\tiny{(0)}}}_{m\bar m n \bar n} \omega_i{}^{\bar m m} \omega_j{}^{\bar n n}$ is less trivial, since it does not represent a topological quantity of the internal Calabi-Yau threefold.
We write the uplift of this coupling by naively replacing the background metric with $g_{m\bar n}$, which yields $\int_{Y_3} \, Z_{m\bar m n \bar n} \omega_i{}^{\bar m m} \omega_j{}^{\bar n n}$. However, a more refined analysis would be required to fully justify this choice.
Combining the uplift of the reduction results \eqref{classR}, \eqref{R4Red} with the terms that arose due to the Weyl rescaling \eqref{Weyl1}, \eqref{Weyl2}, and using the definition $\mathcal{G}_{\mu\nu} := R_{\mu \nu} - \frac{1}{4} g_{\mu \nu} R$, defined in close analogy to the Einstein tensor,\footnote{The Einstein tensor is given by $G_{\mu \nu} = R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu}$.} one finds
\ba\label{R4Red3f}
S_{\text{kin}} = \tfrac{1}{ (2 \pi)^4 \alpha'} \int_{M_4} & \bigg [ R + \nabla_\mu v^i \nabla^\mu v^j\tfrac{1}{ \mathcal{V}} \Big( \tfrac{1}{2}\mathcal{K}_{ij} - \tfrac{1}{\mathcal{V}}\mathcal{K}_i\mathcal{K}_j\Big)
+ \tfrac{ \zeta(3) \; \alpha'}{4} \, e^{- \tfrac{3}{2} \phi}\bigg ( \, \mathcal{G}_{\mu\nu} \mathcal{G}^{\mu \nu} \mathcal{Z} - \mathcal{G}_{\mu\nu}\, \nabla^\nu \nabla^\mu v^i \, \Big( \tfrac{2}{\mathcal{V}} \mathcal{K}_i \mathcal{Z} \Big) & \nonumber \\
&\;\;\;\;\;+ \mathcal{G}_{\mu\nu}\, \nabla^\mu v^i \nabla^\nu v^j \Big( \mathcal{Z}_{ij} - \tfrac{1}{\mathcal{V}^2} \mathcal{K}_i \mathcal{K}_j \mathcal{Z} \Big) - \tfrac{1}{2} \nabla_\mu\nabla^\mu v^i \;\; \nabla_\nu \nabla^\nu v^j \Big( \mathcal{Z}_{ij} + \tfrac{1}{2 \mathcal{V}^2}\mathcal{K}_i \mathcal{K}_j \mathcal{Z} \Big) & \nonumber \\
&\;\;\;\;\;\; + \nabla_\mu\nabla_\nu v^i \;\; \nabla^\mu \nabla^\nu v^j \; \Big( \mathcal{Z}_{ij} + \tfrac{1}{ \mathcal{V}^2}\ \mathcal{K}_i \mathcal{K}_j \mathcal{Z} \Big) \;\; \bigg ) \;\; \bigg ] \ast_4 1 \;\; .&
\ea
Here we have used the dimensionless quantities
\ba\label{Zdef}
&\mathcal{Z} = \tfrac{1}{2 \pi \alpha'}\int_{Y_3} c_2 \wedge J \; ,&
&\mathcal{Z}_{i}= \tfrac{1}{2 \pi \alpha'} \int_{Y_3} c_2 \wedge \omega_i \; ,&
&\mathcal{Z}_{ij} = -\tfrac{1}{4 \pi \alpha'} \int_{Y_3}\, Z_{m\bar m n \bar n} \omega_i{}^{\bar m m} \omega_j{}^{\bar n n} \ast 1 \;\; ,&
\ea
obeying the relations
\ba\label{relZs}
\mathcal{Z}_{i}= \mathcal{Z}_{ij}v^j = \mathcal{Z}_{ji} v^j \;\;\; \text{and} \;\;\; \mathcal{Z}=\mathcal{Z}_{i} v^i \;\; ,
\ea
which can be seen by using \eqref{Zrel}. Note that, as expected, $\frac{\delta}{\delta v^i} \mathcal Z = \mathcal Z_i$, but $\frac{\delta}{\delta v^j} \mathcal Z_i = 0$, so $\mathcal Z_{ij}$ cannot simply be obtained by taking derivatives w.r.t.\,$v^i$. Let us stress that we have neglected $\alpha'$-corrections to the two-derivative part of this action \cite{Bonetti:2016dqh}, since those do not interfere with the four-derivative couplings. Furthermore, note that due to the uplift to finite fields $v^i$, terms in \eqref{R4Red3f} may carry higher powers of the fields $v^i$, in contrast to the quadratic dependence on the infinitesimal K\"{a}hler deformations.
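The second relation in \eqref{relZs} is immediate from the linearity of \eqref{Zdef} in the K\"{a}hler form,
\begin{equation}
\mathcal{Z} = \tfrac{1}{2 \pi \alpha'} \int_{Y_3} c_2 \wedge J = v^i \, \tfrac{1}{2 \pi \alpha'} \int_{Y_3} c_2 \wedge \omega_i = \mathcal{Z}_i \, v^i \;\; ,
\end{equation}
while $\mathcal{Z}_i = \mathcal{Z}_{ij} v^j$ follows by contracting \eqref{higherder} with $J$ and using the trace relations \eqref{Zrel}.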
Let us close this section by remarking that the higher-derivative effective action \eqref{R4Red3f} can be rewritten using field redefinitions involving higher-derivative pieces themselves. The given presentation is thus a particular choice, which results naturally from the dimensional reduction. However, one may perform field redefinitions such as
\begin{equation}
g_{\mu \nu} \to g_{\mu \nu} + a R_{\mu \nu} + b R g_{\mu \nu} \;\; , \quad a,b \in \mathbb{R} \;\; .
\end{equation}
One concludes that the higher-derivative couplings in \eqref{R4Red3f} are presented in one particular frame of the fields $g_{\mu \nu}$ and $v^i$. A more sophisticated analysis of the supersymmetric completion at the four-derivative level would be required to select a canonical frame.
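To illustrate, varying the Einstein-Hilbert term under such a shift gives, up to total derivatives,
\begin{equation}
\delta \big( R \ast_4 1 \big) = - G^{\mu \nu} \, \delta g_{\mu \nu} \ast_4 1 = - \Big[ a \, R_{\mu \nu} R^{\mu \nu} - \big( \tfrac{a}{2} + b \big) R^2 \Big] \ast_4 1 \;\; ,
\end{equation}
such that a choice of $a, b$ of order $\alpha'^3$ shifts the coefficients of the $R_{\mu \nu} R^{\mu \nu}$ and $R^2$ couplings in \eqref{R4Red3f} without affecting the physics at this order.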
\section{The 4d, $\mathcal{N} =1$ action}\label{4dN1}
In this section we perform the orientifold projection on the effective action \eqref{R4Red3f}, which amounts to adding $O3/O7$-planes to the Calabi-Yau background \cite{Giddings:2001yu,Acharya:2002ag,Brunner:2003zm,Brunner:2004zd,Dabholkar:1996pc}. For consistency we are also required to consider $D3/D7$-branes in this setup. However, we will not discuss any $\alpha'^3$-corrections arising from these sources; let us emphasize, though, that a complete treatment would require such a refined analysis. Already at the classical level these sources would induce a warp factor and background fluxes, which we have chosen not to account for.
In section \ref{orientifold} we review the well-known properties of the orientifold projection on Calabi-Yau threefolds \cite{Grimm:2004uq,Andrianopoli:2001zh} and apply it to the four-derivative effective action derived in the previous section.
We then proceed in section \ref{LinMul} by expressing the truncated spectrum in terms of the real scalar fields of the linear multiplets of $4d$, $\mathcal{N}=1$ supergravity.
\subsection{ Orientifold projection}\label{orientifold}
In the following we consider $O3/O7$-planes in the Calabi-Yau threefold background, known as a Calabi-Yau orientifold and denoted in the following by $X$. The presence of orientifold planes truncates the effective theory from $\mathcal{N}=2$ to $\mathcal{N}=1$ supersymmetry.
Orientifold planes manifest themselves through an isometric, holomorphic involution $\sigma: X\to X$, i.e.\,$\sigma^2 = \mathrm{id}$ and $\sigma^\ast g = g$ on the internal Calabi-Yau space with metric $g$, such that
\begin{equation}\label{oProj}
\sigma^\ast J =J \;\;.
\end{equation}
Moreover, the presence of $O3/O7$ planes results in $\sigma^\ast \Omega = - \Omega$, where $\Omega$ is the holomorphic $(3,0)$-form. Furthermore, considering the action of $\Omega_p (-1)^{F_L}$ on the space-time fields, where $\Omega_p$ is the world-sheet parity and $F_L$ the space-time fermion
number of the left moving sector, one finds that
\begin{equation}\label{oProj2}
\Omega_p (-1)^{F_L} \phi =\phi \;\;\; \text{and} \;\;\; \Omega_p (-1)^{F_L} g=g \;\; .
\end{equation}
The cohomology groups $H^{p,q}$ naturally decompose in odd and even eigenspaces under the action of $\sigma^\ast$ as $H^{p,q} = H^{p,q}_{+} \oplus H^{p,q}_{-} $. Since the K\"{a}hler form is invariant under the orientifold projection \eqref{oProj}, only the K\"{a}hler deformations related to the even eigenspace $H^{1,1}_+$ remain in the spectrum, such that $J = v^a \omega_a ,\;\;a =1,\dots,h^{1,1}_+$.
Subjected to the orientifold projection, the reduction result \eqref{R4Red3f} has to be modified accordingly, and one straightforwardly arrives at
\ba\label{R4Red4}
S_{\text{kin}} = \tfrac{1}{ (2 \pi)^4 \alpha'} \int_{M_4} & \bigg [ R + \nabla_\mu v^a \nabla^\mu v^b\tfrac{1}{ \mathcal{V}} \Big( \tfrac{1}{2}\mathcal{K}_{ab} - \tfrac{1}{\mathcal{V}}\mathcal{K}_a\mathcal{K}_b\Big)
+\tfrac{ \zeta(3) \, \alpha'}{4} e^{- \tfrac{3}{2} \phi}\bigg ( \, \mathcal{G}_{\mu\nu} \mathcal{G}^{\mu \nu} \mathcal{Z} - \mathcal{G}_{\mu\nu}\, \nabla^\nu \nabla^\mu v^a \, \Big( \tfrac{2}{\mathcal{V}} \mathcal{K}_a \mathcal{Z} \Big) &\nonumber \\
&\;\;\;\;\;\; + \mathcal{G}_{\mu\nu}\, \nabla^\mu v^a \nabla^\nu v^b \Big( \mathcal{Z}_{ab} - \tfrac{1}{\mathcal{V}^2} \mathcal{K}_a \mathcal{K}_b \mathcal{Z} \Big) - \tfrac{1}{2} \nabla_\mu\nabla^\mu v^a \;\; \nabla_\nu \nabla^\nu v^b \Big( \mathcal{Z}_{ab} + \tfrac{1}{2 \mathcal{V}^2}\mathcal{K}_a \mathcal{K}_b \mathcal{Z} \Big) &\nonumber \\
&\;\;\;\;\;\; + \nabla_\mu\nabla_\nu v^a \;\; \nabla^\mu \nabla^\nu v^b \; \Big(\mathcal{Z}_{ab} + \tfrac{1}{ \mathcal{V}^2} \mathcal{K}_a \mathcal{K}_b \mathcal{Z}\Big) \;\; \bigg ) \;\;\bigg ] \ast_4 1 \;\; .&
\ea
Here we have used the properties of the orientifold projection to conclude that
\ba\label{ZorProj}
&\mathcal{ Z} = \tfrac{1}{2 \pi \alpha'} \int_{Y_3} c_2 \wedge J = \tfrac{1}{2 \pi \alpha'} \int_{X} c_2 \wedge J \;\; , \quad \quad \;\; \mathcal{Z}_{a}= \tfrac{1}{2 \pi \alpha'} \int_{Y_3} c_2 \wedge \omega_a =\tfrac{1}{2 \pi \alpha'} \int_{X} c_2 \wedge \omega_a & \\ \nonumber
&\mathcal{Z}_{ab } = - \tfrac{1}{4 \pi \alpha'} \int_{Y_3}\, Z_{m\bar m n \bar n} \omega_a{}^{\bar m m} \omega_b{}^{\bar n n} \ast 1 = -\tfrac{1}{4 \pi \alpha'}\int_{X}\, Z_{m\bar m n \bar n} \omega_a{}^{\bar m m} \omega_b{}^{\bar n n} \ast 1 \;\; , &
\ea
obeying the analogous relations to \eqref{relZs} given by
\ba
\mathcal{Z}_{a}= \mathcal{Z}_{ab}v^b = \mathcal{Z}_{ba} v^b \;\;\; \text{and} \;\;\; \mathcal{Z}=\mathcal{Z}_{a} v^a \;\; .
\ea
\subsection{4d, $\mathcal{N}=1$ linear multiplets}\label{LinMul}
The canonical form of the $4d, \,\mathcal{N}=1$ action for the real scalars $L^a$ in the linear multiplets takes the form
\begin{equation}
S = \tfrac{1}{ (2 \pi)^4 \alpha'} \int_{M_4} \Big[ R \; + \; \tfrac{1}{2} G_{ab} \nabla_\mu L^a \nabla^\mu L^b \Big] \ast 1\; ,
\end{equation}
with couplings $G_{ab}$ that can be inferred from a kinematic potential $\tilde K$ as $G_{ab} = \frac{\delta}{\delta L^a} \frac{\delta}{\delta L^b} \tilde K$.
The identification of the K\"{a}hler moduli fields $v^a$ with the real scalars in the linear multiplets of the $4d$, $\mathcal{N}=1$ supergravity theory is, at leading order in $\alpha'$, given by
\begin{equation} \label{Lredef}
L^a = \frac{v^a}{\mathcal{V}} \;\; .
\end{equation}
Eventual $\alpha'$-modifications of \eqref{Lredef} arising from the two-derivative analysis at this order in $\alpha'$ \cite{Bonetti:2016dqh} do not alter the four-derivative couplings at the relevant order in $\alpha'$; it thus suffices to express the action in terms of \eqref{Lredef}.
To determine all relevant four-derivative couplings of the $L^a$, one requires knowledge of the couplings cubic and quartic in the infinitesimal fluctuations $\delta v^a$ arising from the $\mathcal{R}^4$-sector. This is, however, beyond the scope of this work, and we have thus also omitted such terms arising due to the Weyl rescaling in \eqref{Weyln}. Nevertheless, one may show that the couplings $\mathcal{G}_{\mu \nu}\, \nabla^\nu \nabla^\mu v^a$ and $\mathcal{G}_{\mu \nu}\, \nabla^\mu v^a \nabla^\nu v^b$ can be expressed in terms of the fields $L^a$ of the linear multiplets without making use of the neglected sector. This does not apply to the $\nabla_\mu\nabla^\mu v^a \, \nabla_\nu \nabla^\nu v^b$ and $\nabla_\mu\nabla_\nu v^a \, \nabla^\mu \nabla^\nu v^b$ terms, for which knowledge of the other four-derivative couplings is crucial. Hence we will not consider the latter in the following.
Expressing \eqref{R4Red4} in terms of the linear multiplets one finds
\ba\label{R4Red5}
S_{\text{kin}} = \tfrac{1}{ (2 \pi)^4 \alpha'} \int_{M_4} \bigg [ & R \; + \tfrac{1}{2} \nabla_\mu L^a \nabla^\mu L^b \;\; \mathcal{V} \Big( \mathcal{K}_{ab} - \tfrac{1}{\mathcal{V}}\mathcal{K}_a\mathcal{K}_b \Big)+ \tfrac{ \zeta(3) \alpha'}{4} e^{- \frac{3}{2} \phi} \bigg ( \ \, \mathcal{G}_{\mu\nu} \mathcal{G}^{\mu \nu} \mathcal{Z} + \mathcal{G}_{\mu\nu}\, \nabla^\nu \nabla^\mu L^a \, \mathcal{K}_a \mathcal{Z} \nonumber
& \\
& \;\;\; \;\;+ \mathcal{G}_{\mu\nu}\, \nabla^\mu L^a \nabla^\nu L^b \Big( \mathcal{V}^2 \mathcal{Z}_{ab} + \tfrac{5}{2} \mathcal{K}_a \mathcal{K}_b \mathcal{Z} - 3 \mathcal{V} \mathcal{K}_{ab} \mathcal Z- \mathcal{V}\mathcal{K}_a \mathcal{Z}_b \Big) \;\; \bigg ) \;\; \bigg ] \ast1 \ . &
\ea
Classically one thus recovers the K\"{a}hler metric on the moduli space,
\begin{equation}
G_{ab} = \mathcal{V} \int_{X} \omega_a \wedge \ast \omega_b = \mathcal{V} \Big( \mathcal{K}_{ab} - \tfrac{1}{\mathcal{V}}\mathcal{K}_a\mathcal{K}_b \Big)\;\; ,
\end{equation}
arising from the kinematic potential $\tilde K = -2 \log \mathcal{V} = \log \big( \mathcal{K}_{abc}L^aL^bL^c \big)$, where the second equality holds up to an irrelevant additive constant.
The resulting novel couplings at order $\alpha'^3$ couple derivatives of the real scalars $L^a$ to the tensor $\mathcal{G}_{\mu\nu}$, which is composed of the Ricci tensor and the Ricci scalar. The higher-derivative coupling $\mathcal{G}_{\mu\nu} \mathcal{G}^{\mu \nu} \mathcal{Z}$ has been analyzed in \cite{Alvarez-Gaume:2015rwa} and leads to a propagating massive spin-2 ghost mode. However, let us note that the appearance of ghost modes in effective field theories is not an immediate issue, since it results from truncating the ghost-free infinite derivative series arising from string theory.
Let us next comment on the term $\mathcal{G}_{\mu\nu}\, \nabla^\mu L^a \nabla^\nu L^b $.
Firstly, note that this higher-derivative coupling does not correct the propagator of $L^a$, since it vanishes in the Minkowski background. Thus it does not give rise to any ghost modes for $L^a$.
The analogous case of the Einstein tensor coupled to a scalar field is well studied and relevant in the context of inflation. It was observed that such a coupling of a scalar field to curvature terms favors slow-roll inflation; in other words, even rather steep potentials can support slow roll.
It is expected that the coupling in \eqref{R4Red5} could be used to implement such scenarios in the context of K\"{a}hler moduli inflation. Driving slow-roll inflation by a K\"{a}hler modulus is a long-standing approach in string theory \cite{PhysRevD.34.3069,PhysRevD.52.3548,Conlon:2005jm,Cicoli:2008gp,Burgess:2016owb}.
It would be interesting to analyze the consequences of the derived novel couplings to such inflationary models and their relevance due to their $\alpha'^3$-suppression \cite{Cicoli:2015wja,Broy:2015zba,Cicoli:2016chb}.
Finally, let us discuss the coupling $ \mathcal{G}_{\mu\nu}\, \nabla^\nu \nabla^\mu L^a $. As in the case above, it does not correct the propagator of $L^a$. In contrast to the previous case, these couplings are poorly studied in the inflation literature, and hence a study of their embedding in string inflation models is desirable.
In both cases the coefficients depend on topological quantities $\mathcal Z , \mathcal Z_a$, see \eqref{ZorProj}, of the internal Calabi-Yau orientifold, which are trivially related to the analogous quantities \eqref{Zdef} of the Calabi-Yau threefold
and are thus computable with the methods of algebraic geometry.
However, the semi-topological coupling $\mathcal Z_{ab}$ requires knowledge of the Calabi-Yau metric; although derivable in principle, it is beyond the capability of currently available techniques.
\section{Conclusions}
Considering purely gravitational $\mathcal{R}^4$-corrections at order $\alpha'^3$ to the leading-order IIB supergravity action in ten dimensions, we performed a dimensional reduction to four dimensions on a Calabi-Yau threefold. Analyzing the reduction result at the four-derivative level and at quadratic order in the infinitesimal K\"{a}hler deformations,
we derived novel couplings of the K\"{a}hler moduli fields and gravity. We argued that these are complete in the sense that the couplings cannot be altered by other sectors of the IIB action at order $\alpha'^3$, or by modifications of the background.
We then performed the orientifold projection to derive a minimal supergravity theory in four dimensions. Let us stress that for a complete analysis one needs to derive the reduction result up to quartic order in the infinitesimal K\"{a}hler deformations. Only then is one able to draw definite conclusions for all of the resulting four-derivative couplings involving the K\"{a}hler moduli fields and gravity. This is an interesting question to be answered and the obvious next step in this research program. Let us conclude by emphasizing that a detailed analysis of the novel couplings in the context of K\"{a}hler moduli inflation in IIB orientifold setups is desirable.
\vspace*{.5cm}
\noindent
\subsection*{Acknowledgments}
I would like to thank Federico Bonetti, Thomas Grimm, Dieter Luest, Taizan Watari and Itamar Yakov for helpful discussions
and comments. In particular, I would like to express my gratitude to the theoretical high-energy physics groups of the Walter Burke Institute at Caltech, the Center for the Fundamental Laws of Nature at Harvard, and the Max Planck Institute for Physics in Munich, for their hospitality during my visits. This work was supported by the Grant-in-Aid for Scientific Research on Innovative Areas 2303, MEXT, and the WPI program of Japan.
\begin{appendix}
\vspace{2cm}
\noindent {\bf \LARGE Appendix}
\addcontentsline{toc}{section}{Appendix}
\section{\bf Conventions, definitions, and identities} \label{Conv_appendix}
In this work we denote the ten-dimensional space indices by capital Latin letters $M,N = 0,\ldots,9$,
the external ones by $\mu,\nu = 0,1,2,3$, and the internal complex ones by $m,n,p=1,2,3$ and $\bar m, \bar n,\bar p =1, 2,3$. The metric signature of the ten-dimensional space is $(-,+,\dots,+)$.
Furthermore, the convention for the totally
anti-symmetric tensor in Lorentzian space in an orthonormal frame is $\epsilon_{012\ldots 9} = \epsilon_{0123}=+1$.
The epsilon tensor in $d$ dimensions then satisfies
\ba
\epsilon^{R_1\cdots R_p N_{1 }\ldots N_{d-p}}\epsilon_{R_1 \ldots R_p M_{1} \ldots M_{d-p}} &= (-1)^s (d-p)! p!
\delta^{N_{1}}{}_{[M_{1}} \ldots \delta^{N_{d-p}}{}_{M_{d-p}]} \,,
\ea
where $s=0$ if the metric has Riemannian signature and $s=1$ for a Lorentzian metric.
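This contraction identity can be verified directly in low dimensions. The sketch below (our own illustration, not part of the paper) checks the Euclidean case $s=0$, $d=3$, $p=1$, where the identity reduces to $\epsilon^{R N_1 N_2}\epsilon_{R M_1 M_2} = \delta^{N_1}{}_{M_1}\delta^{N_2}{}_{M_2} - \delta^{N_1}{}_{M_2}\delta^{N_2}{}_{M_1}$:

```python
import itertools

# Check (ours) of the epsilon contraction identity for s = 0, d = 3, p = 1.
def eps(i, j, k):
    """Levi-Civita symbol in three Euclidean dimensions."""
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1 if i == j else 0

ok = True
for n1, n2, m1, m2 in itertools.product(range(3), repeat=4):
    # lhs: contraction over the single summed index R
    lhs = sum(eps(r, n1, n2) * eps(r, m1, m2) for r in range(3))
    # rhs: (d-p)! p! times the antisymmetrized deltas, expanded for p = 1
    rhs = delta(n1, m1) * delta(n2, m2) - delta(n1, m2) * delta(n2, m1)
    ok = ok and (lhs == rhs)
print(ok)  # True
```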
We adopt the following conventions for the Christoffel symbols and Riemann tensor
\ba
\Gamma^R{}_{M N} & = \fr12 g^{RS} ( \partial_{M} g_{N S} + \partial_N g_{M S} - \partial_S g_{M N} ) \, , &
R_{M N} & = R^R{}_{M R N} \, , \nonumber\\
R^{M}{}_{N R S} &= \partial_R \Gamma^M{}_{S N} - \partial_{S} \Gamma^M{}_{R N} + \Gamma^M{}_{R T} \Gamma^T{}_{S N} - \Gamma^M{}_{ST} \Gamma^T{}_{R N} \,, &
R & = R_{M N} g^{M N} \, ,
\ea
with equivalent definitions on the internal and external spaces. Written in components, the first and second Bianchi identities are
\begin{eqnarray}\label{Bainchiid}
{R^O}_{PMN} + {R^O}_{MNP}+{R^O}_{NPM} & = & 0 \nonumber\\
(\nabla_L R)^O{}_{PMN} + (\nabla_M R)^O{}_{PNL} + (\nabla_N R)^O{}_{PLM} & = & 0 \;\;\; .
\end{eqnarray}
Let us specify in more detail our conventions regarding complex coordinates
in the internal space.
For a
complex Hermitian manifold $M$ with complex dimension $n$
the complex coordinates $z^1 , \dots, z^n$ and
the underlying real coordinates $\xi^1, \dots , \xi^{2n}$ are related by
\begin{equation}
( z^1,...,z^n ) = \left( \frac{1}{\sqrt{2}}(\xi^1 + i \xi^2), \dots , \frac{1}{\sqrt{2}}(\xi^{2n-1} + i \xi^{2n}) \right) \,.
\end{equation}
Using these conventions one finds
\begin{equation}
\sqrt{g} d\xi^1 \wedge ... \wedge d\xi^{2n} = \sqrt{g} (-1)^{\frac{(n-1)n}{2}} i^n dz^1\wedge...\wedge dz^n
\wedge d\bar z^1 \wedge...\wedge d\bar z^n = \frac{1}{n!} J^n \,,
\end{equation}
with $g$ the determinant of the metric in real coordinates and $\sqrt{\det g_{m n}} = \det g_{m \bar n} $. The K\"{a}hler form is given by
\begin{equation}
\label{eq:Kform}
J = i g_{m\bar{n} } dz^m \wedge d\bar z^{\bar{n} } \, .
\end{equation}
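As an elementary consistency check of these sign conventions (a check of ours, not part of the original discussion), take $n=1$ with a flat metric $g_{1\bar 1}=1$. Then
\ba
dz \wedge d\bar z &= \tfrac{1}{2}\, (d\xi^1 + i\, d\xi^2)\wedge (d\xi^1 - i\, d\xi^2) = -i\, d\xi^1 \wedge d\xi^2\,, \nonumber \\
i\, dz \wedge d\bar z &= d\xi^1 \wedge d\xi^2 = J\,,
\ea
in agreement with the prefactor $(-1)^{(n-1)n/2}\, i^n = i$ for $n=1$.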
Let $\omega_{p,q}$ be a $(p,q)$-form, then its Hodge dual is the $(n-q,n-p)$ form
\begin{align} \label{eq:pgform}
\ast \omega_{p,q} & = \frac{ (-1)^{\frac{n(n-1) }{2} } \, i^n \, (-1)^{pn}} {p!q!(n-p)!(n-q)!}
\omega_{m_1 \dots m_p \bar{n} _1 \dots \bar{n} _q}
\epsilon^{m_1 \dots m_p}_{\phantom{m_1 \dots m_p} \bar r_1 \dots \bar r_{n-p}} \nonumber \\
& \quad \times \epsilon^{\bar{n} _1 \dots \bar{n} _q}_{\phantom{\bar \beta_1 \dots \bar{n} _q} s_1 \dots s_{n-q}}
dz^{ s_1}\wedge \dots \wedge dz^{ s_{n-q}} \wedge d \bar z^{\bar r_1} \wedge \dots \wedge d \bar z^{\bar r_{n-p}}.
\end{align}
Finally, let us record our conventions regarding Chern forms.
To begin with,
we define the curvature two-form for Hermitian manifolds to be
\begin{equation}\label{curvtwo}
{\mathcal{R}^m}_n = {{R^m}_n }_{ r \bar s} dz^ r \wedge d\bar{z}^\bar{s}\;\;,
\end{equation}
and we set
\begin{eqnarray} \label{defR3}
\text{Tr}{\mathcal{R}}\;\;& =& {{R^ m }_ m }_{ r \bar{s}}dz^ r \wedge d\bar{z}^{\bar{s}} \;,\nonumber \\
\text{Tr}{\mathcal{R}^2} &= & {{R^{ m }}_{n }}_{ r \bar{s}} {{R^{n }}_{ m }}_{ r_1 \bar{s}_1}dz^{ r}
\wedge d\bar{z}^{\bar{s}}\wedge dz^{ r_1} \wedge d\bar{z}^{\bar{s}_1} \;,\nonumber \\
\text{Tr}{\mathcal{R}^3} &=& {{R^{ m }}_{n }}_{ r \bar{s}} R^{n }{}_{n _1 r_1 \bar{s}_1}
{{R^{n _1}}_{ m }}_{ r_2 \bar{s}_2}dz^{ r} \wedge d\bar{z}^{\bar{s}}\wedge dz^{ r_1} \wedge d\bar{z}^{\bar{s}_1}\wedge dz^{ r_2} \wedge d\bar{z}^{\bar{s}_2} \; .
\end{eqnarray}
The Chern forms can then be expressed in terms of the curvature two-form as
\begin{align} \label{Chernclasses}
c_0 &= 1\nonumber \;, \\
c_1 &= \frac{1}{2\pi} i \text{Tr}{ \mathcal{R}} \nonumber\;, \\
c_2 &= \frac{1}{(2\pi)^2} \frac{1}{2}\left( \text{Tr}{\mathcal{R}^2} - (\text{Tr}{\mathcal{R}})^2 \right)\;, \nonumber\\
c_3 &= \frac{1}{3}c_1c_2 + \frac{1}{(2\pi)^2} \frac{1}{3} c_1 \wedge \text{Tr} \mathcal{R}^2 -
\frac{1}{(2\pi)^3}\frac{i}{3} \text{Tr} \mathcal{R}^3 \, .
\end{align}
The Chern forms of an $n$-dimensional Calabi-Yau manifold $Y_n$ reduce to
\begin{equation}\label{chern34}
c_2 (Y_{n \geq 2}) = \frac{1}{(2\pi)^2} \frac{1}{2} \text{Tr}{\mathcal{R}^2} \;\; \text{and} \;\;c_3 (Y_{n \geq 3}) = - \frac{1}{(2\pi)^3} \frac{i}{3} \text{Tr}{\mathcal{R}^3}\, .
\end{equation}
The six-dimensional Euler density is given by
\begin{equation} \label{Euler}
Q = -\tfrac {1}{3} \left( R^{\text{\tiny{(0)}}}_{m_1}{}^{m_2}{}_{n_1}{}^{n_2}
R^{\text{\tiny{(0)}}}_{m_2}{}^{m_1} {}_{n_2} {}^{n_3}
R^{\text{\tiny{(0)}}}_{m_3} {}^{n_1} {}_{n_3} {}^{m_3} + R^{\text{\tiny{(0)}}}_{m_1}{}^{m_2}{}_{n_1}{}^{n_2}
R^{\text{\tiny{(0)}}}_{m_2}{}^{m_3} {}_{n_2} {}^{n_3}
R^{\text{\tiny{(0)}}}_{m_3} {}^{m_1} {}_{n_3} {}^{n_1} \right) \ .
\end{equation}
It satisfies
\begin{equation} \label{Q_integral}
Q = (2\pi)^3 \, *_6 c_3 \ , \qquad
\int_{Y_3} Q *_6 1 = (2\pi)^3 \chi \ ,
\end{equation}
where $\chi$ is the Euler characteristic of the internal Calabi-Yau manifold.
Let us next define the intersection numbers\ba\label{IN1}
\mathcal{K}_{i j k} &= \tfrac{1}{(2 \pi \alpha')^3}\int_{Y_3} \omega_i \wedge \omega_j \wedge \omega_k \, , \quad \quad \;\;\; \quad \quad \quad
\mathcal{K}_{i j } = \tfrac{1}{(2 \pi \alpha')^3}\int_{Y_3} \omega_i \wedge \omega_j \wedge J = \mathcal{K}_{i j k } v^k \,, \nonumber \\
\mathcal{K}_{i } \;\; & = \tfrac{1}{2 ( 2 \pi \alpha')^3}\int_{Y_3} \omega_i \wedge J\wedge J = \fr12 \mathcal{K}_{i j k } v^j v^k \, , \quad
\mathcal{V} \;= \tfrac{1}{3! ( 2 \pi \alpha')^3}\int_{Y_3} J \wedge J\wedge J = \fr1{3!} \mathcal{K}_{i j k } v^i v^j v^k \, ,
\ea
where $\{ \omega_i \}$ are harmonic $(1,1)$-forms w.r.t.~the Calabi-Yau metric $g_{m \bar n}$.
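To illustrate these definitions in the simplest setting (a toy example of ours), consider a one-parameter Calabi-Yau threefold with a single triple intersection number $\mathcal{K}_{111}$ and K\"{a}hler form $J = v^1 \omega_1$. The definitions \eqref{IN1} then collapse to
\ba
\mathcal{K}_{11} &= \mathcal{K}_{111}\, v^1\,, & \mathcal{K}_{1} &= \tfrac{1}{2}\, \mathcal{K}_{111}\, (v^1)^2\,, & \mathcal{V} &= \tfrac{1}{3!}\, \mathcal{K}_{111}\, (v^1)^3\,,
\ea
so that, for instance, $\mathcal{K}_1^2 = \tfrac{3}{2}\, \mathcal{V}\, \mathcal{K}_{11}$ in this case.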
Let us state the useful identities
\ba \label{IN2}
\omega_{im}{}^m & = i \frac{\mathcal{K}_i}{\mathcal{V}} \ , &
\omega_{im \bar n} \omega_j{}^{\bar nm} \, *_61 & =
\omega_i \wedge \omega_j \wedge J- \frac{1}{\mathcal{V}{}^2} \mathcal{K}_i \mathcal{K}_j \,
*_61 \ .
\ea
We now present the formulae for a Weyl rescaling $g_{\mu \nu} \to \Omega g_{\mu \nu}$ of the four-derivative terms $R_{\mu \nu} R^{\mu \nu}$ and $R^2$. These expressions can be derived straightforwardly and are given by
\begin{eqnarray}\label{Weyl1}
R^2 \stackrel{\text{Weyl}}{\rightarrow} & \frac{1}{\Omega^2} R^2 \,\, - \,\, 6 \; R \frac{1}{\Omega^3} ( \nabla_{\mu} \nabla^\mu \Omega )
\,\, + \,\, 3\frac{1}{\Omega^4} R \,\, ( \nabla_{\mu} \Omega )(\nabla^{\mu}\Omega)
\,\, + \,\, 9 \frac{1}{\Omega^5} (\nabla_{\mu} \nabla^\mu \Omega ) ( \nabla_{\nu} \nabla^\nu \Omega ) &\nonumber \\
& \,\, - \,\, 9 \frac{1}{\Omega^5} ( \nabla_{\mu} \Omega)( \nabla^{\mu}\Omega ) ( \nabla_{\nu} \nabla^\nu \Omega )
\,\, + \,\, 9 \frac{1}{4 \Omega^6}( \nabla_{\mu} \Omega)( \nabla^{\mu}\Omega) ( \nabla_{\nu} \Omega)( \nabla^{\nu}\Omega ) \;\; ,& \;\;
\end{eqnarray}
and
\begin{eqnarray}\label{Weyl2}
R_{\mu \nu} R^{\mu \nu} \stackrel{\text{Weyl}}{\rightarrow} & \frac{1}{\Omega^2} R_{\mu \nu} R^{\mu \nu} \,\, - \,\, \; R \frac{1}{\Omega^3} ( \nabla_{\mu} \nabla^\mu \Omega )
\,\, + \,\, 3 \frac{1}{\Omega^4} R_{\mu \nu} \,\, \nabla
^{\mu} \Omega \nabla^{\nu}\Omega - \,\, 2 \frac{1}{\Omega^4} R_{\mu \nu} \,\, \nabla
^{\mu} \nabla^{\nu}\Omega &\nonumber \\
& \,\, + \,\, 2 \frac{1}{\Omega^5} (\nabla_{\mu} \nabla^\mu \Omega ) ( \nabla_{\nu} \nabla^\nu \Omega )
\,\, + \,\, \frac{1}{\Omega^5} (\nabla^{\mu} \nabla^\nu \Omega ) ( \nabla_{\mu} \nabla_\nu \Omega )
\,\, - \,\, \frac{3}{2} \frac{1}{\Omega^5} ( \nabla_{\mu} \nabla^{\mu}\Omega ) ( \nabla_{\nu}\Omega )( \nabla^\nu \Omega )& \nonumber \\
&
\,\, - \,\, 3 \frac{1}{\Omega^5} ( \nabla^{\mu} \nabla^{\nu}\Omega ) ( \nabla_{\nu}\Omega )( \nabla_\mu \Omega )
\,\, + \,\, 9 \frac{1}{4 \Omega^6}( \nabla_{\mu} \Omega ) (\nabla^{\mu}\Omega) ( \nabla_{\nu} \Omega)( \nabla^{\nu}\Omega )\;\; .&
\end{eqnarray}
\end{appendix}
\noindent
The recent experimental discovery of the $\Theta (1540)$ as a narrow $KN$ bound state in $\gamma$-nucleon and
$\gamma$-nucleus processes, and at $e^+e^-$ and hadronic machines \cite{NICOLAI}\footnote{However, this narrow state has
not been confirmed by several experiments with high statistics and very good particle identification, using either
$e^+e^-$
\cite{LISTA} or hadronic initial states.} has stimulated renewed theoretical interest in hadron spectroscopy
\cite{JAFFE,THEORY}. In this paper, we critically re-analyze the determination of the isoscalar ($I=0$)
$\Theta$ pentaquark mass from the exponential Laplace sum rules (LSR) \cite{oka.04,zhu.03,makus.04} within the diquark
scenario \cite{JAFFE}, and propose a new analysis using Finite Energy Sum Rules (FESR).
\vspace*{-0.5cm}
\section{THE PENTAQUARK CURRENTS}
\noindent
The basic ingredient in resonance mass determinations from QCD spectral sum rules \cite{SVZ,SNB}, as well as from
lattice QCD calculations, is the choice of the interpolating
currents describing the resonance states. Contrary to ordinary mesons, where the form of the current follows
from first principles, there are different choices of pentaquark currents in the literature. The following analysis
also postulates the existence of a strongly bound pentaquark resonance.
We shall list below some possible operators describing the isoscalar
$I=0$ and
$J=1/2$ channel\footnote{The isovector
$I=1$ current for $S$-wave
resonances has been proposed by \cite{mat.02}. We have also checked (see also \cite{NIELSEN}) that the tensor diquark current used in
\cite{IOFFE} is an isovector $I=1$ instead of an isoscalar $I=0$ as confirmed by
the authors in the revised version of their paper. In the following, we shall neglect the isospin breaking discussed in
\cite{VENEZIA}, which would come from higher order diagrams in our analysis.}, which would correspond to the experimental
candidate
$\Theta(1540)$ \cite{NICOLAI}.
\noindent
Defining the pseudoscalar ({\it ps}) and scalar ({\it s}) diquark interpolating fields as:
\begin{eqnarray}
\label{RD1.pseudo}
Q^{ps}_{ab}(x) &=& \left[u_{a}^{T}(x)Cd_{b}(x)\right]~,\nonumber\\
Q^{s}_{ab}(x) &=& \left[u_{a}^{T}(x)C\gamma_{5}d_{b}(x)\right]~,
\end{eqnarray}
where $a,~b,~c$ are colour indices and $C$ denotes the charge conjugation matrix,
the lowest dimension current built by two diquarks and one anti-quark describing the $\Theta$ as
an $I=0,~J^P=1/2^+$ $S$-wave resonance is \cite{oka.04} (see also \cite{SASAKI})\footnote{A negative-parity state can be obtained
by multiplying by $\gamma_5$ the diquark operator.}:
\begin{equation}
\label{RD1T.oka}
\eta^{\Theta}_{\mbox{{\small \cite{oka.04}}}} = \epsilon^{abc} \epsilon^{def} \epsilon^{cfg} Q_{ab}^{ps}
Q_{de}^{s} C \bar{s}_{g}^{T}~,
\end{equation}
and the one with one diquark and three quarks is~\cite{zhu.03}:
\begin{equation}
\label{RD1.zhu}
\eta^{\Theta}_{\mbox{{\small \cite{zhu.03}}}} = \frac{1}{\sqrt{2}}\epsilon^{abc}
Q^{s}_{ab} \left\{ u_{e} \bar{s}_e i \gamma_5 d_c - (u \leftrightarrow d) \right\}~.
\end{equation}
This latter choice can be interesting if the instanton repulsive-force arguments \cite{SHURYAK} against the existence of a pseudoscalar
diquark bound state apply. Alternatively, a description of the $\Theta(1.54)$ as a $I=0$, $J^P=1/2^+$ $P$-wave resonance has been
proposed by
\cite{JAFFE} and used by
\cite{makus.04} in the sum rule analysis:
\begin{eqnarray}
\label{RD1.eidcur}
\eta^\Theta_{\mbox{{\small \cite{makus.04}}}} = \left( \epsilon^{abd} \delta^{ce} + \epsilon^{abc} \delta^{de}
\right)
[Q^{s}_{ab}(D^{\mu}Q^{s}_{cd})-\nonumber\\
(D^{\mu}Q^{s}_{ab})Q^{s}_{cd}] \gamma_{5} \gamma_{\mu} C \bar{s}_{e}^{T}~.
\end{eqnarray}
We have generalized this current by considering its mixing with the following one of the same dimension and quantum numbers:
\begin{equation}\label{eta.new}
\eta^\Theta_{{{\small new}}} = \epsilon^{abc} \epsilon^{def} \epsilon^{cfg}
Q^{ps}_{ab}Q^{s}_{de}\gamma_\mu (D^{\mu} C \bar{s}_{g}^{T})~.
\end{equation}
\section{THE QCD SPECTRAL FUNCTIONS}
\noindent
For the QCD spectral sum rules analysis,
we shall work here with the
two-point correlators:
\begin{eqnarray}
\Pi^H(q^2) &\equiv& i \int d^4x ~e^{iqx} \
\langle 0\vert {\cal T}
\eta^H(x)
\bar{\eta}^H(0) \vert 0 \rangle ,
\end{eqnarray}
built from the previous $\eta$ currents. It possesses the Lorentz decomposition:
\begin{equation}
\label{eq: invariant}
\Pi^H(q^2)= \hat q A^H(q^2)+ B^H(q^2)~.
\end{equation}
$\bullet$ The QCD expression of the correlators associated with the different choices of currents
is known in the literature \cite{oka.04,zhu.03,makus.04} to leading order in the PT series, including
the first three non-perturbative condensate contributions ($D\leq 5,6$). We have checked the QCD expressions given there and agree
with their results. However, at that approximation, we have added some contributions missing in \cite{zhu.03}.\\
$\bullet$ We have not
included in the OPE the contribution of the $D=2$ tachyonic gluon mass induced by the resummation of the PT series \cite{ZAK},
bearing in mind that this effect is negligible to the accuracy at which we are working, as illustrated in some examples
\cite{TACH}. \\
$\bullet$ We have included the $D=7,9$ contributions into the QCD expression of the spectral function associated to the current
in Eq. (\ref{RD1T.oka}), which we shall extensively study as
a prototype example in this paper. In doing this calculation, we have worked in the chiral limit $m_s=0$, such that, for consistency, we
shall use the $SU(3)$ symmetric value $\langle\bar ss\rangle=\langle\bar dd\rangle$ for these contributions. In this particular example,
we have checked that the contribution of the four-quark condensate vanishes to leading order. In our preliminary results,
we also found that its radiative correction, though not identically zero, gives a negligible contribution.
We have also neglected the
contributions of the three-gluon condensate of the type g$\langle GGG\rangle\langle\bar ss\rangle$ assuming that the theorem in \cite{MALLIK} for
the light quark bilinear operators continues to hold for the diquark correlators \footnote{We plan to check explicitly this
result in a future publication.}, which factorize during the evaluation of the QCD expression. \\
$\bullet$ We have evaluated the new contributions associated to the current
$\eta_{new}$ in Eq. (\ref{eta.new}), where we found that, to leading order in $\alpha_s$ and in the
chiral limit
$m_q\rightarrow 0$, the contribution to the correlator vanishes. This result justifies a posteriori the
{\it unique choice} of operator for the $P$-wave state used in \cite{JAFFE,makus.04}.
\section{THE LAPLACE SUM RULES (LSR)}
\noindent
We shall be concerned with the Laplace transform sum rules:
\begin{eqnarray}
\label{eq:lapl}
{\cal L}^H_{A/B}(\tau)
&\equiv& \int_{t_\leq}^{\infty} {dt}~\mbox{e}^{-t\tau}
~\frac{1}{\pi}~\mbox{Im}{A^H/B^H}(t),\nonumber \\
{\cal R}^H_{A/B}(\tau) &\equiv& -\frac{d}{d\tau} \log {{\cal L}^H_{A/B}(\tau)},
\end{eqnarray}
where $t_\leq$ is the hadronic threshold and $H$ denotes the corresponding hadron. The latter sum rule,
or a slight modification of it, is useful as it is equal to the
resonance mass squared in
the usual duality-ansatz parametrization of the spectral function:
\begin{eqnarray}
\frac{1}{\pi}\mbox{ Im}A^H/B^H(t)\simeq \lambda^2_H~\big(\mbox{resp.}~\lambda^2_H M_H\big)\,\delta(t-M^2_H)
\ + \ \nonumber\\
``\mbox{QCD continuum}" \Theta (t-t_c),
\end{eqnarray}
where the ``QCD continuum'' comes from the discontinuity of the QCD
diagrams, which is expected to give a good smearing of the
different radial excitations. $\lambda_H$ is
the residue of the hadron $H$;
$t_c$ is the QCD continuum threshold, which is, like the
sum rule variable $\tau$, an (a priori) arbitrary
parameter. In this
paper, we shall look for the
$\tau$- and $t_c$-stability criteria for extracting the optimal
results.
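The meaning of ${\cal R}^H$ is transparent in the narrow-resonance limit (a standard one-line check): retaining only the resonance term of the duality ansatz in Eq. (\ref{eq:lapl}) gives
\begin{equation}
{\cal L}^H(\tau) = \lambda_H^2\, e^{-M_H^2\tau}
\quad \Longrightarrow \quad
{\cal R}^H(\tau) = M_H^2\,,
\end{equation}
independently of $\tau$. Any residual $\tau$-dependence of ${\cal R}^H$ in the full sum rule therefore measures the contamination from the QCD continuum and from the truncation of the OPE.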
For illustrating our analysis, we
give below the checked and completed LSR of the $S$-wave current in Eq. (\ref{RD1T.oka}) including the new $D=7,9$
high-dimension condensates in $B$:
\begin{eqnarray}
\label{eq: lsra}
&&{\cal L}^\Theta_{A_{\cite{oka.04}}}(\tau)=\frac{\tau^{-6}E_5}{860160\pi^{8}}+
\frac{\tau^{-4}E_3}{30720\pi^{6}}m_s\langle\bar{s}s\rangle+\nonumber\\
&&\frac{\tau^{-4}E_3}{122880\pi^{7}}\langle{\alpha_s}G^2\rangle
-\frac{\tau^{-3}E_2}{36864\pi^{6}}m_sg\langle\bar{s}{\bf\sigma G}s\rangle~,
\end{eqnarray}
\vspace{-0.35cm}
\begin{eqnarray}
\label{eq: lsrb}
&&{\cal L}^\Theta_{B_{\cite{oka.04}}}(\tau)=\frac{\tau^{-6}E_5}{122880\pi^{8}}m_s
-\frac{\tau^{-5}E_4}{15360\pi^{6}}\langle\bar{s}s\rangle+
\nonumber\\
&&
\frac{\tau^{-4}E_3}{12288\pi^{6}}g\langle
\bar{s}{\bf\sigma.G}s\rangle+\frac{\tau^{-4}E_3}{24576\pi^{7}}m_s\langle{\alpha_s}G^2\rangle-\nonumber\\
&&\frac{\tau^{-3}7E_2}{27648\pi^{5}}\langle\bar{s}s\rangle\langle{\alpha_s}G^2\rangle+\frac{\tau^{-2}E_1}{6144\pi^{4}}\langle\bar{s}s\rangle g\langle
\bar{s}{\bf\sigma.G}s\rangle ~,\nonumber\\
\end{eqnarray}
\vspace{-0.2cm}
where:
\vspace{-0.1cm}
\begin{equation}
E_n=1-\Big{[}\rho_n\equiv e^{-t_c\tau}\sum_{k=0}^{n}\frac{(t_c\tau)^k}{k!}\Big{]}~,
\end{equation}
\vspace{-0.2cm}
$\rho_n$ being the notation in \cite{SNB}, while:
$\langle\bar{s}s\rangle,~\langle\alpha_s G^2\rangle$ are respectively the dimension $D=3$ quark and $D=4$
gluon condensates;
$g\langle
\bar{s}{\bf\sigma G}s\rangle\equiv g\langle\bar{s}\sigma^{\mu\nu}(\lambda_a/2) G_{\mu\nu}^as\rangle\equiv M_0^2\langle \bar ss\rangle$
is the $D=5$ mixed condensate. Throughout this paper we
shall use the values of the QCD parameters given in Table 1.
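The factors $E_n$ implement the continuum subtraction: for threshold $t_\leq=0$ one has the standard identity $\int_0^{t_c} dt\, t^n e^{-t\tau} = n!\, \tau^{-(n+1)} E_n$. The sketch below (ours, purely illustrative) checks this numerically for $n=2$:

```python
import math

# Numerical check (ours) of the identity
#   int_0^{t_c} t^n e^{-t tau} dt = n! tau^{-(n+1)} E_n ,
# with E_n = 1 - rho_n as defined in the text.
def E(n, tc, tau):
    rho = math.exp(-tc * tau) * sum((tc * tau)**k / math.factorial(k)
                                    for k in range(n + 1))
    return 1.0 - rho

def moment_integral(n, tc, tau, steps=100000):
    """Trapezoidal evaluation of int_0^{tc} t^n e^{-t tau} dt."""
    h = tc / steps
    s = 0.5 * (0.0**n + tc**n * math.exp(-tc * tau))
    for i in range(1, steps):
        t = i * h
        s += t**n * math.exp(-t * tau)
    return s * h

n, tc, tau = 2, 2.0, 1.5
numeric = moment_integral(n, tc, tau)
exact = math.factorial(n) * tau**-(n + 1) * E(n, tc, tau)
print(abs(numeric - exact) < 1e-6)  # True
```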
\vspace{-1.cm}
\begin{table}[H]
\begin{center}
\setlength{\tabcolsep}{.28pc}
\newlength{\digitwidth} \settowidth{\digitwidth}{\rm 1.5}
\caption{\footnotesize QCD input parameters used in the analysis.}
\begin{tabular}{ll}
\hline
Parameters&References \\
\hline
$\bar m_s(2~{\rm GeV})= (111\pm 22)$ MeV&\cite{SNB,QMASS,SNMS,PDG}\\
$\langle \bar dd\rangle^{1/3}$(2 GeV)=$-(243\pm 14)$ MeV&\cite{SNB,QMASS,DOSCHSN}\\
$\langle \bar ss\rangle /\langle \bar dd\rangle=0.8\pm 0.1$&\cite{SNB,QMASS,SNP2}\\
$\langle \alpha_s G^2\rangle=(0.07\pm 0.01)$ GeV$^4$&\cite{SNB,SNG}\\
$M^2_0=(0.8\pm 0.1)$ GeV$^2$&\cite{SNB,SNSP}\\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{-1.cm}
\noindent
We study the LSR in Eqs.
(\ref{eq: lsra}) and (\ref{eq: lsrb}). We find that all LSR corresponding to different currents present
the common features shown in Fig. \ref{fig: lsr}:
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm]{OKALSR.eps}
\vspace{-.5cm}
\caption{\footnotesize $\tau$-behaviour of $M_\Theta$ for given values of $t_c$. On the LHS of the vertical dashed line, the OPE
converges. The vertical line with arrow on each curve shows that the continuum contribution dominates over the resonance in the LHS
region. A (resp. B) corresponds to the invariant defined in Eq. (\ref{eq: invariant}).}
\label{fig: lsr}
\vspace{-1.3cm}
\end{center}
\end{figure}
\begin{enumerate}
\vspace{-.2cm}
\item[$-$]The $B$-component increases rapidly with $\tau$. Then, it is useless at that approximation of the OPE.
\vspace{-.5cm}
\item[$-$] For $A$, the mass prediction decreases smoothly when $\tau$ increases. The OPE converges for
$\tau\leq 0.9$ GeV$^{-2}$ (LHS of the vertical dashed line).\vspace{-.3cm}
\item[$-$] The QCD continuum contribution dominates over the resonance one in all ranges of $\tau$ where the OPE converges (LHS of the
vertical line with arrow on each curve).
Indeed, for $\tau\leq 0.9$ GeV$^{-2}$, the QCD continuum contribution accounts for more than 84\% of the OPE one.
\end{enumerate}
\vspace{-.2cm}
$\bullet$ Therefore, it is impossible to find a {\it sum rule window} region where both the
resonance dominates over the QCD continuum contribution, and where the OPE converges.
Intuitively, this feature is expected, as the current describing the pentaquark has a higher dimension
and is therefore more affected by the continuum contribution than
the well-known sum rules for ordinary $\bar qq$ mesons. The absence of the sum rule
window is reflected in the increase of the mass predictions with the QCD continuum
threshold
$t_c$. In the existing literature, the $t_c$-values have been fixed ad hoc and intuitively.
\\
$\bullet$ During the evaluation of the different QCD diagrams, we do not find (to leading order in $\alpha_s$) any
factorization of the $(\bar s u)$-$(udd)$ diagram corresponding to a reducible $K$-$N$ continuum diagram, which has nothing to do
with the diquark picture. Thus, our direct observation does not support the criticisms raised in \cite{KONDO.04} and refuted in \cite{LEE.04} concerning
a possible double counting due to the non-subtraction of the reducible diagram in existing sum rule analyses of the $\Theta$.
\\
$\bullet$ We conclude from the previous prototype example that the LSR based on the simple duality ansatz (resonance + QCD continuum)
is not appropriate for determining the pentaquark masses, due to the absence of
the usual {\it sum rule window}. Because of the huge continuum contribution ($\approx
85\%$) at relatively large $\tau\approx 1$ GeV$^{-2}$, the LSR
cannot strictly indicate the existence of the resonance in the spectral function.\\
$\bullet$ We have
checked (though not explicitly shown in the paper) that the conclusions reached here also apply to the sum rules used in the literature:
\cite{zhu.03} (current in Eq. (\ref{RD1.zhu})) and subsequent uses in \cite{KONDO.04,LEE.04} for the $I=0$,
$S$-wave state; the sum rules used in \cite{mat.02} for the $I=1$, $S$-wave state; in \cite{makus.04}
(current in Eq. (\ref{RD1.eidcur})) for the
$I=0,~1$
$P$-wave state; in \cite{IOFFE} for the $I=1$ tensor current and the
sum rules used in \cite{NAVARRA,KONDO.14} for studying the $J=3/2$ states. Indeed, in most LSR, the OPE does not converge at the
scale where the results are extracted, while the QCD continuum threshold has been taken arbitrarily or intuitively.\\
$\bullet$ The above results raise some doubts
about the validity of the results obtained so far in the existing literature. Indeed, if one insists on using the LSR for predicting the
parameters of the $\Theta$ and of other pentaquark states, it is mandatory to introduce a more involved parametrization of the continuum spectral
function.
\section{FINITE ENERGY SUM RULES (FESR)}
\noindent
In contrast to the LSR, Finite Energy Sum Rules (FESR) \cite{RAF,KRAS,SNB} have the advantage of projecting out a set of
constraints for operators of given dimension (local duality). They also correlate the resonance mass and residue to
the QCD continuum threshold $t_c$, thus avoiding inconsistencies among the values of these parameters. Moreover,
contrary to the LSR, the resonance and QCD continuum contributions are separated from the very beginning. The
FESR read:
\vspace{-0.2cm}
\begin{eqnarray}
{\cal M}^H_{n}(A/B) &\equiv& \int_{t_\leq}^{t_c}{dt}~t^n\mbox{
Im}A^H/B^H|_{EXP}\nonumber\\
&\simeq &\int_{t_\leq}^{t_c}{dt~t^n}\mbox{ Im}A^H/B^H|_{QCD}~.
\end{eqnarray}
From the expressions of the spectral function given
previously, one can easily derive the FESR constraints.
\\
Doing the FESR analysis for the $A(q^2)$ invariant, one can notice that, at the approximation to which the OPE is known ($D\leq 6$),
there is no stability in
$t_c$ for the different moments ${\cal M}^\Theta_{n}$ and for the different choices of currents (see Fig. \ref{fig:
mass1}). Therefore, we shall not consider this invariant in the following.
\subsection*{The $I=0$, $S$-wave channel}
\noindent
We illustrate the analysis with the current of Ref. \cite{oka.04} (the other choice \cite{zhu.03} in Eq.
(\ref{RD1.zhu}) leads to approximately the same dynamics, as one can see from the QCD expressions). Including
the $D=7$ and 9 condensate contributions, the two first lowest dimension constraints from the
$B(q^2)$ invariant read:
\begin{eqnarray}
{\cal M}^\Theta_{0,\cite{oka.04}} &=& {\frac {{\it m_s}\,{{\it t_c}}^{6}}{{88473600}{\pi }^{8}}
}-{\frac {{\langle\bar{s}s\rangle}\,{{ t_c}}^{5}}{1843200\pi ^{6}}}\nonumber\\
&+&
\,{\frac {{\it g\langle\bar{s}{\bf\sigma.G}s\rangle}\,{{\it t_c}}^{4}}{294912{\pi }^{6}}
}
+{{m_s\langle\alpha_s G^2\rangle t_c^4}\over{589824\pi^7}}
\nonumber\\
&-&{7\langle\alpha_s G^2\rangle\langle\bar ss\rangle t_c^3\over 165888\pi^5}+{\langle\alpha_s G^2\rangle{\it g\langle\bar{s}{\bf\sigma.G}s\rangle}t_c^2\over 12288\pi^5}
\end{eqnarray}
\begin{eqnarray}
{\cal M}^\Theta_{1,\cite{oka.04}} &=& {\frac {{ m_s}\,{{ t_c}}^{7}}{{103219200}{\pi }^{8}}
}-{\frac {{\langle\bar{s}s\rangle}\,{{ t_c}}^{6}}{2211840\pi ^{6}}}\nonumber\\
&+&
\,{\frac {{ g\langle\bar{s}{\bf\sigma.G}s\rangle}\,{{ t_c}}^{5}}{368640{\pi }^{6}}
}
+{{m_s\langle\alpha_s G^2\rangle t_c^5}\over{737280\pi^7}}
\nonumber\\
&-&{7\langle\alpha_s G^2\rangle\langle\bar ss\rangle t_c^4\over 221184\pi^5}+{\langle\alpha_s G^2\rangle{\it g\langle\bar{s}{\bf\sigma.G}s\rangle}t_c^3\over 18432\pi^5}
\end{eqnarray}
from which one can deduce the mass squared:
\begin{equation}
M^2_{\Theta}\simeq {\frac{{\cal M}^\Theta_{1,\cite{oka.04}}}{{\cal M}^\Theta_{0,\cite{oka.04}}}}~.
\end{equation}
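These constraints are straightforward to evaluate numerically. The sketch below (our own illustration, not from the original analysis) uses the central values of Table 1 in GeV units, the stability point $t_c\simeq 2.29$ GeV$^2$, and multiplies the $D=7,9$ condensate terms by the vacuum-saturation violation factor 2 discussed in the error analysis below:

```python
import math

pi = math.pi

# Central values of Table 1 (GeV units); k = 2 is the vacuum-saturation
# violation factor applied to the D = 7, 9 condensate contributions.
ms  = 0.111              # strange quark mass
ss  = 0.8 * (-0.243**3)  # <sbar s> = 0.8 <dbar d>
G2  = 0.07               # <alpha_s G^2>
mix = 0.8 * ss           # g<sbar sigma.G s> = M_0^2 <sbar s>, M_0^2 = 0.8 GeV^2
k   = 2.0
tc  = 2.29               # t_c stability point in GeV^2

def M0(tc):
    """Lowest FESR moment of the B invariant (S-wave current)."""
    return (ms * tc**6 / (88473600 * pi**8)
            - ss * tc**5 / (1843200 * pi**6)
            + mix * tc**4 / (294912 * pi**6)
            + ms * G2 * tc**4 / (589824 * pi**7)
            - k * 7 * G2 * ss * tc**3 / (165888 * pi**5)
            + k * G2 * mix * tc**2 / (12288 * pi**5))

def M1(tc):
    """First FESR moment of the B invariant (S-wave current)."""
    return (ms * tc**7 / (103219200 * pi**8)
            - ss * tc**6 / (2211840 * pi**6)
            + mix * tc**5 / (368640 * pi**6)
            + ms * G2 * tc**5 / (737280 * pi**7)
            - k * 7 * G2 * ss * tc**4 / (221184 * pi**5)
            + k * G2 * mix * tc**3 / (18432 * pi**5))

M_theta = math.sqrt(M1(tc) / M0(tc))   # M_Theta^2 = M_1 / M_0
print(f"M_Theta ~ {M_theta:.2f} GeV")
```

With these inputs the ratio gives $M_\Theta$ close to $1.5$ GeV, in the ballpark of Eq. (\ref{eq:smass}).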
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm]{OKAFESR.eps}
\vspace{-.5cm}
\caption{\footnotesize $t_c$-behaviour of $M_\Theta$ for different truncations of the OPE. }
\vspace{-1.cm}
\label{fig: mass1}
\end{center}
\end{figure}
The behaviour of $M_{\Theta}$ is shown in Fig. \ref{fig: mass1} for different truncations of the OPE. One can
notice a stability at
$
{t_c}\simeq 2.29~ {\rm GeV}^2~,
$
where the OPE starts to converge only after the inclusion of the $D=7+9$ condensates, while the $D=7$ term alone destroys the
stability reached for $D\leq 5$.
One can notice the important contribution of the lowest quark and mixed quark-gluon condensates in the OPE, which play a crucial
role in this mass determination. To that order, we obtain:
\vspace{-0.2cm}
\begin{eqnarray}\label{eq:smass}
M_\Theta&\simeq& (1513\pm 20\pm 10\pm 40 \pm 30 \pm 30\pm 95)~{\rm MeV}\nonumber\\
&\simeq&(1513\pm 114)~{\rm MeV}~,
\end{eqnarray}
where the errors come respectively from $m_s,~\langle\bar qq\rangle,~\langle\alpha_s G^2\rangle,~M^2_0$, the estimate of the higher-dimension
condensates, and the violation of the vacuum saturation assumption for the $D=7,9$ condensates by a factor $(2\pm 1)$,
as in the
$\rho$-meson
\cite{TARRA,RAF} and some other channels \cite{SNB}.
One can notice that:\\
$\bullet$ The existence of the $t_c$-stability point demonstrates the superiority of the FESR over the LSR in this channel: for the LSR,
$M_\Theta$ increases with $t_c$, while here the
localisation of the stability point induces a negligible error. \\
$\bullet$ The FESR order parameter in the OPE, $t_c\simeq 2.3$ GeV$^2$, is much larger
than that of the LSR ($\tau^{-1}\leq 1$ GeV$^2$), implying a much better convergence of the OPE for the FESR
and hence a much more reliable result.\\
$\bullet$ Working with ratios of higher moments, ${\cal M}^\Theta_{2}/{\cal M}^\Theta_{1}$, etc., leads to
almost the same value of $M_\Theta$; the slight variation is much smaller than the error in Eq. (\ref{eq:smass}).\\
$\bullet$ Truncating the OPE at $D=5$, as done in the available literature, would give a slightly lower value of $M_\Theta$ at the
stability point $t_c\approx 3.2$ GeV$^2$ (see Fig. \ref{fig: mass1}), but one compatible
with that in Eq. (\ref{eq:smass}).
\\
$\bullet$ Contrary to $M_\Theta$, the value and the sign of $\lambda_\Theta^2$ are very sensitive to the truncation of the OPE due to the
alternate signs of the condensate contributions in the analysis. The stability in $t_c$ is obtained after the inclusion of the $D=5$
condensate contributions as shown in Fig. \ref{fig: okalambda}.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm]{okafesrlamb.eps}
\vspace{-.5cm}
\caption{\footnotesize $t_c$-behaviour of $\lambda^2_\Theta\times 10^9$ in GeV$^{12}$ including $D\leq 5$ condensates
in the OPE. }
\label{fig: okalambda}
\vspace{-1.1cm}
\end{center}
\end{figure}
To our approximation $D\leq 9$, the most conservative result is:
\begin{equation}
\lambda_\Theta^2\approx -(0.14\sim 0.49)\times 10^{-9}~{\rm GeV}^{12},
\end{equation}
where the range comes from the shift of the $t_c$-stability point from $D=5$ to $D=9$ approximation.
This result, though inaccurate, suggests that the parity of the
$\Theta$ is negative, as indicated by the lattice results given in
\cite{LATT}\footnote{At the approximation $D\leq 5$,
the LSR does not converge, such that analogous results obtained in \cite{oka.04,KONDO.04,LEE.04} should be taken with great care.}.
Improving the accuracy of our result requires more high-dimension condensate terms in the OPE.
\subsection*{The $I=1$, $S$-wave channel}
\noindent
We have also applied FESR in the $I=1$ $S$-wave channel with the current \cite{mat.02}:
\vspace{-0.2cm}
\begin{equation}
\eta^{\Theta}_{\mbox{{\small \cite{mat.02}}}} = {1\over\sqrt{2}}\epsilon^{abc} \Big{[} Q_{ab}^{s}
Q_{ce}^{s} + tQ_{ab}^{ps}
Q_{ce}^{ps}-(u\leftrightarrow d)\Big{]}C \bar{s}_{e}^{T}
\end{equation}
\vspace{-0.2cm}
where $t$ is an arbitrary mixing parameter. To the $D\leq 5$ approximation, the analysis gives almost the same value of $M_\Theta$ as in Fig. \ref{fig:
mass1} at the same approximation. This result can be interpreted as a consequence of the good realization of the $SU(2)_F$ symmetry for the $u$
and $d$ quarks. Then, we expect that the unmixed $I=1$ partners of the unmixed $I=0$ state, if any, will be around the 1.5 GeV region.
\section*{The $I=0$, $P$-wave channel}
\noindent
We do a similar analysis for the $P$-wave
current given in Eqs. (\ref{RD1.eidcur}) and (\ref{eta.new}), where as we have mentioned in section 3, the contribution from Eq.
(\ref{eta.new}) vanishes to leading order in $\alpha_s$. The corresponding FESR up to
$D=5$ condensates are given below
\footnote{The inclusion of higher dimension condensates is in progress.}:
\begin{eqnarray}
{\cal M}^P_{0,\cite{makus.04}} &=& {\frac {{\it m_s}\,{{\it t_c}}^{7}}{{361267200}{\pi }^{8}}
}-{\frac {{\langle\bar{s}s\rangle}\,{{ t_c}}^{6}}{5529600\pi ^{6}}}\nonumber\\
&+&
\,{\frac {{\it g\langle\bar{s}{\bf\sigma.G}s\rangle}\,{{\it t_c}}^{5}}{614400{\pi }^{6}}-\frac {m_s \langle\alpha_s
G^2\rangle t_c^5}{19660800\pi^7}}
\end{eqnarray}
\begin{eqnarray}
{\cal M}^P_{1,\cite{makus.04}} &=& {\frac {{\it m_s}\,{{\it t_c}}^{8}}{{412876800}{\pi }^{8}}
}-{\frac {{\langle\bar{s}s\rangle}\,{{ t_c}}^{7}}{6451200\pi ^{6}}}\nonumber\\
&+&
\,{\frac {{\it g\langle\bar{s}{\bf\sigma.G}s\rangle}\,{{\it t_c}}^{6}}{737280{\pi }^{6}}-\frac {m_s \langle\alpha_s
G^2\rangle t_c^6}{23592960\pi^7}}
\end{eqnarray}
To this order, the moments present a similar $t_c$-behaviour as above (see Figs. \ref{fig: pmass} and \ref{fig: plambda}).
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm]{eidFESR.eps}
\vspace{-.5cm}
\caption{\footnotesize $t_c$-behaviour of $M_P$ including the $D\leq 5$ condensates in the OPE. }
\vspace{-1.cm}
\label{fig: pmass}
\end{center}
\end{figure}
Both for the mass and residue, the stability point is:
\begin{equation}
{t_c}\simeq 5.5~ {\rm GeV}^2~.
\end{equation}
The corresponding resonance mass and residue are:
\begin{eqnarray}\label{eq: pmass}
M_P&\simeq& 1.99\pm 0.19~{\rm GeV},\nonumber\\
\lambda_P^2&\approx& -(0.7\sim 7.1)\times 10^{-9}~{\rm GeV}^{14}~.
\end{eqnarray}
The errors come mainly from the estimate of the unknown $D=7,~9$ condensate contributions, inspired by the
$S$-wave channel.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm]{eidlamb.eps}
\vspace{-.5cm}
\caption{\footnotesize $t_c$-behaviour of $\lambda^2_P\times 10^9$ GeV$^{12}$ including the $D\leq 5$ condensates in
the OPE. }
\vspace{-1.cm}
\label{fig: plambda}
\end{center}
\end{figure}
One can notice that:
\\
$\bullet$ The value of the QCD continuum threshold at which the FESR stabilizes is much higher
than the intuitive choice used in the LSR \cite{makus.04} needed to reproduce the experimental mass of the $\Theta$.\\
$\bullet$ The mass value obtained for the $P$-wave resonance is $(450\pm 190)$ MeV higher than the $\Theta(1540)$ mass,
which suggests that there is a $P$-wave state different from the $\Theta(1540)$ in the region around 2 GeV, which we expect to
be discovered experimentally.\\
$\bullet$ The value and sign of the residue suggest that this $P$-wave state has a negative parity like the $\Theta(1540)$.
\section{THE $\Theta$-$K$-$N$ COUPLING}
\noindent
For studying this coupling, we start from the three-point function:
\begin{eqnarray}
V(p,q)=i^2\int d^4x~d^4y~e^{i(px+qy)}~\langle 0\vert \eta(0)N(x)K(y)\vert 0\rangle
\end{eqnarray}
where $K(y)\equiv (m_s+m_u)\bar s(i\gamma_5)u$ is the kaon current, while $N(x)\equiv :u(C\gamma_5)du:+b:u(Cd)\gamma_5 u:$ is the
nucleon interpolating field
\cite{HEIDEL} ($b$ being an arbitrary mixing parameter). $\eta$ is the $\Theta$ current defined in previous section. For
definiteness, we work with the $S$-wave
current given by \cite{oka.04}. A QCD evaluation of the vertex in the chiral limit $m_s=0$ shows that the perturbative and
non-perturbative diagrams give vanishing contributions at leading order and at order $\alpha_s$ \footnote{Our result is stronger than
the one in Ref. \cite{IOFFE}, which claims a non-zero $\alpha_s$ contribution.}. The result then suggests that the
$\Theta$-$K$-$N$ coupling is of the order $\alpha_s^2$ supporting the experimental observation \cite{NICOLAI} that the $\Theta (1540)$ is a
narrow state. The narrowness of a pentaquark state has been already advocated in the past, from duality arguments, where its decay into
$B\bar BB$ baryon states is dynamically favoured, implying that light pentaquark states below this threshold are naturally narrow
\cite{VENEZIA}. A narrow $S$-wave pentaquark state has been also obtained in \cite{STECH} and \cite{IOFFE} using simple chiral
symmetry arguments. In \cite{KARL} the narrowness of the $\Theta$ is due to a destructive interference between two almost degenerate
states, while in \cite{SORBA} it is due to the flavour structure of the $\Theta$: after meson formation, the residual
three-quark piece has little overlap with the octet baryon wave-function.
\section{SUMMARY AND CONCLUSIONS}
\noindent
$\bullet$ We have re-analyzed the existing LSR results in the literature. We found that, due to the slow convergence of the OPE
and to the relative importance of the QCD continuum contribution to the spectral function, the minimal duality ansatz ``one
resonance + QCD continuum" is not sufficient for finding a {\it sum rule window} where the results are optimal. These
features penalize {\it all} existing sum rule results in the literature, which then become unreliable despite the fact
that the mass predictions reproduce the data quite well. However, this {\it apparent} good prediction is due to the
{\it intuitive or arbitrary} choice of the continuum threshold $t_c$. In fact, in the LSR analysis, the mass
prediction increases with $t_c$, though it is a smooth function of $\tau$, as can be seen in Fig. \ref{fig: lsr}. \\
$\bullet$ On the contrary,
FESR has the advantage of presenting good
$t_c$-stability and converging faster than the LSR, because the optimal results are obtained at a higher scale, $t_c\approx (2\sim
3)$ GeV$^2$, than that of the LSR, $\tau^{-1}\leq 1$ GeV$^2$. \\
$\bullet$ Truncating the OPE at the $D=9$ condensates, at which the OPE starts to converge, we obtain the result
in Eq. (\ref{eq:smass}), for the $S$-wave state, which one can compare with the experimental candidate $\Theta(1540)$.
\\
$\bullet$ By truncating the OPE at $D=5$, we also find from FESR a good degeneracy between the unmixed $I=0$ and $I=1$ $S$-wave
states.\\
$\bullet$ Similarly, we obtain, from FESR, the mass of the $P$-wave state of about 2 GeV in Eq. (\ref{eq: pmass}) to order $D= 5$ of
the OPE, but including the estimated effects of $D=7,9$ condensates. This mass is $(450\pm 190)$ MeV higher
than the
$\Theta(1540)$. \\
$\bullet$ Finally, an analysis of the
$\Theta$-$K$-$N$ coupling using vertex sum rules supports results in the literature that the $\Theta(1540)$ is a narrow
state.\\
$\bullet$ Our results seem to favour the case (b) discussed in \cite{JAIN} where the $\Theta$ resonance is induced in $KN$ scattering
by coupling to a confined channel. A complete program using FESR in different pentaquark channels is in progress and will be
published elsewhere.
\section*{ACKNOWLEDGEMENTS}
\noindent
Communications with M. Eidemueller, R.L. Jaffe, F. Navarra, M. Nielsen, J.M. Richard, R. Rodrigues da Silva and G.C. Rossi
are acknowledged.
\section{Introduction}
In recent years, there has been increasing interest in deep learning for supervised classification \citep{lecun}. An exact definition is difficult to give, owing to its wide applicability in different contexts and formulations, but deep learning can be thought of as a set of algorithms able to gradually
learn a huge number of parameters in an architecture composed of
multiple non-linear transformations, called a multi-layer structure. Deep neural networks have achieved great success in supervised classification; an important example is Facebook's DeepFace software, a deep learning facial recognition system
that employs a nine-layer neural network with over 120 million connection
weights. It can identify human faces in digital images with an accuracy of 97.35\%, on a par with human visual capability \citep{hodson}.
Deep learning architectures are now widely used for speech recognition, object detection, pattern recognition, image processing and many other supervised classification tasks; for a comprehensive historical survey and its applications, see \cite{schmidhuber} and the references therein.
Despite the success of deep models for supervised tasks, there has been limited research in the machine learning and statistics community on deep methods for clustering. In this paper we will present and discuss deep Gaussian mixtures for clustering purposes, a powerful generalization of classical Gaussian mixtures to multiple layers.
Identifiability of the model is discussed and an innovative stochastic estimation algorithm is proposed for parameter estimation.
Although research on mixture models has been intense and prolific in many directions in recent years, we will show that deep mixtures can be very useful for clustering in complex problems.
The paper is organized as follows. In the next section classical Gaussian mixture models will be reviewed. In Section 3 deep Gaussian mixtures are defined and their main probabilistic properties presented. Identifiability is also discussed. In Section 4 dimensionally reduced deep mixtures are presented. Section 5 is devoted to the estimation algorithm for fitting the model. Experimental results on simulated and real data are presented in Section 6. We conclude this paper with some final remarks (Section 7).
\section{Gaussian Mixture Models}
Finite mixture models \citep{Peel} have gained growing popularity in the last decades as a tool for model-based clustering \citep{Fraley}. They are now widely used in several areas such as pattern recognition, data mining, image analysis, machine learning and in many problems involving clustering and classification methods.
Let $\textbf{y}_i$ be a $p$-dimensional random vector containing $p$ quantitative variables of interest for the statistical unit $i$th, with $i=1,\ldots,n$.
Then $\textbf{y}_i$ is distributed as a Gaussian Mixture Model (GMM) with $k$ components if
\begin{equation*}
f(\textbf{y}_i;\boldsymbol\theta) =
\sum_{j=1}^k\pi_j\phi^{(p)}(\textbf{y}_i;\mu_j,\Sigma_j),
\end{equation*}
\noindent where the $\pi_j$ are positive weights subject to $\sum_{j=1}^k\pi_j=1$ and the $\mu_j,\Sigma_j$ are the parameters of the Gaussian components. Note an interesting property that will be very useful in defining our proposal: a Gaussian mixture model has a related factor-analytic representation via a linear model with a certain prior probability as
\begin{equation*}
\textbf{y}_i=\mu_j + \Lambda_j \textbf{z}_i + \textbf{u}_i \ \ \ \ \textrm{with prob. } \pi_j,
\end{equation*}
\noindent where ${\bf z}_i$ is a $p$-dimensional latent variable with a multivariate standard Gaussian distribution and $\textbf{u}_i$ is an independent vector of random errors with $\textbf{u}_i\sim N(0,\Psi_j)$, where the $\Psi_j$ are diagonal matrices. The component-covariance matrices can then be decomposed as $\Sigma_j=\Lambda_j \Lambda_j^\top+\Psi_j$.
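To make the factor-analytic representation concrete, the following short sketch (NumPy only; the toy parameter values are ours, not taken from any reference) samples from a $k=2$ mixture through the linear model and checks the implied component covariance $\Sigma_j=\Lambda_j\Lambda_j^\top+\Psi_j$ empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200_000, 3

# Toy parameters of a k = 2 Gaussian mixture in factor-analytic form
pi = np.array([0.4, 0.6])
mu = np.stack([np.zeros(p), np.full(p, 2.0)])
Lam = rng.normal(size=(2, p, p))
psi_diag = rng.uniform(0.1, 0.5, size=(2, p))      # diagonal of Psi_j

# Sampling: draw the component j, then y = mu_j + Lam_j z + u
j = rng.choice(2, size=n, p=pi)
z = rng.standard_normal((n, p))                    # z ~ N(0, I_p)
u = rng.standard_normal((n, p)) * np.sqrt(psi_diag[j])
y = mu[j] + np.einsum('nab,nb->na', Lam[j], z) + u

# Implied covariance of component 0: Sigma_0 = Lam_0 Lam_0' + Psi_0
Sigma0 = Lam[0] @ Lam[0].T + np.diag(psi_diag[0])
emp0 = np.cov(y[j == 0].T)                         # empirical counterpart
print(np.max(np.abs(emp0 - Sigma0)))               # small for large n
```

The same decomposition is what the deep model introduced below nests over several layers.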
\section{Deep Mixture Models}
Deep learning is a hierarchical inference method organized in a multilayered architecture, where the subsequent multiple layers of learning are able to efficiently describe complex relationships. In a perspective similar to that of deep neural networks, we define a Deep Gaussian Mixture model (DGMM) as a network of multiple layers of latent variables. At each layer, the variables follow a mixture of Gaussian distributions. Thus, the deep mixture model consists of a set of nested mixtures of linear models that globally provide a nonlinear model able to describe the data in a very flexible way.
\subsection{Definition}
Suppose there are $h$ layers. Given the set of observed data $\textbf{y}$ with dimension $n \times p$, at each layer a linear model describing the data with a certain prior probability is formulated as follows:
\begin{eqnarray}\label{eqn1}
&&(1) \ \ \ \ \ \ \ \ \textbf{y}_i=\eta_{s_1}^{(1)}+ \Lambda_{s_1}^{(1)} \textbf{z}_i^{(1)}+\textbf{u}_i^{(1)} \textrm{ with prob. } \pi_{s_1}^{(1)}, \ {s_1}=1,\ldots,k_1, \nonumber \\
&& (2) \ \ \ \ \ \ \ \ \textbf{z}_i^{(1)}=\eta_{s_2}^{(2)}+ \Lambda_{s_2}^{(2)} \textbf{z}_i^{(2)}+\textbf{u}_i^{(2)} \textrm{ with prob. } \pi_{s_2}^{(2)}, \ s_2=1,\ldots,k_2, \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ ... \\
&& (h) \ \ \ \ \ \ \ \ \textbf{z}_i^{(h-1)}=\eta_{s_h}^{(h)}+ \Lambda_{s_h}^{(h)} \textbf{z}_i^{(h)}+\textbf{u}_i^{(h)} \textrm{ with prob. } \pi_{s_h}^{(h)}, \ s_h=1,\ldots,k_h, \ \ \ \ \ \nonumber
\end{eqnarray}
\noindent where $\textbf{z}_i^{(h)}\sim N(\textbf{0},\textbf{I}_p)$ ($i=1,\ldots,n$) and $\textbf{u}_i^{(1)},\ldots,\textbf{u}_i^{(h)}$ are specific random errors that follow a Gaussian distribution with zero expectation and covariance matrices $\Psi_{s_1}^{(1)}, \ldots,\Psi_{s_h}^{(h)}$, respectively, $\eta_{s_1}^{(1)}, \ldots,\eta_{s_h}^{(h)}$ are vectors of length $p$, $\Lambda_{s_1}^{(1)}, \ldots,\Lambda_{s_h}^{(h)}$ are square matrices of dimension $p$. The specific random variables $\textbf{u}$ are assumed to be independent of the latent variables $\textbf{z}$. From this representation it follows that at each layer the conditional distribution of the response variables given the regression latent variables is a (multivariate) mixture of Gaussian distributions.
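A minimal generative sketch of (\ref{eqn1}) may help fix ideas; the parameters below are arbitrary toy values (and, for simplicity, all layers keep dimension $p$, as in this section):

```python
import numpy as np

rng = np.random.default_rng(1)
p, h = 2, 3
k = [3, 3, 2]            # k_1, k_2, k_3 as in the example in the text

# Arbitrary toy parameters for each layer l and component s_l
eta = [rng.normal(size=(k[l], p)) for l in range(h)]
Lam = [rng.normal(scale=0.5, size=(k[l], p, p)) for l in range(h)]
psi = [rng.uniform(0.05, 0.2, size=(k[l], p)) for l in range(h)]  # diag(Psi)
w = [np.full(k[l], 1.0 / k[l]) for l in range(h)]                 # pi^{(l)}

def sample_dgmm(n):
    """Top-down sampling: z^(h) ~ N(0, I_p), then apply layers h, ..., 1."""
    z = rng.standard_normal((n, p))                # z^(h)
    for l in range(h - 1, -1, -1):
        s = rng.choice(k[l], size=n, p=w[l])       # indicator s_{l+1}
        u = rng.standard_normal((n, p)) * np.sqrt(psi[l][s])
        z = eta[l][s] + np.einsum('nab,nb->na', Lam[l][s], z) + u
    return z                                        # z^(0) = y

y = sample_dgmm(5)
print(y.shape)                                      # (5, 2)
print(sum(k), int(np.prod(k)))                      # 8 sub-components, 18 paths
```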
\begin{figure}
\centering
\includegraphics[scale=0.4]{deepex.eps}
\caption{Structure of a DGMM with $h=3$ and number of layer components $k_1=3$, $k_2=3$ and $k_3=2$}\label{fig:example}
\end{figure}
To illustrate the DGMM, consider $h=3$ and let the number of layer components be $k_1=3$, $k_2=3$ and $k_3=2$. The structure is shown in Figure \ref{fig:example}. Thus, at the first layer we have that the conditional distribution of the observed data given ${\bf z}^{(1)}$ is a mixture with 3 components, and so on. More precisely, by considering the data as the zero layer, $\textbf{y}=\textbf{z}^{(0)}$, all the conditional distributions satisfy a first-order Markov property, that is, $f(\textbf{z}^{(l)}|\textbf{z}^{(l+1)},\textbf{z}^{(l+2)},\ldots,\textbf{z}^{(h)};\boldsymbol\Theta)=
f(\textbf{z}^{(l)}|\textbf{z}^{(l+1)};\boldsymbol\Theta)$ for $l=0,\ldots,h-1$. At each layer, we have
\begin{eqnarray}\label{cond}
f(\textbf{z}^{(l)}|\textbf{z}^{(l+1)};\boldsymbol\Theta)=\sum_{i=1}^{k_{l+1}}\pi_i^{(l+1)}N(\eta_i^{(l+1)}+
\Lambda_i^{(l+1)} \textbf{z}^{(l+1)},\Psi_i^{(l+1)}).
\end{eqnarray}
\noindent Moreover, the DGMM with $k_1=3$, $k_2=3$ and $k_3=2$ will have a `global' number of $M=8$ sub-components ($M=\sum_{l=1}^h k_l$), but $k=18$ possible paths for the statistical units ($k=\prod_{l=1}^h k_l$) that share and combine the parameters of the $M$ sub-components. Thanks to this tying, the number of parameters to be estimated is proportional to the number of sub-components, thus reducing the computational cost compared to learning directly a model with $k=18$ components.
Let $\Omega$ be the set of all possible paths through the network.
The generic path $s=(s_1,\ldots,s_{h})$ has a probability $\pi_s$ of being sampled, with
$$ \sum_{s \in \Omega} \pi_s=\sum_{s_1,\ldots,s_{h}} \pi_{(s_1,\ldots,s_{h})}=1.$$
\noindent The DGMM can be written as
\begin{eqnarray}
f(\textbf{y};\boldsymbol\Theta) &=& \sum_{s \in \Omega} \pi_s N(\textbf{y};\boldsymbol\mu_s,\boldsymbol\Sigma_s),
\end{eqnarray}
\noindent where
\begin{eqnarray*}
\boldsymbol\mu_s&=& \eta_{s_1}^{(1)} + \Lambda_{s_1}^{(1)} (\eta_{s_2}^{(2)} + \Lambda_{s_2}^{(2)}(\ldots (\eta_{s_{h-1}}^{(h-1)} + \Lambda_{s_{h-1}}^{(h-1)}\eta_{s_h}^{(h)}))) \\
&=& \eta_{s_1}^{(1)} + \sum_{l=2}^h \left( \prod_{m=1}^{l-1} \Lambda_{s_m}^{(m)}\right) \eta_{s_l}^{(l)}
\noindent and
\begin{eqnarray*}
\boldsymbol\Sigma_s&=& \Psi_{s_1}^{(1)} + \Lambda_{s_1}^{(1)} (\Lambda_{s_2}^{(2)}(\ldots (\Lambda_{s_h}^{(h)}\Lambda_{s_h}^{(h)\top}+\Psi_{s_h}^{(h)})\ldots)\Lambda_{s_2}^{(2)\top})\Lambda_{s_1}^{(1)\top} \\
&=& \Psi_{s_1}^{(1)} + \sum_{l=2}^h \left( \prod_{m=1}^{l-1} \Lambda_{s_m}^{(m)}\right) \Psi_{s_l}^{(l)}\left( \prod_{m=1}^{l-1} \Lambda_{s_m}^{(m)}\right)^\top + \left( \prod_{m=1}^{h} \Lambda_{s_m}^{(m)}\right)\left( \prod_{m=1}^{h} \Lambda_{s_m}^{(m)}\right)^\top.
\end{eqnarray*}
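As a numerical sanity check of the path parameters (toy values of our own), $\boldsymbol\mu_s$ and $\boldsymbol\Sigma_s$ can be computed along one path by the nested recursion and compared with their explicit expansion; the terminal product of loadings is the contribution of $\textbf{z}^{(h)}\sim N(\textbf{0},\textbf{I})$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, h = 3, 2

# Toy parameters along one fixed path s = (s_1, ..., s_h)
eta = [rng.normal(size=p) for _ in range(h)]
Lam = [rng.normal(size=(p, p)) for _ in range(h)]
Psi = [np.diag(rng.uniform(0.1, 0.3, size=p)) for _ in range(h)]

# Nested recursion, from the deepest layer upwards:
#   mu  <- eta^(l) + Lam^(l) mu            (start: mu = 0, since E[z^(h)] = 0)
#   Sig <- Psi^(l) + Lam^(l) Sig Lam^(l)'  (start: Sig = I, since z^(h) ~ N(0, I))
mu, Sig = np.zeros(p), np.eye(p)
for l in range(h - 1, -1, -1):
    mu = eta[l] + Lam[l] @ mu
    Sig = Psi[l] + Lam[l] @ Sig @ Lam[l].T

# Explicit expansion of the same quantities
B = np.eye(p)                   # running product Lam^(1) ... Lam^(l-1)
mu2, Sig2 = np.zeros(p), np.zeros((p, p))
for l in range(h):
    mu2 = mu2 + B @ eta[l]
    Sig2 = Sig2 + B @ Psi[l] @ B.T
    B = B @ Lam[l]
Sig2 = Sig2 + B @ B.T           # terminal term from z^(h) ~ N(0, I)

print(np.allclose(mu, mu2), np.allclose(Sig, Sig2))   # True True
```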
Thus globally the deep mixture can be viewed as a mixture model with $k$ components and a fewer number of parameters shared through the path. In a DGMM, not only the conditional distributions, but also the marginal distributions of the latent variables ${\bf z}^{(l)}$ are Gaussian mixtures. This can be established by integrating out the bottom latent variables, so that at each layer
\begin{eqnarray}\label{eqn:marg}
f(\textbf{z}^{(l)};\boldsymbol\Theta) &=& \sum_{\tilde{s}= (s_{l+1},\ldots,s_h)} \pi_{\tilde{s}} N(\textbf{z}^{(l)};\tilde{\boldsymbol\mu}_{\tilde{s}}^{(l+1)},\tilde{\boldsymbol\Sigma}_{\tilde{s}}^{(l+1)}),
\end{eqnarray}
\noindent where $\tilde{\boldsymbol\mu}_{\tilde{s}}^{(l+1)}= \eta_{s_{l+1}}^{(l+1)} + \Lambda_{s_{l+1}}^{(l+1)} (\eta_{s_{l+2}}^{(l+2)} + \Lambda_{s_{l+2}}^{({l+2})}(\ldots (\eta_{s_{h-1}}^{(h-1)} + \Lambda_{s_{h-1}}^{(h-1)}\eta_{s_h}^{(h)})))$
and $ \tilde{\boldsymbol\Sigma}_{\tilde{s}}^{(l+1)}=\Psi_{s_{l+1}}^{(l+1)} + \Lambda_{s_{l+1}}^{(l+1)} (\Lambda_{s_{l+2}}^{(l+2)}(\ldots (\Lambda_{s_h}^{(h)}\Lambda_{s_h}^{(h)\top}+\Psi_{s_h}^{(h)})\ldots)\Lambda_{s_{l+2}}^{(l+2)\top})\Lambda_{s_{l+1}}^{(l+1)\top}$.
A deep mixture model for modeling natural images has been proposed by \cite{NIPS2014}. However, this model suffers from serious identifiability issues as discussed in the next section.
\subsection{Model-based clustering and identifiability}
As previously observed, in a DGMM the total number of components (potentially identifying the groups) is given by the total number of possible paths, $k$. In case the true number of groups, say $k^*$, is known, one could limit the estimation problem by considering only the models with $k_1=k^*$ ($k_1<k$) and perform clustering through the conditional distribution
$f(\textbf{y}|\textbf{z}^{(1)};\boldsymbol\Theta)$. This has the merit of a nice interpretation: the remaining components of the bottom layers act as density approximations to the global non-Gaussian components. In this perspective, the model represents an automatic tool for merging mixture components \citep{Hennig2010,patrick,melnykov}, and the deep mixtures can be viewed as a special mixture of mixtures model \citep{Li2005}.
However, in the general situation without further restrictions, the DGMM defined in the previous section suffers from serious identifiability issues related to the number of components at the different layers and the possible equivalent paths they could form. For instance, if $h=2$, a DGMM with $k_1=2$, $k_2=3$ components may be indistinguishable from a DGMM with $k_1=3$, $k_2=2$ components, both giving a total number of $k=6 \ (=k_1 \cdot k_2)$ possible paths.
Notice that even if $k^*$ is known and we fix $k_1=k^*$ there is still non-identifiability for models with more than two layers.
Moreover, in all cases, there is a serious second identifiability issue related to parameter estimation.
In order to address the first issue, we introduce an important assumption on the model dimensionality: the latent variables at the different layers have progressively decreasing dimension, $r_1,r_2,\ldots,r_h$, where $p > r_1 > r_2 > \ldots > r_h \geq 1$. As a consequence, the parameters at the different levels will inherit different dimensionality as well. This constraint also has the merit of avoiding over-parameterized models, especially when $p$ is high.
The second identifiability issue arises from the presence of latent variables, and it is similar in nature to the identifiability issue that affects factor models. In particular, given an orthogonal matrix $A$ of dimension $r \times r$, with $r<p$, the factor model $y = \eta + \Lambda z +u$, with $u \sim N(0,\Psi)$, and the
transformed factor model $y = \eta + (\Lambda A)(A^{\top} z)+ u$ are indistinguishable, since the rotated factors $A^\top z$ still have zero mean and identity covariance matrix. Thus there are
$r(r-1)/2$ fewer free parameters. This ambiguity can be avoided by imposing the constraint that $\Lambda^\top\Psi^{-1}\Lambda$ is diagonal with elements in decreasing order (see, for instance, \cite{Mardia}).
Moving along the same lines, in the DGMM, at each layer from 1 to $h-1$, we assume that the conditional distribution of the latent variables $f(\textbf{z}^{(l)}|\textbf{z}^{(l+1)};\boldsymbol\Theta)$ has zero mean and identity covariance matrix and the same diagonality constraint on the parameters at each level.
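The rotational indeterminacy is easy to verify numerically (a toy check; any orthogonal $A$, here obtained from a QR decomposition, works):

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 5, 3

Lam = rng.normal(size=(p, r))
Psi = np.diag(rng.uniform(0.1, 0.5, size=p))

# Any orthogonal A leaves the marginal covariance Lam Lam' + Psi unchanged
A, _ = np.linalg.qr(rng.normal(size=(r, r)))        # orthogonal factor
Lam_rot = Lam @ A                                   # rotated loadings

Sigma = Lam @ Lam.T + Psi
Sigma_rot = Lam_rot @ Lam_rot.T + Psi
print(np.allclose(Sigma, Sigma_rot))                # True
print(r * (r - 1) // 2)                             # 3 redundant parameters
```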
\section{Deep dimensionally reduced Gaussian mixture models}
Starting from the model (\ref{eqn1}), dimension reduction is obtained by considering layers that are sequentially described by latent variables with a progressively decreasing dimension, $r_1,r_2,\ldots,r_h$, where $p > r_1 > r_2 > \ldots > r_h \geq 1$. The dimension of the parameters in (\ref{eqn1}) changes accordingly.
Consider as an illustrative example a two-layer deep model ($h=2$). In this case, the dimensionally reduced DGMM consists of the system of equations:
\begin{eqnarray*}\label{eqn2}
&&(1) \ \ \ \textbf{y}_i=\eta_{s_1}^{(1)}+ \Lambda_{s_1}^{(1)} \textbf{z}_i^{(1)}+\textbf{u}_i^{(1)} \textrm{ with prob. } \pi_{s_1}^{(1)}, \ s_1=1,\ldots,k_1, \\
&& (2) \ \ \ \textbf{z}_i^{(1)}=\eta_{s_2}^{(2)}+ \Lambda_{s_2}^{(2)} \textbf{z}_i^{(2)}+\textbf{u}_i^{(2)} \textrm{ with prob. } \pi_{s_2}^{(2)}, \ s_2=1,\ldots,k_2,
\end{eqnarray*}
where $\textbf{z}_i^{(2)}\sim N(\textbf{0},\textbf{I}_{r_2})$, $\Lambda_{s_1}^{(1)}$ is a (factor loading) matrix of dimension $p \times r_1$, $\Lambda_{s_2}^{(2)}$ has dimension $r_1 \times r_2$, and ${\Psi}_{s_1}^{(1)}$ and ${\Psi}_{s_2}^{(2)}$ are square matrices of dimension $p \times p$ and $r_1 \times r_1$, respectively. The two latent variables have dimension $r_1$ and $r_2$, respectively, with $p> r_1 >r_2 \geq 1$.
The model generalizes and encompasses several model-based clustering methods. Gaussian mixtures are trivially obtained in the absence of any layer and dimension reduction.
Mixtures of factor analyzers \citep{MCLACHLAN2003} may be considered as a one-layer deep model, where $\Psi_{s_1}^{(1)}$ are diagonal and $\textbf{z}_i^{(1)}\sim N(\textbf{0},\textbf{I}_{r_1})$. When $h=2$ with $k_1=1$, ${\Psi}^{(1)}$ is diagonal, and $\Lambda_{s_2}^{(2)}=\{0\}$, the deep dimensionally reduced mixture coincides with mixtures of factor analyzers with common factor loadings \citep{Baek} and heteroscedastic factor mixture analysis \citep{Montanari2010}. The so-called mixtures of factor mixture analyzers introduced by \cite{Viroli2010} is a two-layer deep mixture with $k_1>1$, ${\Psi}_{s_1}^{(1)}$ diagonal and $\Lambda_{s_2}^{(2)}=\{0\}$.
Under the constraints that $h=2$ and that $\Psi_{s_1}^{(1)}$ and $\Psi_{s_2}^{(2)}$ are diagonal, the model is a deep mixture of factor analyzers \citep{Tang12}. In this work, the authors propose to learn one layer at a time. After estimating the parameters at each layer, samples from the posterior distributions for that layer are used as data for learning the next step in a greedy layer-wise learning algorithm. Despite its computational efficiency, this multi-stage estimation process suffers from the uncertainty in the sampling of the generated latent-variable values. A bias introduced at one layer will affect all the remaining ones, and the problem grows with $h$, with the number of components, and under unbalanced possible paths. In the next section we will present a unified estimation algorithm for learning all the model parameters simultaneously.
\section{Fitting Deep Gaussian Mixture Models}
Because of the hierarchical formulation of a deep mixture model, the EM algorithm represents the natural method for parameter estimation. The algorithm alternates between two steps, and it consists of maximizing (M-step) and calculating the conditional expectation (E-step) of the complete-data log-likelihood function given the observed data, evaluated at a given set of parameters, say $\boldsymbol\Theta'$:
\begin{eqnarray}\label{eq:condexp}
&&E_{\textbf{z}^{(1)},\ldots,\textbf{z}^{(h)},\textbf{s}|\textbf{y}; \boldsymbol\Theta'}\left[ \log L_c(\boldsymbol\Theta)\right].
\end{eqnarray}
This implies that we need to compute the posterior distributions of the latent variables given the data in the E-step of the algorithm. In contrast to the classical GMM, where this computation involves only the allocation latent variable $\textbf{s}$ for each mixture component, in a deep mixture model the derivation of bivariate (or multivariate) posteriors is required, thus making the estimation algorithm very slow and not applicable to large data.
To further clarify this, consider the expansion of the conditional expectation in (\ref{eq:condexp}) as sum of specific terms. For a model with $h=2$ layers, it takes the following form
\begin{eqnarray}\label{eq:condexp2}
&&E_{\textbf{z},s|\textbf{y}; \boldsymbol\Theta'}\left[ \log L_c(\boldsymbol\Theta)\right]=
\sum_{s \in \Omega}\int
f(\textbf{z}^{(1)},s|\textbf{y};\boldsymbol\Theta') \log
f(\textbf{y}|\textbf{z}^{(1)},s;\boldsymbol\Theta)d\textbf{z}^{(1)} \nonumber \\
&+& \sum_{s \in \Omega}\int \int f(\textbf{z}^{(1)},\textbf{z}^{(2)},s|\textbf{y};\boldsymbol\Theta') \log
f(\textbf{z}^{(1)}|\textbf{z}^{(2)}, s;\boldsymbol\Theta)d\textbf{z}^{(1)} d\textbf{z}^{(2)} \nonumber \\
&+& \int f(\textbf{z}^{(2)}|\textbf{y};\boldsymbol\Theta') \log
f(\textbf{z}^{(2)}) d\textbf{z}^{(2)} + \sum_{s \in \Omega} f(s|\textbf{y};\boldsymbol\Theta') \log f(s;\boldsymbol\Theta).
\end{eqnarray}
A proper way to overcome these computational difficulties is to adopt a stochastic version of the EM algorithm (SEM) \citep{celeux} or its Monte Carlo alternative (MCEM) \citep{wei}. The principle underlying the handling of the latent variables is to draw observations (SEM) or samples of observations (MCEM) from the conditional density of the latent variables given the observed data, in order to simplify the computation of the E-step.
The strategy adopted is to draw pseudorandom observations at each layer of the network through the conditional density $f(\textbf{z}^{(l)}|\textbf{z}^{(l-1)},s;\boldsymbol \Theta')$, from $l=1$ to $l=h$, by considering the variables at the upper level of the model as fixed at the current fit of the parameters, where at the first layer $\textbf{z}^{(0)}=\textbf{y}$.
The conditional density $f(\textbf{z}^{(l)}|\textbf{z}^{(l-1)},s;\boldsymbol \Theta')$ can be expressed as
\begin{eqnarray}\label{eqn:inversa}
f(\textbf{z}^{(l)}|\textbf{z}^{(l-1)},s;\boldsymbol \Theta')=\frac{f(\textbf{z}^{(l-1)}|\textbf{z}^{(l)},s;\boldsymbol \Theta')f(\textbf{z}^{(l)}|s)}{f(\textbf{z}^{(l-1)}|s;\boldsymbol \Theta')},
\end{eqnarray}
\noindent where the denominator does not depend on $\textbf{z}^{(l)}$ and acts as a normalization constant, and the two terms in the numerator, conditionally on $s$, are Gaussian distributed according to equations (\ref{eqn:marg}) and (\ref{cond}):
\begin{itemize}
\item[] $f(\textbf{z}^{(l-1)}|\textbf{z}^{(l)},s;\boldsymbol \Theta')=N(\eta_{s_l}^{(l)}+\Lambda_{s_l}^{(l)}\textbf{z}^{(l)},\Psi_{s_l}^{(l)})$,
\item[] $f(\textbf{z}^{(l)}|s;\boldsymbol \Theta')=N(\tilde{\boldsymbol\mu}_{s_l}^{(l+1)},\tilde{\boldsymbol\Sigma}_{s_l}^{(l+1)})$.
\end{itemize}
By substituting them in (\ref{eqn:inversa}), after some simple algebra, it is possible to show that
\begin{eqnarray}\label{eqn:inversa2}
f(\textbf{z}^{(l)}|\textbf{z}^{(l-1)},s)=N\left(\boldsymbol\rho_{s_l}(\textbf{z}^{(l-1)}),\boldsymbol\xi_{s_l}\right),
\end{eqnarray}
where
$$\boldsymbol\rho_{s_l}(\textbf{z}^{(l-1)})=\boldsymbol\xi_{s_l}\left(\left(\Lambda_{s_l}^{(l)}\right)^\top\left(\Psi_{s_l}^{(l)}\right)^{-1}(\textbf{z}^{(l-1)}-\boldsymbol\eta_{s_l}^{(l)})+\left(\tilde{\boldsymbol\Sigma}_{s_l}^{(l+1)}\right)^{-1}\tilde{\boldsymbol\mu}_{s_l}^{(l+1)}\right)$$
and $$\boldsymbol\xi_{s_l}=\left(\left(\tilde{\boldsymbol\Sigma}_{s_l}^{(l+1)}\right)^{-1}+\left(\Lambda_{s_l}^{(l)}\right)^\top\left(\Psi_{s_l}^{(l)}\right)^{-1}\Lambda_{s_l}^{(l)}\right)^{-1}.$$
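These are the standard Gaussian posterior updates for a linear model; as a sanity check, the sketch below (toy values) verifies the precision-form expressions above against direct conditioning of the joint Gaussian of $(\textbf{z}^{(l-1)},\textbf{z}^{(l)})$ given $s$:

```python
import numpy as np

rng = np.random.default_rng(4)
r_prev, r_cur = 4, 2          # dimensions of z^(l-1) and z^(l)

eta = rng.normal(size=r_prev)
Lam = rng.normal(size=(r_prev, r_cur))
Psi = np.diag(rng.uniform(0.2, 0.6, size=r_prev))
m_t = rng.normal(size=r_cur)                 # marginal mean of z^(l) given s
S_t = 0.8 * np.eye(r_cur)                    # marginal covariance of z^(l)
x = rng.normal(size=r_prev)                  # an observed value of z^(l-1)

# Precision-form expressions from the text
Psi_inv, S_inv = np.linalg.inv(Psi), np.linalg.inv(S_t)
xi = np.linalg.inv(S_inv + Lam.T @ Psi_inv @ Lam)
rho = xi @ (Lam.T @ Psi_inv @ (x - eta) + S_inv @ m_t)

# Direct conditioning of the joint Gaussian (covariance form)
C = Lam @ S_t @ Lam.T + Psi                  # Cov(z^(l-1) | s)
K = S_t @ Lam.T @ np.linalg.inv(C)           # gain matrix
rho_direct = m_t + K @ (x - eta - Lam @ m_t)
xi_direct = S_t - K @ Lam @ S_t

print(np.allclose(rho, rho_direct), np.allclose(xi, xi_direct))  # True True
```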
This is the core of the stochastic perturbation of the EM algorithm. Due to the sequential hierarchical structure of the random variable generation, the E and M steps of the algorithm can be computed for each layer. Considering the sample of $n$ observations, at the layer $l=1,\ldots,h$, we maximize
\begin{eqnarray}\label{eq:M1}
&& E_{\textbf{z}^{(l)},\textbf{s}|\textbf{z}^{(l-1)};\boldsymbol\Theta'}\left[\sum_{i=1}^n \log f(\textbf{z}^{(l-1)}_i|\textbf{z}_i^{(l)},s;\boldsymbol\Theta)\right] \nonumber
\\ &=& \sum_{i=1}^n \sum_{s}\int f(\textbf{z}_i^{(l)},s|\textbf{z}^{(l-1)}_i;\boldsymbol\Theta')\log f(\textbf{z}^{(l-1)}_i|\textbf{z}_i^{(l)},s;\boldsymbol\Theta)
d\textbf{z}_i^{(l)}
\end{eqnarray}
with respect to $\Lambda_{s_l}^{(l)}$, $\Psi_{s_l}^{(l)}$, and $\eta_{s_l}^{(l)}$. By considering
$f(\textbf{z}^{(l-1)}|\textbf{z}^{(l)},s)=N(\eta_{s_l}^{(l)}+\Lambda_{s_l}^{(l)}\textbf{z}^{(l)},\Psi_{s_l}^{(l)})$, we can compute the score of (\ref{eq:M1}) to derive the estimates for the new parameters given the provisional ones. Therefore, the complete stochastic EM algorithm can be schematized as follows. For $l=1,\dots,h$:
\bigskip
\noindent\rule[0.5ex]{\linewidth}{0.7pt}
{\footnotesize
\begin{description}
\item[-] S-STEP ($\textbf{z}_i^{(l-1)}$ is known) \\
Generate $M$ replicates $\textbf{z}_{i,m}^{(l)}$ from $f(\textbf{z}_i^{(l)}|\textbf{z}_i^{(l-1)},s;\boldsymbol\Theta')$.
\item[-] E-STEP - Approximate: \\
$$E[\textbf{z}_i^{(l)}|\textbf{z}_i^{(l-1)},s;\boldsymbol\Theta'] \cong \frac{\sum_{m=1}^M \textbf{z}_{i,m}^{(l)}}{M}$$ and $$E[\textbf{z}_i^{(l)}\textbf{z}_i^{(l)\top}|\textbf{z}_i^{(l-1)},s;\boldsymbol\Theta'] \cong \frac{\sum_{m=1}^M \textbf{z}_{i,m}^{(l)}\textbf{z}_{i,m}^{(l)\top}}{M}.$$
\item[-] M-STEP - Compute:\\
\begin{eqnarray*}
\hat{\Lambda}_{s_l}^{(l)}&=&\frac{\sum_{i=1}^np(s|\textbf{z}_i^{(l-1)})(\textbf{z}_i^{(l-1)}-\eta_{s_l}^{(l)})E[\textbf{z}_i^{(l)\top}|\textbf{z}_i^{(l-1)},s]
E[\textbf{z}_i^{(l)}\textbf{z}_i^{(l)\top}|\textbf{z}_i^{(l-1)},s]^{-1}}
{\sum_{i=1}^np(s|\textbf{z}_i^{(l-1)})},\\
\hat{\Psi}_{s_l}^{(l)}&=&\frac{\sum_{i=1}^np(s|\textbf{z}_i^{(l-1)})\left[(\textbf{z}_i^{(l-1)}-\eta_{s_l}^{(l)})(\textbf{z}_i^{(l-1)}-\eta_{s_l}^{(l)})^\top-(\textbf{z}^{(l-1)}_i-\eta_{s_l}^{(l)})
E[\textbf{z}_i^{(l)\top}|\textbf{z}_i^{(l-1)},s]\hat{\Lambda}_{s_l}^{(l)\top}\right]} {\sum_{i=1}^np(s|\textbf{z}_i^{(l-1)})},\\
\hat{\eta}_{s_l}^{(l)}&=&\frac{\sum_{i=1}^n p(s|\textbf{z}_i^{(l-1)})\left[\textbf{z}_i^{(l-1)}-\hat{\Lambda}_{s_l}^{(l)}
E[\textbf{z}_i^{(l)}|\textbf{z}_i^{(l-1)},s]\right]} {\sum_{i=1}^np(s|\textbf{z}_i^{(l-1)})}, \\
\hat\pi_{s_l}^{(l)}&=&\frac{1}{n}\sum_{i=1}^n f\left(s_l|\textbf{y}_i\right),
\end{eqnarray*}
\end{description}}
\noindent where $f\left(s_l|\textbf{y}_i\right)$ is the posterior probability of the allocation variable given the observed data that can be computed via Bayes' formula.
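The Monte Carlo approximation used in the E-step can be illustrated on a single conditional draw (toy values; with $M$ large the sample averages approach $\boldsymbol\rho$ and $\boldsymbol\xi+\boldsymbol\rho\boldsymbol\rho^\top$):

```python
import numpy as np

rng = np.random.default_rng(5)
rho = np.array([1.0, -0.5])                  # conditional mean
xi = np.array([[0.5, 0.1],
               [0.1, 0.3]])                  # conditional covariance
M = 200_000

# S-step: M replicates from f(z^(l) | z^(l-1), s) = N(rho, xi)
z = rng.multivariate_normal(rho, xi, size=M)

# E-step approximations
Ez = z.mean(axis=0)                                  # ~ rho
Ezz = (z[:, :, None] * z[:, None, :]).mean(axis=0)   # ~ xi + rho rho'

print(np.max(np.abs(Ez - rho)))                      # small
print(np.max(np.abs(Ezz - (xi + np.outer(rho, rho)))))
```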
\section{Simulated and Real Application}
\subsection{Smiley Data}
In this simulation experiment we have generated $n=1000$ observations from four classes in 3-dimensional space. The first two variables are relevant for clustering and have been generated by using the R package \verb"mlbench". They are structured into two Gaussian eyes, a triangular nose and a parabolic mouth, as shown in Figure \ref{fig2}. We have taken the standard deviation for eyes and mouth equal to 0.45 and 0.35, respectively. The third variable is a noise variable, independently generated from a Gaussian distribution with standard deviation 0.5.
\begin{figure}
\centering
\includegraphics[scale=0.7]{smile.eps}
\caption{Smiley Data}\label{fig2}
\end{figure}
Data have been independently generated 100 times. On each replicate, we applied a DGMM with two layers with $r_1=2$, $r_2=1$, $k_1=4$, and $k_2$ ranging from 1 to 5. We fitted the models 10 times in a multistart procedure and selected the best fit according to BIC.
We compared the DGMM results with several clustering methods by fixing the number of groups equal to the true $k=4$ for all strategies. We fitted a Gaussian Mixture Model (GMM) by using the \verb"R" package \verb"Mclust" \citep{mclust1}, skew-normal and skew-t Mixture Models (SNmm and STmm) by using the \verb"R" package \verb"EMMIXskew" \citep{wang}, $k$-means, Partition around Medoids (PAM), and by Ward's method (Hclust) implemented hierarchically. Clustering performance is measured by the Adjusted Rand Index (ARI) and the misclassification rate. The average of the two indicators across the 100 replicates together with their standard errors are reported in Table \ref{tab1}.
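The Adjusted Rand Index used throughout can be computed directly from the contingency table of the two partitions. A minimal self-contained sketch (the function name is ours; standard implementations exist in several packages):

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI from the contingency table of two partitions of the same n units."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = labels_true.size
    _, ti = np.unique(labels_true, return_inverse=True)
    _, pi = np.unique(labels_pred, return_inverse=True)
    cont = np.zeros((ti.max() + 1, pi.max() + 1), dtype=int)
    for i, j in zip(ti, pi):
        cont[i, j] += 1
    sum_ij = sum(comb(int(c), 2) for c in cont.ravel())
    sum_a = sum(comb(int(a), 2) for a in cont.sum(axis=1))  # row margins
    sum_b = sum(comb(int(b), 2) for b in cont.sum(axis=0))  # column margins
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

The index equals 1 for identical partitions (up to label permutation) and has expected value 0 under random labelling, which is what makes it comparable across methods in Table \ref{tab1}.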
\begin{table}[t]
\caption{Results on Smiley datasets: average of Adjusted Rand Index and misclassification rates across the 100 replicates. Standard errors are reported in brackets.\label{tab1}}
\begin{center}
\begin{tabular}{|l|cc|}
\hline
Method & ARI& m.r. \\
\hline
k-means & 0.661 \ \ (0.003) & \ \ 0.134 \ \ (0.001)\\
PAM & 0.667 \ \ (0.004) & \ \ 0.132 \ \ (0.001)\\
Hclust & 0.672 \ \ (0.013) & \ \ 0.141 \ \ (0.006)\\
GMM & 0.653 \ \ (0.008) & \ \ 0.178 \ \ (0.006)\\
SNmm & 0.535 \ \ (0.006) & \ \ 0.251 \ \ (0.006)\\
STmm & 0.566 \ \ (0.006) & \ \ 0.236 \ \ (0.004)\\
DGMM & 0.788 \ \ (0.005) & \ \ 0.087 \ \ (0.002)\\
\hline
\end{tabular}
\end{center}
\end{table}
Figure \ref{fig3} shows the box plots of the Adjusted Rand Indices and misclassification rates (m.r.'s) across the 100 replicates.
The results indicate that DGMM achieves the best classification performance compared to the other methods.
\begin{figure}
\centering
\includegraphics[scale=0.6]{smi1.eps}\\
\includegraphics[scale=0.6]{smi2.eps}
\caption{Smiley Data: Box plots of the Adjusted Rand Indices and Misclassification rates across the 100 replicates.}\label{fig3}
\end{figure}
\subsection{Real Data}
In this section we shall apply the deep mixture model to some benchmark data used by the clustering and classification community.
We shall consider:
\begin{itemize}
\item \emph{Wine Data}: this dataset comes from a study \citep{forina} on 27 chemical and physical properties of three types of wine
from the Piedmont region of Italy: Barolo (59), Grignolino (71), and Barbera (48). The clusters are well separated and most clustering methods give high clustering performance on this data.
\item \emph{Olive Data}: The dataset contains the percentage composition of eight fatty acids found in the lipid fraction of 572 Italian olive oils \citep{forina2}. The data come from three regions: Southern Italy (323), Sardinia (98), and Northern Italy (151) and the aim is to distinguish between them. Also in this case, the clustering is not a very difficult task, even if the clusters are not balanced.
\item \emph{Ecoli Data}: data consist of $n=336$ proteins classified into their various cellular localization sites based on
their amino acid sequences. There are $p=7$ variables and $k=8$ highly unbalanced groups that make the clustering task rather difficult: cp cytoplasm (143), inner membrane
without signal sequence (77), periplasm (52), inner membrane,
uncleavable signal sequence (35), outer membrane (20), outer
membrane lipoprotein (5), inner membrane lipoprotein (2), inner
membrane, cleavable signal sequence (2). These data are available from the UCI machine learning repository.
\item \emph{Vehicle Data}: the dataset contains $k=4$ types of vehicles: a double decker bus (218), Chevrolet van (199), Saab 9000 (217) and an Opel Manta 400 (212). The aim is to cluster them on the basis of their silhouettes, as seen from many different angles, for a total of $p=18$ variables. This is a difficult classification task. In particular, the bus, the van and the cars are distinguishable, but it is very difficult to distinguish between the two cars. The data are taken from the R library \verb"mlbench".
\item \emph{Satellite Data}: the data derive from multi-spectral scanner images purchased from NASA by the
Australian Centre for Remote Sensing. They consist of 4 digital images of the same scene in different spectral bands
structured into $3 \times 3$ square neighborhood of pixels. Therefore, there are $p=36$ variables. The number of images is $n=6435$ coming from $k=6$ groups of images: red soil (1533), cotton crop (703), grey
soil (1358), damp grey soil (626), soil with vegetation stubble (707) and very damp grey soil (1508). This is notoriously a difficult clustering task not only because there are 6 unbalanced classes, but also because classical methods may suffer from the dimensionality $p=36$. The data are available from the UCI machine learning repository.
\end{itemize}
On these data we compared the DGMM model with Gaussian Mixture Models (GMM), skew-normal and skew-t Mixture Models (SNmm and STmm), $k$-means and the Partition Around Medoids (PAM), hierarchical clustering with Ward distance (Hclust), Factor Mixture Analysis (FMA), and Mixture of Factor Analyzers (MFA). For all methods, we assumed the number of groups to be known. This assumption is made in order to compare the respective clustering performances. Note that in the case of an unknown number of groups, model selection for the DGMM can be done similarly to all the other mixture based approaches by using information criteria. Therefore, we considered the DGMM with $h=2$ and $h=3$ layers, a number of subcomponents in the hidden layers ranging from 1 to 5 (while $k_1=k^*$) and all possible models with different dimensionality for the latent variables under the constraint $p>r_1> ... >r_h\geq 1$. Moreover, we considered 10 different starting points for all possible models.
For the GMM we considered all the possible submodels according to the family based on the covariance decomposition implemented in \verb"mclust". Finally, we fitted FMA and MFA by using the R package \verb"MFMA" available from the first author's webpage with different starting points and different number of latent variables ranging from 1 to the maximum admissible number.
In all cases we selected the best model according to BIC.
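Enumerating the admissible latent dimensionalities under the constraint $p>r_1>\dots>r_h\geq 1$ is a small combinatorial task; a sketch (the function name is ours):

```python
def admissible_dims(p, h):
    """All sequences (r_1, ..., r_h) with p > r_1 > ... > r_h >= 1."""
    def rec(prev, depth):
        if depth == 0:
            yield ()
            return
        # r must leave room for depth-1 strictly smaller positive integers
        for r in range(depth, prev):
            for rest in rec(r, depth - 1):
                yield (r,) + rest
    return list(rec(p, h))
```

For instance, with $p=4$ and $h=2$ the admissible pairs $(r_1,r_2)$ are $(2,1)$, $(3,1)$ and $(3,2)$; each candidate sequence is then fitted and scored by BIC as described above.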
\begin{table}[t]
\small
\caption{Results on Real Data: Adjusted Rand Index (ARI) and misclassification rates (m.r.). \label{tab2}}
\begin{center}
\begin{tabular}{|l|cc|cc|cc|cc|cc|}
\hline
\emph{Dataset} & \multicolumn{2}{|c|}{\emph{Wine}} & \multicolumn{2}{|c|}{\emph{Olive}} & \multicolumn{2}{|c|}{\emph{Ecoli}} & \multicolumn{2}{|c|}{\emph{Vehicle}} & \multicolumn{2}{|c|}{\emph{Satellite}}\\
& ARI & m.r. & ARI & m.r. & ARI & m.r. & ARI & m.r. & ARI & m.r. \\
\hline
$k$-means & 0.930 & 0.022 & 0.448 & 0.234 & 0.548 & 0.298 & 0.071 & 0.629 & 0.529 & 0.277 \\
PAM & 0.863 & 0.045 & 0.725 & 0.107 & 0.507 & 0.330 & 0.073 & 0.619 & 0.531 & 0.292 \\
Hclust & 0.865 & 0.045 & 0.493 & 0.215 & 0.518 & 0.330 & 0.092 & 0.623 & 0.446 & 0.337 \\
GMM & 0.917 & 0.028 & 0.535 & 0.195 & 0.395 & 0.414 & 0.089 & 0.621 & 0.461 & 0.374 \\
SNmm & 0.964 & 0.011 & 0.816 & 0.168 & - & - & 0.125 & 0.566 & 0.440 & 0.390 \\
STmm & 0.085 & 0.511 & 0.811 & 0.171 & - & - & 0.171 & 0.587 & 0.463 & 0.390 \\
FMA & 0.361 & 0.303 & 0.706 & 0.213 & 0.222 & 0.586 & 0.093 & 0.595 & 0.367 & 0.426 \\
MFA & 0.983 & 0.006 & 0.914 & 0.052 & 0.525 & 0.330 & 0.090 & 0.626 & 0.589 & 0.243 \\
DGMM & 0.983 & 0.006 & 0.997 & 0.002 & 0.749 & 0.187 & 0.191 & 0.481 & 0.604 & 0.249 \\
\hline
\end{tabular}
\end{center}
\end{table}
For the smaller datasets (\emph{Wine}, \emph{Olive}, \emph{Ecoli}, \emph{Vehicle}) the best DGMM suggested by BIC was the model with $h=2$ layers, while $h=3$ layers were suggested for the \emph{Satellite} data.
The \emph{Wine} data are quite simple to classify. Most methods performed quite well. The best DGMM model was obtained with $r_1=3, \ r_2=2$ and $k_1=3, \ k_2=1$.
The \emph{Olive} data are not very well distinguished by classical methods such as $k$-means and hierarchical clustering, while model-based clustering strategies produce better performance.
Here the deep mixture with $r_1=5, \ r_2=1$ and $k_1=3, \ k_2=1$, as suggested by BIC, gives excellent results with only 1 misclassified unit.
The challenging aspect of a cluster analysis on the \emph{Ecoli} data is the high number of (unbalanced) classes. On these data SNmm and STmm did not reach convergence, since they were unable to handle the presence of two variables that each take only two distinct values. Also in this case the best clustering performance is given by the deep mixture with $r_1=2, \ r_2=1$ and $k_1=8, \ k_2=1$.
Deep mixtures also performed better than the other methods on the difficult task of distinguishing between silhouettes of \emph{vehicles}, with progressive dimension reduction $r_1=7, \ r_2=1$ and components $k_1=4, \ k_2=3$.
Finally, for the \emph{Satellite} data a DGMM with $h=3$ layers, $r_1=13, \ r_2=2, \ r_3=1$ and $k_1=6, \ k_2=2, \ k_3=1$ is preferred in terms of BIC. Results here are comparable with those of MFA with 4 factors: the DGMM has a slightly higher ARI but fewer correctly classified units in total.
\section{Final remarks}
In this work a deep Gaussian mixture model (DGMM) for unsupervised classification has been investigated. The model is a very general framework that encompasses classical mixtures, mixtures of mixtures models, and mixtures of factor analyzers as particular cases. Since the DGMM generalizes classical model-based clustering strategies, it is guaranteed to perform at least as well as these methods. The greater flexibility of the DGMM comes with a higher complexity; for this reason it is particularly suitable for data with large sample sizes.
We illustrated the model on simulated and real data. In the experimental studies we conducted, the method worked efficiently and gave good clustering performance with $h=2$ and $h=3$ layers, where model choice can be carried out according to information criteria.
\bibliographystyle{Chicago}
\section{Introduction and main result}
In this note we study the effective behavior at low energy of a non relativistic quantum particle in $\mathbb{R}^3$ interacting with a system of $N$ randomly distributed obstacles in the limit $N \rightarrow \infty$. In order to formulate such a Lorentz gas model, we introduce the set $ Y_N=\{y_1,\dots y_N\}$ of random variables in $\mathbb{R}^3$, independently chosen according to a common distribution with density $W$.
We assume that the interaction among the particle and the $i$-th obstacle is described by the Gross-Pitaevskii potential
\[
V^N_i(x) = N^2 V(N(x-y_i))\,,
\]
where the unscaled potential $V$ decays to zero at infinity sufficiently fast. Therefore, the Hamiltonian of the particle is
\begin{equation}\label{eq:H_n}
H_N=-\Delta+\sum_{i=1}^N V_i^N(x)\,,
\end{equation}
where we have chosen units such that $\hbar=1$ and the mass is $1/2$. The assumptions on $V$ will guarantee that $H_N$ is a selfadjoint operator in
$L^2(\mathbb{R}^3)$. The aim of this paper is to characterize the limit behavior of $H_N$ and the fluctuations around the limit.
\noindent
We note that for $N$ large the range $r_0$ of the potential $V_i^N$ is of order $N^{-1}$ while the average distance $d$ among the obstacles is of order $N^{-1/3}$. If the wavelength of the particle $\lambda_p$ is taken of order $1$, we are studying the regime
\[
r_0 \ll d \ll \lambda_p \,,
\]
which is the case occurring, for example, in the analysis of scattering of slow neutrons from condensed matter (Neutron Optics). We reasonably expect that, for $N \rightarrow \infty$, the particle \say{sees} an effective potential depending on the density of obstacles. Moreover, one could be tempted to consider essentially correct the formal manipulation
\[
\sum_{i=1}^N V_i^N (x) \;\sim \; \frac{1}{N} \sum_{i=1}^N N^3 V(N(x-y_i))\;\sim\; b \, \frac{1}{N} \sum_{i=1}^N \delta(x-y_i)\,,\;\;\;\;\;\;\;\; b=\int\!\!dx\, V(x)\,
\]
and to obtain $b W$ as effective potential. Indeed, this is not the case and we shall see that the effective potential is the density of scattering length of the system of obstacles $4 \pi aW$, where $a$ is the scattering length associated to the potential $V$ (see definition below).
The situation is completely analogous to the more difficult case of a gas of $n$ particles interacting via two-body potentials scaling as $n^2 V(n x)$ for $n \rightarrow \infty$ as investigated in~\cite{ESY0, ESY2, ESY3, P, BDS, BS}. In particular, we refer to \cite[Sect. 5]{BPS} for a discussion on the emergence of the scattering length in that context.
Let us introduce the definition of scattering length. Given the solution $\phi_0$ of the zero energy scattering problem
\begin{equation}\label{eq:phi0}
\left\{
\begin{aligned}
(-\Delta+V)\phi_0=0\\
\lim_{|x|\to+\infty}\phi_0(x)=1 \,,
\end{aligned}
\right.
\end{equation}
the scattering length $a$ associated to the potential $V$ is defined by
\[
a=\frac{1}{4\pi}\int dx\,V(x)\phi_0(x).
\]
It is well known that a condition for the existence of a finite scattering length is the fact that zero is not an eigenvalue nor a resonance for $-\Delta +V$. As for the physical meaning, we recall that $a$ represents the effective linear dimension of the scatterer at low energy. It is also easy to check by scaling that the scattering length associated to the rescaled potential $V^N_i$ is $\displaystyle{a_i^N=a/N}$.
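The scaling claim can be checked directly by a change of variables: the zero-energy scattering solution for the rescaled potential $V_i^N$ is $\phi_0(N(x-y_i))$, so that
\[
a_i^N=\frac{1}{4\pi}\int\! dx\, V_i^N(x)\,\phi_0(N(x-y_i))
=\frac{N^2}{4\pi}\int\! dx\, V(N(x-y_i))\,\phi_0(N(x-y_i))
=\frac{1}{4\pi N}\int\! dz\, V(z)\,\phi_0(z)=\frac{a}{N}\,.
\]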
\noindent
In this paper we give the proof of the convergence in the strong resolvent sense of $H_N$ to the limiting Hamiltonian
\[
H=-\Delta+ 4\pi a W\,,
\]
where the convergence is in probability with respect to the distribution of the obstacles. Denoting by $\|\cdot\|_p$ the norm in $L^p(\mathbb{R}^3)$, $1\leq p\leq \infty$, we give below the precise formulation of our main theorem.
\begin{theorem}\label{MainTheorem}
Let $V \in L^1(\mathbb{R}^3, (1 + |x|^4) \textrm{d} x)\cap L^3(\mathbb{R}^3)$ such that zero is not an eigenvalue nor a resonance for $-\Delta +V$ and let $a \in \mathbb{R}$ be the corresponding scattering length. Moreover, let $W \in L^1(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$, for some $p>3$, $f \in L^2( \mathbb{R}^{3})$ and take $\l>0$ large enough.
Then for any $\e>0$ and $\b<1/2$ we have
\[
\lim_{N\to +\infty} P_N\big(\,\{Y_{N}: N^{\beta} \| (H_{N}+\lambda)^{-1}f-(H+\lambda)^{-1}f\|_2>\epsilon\} \,\big)=0 \,,
\]
where $P_N$ is the product probability measure $\{W(x) \textrm{d} x \}^{\otimes N}$ on the set of configurations of points $Y_N$.
\end{theorem}
\begin{remark} Theorem \ref{MainTheorem} implies the convergence in probability as $N \to \infty$ of the unitary group $e^{-i t H_N}$, associated to the $N$ dependent Hamiltonian \eqref{eq:H_n}, to the $N$ independent unitary group $e^{-i t H}$, for any time $t>0$.
\end{remark}
\noindent
As an immediate consequence of Theorem \ref{MainTheorem} and of previous results \cite{FOT, FHT} we can also characterize the fluctuations around the limit operator, as expressed in the following theorem.
\begin{theorem}\label{th:fluctuations}
Under the same assumptions of Theorem \ref{MainTheorem}, for any $f,g \in L^2(\mathbb{R}^3)$ the random variable
\[
\eta^\l_{f,g}(Y_N) := \sqrt N \,\Big( g, \big( (H_N +\l)^{-1} - (H + \l)^{-1}\big) f \Big)
\]
converges in distribution for $N \to \infty$ to a Gaussian random variable $\bar \eta^\l_{f,g}$ of zero mean and covariance
\[ \begin{split}
& E\Big( (\bar \eta^\l_{f,g})^2 \Big) \\
& = (4 \pi a)^2 \| (H +\l)^{-1} g\, (H+\l)^{-1}f\|^2_{L^2_W} - 4 \pi a \big( (H+\l)^{-1} g, (H+\l)^{-1} f \big)^2_{L^2_W}
\end{split}\]
where $E(\cdot)$ means expectation with respect to the probability measure $P_N$ and $L^2_W = L^2(\mathbb{R}^{3}, W(x) \textrm{d} x )$.
\end{theorem}
\noindent
Let us briefly comment on the above results. We find that the asymptotic behavior of our Lorentz gas is completely characterized by the density of the obstacles and by their scattering length. In particular, this means that the dependence of the limit Hamiltonian on the interaction potentials $V_i^N$ is only through the associated scattering length $a/N$, i.e., a single physical parameter describing the effect of the obstacle as a scatterer at low energy. In this sense, in our scaling and for $N$ large, the Lorentz gas exhibits a universal behavior.
As we already mentioned, in the many-body context the same type of universal behaviour of the interaction arises in the effective description of the dynamics of $n$ bosons interacting through Gross-Pitaevskii potentials and undergoing Bose-Einstein condensation. More precisely, under the assumption that at time zero the system exhibits Bose-Einstein condensation into the one-particle wave function $\varphi \in L^2(\mathbb{R}^3)$, one expects condensation to be preserved at any time in the limit $n \to \infty$ and the condensed wave function to evolve according to the Gross-Pitaevskii equation $i \partial_t \varphi_t = -\Delta \varphi_t + 4 \pi a |\varphi_t|^2 \varphi_t$, with initial condition $\varphi_0=\varphi$. This fact has been well established mathematically for non negative potentials~(see \cite{ESY0, ESY2, ESY3, P, BDS, BS}) and shows that at the level of the evolution of the condensate wave function and in the limit $n \to \infty $ the interaction enters only through its scattering length.
Indeed, our Lorentz gas can be considered as a simplified model obtained from the more general case of a test particle interacting with other $N$ particles when the masses of these particles are infinite. Nevertheless, we believe that our analysis could have some interest and could give some hints for the general case. The reason is that, because of its simpler structure, our Lorentz gas allows a more detailed analysis. In particular, we obtain the convergence result without any assumption on the sign of the interaction potential $V$ and we can characterize the fluctuations in a relatively simple and explicit way.
\section{Line of the proof}\label{line}
In this Section we describe the method of the proof and collect some preliminary results and notation useful in the sequel.
\noindent
Let us start with some notation. Given $ \underline{\phi}_N = \{ \phi_1, \ldots, \phi_N\} \in \oplus_{i=1}^N L^2(\mathbb{R}^3)$, we define
\[
\|\underline{\phi}_N\|^2 = \sum_{i=1}^N \| \phi_i \|^2_2 \,.
\]
Moreover, for $ \vec{X}_N =\{X_1, \ldots, X_N\} \in \mathbb{R}^N$ we set
\[
\| \vec{X}_N\|^2 = \sum_{i=1}^N X_i^2\,.
\]
\noindent
It is useful to write the interaction potential as $V(x)=u(x)v(x)$, where
\[
u(x)=|V(x)|^{1/2} \,, \;\;\;\;\;\;\;\;\;\;\;
v(x)=|V(x)|^{1/2}\sgn(V(x))\,.
\]
Using the above factorization, we rewrite the scattering length associated to the potential $V$ as
\begin{equation}\label{scalen2}
a= \frac{1}{4\pi} (u,\mu)\,,
\end{equation}
where $\mu$ solves
\begin{equation}\label{eq:mu}
\mu+v\mathcal{G}^0u\mu=v
\end{equation}
and $\mathcal{G}^0$ is the operator with integral kernel $\mathcal{G}^0(x)=(4\pi |x|)^{-1}.$
Indeed, under the assumption that zero is neither an eigenvalue nor a resonance for $-\Delta+V$, the equation \eqref{eq:mu} has a unique solution in $L^2(\mathbb{R}^3)$. Then, one can check that the function $\phi_0 := 1-\mathcal{G}^0 u \mu$ solves problem \eqref{eq:phi0}
\[
(-\Delta +V)(1-\mathcal{G}^0 u\mu)= -u\mu +V -V\mathcal{G}^0u\mu= -u\,(\mu + v \mathcal{G}^0 u \mu - v )=0\,.
\]
Moreover
\[
4 \pi a = \int \!\!dx\, V \phi_0 = \int\!
\! dx\, u(v-v\mathcal{G}^0u \mu)= \int \!\! dx\, u \mu \,,
\]
so that \eqref{scalen2} is verified.
\noindent
Analogously, for the rescaled potentials we set $V_i^N(x)=u_i^N(x)v_i^N(x)$, where
\begin{equation*}
\begin{aligned}
u_i^N(x)&=|V_i^N(x)|^{1/2}=Nu(N(x-y_i))\,, \\
v_i^N(x)&=|V_i^N(x)|^{1/2}\sgn(V_i^N(x))=Nv(N(x-y_i))
\end{aligned}
\end{equation*}
and for the scattering length we have
\begin{equation*}
a_i^N= \frac{1}{4 \pi} (u_i^N,\mu_i^N)=a /N \,,
\end{equation*}
where
\begin{equation}\label{luscan}
\mu_i^N+v_i^N \mathcal{G}^0 u_i^N \mu_i^N=v_i^N\,.
\end{equation}
\vskip 0.3cm
\noindent
Let us discuss the line of the proof. We first observe that
the proof of Theorem \ref{MainTheorem} is non-probabilistic. In fact, we prove the convergence for a fixed configuration of obstacles $Y_N=\{y_1,\dots y_N\}$ satisfying the following regularity assumptions
\begin{description}
\item[(Y1)\label{ass:Y1}] Let $\nu^*(p)= \frac 1 3 \frac{p-3}{p-1} \in (0,1/3)$. For any $0<\nu < \nu^*(p)$ there exists $C$ such that
\[
\min_{i\neq j}|y_i-y_j|\geq \frac{C}{N^{1-\nu}}\,.
\]
\item[(Y2)\label{ass:Y2}] For any $N>0$ and any $0<\xi\leq 1$ we have
\[
\frac{1}{N^2}\sum_{\substack{i,j=1\\i\neq j}}^N\frac{1}{|y_i-y_j|^{3-\xi}}\leq C_\xi<\infty \,.
\]
\end{description}
The convergence in probability then follows once we show that \ref{ass:Y1} and \ref{ass:Y2} hold with probability increasing to one in the limit $N \to \infty$. More precisely the following lemma holds.
\begin{lemma}\label{lm:Y} Let $Y_N=\{y_1,\dots y_N\}$ be a configuration of $N$ identically distributed random variables in $\mathbb{R}^3$, whose distribution has density $W \in L^1(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$, for some $p>3$. Then, the set of configurations on which \ref{ass:Y1} and \ref{ass:Y2} hold has probability increasing to one as $N$ goes to infinity.
\end{lemma}
\begin{proof} The standard proof of the Lemma~\cite{O, PV} can be easily adapted to the situation where $W \in L^1(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$ with $p>3$.
We start by analysing assumption \ref{ass:Y1}. Let $Z_N =\{ Y_N | \min_{i\neq j}|y_i - y_j| \geq \frac C {N^{1-\nu}}\}$ be the set of configurations of $N$ obstacles for which \ref{ass:Y1} holds. We show that in the limit $N \to \infty$ the probability of the complement of $Z_N$ goes to zero. We have
\begin{equation} \begin{split} \label{PGamma}
P_N(Z_N^c) =\; & P_N \big( \{ Y_N |\, \exists\, i,j;i\neq j : |y_i - y_j| < C N^{-(1-\nu)} \} \big) \\
\leq & \; \frac{N(N-1)}{2} \int_{|x| <C N^{-(1-\nu)}} \textrm{d} x\, W(x)\,.
\end{split}\end{equation}
To bound the last integral we use H\"older inequality. For $1/p^\prime+1/p =1$, we have
\[ \begin{split}
\int_{|x| < CN^{\nu-1}} \hskip -1cm \textrm{d} x\, W(x) & \leq \left(\int_{|x| < CN^{\nu-1}} \textrm{d} x\, \right)^{1/{p^\prime}} \!\!\left(\int_{|x| < CN^{\nu-1}} \textrm{d} x |W(x)|^p\, \right)^{1/{p}} \\[6pt]
& \leq CN^{3(\nu-1)/p^\prime} \|W\|_p\,.
\end{split}\]
Hence the r.h.s. of \eqref{PGamma} goes to zero as $N$ goes to infinity for any $\nu < \frac 1 3 \frac{p-3}{p-1}$. Note that the requirement $W \in L^p$ with $p>3$ assures that $\nu>0$. \\[-9pt]
To show that also \ref{ass:Y2} holds with probability increasing to one as $N \to \infty$ it is sufficient to note that the $M=N(N-1)$ random variables $|y_i - y_j|$, with $i,j=1, \ldots, N$ and $i \neq j$, are exchangeable and we can reorder them as $\{ X_1, \ldots, X_M\}$ (e.g. using a diagonal progression as in the Cantor pairing function). Standard results~\cite{K} ensure that under the assumption
\begin{equation} \label{EX1}
E( X_{1}^{-3+\xi}) = \int \textrm{d} x \textrm{d} y \frac{W(x) W(y)}{|x-y|^{3-\xi}} \leq C\,,
\end{equation}
we have
\[
\lim_{M \to \infty} \frac 1 M \sum_{k=1}^M \frac 1 {X_k^{3-\xi}} = E( X_1^{-3+\xi})\,.
\]
The bound \eqref{EX1} follows from the assumptions on $W(x)$. Let $R>0$ be arbitrary. Then:
\[
\int_{|x-y|>R} \textrm{d} x \textrm{d} y \frac{W(x)W(y)}{|x-y|^{3-\xi}} \leq R^{-3+\xi} \| W\|_1^2 \leq C_\xi\,.
\]
On the other hand, using the symmetry in $x$ and $y$ together with $W(x)W(y)\leq \tfrac 1 2 \big(W(x)^2+W(y)^2\big)$, and noting that $W\in L^1(\mathbb{R}^3)\cap L^p(\mathbb{R}^3)$ with $p>3$ implies $W\in L^2(\mathbb{R}^3)$,
\[
\int_{|x-y| \leq R} \textrm{d} x \textrm{d} y \frac{W(x)W(y)}{|x-y|^{3-\xi}} \leq \int_{|x-y| \leq R} \textrm{d} x \textrm{d} y \frac{|W(x)|^2}{|x-y|^{3-\xi}} = \frac{4\pi R^\xi}{\xi}\, \|W\|_2^2 \leq C_\xi \,.
\]
\end{proof}
\vspace{0.5cm}
\noindent
By Lemma \ref{lm:Y} we conclude that, in order to prove Theorem \ref{MainTheorem}, it is enough to show that for all $f\in L^2( \mathbb{R}^{3})$
\begin{equation}\label{conresy}
\lim_{N\to\infty}\| (H_{N}+\lambda)^{-1}f-(H+\lambda)^{-1}f\|_2=0
\end{equation}
uniformly on configurations $Y_{N}$ satisfying \ref{ass:Y1} and \ref{ass:Y2}. In fact, since the measure of the configurations where \ref{ass:Y1} and \ref{ass:Y2} do not hold goes to zero as $N \to \infty$, we have
\[ \begin{split}
\lim_{N\to +\infty} &P_N\big(\,\{Y_{N}: N^{\beta} \| (H_{N}+\lambda)^{-1}f-(H+\lambda)^{-1}f\|_2>\epsilon\} \,\big)\, \\
&= \lim_{N\to +\infty} P_N\big(\,\{Y^*_{N}: N^{\beta} \| (H_{N}+\lambda)^{-1}f-(H+\lambda)^{-1}f\|_2>\epsilon\} \,\big) =0\,,
\end{split}
\]
where $\{Y^*_N\}$ is the set of configurations of obstacles where \ref{ass:Y1} and \ref{ass:Y2} hold.
\noindent
The proof of Theorem \ref{th:fluctuations} is obtained with slight modifications of the steps followed in \cite{FOT}, \cite{FHT} for a similar problem, and therefore we refer the reader to those papers. \\
\noindent
The following remarks summarize two important consequences of the validity of the assumptions \ref{ass:Y1} and \ref{ass:Y2}.
\begin{remark}
Notice that from \ref{ass:Y1} and \ref{ass:Y2} it follows that for any $0<\nu<\nu^*$
\begin{equation} \label{Y3}
\frac{1}{N^{3-\nu^2}}\sum_{\substack{i,j =1 \\i\neq j}}^N\frac{1}{|y_i-y_j|^{4}} \leq C_\nu\,.
\end{equation}
Indeed
\begin{equation*}
\begin{aligned}
\frac{1}{N^{3-\nu^2}}\sum_{\substack{i,j=1\\i\neq j}}^N\frac{1}{|y_i-y_j|^4}&=\frac{1}{N^{1-\nu^2}}\frac{1}{N^2}\sum_{\substack{i,j=1\\i\neq j}}^N \frac{1}{|y_i-y_j|^{3-\nu}}\frac{1}{|y_i-y_j|^{1+\nu}}\\
&\underset{\textup{by \ref{ass:Y1}}}{\leq}\frac{1}{N^{1-\nu^2}} N^{(1+\nu)(1-\nu)}\Bigg(\frac{1}{N^2}\sum_{\substack{i,j=1\\ i\neq j}}^N\frac{1}{|y_i-y_j|^{3-\nu}}\Bigg)\\
&\underset{\textup{by \ref{ass:Y2}}}{\leq} c_\nu \,.
\end{aligned}
\end{equation*}
\end{remark}
\vspace{0.2cm}
\begin{remark}
For $\lambda\geq 0$ we denote by $\mathcal{G}^{\lambda}$ the free resolvent $(-\Delta+\lambda)^{-1}$ and, with a slight abuse of notation, also the corresponding integral kernel
\[
\mathcal{G}^\lambda(x)=\frac{e^{-\sqrt{\lambda}|x|}}{4\pi |x|}.
\]
Moreover, we define the $N\times N$ matrix $G^\l$ with entries
\begin{equation*}
G^{\lambda}_{ij}=
\begin{cases}
\mathcal{G}^\lambda(y_i-y_j) & i\neq j\\
0 & i = j \,.
\end{cases}
\end{equation*}
\noindent
Then, due to hypothesis \ref{ass:Y2} on our configuration of obstacles, we get
\[
\frac{1}{N}\left\lVert G^\lambda \right\rVert\leq c(\lambda)\to 0 \qquad \text{for $\lambda\to +\infty$}\,.
\]
Indeed, if we fix $0 <\b<1$ and use that $e^{-x} \leq x^{-\b}$, we have by \ref{ass:Y2}
\[
\begin{split}
\frac 1 {N^2 } \| G^{\lambda} \|^2 & \leq
\frac 1 {N^2} \sum_{\substack{i,j=1\\ i\neq j}}^N \frac{e^{-2 \sqrt \l |y_i-y_j|}}{16 \pi^2 |y_i-y_j|^2} \\
& \leq c\, \l^{-\b/2} \frac{1}{N^2} \sum_{\substack{i,j=1\\ i\neq j}}^N \frac 1 {|y_i - y_j|^{2+\b}} \leq c_\b \l^{-\b/2}\,.
\end{split}
\]
Hence, in particular, there exists $\lambda_0>0$ such that
\begin{equation}\label{eq:lambda0}
\frac{1}{N}\lVert G^{\lambda}\rVert<1 \qquad \forall \lambda>\lambda_0.
\end{equation}
Note that with a slight abuse of notation we denote with the same symbol $G^\l_{ij}$ both the elements of the matrix $G^\l$ and the operator on $L^2( \mathbb{R}^3)$ acting as the multiplication by $G^\l(y_i - y_j)$.
\end{remark}
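The decay of $N^{-1}\lVert G^{\lambda}\rVert$ in $\lambda$ can also be observed numerically. The following sketch is purely illustrative (the obstacle configuration and the sizes are arbitrary choices of ours); it uses the fact that the entries of $G^\lambda$ are nonnegative and decreasing in $\lambda$, so the spectral norm decreases as well.

```python
import numpy as np

def g_lambda_norm(points, lam):
    """Spectral norm of the matrix G^lam with entries
    exp(-sqrt(lam)|y_i - y_j|)/(4 pi |y_i - y_j|) and zero diagonal."""
    diff = points[:, None, :] - points[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)  # makes the diagonal entries vanish
    G = np.exp(-np.sqrt(lam) * r) / (4 * np.pi * r)
    return np.linalg.norm(G, 2)

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))  # illustrative obstacle configuration
vals = [g_lambda_norm(pts, lam) / pts.shape[0] for lam in (1.0, 10.0, 100.0)]
```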
\vspace{0.5cm}
\noindent
Given a set of configurations satisfying $(Y1), (Y2)$, the strategy for the proof of \eqref{conresy} is based on some ideas and techniques developed in the study of boundary value problems for the Laplacian on randomly perforated domains, see \cite{FOT}, \cite{FT} and \cite{FT2}. For a given $f \in L^2(\mathbb{R}^3)$ we consider the solution $\ps_N$ of the equation
\[
(H_N +\l) \ps_N = f\,.
\]
We use the Resolvent Identity to rewrite
\begin{equation} \label{eq:psiN}
\psi_N=(H_N+\lambda)^{-1}f=\mathcal{G}^\lambda f+ \sum_{i=1}^N \mathcal{G}^\lambda v_i^N \rho_i^N\,,
\end{equation}
where the functions
\[
\rho_i^N=u_i^N (H_N+\lambda)^{-1}f
\]
solve
\begin{equation}\label{eq:rho}
\rho_i^N+u_i^N\mathcal{G}^\lambda v_i^N\rho_i^N+u_i^N\sum_{\substack{j=1\\ j\neq i}}^N \mathcal{G}^\lambda v_j^N\rho_j^N=-u_i^N \mathcal{G}^\lambda f.
\end{equation}
The idea is to represent the potential on the r.h.s. of~\eqref{eq:psiN} by its multipole expansion and to show that, for large $N$, the only relevant contribution comes from the first term of this expansion, that is the monopole term. According to this program we decompose $\ps_N = \tilde \ps_N + (\ps_N - \tilde \ps_N)$, with
\begin{equation}\label{eq:psi_N_tilde}
\tilde{\psi}_N(x)=(\mathcal{G}^\lambda f)(x)+\sum_{i=1}^N \mathcal{G}^\lambda(x-y_i)Q_i^N
\end{equation}
where
\begin{equation}\label{eq:Q_N}
Q_i^N=(v_i^N,\rho_i^N)\,.
\end{equation}
The problem is then split into two parts: find the limit of $\tilde \ps_N$ and then show that the difference $ (\ps_N - \tilde \ps_N)$ converges to zero for $N$ going to infinity.
\noindent
In order to find a limit of $\tilde \ps_N$ we recognize that the equation for the charge $Q_i^N$ can be written as
\begin{equation}
\begin{split} \label{eq:Q}
\frac N{4 \pi a}\, Q_i^N + \sum_{\substack{j=1\\ j\neq i}}^N G^\l_{ij} Q^N_j &= - (\mathcal{G}^\l f)(y_i) + R^N_i
\end{split}
\end{equation}
with $R^N_i= A^N_i + B^N_i + D^N_i$ and
\begin{equation}\label{eq:R}
\begin{aligned}
A_i^N & = \int \textrm{d} x\, v_i^N (x) \int \textrm{d} z \frac{e^{-\sqrt{\l}\, |x-z|}-1}{4\pi|x-z|} u^N_i(z) \mu^N_i(z) \\
B_i^N & = \sum_{\substack{j=1\\ i\neq j}}^N \int \textrm{d} x\, u_i^N(x) \m_i^N(x) \int \textrm{d} z \big(\mathcal{G}^\l(x-z) - \mathcal{G}^\l(y_i-y_j) \big) v_j^N(z) \r^N_j(z) \\
D_i^N & = \int \textrm{d} x\, \m_i^N(x) u^N_i(x) \int \textrm{d} z\, \big(\mathcal{G}^\l(x-z) - \mathcal{G}^\l (y_i - z) \big) f(z)\,.
\end{aligned}
\end{equation}
Since we expect $R_i^N$ to be an error term,
equation \eqref{eq:Q} suggests studying the approximate equation obtained from \eqref{eq:Q} by removing $R_i^N$. With this motivation, we define
\begin{equation}\label{eq:psi_N_hat}
\hat{\psi}_N(x)=(\mathcal{G}^\lambda f)(x)+\sum_{i=1}^N \mathcal{G}^\lambda(x-y_i)q_i^N\,,
\end{equation}
where the charges $q_i^N$ satisfy
\begin{equation}\label{eq:q_i}
\frac{N}{4\pi a}q_i^N+
\sum_{\substack{j=1\\ j\neq i}}^N
G^\lambda_{ij}q_j^N=-(\mathcal{G}^\lambda f)(y_i)\,.
\end{equation}
As before we rewrite $\tilde \ps_N = \hat \ps_N + (\tilde\ps_N - \hat\ps_N)$ and prove that the difference $ (\tilde\ps_N - \hat\ps_N)$ converges to zero for $N$ going to infinity. Finally, we show that the sequence $\hat \ps_N$ converges to $\ps$ defined by
\begin{equation}\label{eq:psi}
\psi=(-\Delta+4 \pi aW+\lambda)^{-1}f\,.
\end{equation}
This last step is strongly based on the analogy with the Hamiltonian with $N$ zero-range interactions considered in \cite{FHT}.
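The finite linear system \eqref{eq:q_i} defining the approximate charges is straightforward to assemble and solve numerically. The following sketch is illustrative only (the point configuration, the values of $a$ and $\lambda$, and the data vector of values $(\mathcal{G}^\lambda f)(y_i)$ are placeholders):

```python
import numpy as np

def solve_charges(points, a, lam, gf):
    """Solve (N/(4 pi a)) q_i + sum_{j != i} G^lam_{ij} q_j = -(G^lam f)(y_i),
    where gf holds the values (G^lam f)(y_i) at the obstacle positions."""
    N = len(points)
    diff = points[:, None, :] - points[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)          # G^lam has zero diagonal
    G = np.exp(-np.sqrt(lam) * r) / (4 * np.pi * r)
    A = (N / (4 * np.pi * a)) * np.eye(N) + G
    return np.linalg.solve(A, -gf)
```

For $\lambda$ large the diagonal term dominates (cf. \eqref{eq:lambda0}), so the system is well conditioned and the solution is unique.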
%
In fact, the resolvent of a Hamiltonian $H_{N\alpha,Y_{N}}$ with $N$ point interactions located at the points $Y_{N}=\{y_1,\dots,y_N\}$ with strength $N\alpha$ is given (see e.g.~\cite{AGHH}) by
\[
(H_{N\alpha,Y_{N}}+\lambda)^{-1}=\mathcal{G}^{\lambda}+\sum_{i,j=1}^N (\,\Upxi_{N\alpha,Y_{N}}(\lambda) )^{-1}_{ij}(\mathcal{G}^{\lambda}(\cdot-y_i),\cdot)\mathcal{G}^{\lambda}(\cdot-y_j)\,,
\]
where the $N\times N$ matrix $\Upxi_{N\alpha,Y_{N}}(\lambda)$ is defined by
\[
(\,\Upxi_{N\alpha,Y_{N}}(\lambda))_{ij}=\left(N\alpha+\frac{\sqrt{\lambda}}{4\pi}\right)\delta_{ij}-(1-\delta_{ij})G^{\lambda}_{ij}.
\]
Hence
\begin{equation}\label{eq:phiN}
\phi_{N}=(H_{N\alpha,Y_{N}}+\lambda)^{-1}f=(\mathcal{G}^{\lambda}f)(x)+\sum_{i=1}^N \mathcal{G}^{\lambda}(x-y_i)\tilde{q}_i
\end{equation}
where
\begin{equation}\label{eq:qtilde}
\left(N\alpha+\frac{\sqrt{\lambda}}{4\pi}\right)\tilde{q}_i-\sum_{\substack{j=1\\ j\neq i}}^NG^\lambda_{ij}\,\tilde{q}_j=(\mathcal{G}^\lambda f)(y_i).
\end{equation}
Comparing \eqref{eq:phiN} and \eqref{eq:qtilde} with \eqref{eq:psi_N_hat} and \eqref{eq:q_i} respectively and recalling in addition that the scattering length of a point interaction with strength $\alpha$ equals $-1/(4\pi\alpha)$, the analogy between $\phi_N$ and $\hat{\psi}_N$ is clear. \\
\noindent
The paper is organized as follows. In Section \ref{prelBounds} we collect some properties of $\mu_i^N$ and $\rho_i^N$ that will be used throughout the paper. In Sections \ref{sec:monopole} and \ref{sec:pointcharge} we show that the differences $ (\ps_N - \tilde\ps_N)$ and $ (\tilde\ps_N - \hat\ps_N)$ converge to zero as $N$ goes to infinity. In Section \ref{sec:convergence} we study the convergence of $\hat \ps_N$.
To lighten the notation, from now on we drop the dependence on $N$ where not strictly necessary.
\section{A priori estimates} \label{prelBounds}
We prove some useful a priori estimates for the solutions of equations \eqref{luscan} and \eqref{eq:rho}.
\begin{lemma} \label{lemma:rho-mu}
Let $\mu_i= \mu_i^N \in L^2(\mathbb{R}^3)$ and $\rho_i= \r_i^N \in L^2(\mathbb{R}^3)$ be defined in \eqref{luscan} and \eqref{eq:rho} respectively. Under the assumptions of Theorem~\ref{MainTheorem} there exists a constant $C>0$
such that
\begin{align}
\sup_i \| \m_i \|_2 & \leq C N^{-1/2}
\end{align}
Moreover
\begin{equation}
\| \underline{\mu} \| \leq C \,, \qquad \| \underline{\r}\,\| \leq C \| f \|_2 \,. \label{rho-norm}
\end{equation}
\end{lemma}
\begin{proof}
It is simple to check that, by scaling, $\mu_i (x)=N\mu(N(x-y_i))$. On the other hand, since $\mu$ satisfies \eqref{eq:mu}
and zero is neither an eigenvalue nor a resonance for $V$, we can invert the operator $(\unit + u\, \mathcal{G}^0 v)$ and get
\begin{equation} \label{normaHatMu}
\| \mu \|_2^2 \leq C\,.
\end{equation}
Hence
\[
\|\mu_i \|_2^2=\int \textrm{d} x |\mu_i (x)|^2=\frac{1}{N} \int \textrm{d} x |\mu(x)|^2=\frac{1}{N}\|\mu\|_2^2\,,
\]
which leads to $ \sup_i \| \m_i \|_2 \leq C N^{-1/2} $ and $\|\underline{\mu}\|\leq C$.
Next we prove the bound for $\| \underline \r \,\|$ where we recall that the charge $\r_i$ solves \eqref{eq:rho}.
We set
\[
\hat \r_i(x) := N^{-1} \r_i \left(y_i + x/N\right)\,.
\]
From \eqref{eq:rho} we have
\begin{multline} \label{eq:hatrhoevol}
\big(\unit + u\, \mathcal{G}^0 v \big) \hat \r_i(x) + (u (\mathcal{G}^{\l/N^2} - \mathcal{G}^0) v \hat \r_i)(x) + \frac 1 N \sum_{\substack{j=1 \\ j\neq i}}^N G^\l_{ij}(u\, v \hat\r_j)(x)\\[3pt]
+ \frac 1 N \sum_{\substack{j=1 \\ j\neq i}}^N \big(u \big(\mathcal{G}^{\l,N}_{ij}- G^\l_{ij} \big) v \hat\r_j\big)(x) = - u(x)\, (\mathcal{G}^\l f)\left( y_i + x/N \right)\,,
\end{multline}
where $\mathcal{G}^{\l,N}_{ij}$ denotes the operator in $L^2( \mathbb{R}^3)$ with integral kernel
\begin{equation} \label{GlambdaN}
\mathcal{G}_{ij}^{\l, N}(x-z) =\frac{e^{-\sqrt \l |y_i -y_j - (x-z)/N|}}{4 \pi |y_i -y_j - (x-z)/N|} \,.
\end{equation}
Our goal is to show that the operator $M^\l$ acting on $\oplus_{i=1}^N L^2( \mathbb{R}^3)$ defined by
\[ \begin{split}
\big( M^\l \big)_{ij} =\; & \big[(\unit + u\, \mathcal{G}^0 v) + u (\mathcal{G}^{\l/N^2} - \mathcal{G}^0) v\big] \d_{ij} \\
&+ \frac 1 N \big[ u G^\l_{ij} v+ u \big(\mathcal{G}^{\l,N}_{ij}- G^\l_{ij} \big) v \big] (1 -\d_{ij})
\end{split}
\]
is invertible. Due to the assumptions on $V$ the operator $(\unit + u \mathcal{G}^0 v)$ is invertible. Then, in order to prove that $M^\l$ is invertible
it suffices to show that there exists $\l_0$ such that the operators $M_{1}^\l$, $M_{2}^\l$ and $M_{3}^\l$ defined by
\[ \begin{split}
\big( M_{1}^\l \big)_{ij} &= u (\mathcal{G}^{\l/N^2} - \mathcal{G}^0) v \,\d_{ij} \\
\big( M_{2}^\l \big)_{ij} &= N^{-1} u\, G^\l_{ij}\, v \\
\big( M_{3}^\l \big)_{ij} &= N^{-1} u \big(\mathcal{G}^{\l,N}_{ij}- G^\l_{ij} \big) v\,(1-\d_{ij})
\end{split}
\]
are small in norm for any $\l >\l_0$ and $N$ large enough.
Denoting by $\lVert\cdot\rVert_{\textup{HS}}$ the Hilbert-Schmidt norm in $L^2(\mathbb{R}^3)$ and using the definition of $u(x)$ and $v(x)$, we obtain
\begin{equation} \label{M1norm}
\|M_{1}^\l\|^2 \leq \|u (\mathcal{G}^{\l/N^2} - \mathcal{G}^0) v \|_{HS}^2
\leq C \frac{\l}{N^2} \|V\|_{1}^2\,,
\end{equation}
which is small for any $\l$ in the limit $N \to \infty$. To bound the norm of the second matrix, we fix $0 < \b<1$. Then, using assumption \ref{ass:Y2},
\begin{equation}
\begin{split} \label{M2norm}
\|M_{2}^\l\|^2 \leq \frac 1 {N^2} \sum_{\substack{i, j \\ i \neq j}} \| u\, G^\l_{ij}\, v \|^2_{HS}
& = \frac 1 {N^2} \sum_{\substack{i, j \\ i \neq j}} \int \textrm{d} x \textrm{d} z |V(x)| \frac{e^{-2 \sqrt{\l } |y_i - y_j|}}{ 16\pi^2|y_i- y_j|^2} |V(z)| \\
& \leq c \| V \|_1 ^2\, \l^{-\b/2} \frac 1 {N^2} \sum_{\substack{i, j \\ i \neq j}} \frac{1}{|y_i - y_j|^{2+\b}} \leq c_\b \l^{-\b/2}\,.
\end{split}
\end{equation}
The r.h.s. of \eqref{M2norm} can be made small by choosing $\l$ sufficiently large.
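The weight-trading step in \eqref{M2norm} relies on the elementary inequality $\sup_{t>0}t^{\b}e^{-t}<\infty$, which gives
\[
\frac{e^{-2\sqrt{\l}\,|y_i-y_j|}}{|y_i-y_j|^{2}}
\leq \frac{c_\b}{\big(2\sqrt{\l}\,|y_i-y_j|\big)^{\b}}\,\frac{1}{|y_i-y_j|^{2}}
\leq c_\b\, \l^{-\b/2}\, \frac{1}{|y_i-y_j|^{2+\b}}\,.
\]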
To bound the third term we use that for any $\xi < 1$ the following bound holds true:
\begin{equation} \label{3.12}
\| u\, (\mathcal{G}^{\l, N}_{ij} - G^\l_{ij})\, v \|^2_{HS} \leq C\left[ \frac{1}{N^2 |y_i-y_j|^4} + \frac{1}{N^2 |y_i-y_j|^2} + \frac{1}{N^{1-\xi} |y_i - y_j|^{3-\xi}} \right]
\end{equation}
From \eqref{3.12} and using the assumptions on the charge distribution and \eqref{Y3} we have
\begin{equation}
\begin{split} \label{M3norm}
\|M_{3}^\l\|^2 & \leq \frac 1 {N^2} \sum_{\substack{i, j=1 \\ i \neq j}}^N \| u\, (\mathcal{G}^{\l, N}_{ij} - G^\l_{ij})\, v \|^2_{HS}
\\ &\leq C \sum_{\substack{i, j=1 \\ i \neq j}}^N \left[ \frac{1}{N^4 |y_i-y_j|^4} + \frac{1}{N^4 |y_i-y_j|^2} + \frac{1}{N^{3-\xi} |y_i -y_j|^{3-\xi}}\right]
\leq \frac{C} {N^{1-\xi}}\,.
\end{split}
\end{equation}
To prove \eqref{3.12} we define the cutoff function $\chi_{ij}^N(x)$ to be equal to one if \mbox{$|x| \leq\frac N 2 |y_i -y_j|$} and zero otherwise and write
\begin{equation}\begin{split} \label{3.14}
& \| u (\mathcal{G}^{\l,N}_{ij} - G^\l_{ij}) v \|_{HS}^2 \\
& \leq C \int \hskip -0.1cm \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \left(\frac{e^{-\sqrt \l | y_i - y_j -(x-z)/N|}}{|y_i - y_j -(x-z)/N|} - \frac{e^{-\sqrt \l |y_i-y_j|}}{|y_i-y_j|}\right)^2\\
& + C\int \hskip -0.1cm\textrm{d} x \textrm{d} z |V(x)| |V(z)| (1 - \chi_{ij}^N(x-z)) \hskip -0.1cm \left(\frac{e^{-2\sqrt \l | y_i - y_j -(x-z)/N|}}{|y_i - y_j -(x-z)/N|^2} + \frac{e^{-2\sqrt \l |y_i-y_j|}}{|y_i-y_j|^2}\right)
\end{split}
\end{equation}
To bound the term on the second line of \eqref{3.14} we exploit the fact that whenever $\chi_{ij}^N(x)$ is different from zero the difference in the round brackets is small. In particular, we have
\begin{equation}\begin{split} \label{Glambda2}
& \int \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \left(\frac{e^{-\sqrt \l | y_i - y_j -(x-z)/N|}}{|y_i - y_j -(x-z)/N|} - \frac{e^{-\sqrt \l |y_i-y_j|}}{|y_i-y_j|}\right)^2\\[6pt]
& \leq C \hskip -0.1cm\int \hskip -0.1cm \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \frac{1}{|y_i-y_j|^2} \left(e^{-\sqrt \l | y_i - y_j -(x-z)/N|}- e^{-\sqrt \l |y_i-y_j|} \right)^2 \\[6pt]
& \phantom{{}={}} +C \int \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \; e^{-\sqrt \l | y_i - y_j -(x-z)/N|}\\
& \hskip 4cm \times\left( \frac{1}{| y_i - y_j -(x-z)/N|} - \frac{1}{ |y_i - y_j|}\right)^2 \,.
\end{split}\end{equation}
To bound the first term on the r.h.s. of \eqref{Glambda2} we use the bound
\[
\big|e^{-|y- w/N|} - e^{-|y|} \big| \leq C |w|/N
\]
obtaining
\begin{equation} \label{bound1}
C \int \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \frac {|x-z|^2} {N^2 |y_i - y_j|^2} \leq \frac C {N^2} \frac 1 { |y_i - y_j|^2}\,.
\end{equation}
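The elementary bound used above follows from the reverse triangle inequality and the fact that $t\mapsto e^{-t}$ is Lipschitz with constant one on $[0,\infty)$:
\[
\big|e^{-|y- w/N|} - e^{-|y|} \big| \leq \big|\, |y-w/N| - |y| \,\big| \leq \frac{|w|}{N}\,;
\]
the factor $\sqrt{\l}$ appearing in the exponents is absorbed in the constant $C$.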
To bound the second term on the r.h.s. of \eqref{Glambda2} we note that $|y_i - y_j - (x-z)/N| \geq |y_i - y_j| - |x-z|/N$, and moreover on the support of $\chi_{ij}^N(x-z)$ we also have $|y_i - y_j - (x-z)/N| \geq |y_i - y_j|/2$. Hence:
\[
\chi_{ij}^N(x-z) \Big(\frac{1}{|y_i - y_j - (x-z)/N|} - \frac 1{|y_i - y_j |}\Big) \leq \frac{2|x-z|}{N |y_i - y_j|^2}\,.
\]
We obtain
\begin{equation} \begin{split} \label{bound2}
&\int \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \; e^{-\sqrt \l | y_i - y_j -(x-z)/N|}\\
& \hskip 4cm \times\left( \frac{1}{| y_i - y_j -(x-z)/N|} - \frac{1}{ |y_i - y_j|}\right)^2 \\[6pt]
& \leq C \int \textrm{d} x \textrm{d} z \chi_{ij}^N(x-z) |V(x)| |V(z)| \frac {|x-z|^2} {N^2 |y_i - y_j|^4}\\
& \leq \frac{C}{N^2 |y_i - y_j|^4}\,.
\end{split}\end{equation}
We are left with the bound of the term on the third line of \eqref{3.14}, for which we exploit the fast decay of the potential. We start from the term which does not contain the singularity. We fix $\a \in (0,1)$ and we multiply and divide by $|x-z|^\a$; then we use that $|x-z| \geq C N |y_i - y_j|$ on the support of the integrand and that $|x-z|\geq 1$ for $N$ large enough, due to assumption \ref{ass:Y1}:
\begin{equation} \label{bound3}
\begin{split}
& \int \textrm{d} x \textrm{d} z |V(x)| |V(z)| (1 - \chi_{ij}^N(x-z)) \frac{e^{-2\sqrt \l |y_i-y_j|}}{|y_i-y_j|^2} \\
& \quad \leq \frac{C}{ |y_i-y_j|^2} \int \textrm{d} x \textrm{d} z |V(x)| |V(z)| (1 - \chi_{ij}^N(x-z)) \frac{|x-z|^{\a}}{N^\a |y_i-y_j|^\a} \\
& \quad \leq \frac{C}{N^\a |y_i-y_j|^{2+\a}} \int \textrm{d} x \textrm{d} z |V(x)| |V(z)| |x-z| \\
& \quad \leq \frac{C}{N^\a |y_i-y_j|^{2+\a}}\,.
\end{split}
\end{equation}
To bound the remaining term on the third line of \eqref{3.14} we use a similar idea; we obtain
\begin{equation} \label{3.19}
\begin{split}
& \int \textrm{d} x \textrm{d} z |V(x)| |V(z)|(1 - \chi_{ij}^N(x-z)) \frac{e^{-2\sqrt \l |y_i-y_j - (x-z)/N|}}{|y_i-y_j - (x-z)/N|^2} \\
& \leq \frac{C}{(N |y_i-y_j|)^{2+\a}} \int \textrm{d} x \textrm{d} z |V(x)| |V(z)| (1 - \chi_{ij}^N(x-z)) \frac{|x-z|^{2+\a}}{|y_i-y_j - (x-z)/N|^2} \\
& \leq \frac{C}{ N^\a |y_i-y_j|^{2+\a}} \int \textrm{d} x \textrm{d} z |V(x)| |V(z)| (1 - \chi_{ij}^N(x-z)) \frac{(|x| +|z|)^{3}}{|N(y_i-y_j) - (x-z)|^2} \\
& \leq \frac{C}{ N^\a |y_i-y_j|^{2+\a}} \int \textrm{d} x^\prime \textrm{d} z^\prime \frac{|V(x^\prime+ N y_i)| |V(z^\prime+ N y_j)| }{|x^\prime- z^\prime|^2}\, (|x^\prime + N y_i |^3 + |z^\prime + N y_j|^3)
\end{split}
\end{equation}
where in the last line we used the change of variables $x^\prime= x- N y_i$ and $z^\prime= z- N y_j$ and we removed the cutoff function.
To bound the r.h.s. of \eqref{3.19} we use the Hardy-Littlewood-Sobolev inequality: let $h_1 \in L^p(\mathbb{R}^n)$ and $h_2 \in L^q(\mathbb{R}^n)$ with $p,q>1$ and let $0<\sigma<n$ with $1/p +\sigma/n +1/q=2$; then there exists a constant $C$ independent of $h_1$ and $h_2$ such that
\[
\Big| \int_{\mathbb{R}^n} h_1(x)|x-y|^{-\sigma} h_2(y)\, \textrm{d} x \,\textrm{d} y \Big| \leq C \|h_1 \|_p \| h_2\|_q\,.
\]
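Here the inequality is applied with $n=3$, kernel exponent $2$ and $p=q=3/2$; the exponent constraint is indeed satisfied, since
\[
\frac{1}{p}+\frac{2}{n}+\frac{1}{q}=\frac{2}{3}+\frac{2}{3}+\frac{2}{3}=2\,,
\]
and the norms $\| |\cdot|^3V \|_{3/2}$ and $\| V \|_{3/2}$ are finite by the decay assumptions on $V$.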
Hence
\[
\begin{split}
& \int \textrm{d} x \textrm{d} z \frac{|V(x+ N y_i)| |V(z+ N y_j)| |x + N y_i |^3 }{|x- z|^2} \leq C \| |\cdot|^3V \|_{3/2} \| V \|_{3/2}\,.
\end{split}
\]
Then, due to our assumptions on the potential, we obtain
\begin{equation} \label{bound4}
\begin{split}
& \int \textrm{d} x \textrm{d} z |V(x)| |V(z)|(1 - \chi_{ij}^N(x-z)) \frac{e^{-2\sqrt \l |y_i-y_j - (x-z)/N|}}{|y_i-y_j - (x-z)/N|^2} \\
& \leq \frac{C}{ N^\a |y_i-y_j|^{2+\a}}
\end{split}
\end{equation}
Putting together \eqref{bound1}, \eqref{bound2}, \eqref{bound3} and \eqref{bound4} we prove \eqref{3.12}.
The bounds \eqref{M1norm}, \eqref{M2norm} and \eqref{M3norm} together with the assumptions on $V$ prove that there exists a $\l_0>0$ such that for any $\l>\l_0$ and $N$ large enough the operator $M^\l$ is invertible. From Eq.~\eqref{eq:hatrhoevol} we obtain
\[
\begin{split}
\sum_{i=1}^N \| \hat \rho_i \|^2_2 &\leq C \sum_{i=1}^N \int \textrm{d} x |u(x)|^2 |(\mathcal{G}^\l f)(y_i +x/N)|^2 \\
& \leq C N \Big( \sup_x |(\mathcal{G}^\l f)(x)|^2 \Big) \\
& \leq C N \|f \|_2^2 \,.
\end{split}
\]
It follows that
\[
\begin{split}
\| \underline \rho\,\|^2 &=\sum_{i=1}^N \int \textrm{d} x | N \hat \rho_i( N(x-y_i))|^2 = \frac 1 N \sum_{i=1}^N \| \hat \rho_i \|_2^2 \leq C \|f \|_2^2 \,.
\end{split}
\]
\end{proof}
\section{Monopole expansion} \label{sec:monopole}
In this section we analyse the difference between the solution $\ps_N$ defined by \eqref{eq:psiN} and the approximate solution $\tilde \ps_N$, obtained by considering the first term of a multipole expansion for the potential, defined in \eqref{eq:psi_N_tilde}. This is the content of the following proposition.
\begin{proposition}\label{prop:step1} Let $\psi_N$ and $\tilde{\psi}_N$ be defined in \eqref{eq:psiN} and \eqref{eq:psi_N_tilde} respectively. Then, under the assumptions of Theorem \ref{MainTheorem},
\[
\lim_{N \to \infty} N^{\b}\norma{\psi_N-\tilde{\psi}_N}_2=0 \qquad \forall \b<1\,.
\]
\end{proposition}
\begin{proof}
Using \eqref{eq:psiN} and \eqref{eq:psi_N_tilde} we write
\[
\ps_N(x) - \tilde \ps_N(x) = \sum_{i=1}^N \int \textrm{d} z\, v_i(z) \rho_i(z) \big( \mathcal{G}^\l(x-z) - \mathcal{G}^\l(x-y_i) \big) := \sum_{i=1}^N K_i(x)\,.
\]
We have
\begin{equation} \begin{split} \label{2.1}
\| \ps_N - \tilde \ps_N \|^2 &\leq
\sum_{i=1}^N \int \textrm{d} x\, K_i^2(x) + \sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} x\, K_i(x) K_j(x)\,.
\end{split}\end{equation}
We first bound the diagonal term on the r.h.s. of \eqref{2.1}.
\[ \begin{split}
\sum_{i=1}^N \int \textrm{d} x K_i^2(x) & = \sum_{i=1}^N \int \textrm{d} z \textrm{d} z^\prime v_i(z) v_i(z^\prime) \rho_i(z) \rho_i(z^\prime) \\
& \hskip 1cm \times \int \textrm{d} x \big( \mathcal{G}^\l(x-z) - \mathcal{G}^\l (x-y_i) \big)\big( \mathcal{G}^\l(x-z^\prime) - \mathcal{G}^\l (x-y_i) \big)
\end{split}\]
Using elliptic coordinates we can explicitly calculate the integral over $x$ of the products of Green's functions on the last line. For instance, let us consider the product $\mathcal{G}^\lambda(x-z)\mathcal{G}^\lambda(x-z^\prime).$ We set $r_1=|x-z|$,\, $r_2=|x-z^\prime|$ and $R=|z-z^\prime|$ and consider the new variables $\{\mu, \nu, \ph\}$ with $\mu=(r_1+r_2)/R \in [1, +\infty)$, $\nu= (r_1-r_2)/R \in [-1,1]$ and $\ph \in[0, 2\pi)$ the rotation angle with respect to the axis $zz^\prime$.
Then
\[
\begin{split}
\int \textrm{d} x\, &\mathcal{G}^\lambda (x-z)\mathcal{G}^\lambda(x-z^\prime)\\
&=\int dx\,\frac{e^{-\sqrt{\lambda}(|x-z|+|x-z^\prime|)}}{16\pi^2|x-z||x-z^\prime|}\\
&=\int_1^{+\infty}d\mu\int_{-1}^1d\nu\int_0^{2\pi}d\varphi \frac{R^3}{8}(\mu^2-\nu^2)\frac{e^{-\sqrt{\lambda}(\mu+\nu+\mu-\nu)R/2}}{16\pi^2\frac{R}{2}(\mu+\nu)\frac{R}{2}(\mu-\nu)}\\
&=\frac{1}{8\pi\sqrt{\lambda}}\,e^{-\sqrt{\lambda}|z-z^\prime|}\,.
\end{split}
\]
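The same value can be cross-checked without coordinates: $\mathcal{G}^\lambda \ast \mathcal{G}^\lambda$ is the integral kernel of $(-\Delta+\lambda)^{-2}=-\partial_\lambda(-\Delta+\lambda)^{-1}$, so that
\[
\int \textrm{d} x\, \mathcal{G}^\lambda(x-z)\mathcal{G}^\lambda(x-z^\prime)
=-\partial_\lambda \frac{e^{-\sqrt{\lambda}|z-z^\prime|}}{4\pi|z-z^\prime|}
=\frac{1}{8\pi\sqrt{\lambda}}\,e^{-\sqrt{\lambda}|z-z^\prime|}\,.
\]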
Proceeding analogously for the other terms we obtain
\[\begin{split}
& \sum_{i=1}^N \int \textrm{d} x\, K_i^2(x) \\
& \leq C \sum_{i=1}^N \int \textrm{d} z \textrm{d} z^\prime \,v_i(z) v_i(z^\prime) \rho_i(z) \rho_i(z^\prime) \Big( e^{-\sqrt{\l}|z-z^\prime|} -e^{-\sqrt{\l}|z-y_i|} - e^{-\sqrt{\l}|z^\prime-y_i|} + 1 \Big) \,.
\end{split}\]
We use the definition of $v_i$ and rescale the integration variables as follows $N(z-y_i) \to z$ and $N(z^\prime-y_i) \to z^\prime$. Hence
\begin{equation} \begin{split} \label{Step1Diag}
& \sum_{i=1}^N \int \textrm{d} x K_i^2(x) \\
& \leq C N^{-3} \sum_{i=1}^N \int \textrm{d} z \textrm{d} z^\prime| V(z)|^{1/2} |V(z^\prime)|^{1/2} \hat \rho_i(z) \hat \rho_i(z^\prime) (|z| + |z^\prime|) \\
& \leq C N^{-3} \| (1+|\cdot|^2) V \|_1 \sum_{i=1}^N \|\hat \r_i \|_2^2 \leq C N^{-2}
\end{split}
\end{equation}
where we recall that $\hat \rho_i(z) = N^{-1} \rho_i(y_i + z/N)$ and $ \| \underline{\hat{\rho}} \|^2 \leq C N$ from Lemma \ref{lemma:rho-mu}.
To bound the non-diagonal term we use the Cauchy-Schwarz inequality and the elementary estimate $ab\leq \frac{1}{2}(a^2+b^2)$:
\begin{equation} \begin{split} \label{2.3}
\sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} x & K_i(x) K_j(x) \; \leq \frac \e 2 \sum_{\substack{i,j=1\\ i \neq j}}^N \| \rho_i \|^2_2 \| \rho_j \|^2_2 \\
& +\frac 1 {2 \e} \sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} z \textrm{d} z^\prime |v_i(z)|^2 |v_j(z^\prime)|^2 \\
&\hskip 1cm \times \Big | \int \textrm{d} x \big(\mathcal{G}^\l(x- z) - \mathcal{G}^\l(x - y_i) \big) \big(\mathcal{G}^\l(x- z^\prime) - \mathcal{G}^\l(x - y_j) \big) \Big|^2
\end{split}\end{equation}
with $\e>0$ to be fixed. To bound the second term in the r.h.s. of Eq.~\eqref{2.3} we first integrate over $x$ using elliptic coordinates, then we use the definition of $v_i$ and rescale the integration variables $z$ and $z^\prime$. We obtain
\begin{equation} \label{2.4}
\frac 1 {2 \e N^2} \sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} z \textrm{d} z^\prime |V(z)| |V(z^\prime)| |\zeta^N_{ij}(z, z^\prime)|^2\end{equation}
with
\begin{equation} \label{zetaij}
\zeta^N_{ij}(z, z^\prime)\! = e^{-\sqrt{\l}|y_i-y_j + (z-z^\prime)/N |} -e^{-\sqrt{\l}|y_i-y_j +z/N|} - e^{-\sqrt{\l}|y_i-y_j -z^\prime/N|} +e^{-\sqrt{\l}|y_i-y_j|}\,.
\end{equation}
With a first-order Taylor expansion, it is easy to check that the function $x\mapsto e^{-\sqrt{\l} |x+a|}$ satisfies
\[
\Big| e^{-\sqrt{\l} |x+a|} - e^{-\sqrt \l |a|} \Big( 1 - \sqrt{\l}\, \frac{x \cdot a}{|a|} \Big)\Big| \leq C |x|^2\,.
\]
with $C$ independent of $a$. Hence
\begin{equation} \label{differenzaG}
|\zeta^N_{ij}(z, z^\prime)| \leq \frac{C}{N^2} (|z|+|z^\prime|)^2\,,
\end{equation}
and
\begin{multline}
\frac 1 {2 \e N^2} \sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} z \textrm{d} z^\prime |V(z)| |V(z^\prime)|\,| \zeta^N_{ij}(z, z^\prime)|^2 \\
\leq \frac C {\e N^4 } \int \textrm{d} z \textrm{d} z^\prime |V(z)| |V(z^\prime)| (|z|+ |z^\prime|)^4 \leq C \e^{-1} N^{-4} \,.
\end{multline}
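The quadratic gain in \eqref{differenzaG} is due to a cancellation: applying the expansion above with $a=y_i-y_j$ at the four points $x=(z-z^\prime)/N$, $z/N$, $-z^\prime/N$, $0$, the zeroth-order contributions to $\zeta^N_{ij}$ carry the coefficients $1-1-1+1=0$, while the first-order ones are proportional to
\[
\frac{(z-z^\prime)-z-(-z^\prime)+0}{N}=0\,,
\]
so that only the quadratic remainders, bounded by $C(|z|+|z^\prime|)^2/N^2$, survive.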
Using Eqs.~\eqref{2.3} and \eqref{2.4}, the bound $\| \underline \rho\, \| \leq C$ and the assumptions on the potential, and choosing $\e = N^{-2}$ we obtain
\begin{equation} \begin{split} \label{Step1NonDiag}
\sum_{\substack{i,j=1\\ i \neq j}}^N \int \textrm{d} x & K_i(x) K_j(x) \; \leq C N^{-2}\,.
\end{split}\end{equation}
%
Eq. \eqref{Step1Diag} and \eqref{Step1NonDiag}, together with \eqref{2.1} conclude the proof of the proposition.
\end{proof}
\section{Point charge approximation} \label{sec:pointcharge}
In this section we analyse the difference between $\tilde{\psi}_N$ and $\hat{\psi}_N$ and show that it is small for large $N$. This is the content of the next proposition.
\begin{proposition}\label{prop:step2} Let $\tilde{\psi}_N$ be defined by \eqref{eq:psi_N_tilde}, \eqref{eq:Q_N} and $\hat{\psi}_N$ by \eqref{eq:psi_N_hat} and \eqref{eq:q_i}. Under the assumptions of Theorem \ref{MainTheorem} and for $\lambda$ large enough
\[
\lim_{N \to \infty}N^{\b} \norma{\hat{\psi}_N-\tilde{\psi}_N}_2 =0 \qquad \forall \b<3/2\,.
\]
\end{proposition}
The proposition follows from the next two lemmas.
\begin{lemma} \label{lemma:HatVsTilde}
Let $\tilde{\psi}_N$ be defined by \eqref{eq:psi_N_tilde}, \eqref{eq:Q_N} and $\hat{\psi}_N$ by \eqref{eq:psi_N_hat} and \eqref{eq:q_i}. Then,
\[
\norma{\hat{\psi}_N-\tilde{\psi}_N}_2\leq c\, \sqrt{N}\, \norma{ \vec q- \vec Q}
\]
\end{lemma}
\vskip 0.2cm
\begin{lemma}\label{lemma:Qvsq} Let $Q_i=(v_i, \rho_i)$ and $q_i$ be defined in \eqref{eq:q_i}. Then there exists $\d>0$ such that
\[
\| \vec Q - \vec q \,\| \leq C N^{-2 -\d}\,.
\]
\end{lemma}
\vskip 0.2cm
\begin{proof}[Proof of Lemma \ref{lemma:HatVsTilde}.] We notice that
\begin{equation} \label{2.6}
\begin{aligned}
\norma{\tilde{\psi}_N-\hat{\psi}_N}^2_2&\leq
\int dx\,\left|\sum_{i=1}^N \mathcal{G}^{\lambda}(x-y_i)(q_i-Q_i)\right|^2\\
&= \sum_{i=1}^N (q_i-Q_i)^2 \int \textrm{d} x \frac{e^{- 2\sqrt \l |x-y_i|}}{16 \pi^2 |x-y_i|^2} \\
&\phantom{{}={}} + \sum_{\substack{i,j=1\\i\neq j}}^N(q_i-Q_i)(q_j-Q_j)\int dx\, \frac{e^{-\sqrt{\lambda}(|x-y_i|+|x-y_j|)}}{16\pi^2|x-y_i||x-y_j|}.
\end{aligned}
\end{equation}
The term on the second line of \eqref{2.6} is clearly bounded by $ C \| \vec Q - \vec q \|^2$.
To evaluate the integral in the last line of \eqref{2.6} we use an explicit integration as in the proof of Prop.~\ref{prop:step1} and Cauchy-Schwarz inequality. We get:
\[
\begin{aligned}
\sum_{\substack{i,j=1\\i\neq j}}^N&(q_i-Q_i)(q_j-Q_j)\int dx\, \frac{e^{-\sqrt{\lambda}(|x-y_i|+|x-y_j|)}}{16\pi^2|x-y_i||x-y_j|} \\
&\leq c_\lambda\sum_{\substack{i,j=1\\i\neq j}}^N(q_i-Q_i)(q_j-Q_j)e^{-\sqrt{\lambda}|y_i-y_j|}\\
&\leq c_\lambda \sum_{\substack{i,j=1\\i\neq j}}^N(q_i-Q_i)^2\\
&\leq c_\lambda N\norma{\vec q- \vec Q}^2\,.
\end{aligned}
\]
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:Qvsq}.] Eqs. \eqref{eq:Q} and \eqref{eq:q_i} for the charges $Q_i$ and $q_i$ give
\begin{equation} \label{Qminusq}
\frac N{4 \pi a}\,( Q_i - q_i) + \sum_{\substack{j=1\\j \neq i}}^N G^\l_{ij} (Q_j-q_j) = R_i\,.
\end{equation}
We denote
\begin{equation}\label{eq:Gamma}
\G^\l_{ij} := \left(\d_{ij} + \frac {4\pi a} N (1-\d_{ij})G^\l_{ij} \right)\,,
\end{equation}
so that \eqref{Qminusq} becomes
\[
\sum_{j=1}^N \G^\l_{ij} (Q_j - q_j) =\frac{4\pi a}{N}R_i\,,
\]
where $R_i=A_i+B_i+D_i$ is defined in \eqref{eq:R}. On the other hand the bound \eqref{eq:lambda0} immediately yields the invertibility of the matrix $\G^\lambda$ for $\lambda>\lambda_0$. Therefore
\[
\| \vec Q- \vec q\, \| \leq \frac C N \| \vec R\|\,.
\]
Lemma \ref{lemma:Qvsq} is proved by showing that there exists $\d>0$ such that
\begin{equation} \label{normR}
\| \vec R\, \|\leq C N^{-1 -\d}\,.
\end{equation}
We start from $\| \vec A\,\|$.
By Cauchy-Schwarz inequality we get
\[
|A_i| \leq \| \r_i \|_2 \| \m_i \|_2 \| v_i (\mathcal{G}^\l - \mathcal{G}^0) u_i \|_{HS}\,.
\]
Using the definitions of $u_i$ and $v_i$
\[ \begin{split}
\| v_i (\mathcal{G}^\l - \mathcal{G}^0) u_i \|_{HS}^2 & = \int \textrm{d} x \textrm{d} z N^4 |V(N(x-y_i))| |V(N(z-y_i))| \frac{(e^{-\sqrt \l |x-z|}-1)^2}{ 16 \pi^2 |x-z|^2} \\
& = C \int \textrm{d} x \textrm{d} z \frac{|V(x)| |V(z)|}{|x-z|^2} \big( e^{-\sqrt{\l} |x-z|/N} -1 \big)^2 \\
& \leq C \l N^{-2} \| V\|_1^2 \,.
\end{split}\]
With the bounds in Lemma \ref{lemma:rho-mu} we have
\begin{equation} \begin{split} \label{normA}
\| \vec A\, \| &\leq \| \underline \r\, \| \Big(\sup_i \| v_i (\mathcal{G}^\l - \mathcal{G}^0) u_i \|^2_{HS}\Big)^{1/2} \; \Big(\sup_i \| \m_i \|^2_2 \Big)^{1/2} \leq C N^{-3/2}\| f\|_2\,.
\end{split}\end{equation}
Next, we analyse $\|\vec B\,\|$. We define
\[
B_{ij} =\int \textrm{d} x \textrm{d} z\, u_i(x) \m_i(x) \big(\mathcal{G}^\l(x-z) - G^\l_{ij} \big) v_j(z) \r_j(z) \,.
\]
Then
\[
\| \vec B\,\|^2 =\; \sum_{i=1}^N \Bigg( \sum_{\substack{j=1\\j \neq i}}^N B_{ij} \Bigg)^2 \,.
\]
Using twice Cauchy-Schwarz inequality, first in the $x$ and $z$ variables and then in the sum over $j$, we get
\[ \begin{split}
\| \vec B\, \|^2 & \leq\sum_{i=1}^N \Bigg[ \sum_{\substack{j=1\\j \neq i}}^N \|\mu_i \|_2 \| \r_j \|_2 \Big( \int \textrm{d} x \textrm{d} z |u_i(x)|^2 \big(\mathcal{G}^\l(x-z) - G^\l_{ij} \big)^2 |v_j(z)|^2 \Big)^{1/2} \;\Bigg]^2 \\
& \leq \sum_{i=1}^N \|\mu_i\|^2_2 \, \sum_{\substack{k=1\\k \neq i}}^N \| \r_k \|^2_2 \, \sum_{\substack{j=1\\j \neq i}}^N \| v_j (\mathcal{G}^\l - G^\l_{ij}) u_i \|_{HS}^2 \\
& \leq \big(\sup_i \|\mu_i\|^2_2 \big) \| \underline \r\, \|^2 \sum_{i=1}^N \sum_{\substack{j=1\\j \neq i}}^N \| v_j (\mathcal{G}^\l - G^\l_{ij}) u_i \|_{HS}^2 \,.
\end{split}
\]
Rescaling variables and recalling the definition \eqref{GlambdaN} for $\mathcal{G}^{\l,N}_{ij}$ we have
\begin{equation} \label{vjDeltaGui}
\| v_j (\mathcal{G}^\l - G^\l_{ij}) u_i \|_{HS}^2 = N^{-2}\| v (\mathcal{G}^{\l,N}_{ij} - G^\l_{ij}) u \|^2_{HS}\,.
\end{equation}
Using the bounds \eqref{Y3}, \eqref{3.12} and \eqref{vjDeltaGui}, together with Lemma \ref{lemma:rho-mu} we obtain
\begin{equation} \begin{split} \label{normB}
\| \vec B\, \| & \leq C \Bigg(\frac{1}{N^5} \sum_{\substack{i,j =1\\ i \neq j}} ^N \frac{1}{|y_i - y_j|^4} \Bigg)^{1/2} \leq c_\nu N^{-1 - \frac 1 2 \nu^2}\,.
\end{split}\end{equation}
To conclude we consider $\|\vec D\|$. We have
\[ \begin{split}
\| \vec D \,\|^2 =\; & \sum_{i=1}^N \Big| \int \textrm{d} x \textrm{d} z \m_i(x) u_i(x) \, \big(\mathcal{G}^\l(x-z) - \mathcal{G}^\l (y_i - z) \big) f(z)\Big|^2 \\
=\; & \sum_{i=1}^N \int \textrm{d} z \textrm{d} z^\prime f(z) f(z^\prime) \xi_i(z) \xi_i(z^\prime)\,,
\end{split}\]
with
\[
\xi_i(z) = \int \textrm{d} x \mu_i(x) u_i(x) \big(\mathcal{G}^\l(x-z) - \mathcal{G}^\l (y_i - z) \big)\,.
\]
Using Cauchy-Schwarz inequality
\begin{equation} \begin{split}
\| \vec D \,\|^2
& \leq \| f\|^2_2 \, \Bigg[ \sum_{i=1}^N \Big( \int \textrm{d} z \xi^2_i(z) \Big)^2 + \sum_{\substack{i,j=1\\ i \neq j}}^N \Big( \int \textrm{d} z \xi_i(z) \xi_j(z) \Big)^2 \Bigg]^{1/2}\,. \label{2.16}
\end{split} \end{equation}
We proceed as in the proof of Prop.~\ref{prop:step1}. As for the diagonal term, using the scaling property $\mu(x) =N^{-1} \mu_i (y_i + x/N)$ we obtain
\begin{equation} \begin{split} \label{Diag.Ci}
& \sum_{i=1}^N \Big( \int \textrm{d} z\, \xi^2_i(z) \Big)^2 \\
& \leq\sum_{i=1}^N \bigg[ \frac{C}{N^{2}}\int \textrm{d} x \textrm{d} {x^\prime}\m(x)\mu(x^\prime) |V(x)|^{1/2} |V(x^\prime)|^{1/2} \\
& \hskip 3cm \times\Big( e^{- \sqrt \l \frac{ |x-x^\prime|}{N}} - e^{- \sqrt \l \frac{ |x|}{N}} - e^{- \sqrt \l \frac{|x^\prime|}{N}}+1 \Big) \bigg]^2 \\
& \leq C\sum_{i=1}^N \bigg[ N^{-3}\int \textrm{d} x \textrm{d} {x^\prime} \m(x)\m(x^\prime)|V(x)|^{1/2} |V(x^\prime)|^{1/2} (|x| +|x^\prime|) \bigg]^2 \\
&\leq C N^{-6} \| (1 +|\cdot|^2)V \|_1 \sum_{i=1}^N \|\mu\|^4_2 \, \leq C N^{-5}\,.
\end{split} \end{equation}
Here we used $\| \mu \|_2^2 \leq C $, see \eqref{normaHatMu}. To estimate the non-diagonal term in \eqref{2.16} we use the bound \eqref{differenzaG} for the function $\zeta_{ij}^N$ defined in \eqref{zetaij}. By the scaling properties of $u_i(x)$ we have:
\begin{equation} \begin{split} \label{nonDiag.Ci}
& \sum_{\substack{i,j=1\\ i \neq j}}^N \Big( \int \textrm{d} z\, \xi_i(z) \xi_j(z) \Big)^2 \\
& = \sum_{\substack{i,j=1\\ i \neq j}}^N \bigg[ N^{-2} \int \textrm{d} x \textrm{d} x^\prime |V(x)|^{1/2} |V(x^\prime)|^{1/2} \mu(x)\mu(x^\prime) \\
& \phantom{{}={}}\times \int \textrm{d} z \left( \mathcal{G}^\l(y_i-z + x/ N) - \mathcal{G}^\l (y_i-z) \right) \left(\mathcal{G}^\l(y_j-z + x^\prime/N) - \mathcal{G}^\l(y_j-z)\right) \bigg]^2 \\
& = \sum_{\substack{i,j=1\\ i \neq j}}^N \bigg[ N^{-2} \int \textrm{d} x \textrm{d} x^\prime |V(x)|^{1/2} |V(x^\prime)|^{1/2} \m(x)\m(x^\prime)\, \zeta_{ij}^N(x,x^\prime) \Big]^2 \\
& \leq C\,\sum_{\substack{i,j=1\\ i \neq j}}^N \bigg[ N^{-4} \int \textrm{d} x \textrm{d} x^\prime |V(x)|^{1/2} |V(x^\prime)|^{1/2} \m(x)\m(x^\prime) (|x| +|x^\prime|)^2 \bigg]^2 \\
& \leq C\, N^{-8} \sum_{\substack{i,j=1\\ i \neq j}}^N \Big( \int \textrm{d} x |V(x)| (1 +|x|^2) \int \textrm{d} x^\prime |\m(x^\prime)|^2 \Big)^2 \leq C N^{-6}\,.
\end{split}\end{equation}
Putting together \eqref{2.16}, \eqref{Diag.Ci} and \eqref{nonDiag.Ci} we obtain
\begin{equation} \label{normD}
\|\vec D\, \|^2 \leq C\, \|f\|_2^2\, N^{-5/2}\,.
\end{equation}
The bound \eqref{normR} for $\| \vec R \, \|$ follows from \eqref{normA}, \eqref{normB} and \eqref{normD}.
\end{proof}
\newpage
\section{Proof of Theorem \ref{MainTheorem}} \label{sec:convergence}
In this section we prove the main result stated in Theorem \ref{MainTheorem}. By Props.~\ref{prop:step1} and \ref{prop:step2} it remains to show the convergence of $\hat{\psi}_N$ to $\psi$. Although the proof is a slight modification of the steps followed in~\cite{FHT} (see also \cite{BFT}) we report the details here for the sake of completeness.
\begin{proposition}\label{prop:step3}
Let $\hat{\psi}_N$ and $\psi$ be defined as in \eqref{eq:psi_N_hat} and \eqref{eq:psi} respectively. Then under the assumptions of Theorem~\ref{MainTheorem} and for $\lambda>\lambda_0$
\[
\lim_{N\to+\infty} N^{\beta}\lVert \hat{\psi}_N-\psi\rVert=0 \qquad \forall \beta<1/2.
\]
\end{proposition}
To prove the proposition we first introduce $q$ defined by
\begin{equation}\label{eq:q_def}
-\frac{1}{4\pi a}q=\psi=(-\Delta+4\pi aW+\lambda)^{-1}f.
\end{equation}
Then by the second resolvent identity we get
\begin{equation}\label{eq:q}
\frac{1}{4\pi a}q(x)+\int dz\, \mathcal{G}^\lambda(x-z)W(z)q(z)=-(\mathcal{G}^\lambda f)(x).
\end{equation}
In the following lemma we compare $q(y_i)$ with $q_i.$
\begin{lemma}\label{lm:q_i-q}
Let $q_i$ and $q$ be defined as in \eqref{eq:q_i} and \eqref{eq:q} respectively. Then under the same assumptions as in Theorem \ref{MainTheorem} and for $\lambda>\lambda_0$
\[
\lim_{N\to+\infty}N^{\beta}\left\{\frac{1}{N}\sum_{i=1}^N\,[N q_i-q(y_i)]^2\right\}^{1/2}=0 \qquad\forall\beta<1/2.
\]
\end{lemma}
\begin{proof}
From \eqref{eq:q_i} and \eqref{eq:q} we get
\begin{equation}\label{eq:q_i-q}
\sum_{j=1}^N\left(\frac{1}{4\pi a}\delta_{ij}+\frac{1}{N}(1-\delta_{ij})G^\lambda_{ij}\right)\left(\sqrt{N}q_j-\frac{1}{\sqrt{N}}q(y_j)\right)=L_i
\end{equation}
where
\begin{equation}\label{eq:L}
L_i=\frac{1}{N^{3/2}}\sum_{\substack{j=1\\j\neq i}}^NG^\lambda_{ij}q(y_j)- \frac{1}{\sqrt{N}}(\mathcal{G}^\lambda Wq)(y_i).
\end{equation}
Recalling the definition of $\G_{ij}^\lambda$ given in \eqref{eq:Gamma} we rewrite \eqref{eq:q_i-q} as
\[
\sum_{j=1}^N \G^\lambda_{ij} \left(\sqrt{N} q_j-\frac{1}{\sqrt{N}}q(y_j)\right)=(4\pi a) L_i.
\]
Using invertibility of $\Gamma^{\lambda}$ for $\lambda>\lambda_{0}$ (see \eqref{eq:lambda0}) and multiplying by $N^{\beta}$ with $\beta<1/2$ we get \[
N^{\beta}\left\{\sum_{i=1}^N \left |\sqrt{N}q_i-\frac{1}{\sqrt{N}}q(y_i)\right |^2\right\}^{1/2} \leq C N^{\beta}\Vert \vec L\, \Vert\,.
\]
It remains to prove that $N^{\beta}\Vert \vec L\Vert$ goes to zero. In particular, by Chebyshev inequality it is enough to show that $N^{2\beta}E(\lVert \vec L \rVert^2)\to 0$. We use that
\begin{equation} \begin{split} \label{expectations}
& E\big(\,\mathcal{G}^\l(x-y_i) q(y_i)\, \big) = \big(\mathcal{G}^\l W q \big)(x) \\
& E\big(\,(\mathcal{G}^\l W q)^2(y_i)\, \big)= \| \mathcal{G}^\l W q \|^2_{L^2_W} \\
& E\big(\,(\mathcal{G}^\l (y_i -y_j) q(y_j))^2\, \big) = \big(1, (\mathcal{G}^\l)^2 \ast (W q^2) \big)_{L^2_W} \,,
\end{split}
\end{equation}
where we used the notation $ (f\ast g)(x) = \int \textrm{d} y f(x-y) g(y)$. From \eqref{eq:L} and \eqref{expectations} we obtain
%
\[ \begin{split}
& N^{2\beta}E\left( \Vert \vec L \, \Vert^2\right) \\
& = N^{2\beta-1}E\Bigg(\sum_{i=1}^N \bigg(\frac{1}{N^2}\sum_{\substack{j,k=1\\j \neq i, k\neq i,j}}^N G^\lambda_{ij}G^\lambda_{ik}q(y_j)q(y_k) + \frac{1}{N^2}\sum_{\substack{j=1\\j\neq i}}^N \big(G^\lambda_{ij}q(y_j)\big)^2 \\
& \hskip 3.5cm -\frac{2}{N}\sum_{\substack{j=1\\j\neq i}}^NG^\lambda_{ij}\,q(y_j)(\mathcal{G}^\lambda Wq)(y_i) +(\mathcal{G}^\lambda Wq )^2(y_i)\bigg) \;\Bigg) \\
& = N^{2\beta-1} \Bigg( \frac{(N-1)}{N} E\big( (\mathcal{G}^\l (y_1 -y_2) q(y_2))^2 \big) \\
& \hskip 2cm + \bigg( \frac{(N-1)(N-2)}{N} - 2(N-1) +N \bigg) E\big((\mathcal{G}^\l W q)^2(y_1)\big)\Bigg) \\
& = \frac{N-1}{N^{2-2\beta}}\,(1,(\mathcal{G}^{\lambda})^2 \ast (Wq^2))_{L^2_W}-\frac{N-2}{N^{2-2\beta}}\,\Vert \mathcal{G}^\lambda Wq\Vert^2_{L^2_W}\,,
\end{split}
\]
which goes to zero for $N \to \infty$ for any $\b<1/2$.
This concludes the proof of the Lemma.
\end{proof}
\begin{proof} [Proof of Prop.~\ref{prop:step3}]
Let us consider $g\in L^2( \mathbb{R}^3).$ Then by \eqref{eq:q_i}, \eqref{eq:psi_N_hat}, \eqref{eq:q_def},\eqref{eq:q} we get
\[
\begin{aligned}
|(g,\hat{\psi}_N-\psi)|&=\Big|\Big(g,\sum_{i=1}^N q_i\,\mathcal{G}^\lambda (\cdot-y_i)\Big)-(g,\mathcal{G}^\lambda Wq)\Big|\\
&\leq \Big|\sum_{i=1}^N \big(q_i-\frac{q(y_i)}{N}\big)(\mathcal{G}^\lambda g)(y_i)\Big|+\Big|\frac{1}{N}\sum_{i=1}^N q(y_i) (\mathcal{G}^\lambda g)(y_i)-(g,\mathcal{G}^\lambda Wq)\Big|.
\end{aligned}
\]
Then using Cauchy-Schwarz inequality and multiplying both sides by $\displaystyle{\frac{N^{\beta}}{\|g\|}}$ we obtain
\begin{equation}\label{eq:psi_hat-psi}
\begin{aligned}
\frac{|N^{\beta}(g,\hat{\psi}_N-\psi)|}{\| g \|}\leq &\frac{\displaystyle{\sup_x} |\mathcal{G}^\lambda g(x)|}{\| g \|}N^{\beta}\left\{\frac{1}{N}\sum_{i=1}^N(Nq_i-q(y_i))^2\right\}^{1/2}\\
&+N^{\beta}\frac{\left|\eta(Y_N)-E(\eta(Y_N))\right|}{\lVert g \rVert}
\end{aligned}
\end{equation}
where
\[
\eta(Y_N)=\frac{1}{N}\sum_{i=1}^N(\mathcal{G}^\lambda g)(y_i)q(y_i).
\]
The first term in \eqref{eq:psi_hat-psi} goes to zero by Lemma \ref{lm:q_i-q}. Furthermore
\[
\begin{aligned}
E\!\left(\frac{\left|\eta(Y_N)-E(\eta(Y_N))\right|^2 }{\lVert g \rVert^2}\right)\!\!=&\frac{E\left(\eta(Y_N)^2\right)}{\| g \|^2}-\frac{E\left(\eta(Y_N)\right)^2}{\lVert g \rVert^2}\\
=&\left(\frac{\int\! \textrm{d} x\, W(x)q^2(x)(\mathcal{G}^\lambda g)^2(x)}{\|g\|^{2}N}+\frac{N-1}{N}\frac{E(\eta(Y_N))^2}{\|g\|^{2}}\right)\\
&-\frac{E(\eta(Y_N))^2}{\lVert g\rVert^2}\\
=&\frac{\int\! \textrm{d} x\, W(x)q^2(x)(\mathcal{G}^\lambda g)^2(x)\!-\!\left(\int\! \textrm{d} y\, (\mathcal{G}^\lambda g)(y) q(y) W(y)\right)^2}{\|g\|^{2}\,N}\\
\leq& \frac{C}{N}\big(\lVert q\rVert^2_{L^2_W}+(1,q)_{L^2_W}^2\big).
\end{aligned}
\]
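Combined with the Chebyshev inequality, the variance bound above gives, for any $\varepsilon>0$,
\[
P\left( N^{\beta}\,\frac{\left|\eta(Y_N)-E(\eta(Y_N))\right|}{\lVert g \rVert} > \varepsilon \right) \leq \frac{N^{2\beta}}{\varepsilon^{2}}\, E\!\left(\frac{\left|\eta(Y_N)-E(\eta(Y_N))\right|^{2}}{\lVert g \rVert^{2}}\right) \leq \frac{C\,N^{2\beta-1}}{\varepsilon^{2}},
\]
with $C$ independent of $g$, and the right-hand side vanishes as $N\to\infty$ for any $\beta<1/2$.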
Then, by the Chebyshev inequality, the second term in \eqref{eq:psi_hat-psi} also goes to zero, uniformly in $\lVert g\rVert$. Taking the supremum over $g\in L^2(\mathbb{R}^3)$ we obtain the thesis.
\end{proof}
We are now ready to prove our main result.
\begin{proof}[Proof of Theorem \ref{MainTheorem}]
It follows immediately from Propositions \ref{prop:step1}, \ref{prop:step2} and \ref{prop:step3}.
\end{proof}
The NA60 experiment was approved in the year 2000 to study a number of
physics topics accessible through the measurement of muon pairs. All
topics were studied by previous experiments, but were left with major
open questions: (i)~the excess emission of lepton pairs for masses
below the $\rho$-resonance with the possible link to the chiral transition,
(ii)~the enhanced production of intermediate mass muon pairs with the
ambiguity of either prompt thermal radiation or increased open charm
production, and (iii)~the precise mechanism underlying J/$\psi$ suppression,
asking for a variation of the system size. This paper is restricted to
the first topic, while the second and third are treated in another
NA60 paper presented during this conference~\cite{ho:el}.
The NA45/CERES experiment has consistently observed an excess emission
of electron pairs with invariant masses $0.2<m<0.6$~GeV/$c^{2}$ above
known sources from hadron decays, in S-Au and Pb-Au collisions, which
is concentrated at low pair $p_{\rm T}$ and scales steeper than linear
with the associated charged particle multiplicity~\cite{ceres:el,
ceres1:el}. The theoretical interpretation of this excess has been
linked to the restoration of chiral symmetry in the hot and dense
medium, leading to a ``melting'' of the $\rho$ with extra yield at lower masses
and decreased yield at the nominal pole
position~\cite{theo:el}. However, statistical accuracy and mass
resolution up to and including the 2000 data have not been sufficient to
positively verify this scenario; the excess continuum seems
structureless up to the $\rho$/$\omega$ region, and even
$q\overline{q}$ annihilation cannot be ruled out at present. Better
statistics, signal-to-background ratio and mass resolution are
therefore required to clarify the existing ambiguities. The NA60
experiment has now potentially achieved this goal. However, only
preliminary results can be presented at this stage, including raw
spectra over the whole mass region. More detailed results are only
given on the properties of the $\phi$.
\section{Experimental set-up}
The essential components of the NA60 experiment are shown in figure~1.
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig1a.eps,width=0.8\textwidth}}
\mbox{\epsfig{file=fig1b.eps,width=0.6\textwidth}}
\caption{Full NA60 set-up (upper) and detail of the target region (lower).}
\label{fig:setup}
\end{center}
\end{figure}
The muon spectrometer previously used in NA10/NA38/NA50
consists of an air-core toroidal magnet, 8 multi-wire proportional
chambers for muon tracking and momentum determination, and 4
scintillator hodoscopes for the muon pair trigger. A 5.5~m long hadron
absorber before the spectrometer serves as the muon filter, with
the usual drawbacks of such a set-up: energy loss and multiple
scattering of the muons impair the mass resolution and prohibit an
accurate vertex determination. The new silicon pixel telescope added
by NA60 is used to track all charged particles in the vertex region
before the hadron absorber and to determine their momenta independently
of the muon spectrometer. It consists of a number of tracking planes
with a total of 12 space points, embedded in a 2.5~T dipole
magnet. The planes are made from assemblies of detector chips bump-bonded
to radiation-tolerant readout chips, developed for the ALICE and LHCb
experiments at the LHC~\cite{pixel:el}. Each chip contains
$256\times32$ pixels with a pixel size of
$50\times425$~$\mu$m$^{2}$. Matching of the muon tracks before and
after the hadron absorber both in {\it coordinate and momentum} space
improves the mass resolution in the region of the light vector mesons
$\omega$, $\phi$ by a factor of nearly 3, decreases the combinatorial
background by (partial) kink-rejection of muons from $\pi$ and
K~decays, and finally allows the identification of displaced vertices of muons
from D-decays for the measurement of open
charm~\cite{ho:el,ruben:el}.
Further components of the NA60 set-up are a beam tracker of 4
cryogenic silicon microstrip detectors upstream of the target to track
the incoming beam particles, and a zero-degree quartz-fiber ``spaghetti''
calorimeter (``ZDC'') to tag the centrality (number of participants)
of the collision. The high luminosity of NA50 is kept in NA60. Radiation
tolerance and high readout speed of the silicon pixel telescope allow
for beam intensities of $5\cdot10^{7}$ per 5~s~burst for ions
and $2\cdot10^{9}$ per~5~s~burst for protons in connection with target
thicknesses of more than 10\,\% of an interaction length. The dimuon
trigger is effective enough to record the resulting data rates without
any centrality selection.
\section{Global results for low mass muon pairs}
During the 5-week long Indium run in 2003, around $4\cdot10^{12}$
ions with an energy of 158~GeV/nucleon were delivered at the target,
and a sample of 230~million dimuon events (background dominated)
was written to tape. The presently available dimuon mass spectra,
without any centrality selection, are displayed in figure~2.
\begin{figure}
\mbox{\epsfig{file=fig2a.eps,width=0.52\textwidth}}
\mbox{\epsfig{file=fig2b.eps,width=0.55\textwidth}}
\caption{Invariant mass spectra for measured opposite-sign muon pairs,
combinatorial background and net pairs after background subtraction
(left), and enlarged section of the net pairs (right).}
\label{fig:mass}
\end{figure}
The combinatorial background of opposite-sign muon pairs is determined
by mixing single muons from events with like-sign pairs; subtraction
of this background from the measured opposite-sign pairs results in
the net spectrum labeled ``signal pairs''. The mean
signal-to-background ratio is about 1/4. The net spectrum contains
370\,000 pairs, corresponding to about 35\% of the available total
statistics; the final sample will therefore contain about 1 million
pairs with an effective statistics of 10$^{6}$/(4+1) =
200\,000. Compared to the CERES Pb-Pb data 1995/96~\cite{ceres:el} or
2000~\cite{ceres1:el}, this is an improvement by a factor of roughly
1\,000. The enlarged low mass net spectrum on the right of figure~2
shows the light vector mesons $\omega$ and $\phi$ to be completely
resolved. The mass resolution for the $\phi$ is about 23 MeV/c$^{2}$,
independent of centrality. One can even recognize the rare
$\eta\rightarrow\mu\mu$ decay, which should lead to the first
unambiguous cross section measurement of the $\eta$ in nuclear collisions.
It should be stressed that the extension of the mass spectrum all the
way down to the 2m$_{\mu}$ threshold is accompanied by a complete
coverage in pair transverse momentum down to zero, albeit with
decreasing acceptance by up to 2 orders of magnitude for the lowest
masses. This presents a further drastic improvement compared to the
NA50 set-up.
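The effective-statistics figure quoted above follows from the usual rule that a signal $S$ sitting on a combinatorial background $B$ carries the statistical weight of only $S/(1+B/S)$ events (this convention is standard practice, not spelled out in the text); a minimal check with the quoted numbers:

```python
def effective_statistics(n_signal, signal_to_background):
    """Variance-equivalent number of pairs after subtracting a
    combinatorial background B = S / (S/B)."""
    return n_signal / (1.0 + 1.0 / signal_to_background)

# ~1e6 net pairs at a signal-to-background ratio of about 1/4,
# as quoted for the full In-In sample:
print(effective_statistics(1_000_000, 0.25))  # -> 200000.0
```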
\begin{figure}[h!]
\begin{center}
\mbox{\epsfig{file=fig3.eps,width=0.6\textwidth}}
\caption{Invariant mass spectra for 4 bins in associated charged particle multiplicity.}
\label{fig:masslog}
\end{center}
\end{figure}
Although the statistics is high enough for finer binning, the data
have so far only been subdivided into 4 coarse bins in associated
charged particle multiplicity, as measured by the pixel telescope. The
net invariant mass spectra for these 4 bins are shown in figure~3,
arbitrarily normalized in the region of the $\omega$ peak.
One recognizes some relative variation at very low masses and about a
factor of 2 increase in the $\phi$. Stronger variations occur below
the $\omega$, between the $\omega$ and the $\phi$, and above the
$\phi$. Those significantly above the $\phi$ are mostly due to the
fact that hard processes scale with the number of binary
nucleon-nucleon collisions
rather than with $N_{\rm part}$ (like the $\omega$). The quantitative
judgment of the rest has to await a remaining correction to the mass
spectra of figures~2 and~3 which has not yet been done: the subtraction
of muon pairs arising from incorrect (``fake'') matches of muon tracks
between the two spectrometers. This correction will also be obtained
from mixing events, but the corresponding software has not yet been
finalized. From overlay MC simulations it seems, however, that the
level of fake matches is too small to account for the varying yield of
dimuons in the neighborhood of the $\omega$ and $\phi$~\cite{ruben:el}.
\section{Detailed results for the $\phi$ meson}
With the $\omega$ and $\phi$ so well resolved and isolated from the
rest of the low mass pairs, a more detailed analysis has already been
performed on the $\phi$/$\omega$ yield ratio and the
transverse momentum distribution of the $\phi$ as a function of
centrality. The analysis is based on the sample shown in figure~2,
containing about 37\,000 net events for the $\phi$ (14\,000 effective
statistics), after subtraction of the remaining underlying continuum.
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig4.eps,angle=0,width=0.65\textwidth}}
\caption{Ratio of the cross sections for $\phi$ and $\omega$
production in the rapidity window $3.3<y<4.2$. The NA60 data are
obtained from the procedure described in the text. The quoted errors
are purely statistical. The data from NA50 shown for comparison
are also discussed in the text.}
\label{fig:yield}
\end{center}
\end{figure}
The $\phi/\omega$ yield ratio has been determined from the raw data by
propagating the muon pairs arising from the known
resonance~($\eta,\rho,\omega,\phi$) and
Dalitz~($\eta,\eta^{\prime},\omega$) decays through the NA60 set-up
with GEANT, using the hadron-decay generator GENESIS~\cite{gen:el} as
an input. An additional continuum source with
exponential fall-off beyond the $\phi$ and the level of fake matches
as obtained from overlay MC simulations have been taken into account
in an approximate way. The final result on $\phi/\omega$ is obtained
by adjusting the input ratio $\phi/\omega$ of GENESIS such that the
output fits the measured data. It should be stressed that the results
are rather insensitive to the details of the procedure, with
one exception: the $\rho/\omega$ ratio in the fit procedure is
somewhat ill-defined, while the sum $\rho+\omega$ is stable. For this
reason, the fit has been constrained to a fixed ratio
$\rho/\omega=$~1. The whole procedure is done independently for each of
the 4 multiplicity bins. The average multiplicity density in each of
the bins is converted, via the correlation of multiplicity vs.\
ZDC~energy, to average values for the number of participants $N_{\rm
part}$, using Glauber fits to the ZDC data. The results for
$\phi/\omega$ are plotted in figure~4. Note that the errors are purely
statistical; the systematic errors are under investigation. Results
previously obtained by NA50 for the system Pb-Pb are shown for
comparison. They have been derived from the published values of
$\phi/(\rho+\omega)_{\mu\mu}$~\cite{na50:el}, again assuming
$\rho/\omega=$~1 to be consistent, and correcting for the branching
ratios of $\rho,\omega,\phi\rightarrow\mu\mu$. They are also
corrected to correspond to the same cut $p_{\rm T}>1.1$~GeV/$c$ for
the $\rho,\omega,\phi$ rather than the common cut $m_{\rm T}>1.5$~GeV/$c^{2}$~\cite{na50:el},
using the NA50 slope parameter value $T=228$~MeV for the extrapolation.
It is remarkable that the two data sets agree within errors, both in
absolute value and in the slope~vs.~$N_{\rm part}$. This implies
$N_{\rm part}$ to be a reasonable scaling variable for particle ratios
in different collision systems, at least as long as A is not too small.
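As a consistency check on the kinematic cuts above: for the $\phi$ specifically (mass $\approx 1.019$~GeV/$c^{2}$, the approximate PDG value, assumed here), the cut $m_{\rm T}>1.5$~GeV/$c^{2}$ is kinematically equivalent to the quoted $p_{\rm T}>1.1$~GeV/$c$, since $m_{\rm T}^{2}=p_{\rm T}^{2}+m^{2}$; for the lighter $\rho$ and $\omega$ the $T=228$~MeV extrapolation is what bridges the two cuts. A minimal sketch:

```python
import math

M_PHI = 1.019  # GeV/c^2, approximate phi meson mass (assumed value)

def pt_from_mt(mt, m):
    """p_T corresponding to a transverse-mass cut, from m_T^2 = p_T^2 + m^2."""
    return math.sqrt(mt**2 - m**2)

print(round(pt_from_mt(1.5, M_PHI), 2))  # -> 1.1
```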
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig5.eps,width=0.6\textwidth}}
\caption{Transverse momentum spectra of the $\phi$ meson for
4 bins in associated charged multiplicity density. The errors are purely
statistical; the systematic errors are under investigation.}
\label{fig:pt}
\end{center}
\end{figure}
The analysis of the transverse momentum spectra of the $\phi$ has been
done in a straightforward way. The raw data are obtained by selecting
a narrow window around the $\phi$ peak in the net spectrum of figure~2,
and then subtracting the content of 2 side-windows symmetrically placed
around the peak. It was verified that the results are completely
insensitive to the choice of width and position of the
side-windows, up to the extreme of applying the whole procedure to the
gross spectrum before subtraction of the combinatorial background. The
raw data were then corrected for acceptance, using a 2-dimensional
acceptance matrix in $p_{\rm T}$ and $y$ obtained from a detailed
simulation of the NA60 set-up, and finally converted to invariant
relative cross sections $1/p_{\rm T} {\rm d}N/{\rm d}p_{\rm T}$. The
acceptance correction is quite uncritical, since the acceptance varies
by less than a factor of 2 over the whole $p_{\rm T}$ range. Finally,
the invariant cross sections were fitted with the simple form
$\exp(-m_{\rm T}/T)$ to extract the slope parameters $T$ in a way
consistent with other publications. The spectra and associated
$T$ values show no variation with $y$ within errors. The
$y$-integrated data for $3.3<y<4.1$ are plotted in figure~5 separately
for the 4 bins
in associated charged particle multiplicity density. The data extend
up to $p_{\rm T}=2.5$~GeV/$c$; for the full-statistics sample, a limit of
3.5~GeV/$c$ may be reachable. The slope parameters derived from
the fits are well defined within the chosen parametrization. The
average slope parameter over the whole range in multiplicity and
$p_{\rm T}$ is $T=252\pm3$~MeV; if the fit is restricted to $p_{\rm
T}<1.5$~GeV/$c$ (NA49 range) or $m_{\rm T}>1.5$~GeV/$c^{2}$ (NA50
range), the resulting values are $256\pm3$ and $245\pm5$, respectively, i.e.\
nearly identical within errors.
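Since $1/p_{\rm T}\,{\rm d}N/{\rm d}p_{\rm T}\propto\exp(-m_{\rm T}/T)$, the slope extraction amounts to a straight-line fit of the logarithm of the invariant spectrum against $m_{\rm T}$; a minimal sketch on a synthetic spectrum (all numbers illustrative, not the experimental data):

```python
import numpy as np

M_PHI = 1.019   # GeV/c^2, approximate phi mass (assumed)
T_TRUE = 0.252  # GeV, slope used to generate the toy spectrum

pt = np.linspace(0.1, 2.5, 25)        # GeV/c
mt = np.sqrt(pt**2 + M_PHI**2)        # transverse mass
spectrum = np.exp(-mt / T_TRUE)       # 1/p_T dN/dp_T ~ exp(-m_T/T)

# log(spectrum) = const - m_T/T, so the fitted slope is -1/T.
slope, _ = np.polyfit(mt, np.log(spectrum), 1)
T_fit = -1.0 / slope
print(round(T_fit, 3))  # -> 0.252
```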
The tendency of the slope parameter $T$ to rise with $\langle {\rm
d}N_{\rm ch}/{\rm d}\eta \rangle$ or $N_{\rm part}$ is clearly borne
out in figure~6.
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig6.eps,width=0.65\textwidth}}
\caption{Slope parameters $T$ of the transverse momentum spectra of
the $\phi$ meson for different experiments at 158~GeV/nucleon. The
errors shown for NA60 are purely statistical; the systematic
errors are under investigation, but are presently believed to be
$\leq10$~MeV.}
\label{fig:slope}
\end{center}
\end{figure}
For comparison, this plot also contains the slope parameters reported
by NA49 for Pb-Pb on $\phi\rightarrow$ KK~\cite{na49:el}, and those
from NA50 for Pb-Pb on
$\phi\rightarrow\mu\mu$~\cite{na50:el}. Remarkably, NA60
and NA49 agree in the region of overlap in $N_{\rm part}$
within the rather large errors of NA49. Whether that bears on the
famous ``$\phi$-puzzle'', originally discussed in view of the
discrepancy between NA49 and NA50 for Pb-Pb~\cite{shuryak:el}, remains
to be seen. A difference now also exists between NA60 and NA50 for
which we have no obvious explanation, and the usefulness of $N_{\rm
part}$ as the proper scaling variable between different systems for
quantities other than particle ratios is in any case not proven. A
consistent solution of the $\phi$-puzzle by NA60 would require
parallel data on $\phi\rightarrow$ KK for the In-In system. Such an
analysis, based solely on track information from the pixel telescope,
is indeed in progress. Work on a precision determination of the mass
and width of the $\phi$, which addresses further aspects of in-medium
effects on the $\phi$, is also in progress.
\section{Conclusions}
The NA60 experiment is setting new standards in the data quality of
muon pair measurements. A high statistics run with protons on many
different nuclear targets is presently being performed to provide
precision reference data for all three major aspects of the NA60
program. Specifically, the low mass region will benefit from
unprecedented sensitivity to sources other than the known meson
decays.
\section*{References}
Observations indicate that a significant fraction of spiral galaxies are barred (e.g. Cheung et al. 2013
and references therein). While precise measurements of the
bar fraction vary (between 20 and 70 percent, depending on the sample) there is no doubt that the phenomenon is a
rather common feature in galaxies. In addition, comparisons of the frequency of bar presence in nearby galaxies
to the one in high-redshift objects lead to the conclusion that bars occur relatively late in the evolution of
galaxies (Sheth et al. 2008).
The essence of bar formation is the transformation of circular orbits in the stellar disk into elongated ones
(for reviews on dynamics of bars see e.g. Sellwood \& Wilkinson 1993; Athanassoula 2013).
While a simplified analytic description of the phenomenon is possible (Binney \& Tremaine 2008), the problem can be
fully tackled only using $N$-body simulations. This approach was pioneered by Miller \& Smith (1979) who studied
the bar evolution and discussed the bar pattern speed, the particle orbits and the predicted observational properties
of bars. Sparke \& Sellwood (1987) studied the bar formation via disk instability, performed detailed classification
of stellar orbits in the bar and found the bar to be stable and robust. While such instability is indeed generally
believed to be responsible for bar formation, we still do not have a full grasp of all the intricacies
of bar formation and evolution and alternative ways to form a bar have been considered.
Combes et al. (1990) were among the first to discuss the box and peanut shapes generated by stellar bars and Raha
et al. (1991) discovered a mechanism that may be responsible for these shapes in the form of the buckling instability.
The instability drives the stars out of the galactic plane and may significantly weaken the bar (for more
recent developments see e.g. Athanassoula 2005; Martinez-Valpuesta, Shlosman \& Heller 2006;
Saha, Pfenniger \& Taam 2013). Insight into its nature can be obtained via studies of vertical
orbital instabilities (e.g. Binney 1978; Pfenniger 1984; Skokos, Patsis \& Athanassoula 2002a,b).
Additional complication in the study of bar dynamics is introduced by the presence of extended dark matter
haloes surrounding the disks (Debattista \& Sellwood 2000; Athanassoula 2002, 2003;
Dubinski, Berentzen \& Shlosman 2009; Saha \& Naab 2013). Overall, the evolution of the bar's major properties,
such as its strength or pattern speed
depends on a plethora of parameters and has been only partially explored in simulations (Athanassoula \& Misiriotis
2002; Klypin et al. 2009; Athanassoula, Machado \& Rodionov 2013).
One factor that has been relatively underexplored is the effect of galaxy interactions.
Previous studies of the influence of interactions on the properties of the bars discussed mainly the effect of a
satellite on the bar in the normal-size galaxy (Gerin, Combes \& Athanassoula 1990; Sundin, Donner \& Sundelius 1993;
Miwa \& Noguchi 1998; Mayer \& Wadsley 2004; Romano-D\'{i}az et al. 2008).
In this paper we are interested in a different phenomenon, that of a tidally induced
bar in a dwarf galaxy interacting with a bigger host. Important hints that such a process may be important in
shaping the properties of present-day dwarfs are provided by observations that bars form later in fainter
galaxies (Sheth et al. 2008) and that bar fraction is higher in fainter objects in dense environments such as
galaxy clusters (Lansbury, Lucey \& Smith 2014). A possible interpretation of these results is that dwarf galaxies
are born with dynamically hotter disks which delays bar formation (Athanassoula \& Sellwood 1986; Sheth et al. 2012)
until they are accreted by a more massive galaxy or become a member of a group or cluster where they are affected
by tidal forces. An example of such a tidally induced bar could be the one in M82, resulting from an interaction
with M81 (Wills et al. 2000).
The formation of tidally induced bars in dwarf galaxies has been studied mainly in the context of the tidal stirring
scenario for the formation of dwarf spheroidal (dSph) galaxies in the Local Group (Mayer et al. 2001; Klimentowski
et al. 2009; Kazantzidis et al. 2011; {\L}okas, Kazantzidis \& Mayer 2011).
The simulations of this process revealed that the formation
of a bar is a natural intermediate stage of a dwarf galaxy in its morphological evolution from a disk toward a
spheroid that occurs for a variety of dwarf's orbits around the host and for different initial structural parameters.
{\L}okas et al. (2012) measured the shapes of simulated dSph galaxies and compared them to the shapes of classical
dSph satellites of the Milky Way quantified in the same way. Bar-like surface density distributions were found in the
Sagittarius, Ursa Minor and possibly Carina dwarfs. Elongated shapes, suggestive of the presence of the bar are also
seen in the recently discovered ultra-faint dwarfs like Hercules (Coleman et al. 2007) and Ursa Major II
(Mu\~noz, Geha \& Willman 2010). Note that although the Large Magellanic Cloud (LMC) is also known to possess a bar,
this structure is probably not of tidal origin. Indeed, according to the most probable scenario the
LMC is at its first pericentre around the Milky Way (Besla et al. 2007) and its pericentric distance is too large
(50 kpc) for the tidal forces to be effective.
The Sagittarius dwarf seems to be the most obvious candidate for a barred galaxy among the Local Group dSphs.
{\L}okas et al. (2010) proposed an evolutionary model of this dwarf starting from a disk embedded in a dark matter
halo. After the first pericentre passage the stellar component of the dwarf transforms into a bar that survives until
the next pericentre which was identified as the present stage of the dwarf we observe. The shape of the dwarf at this
time matches very well the actual elongated shape determined from observations by Majewski et al. (2003). The model
also explains the lack of rotation signal in the data (Frinchaboy et al. 2012).
In this work we examine in more detail the properties of a tidally induced bar on a typical orbit around the Milky Way.
In section 2 we present the simulation used for this study. In section 3 we describe the evolution of the dwarf
galaxy using global measurements of its properties. Sections 4 and 5 focus on the strength and length of the bar,
while section 6 is devoted to the pattern speed of the bar and its interpretation. The discussion follows in
section 7.
\section{The simulation}
Our simulation setup consisted of live models of two galaxies: the Milky Way-like host and the dwarf galaxy.
The $N$-body realizations of both galaxies were generated via procedures described in Widrow \& Dubinski (2005)
and Widrow, Pym \& Dubinski (2008). The procedures allow us to generate near-equilibrium models of galaxies composed
of a disk, a bulge and a halo with required properties. Both our galaxies contained exponential disks embedded in
NFW (Navarro, Frenk \& White 1997) dark matter haloes. The dark haloes were smoothly truncated at the radius
close to the virial radius in order to make their masses finite. Each component of each galaxy contained
$10^6$ particles ($4 \times 10^6$ particles total).
The dwarf galaxy model was similar to the default model
used in recent simulations of tidal stirring (Kazantzidis et al. 2011; {\L}okas et al. 2011). The dark
halo of the dwarf had a mass $M_{\rm h} = 10^9$ M$_{\odot}$ and concentration $c=20$. The disk had a
mass $M_{\rm d} = 2 \times 10^7$ M$_{\odot}$, exponential scale-length $R_{\rm d} = 0.41$ kpc and thickness
$z_{\rm d}/R_{\rm d} = 0.2$. The coldness of the disk is controlled by the central radial velocity dispersion
which we assume to be $\sigma_{R0} = 10$ km s$^{-1}$. This translates to the Toomre parameter $Q = 3.82$ at
$R = 2.5 R_{\rm d}$ and guarantees that our dwarf is stable against formation of the bar in isolation for the
time scales of interest here. We verified this by evolving the dwarf galaxy in isolation for 10 Gyr.
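For reference, the disk-stability parameter quoted above is the standard Toomre criterion for stellar disks (the definition below is the usual textbook one, not taken from this paper):
\[
Q = \frac{\sigma_R\,\kappa}{3.36\,G\,\Sigma},
\]
where $\kappa$ is the local epicyclic frequency and $\Sigma$ the stellar surface density; $Q>1$ indicates local stability against axisymmetric perturbations, and the larger $Q$ of the dwarf disk reflects its dynamically hotter initial state.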
The host galaxy was chosen to resemble the model MWb of Widrow \& Dubinski (2005). It had a dark matter halo of mass
$M_{\rm H} = 7.7 \times 10^{11}$ M$_{\odot}$ and concentration $c=27$. The disk of the host had a mass $M_{\rm D}
= 3.4 \times 10^{10}$ M$_{\odot}$, the scale-length $R_{\rm D} = 2.82$ kpc and thickness $z_{\rm D} = 0.44$ kpc.
The central radial velocity dispersion of the disk was $\sigma_{R0} = 121$ km s$^{-1}$ which corresponds to
the Toomre parameter $Q = 2.2$ again making the disk stable against bar formation for time scales of interest.
Although there is evidence that the Milky Way has a bar (e.g. Blitz \& Spergel 1991; Dwek et al. 1995;
Martinez-Valpuesta \& Gerhard 2011; Romero-G\'omez et al. 2011), we specifically chose a model for the host
galaxy that does not form a bar since this makes the host potential constant in time. The main reason for
this choice is that we are most interested here in modeling the effect of the tidal forces on the growth
and evolution of the bar in the dwarf. The presence of a time dependent bar in the Milky Way-like host
would induce a second time dependence whose influence would be difficult to disentangle from the tidal
effects. Furthermore, both observational (e.g. Stanek et al. 1997) and theoretical (e.g. Shen et al. 2010)
studies indicate that the bar in the Milky Way is not very strong. Moreover, its half-length is of the order of 4 kpc,
i.e. much smaller than our adopted pericentre distance. It is thus unlikely to significantly influence the evolution
of the tidal bar in the dwarf.
For simplicity we also neglect other components of the Milky Way structure
such as the bulge, the stellar halo, the distinction into thin/thick disk etc. The mass of these components is at least a
few times smaller than the disk mass we assume and probably of the order of the uncertainty of the mass distribution
in the two main components (Widrow \& Dubinski 2005).
We placed the dwarf galaxy initially at an apocentre of a typical, eccentric orbit around the Milky Way
with apo- to pericentre distance ratio of $r_{\rm apo}/r_{\rm peri} = 120/25$ kpc. Due to a rather small mass
of the dwarf the orbit decays only a little in time as a result of dynamical friction. The dwarf's disk, the disk
of the Milky Way and the orbit were initially all coplanar and the dwarf's disk was in prograde rotation with
respect to the orbit. This configuration, together with the stability of the Milky Way's disk, ensures
that the tidal forces experienced by the dwarf during the evolution are controlled only by the dwarf's
distance from the host galaxy and not by other subtle changes of the potential.
The evolution of the dwarf was followed for 10 Gyr using the GADGET-2 $N$-body code (Springel, Yoshida \& White 2001;
Springel 2005) and we saved 201 simulation outputs, one every 0.05 Gyr (which is significantly smaller
than the dynamical time of stars in the dwarf's disk).
The code configuration was that of Newtonian space with vacuum boundary conditions and the gravity
calculations were done with a tree option. We adopted the softening scales of $\epsilon_{\rm d} = 0.02$ kpc and
$\epsilon_{\rm h} = 0.06$ kpc for the disk and halo of the dwarf and $\epsilon_{\rm D} = 0.05$ kpc and
$\epsilon_{\rm H} = 2$ kpc for the disk and halo of the Milky Way, respectively. These choices were informed
by the study of Power et al. (2003) and allow us to avoid strong discreteness and two-body effects in
systems of the given characteristic scales and particle numbers.
\section{Overview of the evolution}
The repeated action of the tidal force from the Milky Way made the dwarf galaxy evolve as envisioned by the tidal
stirring scenario originally proposed by Mayer et al. (2001) and studied in more detail by Klimentowski et al. (2009),
Kazantzidis et al. (2011) and {\L}okas et al. (2011, 2012). The main signatures of such evolution involve the mass
loss, the morphological transformation of the stellar component and the changes in the kinematics of the stars. Since
we are interested here in the formation and evolution of the bar we focus on the latter two. In order to give an idea
of the overall evolution of the dwarf we first perform rough measurements of the main, global features.
Inspection of the final state of the dwarf reveals that, in spite of the strong mass loss, it still possesses a well
visible bound stellar component of radius of the order of 1 kpc. Therefore we may measure the properties of the dwarf
at all times using stars within some fixed radius comparable to this value, or smaller.
One could choose to measure
the dwarf's properties at some characteristic scale such as the radius where maximum circular velocity occurs
(as was done in Klimentowski et al. 2009 and Kazantzidis et al. 2011), the half-light radius ({\L}okas et al.
2011, 2012) or the break radius (where the transition to the tidal tails occurs, see e.g.
{\L}okas, Gajda \& Kazantzidis 2013). However, the caveat of such measurements
is that these radii themselves evolve in time, so the interpretation of the results may not be straightforward.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox[5 5 182 270]{mass.eps}
\end{center}
\caption{The evolution of the mass of the dwarf galaxy. The upper panel shows measurements of the mass of stars,
the middle panel the mass of the dark matter component, and the lower one the mass-to-light ratio (assuming
$M/L = 2.5$ M$_{\odot}$/L$_{\odot}$ for the stars) within a fixed maximum radius $r_{\rm m}$.
In each panel the red curve corresponds to measurements within $r_{\rm m} = 0.5$ kpc and the blue one
within $r_{\rm m} = 1$ kpc. Vertical dotted lines indicate pericentre passages.}
\label{mass}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox[5 5 182 182]{shape.eps}
\end{center}
\caption{The evolution of the shape of the stellar component of the dwarf galaxy. The upper panel shows
the evolution of the axis ratios $c/a$ (red lower line) and $b/a$ (blue upper line) in time. The lower panel shows
the evolution of the bar mode $A_2$ (blue lower line) and the triaxiality parameter $T$ (red upper line) in time.
All measurements were performed for stars within a constant radius of 0.5 kpc. Vertical dotted lines indicate
pericentre passages.}
\label{shape}
\end{figure}
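The bar mode $A_2$ shown in the figure is commonly defined as the $m=2$ Fourier amplitude of the distribution of stellar azimuths in the disk plane, $A_2=\frac{1}{N}\,|\sum_j e^{2i\phi_j}|$ (the standard convention; the paper's exact normalization is assumed to match). A minimal sketch:

```python
import numpy as np

def bar_mode_a2(x, y):
    """m=2 azimuthal Fourier amplitude of a particle distribution."""
    phi = np.arctan2(y, x)
    return np.abs(np.exp(2j * phi).sum()) / len(phi)

# Collinear ("maximally barred") particles give A2 = 1 ...
x = np.linspace(-1.0, 1.0, 1000)
print(round(bar_mode_a2(x, np.zeros_like(x)), 2))  # -> 1.0

# ... while a uniform (axisymmetric) ring gives A2 ~ 0.
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
print(round(bar_mode_a2(np.cos(phi), np.sin(phi)), 2))  # -> 0.0
```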
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox[5 5 182 270]{kinematics.eps}
\end{center}
\caption{The evolution of the kinematics of the stellar component of the dwarf galaxy. Upper panel:
the evolution of the velocity dispersions of the stars in spherical coordinates $\sigma_r$ (red upper line),
$\sigma_\phi$ (green middle line) and $\sigma_\theta$ (blue lower line). Middle panel:
the evolution of the averaged 1D velocity dispersion $\sigma$ (black line) and the rotation velocity $V$ (blue line).
Lower panel: evolution of the anisotropy parameter $\beta$.
All measurements were performed for stars within a constant radius of 0.5 kpc. Vertical dotted lines indicate
pericentre passages.}
\label{kinematics}
\end{figure}
For each simulation output we determine the centre of the dwarf galaxy by calculating the centre of mass of the stars
iteratively in decreasing radii until convergence is reached.
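The iterative centring described above can be implemented as the standard "shrinking sphere" scheme; the shrink factor and stopping threshold below are illustrative choices, since the paper does not specify its convergence criterion:

```python
import numpy as np

def shrinking_sphere_centre(pos, shrink=0.9, min_particles=100):
    """Iteratively recompute the centre of mass of the particles inside
    a sphere whose radius shrinks geometrically each step, stopping when
    too few particles remain inside."""
    centre = pos.mean(axis=0)
    radius = np.linalg.norm(pos - centre, axis=1).max()
    while True:
        dist = np.linalg.norm(pos - centre, axis=1)
        inside = pos[dist < radius]
        if len(inside) < min_particles:
            return centre
        centre = inside.mean(axis=0)
        radius *= shrink
```

Each pass re-centres on the particles inside the current sphere, and the geometric shrinking of the radius guarantees termination.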
We start the analysis of the properties of the dwarf galaxy by measuring
the mass of stars and dark matter within a fixed radius $r_{\rm m}$ from the centre of the dwarf equal
to 1 and 0.5 kpc. The results are shown in the upper (stars) and middle
(dark matter) panel of Figure~\ref{mass} with the red curve corresponding to measurements within
$r_{\rm m} = 0.5$ kpc and the blue one within $r_{\rm m} = 1$ kpc. In the lower panel of the Figure we also plot
the mass-to-light ratio within these radii assuming $M/L = 2.5$ M$_{\odot}$/L$_{\odot}$ for the stars. The results
show that mass in both components is systematically lost from within both limiting radii, except for the period
after the first pericentre passage where the stellar content within 0.5 kpc is not diminished. The mass-to-light
ratio systematically decreases to converge to about $(8-9)$ M$_{\odot}$/L$_{\odot}$ at the end of the evolution.
This means that dark matter is stripped more efficiently than stars and that the stripping affects even the inner
part of the dwarf where the measurements were done.
To make sure we include a sufficient number of stars from the main body of the dwarf, while at the same time
avoiding contamination from the tidal tails, we fix the maximum radius at $r_{\rm m}=0.5$ kpc for all the
following measurements discussed in this section.
We note that the results of the measurements do not depend strongly on this choice.
Thus, for each output we select
stars within $r_{\rm m}$, find the principal axes of the stellar component from the tensor of inertia and rotate the
stellar positions and velocities to align them with the principal axes. In the following we will always refer to the
major, intermediate and minor axis of the stellar component as $x$, $y$ and $z$ respectively and the corresponding
axis lengths as $a$, $b$ and $c$. Having aligned the
stellar distribution in this way we estimated the axis ratios as a function of time. The results in terms of $c/a$ and
$b/a$ are shown in the upper panel of Figure~\ref{shape} as the red and blue line, respectively. At the first pericentre
passage the $b/a$ value drops significantly from the initial $b/a=1$ characteristic of the disk, while $c/a$ stays
roughly at the same level. This means that the initial disk transforms into a triaxial stellar component. This triaxial
shape is maintained until the end of the simulation, although both $b/a$ and $c/a$ increase.
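The shape determination from the tensor of inertia can be sketched as follows; this is a schematic Python illustration (the function name and the normalization of the second-moment tensor are illustrative assumptions, and the positions are assumed to be already centred on the dwarf):

```python
import numpy as np

def shape_parameters(pos):
    """Axis ratios b/a, c/a and triaxiality T = [1-(b/a)^2]/[1-(c/a)^2]
    from the eigenvalues of the second-moment (inertia) tensor of the
    stellar positions; eigvec rotates positions to the principal axes.
    Positions are assumed centred on the dwarf; T is undefined for a
    perfect sphere (0/0)."""
    tensor = pos.T @ pos / len(pos)            # 3x3 second-moment tensor
    eigval, eigvec = np.linalg.eigh(tensor)    # ascending eigenvalues
    order = eigval.argsort()[::-1]             # major axis first
    eigval, eigvec = eigval[order], eigvec[:, order]
    a, b, c = np.sqrt(eigval)
    T = (1.0 - (b / a) ** 2) / (1.0 - (c / a) ** 2)
    return b / a, c / a, T, eigvec
```

The returned eigenvector matrix can then be used to rotate the stellar positions and velocities into the principal-axis frame.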
\begin{figure*}
\begin{center}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot1xy.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot1xz.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot1yz.eps}
\leavevmode
\epsfxsize=0.95cm
\epsfbox[53 -28 87 149]{legend.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot2xy.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot2xz.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot2yz.eps}
\leavevmode
\epsfxsize=0.95cm
\epsfbox[53 -28 87 149]{legend.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot3xy.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot3xz.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot3yz.eps}
\leavevmode
\epsfxsize=0.95cm
\epsfbox[53 -28 87 149]{legend.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot4xy.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot4xz.eps}
\leavevmode
\epsfxsize=5cm
\epsfbox[0 0 185 200]{plot4yz.eps}
\leavevmode
\epsfxsize=0.95cm
\epsfbox[53 -28 87 149]{legend.eps}
\end{center}
\caption{Surface density distributions of the stars in the dwarf at subsequent apocentres at
$t=2.2, 4.35, 6.5, 8.65$ Gyr (rows) and along different lines of sight: the shortest ($z$), intermediate ($y$)
and longest ($x$) axis of the stellar component (columns, from left to right). The surface density
measurements were normalized to the maximum value $\Sigma_{\rm max} = 9.8 \times 10^5$ stars kpc$^{-2}$ occurring
for the line of sight along the $x$ axis at $t=2.2$ Gyr. Contours are equally spaced in $\log \Sigma$ with
$\Delta \log \Sigma = 0.05$.}
\label{surdenrot}
\end{figure*}
The properties of the shape are further illustrated in the lower panel of Figure~\ref{shape} where the red line shows
the triaxiality parameter $T = [1-(b/a)^2]/[1-(c/a)^2]$. Values of $0 < T < 1/3$ indicate an oblate
shape, $1/3 < T < 2/3$ a triaxial shape and $2/3 < T < 1$ a prolate shape. With $T > 0.7$ at almost all
outputs after the first pericentre we conclude that the shape of the stellar component is decidedly prolate. This
strongly suggests that at the first pericentre passage the stellar component of the dwarf formed a bar. To confirm
this statement we also measured the bar mode $A_2$ from the positions of the stars projected onto the
$xy$ plane (along the shortest axis). In general, the amplitude of the $m$th Fourier mode of the discrete distribution
of stars in the simulated galaxy is calculated as $A_m = (1/N) \left| \sum^{N}_{j=1} \exp (i m \phi_j) \right|$
where $\phi_j$ are the particle phases in cylindrical coordinates and $N$ is the total number of stars
(Sellwood \& Athanassoula 1986; Debattista \& Sellwood 2000).
The value of the bar mode $m=2$ is shown with the second (blue) line in the lower panel
of Figure~\ref{shape}. As values of $A_2$ are above 0.2 at all times after the first pericentre we conclude that indeed
a bar is formed at this time and this shape persists until the end of the simulation.
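The formula for $A_m$ translates directly into code; a minimal Python sketch (the function name is an illustrative assumption):

```python
import numpy as np

def fourier_mode(x, y, m):
    """Amplitude of the m-th Fourier mode of a discrete distribution,
    A_m = (1/N) |sum_j exp(i m phi_j)|, with phi_j the particle phases
    in cylindrical coordinates (Sellwood & Athanassoula 1986)."""
    phi = np.arctan2(y, x)
    return np.abs(np.exp(1j * m * phi).sum()) / len(phi)
```

A perfectly thin two-sided bar gives $A_2 = 1$, a uniform ring gives $A_2 \approx 0$, and the odd modes vanish for any symmetric distribution.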
This conclusion is further supported by the measurements of the kinematic properties of the stellar component.
The kinematic measurements were performed using a system of spherical coordinates
such that at every simulation output
the angle $\theta$ measures the angular distance from the shortest axis ($z$) of the
stellar component, while $\phi$ is measured in the $xy$ plane. In the
upper panel of Figure~\ref{kinematics} we plot the velocity dispersions of the stars in the three coordinates. The
three lines from the top to the bottom correspond to $\sigma_r$ (red), $\sigma_{\phi}$ (green) and $\sigma_{\theta}$
(blue) respectively. At the first pericentre passage all dispersions increase, signifying the transition from the
overall streaming motion of the stars (rotation) to the random motions manifesting themselves via the velocity
dispersion. However, the radial dispersion increases most as expected in the case of the formation of a bar which
is supported by more radial orbits. Later on the dispersions decrease due to mass loss, except for $\sigma_{\theta}$
which remains roughly constant as a result of the thickening of the stellar component in time.
The middle panel of Figure~\ref{kinematics} shows the overall contribution of random versus ordered motion in terms of
a 1D velocity dispersion $\sigma = [(\sigma_r^2 + \sigma_{\phi}^2 + \sigma_{\theta}^2)/3]^{1/2}$ (black line)
and the mean velocity (blue line) around the shortest axis (along the $\phi$ coordinate of the spherical system,
$V = V_{\phi}$) i.e. the rotation velocity. The overall trend is for the rotation
to decrease (especially at the first pericentre passage) and the dispersion to remain roughly constant in time
or slightly decrease due to mass loss.
The transition from mostly circular orbits of the stars (in the initial disk) to more radial orbits in the bar is also
very well visible in the evolution of the anisotropy parameter of the stars
$\beta = 1 - (\sigma_{\theta}^2 + \sigma_{\phi}^{'2})/(2 \sigma_{r}^2)$ where the second velocity moment
$\sigma_{\phi}^{'2} =\sigma_{\phi}^2 + V_{\phi}^2$ includes rotation. The dependence of $\beta$ on time is
shown in the lowest panel of Figure~\ref{kinematics}. Clearly, the circular orbits of stars ($\beta<0$)
in the initial disk are replaced after the first pericentre by more radial orbits ($\beta>0$)
which survive until the end of the simulation.
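The anisotropy parameter can be computed from Cartesian positions and velocities as sketched below; the conversion follows the standard spherical unit vectors ($\theta$ measured from the $z$ axis), and the function name is an illustrative assumption:

```python
import numpy as np

def anisotropy(pos, vel):
    """Velocity anisotropy beta = 1 - (sig_theta^2 + sig_phi'^2)/(2 sig_r^2),
    with sig_phi'^2 = <v_phi^2> the second moment including rotation.
    Positions are assumed centred and aligned with the principal axes."""
    x, y, z = pos.T
    r = np.linalg.norm(pos, axis=1)
    R = np.hypot(x, y)
    v_r = (pos * vel).sum(axis=1) / r
    v_phi = (x * vel[:, 1] - y * vel[:, 0]) / R
    v_theta = (z * (x * vel[:, 0] + y * vel[:, 1]) / R - R * vel[:, 2]) / r
    return 1.0 - (v_theta.var() + (v_phi ** 2).mean()) / (2.0 * v_r.var())
```

Isotropic random velocities give $\beta \approx 0$, purely radial orbits give $\beta = 1$, and tangentially dominated motion gives $\beta < 0$, as for the initial disk.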
Figure~\ref{surdenrot} shows the surface density distributions of the stars in the dwarf at subsequent apocentres of the
orbit (from the second at $t=2.2$ Gyr to the fifth at $t=8.65$ Gyr). The rows of the Figure correspond to the different
times of the simulation, while the columns to different lines of sight: along the shortest $z$ (face-on),
intermediate $y$ (side-on) and longest $x$ (end-on) axis of the stellar distribution (as determined
from stars within 0.5 kpc). In the left and middle column the
line of sight is perpendicular to the bar and the bar is clearly visible. At the second apocentre (at $t=2.2$ Gyr) the
stellar component of the dwarf is still restricted to the initial plane of the disk. At later times the stellar component
becomes thicker and at the final apocentre the outer density contours are almost spherical while the
bar-like shape is only preserved in the central part of the dwarf.
\section{The strength of the bar}
In the previous section we measured the global properties of the dwarf galaxy as a function of time, finding convincing
evidence for the formation of a bar after the first pericentre.
The strength of the bar can be quantified in more detail by measuring the profile of the bar mode $A_2$ as a function
of a cylindrical radius $R$. The coordinate system for these measurements was chosen so that $R$ is in the disk
equatorial plane ($xy$) as determined previously from the principal axes of the stellar component within 0.5 kpc.
The measurements of the values of the bar mode as a function of $R$ are shown in Figure~\ref{a2apoperi}
at subsequent apocentres (upper panel)
and pericentres (lower panel). At pericentres the dwarf is stretched by tidal forces from the Milky Way so that
the bar mode increases monotonically with radius. At apocentres, when the dwarf recovers its equilibrium, the bar mode
displays a characteristic shape, growing with radius up to a maximum value and then decreasing. After reaching a minimum
value, $A_2$ increases again as we transit from the bound component of the dwarf to the tidal tails. The tidal tails
are symmetrical elongated features on both sides of the dwarf (see e.g. {\L}okas et al. 2013)
which obviously results in the $A_2$ approaching unity at large radii. The maximum value of the $A_2$ mode decreases
with time which means that the bar becomes weaker as the evolution proceeds, a feature that was not obvious from the
global single-value measurement of $A_2$ shown in the lower panel of Figure~\ref{shape}.
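The measurement of $A_2$ as a function of cylindrical radius can be sketched as follows (Python; the number of bins and the maximum radius are illustrative choices, not the values used for the figures):

```python
import numpy as np

def a2_profile(x, y, n_bins=20, r_max=5.0):
    """Bar-mode amplitude A2 measured in bins of cylindrical radius R.
    Binning parameters are illustrative."""
    R = np.hypot(x, y)
    phi = np.arctan2(y, x)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    prof = np.full(n_bins, np.nan)
    for k in range(n_bins):
        sel = (R >= edges[k]) & (R < edges[k + 1])
        if sel.any():
            prof[k] = np.abs(np.exp(2j * phi[sel]).sum()) / sel.sum()
    return 0.5 * (edges[:-1] + edges[1:]), prof
```

For an elongated (bar-like) particle distribution the profile rises from a small central value towards the outer, strongly elongated regions, the characteristic shape seen at apocentres.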
It is also instructive to look at the higher order Fourier modes of the distribution of the stars. Figure~\ref{amapoperi}
compares the profiles of the non-zero even modes with $m=2,4,6,8$ at the second apocentre (upper panel)
and the first pericentre (lower panel). We note that the odd modes (not shown) all have very low amplitudes
within the main body of the dwarf which means that the distribution of the stars is symmetrical.
The measurements of even modes show that while
the $m>2$ even modes are systematically lower than the most significant $m=2$ bar mode and preserve the hierarchy at
all times, they nevertheless assume values decidedly different from zero. This means that the density distribution in the
tidally induced bar in our simulation cannot be described by the $m=2$ mode alone; the higher even-order components are
not negligible. Interestingly, the same behaviour was seen in simulated galaxies forming bars in isolation
by Athanassoula \& Misiriotis (2002, their figure 7) and by Ohta, Hamabe \& Wakamatsu (1990) who studied surface
photometry of six real barred spirals (see their figure 4). Such a hierarchy of modes thus seems to be a general
feature of barred galaxies, independent of their size and of the way they formed.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7cm
\epsfbox[10 10 140 285]{a2apoperi.eps}
\end{center}
\caption{The bar mode $A_2$ as a function of cylindrical radius $R$. The upper panel shows the measurements at
subsequent apocentres and the lower panel at subsequent pericentres. At apocentres the bar mode curves show a
characteristic shape with a maximum, while at pericentres they are monotonically increasing.}
\label{a2apoperi}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7cm
\epsfbox[10 10 140 285]{amapoperi.eps}
\end{center}
\caption{The even modes $A_m$ as a function of cylindrical radius $R$. The upper panel shows the profiles of the
first four even modes at the second apocentre ($t=2.2$ Gyr) and the lower panel the analogous measurements at the
first pericentre passage ($t=1.2$ Gyr). The modes all have a similar shape and preserve the hierarchy with
the lower order modes having always higher values.}
\label{amapoperi}
\end{figure}
We adopt the maximum value of the bar mode $A_{\rm 2,max}$ as the measure of the strength of the bar.
Figure~\ref{a2max} shows in the upper panel the cylindrical radius $R$ at which the first maximum of the bar mode
occurs as a function of time. The measurements are only significant between pericentres since near pericentres
the profile of $A_2$ is increasing and there is no well defined maximum (see Figure~\ref{a2apoperi}).
The lower panel of the Figure plots the
value of the maximum bar mode $A_{\rm 2,max}$ as a function of time. These detailed measurements confirm the
impression from Figure~\ref{a2apoperi}: in the long run the maximum of the bar mode decreases from about 0.7 after
the first pericentre to about 0.45 after the fourth one. Thus we conclude that the bar becomes weaker in time and
the changes in the bar strength are most significant at pericentres while between them $A_{\rm 2,max}$ remains
roughly constant.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox[5 5 182 182]{a2max.eps}
\end{center}
\caption{The radius at which the first maximum of the bar mode $A_{\rm 2,max}$ occurs (upper panel) and the value
of the maximum of the bar mode $A_{\rm 2,max}$ (lower panel) as a function of time. Vertical dotted lines indicate
pericentre passages.}
\label{a2max}
\end{figure}
\section{The length of the bar}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7cm
\epsfbox[2 0 190 200]{plot1xycont.eps}
\leavevmode
\epsfxsize=7cm
\epsfbox[0 0 365 355]{densitybar.eps}
\end{center}
\caption{Upper panel: an example of the measurement of the density profiles at the second apocentre ($t=2.2$ Gyr).
The contours indicate the levels of equal surface
density distribution of the stars with the bar clearly visible along the $x$ axis. The measurements are done by
counting stars in cylinders of radius 0.3 kpc in bins spaced equally in $\log x$ (along the bar, blue lines)
and $y$ (perpendicular to the bar, red lines). Lower panel: density profiles measured for the output shown in the upper
panel. The blue (red) dots indicate measurements along (perpendicular to) the bar. Solid black lines show the
analytic fits to the measured profiles. The vertical green dashed line indicates the radius where the two fitted
profiles are equal, adopted as the measure of the length of the bar. }
\label{density}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox[5 5 182 270]{barlength.eps}
\end{center}
\caption{The upper and middle panels show the parameters of the formula $\rho_1 (x) = C \exp [- (x/b)^{3/2}]$ fitted
to the measured density profiles along the bar as a function of time: the normalization $C$ and the scale-length $b$.
The lower panel plots the length of the bar $a_{\rm b}$ defined as the radius where the fitted density profiles
along the bar and perpendicular to it converge. Vertical dotted lines indicate
pericentre passages.}
\label{barlength}
\end{figure}
Athanassoula \& Misiriotis (2002) discussed different ways to measure the bar length.
In principle, the profiles of $A_2$ such as those shown in Figure~\ref{a2apoperi} could be used to determine the
length of the bar. Such a procedure works in general for simulated bars forming in isolated disk galaxies as in such
cases the profile of $A_2$ declines smoothly after reaching the maximum value. One can then find e.g. the radius where
$A_2$ drops to some fraction (e.g. $1/2$) of the maximum value and use this scale as the measure of the bar length.
In our case however, the profile does not always drop sufficiently before it starts to increase again due to
the transition to the tidal tails.
One solution would be to find a minimum of $A_2$ and adopt the corresponding scale as the length of the bar. However,
even after smoothing our $A_2$ profiles are rather noisy and such measurements result in very imprecise estimates of
the bar length.
We therefore measured the length of the bar using a different approach, similar to method (v) discussed by
Athanassoula \& Misiriotis (2002, section 8). Namely, we calculated the density of stars
along the bar major axis (i.e. along the $x$ axis of the stellar component)
and perpendicular to it (along the $y$ axis)
in cylinders of radius 0.3 kpc and logarithmic bins in $x$ and $y$. An example of such measurements after $t = 2.2$ Gyr
from the start of the simulation (second apocentre) is shown in the
upper panel of Figure~\ref{density}. The contours indicate equal levels of surface density of stars similar to those
plotted in Figure~\ref{surdenrot} with the bar well visible along the $x$ axis in the inner part of the picture. The
cylinders of stars selected for the measurements are indicated with blue (along the bar) and red (perpendicular to the
bar) lines. The measured density profiles are plotted as dots of the corresponding colours in the lower panel of
Figure~\ref{density}. As expected, the measurement along the bar (blue points) is rather flat in the centre,
reflecting the approximately constant density distribution of the stars in the bar. The difference between the densities
measured in the two directions first grows with radius and then the two converge. We will adopt the scale where the
two densities converge as the measurement of the length of the bar.
In order to avoid the noise due to the limited number of stars in the outer bins we fitted the profiles with analytic
functions. We find that the density along the bar is well approximated at all times by a modified exponential
$\rho_1 (x) = C \exp [- (x/b)^{3/2}]$ where $C$ is the normalization constant and $b$ is the characteristic
scale-length. The density distribution perpendicular to the bar is well approximated by a simple power law
$\rho_2 (y) = D y^{-d}$. For every simulation output we measured the density profiles in this way and fitted the
formulae to the measurements. The fitted values of the $C$ and $b$ parameters of the density distribution along
the bar are especially interesting and their evolution is shown as a function of time in the upper and middle panel of
Figure~\ref{barlength}. We see that the values of both parameters decrease in time reflecting the stripping of the
stars (decrease of normalization) and shortening of the bar (decrease of scale-length). The parameters of the power-law
fit $\rho_2 (y)$ do not show any clear trend in time. Both the normalization $D$ and the power-law index $d$ stay
roughly constant in time with $d$ in the range of 2-2.5.
We note that the particular choice of the fitting formulae does not have to apply to other kinds of
bars. We used them mainly as a tool to smooth the results of bar length measurements and because the formulae
were general enough to accommodate the density profiles of our bar at all times.
Solving $\rho_1 (x) = \rho_2 (y)$ with the fitted parameters we find for each simulation output the scale at which
both density profiles converge. The scale, which we identify with the length of the bar, $a_{\rm b}$, is plotted
as a function of time in the lower panel of Figure~\ref{barlength}. The length of the bar thus decreases during the
evolution from $a_{\rm b} = 2.4$ kpc after the first pericentre to $a_{\rm b} = 1.3$ kpc after the fourth. Note that
the decrease of the bar length occurs mainly at pericentre passages so it is due to tidal shocking and not to
secular evolution since between pericentres the length remains approximately constant in time.
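Solving $\rho_1 (x) = \rho_2 (y)$ for the fitted parameters amounts to a one-dimensional root find, which can be sketched by bisection on the log-densities (Python; the bracketing radii, the example parameter values in the test, and the function name are illustrative assumptions, not the fitted values):

```python
import numpy as np

def bar_length(C, b, D, d, x_in=0.5, x_out=20.0, n_iter=100):
    """Outer radius a_b where the in-bar profile rho1(x) = C exp[-(x/b)^1.5]
    meets the perpendicular power law rho2(y) = D y**(-d).  Bisection on
    the log-densities; assumes rho1 > rho2 at x_in and rho1 < rho2 at
    x_out (the modified exponential decays faster at large radii)."""
    f = lambda x: np.log(C) - (x / b) ** 1.5 - np.log(D) + d * np.log(x)
    lo, hi = x_in, x_out
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Working with log-densities keeps the bisection well conditioned even where both profiles are orders of magnitude below their central values.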
\section{The pattern speed of the bar}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.1cm
\epsfbox[5 2 182 267]{patternspeed.eps}
\end{center}
\caption{Upper panel: the angle between the nearest side of the bar's major axis and a fixed axis of the
simulation box as a function of time. Middle panel: the pattern speed of the bar as a function of time. Lower panel:
the ratio of the bar's pattern speed to the angular velocity of the dwarf on the orbit.
Vertical dotted lines indicate pericentre passages.}
\label{patternspeed}
\end{figure}
We finally also look at the pattern speed of the bar.
The simplest way to visualize its variability is to measure the angle
between the major axis of the stellar component and a fixed axis of the simulation box. For this measurement we use the
orientation of the major axis determined as before using stars within a constant radius of 0.5 kpc. The angle
between the major axis of the bar and a fixed axis in the initial orbital plane of the dwarf is shown as a function
of time in the upper panel of Figure~\ref{patternspeed}. At each output we calculated the angle between the fixed
axis and the nearest part of the bar, which is why the angle is always in the range between 0 and 90 degrees.
The measurements start after the first pericentre when the bar is formed. The bar lies almost exactly in
the orbital plane of the dwarf at all times as indicated by the values of the angle covering the whole range
of 0-90 degrees. The tumbling of the bar becomes noticeably slower in the second half of the evolution (after the third
pericentre passage), as shown by the slower changes of the angle.
The actual pattern speed calculated as a change between the directions of the bar's major axis in subsequent
simulation outputs is plotted in the middle panel of Figure~\ref{patternspeed}. The pattern speed decreases in
the long run but shows rather strong variability, mostly, though not exclusively, at the pericentre passages. We note
that the variability of the pattern speed mirrors closely our measurements of the mean rotation velocity
of the stars discussed in section 3 (see the blue line in the middle panel of Figure~\ref{kinematics}). This means
that after the first pericentre the rotation is mostly due to the tumbling of the bar.
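The differencing of bar position angles between consecutive outputs can be sketched as follows (Python; the function name is illustrative, and the wrapping step accounts for the 180-degree symmetry of the bar):

```python
import numpy as np

def pattern_speed(angles_deg, times):
    """Pattern speed from the bar position angle in consecutive outputs.
    The bar is symmetric under a 180-degree rotation, so the angle
    differences are wrapped into [-90, 90) degrees before dividing by
    the time step; returned in radians per time unit."""
    d = np.diff(angles_deg)
    d = (d + 90.0) % 180.0 - 90.0
    return np.deg2rad(d) / np.diff(times)
```

Without the wrapping, every passage of the major axis through the reference direction would produce a spurious jump of order 180 degrees in the measured speed.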
A particularly interesting behaviour occurs at the second pericentre passage when the pattern speed drops
almost to zero, i.e. the bar almost stops rotating but then speeds up again.
The variation of the pattern speed at this moment (and its general evolution) can be understood by referring to
Figure~\ref{surdenperi} where we show the orientation of the bar with respect to the direction towards the centre
of the Milky Way. The upper and lower panel show respectively the
projections of the stellar component onto the orbital plane at $t=3.3$ and $t=3.5$ Gyr, that is just before and
just after the second pericentre. The circular arrow in the centres marks the anti-clockwise direction of the bar's
rotation. In each panel the solid black line indicates the direction towards the Milky Way
and the two green arrows show the direction of tidal forces acting on the two sides of the bar. At the earlier time
depicted in the Figure (upper panel) the torque due to the tidal forces is directed so that it slows the bar.
At the later time (lower panel) the orientation of the bar with respect to the direction to the Milky Way changes
and the torque now speeds up the bar. As a result, the bar regains its pattern speed, returning almost to the same
level as before the pericentre.
The subsequent changes of the pattern speed, the more violent ones at pericentres, as well as the milder ones
between pericentres can all be traced to a particular orientation of the bar with respect to the tidal force
acting at a given time. In particular, at the third pericentre passage the bar is systematically slowed down
(the orientation is similar as in the upper panel of Figure~\ref{surdenperi}), while at the fourth pericentre
the bar is continuously accelerated (the orientation is similar as in the lower panel of Figure~\ref{surdenperi}).
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7cm
\epsfbox[0 0 185 205]{plot66small.eps}
\leavevmode
\epsfxsize=7cm
\epsfbox[0 0 185 205]{plot70small.eps}
\end{center}
\caption{The change of direction of tidal torque from the Milky Way near the second pericentre passage. Plots show
the surface distribution of the stars in the dwarf projected onto the orbital plane at $t=3.3$ (upper panel)
and $t=3.5$ Gyr (lower panel). The colour coding is similar as in Figure~\ref{surdenrot} but
normalized to the central density value for this stage and line of sight
$\Sigma_{\rm max} = 6.8 \times 10^5$ stars kpc$^{-2}$.
In both panels the curved arrows in the centre indicate the direction of rotation of the
bar. The solid lines show the direction towards the Milky Way. Green arrows indicate the tidal forces acting on the bar.
Between the outputs the direction of the tidal torque changes from one slowing down the bar to one speeding it up.}
\label{surdenperi}
\end{figure}
It is also interesting to compare the pattern speed of the bar $\Omega_{\rm p}$ to the angular velocity
of the dwarf galaxy on its orbit $\Omega_{\rm o}$.
The ratio of the two quantities is plotted in the lower panel of Figure~\ref{patternspeed}. At pericentres
the dwarf obviously moves very fast on its rather eccentric orbit so the ratio $\Omega_{\rm p}/\Omega_{\rm o}$
is close to zero. Between pericentres the ratio increases up to a factor of a few and at apocentres is never below two.
Note that $\Omega_{\rm p}/\Omega_{\rm o} =1$ would mean that the bar is tidally locked as the Moon is locked to the
Earth and only one and always the same side of the bar would be directed towards the Milky Way. This is clearly not the
case. However, an interesting evolutionary stage takes place near the fourth apocentre of the orbit ($t=6-7$ Gyr)
when $\Omega_{\rm p}/\Omega_{\rm o} \approx 5/2$ and the angle between the bar and the direction to the Milky Way
stays in the range of $0-20$ degrees. This points to the possibility of a 5/2 resonance between the rotational and orbital
motion of the bar, similar to the 3/2 resonance of Mercury around the Sun (Correia \& Laskar 2004).
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7cm
\epsfbox[0 0 190 190]{corotationapo.eps}
\end{center}
\caption{The circular frequency of the dwarf galaxy as a function of radius (red line) in comparison with the
pattern speed of the bar $\Omega_{\rm p}$ (blue line) at the second apocentre ($t=2.2$ Gyr).
These two quantities are equal at the corotation radius $R_{\rm CR} = 3.6$ kpc marked with a black arrow.
The second black arrow
indicates the length of the bar at this time $a_{\rm b} = 2.4$ kpc. The ratio of the two $s = R_{\rm CR}/a_{\rm b}=1.5$
is close to unity, so the bar is rather fast.}
\label{corotationapo}
\end{figure}
Finally, we estimate the speed of the bar in terms of the quantity $s = R_{\rm CR}/a_{\rm b}$ where $R_{\rm CR}$
is the corotation radius (where the circular frequency of the dwarf galaxy $\Omega$ equals
the pattern speed $\Omega_{\rm p}$) and $a_{\rm b}$ is the length of the bar estimated in the previous section.
According to the theory of bars (Binney \& Tremaine 2008) they can only exist at radii $R<R_{\rm CR}$, so all bars
have $s > 1$ and are classified as fast if $s \approx 1$ or slow when $s \gg 1$. We calculated the corotation radii
for a number of outputs and compared them with corresponding bar lengths. An example of such a comparison
for the second apocentre ($t=2.2$ Gyr) is shown in Figure~\ref{corotationapo}. The red line is the circular frequency
of the dwarf galaxy $\Omega = [G M(r)/r^3]^{1/2}$ as a function of radius (calculated with radial step of 0.1 kpc)
and the blue line is the pattern speed from the middle panel
of Figure~\ref{patternspeed}. The two are equal at $R_{\rm CR} = 3.6$ kpc while the bar length at this time is
$a_{\rm b} = 2.4$ kpc (see the lower panel of Figure~\ref{barlength}). This gives us $s = 1.5$, a value not very different
from unity, thus the bar at this stage may be classified as fast.
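The corotation-radius determination can be sketched as follows (Python; the value of $G$ is quoted in kpc\,(km/s)$^2$/M$_\odot$ units, the grid step matches the 0.1 kpc radial step quoted above, and the function names are illustrative assumptions):

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def circular_frequency(radii, masses, r_grid):
    """Omega(r) = [G M(<r) / r^3]^(1/2) from the cumulative mass profile
    of the particles (radii and masses need not be sorted)."""
    order = np.argsort(radii)
    m_cum = np.cumsum(masses[order])
    M = np.interp(r_grid, radii[order], m_cum)
    return np.sqrt(G * M / r_grid ** 3)

def corotation_radius(radii, masses, omega_p, r_max=10.0, dr=0.1):
    """Grid radius where Omega(r) is closest to the pattern speed."""
    r_grid = np.arange(dr, r_max + dr, dr)
    omega = circular_frequency(radii, masses, r_grid)
    return r_grid[np.argmin(np.abs(omega - omega_p))]
```

Since $\Omega(r)$ decreases monotonically outside the bulk of the mass, the nearest-grid-point search is sufficient at the 0.1 kpc resolution.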
\begin{table}
\begin{center}
\caption{Estimates of the speed of the bar at subsequent apocentres. }
\begin{tabular}{clccc}
apocentre & time & $R_{\rm CR}$ & $a_{\rm b}$ & $R_{\rm CR}/a_{\rm b}$ \\
& [Gyr] & [kpc] & [kpc] & \\
\hline
2 & 2.2 & 3.6 & 2.4 & 1.5 \\
3 & 4.35 & 3.4 & 1.8 & 1.9 \\
4 & 6.5 & 4.7 & 1.6 & 3.0 \\
5 & 8.65 & 4.4 & 1.3 & 3.3 \\
\hline
\label{tablespeed}
\end{tabular}
\end{center}
\end{table}
Similar calculations at all apocentres give the results listed in Table~\ref{tablespeed}. We restricted ourselves to
the measurements at apocentres because near pericentres the pattern speed varies strongly and in particular can be very
low which leads to high and thus meaningless estimates of the corotation radius. We see that at subsequent apocentres
the values of $s$ systematically increase, up to $s=3.3$ at the last apocentre, so the bar becomes slower with time.
This result is not obvious at first sight because $R_{\rm CR}$ and $a_{\rm b}$ behave differently in time.
While the bar length $a_{\rm b}$ decreases monotonically between apocentres (see Table~\ref{tablespeed}), the
$R_{\rm CR}$ increases or decreases with time. The latter is itself a combination of the
circular frequency, which decreases monotonically with time
as a result of mass loss, and the pattern speed, which does not have an obvious monotonic behaviour. The overall measure
of the bar speed in terms of $R_{\rm CR}/a_{\rm b}$ confirms however the general impression from the behaviour of the much
simpler quantity such as the mean rotation velocity (see the middle panel of Figure~\ref{kinematics}) or the mean
angular momentum which behaves in a way very similar to the rotation velocity.
The main physical reason for the slow-down of the bar over large
time scales can be traced to the effect of tidal stripping of the stars which happens preferentially to stars on
prograde orbits (as is the case in our simulation) due to resonances between the orbital and intrinsic motion
of the stars (D'Onghia et al. 2010). The stripped stars feed the tidal tails of the dwarf galaxy and
move on their own orbits around the Milky Way. The angular momentum of the stellar component is thus carried away
by the stripped stars rather than transferred to the dark matter particles of the dwarf's halo; indeed, the angular
momentum of the halo does not change significantly during the evolution.
\section{Discussion}
We studied the formation and evolution of a stellar bar induced in a disky dwarf galaxy orbiting the Milky Way
by tidal forces from the host. We measured the main properties of the bar such as its strength, length and pattern
speed as a function of time and related the pattern speed to the dwarf's circular frequency.
The comparison between the two quantities led us to conclude that
while the bar is quite fast at its birth after the first pericentre passage, it becomes slower with time. This has
important consequences for understanding the process of formation of dwarf spheroidal galaxies of the Local Group.
As the tidal evolution proceeds, the bar becomes shorter and thicker and the stellar component changes its shape
towards spherical. One could attempt to explain the shortening of the bar as due to the mass loss that results
in decreasing the circular frequency of the dwarf. If the pattern speed of the bar remained constant in time,
the corotation radius would also decrease. Since the bar cannot extend beyond the corotation radius, this would explain
why it becomes shorter and thus relate the mass loss to the morphological transformation.
The actual behaviour, however, turns out to be more complicated. As we demonstrated in the previous sections,
the pattern speed of the bar is not constant but subject to abrupt changes near pericentres and
more benign ones in the other parts of the orbit, depending on the orientation of the bar major axis with respect
to the direction of the tidal force. This causes the corotation radius to vary non-monotonically with
time. There is thus no simple relation between the length of the bar and the corotation radius. The bar seems to
become shorter just due to randomization of stellar orbits resulting from tidal shocking.
A common intermediate stage in the evolution of disky dwarfs towards a more spherical shape is generally
believed to be bar buckling. Buckling seemed to occur in a large fraction of the tidally
induced bars studied by Mayer et al. (2001). They claimed that buckling contributes to the heating of the disk
even more than the tidal heating itself. The occurrence of buckling is usually accompanied by an increase of the ratio of
velocity dispersions along the shortest axis and in the bar plane $\sigma_z/\sigma_R$ and by non-zero amplitude of the
odd Fourier mode $A_1$ in the edge-on view (e.g. Martinez-Valpuesta et al. 2006).
We have looked for signatures of bending instabilities in our bar by measuring the ratio
$\sigma_z/\sigma_R$ as a function of time
(for stars within a radius of 1 kpc). This ratio is close to 0.4 after the first pericentre when the bar forms
and increases steadily with time to about 0.6 at the end of the evolution. There is no abrupt increase of
$\sigma_z/\sigma_R$ that would signify the presence of buckling.
We have also measured the $A_1$ mode for the stars seen along the intermediate axis (as in the middle
column of Figure~\ref{surdenrot}) and did not find it significantly different from zero.
The actual presence of
buckling was only detected by visual inspection of surface density maps such as those in Figure~\ref{surdenrot}.
Slight asymmetries in the distribution of stars with respect to the bar plane in the edge-on view were observed
for a brief period between 3.5 and 3.8 Gyr from the start of the simulation, that is, soon after the second pericentre
passage. As expected, the occurrence of buckling was
accompanied by the decrease in the bar strength ($A_2$ and $A_{2,{\rm max}}$) seen in the lower panels
of Figures~\ref{shape} and \ref{a2max} at these times. This brief period of buckling instability was followed by the
formation of the boxy/peanut shape visible in the edge-on view of the surface distribution of stars at $t=4.35$ Gyr
in Figure~\ref{surdenrot}.
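The bar-strength and buckling diagnostics discussed above (the $m=2$ and $m=1$ Fourier amplitudes and the dispersion ratio $\sigma_z/\sigma_R$) follow standard definitions and can be computed directly from particle data. The sketch below is illustrative only, not the code used in this work; function and variable names are our own, and the bar plane is assumed to be $x$-$y$:

```python
import numpy as np

def bar_diagnostics(pos, vel):
    """Bar-strength and buckling diagnostics from stellar particle data.

    pos, vel : (N, 3) arrays; the bar (disk) plane is assumed to be x-y.
    Returns the m=2 Fourier amplitude A2 (bar strength), the m=1
    amplitude in the edge-on x-z view (buckling indicator), and the
    velocity-dispersion ratio sigma_z / sigma_R.
    """
    x, y, z = pos.T
    phi = np.arctan2(y, x)                         # azimuth in the bar plane
    A2 = np.abs(np.mean(np.exp(2j * phi)))         # m=2 mode of the surface density
    theta = np.arctan2(z, x)                       # angle in the edge-on view
    A1_edge = np.abs(np.mean(np.exp(1j * theta)))  # m=1 mode, non-zero during buckling
    R = np.hypot(x, y)
    vR = (x * vel[:, 0] + y * vel[:, 1]) / np.maximum(R, 1e-12)  # radial velocity
    sigma_ratio = np.std(vel[:, 2]) / np.std(vR)
    return A2, A1_edge, sigma_ratio
```

For a strongly barred particle distribution $A_2$ approaches unity, while for an axisymmetric disk it scales as $N^{-1/2}$.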
In this paper, we explored the properties of the tidally induced bar only in one initial and rather special configuration,
that of coplanar disks of the dwarf and the host galaxy and the dwarf's disk rotation exactly prograde with respect
to the orbital motion. While the orientation of the Milky Way disk with respect to the orbital plane of the dwarf seems
of little consequence, the angle between the dwarf disk's angular momentum and the orbital angular momentum has
dramatic consequences. Preliminary simulations show that if the dwarf disk orientation is exactly retrograde the
bar does not form at all and the dwarf's stellar component remains disky. For intermediate orientations the bar does form
but it is typically weaker than in the case studied here. The dependence of the properties of tidally induced bars
on this and other parameters will be discussed in follow-up papers.
\section*{Acknowledgements}
This research was supported in part by PL-Grid Infrastructure,
by the Polish National Science Centre under grants NN203580940,
2013/10/A/ST9/00023 and the Polish-French HECOLS collaboration including grant 2013/08/M/ST9/00664.
EA acknowledges financial support to the DAGAL network from the People Programme (Marie Curie Actions)
of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number
PITN-GA-2011-289313. She also acknowledges financial support from the CNES
(Centre National d'Etudes Spatiales - France). We are grateful to the organizers and participants of the
conference ``The Role of Bars in Galaxy Evolution" in Granada in May 2013 for inspiration and discussions.
We would like to thank L. Widrow for providing procedures to generate $N$-body models
of galaxies for initial conditions. EL{\L} is grateful for the hospitality of Laboratoire d'Astrophysique
de Marseille at the time of her visit and
AdP for the hospitality of the Copernicus Center in Warsaw during his visit.
MS, GG and KK acknowledge the summer student program of the Copernicus Center.
\section{Introduction}
{Cyber-physical systems (CPSs) refer to a class of engineering systems that integrate the cyber aspects of computation and communication with physical entities \cite{c1}.} Integrating communication and computation with sensing and control elements has made CPSs a key enabler in designing emerging autonomous and smart systems with the promise of bringing unprecedented benefits to humanity. CPSs have already had a profound impact on variety of engineering sectors, including, process industries \cite{c2}, robotics \cite{c3}, {smart grids \cite{c4}, and intelligent transportation \cite{c5}, to name a few.} Despite their advantages with vast growth and success, these systems are vulnerable to cyber-physical threats and can face fatal consequences if not empowered with resiliency. {The importance of designing resilient and secure CPSs can be witnessed from severe damages made by recently reported cyber-physical attacks \cite{c6_7}}.\par
\subsection{Related Work}
Wireless sensor networks (WSNs) are a class of CPSs for which a set of sensors are spatially distributed to monitor and estimate a variable of interest (e.g., location of a moving target, state of a large-scale system, etc.), {and have various applications such as surveillance and monitoring, target tracking, and active health monitoring \cite{c8}}. In centralized WSNs, all sensors broadcast their measurements to a center at which the information is fused to estimate the state \cite{c9}. These approaches, however, are communication demanding and prone to single-point-of-failure. {To estimate the state with reduced communication burden, a distributed Kalman filter (DKF) is presented in \cite{c10}-\cite{d3}, in which sensors exchange their information only with their neighbors, not with all agents in the network or a central agent.} {Cost constraints on sensor nodes in a WSN result in corresponding constraints on resources such as energy and communications bandwidth. Sensors in a WSN usually carry limited, irreplaceable energy resources and lifetime adequacy is a significant restriction of almost all WSNs. Therefore, it is of vital importance to design event-triggered DKF to reduce the communication burden which consequently improves energy efficiency. To this end, several energy-efficient event-triggered distributed state estimation approaches are presented for which sensor nodes intermittently exchange information \cite{c13}-\cite{c16}. Moreover, the importance of event-triggered state estimation problem is also reported for several practical applications such as smart grids and robotics \cite{r02}-\cite{r04}. Although event-triggered distributed state estimation is resource-efficient, it provides an opportunity for an attacker to harm the network performance and its connectivity by corrupting the information that is exchanged among sensors, as well as to mislead the event-triggered mechanism. 
Thus, it is of vital importance to design a resilient event-triggered distributed state estimation approach that can perform accurate state estimation despite attacks.} \par
In recent years, secure estimation and secure control of CPSs have received significant attention and remarkable results have been reported for the mitigation of cyber-physical attacks, {including denial-of-service (DoS) attacks \cite{c17}-\cite{c18}, false data injection attacks \cite{c19}-\cite{c23}, and bias injection attacks \cite{c24}. }For the time-triggered distributed scenario, several secure state estimation approaches are presented in \cite{c26}-\cite{c312}. Specifically, in \cite{c26}-\cite{c30} the authors presented a distributed estimator that allows agents to perform parameter estimation in the presence of attacks by discarding information from the adversarial agents. A Byzantine-resilient distributed estimator with deterministic process dynamics is discussed in \cite{c27}. The same authors then solved the resilient distributed estimation problem with communication losses and intermittent measurements in \cite{c28}. Attack analysis and detection for distributed Kalman filters are discussed in \cite{c281}. Resilient state estimation subject to DoS attacks for power system and robotics applications is presented in \cite{c310}-\cite{c312}. Although meritable, these results for time-triggered resilient state estimation are not applicable to event-triggered distributed state estimation problems. {Recently, the authors in \cite{c17r} addressed event-triggered distributed state estimation under DoS attacks by employing the covariance intersection fusion approach. Although elegant, the presented approach is not applicable to mitigating the effect of deception attacks. To our knowledge, resilient state estimation for the event-triggered DKF under deception attacks has not been considered in the literature. For the first time, this work not only detects and mitigates the effects of attacks on sensors and communication channels but also presents a mathematical analysis of different triggering misbehaviors.}
\vspace{-0.35cm}
\subsection{Contributions and outline}
\vspace{-0.1cm}
{This paper contributes to the analysis, detection, and mitigation of attacks on the event-triggered DKF. To our knowledge, it is the first paper to analyze how an attacker can leverage the event-triggering mechanism to damage the state estimation process over WSNs. It also presents a detection mechanism for attacks on the event-triggered DKF that does not require the restrictive Gaussian assumption on the probability density function of the attack signal. Finally, a novel meta-Bayesian attack detection mechanism is presented that performs second-order inference to detect stealthy attacks. The details of these contributions are as follows:}
\begin{itemize}
\item {Attack analysis: We show that an attacker can cause non-triggering misbehavior so that the compromised sensors do not broadcast any information to their neighbors. This can significantly harm the network connectivity and its collective observability, which is a necessary condition for solving the distributed state estimation problem. We then show that an attacker can achieve continuous-triggering misbehavior, which drains the communication resources.}
\item {Attack detection: To detect adversarial intrusions, a Kullback-Leibler (KL) divergence based detector is presented and estimated via a k-nearest-neighbors approach to obviate the restrictive Gaussian assumption on the probability density function of the attack signal.}
\item {Attack mitigation: To mitigate attacks on event-triggered DKF,
a meta-Bayesian approach is employed that performs second-order inference to form confidence and trust about the truthfulness or legitimacy of the outcome of its own first-order inference (i.e., the posterior belief about the state estimate) and those of its neighbors, respectively. Each sensor communicates its confidence to its neighbors and also incorporates the trust about its neighbors into its posterior update law to put less weight on untrusted data and thus successfully discard corrupted information.}
\end{itemize}
\textit{Outline:} The paper is organized as follows. Section II outlines the preliminary background for the event-triggered DKF. Section III formulates the effect of attacks on the event-triggered DKF and analyzes triggering misbehaviors for it. Attack detection mechanism and confidence-trust based secure event-triggered DKF are presented in Section IV and V, respectively. The simulation verifications are provided in Section VI. Finally, concluding remarks are presented in Section VII.
\vspace{-0.1cm}
\section{Notations and Preliminaries}
\subsection{Notations}
The data communication among sensors in a WSN is captured by an undirected graph ${\rm {\mathcal G}}$, consisting of a pair $({\rm {\mathcal V}},{\rm {\mathcal E}})$, where ${\rm {\mathcal V}}=\{ 1,2,\ldots ,N\}$ is the set of nodes or sensors and ${\rm {\mathcal E}}\subset {\rm {\mathcal V}}\times {\rm {\mathcal V}}$ is the set of edges. An edge from node $j$ to node $i,$ represented by $(j,i)$, implies that node $j$ can broadcast information to node $i$. Moreover, $N_{i} =\{ j:(j,i)\in {\rm {\mathcal E}}\}$ is the set of neighbors of node $i$ on the graph ${\rm {\mathcal G}}.$ An induced subgraph ${\rm {\mathcal G}}^{w}$ is obtained by removing a set of nodes ${\rm {\mathcal W}}\subset {\rm {\mathcal V}}$ from the original graph ${\rm {\mathcal G}}$; it is represented by the node set ${\rm {\mathcal V}\backslash {\mathcal W}}$ and contains the edges of ${\rm {\mathcal E}}$ with both endpoints in ${\rm {\mathcal V}\backslash {\mathcal W}}$.
Throughout this paper, ${\bf {\mathbb{R}}}$ and ${\bf {\mathbb{N}}}$ represent the sets of real numbers and natural numbers, respectively. $A^{T}$ denotes the transpose of a matrix $A$. $tr(A)$ and $\max (a_{i} )$ represent the trace of a matrix $A$ and the maximum value in a set, respectively. ${\rm {\mathcal C}}(S)$ represents the cardinality of a set $S$. $\sigma _{\max } (A),$ $\lambda _{\max } (A),$ and $I_{n}$ represent the maximum singular value of $A$, the maximum eigenvalue of $A$, and the identity matrix of dimension $n$, respectively. ${\rm {\mathcal U}}(a,b)$ with $a<b$ denotes a uniform distribution on the interval between $a$ and $b$. Consider $p_{X} (x)$ as the probability density of the random variable or vector $x$ with $X$ taking values in the finite set $\{ 0,...,p\}.$ When a random variable $X$ is distributed normally with mean $\nu$ and variance $\sigma ^{2},$ we use the notation $X\sim {\rm {\mathcal N}}(\nu ,\sigma ^{2} )$. ${\bf {\rm E}}[X]$ and $\Sigma _{X} ={\bf {\rm E}}[(X-{\bf {\rm E}}[X])(X-{\bf {\rm E}}[X])^{T} ]$ denote, respectively, the expectation and the covariance of $X.$ Finally, ${\bf {\rm E}}[.|.]$ represents the conditional expectation.
\vspace{-0.3cm}
\subsection{Process Dynamics and Sensor Models}
Consider a process that evolves according to the following dynamics
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum820040}
x(k+1)=Ax(k)\, +\, w(k),
\end{equation}
where $A$ denotes the process dynamics matrix, and $x(k)\in {\bf {\mathbb R}}^{n}$ and $w(k)$ are, respectively, the process state and process noise at time $k$. The process noise $w(k)$ is assumed to be independent and identically distributed (i.i.d.) with a Gaussian distribution, and the initial process state is $x_{0} \sim {\rm {\mathcal N}}(\hat{x}_{0} ,P_{0} )$ with mean $\hat{x}_{0}$ and covariance $P_{0}$.
The goal is to estimate the state $x(k)$ for the process \eqref{ZEqnNum820040} in a distributed fashion using $N$ sensor nodes that communicate through the graph ${\rm {\mathcal G}}$, and their sensing models are given by
\begin{equation} \label{ZEqnNum687942}
y_{i} (k)=C_{i} x_{i} (k)\, +\, v_{i} (k);\, \, \, \, \, \, \, \, \, \, \, \forall i=1,\cdots ,N,
\end{equation}
where $y_{i} (k)\in {\bf {\mathbb R}}^{p}$ represents the measurement data with $v_{i} (k)$ as the i.i.d. Gaussian measurement noise and $C_{i}$ as the observation matrix of the sensor $i$, respectively.
\smallskip
\noindent
\textbf{Assumption 1}. The process noise $w(k),$ the measurement noise $v_{i} (k),$ and the initial state $x_{0}$ are uncorrelated random vector sequences.
\smallskip
\noindent
\textbf{Assumption 2}. The sequences $w(k)$ and $v_{i}(k)$ are zero-mean Gaussian noise with
\vspace{-0.15cm}
\[{\bf {\rm E}}[w(k)(w(h))^{T} ]=\mu _{kh} Q\, \, \, \, \]
and
\vspace{-0.15cm}
\[{\bf {\rm E}}[v_{i} (k)(v_{i} (h))^{T} ]=\mu _{kh} R_{i} ,\]
with $\mu _{kh} =0$ if $k\ne h$, and $\mu _{kh} =1$ otherwise. Moreover, $Q\ge0$ and $R_{i}>0$ denote the noise covariance matrices for process and measurement noise, respectively and both are finite.
\smallskip
\noindent
\textbf{Definition 1. (Collectively observable) \cite{c11}.} We call the plant dynamics \eqref{ZEqnNum820040} and the measurement equation \eqref{ZEqnNum687942} collectively observable if the pair $(A,C_{S} )$ is observable, where $C_{S}$ is the stacked column vector of $C_{j}, \,\,\forall j \in S$ with $S\subseteq {\rm {\mathcal V}}$ and ${\rm {\mathcal C}}(S)>N/2$.
\smallskip
\noindent
\textbf{Assumption 3.} The plant dynamics \eqref{ZEqnNum820040} and the measurement equation \eqref{ZEqnNum687942} are collectively observable, but not necessarily locally observable, i.e., $(A,C_{i} )$ $\, \forall i\in {\rm {\mathcal V}}$ is not necessarily observable.
Assumptions $1$ and $2$ are standard assumptions in Kalman filters. {Assumption 3 states that the state of the target in \eqref{ZEqnNum820040} is not necessarily observable from the measurements of any single sensor, i.e., the pairs $(A,C_{i} )$ need not be observable (see, for instance, \cite{c11} and \cite{c30}). It also provides the collective observability condition necessary for the estimation problem to be solvable. Note also that under Assumption 2, i.e., finite process and measurement covariances, the stochastic observability rank condition coincides with the deterministic one [Theorem 1, 43]. Therefore, the deterministic observability rank condition holds true irrespective of the process and measurement noise.}
\vspace{-0.3cm}
\subsection{Overview of Event-triggered Distributed Kalman Filter}
This subsection presents the overview of the event-triggered DKF for estimating the process state $x(k)$ in \eqref{ZEqnNum820040} from a collection of noisy measurements $y_{i} (k)$ in \eqref{ZEqnNum687942}.
Let the prior and posterior estimates of the target state $x(k)$ for sensor node $i$ at time $k$ be denoted by $x_{i}(k|k-1)$ and $x_{i}(k|k)$, respectively. In the centralized Kalman filter, a recursive rule based on Bayesian inference is employed to compute the posterior estimate $x_{i}(k|k)$ from the prior estimate $x_{i}(k|k-1)$ and the new measurement $y_{i}(k)$. When the next measurement arrives, the previous posterior estimate is used as the new prior and the same recursive estimation rule is applied. In the event-triggered DKF, the recursion rule for computing the posterior incorporates not only the sensor's own prior and observations, but also its neighbors' predictive state estimates. Sensor $i$ communicates its prior state estimate to its neighbors only if, after a new observation arrives, the norm of the error between the actual output and the predictive output exceeds a threshold. That is, it employs the following event-triggered mechanism for the exchange of data with its neighbors
\vspace{-0.15cm}
\begin{equation} \label{eq3x}
\left\| y_{i} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| <\alpha,
\end{equation}
where $\alpha$ denotes a predefined threshold for event-triggering. Moreover, $\tilde{x}_{i} (k)$ denotes the predictive state estimate for sensor $i$ and follows the update law
\begin{equation} \label{ZEqnNum926700}
\tilde{x}_{i} (k)=\zeta _{i} (k)x_{i} (k|k-1)+(1-\zeta _{i} (k))A\tilde{x}_{i} (k-1),\, \, \forall i\in {\rm {\mathcal V}},
\end{equation}
with $\zeta _{i} (k)\in \left\{0,1\right\}$ as the transmit function. {Note that the predictive state estimate update equation in (4) depends on the value of the transmit function ${{\zeta }_{i}}(k)$, which is either zero or one depending on the triggering condition in (3). When ${{\zeta }_{i}}(k)=1$, the prior and predictive state estimates coincide, i.e., ${{\tilde{x}}_{i}}(k)={{x}_{i}}(k|k-1)$. When ${{\zeta }_{i}}(k)=0,$ however, the predictive state estimate is propagated from its own previous value, i.e., ${{\tilde{x}}_{i}}(k)=A{{\tilde{x}}_{i}}(k-1).$ }
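As an illustration, the triggering test (3) and the predictive update (4) for a single sensor can be sketched as follows (a hypothetical helper, not part of the cited algorithms; names are our own):

```python
import numpy as np

def predictive_update(A, C_i, x_prior, x_tilde_prev, y_i, alpha):
    """Event trigger (3) and predictive state-estimate update (4) for one sensor.

    Returns the new predictive estimate x_tilde and whether the node
    broadcasts (zeta_i = 1) at this step.
    """
    # Trigger: broadcast only if the predicted output disagrees with the
    # new measurement by more than the threshold alpha.
    zeta = int(np.linalg.norm(y_i - C_i @ x_tilde_prev) >= alpha)
    # Eq. (4): on an event adopt the own prior; otherwise propagate the
    # previous predictive estimate through the process dynamics.
    x_tilde = zeta * x_prior + (1 - zeta) * (A @ x_tilde_prev)
    return x_tilde, bool(zeta)
```

Between events the neighbors can reproduce $\tilde{x}_i(k)$ themselves by iterating $A\tilde{x}_i(k-1)$, which is what makes the scheme communication-efficient.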
Incorporating \eqref{ZEqnNum926700}, the following recursion rule is used to update the posterior state estimate in the event-triggered DKF \cite{c13}, \cite{c15} for sensor $i$ as
\begin{equation} \label{ZEqnNum257073}
\begin{array}{l} {x_{i} (k|k)=x_{i} (k|k-1)+K_{i} (k)(y_{i} (k)-C_{i} x_{i} (k|k-1))} \\ {\, \, \, \, \, \, \,\, \, \, \, \, \,\, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i} (k) ),} \end{array}
\end{equation}
where
\vspace{-0.25cm}
\begin{equation} \label{ZEqnNum569383}
x_{i} (k|k-1)=Ax_{i} (k-1|k-1),
\end{equation}
is the prior update. Moreover, the second and the third terms in \eqref{ZEqnNum257073} denote, respectively, the innovation part (i.e., the estimation error based on sensor $i$'s new observation and its prior prediction) and the consensus part (i.e., the deviation of the sensor's state estimate from its neighbors' state estimates). We call this recursion rule the \textit{Bayesian first-order inference} on the posterior, which provides the belief over the value of the state.
Moreover, $K_{i} (k)$ and $\gamma _{i}$ in \eqref{ZEqnNum257073} denote, respectively, the Kalman gain and the coupling coefficient. The Kalman gain $K_{i} (k)$ in \eqref{ZEqnNum257073} depends on the estimation error covariance matrices associated with the prior $x_{i} (k|k-1)$ and the posterior $x_{i} (k|k)$ of sensor $i$. Define the prior and posterior estimation error covariances as
\begin{equation} \label{ZEqnNum606287}
\begin{array}{l} {P_{i} (k|k-1)={\bf {\rm E}}[(x(k)-x_{i} (k|k-1))(x(k)-x_{i} (k|k-1))^{T} ],} \\ {P_{i} (k|k)={\bf {\rm E}}[(x(k)-x_{i} (k|k))(x(k)-x_{i} (k|k))^{T} ].} \end{array}
\end{equation}
These are simplified as \cite{c13}, \cite{c15}
\begin{equation} \label{ZEqnNum987927}
P_{i} (k|k)=M_{i} (k)P_{i} (k|k-1)(M_{i} (k))^{T} +K_{i} (k)R_{i} (K_{i} (k))^{T} ,
\end{equation}
and
\vspace{-0.25cm}
\begin{equation} \label{9)}
P_{i} (k|k-1)=AP_{i} (k-1|k-1)A^{T} +Q.
\end{equation}
with $M_{i} (k)=I_{n} -K_{i} (k)C_{i}.$ Then, the Kalman gain $K_{i} (k)$ is designed to minimize the estimation covariance and is given by \cite{c13}, \cite{c15}
\begin{equation} \label{ZEqnNum999982}
K_{i} (k)=P_{i} (k|k-1)(C_{i} )^{T} (R_{i} (k)+C_{i} P_{i} (k|k-1)(C_{i} )^{T} )^{-1} .
\end{equation} \par
Let the innovation sequence $r_{i} (k)$ for the node $i$ be defined as
\begin{equation} \label{ZEqnNum276515}
r_{i} (k)=y_{i} (k)-C_{i} x_{i} (k|k-1),
\end{equation}
\vspace{-0.15cm}
where $r_{i}(k)\sim {\rm {\mathcal N}}(0,\Omega _{i} (k))$ with
\begin{equation} \label{ZEqnNum368934} \nonumber
\Omega _{i} (k)={\bf {\rm E}}[r_{i} (k)(r_{i} (k))^{T} ]=C_{i} P_{i} (k|k-1)C_{i} {}^{T} +R_{i} (k).
\end{equation}\par
Note that for notational simplicity, henceforth we denote the prior and posterior state estimates as $x_{i} (k|k-1)\buildrel\Delta\over= \bar{x}_{i} (k)$ and $x_{i} (k|k)\buildrel\Delta\over= \hat{x}_{i} (k),$ respectively. Also, the prior and posterior covariances are denoted by $P_{i} (k|k-1)\buildrel\Delta\over= \bar{P}_{i} (k)$ and $P_{i} (k|k)\buildrel\Delta\over= \hat{P}_{i} (k)$, respectively. \par
\smallskip
{Based on equations (6)-(10)}, the event-triggered DKF algorithm becomes
$\textit{Time\,\,updates:}$ \par \hfill
\vspace{-0.4cm}
\begin{equation}\label{ZEqnNum838493}
\left\{ {\begin{array}{lr}
{\bar{x}_{i}(k+1)=A{{{\hat{x}}}_{i}}(k),} & (a) \\
{\bar{P}_{i}(k+1)=A{{{\hat{P}}}_{i}}(k){{A}^{T}}+Q(k). } & (b)
\end{array}} \right.
\end{equation}\par
$\textit{Measurment\,\,updates:}$\par
\vspace{-0.4cm}
\begin{equation}\label{ZEqnNum727229}
{\left\{\begin{array}{lr} {\hat{x}_{i} (k)=\bar{x}_{i} (k)+K_{i} (k)(y_{i} (k)-C_{i} \bar{x}_{i} (k))+\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i} (k) ),} & (a) \\ {\tilde{x}_{i} (k)=\zeta _{i} (k)\bar{x}_{i} (k)+(1-\zeta _{i} (k))A\tilde{x}_{i} (k-1),} & (b) \\ {K_{i} (k)=\bar{P}_{i} (k)C_{i}^{T} (R_{i} (k)+C_{i} \bar{P}_{i} (k)C_{i}^{T} )^{-1} ,} & (c) \\ {\hat{P}_{i} (k)=M_{i} \bar{P}_{i} (k)M_{i} {}^{T} +K_{i} (k)R_{i} (k)(K_{i} (k))^{T} .} & (d) \end{array}\right. }
\end{equation}
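A minimal numerical sketch of one iteration of (13)-(14) for a single sensor node is given below. It is illustrative only; the per-node interface (how neighbor predictive estimates are passed in) and all names are our assumptions, not the cited implementation:

```python
import numpy as np

def etdkf_step(A, Q, C, R, gamma, alpha, x_bar, P_bar, x_tilde_prev, y, neighbor_tildes):
    """One measurement update (14) plus time update (13) of the
    event-triggered DKF for a single sensor node."""
    # (14b): event trigger (3) and predictive state estimate
    zeta = float(np.linalg.norm(y - C @ x_tilde_prev) >= alpha)
    x_tilde = zeta * x_bar + (1 - zeta) * (A @ x_tilde_prev)
    # (14c): Kalman gain
    K = P_bar @ C.T @ np.linalg.inv(R + C @ P_bar @ C.T)
    # (14a): posterior = prior + innovation + consensus with neighbors
    consensus = sum(x_tilde_j - x_tilde for x_tilde_j in neighbor_tildes)
    x_hat = x_bar + K @ (y - C @ x_bar) + gamma * consensus
    # (14d): posterior covariance (Joseph-like form)
    M = np.eye(len(x_bar)) - K @ C
    P_hat = M @ P_bar @ M.T + K @ R @ K.T
    # (13a)-(13b): time update to the next prior
    return A @ x_hat, A @ P_hat @ A.T + Q, x_tilde
```

In a network simulation this function would be called once per node per time step, with `neighbor_tildes` collected from the nodes in $N_i$.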
\smallskip
\noindent
\noindent \textbf{Remark 1.} Based on the result presented in [17, Th.1], the event-triggered DKF \eqref{ZEqnNum838493}-\eqref{ZEqnNum727229} ensures that the estimation error $\hat{x}_{i} (k)-x(k)$ is exponentially bounded in the mean square sense $\forall i\in {\rm {\mathcal V}}.$
\smallskip
\noindent
\noindent \textbf{Remark 2.} {The consensus gain ${{\gamma }_{i}}$ in (5) is designed such that the stability of the event-triggered DKF in (13)-(14) is guaranteed. Specifically, as shown in [Theorem 2, 19], if
\begin{equation}
\nonumber
{{\gamma }_{i}}=\frac{2(I-{{K}_{i}}{{C}_{i}}){{({{\Gamma }_{i}})}^{-1}}}{{{\lambda }_{\max }}(\mathcal{L}){{\lambda }_{\max }}({{(\Gamma )}^{-1}})}
\end{equation}
where $\mathcal{L}$ denotes the Laplacian matrix associated with the graph $\mathcal{G}$ and $\Gamma =diag\{{{\Gamma }_{1}},..,{{\Gamma }_{N}}\}$ with ${{\Gamma }_{i}}={{(I-{{K}_{i}}{{C}_{i}})}^{T}}{{A}^{T}}{{({{\bar{P}}_{i}})}^{+}}A(I-{{K}_{i}}{{C}_{i}}),\,\,\forall i=\{1,...,N\},$ then the stability of the event-triggered DKF in (13)-(14) is guaranteed. The design of the event-triggered DKF itself, however, is not the concern of this paper, which mainly analyzes the adverse effects of cyber-physical attacks on the event-triggered DKF and proposes an information-theoretic attack detection and mitigation mechanism. Note that the presented attack analysis and mitigation can be extended to other event-triggered methods such as \cite{c14} and \cite{c16} as well.}
\vspace{-0.3cm}
\subsection{Attack Modeling}
In this subsection, we model the effects of attacks on the event-triggered DKF. An attacker can design a false data injection attack to affect the triggering mechanism presented in (\ref{eq3x}) and consequently compromise the system behavior.
\smallskip
\noindent
\textbf{Definition 2. (Compromised and intact sensor node).} We call a sensor node that is directly under attack as a compromised sensor node. A sensor node is called intact if it is not compromised. Throughout the paper, ${\rm {\mathcal V}}^{c}$ and ${\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c}$ denote, respectively, the set of compromised and intact sensor nodes.
\smallskip
Consider the sensing model \eqref{ZEqnNum687942} for sensor node $i$ under the effect of the attack as
\begin{equation} \label{ZEqnNum973066}
y_{i}^{a} (k)=y_{i} (k)+f_{i} (k)=C_{i} x_{i} (k)\, +\, v_{i} (k)+f_{i} (k),
\end{equation}
where $y_{i} (k)$ and $y_{i}^{a}(k)$ are, respectively, sensor $i$'$s$ actual and corrupted measurements, and $f_{i} (k)\in {\bf {\mathbb R}}^{p}$ represents the adversarial input on sensor node $i.$ For a compromised sensor node $i,$ let $p'\subseteq p$ be the subset of measurements disrupted by the attacker.\par
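The measurement attack in \eqref{ZEqnNum973066} simply biases the innovation $r_i(k)=y_i(k)-C_i\bar{x}_i(k)$ by $f_i(k)$, which is precisely what residual-based detectors exploit. A minimal sketch (hypothetical helper and names, not part of the paper's method):

```python
import numpy as np

def attacked_measurement(C_i, x, v_std, f_i, rng):
    """Sensor model (15): returns the clean and the corrupted measurement."""
    v = rng.normal(0.0, v_std, size=C_i.shape[0])  # i.i.d. Gaussian noise v_i(k)
    y = C_i @ x + v                                # clean output y_i(k)
    return y, y + f_i                              # attack adds f_i on top
```
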
Let the false data injection attack $\bar{f}_{j}(k)$ on the communication link be given by
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum397788}
\bar{x}_{j}^{a} (k)=\bar{x}_{j} (k)+\bar{f}_{j} (k),\, \, \, \forall j\in N_{i} .
\end{equation}
Using \eqref{ZEqnNum973066}-\eqref{ZEqnNum397788}, in the presence of an attack on sensor node $i$ and/or its neighbors, its state estimate equations in \eqref{ZEqnNum727229}-\eqref{ZEqnNum838493} becomes
\begin{equation} \label{ZEqnNum120276}
\left\{\begin{array}{l} {\hat{x}_{i}^{a} (k)=\bar{x}_{i}^{a} (k)+K_{i}^{a} (k)(y_{i} (k)-C_{i} \bar{x}_{i}^{a} (k))} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i}^{a} (k) )+f_{i}^{a} (k),} \\ {\bar{x}_{i}^{a} (k+1)=A\hat{x}_{i}^{a} (k),} \\ {\tilde{x}_{i}^{a} (k)=\zeta _{i} (k)\bar{x}_{i}^{a} (k)+(1-\zeta _{i} (k))A\tilde{x}_{i}^{a} (k-1),} \end{array}\right.
\end{equation}
where\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum499212}
f_{i}^{a} (k)=K_{i}^{a} (k)f_{i} (k)+\gamma _{i} \sum _{j\in N_{i} }\tilde{f}_{j} (k) ,
\end{equation}
with
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum429253} \nonumber
\tilde{f}_{j} (k)=\zeta _{j} (k)\bar{f}_{j} (k)+(1-\zeta _{j} (k))\tilde{f}_{j} (k-1).
\end{equation}
The Kalman gain $K_{i}^{a} (k)$ in presence of attack is given by
\begin{equation} \label{ZEqnNum654467}
K_{i}^{a} (k)=\bar{P}_{i}^{a} (k)C_{i}^{T} (R_{i} (k)+C_{i} \bar{P}_{i}^{a} (k)C_{i}^{T} )^{-1} .
\end{equation}
The first part in \eqref{ZEqnNum499212} represents the direct attack on sensor node $i$ and the second part denotes the aggregative effect of adversarial input on neighboring sensors, i.e., $j\in N_{i}$. Moreover, $\hat{x}_{i}^{a}(k),\, \, \bar{x}_{i}^{a} (k),$ and $\tilde{x}_{i}^{a}(k)$ denote, respectively, the corrupted posterior, prior, and predictive state estimates. The Kalman gain $K_{i}^{a}(k)$ depends on the following corrupted prior state estimation error covariance
\begin{equation} \label{ZEqnNum384197}
\bar{P}_{i}^{a} (k+1)=A\hat{P}_{i}^{a} (k)A^{T} +Q.
\end{equation}
where the evolution of the corrupted posterior state estimation error covariance $\hat{P}_{i}^{a} (k)$ is given in the following theorem.
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with compromised sensor model \eqref{ZEqnNum973066}. Let the state estimation equation be given by \eqref{ZEqnNum120276} in the presence of attacks modeled by $f_{i}^{a}(k)$ in \eqref{ZEqnNum499212}. Then, the corrupted posterior state estimation error covariance $\hat{P}_{i}^{a}(k)$ is given by
\begin{equation} \label{ZEqnNum998129}
\begin{array}{l} {\hat{P}_{i}^{a} (k)=M_{i}^{a} (k)\bar{P}_{i}^{a} (k)(M_{i}^{a} (k))^{T} +K_{i}^{a} (k)[R_{i} (k)+\Sigma _{i}^{f} (k)](K_{i}^{a} (k))^{T} } \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \, {+2\gamma _{i} \sum _{j\in N_{i} }(\stackrel{\frown}{P}_{i,j}^{a} (k) -\stackrel{\frown}{P}_{i}^{a} (k))(M_{i}^{a} (k))^{T} -2K_{i}^{a} (k)\Xi _{f} (k)} \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \,{+\gamma _{i} {}^{2} (\sum _{j\in N_{i} }(\tilde{P}_{j}^{a} (k) -2\tilde{P}_{i,j}^{a} (k)+\tilde{P}_{i}^{a} (k))}, \end{array}
\end{equation}
where $\Sigma _{i}^{f}(k)$ and $\Xi _{f} (k)$ denote the attacker's input dependent covariance matrices and $M_{i}^{a} =(I_{n} -K_{i}^{a} (k)C_{i} )$ with $K_{i}^{a} (k)$ as the Kalman gain and $\bar{P}_{i}^{a} (k)$ as the prior state estimation error covariance update in \eqref{ZEqnNum654467} and \eqref{ZEqnNum384197}, respectively. Moreover, $\tilde{P}_{i,j}^{a} (k)$ and $\stackrel{\frown}{P}_{i,j}^{a}(k)$ are cross-correlated estimation error covariances updated according to \eqref{ZEqnNum928831}-\eqref{ZEqnNum358063}.
\end{theorem}
\begin{proof}
See Appendix A.
\end{proof}
\vspace{-0.2cm}
Note that the corrupted state estimation error covariance recursion $\hat{P}_{i}^{a} (k)$ in \eqref{ZEqnNum998129} depends on the attacker's input distribution. Since the state estimation depends on the compromised estimation error covariance $\hat{P}_{i}^{a} (k),$ the attacker can design its attack signal to blow up the estimates of the desired process state and damage the system performance.
\vspace{-0.2cm}
\section{ Effect of Attack on Triggering Mechanism}
This section presents the effects of cyber-physical attacks on the event-triggered DKF. We show that although event-triggered approaches are energy efficient, they are prone to triggering misbehaviors, which can harm the network connectivity and observability and drain its limited resources.
\vspace{-0.35cm}
\subsection{ Non-triggering Misbehavior}
In this subsection, we show how an attacker can manipulate the sensor measurement to mislead the event-triggered mechanism and damage network connectivity and collective observability by causing \textit{non-triggering misbehavior} as defined in the following Definition 3.
\smallskip
\noindent
\textbf{Definition 3 }(\textbf{Non-triggering Misbehavior).} The attacker designs an attack strategy such that a compromised sensor node does not transmit any information to its neighbors by misleading the triggering mechanism in (\ref{eq3x}), even if the actual performance deviates from the desired one.
The following theorem shows how a false data injection attack, followed by an eavesdropping attack, can manipulate the sensor reading so that the event-triggered condition (\ref{eq3x}) is never violated while the actual performance can be far from the desired one. To this end, we first define the vertex cut of a graph as follows.
\smallskip
\noindent
\textbf{Definition 4 (Vertex cut).} A set of nodes ${\rm {\mathcal C}}\subset {\rm {\mathcal V}}$ is a vertex cut of a graph ${\rm {\mathcal G}}$ if removing the nodes in the set ${\rm {\mathcal C}}$ results in disconnected graph clusters.
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with $N$ sensor nodes \eqref{ZEqnNum687942} communicating over the graph ${\rm {\mathcal G}}$. Let sensor $i$ be under a false data injection attack given by
\begin{equation} \label{ZEqnNum705143}
y_{i}^{a} (k)=y_{i} (k)+\theta _{i}^{a} (k)1_{p} ,\, \, \, \, \forall k\ge L+1,
\end{equation}
where $y_{i}(k)$ is the actual sensor measurement at time instant $k$ and $L$ denotes the last triggering time instant. Moreover, $\theta _{i}^{a}(k)\sim {\rm {\mathcal U}}(a(k),b(k))\, $ is a scalar uniformly distributed random variable in the interval $(a(k),b(k))$ with
\begin{equation} \label{ZEqnNum165624}
\left\{\begin{array}{l} {a(k)=\varphi -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\left\| y_{i} (k)\right\|, } \\ {b(k)=\varphi +\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\left\| y_{i} (k)\right\|, } \end{array}\right.
\end{equation}
where $\tilde{x}_{i} (k)$ and $\varphi <\alpha $ denote, respectively, the predictive state estimate and an arbitrary scalar value less than the triggering threshold $\alpha .$ Then,
\begin{enumerate}
\item The triggering condition (\ref{eq3x}) will not be violated for the sensor node $i$ and it shows non-triggering misbehavior;
\item The original graph ${\rm {\mathcal G}}$ is clustered into several subgraphs, if all sensors in a vertex cut are under attack \eqref{ZEqnNum705143}.
\end{enumerate}
\end{theorem}
\begin{proof}
Taking norms from both sides of \eqref{ZEqnNum705143}, the corrupted sensor measurement $y_{i}^{a} (k)$ becomes
\begin{equation} \label{ZEqnNum862369}
\left\| y_{i}^{a} (k)\right\| =\left\| y_{i} (k)+\theta _{i}^{a} (k)1_{p} \right\| .
\end{equation}
Using the triangular inequality for \eqref{ZEqnNum862369} yields
\begin{equation} \label{ZEqnNum171011}
\left\| y_{i} (k)\right\| -\left\| \theta _{i}^{a} (k)1_{p} \right\| \le \left\| y_{i}^{a} (k)\right\| \le \left\| y_{i} (k)\right\| +\left\| \theta _{i}^{a} (k)1_{p} \right\| .
\end{equation}
Based on the bounds of $\theta _{i}^{a} (k)$, given by \eqref{ZEqnNum165624}, \eqref{ZEqnNum171011} becomes
\begin{equation} \label{27)} \nonumber
\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\varphi \le \left\| y_{i}^{a} (k)\right\| \le \left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\varphi ,
\end{equation}
which yields
\begin{equation} \label{ZEqnNum939032} \nonumber
(\left\| y_{i}^{a} (k)\right\| -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\varphi )(\left\| y_{i}^{a} (k)\right\| -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\varphi )\le 0.
\end{equation}
This implies that the condition
\begin{equation} \label{29)} \nonumber
\, \left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| \le \varphi <\alpha ,
\end{equation}
always holds true. Therefore, under \eqref{ZEqnNum705143}-\eqref{ZEqnNum165624}, the corrupted sensor node $i$ shows non-triggering misbehavior, which proves part 1.
We now prove part 2. Let ${\rm {\mathcal A}}_{n} \subseteq {\rm {\mathcal V}}^{c}$ be the set of sensor nodes showing non-triggering misbehavior. Based on part 1, under the attack signal \eqref{ZEqnNum705143}, the sensor nodes in ${\rm {\mathcal A}}_{n}$ are misled by the attacker and consequently do not transmit any information to their neighbors, which makes them act as sink nodes. Since the set ${\rm {\mathcal A}}_{n}$ is assumed to be a vertex cut, the non-triggering misbehavior of the sensor nodes in ${\rm {\mathcal A}}_{n}$ prevents information flow from one portion of the graph ${\rm {\mathcal G}}$ to another, and thus clusters the original graph ${\rm {\mathcal G}}$ into subgraphs. This completes the proof.
\end{proof}
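The bound in part 1 can be checked numerically. The following Python sketch uses a scalar measurement channel ($p=1$) and purely illustrative values for $C_i$, $\tilde{x}_i(k-1)$, $y_i(k)$, $\varphi$, and $\alpha$ (none taken from the paper); it verifies that for $\theta_i^a(k)$ drawn from the interval \eqref{ZEqnNum165624}, the triggering residual never exceeds $\varphi<\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, phi = 0.5, 0.3        # triggering threshold and attacker's margin, phi < alpha
C = np.array([[1.0]])        # scalar measurement channel (p = 1), illustrative
x_tilde = np.array([2.0])    # eavesdropped predictive state estimate
y = np.array([1.8])          # actual measurement, with ||C x_tilde|| - ||y|| < phi

# attacker's uniform interval (a, b) from the theorem
a = phi - np.linalg.norm(C @ x_tilde) + np.linalg.norm(y)   # = 0.1
b = phi + np.linalg.norm(C @ x_tilde) - np.linalg.norm(y)   # = 0.5
residuals = [np.linalg.norm(y + rng.uniform(a, b) - C @ x_tilde)
             for _ in range(1000)]
print(max(residuals) < alpha)   # the event-trigger is never activated
```

For every draw the residual stays at or below $\varphi=0.3$, so the condition (\ref{eq3x}) with threshold $\alpha=0.5$ never fires.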
\vspace{-0.3cm}
\noindent
\textbf{Remark 3.} Note that to design the strategic false data injection attack signal \eqref{ZEqnNum705143}, an attacker needs to eavesdrop on the actual sensor measurement $y_{i} (k)$ and the last transmitted prior state estimate $\bar{x}_{i} (L)$ through the communication channel. The attacker then determines the predictive state estimate $\tilde{x}_{i} (k)$ using the dynamics in \eqref{ZEqnNum257073} at each time instant $k\ge L+1$ to achieve non-triggering misbehavior for the sensor node $i$.
We provide Example $1$ for further illustration of the results of Theorem 2.
\vspace{-0.0cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=2.68in,height=2.8in]{DKF_SMC.jpg}
\vspace{-2pt}\caption{Non-triggering misbehavior of sensor nodes $\{5,6\}$ clusters the graph ${\rm {\mathcal G}}$ into the two isolated subgraphs ${\rm {\mathcal G}}_{1} $ and ${\rm {\mathcal G}}_{2}.$}\label{fig1}
\captionsetup{justification=centering}
\end{center}
\end{figure}
\vspace{-0.15cm}
\noindent
\textbf{Example 1.} Consider the graph topology for a distributed sensor network given in Fig. 1. Let the vertex cut ${\rm {\mathcal A}}_{n} =\{ 5,6\}$ be under the false data injection attack presented in Theorem $2$ and show non-triggering misbehavior. Then, the sensor nodes in ${\rm {\mathcal A}}_{n}$ do not transmit any information to their neighbors under the designed false data injection attack; they act as sink nodes and prevent information flow from subgraph ${\rm {\mathcal G}}_{1}$ to subgraph ${\rm {\mathcal G}}_{2}$, which clusters the graph ${\rm {\mathcal G}}$ into the two non-interacting subgraphs ${\rm {\mathcal G}}_{1}$ and ${\rm {\mathcal G}}_{2}$ shown in Fig. 1. This example shows that the attacker can compromise a vertex cut ${\rm {\mathcal A}}_{n}$ of the original graph ${\rm {\mathcal G}}$ so that it shows non-triggering misbehavior, harming the network connectivity or clustering the graph into several non-interacting subgraphs.
We now analyze the effect of non-triggering misbehavior on the collective observability of the sensor network. To this end, the following definitions are needed.
\smallskip
\noindent \textbf{Definition 5 (Potential Set). } A set of nodes ${\rm {\mathcal P} \subset} {\rm {\mathcal V}}$ is said to be a potential set of the graph ${\rm {\mathcal G}}$ if the pair $(A,C_{{\rm {\mathcal V}}\backslash {\rm{\mathcal P}}} )$ is not collectively observable.
\smallskip
\noindent \textbf{Definition 6 (Minimal Potential Set).} A set of nodes ${\rm {\mathcal P} }_{m} \subset {\rm {\mathcal V}}$ is said to be a minimal potential set if ${\rm {\mathcal P} }_{m}$ is a potential set and no subset of ${\rm {\mathcal P}}_{m}$ is a potential set.
\smallskip
\noindent \textbf{Remark 4.} Note that if the attacker knows the graph structure and the local pairs $(A,C_{i}),\, \forall i\in {\mathcal V}$, then it can identify the minimal potential set of sensor nodes ${\rm{\mathcal P}}_{m}$ in the graph ${\rm {\mathcal G}}$ and achieve non-triggering misbehavior for ${\rm {\mathcal P} }_{m}.$ Thus, the set of sensor nodes ${\rm {\mathcal P}}_{m}$ does not exchange any information with its neighbors and becomes isolated in the graph ${\rm {\mathcal G}}$.
\smallskip
\noindent \textbf{Corollary 1.}
\textit{Let the set of sensors that shows non-triggering misbehavior be the minimal potential set ${\rm {\mathcal S}}_{n}$. Then, the network is no longer collectively observable and the process state reconstruction from the distributed sensor measurements is impossible.}
\vspace{-0.1cm}
\begin{proof}
According to the statement of the corollary, ${\rm {\mathcal S}}_{n}$ is a minimal potential set of the graph ${\rm {\mathcal G}}$ that shows non-triggering misbehavior. Then, the sensor nodes in ${\rm {\mathcal S}}_{n}$ do not transmit any information to their neighbors and act as sink nodes, i.e., they only absorb information. Therefore, information is exchanged only among the remaining sensor nodes in ${\rm {\mathcal G}}\backslash {\rm {\mathcal S}}_{n}$. Hence, after excluding the minimal potential set ${\rm {\mathcal S}}_{n}$, the pair $(A,C_{{\rm {\mathcal G}}\backslash {\rm {\mathcal S}}_{n} } )$ becomes unobservable by Definitions $5$ and $6$, which makes the state reconstruction impossible. This completes the proof.
\end{proof}
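The observability loss in Corollary 1 amounts to a rank test on the observability matrix stacked over the sensors that still exchange information. The following sketch illustrates this with a hypothetical two-mode system in which each of two sensors observes one mode, so either node alone is a (minimal) potential set; the matrices are illustrative, not from the paper.

```python
import numpy as np

def collectively_observable(A, C_list, keep):
    """Rank test on the observability matrix stacked over the kept sensors."""
    n = A.shape[0]
    C = np.vstack([C_list[i] for i in keep])
    O = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(n)])
    return bool(np.linalg.matrix_rank(O) == n)

A = np.diag([1.1, 0.9])                                     # two decoupled modes
C_list = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]   # one mode per sensor
print(collectively_observable(A, C_list, keep=[0, 1]))  # True
print(collectively_observable(A, C_list, keep=[0]))     # False: node {1} is a potential set
```

Silencing sensor 1 through non-triggering misbehavior leaves the pair $(A,C_{0})$ unobservable, exactly the situation of Definitions 5 and 6.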
\vspace{-0.4cm}
\subsection{Continuous-triggering Misbehavior}
In this subsection, we discuss how an attacker can compromise the actual sensor measurements to mislead the event-triggered mechanism and achieve continuous-triggering misbehavior, which results in a time-driven DKF that not only drains the communication resources but also continuously propagates the adverse effects of the attack through the network.
\smallskip
\noindent \textbf{Definition 7} \textbf{(Continuous-triggering Misbehavior).} Let the attacker design an attack strategy such that it deceives the triggering mechanism in (\ref{eq3x}) at each time instant. This turns the event-driven DKF into a time-driven DKF that continuously exchanges corrupted information among sensor nodes. We call this a continuous-triggering misbehavior.
We now show how a replay attack, followed by an eavesdropping attack, can manipulate the sensor reading to cause continuous violation of the event-triggered condition (\ref{eq3x}).
\vspace{-0.1cm}
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with $N$ sensor nodes \eqref{ZEqnNum687942} communicating over the graph ${\rm {\mathcal G}}.$ Let the sensor node $i$ in \eqref{ZEqnNum687942} be under a replay attack given by\newline
\vspace{-0.35cm}
\begin{equation} \label{ZEqnNum253008}
y_{i}^{a} (k)=C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k),\, \, \forall k\ge l+1,
\end{equation}
\vspace{-0.15cm}
\noindent
where $\bar{x}_{i}(k-1)$ represents the last transmitted prior state estimate, $\upsilon_{i} (k)$ denotes a scalar disruption signal, and $l$ denotes the last triggering time instant at which an intact prior state estimate was transmitted. Then, the sensor node $i$ shows continuous-triggering misbehavior if the attacker selects $\left\| \upsilon _{i} (k)\right\| >\alpha.$
\end{theorem}
\begin{proof}
To mislead a sensor into continuous-triggering misbehavior, the attacker needs to design the attack signal such that the event-triggered condition (\ref{eq3x}) is constantly violated, i.e., $\left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| \ge \alpha $ at all times. The attacker can eavesdrop on the last transmitted prior state estimate $\bar{x}_{i}(k-1)$ and design the strategic attack signal given by \eqref{ZEqnNum253008}. Then, one has
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum491202}
\begin{array}{l}
y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)=C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k)-C_{i} \tilde{x}_{i} (k-1) \\
\quad =C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k)-C_{i} [\zeta _{i} (k-1)\bar{x}_{i} (k-1) \\
\qquad +(1-\zeta _{i} (k-1))A\bar{x}_{i} (k-2)] \\
\quad =(1-\zeta _{i} (k-1))C_{i} [\bar{x}_{i} (k-1)-A\bar{x}_{i} (k-2)]+\upsilon _{i} (k),
\end{array}
\end{equation}
Taking the norm from both sides of \eqref{ZEqnNum491202} yields
\begin{equation} \label{ZEqnNum734745}
\begin{array}{l} {\left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| } \\ {=\left\| (1-\zeta _{i} (k-1))C_{i} [\bar{x}_{i} (k-1)-A\bar{x}_{i} (k-2)]+\upsilon _{i} (k)\right\| ,} \end{array}
\end{equation}
Since $\zeta _{i}(l)=1$ at the last triggering instant, for $k=l+1$ one has
\vspace{-0.15cm}
\begin{equation} \label{34)}
\left\| y_{i}^{a} (l+1)-C_{i} \tilde{x}_{i} (l)\right\| =\left\| \upsilon _{i} (l+1)\right\| ,
\end{equation}
If the attacker selects $\upsilon _{i}(l+1)$ in \eqref{34)} such that $\left\| \upsilon _{i} (l+1)\right\| >\alpha $, then the attack signal \eqref{ZEqnNum253008} ensures triggering at the time instant $k=l+1.$ Then, by a similar argument applied to \eqref{ZEqnNum734745}, $\forall k\ge l+1$
\vspace{-0.15cm}
\begin{equation} \label{35)} \nonumber
\left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| =\left\| \upsilon _{i} (k)\right\| >\alpha ,
\end{equation}
which ensures continuous triggering misbehavior. This completes the proof.
\end{proof}
\vspace{-0.25cm}
To achieve continuous-triggering misbehavior, the attacker needs to eavesdrop on the prior state estimate $\bar{x}_{i} (k-1)$ at each triggering instant and select $\upsilon _{i}(k)$ large enough that $\left\| \upsilon _{i} (k)\right\| >\alpha $ always holds.
Note that continuous-triggering misbehavior can completely ruin the advantage of the event-triggered mechanism and turn it into a time-driven one. This significantly increases the communication burden. Since nodes in WSNs are usually powered by batteries with limited energy, the attacker can drain the sensors' limited resources by designing the above-discussed attack signals to achieve continuous-triggering misbehavior, and consequently render them non-operational in the network while deteriorating the network performance.
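The mechanism of Theorem 3 at the first compromised step can be reproduced in a few lines. This sketch uses illustrative values for $C_i$, $\bar{x}_i(l)$, and $\alpha$; at $k=l+1$ the predictive estimate equals the last transmitted prior estimate, so the residual reduces to $\left\|\upsilon_i\right\|$, which the attacker picks above $\alpha$.

```python
import numpy as np

alpha = 0.5                          # triggering threshold (illustrative)
C = np.array([[1.0, 0.0]])
x_bar = np.array([1.5, -0.3])        # eavesdropped last transmitted prior estimate
upsilon = 0.8                        # attacker picks |upsilon| > alpha

x_tilde = x_bar.copy()               # at k = l + 1, x_tilde(l) = x_bar(l)
y_attacked = C @ x_bar + upsilon     # replay attack signal
residual = float(np.linalg.norm(y_attacked - C @ x_tilde))
print(residual > alpha)              # the trigger fires again at k = l + 1
```

Repeating this at every step keeps the residual equal to $\left\|\upsilon_i(k)\right\|>\alpha$, turning the event-driven filter into a time-driven one.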
Note that although we classified attacks into non-triggering and continuous-triggering misbehavior to analyze how the attacker can leverage the event-triggered mechanism, the following \textit{analysis, detection and mitigation approaches} are not restricted to any specific class of attacks.
\vspace{-0.3cm}
\section{ Attack Detection}
In this section, we present an entropy estimation-based attack detection approach for the event-triggered DKF.
The KL divergence is a non-negative measure of the relative entropy between two probability distributions which is defined as follows.
\noindent \textbf{Definition 8 (KL Divergence) \cite{c24}}. Let $X$ and $Z$ be two random variables with probability density functions $P_{X}$ and $P_{Z}$, respectively. The KL divergence measure between $P_{X}$ and $P_{Z}$ is defined as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum937457}
D_{KL} (P_{X} ||P_{Z} )=\int _{\theta \in \Theta }P_{X} (\theta )\log \left(\frac{P_{X} (\theta )}{P_{Z} (\theta )} \right) d\theta ,
\end{equation}
with the following properties \cite{c32}
\begin{enumerate}
\item $D_{KL} (P_{X} ||P_{Z} )\ge 0;$
\item $D_{KL} (P_{X} ||P_{Z} )=0$ if and only if $P_{X} =P_{Z} ;$
\item $D_{KL} (P_{X} ||P_{Z} )\ne D_{KL} (P_{Z} ||P_{X} ).$
\end{enumerate}
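These three properties can be sanity-checked with the well-known closed-form KL divergence between two univariate Gaussians, $D_{KL}=\ln(\sigma_2/\sigma_1)+(\sigma_1^{2}+(\mu_1-\mu_2)^{2})/(2\sigma_2^{2})-1/2$; the specific means and variances below are illustrative.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form D_KL( N(m1, s1^2) || N(m2, s2^2) )."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

d_xz = kl_gauss(0.0, 1.0, 1.0, 2.0)
d_zx = kl_gauss(1.0, 2.0, 0.0, 1.0)
print(d_xz >= 0 and d_zx >= 0)       # property 1: non-negativity
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # property 2: zero when the densities coincide
print(d_xz != d_zx)                  # property 3: asymmetry
```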
In the existing resilient estimation literature, entropy-based anomaly detectors need to know the probability density functions of the sequences, i.e., $P_{X}$ and $P_{Z}$ {in \eqref{ZEqnNum937457}}, to determine the relative entropy. In most cases, authors assume that the probability density function of the corrupted innovation sequence remains Gaussian (see \cite{c24} and \cite{c34} for instance). Since the attacker's input signal is unknown, it is restrictive to assume that the probability density function of the corrupted sequence remains Gaussian. To relax this \textit{restrictive assumption}, we estimate the relative entropy between two random sequences $X$ and $Z$ using a \textit{$k-$nearest neighbor $(k-NN)$} based divergence estimator \cite{d5}.
{Let $\{ X_{1},\ldots,X_{n_{1} } \} $ and $\{ Z_{1} ,\ldots ,Z_{n_{2} } \} $ be i.i.d. samples drawn independently from $P_{X} $ and $P_{Z},$ respectively, with $X_{j},\,\, Z_{j} \in {\bf {\mathbb R}}^{m}$. Let $d_{k}^{X}(i)$ be the Euclidean distance between $X_{i}$ and its \textit{$k-NN$} in $\{ X_{l} \} _{l\ne i} .$ The \textit{$k-NN$} of a sample $s$ in $\{ s_{1} ,\ldots ,s_{n} \} $ is $s_{i(k)}$, where $i(1),\ldots,i(n)$ is the permutation of $1,\ldots,n$ such that
\vspace{-0.2cm}
\begin{equation} \label{37)} \nonumber
\left\| s-s_{i(1)} \right\| \le \left\| s-s_{i(2)} \right\| \le \ldots \le \left\| s-s_{i(n)} \right\| .
\end{equation}
More specifically, the Euclidean distance $d_{k}^{X}(i)$ is given by \cite{d5a}
\begin{equation}
\nonumber
\begin{array}{l}
d_k^X(i) = \mathop {\min }\limits_{j = 1, \ldots ,{n_1},j \ne \{ i,{j_1},...,{j_{k - 1}}\} } \left\| {{X_i} - {X_j}} \right\|
\end{array}
\end{equation}}
The \textit{$k-NN$} based relative entropy estimator is given by \cite{d5}
\begin{equation} \label{ZEqnNum207466}
\hat{D}_{KL} (P_{X} ||P_{Z} )=\frac{m}{n_{1} } \sum _{i=1}^{n_{1} }\log \frac{d_{k}^{Z} (i)}{d_{k}^{X} (i)} +\log \frac{n_{2} }{n_{1} -1} .
\end{equation}
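The estimator \eqref{ZEqnNum207466} is straightforward to implement. The following sketch is a direct transcription using brute-force distance computations (adequate for the short windows used here); the Gaussian test samples and the mean shift are illustrative, chosen only to show a near-zero estimate for matching distributions and a clearly positive one under a distribution change.

```python
import numpy as np

def knn_kl_estimate(X, Z, k=1):
    """k-NN estimate of D_KL(P_X || P_Z) from samples X (n1 x m) and Z (n2 x m)."""
    n1, m = X.shape
    n2 = Z.shape[0]
    # d_k^X(i): distance from X_i to its k-th nearest neighbour among the other X_l
    dXX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dXX, np.inf)          # exclude the sample itself
    d_x = np.sort(dXX, axis=1)[:, k - 1]
    # d_k^Z(i): distance from X_i to its k-th nearest neighbour among the Z_l
    dXZ = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=-1)
    d_z = np.sort(dXZ, axis=1)[:, k - 1]
    return (m / n1) * np.sum(np.log(d_z / d_x)) + np.log(n2 / (n1 - 1))

rng = np.random.default_rng(1)
Z = rng.normal(0.0, 1.0, size=(500, 1))        # nominal samples
X0 = rng.normal(0.0, 1.0, size=(500, 1))       # same distribution
X1 = rng.normal(3.0, 1.0, size=(500, 1))       # mean-shifted ("corrupted") samples
d_same = knn_kl_estimate(X0, Z)
d_attack = knn_kl_estimate(X1, Z)
print(d_same, d_attack)   # near zero vs. clearly positive
```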
The innovation sequence represents the deviation of the actual output of the system from the estimated one. It is known that innovation sequences approach steady state quickly, and thus it is reasonable to design innovation-based anomaly detectors to capture system abnormalities \cite{c24}. Using the innovation sequence of each sensor and the innovation sequences that it estimates for its neighbors, we present an innovation-based divergence estimator and design detectors to capture the effect of attacks on the event-triggered DKF.
Based on the innovation expression \eqref{ZEqnNum276515}, in the presence of attack, one can write the compromised innovation $r_{i}^{a} (k)$ for sensor node $i$ with disrupted measurement $y_{i}^{a} (k)$ in \eqref{ZEqnNum973066} and state estimate $\bar{x}_{i}^{a}$ based on \eqref{ZEqnNum120276} as
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum255093}
r_{i}^{a} (k)=y_{i}^{a} (k)-C_{i} \bar{x}_{i}^{a} (k).
\end{equation}
Let $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ be i.i.d. \textit{p}-dimensional samples of corrupted and nominal innovation sequences with probability density function $P_{r_{i}^{a} } $ and $P_{r_{i} },$ respectively. The nominal innovation sequence follows $r_{i}(k)$ defined in \eqref{ZEqnNum276515}. Using \textit{$k-NN$} based relative entropy estimator \eqref{ZEqnNum207466}, one has \cite{d5}
\begin{equation} \label{ZEqnNum280433}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log \frac{d_{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} +\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Define the average of the estimated KL divergence over a time window of $T$ as
\begin{equation} \label{ZEqnNum738078}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } ) ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Now, in the following theorem, it is shown that the effect of attacks on the sensors can be captured using \eqref{ZEqnNum738078}.
\begin{theorem}
Consider the distributed sensor network \eqref{ZEqnNum820040}-\eqref{ZEqnNum687942} under attacks on sensors. Then,
\begin{enumerate}
\item in the absence of attack, $\Phi _{i} (k)=\log (w/(w-1)),\, \, \, \forall k;$
\item in the presence of attack, $\Phi _{i} (k)>\delta ,\, \, \forall k>l_{a},$ where $\delta $ and $l_{a}$ denote, respectively, a predefined threshold and the time instant at which the attack happens.
\end{enumerate}
\end{theorem}
\begin{proof}
In the absence of attack, the samples of the innovation sequences $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ are identical. Then, the Euclidean distances satisfy $d_{k}^{r_{i}^{a} } (j)=d_{k}^{r_{i} } (j),\, \, \forall j\in \{ 1,...,w\},$ and one has
\begin{equation} \label{ZEqnNum663932}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Based on \eqref{ZEqnNum663932}, one has
\vspace{-0.15cm}
\begin{equation} \label{a43)}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\log \frac{w}{w-1} =\log \frac{w}{w-1} < \delta ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
{where $\log (w/(w-1))$ in \eqref{a43)} depends on the sample size $w$ of the innovation sequence and satisfies $\log (w/(w-1))\le 0.1,\, \, \, \forall w\ge 10$. Therefore, the predefined threshold $\delta$ can be selected with some $\delta>0.1$ such that the condition in \eqref{a43)} is always satisfied.} This completes the proof of part 1.
In the presence of attack, the samples of innovation sequences $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ are different, i.e., $d_{k}^{r_{i}^{a} } (j)\ne d_{k}^{r_{i} } (j),\, \, \forall j\in \{ 1,...,w\} $. More specifically, $d_{k}^{r_{i} } (j)>d_{k}^{r_{i}^{a} } (j), \, \, \forall j\in \{ 1,...,w\} $ due to change in the corrupted innovation sequence. Therefore, based on \eqref{ZEqnNum280433} the estimated relative entropy between sequences becomes
\begin{equation} \label{ZEqnNum657988}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log (1+\frac{\Delta _{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} ) +\log \frac{w}{w-1} ,\, \forall i\in {\rm {\mathcal V}},
\end{equation}
with $\Delta _{k}^{r_{i} } (j)$ as the change in Euclidean distance due to corrupted innovation sequence. Based on \eqref{ZEqnNum657988}, one has
\begin{equation} \label{ZEqnNum750552}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log (1+\frac{\Delta _{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} ) +\log \frac{w}{w-1} \gg \log \frac{w}{w-1} .
\end{equation}
Thus, one has
\vspace{-0.2cm}
\begin{equation} \label{46)}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )>\delta ,\, \, \forall i\in {\rm {\mathcal V}},
\end{equation}
where $T$ and $\delta $ denote the sliding window size and the predefined design threshold, respectively. This completes the proof.
\end{proof}
Based on Theorem 4, one can use the following condition for attack detection.
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum861796}
\left\{\begin{array}{l} {\Phi _{i} (k)\, <\delta :H_{0}, } \\ {\Phi _{i} (k)\, >\delta \, \, :H_{1}, } \end{array}\right.
\end{equation}
where $\delta $ denotes the designed threshold for detection, the null hypothesis $H_{0} $ represents the intact mode of sensor nodes and $H_{1}$ denotes the compromised mode of sensor nodes.
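Given a stream of estimated divergences $\hat{D}_{KL}$, the detector \eqref{ZEqnNum738078}-\eqref{ZEqnNum861796} is a sliding-window average compared against $\delta$. The sketch below feeds the detector a synthetic divergence history (the attack-free value $\log(w/(w-1))$ from Theorem 4, then an arbitrary inflated value after a hypothetical attack at step 30); the window size $T=5$, threshold $\delta=0.15$, and attack magnitude are illustrative.

```python
import numpy as np

def detect(dkl_history, T, delta):
    """Sliding-window average Phi_i(k) of the KL estimates and the H0/H1 test."""
    phi = np.convolve(dkl_history, np.ones(T) / T, mode="valid")
    return phi, phi > delta          # True entries flag hypothesis H1

w = 20                               # innovation sample size
nominal = np.log(w / (w - 1))        # attack-free value from Theorem 4, part 1
dkl = np.r_[np.full(30, nominal), np.full(30, 1.2)]   # attack starts at step 30
phi, alarm = detect(dkl, T=5, delta=0.15)
print(alarm[:20].any(), alarm[-10:].all())   # False True
```

Before the attack, $\Phi_i$ stays at $\log(w/(w-1))\approx 0.05<\delta$ ($H_0$); once the corrupted divergence enters the window, $\Phi_i$ exceeds $\delta$ and the node is flagged ($H_1$).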
\smallskip
\noindent \textbf{Remark 5.} {Note that in the absence of an attack, the innovation sequence has a known zero-mean Gaussian distribution due to the measurement noise. Based on the prior system knowledge, one can always consider that the nominal innovation sequence is zero-mean Gaussian distribution with predefined covariance. The bound on the predefined covariance can be determined during normal operation of the event-triggered DKF. This assumption for the knowledge of the nominal innovation sequence for attack detection is standard in the existing literature (see \cite{c34} for instance). The designed threshold $\delta $ in \eqref{ZEqnNum861796} is a predefined parameter and chosen appropriately for the detection of the attack signal. Moreover, the selection of detection threshold based on expert knowledge is standard in the existing literature. For example, several results on adversary detection and stealthiness have considered similar thresholds \cite{c24}-\cite{c26}. }
\begin{algorithm}[!ht]
\caption{Detecting attacks on sensors.}
\begin{enumerate}
\item [1:] Initialize with a time window $T$ and detection threshold $\delta$.
\item [2:] \textbf{procedure} $\forall i=1,\ldots ,N$
\item [3:] {Use samples of innovation sequences $\{ r_{i}^{a} (l),\ldots,$ \qquad $r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ based on \eqref{ZEqnNum255093} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum750552}.
\item [5:] Compute $\Phi _{i} (k)$ as \eqref{46)} and use condition in \eqref{ZEqnNum861796} to detect attacks on sensors.
\item [6:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
Based on the results presented in Theorem 4 and Algorithm 1, one can capture attacks on both sensors and communication links, but cannot identify the specific compromised communication link as modelled in \eqref{ZEqnNum397788}. To detect the source of attacks, we present an estimated entropy-based detector to capture the effect of attacks on a specific communication channel. More specifically, the relative entropy between the innovation sequence estimated for a neighbor at a particular sensor node and the nominal innovation sequence of that sensor node is estimated using \eqref{ZEqnNum207466}.
Define the estimated innovation sequence $\zeta _{i,j}^{a}(k)$ for a neighbor $j$, evaluated at the sensor node $i$ under attacks on the communication channel, as
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum178443}
\zeta _{i,j}^{a} (k)=y_{i} (k)-C_{j} \tilde{x}_{j}^{a} (k),
\end{equation}
where $\tilde{x}_{j}^{a}(k)$ is the corrupted state estimate of neighbor $j$ communicated to sensor node $i$ at the last triggering instant.\par
Let $\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\}$ be i.i.d. \textit{p}-dimensional samples of neighbor's estimated innovation at the sensor node $i$
with probability density function $P_{\zeta _{i,j}^{a} }.$ Using \textit{$k-NN$} based relative entropy estimator \eqref{ZEqnNum207466}, one has
\begin{equation} \label{ZEqnNum691139}
\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{s=1}^{w}\log \frac{d_{k}^{r_{i} } (s)}{d_{k}^{\zeta _{i,j}^{a} } (s)} +\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}},j\in N_{i} .
\end{equation}
Note that in the presence of attacks on the communication channels, the neighbor's actual innovation differs from the neighbor's estimated innovation at sensor $i$. {In the absence of attack, the mean values of all the sensor state estimates converge to the mean of the desired process state at steady state; therefore, the innovation sequences $r_{i}$ and $\zeta _{i,j}^{a}$ have the same zero-mean Gaussian distribution. In the presence of attack, however, as shown in Theorem 5 and Algorithm 2, their distributions diverge.}
Define the average of the KL divergence over a time window of $T$ as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum932962}
\Psi _{i,j} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } ) ,\, \, \forall i\in {\rm {\mathcal V}},\, j\in N_{i} .
\end{equation}
\begin{theorem}
Consider the distributed sensor network \eqref{ZEqnNum820040}-\eqref{ZEqnNum687942} under attack on communication links \eqref{ZEqnNum397788}. Then, in the presence of an attack, $\Psi _{i,j} (k)>\delta ,\, \, \forall k$ where $\delta $ denotes a predefined threshold.
\end{theorem}
\begin{proof}
The result follows a similar argument as given in the proof of part $2$ of Theorem 4.
\end{proof}
\begin{algorithm}[!ht]
\caption{Detecting attack on a specific communication link.}
\begin{enumerate}
\item [1:] Initialize with a time window $T$ and detection threshold $\delta.$
\item [2:] \textbf{procedure} $\forall i=1,\ldots ,N$
\item [3:] {For each sensor node $j\in N_{i} $, use samples of innovation sequences$\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\}$ based on \eqref{ZEqnNum178443} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum691139}.
\item [5:] Compute $\Psi _{i,j}(k)$ as \eqref{ZEqnNum932962} and use same argument in \eqref{ZEqnNum861796} to detect attacks on specific communication link.
\item [6:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
\vspace{-0.4cm}
\section{ Secure Distributed Estimation Mechanism}
This section presents a meta-Bayesian approach for secure event-triggered DKF, which incorporates the outcome of the attack detection mechanism to perform second-order inference and consequently form beliefs over beliefs. That is, the second-order inference forms confidence and trust about the truthfulness or legitimacy of the sensors' own state estimate (i.e., the posterior belief of the first-order Bayesian inference) and those of its neighbor's state estimates, respectively. Each sensor communicates its confidence to its neighbors. Then sensors incorporate the confidence of their neighbors and their own trust about their neighbors into their posterior update laws to successfully discard the corrupted information.
\vspace{-0.4cm}
\noindent
\subsection{Confidence of sensor nodes}
The second-order inference forms a confidence value for each sensor node, which determines the level of trustworthiness of the sensor in its own measurement and state estimate (i.e., the posterior belief of the first-order Bayesian inference). If a sensor node is compromised, the presented attack detector detects the adversary; the node then reduces its level of trustworthiness in its own understanding of the environment and communicates this to its neighbors to inform them of the significance of its outgoing information and thus slow down the attack propagation.
To determine the confidence of the sensor node $i$, based on the divergence $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ from Theorem 4, we first define
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum125869}
\chi _{i} (k)=\frac{\Upsilon _{1} }{\Upsilon _{1} +\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )} ,
\end{equation}
where $0<\Upsilon _{1} <1$ represents a predefined threshold that accounts for channel fading and other uncertainties. Then, in the following lemma, we formally present the results on the confidence of sensor node $i$.
\noindent \textbf{Lemma 1.} \textit{Let $\beta _{i} (k)$ be the confidence of the sensor node $i$ which is updated using
\begin{equation} \label{ZEqnNum359584}
\beta _{i} (k)=\sum _{l=0}^{k-1}(\kappa _{1} )^{k-l+1} \chi _{i} (l),
\end{equation}
where $\chi _{i}(k)$ is defined in \eqref{ZEqnNum125869}, and $0<\kappa _{1}<1$ is a discount factor. Then, $\beta _{i}(k)\in (0,1]$ and
\begin{enumerate}
\item $\beta _{i} (k)\to 0,\, \, \, \forall i\in {\rm {\mathcal V}}^{c} ;$
\item $\beta _{i} (k)\to 1,\, \, \, \forall i\in {\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c} .$
\end{enumerate}}
\begin{proof}
Based on the expression \eqref{ZEqnNum125869}, since $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\ge 0$, one has $\chi _{i} (k)\in (0,1]$. Then, using \eqref{ZEqnNum359584}, one can infer that $\beta _{i} (k)\in (0,1]$.
Now, according to Theorem 4, if the sensor node $i$ is under attack, then $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\gg \Upsilon _{1} $ in \eqref{ZEqnNum125869}, which makes $\chi _{i}(k)$ close to zero. Then, based on expression \eqref{ZEqnNum359584} with the discount factor $0<\kappa _{1} <1,$ the confidence $\beta _{i}(k)$ approaches zero, and thus the $i^{th} $ sensor's belief about the trustworthiness of its own information becomes low. This completes the proof of part 1.
On the other hand, based on Theorem 4, in the absence of attacks, $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\to 0$ as $w\to \infty $, which makes $\chi _{i} (k)$ close to one and, consequently, $\beta _{i} (k)$ becomes close to one. This indicates that the $i^{th}$ sensor node is confident about its own state estimate. This completes the proof of part 2.
\end{proof}
\vspace{-0.1cm}
Note that the expression for the confidence of sensor node $i$ in \eqref{ZEqnNum359584} can be implemented using the following difference equation
\vspace{-0.3cm}
\begin{equation} \label{53)} \nonumber
\beta _{i} (k+1)=\kappa _{1} \beta _{i} (k)+(\kappa _{1} )^{2} \chi _{i} (k).
\end{equation}
Note also that the discount factor in \eqref{ZEqnNum359584} determines how much the current experience is valued relative to past experiences. It also guarantees that if the attack is not persistent and disappears after a while, or if a short-lived anomaly (such as a packet dropout) rather than an attack is the cause, the belief recovers, since it mainly depends on the current circumstances.
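As a minimal numerical sketch (not part of the paper's implementation), the confidence update of Lemma 1 can be computed directly from a stream of estimated KL divergences; the divergence values and the choices $\Upsilon_1=\kappa_1=0.5$ below are purely illustrative:

```python
# Sketch of the confidence update in Lemma 1; all numbers are illustrative.
def chi(d_kl, upsilon1=0.5):
    # chi_i(k) in (0,1]: close to 1 when the estimated KL divergence is small.
    return upsilon1 / (upsilon1 + d_kl)

def confidence(d_kl_history, kappa1=0.5, upsilon1=0.5):
    # beta_i(k) = sum_{l=0}^{k-1} kappa1^(k-l+1) * chi_i(l): a discounted sum
    # that weights recent divergence estimates more heavily than old ones.
    k = len(d_kl_history)
    return sum(kappa1 ** (k - l + 1) * chi(d_kl_history[l], upsilon1)
               for l in range(k))

# Healthy sensor: divergence stays near zero, so confidence settles near its
# maximum (which depends on kappa1); compromised sensor: persistently large
# divergence drives the confidence toward zero.
healthy = confidence([0.0] * 50)
attacked = confidence([50.0] * 50)
```

The steady-state value for an intact sensor depends on the discount factor $\kappa_1$; the point of the sketch is only the contrast between the intact and compromised cases.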
\vspace{-0.35cm}
\noindent
\subsection{Trust of sensor nodes about their incoming information}
Similar to the previous subsection, the second-order inference forms a trust value for each sensor node that represents its level of trust in its neighboring sensors' state estimates. Trust determines the usefulness of the neighboring information in the state estimation of sensor node $i$.
The trust of the sensor node $i$ on its neighboring sensor $j$ can be determined based on the divergence $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} })$ in \eqref{ZEqnNum178443} from Theorem 5, from which we define
\begin{equation} \label{ZEqnNum846884}
\theta _{i,j} (k)=\frac{\Lambda _{1} }{\Lambda _{1} +\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )} ,
\end{equation}
where $0<\Lambda _{1} <1$ is a predefined threshold that accounts for channel fading and other uncertainties. Then, in the following lemma, we formally present the results for the trust of the sensor node $i$ on its neighboring sensor $j.$
\smallskip
\noindent \textbf{Lemma 2.} \textit{Let $\sigma _{i,j}(k)$ be the trust of the sensor node $i$ on its neighboring sensor $j$ which is updated using
\begin{equation} \label{ZEqnNum805360}
\sigma _{i,j} (k)=\sum _{l=0}^{k-1}(\kappa _{2} )^{k-l+1} \theta _{i,j} (l),
\end{equation}
where $\theta _{i,j}(k)$ is defined in \eqref{ZEqnNum846884}, and $0<\kappa _{2} <1$ is a discount factor. Then, $\sigma _{i,j}(k)\in (0,1]$ and
\begin{enumerate}
\item $\sigma _{i,j} (k)\to 0,\, \, \, \forall j\in {\rm {\mathcal V}}^{c} \cap N_{i} ;$
\item $\sigma _{i,j} (k)\to 1,\, \, \, \forall j\in {\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c} \cap N_{i} .$
\end{enumerate}}
\begin{proof}
The result follows a similar argument as given in the proof of Lemma 1.
\end{proof}
\vspace{-0.2cm}
Note that the trust of sensor node $i$ in \eqref{ZEqnNum805360} can be implemented using the following difference equation
\vspace{-0.2cm}
\begin{equation} \label{56)} \nonumber
\sigma _{i,j} (k+1)=\kappa _{2} \sigma _{i,j} (k)+(\kappa _{2} )^{2} \theta _{i,j} (k).
\end{equation}
Using the presented notion of trust, one can identify attacks on the communication channels and discard the contribution of compromised information in the state estimation.
\vspace{-0.35cm}
\subsection{Attack mitigation mechanism using confidence and trust of sensors}
This subsection incorporates the confidence and trust of sensors to design a resilient event-triggered DKF. To this end, using the presented confidence $\beta _{i}(k)$ in \eqref{ZEqnNum359584} and trust $\sigma _{i,j}(k)$ in \eqref{ZEqnNum805360}, we design the resilient form of the event-triggered DKF as
\begin{equation} \label{ZEqnNum565391}
\begin{array}{l} {\hat{x}_{i} (k)=\bar{x}_{i} (k)+K_{i} (k)(\beta _{i} (k)y_{i} (k)+(1-\beta _{i} (k))C_{i} m_{i} (k)-C_{i} \bar{x}_{i} (k))} \\ {\, \, \, \, \, \,\, \, \, \, \,\, \,\, \,\, \, \, \, \, \,\, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)(\tilde{x}_{j} (k)-\tilde{x}_{i} (k)),} \end{array}
\end{equation}
where the weighted neighbor's state estimate $m_{i}(k)$ is defined as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum466700}
\begin{array}{l} {m_{i} (k)=\frac{1}{\left|N_{i} \right|} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)\tilde{x}_{j} (k) \approx x(k)+\varepsilon _{i} (k),\, \, \, } \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \qquad \forall k\, \, \, \left\| \varepsilon _{i} (k)\right\| <\tau ,} \end{array}
\end{equation}
where $\varepsilon _{i}(k)$ denotes the deviation between the weighted neighbors' state estimate $m_{i} (k)$ and the actual process state $x(k)$. Note that in \eqref{ZEqnNum466700} the weighted state estimate depends on the trust values $\sigma _{i,j} (k)$ and the confidence values $\beta _{j} (k),\, \, \forall j\in N_{i}.$ Since the weighted state estimate relies only on the information from intact neighbors, one has $\left\| \varepsilon _{i} (k)\right\| <\tau $ for some $\tau >0,\, \, \forall k.$ For the sake of mathematical representation, we approximate the weighted state estimate in terms of the actual process state, i.e., $m_{i}(k)\approx x(k)+\varepsilon _{i} (k).$ We call this a meta-Bayesian inference, as it integrates the first-order inference (state estimates) with second-order estimates or beliefs (trust in and confidence about the trustworthiness of the state-estimate beliefs).
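A single-sensor sketch of the resilient update \eqref{ZEqnNum565391} is given below; the system matrices, gain, and trust/confidence values are hypothetical placeholders, chosen only to illustrate how a fully distrusted measurement ($\beta_i=0$) is replaced by the weighted neighbor estimate $m_i(k)$:

```python
import numpy as np

# Sketch of the resilient posterior update for one sensor i; all values are
# illustrative placeholders, not the paper's simulation parameters.
def resilient_update(x_bar_i, K_i, C_i, y_i, beta_i, gamma_i,
                     x_tilde_i, neighbors):
    # neighbors: list of (sigma_ij, beta_j, x_tilde_j) triples.
    # m_i: trust/confidence-weighted neighbor state estimate.
    m_i = sum(s * b * xj for s, b, xj in neighbors) / len(neighbors)
    # The innovation blends the sensor's own (possibly corrupted) measurement,
    # weighted by its confidence beta_i, with the neighbor surrogate C_i m_i.
    innovation = beta_i * y_i + (1.0 - beta_i) * (C_i @ m_i) - C_i @ x_bar_i
    consensus = gamma_i * sum(s * b * (xj - x_tilde_i) for s, b, xj in neighbors)
    return x_bar_i + K_i @ innovation + consensus

x_true = np.array([1.0, -1.0])
C = np.eye(2); K = 0.6 * np.eye(2)
nbrs = [(1.0, 1.0, x_true.copy()) for _ in range(3)]   # three intact neighbors
# beta_i = 0: the grossly corrupted measurement y_i is ignored entirely.
x_hat = resilient_update(np.zeros(2), K, C, y_i=np.array([100.0, 100.0]),
                         beta_i=0.0, gamma_i=0.1, x_tilde_i=np.zeros(2),
                         neighbors=nbrs)
```

With $\beta_i=0$, the corrupted measurement contributes nothing and the update is driven entirely by the intact neighbors' estimates.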
Define the prior and predictive state estimation errors as
\begin{equation} \label{ZEqnNum250987}
\begin{array}{l} {\bar{\eta }_{i} (k)=x(k)-\bar{x}_{i} (k),} \\ {\tilde{\eta }_{i} (k)=x(k)-\tilde{x}_{i} (k).} \end{array}
\end{equation}
Using the threshold in the triggering mechanism (\ref{eq3x}), one has
\begin{equation} \label{ZEqnNum528573}
\begin{array}{l} {\left\| \tilde{\eta }_{i} (k)\right\| -\left\| x(k+1)-x(k)+v_{i} (k+1)\right\| \le \alpha /\left\| C_{i} \right\| ,} \\ {\left\| \tilde{\eta }_{i} (k)\right\| \le \alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}},} \end{array}
\end{equation}
where ${\rm {\mathcal B}}$ denotes the bound on $\left\| x(k+1)-x(k)+v_{i} (k+1)\right\| .$
\noindent Other notations used in the following theorem are given by
\begin{equation} \label{ZEqnNum500695}
\begin{array}{l} {\bar{\eta }(k)=[\bar{\eta }_{1} (k),\ldots ,\bar{\eta }_{N} (k)],\, \, \, M(k)=diag[M_{1} (k),\ldots ,M_{N} (k)]} \\ {\Upsilon =diag[\gamma _{1} ,\ldots ,\gamma _{N} ],\, \, \Upsilon _{m} =\left\| \max \{ \gamma _{i} \} \right\| ,\, \, \forall i \in \mathcal{V}}, \\ {\bar{\beta }=(I_{N} -diag(\beta _{i} )),\, \, \, \, E(k)=[\varepsilon _{1} (k),\ldots ,\varepsilon _{N} (k)],} \\ {\tilde{\eta }(k)=[\tilde{\eta }_{1} (k),\ldots ,\tilde{\eta }_{N} (k)].} \end{array}
\end{equation}
\noindent
\textbf{Assumption 4.} At least $({\rm {\mathcal C}}(N_{i} )/2)+1$ neighbors of the sensor node $i$ are intact.
Assumption 4 is similar to assumptions found in the secure estimation and control literature \cite{c19}, \cite{c29}. A necessary and sufficient condition for any centralized or distributed estimator to resiliently estimate the actual state is that the number of attacked sensors be less than half of all sensors.
\smallskip
\noindent
\noindent \textbf{Remark 6.} {Note that the proposed notion of trust and confidence for hybrid attacks on sensor networks for the event-triggered DKF can also be viewed as a weighting in the covariance fusion approach. Although covariance intersection-based Kalman consensus filters have been widely used in the literature to deal with unknown correlations in sensor networks (for instance, see \cite{c10}-\cite{d10} and \cite{c310}-\cite{c312}), most of these results consider the time-triggered distributed state estimation problem with or without adversaries. Compared with the existing results, a novelty of this work lies in detecting and mitigating the effect of attacks on sensors and communication channels for the event-triggered DKF and providing a rigorous mathematical analysis for different triggering misbehaviors.}
\begin{theorem}
Consider the resilient event-triggered DKF \eqref{ZEqnNum565391} with the triggering mechanism (\ref{eq3x}). Let the time-varying graph be ${\rm {\mathcal G}}(k)$ such that, at each time instant $k,$ Assumptions 3 and 4 are satisfied. Then,
\begin{enumerate}
\item The following uniform bound holds on the state estimation error in \eqref{ZEqnNum250987}, despite attacks
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum232225}
\left\| \bar{\eta }(k)\right\| \le (A_{o} )^{k} \left\| \bar{\eta }(0)\right\| +\sum _{m=0}^{k-1}(A_{o} )^{k-m-1} B_{o} ,
\end{equation}
where
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum594295}
\begin{array}{l} {A_{o} =\sigma _{\max } ((I_{N} \otimes A)M(k)),} \\ {B_{o} =\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \sqrt{N} (\alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}})} \\ {\, \, \, \, \,\, \, \, \, \, \, \,\, \, \, \, \, \, \, \, +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau,} \end{array}
\end{equation}
where ${\rm {\mathcal L}}(k)$ denotes the confidence- and trust-dependent time-varying graph Laplacian matrix, and the bound $\tau $ is defined in \eqref{ZEqnNum466700};
\item The uniform bound on the state estimation error \eqref{ZEqnNum232225} becomes
\vspace{-0.15cm}
\begin{equation} \label{64)}
{\mathop{\lim }\limits_{k\to \infty }} \left\| \bar{\eta }(k)\right\| \le \frac{B_{o} }{1-A_{o} }.
\end{equation}
\end{enumerate}
Moreover, other notations used in \eqref{ZEqnNum594295} are defined in \eqref{ZEqnNum500695}.
\end{theorem}
\begin{proof}
Using the presented resilient estimator \eqref{ZEqnNum565391}, one has
\begin{equation} \label{ZEqnNum429555}
\begin{array}{l} {\bar{x}_{i} (k+1)=A\hat{x}_{i} (k)} \\ \, \, \,\, \, \, \,\,\, \, \, \, \, {=A(\bar{x}_{i} (k)+K_{i} (k)(\beta _{i} (k)y_{i} (k)+(1-\beta _{i} (k))C_{i} m_{i} (k)} \\ \, \, \,\, \, \, \,\,\, \, \quad {-C_{i} \bar{x}_{i} (k))\, +\gamma _{i} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)(\tilde{x}_{j} (k)-\tilde{x}_{i} (k))),} \end{array}
\end{equation}
Substituting \eqref{ZEqnNum466700} into \eqref{ZEqnNum429555} and using \eqref{ZEqnNum250987}, the state estimation error dynamics becomes
\begin{equation} \label{ZEqnNum162461}
\begin{array}{l} {\bar{\eta }_{i} (k+1)=AM_{i} (k)\bar{\eta }_{i} (k)+A\gamma _{i} \sum _{j\in N_{i} }a_{ij} (k)(\tilde{\eta }_{j} (k)-\tilde{\eta }_{i} (k) )} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, -AK_{i} (k)(1-\beta _{i} (k))C_{i} \varepsilon _{i} (k),} \end{array}
\end{equation}
where $a_{ij} (k)=\sigma _{i,j} (k)\beta _{j} (k)$ and $M_{i} (k)=I-K_{i} (k)C_{i} $.
\noindent Using \eqref{ZEqnNum162461} and notations defined in \eqref{ZEqnNum500695}, the global form of error dynamics becomes
\begin{equation} \label{ZEqnNum454905}
\begin{array}{l} {\bar{\eta }(k+1)=(I_{N} \otimes A)M(k)\bar{\eta }(k)-(\Upsilon \otimes A){\rm {\mathcal L}}(k)\tilde{\eta }(k)} \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \,\, \, \, \,\,\, \, \,{-(\bar{\beta }\otimes A)(I_{nN} -M(k))E(k)).} \end{array}
\end{equation}
Note that Assumption 4 implies that the total number of compromised sensors is less than half of the total number of sensors in the network. That is, if $q$ neighbors of an intact sensor node are attacked and collude to send the same value to mislead it, there still exist $q+1$ intact neighbors that communicate values different from the compromised ones. Moreover, since at least half of an intact sensor's neighbors are intact, it can update its beliefs to discard the compromised neighbors' state estimates. Furthermore, for the time-varying graph ${\rm {\mathcal G}}(k)$ that results from isolating the compromised sensors, Assumptions 3 and 4 guarantee that the entire network is still collectively observable. Using the trust and confidence of neighboring sensors, the incoming information from the compromised communication channels is discarded.
Now, taking the norm of both sides of \eqref{ZEqnNum454905} and using the triangle inequality, one has
\begin{equation} \label{ZEqnNum800097}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le \left\| (I_{N} \otimes A)M(k)\bar{\eta }(k)\right\| +\left\| (\Upsilon \otimes A){\rm {\mathcal L}}(k)\tilde{\eta }(k)\right\| } \\\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, {+\left\| (\bar{\beta }\otimes A)(I_{nN} -M(k))E(k)\right\| .} \end{array}
\end{equation}
Using \eqref{ZEqnNum466700}, \eqref{ZEqnNum800097} can be rewritten as
\begin{equation} \label{ZEqnNum810116}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +\sigma _{\max } ({\rm {\mathcal L}}(k))\left\| (\Upsilon \otimes A)\tilde{\eta }(k)\right\| } \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, {+\left\| ((\bar{\beta }\otimes A)-(\bar{\beta }\otimes I_{n} )(I_{N} \otimes A)M(k))E(k)\right\| .} \end{array}
\end{equation}
{After some manipulations, equation \eqref{ZEqnNum810116} becomes}
\begin{equation} \label{ZEqnNum297239}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \left\| \tilde{\eta }(k)\right\| } \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau ,} \end{array}
\end{equation}
with $\Upsilon _{m}$ defined in \eqref{ZEqnNum500695}. Then, using \eqref{ZEqnNum528573}, one can write \eqref{ZEqnNum297239} as
\begin{equation} \label{ZEqnNum560131}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \,+\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \sqrt{N} (\alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}})}, \end{array}
\end{equation}
After solving \eqref{ZEqnNum560131}, one has
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum925065}
\left\| \bar{\eta }(k)\right\| \le (A_{o} )^{k} \left\| \bar{\eta }(0)\right\| +\sum _{m=0}^{k-1}(A_{o} )^{k-m-1} B_{o} ,
\end{equation}
where $A_{o}$ and $B_{o}$ are given in \eqref{ZEqnNum594295}. This completes the proof of part 1. Based on Assumption 3, the distributed sensor network is always collectively observable. Thus, based on the result provided in \cite{d6}, one can conclude that $A_{o}$ in \eqref{ZEqnNum925065} is always Schur, and the upper bound on the state estimation error then becomes \eqref{64)}. This completes the proof.
\end{proof}
\vspace{-0.15cm}
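The convergence in part 2 can be checked with the scalar recursion underlying the bound, $\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +B_{o}$; the scalar values of $A_{o}$ (Schur) and $B_{o}$ below are illustrative:

```python
# Numerical check of the scalar bound recursion e(k+1) = A_o * e(k) + B_o
# with illustrative values A_o < 1 (Schur) and B_o > 0; the iterate converges
# to the geometric-series limit B_o / (1 - A_o) regardless of the initial error.
A_o, B_o = 0.8, 0.5
e = 10.0                      # illustrative initial error norm
for _ in range(500):
    e = A_o * e + B_o
limit = B_o / (1.0 - A_o)     # fixed point of the recursion
```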
Based on the attack detection approach presented in Algorithms 1 and 2, one can detect the attacker's misbehavior and estimate the actual state using the results presented in Theorem 6 and Algorithm 3.
\begin{algorithm}[!ht]
\caption{Secure Distributed Estimation Mechanism (SDEM).}
\begin{enumerate}
\item [1:] Start with initial innovation sequences and design parameters $\Upsilon _{1} $ and $\Lambda _{1}$.
\item [2:] \textbf{procedure $\forall i=1,\ldots ,N$ }
\item [3:] {Use samples of innovation sequences $\{ r_{i}^{a} (l),\ldots, $ \qquad $r_{i}^{a} (l-1+w)\}$ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ based on \eqref{ZEqnNum255093} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum750552}.
\item [5:] {Based on \eqref{ZEqnNum125869}-\eqref{ZEqnNum359584}, compute confidence $\beta _{i} (k)$ as
\begin{equation}\label{Alg1}
\beta _{i} (k)=\Upsilon _{1} \sum _{l=0}^{k-1}\frac{(\kappa _{1} )^{k-l+1} }{\Upsilon _{1} +\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )}.
\end{equation}}
\item [6:] {For each sensor node $j\in N_{i} $, use samples of innovation sequences $\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\}$ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\}$ based on \eqref{ZEqnNum178443} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [7:] Estimate the $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum691139}.
\item [8:] {Using \eqref{ZEqnNum846884}-\eqref{ZEqnNum805360}, compute trust $\sigma _{i,j}(k)$ as
\begin{equation}\label{Alg2}
\sigma _{i,j} (k)=\Lambda _{1} \sum _{l=0}^{k-1}\frac{(\kappa _{2} )^{k-l+1} }{\Lambda _{1} +\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )}.
\end{equation}}
\item [9:] {Using the sensor measurement $y_{i} (k)$ with the confidence $\beta _{i} (k)$ {in \eqref{Alg1}}, the trust on neighbor's $\sigma _{i,j} (k)$ {in \eqref{Alg2}} and neighbor's state estimates $\tilde{x}_{j} (k),\, \, \forall j\in N_{i} $, update the resilient state estimator in \eqref{ZEqnNum565391}.}
\item [10:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
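Steps 4 and 7 of the algorithm rely on KL-divergence estimates computed from windows of innovation samples. A generic $k$-nearest-neighbor KL estimator in that spirit can be sketched as follows; this is an assumed, textbook-style estimator for scalar samples, not necessarily the exact formula of \eqref{ZEqnNum750552}:

```python
import numpy as np

# Sketch of a k-nearest-neighbor KL divergence estimator between two sample
# sets (e.g., scalar innovation windows); an assumed generic estimator, not
# necessarily the paper's exact formula.
def knn_kl(x, y, k=5):
    # x ~ P, y ~ Q, both 1-D arrays; returns an estimate of D_KL(P || Q).
    n, m = len(x), len(y)
    total = 0.0
    for xi in x:
        rho = np.sort(np.abs(x - xi))[k]      # k-th NN within x (index 0 = self)
        nu = np.sort(np.abs(y - xi))[k - 1]   # k-th NN within y
        total += np.log(nu / rho)
    return total / n + np.log(m / (n - 1))

rng = np.random.default_rng(0)
same = knn_kl(rng.normal(0, 1, 500), rng.normal(0, 1, 500))      # near zero
shifted = knn_kl(rng.normal(0, 1, 500), rng.normal(5, 1, 500))   # large
```

An estimate near zero keeps $\chi_i$ (and hence the confidence) high, while a large estimate collapses it, mirroring steps 5 and 8.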
\vspace{-0.2cm}
\section{ Simulation Results}
In this section, we present simulation results to demonstrate the efficacy of the presented attack detection and mitigation mechanism. The sensor network is assumed to have the undirected communication topology shown in Fig. 2, with the objective of tracking the desired process dynamics.
Consider the process dynamics in \eqref{ZEqnNum820040} for generating the target trajectory as
\begin{equation}
x(k+1) =\left[\begin{array}{cc} {\cos (\pi /200)} & {-\sin (\pi /200)} \\ {\sin (\pi /200)} & {\cos (\pi /200)} \end{array}\right]x(k) \, +\, w(k),
\label{72)}
\end{equation}
with the observation matrix $C_{i} $ in \eqref{ZEqnNum687942}, and the noise covariances and initial state given by
\vspace{-0.2cm}
\begin{equation} \label{721)}
C_{i} =[5\, \, 0;0\, \, 2],\, \, \, \, \, Q=I_{2} ,\, \, \, \, \, R_{i} =\, I_{2} ,\, \, \, \, \, x_{0} =(0.5,0).
\vspace{-0.2cm}
\end{equation}
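The target trajectory in \eqref{72)} with the parameters in \eqref{721)} can be reproduced in a few lines; the horizon of 400 steps (one full revolution at $\pi/200$ rad per step) is an arbitrary choice:

```python
import numpy as np

# Sketch of the target trajectory generation: a slow rotation driven by
# process noise with Q = I_2, as in the simulation section.
theta = np.pi / 200
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation matrix
Q = np.eye(2)
rng = np.random.default_rng(1)

x = np.array([0.5, 0.0])                          # initial state x_0
trajectory = [x.copy()]
for _ in range(400):                              # 400 steps = one revolution
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    trajectory.append(x.copy())
trajectory = np.asarray(trajectory)
```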
\vspace{-0.1cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=1.45in,height=1.2in]{graph_ET.jpg}
\vspace{-5pt}\caption{Communication topology.}\label{fig2}
\captionsetup{justification=centering}
\end{center}
\end{figure}
\vspace{-0.7cm}
\noindent
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=29mm]{est_error.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=28mm]{event.pdf}
\caption{}
\end{subfigure}
\caption{Sensor network without any attack. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m1}
\vspace{-0.35cm}
\end{figure}
For the intact sensor network, {based on the process dynamics in \eqref{72)} with the noise covariances in \eqref{721)}}, the state estimates of the sensors converge to the desired process state in the mean-square sense, and the state estimation error goes to zero for each sensor node, as shown in Fig. 3a. The event generation based on the event-triggering mechanism in (\ref{eq3x}) with the triggering threshold $\alpha =1.8$ is shown in Fig. 3b. Then, we consider the case where sensor 2 of the network is compromised with the adversarial input $\delta _{2} (k)=2+10\sin (100k)$ after 20 seconds. Fig. 4a shows the attacker's effect: the compromised sensor and the other sensors in the network deviate from the desired target state, resulting in nonzero estimation errors driven by the attacker's input. Furthermore, the event generation under attack is shown in Fig. 4b; after the injection of the attack on sensor 2, the event-triggered system becomes effectively time-triggered and exhibits continuous-triggering misbehavior. This result follows the analysis presented for the continuous-triggering misbehavior. In Fig. 5, we show the results for non-triggering misbehavior of sensor node 2, which also follow the presented analysis.
\vspace{-0.1cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{est_error_ua.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{event_ua_u.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under continuous-triggering misbehavior. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m2}
\vspace{-0.6cm}
\end{figure}
\vspace{-0.2cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{non-trig_est_error.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{NT_event.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under non-triggering misbehavior. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m3}
\vspace{-0.35cm}
\end{figure}
\vspace{-0.35cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{Detection_Cyb.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{confidence_cyb.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under attack. (a) Estimated KL divergence (b) Confidence of sensors}
\label{m4}
\vspace{-0.35cm}
\end{figure}
\vspace{-0.4cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3.6in,height=1.3in]{final_save3.pdf}
\vspace{-10pt}\caption{State estimation errors under attack on sensor $2$ using proposed resilient state estimator.}\label{fig7}
\captionsetup{justification=centering}
\end{center}
\vspace{-0.6cm}
\end{figure}
\noindent
Now, we detect the effect of the attack using the presented attack detection mechanism. Fig. 6a shows the result for the estimated KL divergence based attack detection mechanism: after the injection of the attack signal, the estimated KL divergence starts increasing for the compromised sensor node as well as for the sensor nodes that have a path from the compromised sensor. One can design a threshold to detect the effect of the attack and then isolate the corrupted sensor to avoid the propagation of the attack in the WSN.\par
The estimated divergence for the compromised sensor, i.e., sensor 2, grows after the attack injection at $k=20$, which follows the result presented in Theorem 4. The confidence of each sensor is evaluated based on Lemma 1 with the discount factor $\kappa _{1} =0.5$ and the uncertainty threshold $\Upsilon _{1} =0.5$. Fig. 6b shows the confidence of the sensors in the presence of the considered attack: it remains close to one for healthy sensors and tends to zero for the compromised one. Then, the belief-based resilient estimator is implemented, and Fig. 7 shows the state estimation result using the resilient estimator \eqref{ZEqnNum565391}. After the injection of the attack, within a few seconds, the sensors reach consensus on the state estimates, i.e., the state estimates of the sensors converge to the actual position of the target. The result in Fig. 7 follows Theorem 6.
\vspace{-0.25cm}
\section{ Conclusion}
In this paper, we first analyze the adverse effects of cyber-physical attacks on the event-triggered distributed Kalman filter (DKF). We show that an attacker can adversely affect the performance of the DKF, and that the event-triggered mechanism can be leveraged by the attacker to cause a non-triggering misbehavior that significantly harms the network connectivity and its collective observability. Then, {to detect adversarial intrusions in the DKF, we relax the restrictive Gaussian assumption on the probability density functions of the attack signals and estimate the Kullback-Leibler (KL) divergence via a $k$-nearest-neighbors approach. }Finally, to mitigate attacks, a meta-Bayesian approach is presented that incorporates the outcome of the attack detection mechanism to perform second-order inference and consequently form beliefs over beliefs, i.e., the confidence and trust of a sensor. Each sensor communicates its confidence to its neighbors; sensors then incorporate the confidence of their neighbors, and their own trust about their neighbors, into their posterior update laws to successfully discard corrupted sensor information. Simulation results illustrate the performance of the presented resilient event-triggered DKF.
\vspace{-0.3cm}
\appendices
\section{Proof of Theorem 1}
Note that, for notational simplicity, in the following proof we keep the sensor index $i$ but drop the time index $k$. Without the time index, we represent the prior at time $k+1$ as $\bar{x}_{i}^{a} (k+1)\buildrel\Delta\over= (\bar{x}_{i}^{a} )^{+} $ and follow the same convention for other variables.
Using the process dynamics in \eqref{ZEqnNum820040} and the corrupted prior state estimate in \eqref{ZEqnNum120276}, one has
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum533668}
(\bar{\eta }_{i}^{a} )^{+} =x^{+} -(\bar{x}_{i}^{a} )^{+} =A(x\, -\hat{x}_{i}^{a} )\, +\, w,
\end{equation}
where the compromised posterior state estimate $\hat{x}_{i}^{a} (k)$ follows the dynamics \eqref{ZEqnNum120276}. Similarly, using \eqref{ZEqnNum120276}, the corrupted posterior state estimation error becomes
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum240480}
\eta _{i}^{a} =x-\hat{x}_{i}^{a} =x-\bar{x}_{i}^{a} -K_{i}^{a} (y_{i} -C\bar{x}_{i}^{a} )-\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j}^{a} -\tilde{x}_{i}^{a} )-K_{i}^{a} f_{i} .
\end{equation}
\vspace{-0.12cm}
Then, one can write \eqref{ZEqnNum533668}-\eqref{ZEqnNum240480} as
\begin{equation} \label{ZEqnNum404232}
\left\{\begin{array}{l} {(\bar{\eta }_{i}^{a} )^{+} =A\eta _{i}^{a} \, +\, w,} \\ {\eta _{i}^{a} =(I_{n} -K_{i}^{a} C_{i} )\bar{\eta }_{i}^{a} -K_{i}^{a} v_{i} +u_{i}^{a} ,} \end{array}\right.
\end{equation}
where
\begin{equation} \label{ZEqnNum571757}
\vspace{-0.12cm}
u_{i}^{a} =\gamma _{i} \sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )-K_{i}^{a} f_{i} .
\end{equation}
Based on \eqref{ZEqnNum926700}, we define the predictive state estimation error under attack as
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum896894}
\begin{array}{l}(\tilde{\eta }_{i}^{a} )^{+} =x^{+} -(\tilde{x}_{i}^{a} )^{+}\\ \, \, \, \, \,\, \, \, \, \, \,\, \, \, \, \, \,\,\, \, \, \,\,\, =\zeta _{i}^{+} (\bar{\eta }_{i}^{a} )^{+} +(1-\zeta _{i}^{+} )A\tilde{\eta }_{i}^{a} . \end{array}
\end{equation}
Using \eqref{ZEqnNum404232}, the corrupted covariance of the prior state estimation error becomes
\begin{equation} \label{ZEqnNum928831}
\begin{array}{l} {(\bar{P}_{i}^{a} )^{+} ={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} ((\bar{\eta }_{i}^{a} )^{+} )^{T} \right],} \\ {\, \, \, \, \,\, \, \, \, \, \, \,\,\, \, \, \, \, \, \, \, \, \, \, ={\bf {\rm E}}\left[(A\eta _{i}^{a} \, +\, w)(A\eta _{i}^{a} \, +\, w\, )^{T} \right]=A\hat{P}_{i}^{a} A^{T} +Q.} \end{array}
\end{equation}
Using the corrupted predictive state estimate error $\, (\tilde{\eta }_{i}^{a} )^{+} $ in \eqref{ZEqnNum896894} with $(\bar{P}_{i,j}^{a} )^{+} =A\hat{P}_{i,j}^{a} A^{T} +Q$, one can write the cross-correlated predictive state estimation error covariance $(\tilde{P}_{i,j}^{a} )^{+} $ as
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum354214}
\begin{array}{l} {(\tilde{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\tilde{\eta }_{i}^{a} )^{+} ((\tilde{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{i}^{+} (1-\zeta _{j}^{+} )A(\breve{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )\zeta _{j}^{+} (\stackrel{\frown}{P}_{i,j}^{a} )^{+} A^{T} } \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{+\zeta _{i}^{+} \zeta _{j}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )(1-\zeta _{j}^{+} )(A\tilde{P}_{i,j}^{a} A^{T} +Q),} \end{array}
\end{equation}
where $\stackrel{\frown}{P}_{i,j}^{a} $ and $\breve{P}_{i,j}^{a}$ are the cross-correlated estimation error covariances, whose updates are given in \eqref{ZEqnNum358063}-\eqref{ZEqnNum655968}.
The cross-correlated estimation error covariance $(\stackrel{\frown}{{P}}_{i,j}^{a} )^{+}$ in \eqref{ZEqnNum354214} is given by
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum358063}
\begin{array}{l} {(\stackrel{\frown}{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\tilde{\eta }_{i}^{a} )^{+} ((\bar{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{i}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )A\sum _{r\in N_{i} }(\tilde{P}_{i,r}^{a} -\tilde{P}_{i,j}^{a} )(\gamma _{i} A)^{T} +} \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (1-\zeta _{i}^{+} )[A\stackrel{\frown}{P}_{i,j}^{a} M_{i}^{a} A^{T} +Q],} \end{array}
\end{equation}
where $\tilde{P}_{i,j}^{a}$ and $\breve{P}_{i,j}^{a}$ denote the cross-correlated estimation error covariances, which evolve according to \eqref{ZEqnNum354214} and \eqref{ZEqnNum655968}, respectively. Similarly, $(\breve{P}_{i,j}^{a} )^{+} $ is updated according to
\begin{equation} \label{ZEqnNum655968}
\begin{array}{l} {(\breve{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} ((\tilde{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} (\zeta _{j}^{+} (\bar{\eta }_{j}^{a} )^{+} +(1-\zeta _{j}^{+} )(A\tilde{\eta }_{j}^{a} +w)\, )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{j}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{j}^{+} )[A(M_{i}^{a} )^{T} \stackrel{\frown}{P}_{i,j}^{a} A^{T} +Q]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{+(1-\zeta _{j}^{+} )A\gamma _{i} \sum _{s\in N_{i} }(\tilde{P}_{s,j}^{a} -\tilde{P}_{i,j}^{a} ) A^{T} .} \end{array}
\end{equation}
Now using \eqref{ZEqnNum240480}-\eqref{ZEqnNum896894}, one can write the covariance of posterior estimation error $\hat{P}_{i}^{a} $ as
\begin{equation} \label{ZEqnNum893608}
\begin{array}{l}
\hat P_i^a = {\rm{E}}[{M_i}\bar \eta _i^a{({M_i}\bar \eta _i^a)^T}] + {\rm{E}}[K_i^a{v_i}{(K_i^a{v_i})^T}] - 2{\rm{E}}[({M_i}\bar \eta _i^a){(K_i^a{v_i})^T}] \\
\quad - 2{\rm{E}}[K_i^a{v_i}{({\gamma _i}u_i^a)^T}] + {\rm{E}}[({\gamma _i}u_i^a){({\gamma _i}u_i^a)^T}] + 2{\rm{E}}[({M_i}\bar \eta _i^a){({\gamma _i}u_i^a)^T}],
\end{array}
\end{equation}
Using \eqref{ZEqnNum928831} and the measurement noise covariance, the first two terms of \eqref{ZEqnNum893608} become
\begin{equation} \label{87)}
\begin{array}{l} {{\bf {\rm E}}[M_{i} \bar{\eta }_{i}^{a} (M_{i} \bar{\eta }_{i}^{a} )^{T} ]=M_{i} \bar{P}_{i}^{a} M_{i}^{T},}\,\,\,\, \,\, {{\bf {\rm E}}[K_{i}^{a} v_{i} (K_{i}^{a} v_{i} )^{T} ]=K_{i}^{a} R_{i} (K_{i}^{a} )^{T}.} \end{array}
\end{equation}
According to Assumption 1, the measurement noise $v_{i} $ is i.i.d. and uncorrelated with the state estimation errors; therefore, the third and fourth terms in \eqref{ZEqnNum893608} are zero. Now, using $u_{i}^{a} $ in \eqref{ZEqnNum571757} and Assumption 1, the last two terms in \eqref{ZEqnNum893608} can be simplified as
\vspace{-0.2cm}
\begin{equation} \label{88)}
\begin{array}{l}
{\bf {\rm E}}[(u_{i}^{a})(u_{i}^{a})^{T}] =\gamma _{i}^{2}{\bf {\rm E}}\left[\Big[\sum\limits_{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})\Big]\Big[\sum\limits_{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})\Big]^{T}\right] \\
\qquad +{\bf {\rm E}}[K_{i}^{a}f_{i}(K_{i}^{a}f_{i})^{T}]-2K_{i}^{a}{\bf {\rm E}}\Big[f_{i}\sum\limits_{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})^{T}\Big] \\
\qquad =\gamma _{i}^{2}\sum\limits_{j\in N_{i}}(\tilde{P}_{j}^{a}-2\tilde{P}_{i,j}^{a}+\tilde{P}_{i}^{a})+K_{i}^{a}\Sigma _{i}^{f}(K_{i}^{a})^{T}-2K_{i}^{a}{\bf {\rm E}}\Big[f_{i}\sum\limits_{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})^{T}\Big],
\end{array}
\end{equation}
\vspace{-0.1cm}
and
\begin{equation} \label{ZEqnNum612155}
\begin{array}{rl} 2{\bf {\rm E}}[(u_{i}^{a})(M_{i}\bar{\eta }_{i}^{a})^{T}] & =2{\bf {\rm E}}[(\gamma _{i}\sum _{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})-K_{i}^{a}f_{i})(M_{i}\bar{\eta }_{i}^{a})^{T}] \\ & =2\gamma _{i}\sum _{j\in N_{i}}(\stackrel{\frown}{P}_{i,j}^{a}-\stackrel{\frown}{P}_{i}^{a})M_{i}^{T}-2K_{i}^{a}{\bf {\rm E}}[f_{i}(\bar{\eta }_{i}^{a})^{T}]M_{i}^{T}, \end{array}
\end{equation}
where the cross-correlated term $\stackrel{\frown}{P}_{i,j}^{a} $ is updated according to \eqref{ZEqnNum358063}. Using \eqref{ZEqnNum893608}-\eqref{ZEqnNum612155}, the posterior state estimation error covariance $\hat{P}_{i}^{a} $ under attacks is given by
\vspace{-0.2cm}
\begin{equation} \label{90)}\nonumber
\begin{array}{l} \hat{P}_{i}^{a} =M_{i}^{a}\bar{P}_{i}^{a}(M_{i}^{a})^{T}+K_{i}^{a}[R_{i}+\Sigma _{i}^{f}](K_{i}^{a})^{T}-2K_{i}^{a}\Xi _{f} \\ \qquad +2\gamma _{i}\sum _{j\in N_{i}}(\stackrel{\frown}{P}_{i,j}^{a}-\stackrel{\frown}{P}_{i}^{a})(M_{i}^{a})^{T}+\gamma _{i}^{2}\sum _{j\in N_{i}}(\tilde{P}_{j}^{a}-2\tilde{P}_{i,j}^{a}+\tilde{P}_{i}^{a}), \end{array}
\end{equation}
with $\Xi _{f} ={\bf {\rm E}}[f_{i}\sum _{j\in N_{i}}(\tilde{\eta }_{j}^{a}-\tilde{\eta }_{i}^{a})^{T}]+{\bf {\rm E}}[f_{i}(\bar{\eta }_{i}^{a})^{T}](M_{i}^{a})^{T}.$ This completes the proof.
\vspace{-0.3cm}
\section{Introduction}
{Cyber-physical systems (CPSs) refer to a class of engineering systems that integrate the cyber aspects of computation and communication with physical entities \cite{c1}.} Integrating communication and computation with sensing and control elements has made CPSs a key enabler in designing emerging autonomous and smart systems, with the promise of bringing unprecedented benefits to humanity. CPSs have already had a profound impact on a variety of engineering sectors, including process industries \cite{c2}, robotics \cite{c3}, {smart grids \cite{c4}, and intelligent transportation \cite{c5}, to name a few.} Despite their vast growth and success, these systems are vulnerable to cyber-physical threats and can face fatal consequences if not empowered with resiliency. {The importance of designing resilient and secure CPSs is evident from the severe damage caused by recently reported cyber-physical attacks \cite{c6_7}}.\par
\subsection{Related Work}
Wireless sensor networks (WSNs) are a class of CPSs in which a set of sensors is spatially distributed to monitor and estimate a variable of interest (e.g., the location of a moving target or the state of a large-scale system), {and they have various applications such as surveillance and monitoring, target tracking, and active health monitoring \cite{c8}}. In centralized WSNs, all sensors broadcast their measurements to a center at which the information is fused to estimate the state \cite{c9}. These approaches, however, are communication-demanding and prone to a single point of failure. {To estimate the state with a reduced communication burden, a distributed Kalman filter (DKF) is presented in \cite{c10}-\cite{d3}, in which sensors exchange their information only with their neighbors, not with all agents in the network or a central agent.} {Cost constraints on sensor nodes in a WSN result in corresponding constraints on resources such as energy and communication bandwidth. Sensors in a WSN usually carry limited, irreplaceable energy resources, and lifetime adequacy is a significant restriction of almost all WSNs. Therefore, it is of vital importance to design an event-triggered DKF that reduces the communication burden and consequently improves energy efficiency. To this end, several energy-efficient event-triggered distributed state estimation approaches have been presented in which sensor nodes intermittently exchange information \cite{c13}-\cite{c16}. Moreover, the importance of the event-triggered state estimation problem has also been reported for several practical applications, such as smart grids and robotics \cite{r02}-\cite{r04}. Although event-triggered distributed state estimation is resource-efficient, it provides an opportunity for an attacker to harm the network performance and its connectivity by corrupting the information exchanged among sensors, as well as to mislead the event-triggered mechanism.
Thus, it is of vital importance to design a resilient event-triggered distributed state estimation approach that can perform accurate state estimation despite attacks.} \par
In recent years, secure estimation and secure control of CPSs have received significant attention, and remarkable results have been reported for the mitigation of cyber-physical attacks, {including denial-of-service (DoS) attacks \cite{c17}-\cite{c18}, false data injection attacks \cite{c19}-\cite{c23}, and bias injection attacks \cite{c24}. }For the time-triggered distributed scenario, several secure state estimation approaches are presented in \cite{c26}-\cite{c312}. Specifically, in \cite{c26}-\cite{c30} the authors presented a distributed estimator that allows agents to perform parameter estimation in the presence of attacks by discarding information from the adversarial agents. A Byzantine-resilient distributed estimator with deterministic process dynamics is discussed in \cite{c27}. The same authors then solved the resilient distributed estimation problem with communication losses and intermittent measurements in \cite{c28}. Attack analysis and detection for distributed Kalman filters are discussed in \cite{c281}. Resilient state estimation subject to DoS attacks for power system and robotics applications is presented in \cite{c310}-\cite{c312}. Although meritable, these time-triggered resilient state estimation results are not applicable to event-triggered distributed state estimation problems. {Recently, the authors in \cite{c17r} addressed event-triggered distributed state estimation under DoS attacks by employing the covariance intersection fusion approach. Although elegant, the presented approach is not applicable to mitigating the effect of deception attacks. To our knowledge, resilient state estimation for the event-triggered DKF under deception attacks has not been considered in the literature. For the first time, this work not only detects and mitigates the effect of attacks on sensors and communication channels but also presents a mathematical analysis of different triggering misbehaviors.}
\vspace{-0.35cm}
\subsection{Contributions and outline}
\vspace{-0.1cm}
{This paper contributes to the analysis, detection, and mitigation of attacks on the event-triggered DKF. To our knowledge, it is the first paper to analyze how an attacker can leverage the event-triggering mechanism to damage the state estimation process over WSNs. It also presents a detection mechanism for attacks on the event-triggered DKF that does not require the restrictive Gaussian assumption on the probability density function of the attack signal. Finally, a novel meta-Bayesian attack detection mechanism is presented that performs second-order inference to detect stealthy attacks. The details of these contributions are presented as follows:}
\begin{itemize}
\item {Attack analysis: We show that the attacker can cause non-triggering misbehavior, so that the compromised sensors do not broadcast any information to their neighbors. This can significantly harm the network connectivity and its collective observability, which is a necessary condition for solving the distributed state estimation problem. We then show that an attacker can achieve continuous-triggering misbehavior, which drains the communication resources.}
\item {Attack detection: To detect adversarial intrusions, a Kullback-Leibler (KL) divergence-based detector is presented and estimated via a k-nearest-neighbors approach to obviate the restrictive Gaussian assumption on the probability density function of the attack signal.}
\item {Attack mitigation: To mitigate attacks on event-triggered DKF,
a meta-Bayesian approach is employed that performs second-order inference to form confidence and trust about the truthfulness or legitimacy of the outcome of its own first-order inference (i.e., the posterior belief about the state estimate) and those of its neighbors, respectively. Each sensor communicates its confidence to its neighbors and also incorporates the trust about its neighbors into its posterior update law to put less weight on untrusted data and thus successfully discard corrupted information.}
\end{itemize}
\textit{Outline:} The paper is organized as follows. Section II outlines the preliminary background for the event-triggered DKF. Section III formulates the effect of attacks on the event-triggered DKF and analyzes its triggering misbehaviors. The attack detection mechanism and the confidence-trust-based secure event-triggered DKF are presented in Sections IV and V, respectively. Simulation verifications are provided in Section VI. Finally, concluding remarks are presented in Section VII.
\vspace{-0.1cm}
\section{Notations and Preliminaries}
\subsection{Notations}
The data communication among sensors in a WSN is captured by an undirected graph ${\rm {\mathcal G}}$, consisting of a pair $({\rm {\mathcal V}},{\rm {\mathcal E}})$, where ${\rm {\mathcal V}}=\{ 1,2,\ldots ,N\}$ is the set of nodes or sensors and ${\rm {\mathcal E}}\subset {\rm {\mathcal V}}\times {\rm {\mathcal V}}$ is the set of edges. An edge from node $j$ to node $i$, represented by $(j,i)$, implies that node $j$ can broadcast information to node $i$. Moreover, $N_{i} =\{ j:(j,i)\in {\rm {\mathcal E}}\}$ is the set of neighbors of node $i$ on the graph ${\rm {\mathcal G}}.$ An induced subgraph ${\rm {\mathcal G}}^{w}$ is obtained by removing a set of nodes ${\rm {\mathcal W}}\subset {\rm {\mathcal V}}$ from the original graph ${\rm {\mathcal G}}$; it is represented by the node set ${\rm {\mathcal V}\backslash {\mathcal W}}$ and contains the edges of ${\rm {\mathcal E}}$ with both endpoints in ${\rm {\mathcal V}\backslash {\mathcal W}}$.
Throughout this paper, ${\bf {\mathbb{R}}}$ and ${\bf {\mathbb{N}}}$ represent the sets of real numbers and natural numbers, respectively. $A^{T}$ denotes the transpose of a matrix $A$. $tr(A)$ and $\max (a_{i} )$ represent the trace of a matrix $A$ and the maximum value in a set, respectively. ${\rm {\mathcal C}}(S)$ represents the cardinality of a set $S$. $\sigma _{\max } (A),$ $\lambda _{\max } (A),$ and $I_{n}$ represent the maximum singular value of a matrix $A$, the maximum eigenvalue of a matrix $A$, and the identity matrix of dimension $n$, respectively. ${\rm {\mathcal U}}(a,b)$ with $a<b$ denotes a uniform distribution on the interval $(a,b)$. Consider $p_{X} (x)$ as the probability density of the random variable or vector $x$ with $X$ taking values in the finite set $\{ 0,...,p\}.$ When a random variable $X$ is distributed normally with mean $\nu$ and variance $\sigma ^{2},$ we use the notation $X\sim {\rm {\mathcal N}}(\nu ,\sigma ^{2} )$. ${\bf {\rm E}}[X]$ and $\Sigma _{X} ={\bf {\rm E}}[(X-{\bf {\rm E}}[X])(X-{\bf {\rm E}}[X])^{T} ]$ denote, respectively, the expectation and the covariance of $X.$ Finally, ${\bf {\rm E}}[.|.]$ represents the conditional expectation.
\vspace{-0.3cm}
\subsection{Process Dynamics and Sensor Models}
Consider a process that evolves according to the following dynamics
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum820040}
x(k+1)=Ax(k)\, +\, w(k),
\end{equation}
where $A$ denotes the process dynamics matrix, and $x(k)\in {\bf {\mathbb R}}^{n}$ and $w(k)$ are, respectively, the process state and the process noise at time $k$. The process noise $w(k)$ is assumed to be independent and identically distributed (i.i.d.) with a Gaussian distribution, and $x_{0} \sim {\rm {\mathcal N}}(\hat{x}_{0} ,P_{0} )\,$ represents the initial process state with mean $\hat{x}_{0}$ and covariance $P_{0}$.
The goal is to estimate the state $x(k)$ for the process \eqref{ZEqnNum820040} in a distributed fashion using $N$ sensor nodes that communicate through the graph ${\rm {\mathcal G}}$, and their sensing models are given by
\begin{equation} \label{ZEqnNum687942}
y_{i} (k)=C_{i} x(k)\, +\, v_{i} (k);\, \, \, \, \, \, \, \, \, \, \, \forall i=1,\cdots ,N,
\end{equation}
where $y_{i} (k)\in {\bf {\mathbb R}}^{p}$ represents the measurement data with $v_{i} (k)$ as the i.i.d. Gaussian measurement noise and $C_{i}$ as the observation matrix of the sensor $i$, respectively.
\smallskip
\noindent
\textbf{Assumption 1}. The process noise $w(k),$ the measurement noise $v_{i} (k),$ and the initial state $x_{0}$ are uncorrelated random vector sequences.
\smallskip
\noindent
\textbf{Assumption 2}. The sequences $w(k)$ and $v_{i}(k)$ are zero-mean Gaussian noise with
\vspace{-0.15cm}
\[{\bf {\rm E}}[w(k)(w(h))^{T} ]=\mu _{kh} Q\, \, \, \, \]
and
\vspace{-0.15cm}
\[{\bf {\rm E}}[v_{i} (k)(v_{i} (h))^{T} ]=\mu _{kh} R_{i} ,\]
with $\mu _{kh} =0$ if $k\ne h$, and $\mu _{kh} =1$ otherwise. Moreover, $Q\ge0$ and $R_{i}>0$ denote the noise covariance matrices for process and measurement noise, respectively and both are finite.
\smallskip
\noindent
\textbf{Definition 1. (Collectively observable) \cite{c11}.} We call the plant dynamics \eqref{ZEqnNum820040} and the measurement equation \eqref{ZEqnNum687942} collectively observable if the pair $(A,C_{S} )$ is observable, where $C_{S}$ is the column stack of $C_{j}, \,\,\forall j \in S,$ with $S\subseteq {\rm {\mathcal V}}$ and ${\rm {\mathcal C}}(S)>N/2$.
\smallskip
\noindent
\textbf{Assumption 3.} The plant dynamics \eqref{ZEqnNum820040} and the measurement equation \eqref{ZEqnNum687942} are collectively observable, but not necessarily locally observable, i.e., $(A,C_{i} )$ $\, \forall i\in {\rm {\mathcal V}}$ is not necessarily observable.
Assumptions $1$ and $2$ are standard in Kalman filtering. {Assumption 3 states that the state of the target in \eqref{ZEqnNum820040} cannot be observed by the measurements of any single sensor, i.e., the pairs $(A,C_{i} )$ need not be observable (see for instance \cite{c11} and \cite{c30}). It also provides the collective observability condition necessary for the estimation problem to be solvable. Note that under Assumption 2, i.e., finite process and measurement noise covariances, the stochastic observability rank condition coincides with the deterministic one [Theorem 1, 43]. Therefore, the deterministic observability rank condition holds irrespective of the process and measurement noise.}
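As an illustration of Definition 1 and Assumption 3, the following sketch (numpy-based; the process and observation matrices are hypothetical, not taken from this paper) checks the observability rank condition for each individual sensor and for a majority subset $S$:

```python
import numpy as np

def observability_rank(A, C):
    # Rank of the observability matrix [C; CA; ...; CA^(n-1)].
    n = A.shape[0]
    return np.linalg.matrix_rank(
        np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)]))

# Illustrative two-state process with decoupled modes: each sensor sees
# only one mode, so no pair (A, C_i) is observable on its own.
A = np.diag([1.01, 0.98])
C = {1: np.array([[1.0, 0.0]]),
     2: np.array([[1.0, 0.0]]),
     3: np.array([[0.0, 1.0]])}

n = A.shape[0]
locally_observable = {i: observability_rank(A, Ci) == n for i, Ci in C.items()}
# Majority subset S = {1, 3}, C(S) > N/2: stack its observation matrices.
C_S = np.vstack([C[1], C[3]])
collectively_observable = observability_rank(A, C_S) == n
print(locally_observable, collectively_observable)
```

No single pair $(A,C_i)$ passes the rank test here, yet the stacked pair $(A,C_S)$ does, matching the collectively-but-not-locally-observable setting of Assumption 3.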
\vspace{-0.3cm}
\subsection{Overview of Event-triggered Distributed Kalman Filter}
This subsection presents the overview of the event-triggered DKF for estimating the process state $x(k)$ in \eqref{ZEqnNum820040} from a collection of noisy measurements $y_{i} (k)$ in \eqref{ZEqnNum687942}.
Let the prior and posterior estimates of the target state $x(k)$ for sensor node $i$ at time $k$ be denoted by $x_{i}(k|k-1)$ and $x_{i}(k|k)$, respectively. In the centralized Kalman filter, a recursive rule based on Bayesian inference is employed to compute the posterior estimate $x_{i}(k|k)$ from its prior estimate $x_{i}(k|k-1)$ and the new measurement $y_{i}(k)$. When the next measurement arrives, the previous posterior estimate is used as the new prior and the same recursive estimation rule proceeds. In the event-triggered DKF, the recursion rule for computing the posterior incorporates not only the sensor's own prior and observations, but also its neighbors' predictive state estimates. Sensor $i$ communicates its prior state estimate to its neighbors only if the norm of the error between its actual output and its predictive output exceeds a threshold when a new observation arrives. That is, it employs the following event-triggered mechanism for the exchange of data with its neighbors
\vspace{-0.15cm}
\begin{equation} \label{eq3x}
\left\| y_{i} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| <\alpha,
\end{equation}
where $\alpha$ denotes a predefined threshold for event-triggering. Moreover, $\tilde{x}_{i} (k)$ denotes the predictive state estimate for sensor $i$ and follows the update law
\begin{equation} \label{ZEqnNum926700}
\tilde{x}_{i} (k)=\zeta _{i} (k)x_{i} (k|k-1)+(1-\zeta _{i} (k))A\tilde{x}_{i} (k-1),\, \, \forall i\in {\rm {\mathcal V}},
\end{equation}
with $\zeta _{i} (k)\in \left\{0,1\right\}$ as the transmit function. {Note that the predictive state estimate update equation in (4) depends on the value of the transmit function ${{\zeta }_{i}}(k)$ which is either zero or one depending on the triggering condition in (3). When ${{\zeta }_{i}}(k)=1$, then the prior and predictive state estimates are the same, i.e., ${{\tilde{x}}_{i}}(k)={{x}_{i}}(k|k-1)$. When ${{\zeta }_{i}}(k)=0,$ however, the predictive state estimate depends on its own previous state estimate, i.e., ${{\tilde{x}}_{i}}(k)=A{{\tilde{x}}_{i}}(k-1).$ }
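For illustration, the triggering rule \eqref{eq3x} together with the predictive update \eqref{ZEqnNum926700} can be sketched for a single sensor as follows (the threshold, matrices, and measurements are illustrative assumptions):

```python
import numpy as np

ALPHA = 0.5                              # hypothetical triggering threshold
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative process matrix
C = np.array([[1.0, 0.0]])               # illustrative observation matrix

def trigger_step(y, x_bar, x_tilde_prev):
    # Eq. (3): transmit only if the predicted-output error reaches ALPHA;
    # eq. (4): update the predictive state estimate accordingly.
    zeta = int(np.linalg.norm(y - C @ x_tilde_prev) >= ALPHA)
    x_tilde = zeta * x_bar + (1 - zeta) * (A @ x_tilde_prev)
    return zeta, x_tilde

x_bar = np.array([1.05, 0.0])            # current prior estimate
x_tilde_prev = np.array([1.0, 0.0])      # last predictive estimate
zeta_small, _ = trigger_step(np.array([1.02]), x_bar, x_tilde_prev)  # error 0.02
zeta_large, _ = trigger_step(np.array([2.00]), x_bar, x_tilde_prev)  # error 1.00
print(zeta_small, zeta_large)            # -> 0 1
```

A small predicted-output error keeps $\zeta_i(k)=0$ (no broadcast, the prediction is propagated through $A$), while a large error sets $\zeta_i(k)=1$ and the prior is broadcast.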
Incorporating \eqref{ZEqnNum926700}, the following recursion rule is used to update the posterior state estimate in the event-triggered DKF \cite{c13}, \cite{c15} for sensor $i$ as
\begin{equation} \label{ZEqnNum257073}
\begin{array}{l} {x_{i} (k|k)=x_{i} (k|k-1)+K_{i} (k)(y_{i} (k)-C_{i} x_{i} (k|k-1))} \\ {\, \, \, \, \, \, \,\, \, \, \, \, \,\, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i} (k) ),} \end{array}
\end{equation}
where
\vspace{-0.25cm}
\begin{equation} \label{ZEqnNum569383}
x_{i} (k|k-1)=Ax_{i} (k-1|k-1),
\end{equation}
is the prior update. Moreover, the second and third terms in \eqref{ZEqnNum257073} denote, respectively, the innovation part (i.e., the estimation error based on sensor $i$'s new observation and its prior prediction) and the consensus part (i.e., the deviation of the sensor's state estimate from its neighbors' state estimates). We call this recursion rule the \textit{Bayesian first-order inference} on the posterior, which provides the belief over the value of the state.
Moreover, $K_{i} (k)$ and $\gamma _{i}$ in \eqref{ZEqnNum257073} denote, respectively, the Kalman gain and the coupling coefficient. The Kalman gain $K_{i} (k)$ in \eqref{ZEqnNum257073} depends on the estimation error covariance matrices associated with the prior $x_{i} (k|k-1)$ and the posterior $x_{i} (k|k)$ for sensor $i$. Let us define the prior and posterior estimation error covariances as
\begin{equation} \label{ZEqnNum606287}
\begin{array}{l} {P_{i} (k|k-1)={\bf {\rm E}}[(x(k)-x_{i} (k|k-1))(x(k)-x_{i} (k|k-1))^{T} ],} \\ {P_{i} (k|k)={\bf {\rm E}}[(x(k)-x_{i} (k|k))(x(k)-x_{i} (k|k))^{T} ].} \end{array}
\end{equation}
These covariances are simplified as \cite{c13}, \cite{c15}
\begin{equation} \label{ZEqnNum987927}
P_{i} (k|k)=M_{i} (k)P_{i} (k|k-1)(M_{i} (k))^{T} +K_{i} (k)R_{i} (K_{i} (k))^{T} ,
\end{equation}
and
\vspace{-0.25cm}
\begin{equation} \label{9)}
P_{i} (k|k-1)=AP_{i} (k-1|k-1)A^{T} +Q,
\end{equation}
with $M_{i} (k)=I_{n} -K_{i} (k)C_{i}.$ Then, the Kalman gain $K_{i} (k)$ is designed to minimize the estimation error covariance and is given by \cite{c13}, \cite{c15}
\begin{equation} \label{ZEqnNum999982}
K_{i} (k)=P_{i} (k|k-1)(C_{i} )^{T} (R_{i} (k)+C_{i} P_{i} (k|k-1)(C_{i} )^{T} )^{-1} .
\end{equation} \par
Let the innovation sequence $r_{i} (k)$ for the node $i$ be defined as
\begin{equation} \label{ZEqnNum276515}
r_{i} (k)=y_{i} (k)-C_{i} x_{i} (k|k-1),
\end{equation}
\vspace{-0.15cm}
where $r_{i}(k)\sim {\rm {\mathcal N}}(0,\Omega _{i} (k))$ with
\begin{equation} \label{ZEqnNum368934} \nonumber
\Omega _{i} (k)={\bf {\rm E}}[r_{i} (k)(r_{i} (k))^{T} ]=C_{i} P_{i} (k|k-1)C_{i} {}^{T} +R_{i} (k).
\end{equation}\par
Note that for notational simplicity, henceforth we denote the prior and posterior state estimates as $x_{i} (k|k-1)\buildrel\Delta\over= \bar{x}_{i} (k)$ and $x_{i} (k|k)\buildrel\Delta\over= \hat{x}_{i} (k),$ respectively. Also, the prior and posterior covariances are denoted by $P_{i} (k|k-1)\buildrel\Delta\over= \bar{P}_{i} (k)$ and $P_{i} (k|k)\buildrel\Delta\over= \hat{P}_{i} (k)$, respectively. \par
\smallskip
{Based on equations (6)-(10)}, the event-triggered DKF algorithm becomes
$\textit{Time\,\,updates:}$ \par \hfill
\vspace{-0.4cm}
\begin{equation}\label{ZEqnNum838493}
\left\{\begin{array}{ll}
\bar{x}_{i}(k+1)=A{{\hat{x}}_{i}}(k), & (a) \\
\bar{P}_{i}(k+1)=A{{\hat{P}}_{i}}(k){{A}^{T}}+Q(k), & (b)
\end{array}\right.
\end{equation}\par
$\textit{Measurement\,\,updates:}$\par
\vspace{-0.4cm}
\begin{equation}\label{ZEqnNum727229}
{\left\{\begin{array}{ll} {\hat{x}_{i} (k)=\bar{x}_{i} (k)+K_{i} (k)(y_{i} (k)-C_{i} \bar{x}_{i} (k))} & \\ {\qquad\qquad +\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i} (k) ),} & (a) \\ {\tilde{x}_{i} (k)=\zeta _{i} (k)\bar{x}_{i} (k)+(1-\zeta _{i} (k))A\tilde{x}_{i} (k-1),} & (b) \\ {K_{i} (k)=\bar{P}_{i} (k)C_{i}^{T} (R_{i} (k)+C_{i} \bar{P}_{i} (k)C_{i}^{T} )^{-1} ,} & (c) \\ {\hat{P}_{i} (k)=M_{i} (k)\bar{P}_{i} (k)(M_{i} (k))^{T} +K_{i} (k)R_{i} (k)(K_{i} (k))^{T} .} & (d) \end{array}\right. }
\end{equation}
\smallskip
\noindent
\noindent \textbf{Remark 1.} Based on the result presented in [17, Th. 1], the event-triggered DKF \eqref{ZEqnNum838493}-\eqref{ZEqnNum727229} ensures that the estimation error $\hat{x}_{i} (k)-x(k)$ is exponentially bounded in the mean-square sense $\forall i\in {\rm {\mathcal V}}.$
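One full iteration of the event-triggered DKF \eqref{ZEqnNum838493}-\eqref{ZEqnNum727229} at a single sensor might be sketched as follows (a minimal numpy sketch; the system matrices, threshold $\alpha$, and coupling gain $\gamma_i$ are placeholder assumptions, and the neighbors' predictive estimates are taken as given):

```python
import numpy as np

def et_dkf_step(A, C, Q, R, alpha, gamma,
                x_bar, P_bar, x_tilde_prev, y, neighbor_tildes):
    """One event-triggered DKF iteration: eqs. (14a)-(14d), then (13a)-(13b)."""
    # (3) and (14b): triggering decision and predictive state estimate.
    zeta = int(np.linalg.norm(y - C @ x_tilde_prev) >= alpha)
    x_tilde = zeta * x_bar + (1 - zeta) * (A @ x_tilde_prev)
    # (14c): Kalman gain.
    K = P_bar @ C.T @ np.linalg.inv(R + C @ P_bar @ C.T)
    # (14a): innovation plus consensus update.
    consensus = sum(xt_j - x_tilde for xt_j in neighbor_tildes)
    x_hat = x_bar + K @ (y - C @ x_bar) + gamma * consensus
    # (14d): posterior covariance.
    M = np.eye(len(x_bar)) - K @ C
    P_hat = M @ P_bar @ M.T + K @ R @ K.T
    # (13a)-(13b): time update giving the next prior.
    return A @ x_hat, A @ P_hat @ A.T + Q, x_tilde

A = np.array([[1.0, 0.1], [0.0, 1.0]]); C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
x_bar_next, P_bar_next, x_tilde = et_dkf_step(
    A, C, Q, R, alpha=0.5, gamma=0.1,
    x_bar=np.array([1.0, 0.0]), P_bar=np.eye(2),
    x_tilde_prev=np.array([1.0, 0.0]), y=np.array([1.2]),
    neighbor_tildes=[np.array([1.1, 0.0])])
```

The sketch mirrors the order of the recursion: triggering decision, gain computation, measurement update with the consensus term, covariance update, and time update.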
\smallskip
\noindent
\noindent \textbf{Remark 2.} {The consensus gain ${{\gamma }_{i}}$ in (5) is designed such that the stability of the event-triggered DKF in (13)-(14) is guaranteed. Specifically, as shown in [Theorem 2, 19], if
\begin{equation}
\nonumber
{{\gamma }_{i}}=\frac{2(I-{{K}_{i}}{{C}_{i}}){{({{\Gamma }_{i}})}^{-1}}}{{{\lambda }_{\max }}(\mathcal{L}){{\lambda }_{\max }}({{(\Gamma )}^{-1}})}
\end{equation}
where $\mathcal{L}$ denotes the Laplacian matrix associated with the graph $\mathcal{G}$ and $\Gamma =diag\{{{\Gamma }_{1}},..,{{\Gamma }_{N}}\}$ with ${{\Gamma }_{i}}={{(I-{{K}_{i}}{{C}_{i}})}^{T}}{{A}^{T}}{{({{\bar{P}}_{i}})}^{+}}A(I-{{K}_{i}}{{C}_{i}}),\,\,\forall i=\{1,...,N\},$ then the stability of the event-triggered DKF in (13)-(14) is guaranteed. The design of the event-triggered DKF itself, however, is not the concern of this paper; we mainly analyze the adverse effects of cyber-physical attacks on the event-triggered DKF and propose an information-theoretic attack detection and mitigation mechanism. Note that the presented attack analysis and mitigation can be extended to other event-triggered methods, such as \cite{c14} and \cite{c16}, as well.}
\vspace{-0.3cm}
\subsection{Attack Modeling}
In this subsection, we model the effects of attacks on the event-triggered DKF. An attacker can design a false data injection attack to affect the triggering mechanism presented in (\ref{eq3x}) and consequently compromise the system behavior.
\smallskip
\noindent
\textbf{Definition 2. (Compromised and intact sensor node).} We call a sensor node that is directly under attack as a compromised sensor node. A sensor node is called intact if it is not compromised. Throughout the paper, ${\rm {\mathcal V}}^{c}$ and ${\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c}$ denote, respectively, the set of compromised and intact sensor nodes.
\smallskip
Consider the sensing model \eqref{ZEqnNum687942} for sensor node $i$ under the effect of the attack as
\begin{equation} \label{ZEqnNum973066}
y_{i}^{a} (k)=y_{i} (k)+f_{i} (k)=C_{i} x(k)\, +\, v_{i} (k)+f_{i} (k),
\end{equation}
where $y_{i} (k)$ and $y_{i}^{a}(k)$ are, respectively, the sensor $i$'$s$ actual and corrupted measurements, and $f_{i} (k)\in {\bf {\mathbb R}}^{p}$ represents the adversarial input on sensor node $i.$ For a compromised sensor node $i,$ let $p'\subseteq p$ denote the subset of measurements disrupted by the attacker.\par
Let the false data injection attack $\bar{f}_{j}(k)$ on the communication link be given by
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum397788}
\bar{x}_{j}^{a} (k)=\bar{x}_{j} (k)+\bar{f}_{j} (k),\, \, \, \forall j\in N_{i} .
\end{equation}
Using \eqref{ZEqnNum973066}-\eqref{ZEqnNum397788}, in the presence of an attack on sensor node $i$ and/or its neighbors, its state estimate equations in \eqref{ZEqnNum727229}-\eqref{ZEqnNum838493} become
\begin{equation} \label{ZEqnNum120276}
\left\{\begin{array}{l} {\hat{x}_{i}^{a} (k)=\bar{x}_{i}^{a} (k)+K_{i}^{a} (k)(y_{i} (k)-C_{i} \bar{x}_{i}^{a} (k))} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \, \, \, \, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j} (k)-\tilde{x}_{i}^{a} (k) )+f_{i}^{a} (k),} \\ {\bar{x}_{i}^{a} (k+1)=A\hat{x}_{i}^{a} (k),} \\ {\tilde{x}_{i}^{a} (k)=\zeta _{i} (k)\bar{x}_{i}^{a} (k)+(1-\zeta _{i} (k))A\tilde{x}_{i}^{a} (k-1),} \end{array}\right.
\end{equation}
where\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum499212}
f_{i}^{a} (k)=K_{i}^{a} (k)f_{i} (k)+\gamma _{i} \sum _{j\in N_{i} }\tilde{f}_{j} (k) ,
\end{equation}
with
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum429253} \nonumber
\tilde{f}_{j} (k)=\zeta _{j} (k)\bar{f}_{j} (k)+(1-\zeta _{j} (k))\tilde{f}_{j} (k-1).
\end{equation}
The Kalman gain $K_{i}^{a} (k)$ in presence of attack is given by
\begin{equation} \label{ZEqnNum654467}
K_{i}^{a} (k)=\bar{P}_{i}^{a} (k)C_{i}^{T} (R_{i} (k)+C_{i} \bar{P}_{i}^{a} (k)C_{i}^{T} )^{-1} .
\end{equation}
The first term in \eqref{ZEqnNum499212} represents the direct attack on sensor node $i$, and the second term denotes the aggregated effect of adversarial inputs on the neighboring sensors, i.e., $j\in N_{i}$. Moreover, $\hat{x}_{i}^{a}(k),\, \, \bar{x}_{i}^{a} (k),$ and $\tilde{x}_{i}^{a}(k)$ denote, respectively, the corrupted posterior, prior, and predictive state estimates. The Kalman gain $K_{i}^{a}(k)$ depends on the following corrupted prior state estimation error covariance
\begin{equation} \label{ZEqnNum384197}
\bar{P}_{i}^{a} (k+1)=A\hat{P}_{i}^{a} (k)A^{T} +Q,
\end{equation}
where the evolution of the corrupted posterior state estimation error covariance $\hat{P}_{i}^{a} (k)$ is given in the following theorem.
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with compromised sensor model \eqref{ZEqnNum973066}. Let the state estimation equation be given by \eqref{ZEqnNum120276} in the presence of attacks modeled by $f_{i}^{a}(k)$ in \eqref{ZEqnNum499212}. Then, the corrupted posterior state estimation error covariance $\hat{P}_{i}^{a}(k)$ is given by
\begin{equation} \label{ZEqnNum998129}
\begin{array}{l} \hat{P}_{i}^{a} (k)=M_{i}^{a} (k)\bar{P}_{i}^{a} (k)(M_{i}^{a} (k))^{T} +K_{i}^{a} (k)[R_{i} (k)+\Sigma _{i}^{f} (k)](K_{i}^{a} (k))^{T} \\ \qquad +2\gamma _{i} \sum _{j\in N_{i} }(\stackrel{\frown}{P}_{i,j}^{a} (k) -\stackrel{\frown}{P}_{i}^{a} (k))(M_{i}^{a} (k))^{T} -2K_{i}^{a} (k)\Xi _{f} (k) \\ \qquad +\gamma _{i}^{2} \sum _{j\in N_{i} }(\tilde{P}_{j}^{a} (k) -2\tilde{P}_{i,j}^{a} (k)+\tilde{P}_{i}^{a} (k)), \end{array}
\end{equation}
where $\Sigma _{i}^{f}(k)$ and $\Xi _{f} (k)$ denote the attacker's input dependent covariance matrices and $M_{i}^{a} =(I_{n} -K_{i}^{a} (k)C_{i} )$ with $K_{i}^{a} (k)$ as the Kalman gain and $\bar{P}_{i}^{a} (k)$ as the prior state estimation error covariance update in \eqref{ZEqnNum654467} and \eqref{ZEqnNum384197}, respectively. Moreover, $\tilde{P}_{i,j}^{a} (k)$ and $\stackrel{\frown}{P}_{i,j}^{a}(k)$ are cross-correlated estimation error covariances updated according to \eqref{ZEqnNum928831}-\eqref{ZEqnNum358063}.
\end{theorem}
\begin{proof}
See Appendix A.
\end{proof}
\vspace{-0.2cm}
Note that the corrupted state estimation error covariance recursion $\hat{P}_{i}^{a} (k)$ in \eqref{ZEqnNum998129} depends on the attacker's input distribution. Since the state estimate depends on the compromised estimation error covariance $\hat{P}_{i}^{a} (k)$, the attacker can design its attack signal to blow up the estimates of the desired process state and damage the system performance.
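As a toy illustration of the attack model \eqref{ZEqnNum973066} (scalar state, a single sensor, no consensus term; all numbers hypothetical), a constant injection $f_i$ shifts the posterior estimate by exactly the $K_i^a f_i$ term appearing in \eqref{ZEqnNum499212}:

```python
import numpy as np

C = np.array([[1.0]]); R = np.array([[0.1]])

def posterior(x_bar, P_bar, y):
    # Standard measurement update of (14a) without the consensus term.
    K = P_bar @ C.T @ np.linalg.inv(R + C @ P_bar @ C.T)
    return x_bar + K @ (y - C @ x_bar), K

x_bar = np.array([0.0]); P_bar = np.array([[1.0]])
y = np.array([0.2])                  # clean measurement
f = np.array([5.0])                  # attacker's injection, eq. (15)
x_hat, K = posterior(x_bar, P_bar, y)
x_hat_a, _ = posterior(x_bar, P_bar, y + f)
bias = x_hat_a - x_hat               # equals K @ f, the K_i^a f_i term in (18)
print(bias)
```

The linearity of the update makes the injected bias pass through the gain untouched, which is why the covariance recursion above inherits the attack-dependent terms $\Sigma_i^f$ and $\Xi_f$.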
\vspace{-0.2cm}
\section{ Effect of Attack on Triggering Mechanism}
This section presents the effects of cyber-physical attacks on the event-triggered DKF. We show that although event-triggered approaches are energy-efficient, they are prone to triggering misbehaviors, which can harm the network connectivity and observability and drain its limited resources.
\vspace{-0.35cm}
\subsection{ Non-triggering Misbehavior}
In this subsection, we show how an attacker can manipulate the sensor measurement to mislead the event-triggered mechanism and damage network connectivity and collective observability by causing \textit{non-triggering misbehavior} as defined in the following Definition 3.
\smallskip
\noindent
\textbf{Definition 3 }(\textbf{Non-triggering Misbehavior).} The attacker designs an attack strategy such that a compromised sensor node does not transmit any information to its neighbors by misleading the triggering mechanism in (\ref{eq3x}), even if the actual performance deviates from the desired one.
The following theorem shows how a false data injection attack, followed by an eavesdropping attack, can manipulate the sensor reading to prevent the event-triggered mechanism (\ref{eq3x}) from being violated, while the actual performance could be far from the desired one. To this end, we first define a vertex cut of the graph as follows.
\smallskip
\noindent
\textbf{Definition 4 (Vertex cut).} A set of nodes ${\rm {\mathcal C}}\subset {\rm {\mathcal V}}$ is a vertex cut of a graph ${\rm {\mathcal G}}$ if removing the nodes in the set ${\rm {\mathcal C}}$ results in disconnected graph clusters.
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with $N$ sensor nodes \eqref{ZEqnNum687942} communicating over the graph ${\rm {\mathcal G}}$. Let sensor $i$ be under a false data injection attack given by
\begin{equation} \label{ZEqnNum705143}
y_{i}^{a} (k)=y_{i} (k)+\theta _{i}^{a} (k)1_{p} ,\, \, \, \, \forall k\ge L+1,
\end{equation}
where $y_{i}(k)$ is the actual sensor measurement at time instant $k$ and $L$ denotes the last triggering time instant. Moreover, $\theta _{i}^{a}(k)\sim {\rm {\mathcal U}}(a(k),b(k))\, $ is a scalar uniformly distributed random variable in the interval $(a(k),b(k))$ with
\begin{equation} \label{ZEqnNum165624}
\left\{\begin{array}{l} {a(k)=\varphi -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\left\| y_{i} (k)\right\|, } \\ {b(k)=\varphi +\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\left\| y_{i} (k)\right\|, } \end{array}\right.
\end{equation}
where $\tilde{x}_{i} (k)$ and $\varphi <\alpha $ denote, respectively, the predictive state estimate and an arbitrary scalar value less than the triggering threshold $\alpha .$ Then,
\begin{enumerate}
\item The triggering condition (\ref{eq3x}) will not be violated for the sensor node $i$ and it shows non-triggering misbehavior;
\item The original graph ${\rm {\mathcal G}}$ is clustered into several subgraphs, if all sensors in a vertex cut are under attack \eqref{ZEqnNum705143}.
\end{enumerate}
\end{theorem}
\begin{proof}
Taking norms from both sides of \eqref{ZEqnNum705143}, the corrupted sensor measurement $y_{i}^{a} (k)$ becomes
\begin{equation} \label{ZEqnNum862369}
\left\| y_{i}^{a} (k)\right\| =\left\| y_{i} (k)+\theta _{i}^{a} (k)1_{p} \right\| .
\end{equation}
Using the triangular inequality for \eqref{ZEqnNum862369} yields
\begin{equation} \label{ZEqnNum171011}
\left\| y_{i} (k)\right\| -\left\| \theta _{i}^{a} (k)1_{p} \right\| \le \left\| y_{i}^{a} (k)\right\| \le \left\| y_{i} (k)\right\| +\left\| \theta _{i}^{a} (k)1_{p} \right\| .
\end{equation}
Based on the bounds of $\theta _{i}^{a} (k)$, given by \eqref{ZEqnNum165624}, \eqref{ZEqnNum171011} becomes
\begin{equation} \label{27)} \nonumber
\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\varphi \le \left\| y_{i}^{a} (k)\right\| \le \left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\varphi ,
\end{equation}
which yields
\begin{equation} \label{ZEqnNum939032} \nonumber
(\left\| y_{i}^{a} (k)\right\| -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| -\varphi )(\left\| y_{i}^{a} (k)\right\| -\left\| C_{i} \tilde{x}_{i} (k-1)\right\| +\varphi )\le 0.
\end{equation}
This implies that the condition
\begin{equation} \label{29)} \nonumber
\, \left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| \le \varphi <\alpha ,
\end{equation}
always holds true. Therefore, under \eqref{ZEqnNum705143}-\eqref{ZEqnNum165624}, the corrupted sensor node $i$ shows non-triggering misbehavior, which proves part 1.
We now prove part 2. Let ${\rm {\mathcal A}}_{n} \subseteq {\rm {\mathcal V}}^{c}$ be the set of sensor nodes showing non-triggering misbehavior. Based on the result of part 1, under the attack signal \eqref{ZEqnNum705143}, the sensor nodes in ${\rm {\mathcal A}}_{n}$ are misled by the attacker and consequently do not transmit any information to their neighbors, which makes them act as sink nodes. Since the set ${\rm {\mathcal A}}_{n}$ is assumed to be a vertex cut, the non-triggering misbehavior of the sensor nodes in ${\rm {\mathcal A}}_{n}$ prevents information flow from one portion of the graph ${\rm {\mathcal G}}$ to another and thus clusters the original graph ${\rm {\mathcal G}}$ into subgraphs. This completes the proof.
\end{proof}
\vspace{-0.3cm}
\noindent
\textbf{Remark 3.} Note that to design the strategic false data injection attack signal in \eqref{ZEqnNum705143}, an attacker needs to eavesdrop on the actual sensor measurement $y_{i} (k)$ and the last transmitted prior state estimate $\bar{x}_{i} (L)$ through the communication channel. The attacker then determines the predictive state estimate $\tilde{x}_{i} (k)$ using the dynamics in \eqref{ZEqnNum257073} at each time instant $k\ge L+1$ to achieve non-triggering misbehavior for sensor node $i$.
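To make the mechanics of Theorem 2 concrete, the following sketch simulates a compromised node in a minimal scalar setting (the dynamics, thresholds, and function names are illustrative, not from this paper). Rather than reproducing the interval \eqref{ZEqnNum165624} verbatim, it enforces the end condition of the proof directly: the forged measurement is kept within $\varphi < \alpha$ of the predicted output, so the trigger in (\ref{eq3x}) never fires even as the true state drifts away from the attacker-replicated predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def trigger_fires(y, C, x_pred, alpha):
    """Event trigger of (eq3x): transmit iff the measurement deviates
    from the predicted output by more than alpha."""
    return np.linalg.norm(y - C @ x_pred) > alpha

def nontriggering_attack(C, x_pred, phi, p):
    """Forge a measurement within phi (< alpha) of the predicted output,
    so the trigger can never fire (Definition 3)."""
    direction = rng.standard_normal(p)
    direction /= np.linalg.norm(direction)
    radius = rng.uniform(0.0, phi)
    return C @ x_pred + radius * direction

# Illustrative scalar process; the attacker replicates the predictor.
A, C, alpha, phi = 0.9, np.eye(1), 0.5, 0.3
x_true, x_pred = np.array([10.0]), np.array([10.0])
fired = []
for k in range(50):
    x_true = A * x_true + 0.5      # true process drifts from the model
    x_pred = A * x_pred            # eavesdropped predictive estimate
    y_forged = nontriggering_attack(C, x_pred, phi, 1)
    fired.append(trigger_fires(y_forged, C, x_pred, alpha))

print(any(fired))  # False: the compromised node never transmits
```

By the final step the honest measurement would deviate from the prediction by far more than $\alpha$, yet the node stays silent, acting as the sink node described in the proof of part 2.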
Example 1 further illustrates the results of Theorem 2.
\vspace{-0.0cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=2.68in,height=2.8in]{DKF_SMC.jpg}
\vspace{-2pt}\caption{Non-triggering misbehavior of sensor nodes $\{$5,6$\}$ clusters the graph ${\rm {\mathcal G}}$ into the two isolated subgraphs ${\rm {\mathcal G}}_{1} $ and ${\rm {\mathcal G}}_{2}.$}\label{fig1}
\captionsetup{justification=centering}
\end{center}
\end{figure}
\vspace{-0.15cm}
\noindent
\textbf{Example 1.} Consider the graph topology for a distributed sensor network given in Fig. 1. Let the vertex cut ${\rm {\mathcal A}}_{n} =\{ 5,6\}$ be under the false data injection attack presented in Theorem $2$ and thus show non-triggering misbehavior. Then, the sensor nodes in ${\rm {\mathcal A}}_{n}$ do not transmit any information to their neighbors; they act as sink nodes and prevent information flow from subgraph ${\rm {\mathcal G}}_{1}$ to subgraph ${\rm {\mathcal G}}_{2}$, which clusters the graph ${\rm {\mathcal G}}$ into the two non-interacting subgraphs ${\rm {\mathcal G}}_{1}$ and ${\rm {\mathcal G}}_{2}$ shown in Fig. 1. This example shows that the attacker can compromise a vertex cut ${\rm {\mathcal A}}_{n}$ of the original graph ${\rm {\mathcal G}}$ so that it shows non-triggering misbehavior, harming the network connectivity and clustering the graph into non-interacting subgraphs.
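The clustering effect in Example 1 can be checked programmatically. The sketch below builds a hypothetical 8-node topology in which $\{5,6\}$ is a vertex cut (the adjacency list is illustrative; the exact topology of Fig. 1 is not reproduced here) and counts connected components via breadth-first search once the attacked nodes stop relaying information.

```python
from collections import deque

def components_after_removal(adj, removed):
    """Count connected components of the graph once the nodes in
    `removed` stop relaying information (non-triggering misbehavior)."""
    alive = set(adj) - set(removed)
    seen, parts = set(), 0
    for s in alive:
        if s in seen:
            continue
        parts += 1
        q = deque([s]); seen.add(s)
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v); q.append(v)
    return parts

# Illustrative topology: {1..4} and {7,8} communicate only via {5,6}.
adj = {
    1: [2, 3], 2: [1, 4], 3: [1, 4, 5], 4: [2, 3, 6],
    5: [3, 6, 7], 6: [4, 5, 8], 7: [5, 8], 8: [6, 7],
}
print(components_after_removal(adj, set()))   # 1: connected
print(components_after_removal(adj, {5, 6}))  # 2: clustered
```

Removing the vertex cut splits the network into two non-interacting clusters, exactly the situation depicted in Fig. 1.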
We now analyze the effect of non-triggering misbehavior on the collective observability of the sensor network. To this end, the following definitions are needed.
\smallskip
\noindent \textbf{Definition 5 (Potential Set). } A set of nodes ${\rm {\mathcal P} \subset} {\rm {\mathcal V}}$ is said to be a potential set of the graph ${\rm {\mathcal G}}$ if the pair $(A,C_{{\rm {\mathcal V}}\backslash {\rm{\mathcal P}}} )$ is not collectively observable.
\smallskip
\noindent \textbf{Definition 6 (Minimal Potential Set).} A set of nodes ${\rm {\mathcal P} }_{m} \subset {\rm {\mathcal V}}$ is said to be a minimal potential set if ${\rm {\mathcal P} }_{m}$ is a potential set and no subset of ${\rm {\mathcal P}}_{m}$ is a potential set.
\smallskip
\noindent \textbf{Remark 4.} Note that if the attacker knows the graph structure and the local pairs $(A,C_{i} ),\, \, \, \forall i\in {\mathcal V}$, then it can identify the minimal potential set of sensor nodes ${\rm{\mathcal P}}_{m}$ in the graph ${\rm {\mathcal G}}$ and achieve non-triggering misbehavior for ${\rm {\mathcal P} }_{m}.$ Thus, the sensor nodes in ${\rm {\mathcal P}}_{m}$ do not exchange any information with their neighbors and become isolated in the graph ${\rm {\mathcal G}}$.
\smallskip
\noindent \textbf{Corollary 1.}
\textit{Let the set of sensors showing non-triggering misbehavior be a minimal potential set ${\rm {\mathcal S}}_{n}$. Then, the network is no longer collectively observable and the process state cannot be reconstructed from the distributed sensor measurements.}
\vspace{-0.1cm}
\begin{proof}
According to the statement of the corollary, ${\rm {\mathcal S}}_{n}$ is a minimal potential set of the graph ${\rm {\mathcal G}}$ that shows non-triggering misbehavior. The sensor nodes in ${\rm {\mathcal S}}_{n}$ then do not transmit any information to their neighbors and act as sink nodes, i.e., they only absorb information. Therefore, information is exchanged only among the remaining sensor nodes in the graph ${\rm {\mathcal G}}\backslash {\rm {\mathcal S}}_{n}$. Hence, after excluding the minimal potential set ${\rm {\mathcal S}}_{n}$, the pair $(A,C_{{\rm {\mathcal G}}\backslash {\rm {\mathcal S}}_{n} } )$ becomes unobservable by Definitions $5$ and $6$, which makes the state reconstruction impossible. This completes the proof.
\end{proof}
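The role of a potential set (Definitions 5 and 6) can be illustrated with a standard observability rank test. In the hypothetical three-sensor example below, removing sensor 1 leaves only measurements of the second state, so $\{1\}$ is a potential set; the matrices and node labels are illustrative, not taken from this paper.

```python
import numpy as np

def collectively_observable(A, C_blocks):
    """Rank test on the observability matrix of (A, stacked C)."""
    C = np.vstack(C_blocks)
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    return np.linalg.matrix_rank(O) == n

# Illustrative network: sensor 1 sees x1, sensors 2 and 3 see x2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = {1: np.array([[1.0, 0.0]]),
     2: np.array([[0.0, 1.0]]),
     3: np.array([[0.0, 1.0]])}

def observable_without(removed):
    """Collective observability of the pair (A, C_{V \\ removed})."""
    return collectively_observable(A, [C[i] for i in C if i not in removed])

print(observable_without(set()))  # True: full network observable
print(observable_without({1}))    # False: {1} is a potential set
print(observable_without({2}))    # True: {2} is not a potential set
```

If the attacker silences exactly the potential set $\{1\}$ via non-triggering misbehavior, the remaining measurements cannot reconstruct the full state, which is the situation described in Corollary 1.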
\vspace{-0.4cm}
\subsection{Continuous-triggering Misbehavior}
In this subsection, we discuss how an attacker can compromise the actual sensor measurement to mislead the event-triggered mechanism and achieve continuous-triggering misbehavior, resulting in a time-driven DKF that not only drains the communication resources but also continuously propagates the adverse effect of the attack through the network.
\smallskip
\noindent \textbf{Definition 7} \textbf{(Continuous-triggering Misbehavior).} Let the attacker design an attack strategy such that it deceives the triggering mechanism in (\ref{eq3x}) at each time instant. This turns the event-driven DKF into a time-driven DKF that continuously exchanges corrupted information among sensor nodes. We call this a continuous-triggering misbehavior.
We now show how a replay attack, combined with an eavesdropping attack, can manipulate the sensor reading to cause continuous violation of the event-triggered mechanism (\ref{eq3x}).
\vspace{-0.1cm}
\begin{theorem}
Consider the process dynamics \eqref{ZEqnNum820040} with $N$ sensor nodes \eqref{ZEqnNum687942} communicating over the graph ${\rm {\mathcal G}}.$ Let the sensor node $i$ in \eqref{ZEqnNum687942} be under a replay attack given by\newline
\vspace{-0.35cm}
\begin{equation} \label{ZEqnNum253008}
y_{i}^{a} (k)=C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k),\, \, \forall k\ge l+1,
\end{equation}
\vspace{-0.15cm}
\noindent
where $\bar{x}_{i}(k-1)$ represents the last transmitted prior state estimate, $\upsilon_{i} (k)$ denotes a disruption signal, and $l$ denotes the last triggering instant at which an intact prior state estimate was transmitted. Then, the sensor node $i$ shows continuous-triggering misbehavior if the attacker selects $\left\| \upsilon _{i} (k)\right\| >\alpha.$
\end{theorem}
\begin{proof}
To mislead a sensor to cause a continuous-triggering misbehavior, the attacker needs to design the attack signal such that the event-triggered condition (\ref{eq3x}) is constantly being violated, i.e., $\, \left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| \ge \alpha $ all the time. The attacker can eavesdrop the last transmitted prior state estimate $\bar{x}_{i}(k-1)$ and design the strategic attack signal given by \eqref{ZEqnNum253008}. Then, one has
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum491202}
\begin{array}{l} {y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)=C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k)-C_{i} \tilde{x}_{i} (k-1)} \\ {=C_{i} \bar{x}_{i} (k-1)+\upsilon _{i} (k)-C_{i} [\zeta _{i} (k-1)\bar{x}_{i} (k-1)} \\ {\, \, \, \, \, +(1-\zeta _{i} (k-1))A\bar{x}_{i} (k-2)]} \\ {=(1-\zeta _{i} (k-1))C_{i} [\bar{x}_{i} (k-1)-A\bar{x}_{i} (k-2)]+\upsilon _{i} (k).} \end{array}
\end{equation}
Taking the norm from both sides of \eqref{ZEqnNum491202} yields
\begin{equation} \label{ZEqnNum734745}
\begin{array}{l} {\left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| } \\ {=\left\| (1-\zeta _{i} (k-1))C_{i} [\bar{x}_{i} (k-1)-A\bar{x}_{i} (k-2)]+\upsilon _{i} (k)\right\| ,} \end{array}
\end{equation}
Since $\zeta _{i}(l)=1$ for $k=l+1$, one has
\vspace{-0.15cm}
\begin{equation} \label{34)}
\left\| y_{i}^{a} (l+1)-C_{i} \tilde{x}_{i} (l)\right\| =\left\| \upsilon _{i} (l+1)\right\| ,
\end{equation}
If the attacker selects $\upsilon _{i}(l+1)$ in \eqref{34)} such that $\left\| \upsilon _{i} (l+1)\right\| >\alpha $, then the attack signal \eqref{ZEqnNum253008} ensures triggering at time instant $k=l+1.$ Then, based on a similar argument applied to \eqref{ZEqnNum734745}, $\forall k\ge l+1$,
\vspace{-0.15cm}
\begin{equation} \label{35)} \nonumber
\left\| y_{i}^{a} (k)-C_{i} \tilde{x}_{i} (k-1)\right\| =\left\| \upsilon _{i} (k)\right\| >\alpha ,
\end{equation}
which ensures continuous triggering misbehavior. This completes the proof.
\end{proof}
\vspace{-0.25cm}
To achieve continuous-triggering misbehavior, the attacker needs to eavesdrop on the prior state estimate $\bar{x}_{i} (k-1)$ at each triggering instant and select $\upsilon _{i}(k)$ large enough that $\left\| \upsilon _{i} (k)\right\| >\alpha $ always holds.
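A minimal simulation of the replay attack \eqref{ZEqnNum253008} is sketched below (scalar dynamics and parameter values are illustrative). Because the forged measurement equals the eavesdropped $C_i\bar{x}_i(k-1)$ plus a disruption of norm $\alpha + 0.1 > \alpha$, the residual seen by the trigger equals the disruption itself, so the event fires at every single step.

```python
import numpy as np

def replay_attack(C, x_bar_last, alpha, p, margin=0.1):
    """Replay the last transmitted prior estimate plus a disruption
    whose norm exceeds alpha, forcing the trigger to fire (Theorem 3)."""
    upsilon = (alpha + margin) * np.ones(p) / np.sqrt(p)  # ||upsilon|| = alpha + margin
    return C @ x_bar_last + upsilon

# Illustrative scalar node under attack for 20 steps.
A, C, alpha, p = 0.9, np.eye(1), 0.5, 1
x_bar = np.array([1.0])              # last transmitted prior estimate
fires = []
for k in range(20):
    x_tilde = x_bar                  # after a trigger, prediction = x_bar
    y_forged = replay_attack(C, x_bar, alpha, p)
    fires.append(np.linalg.norm(y_forged - C @ x_tilde) > alpha)
    x_bar = A * x_bar                # node re-transmits; attacker re-eavesdrops

print(all(fires))  # True: the event-driven filter degenerates to time-driven
```

Every step triggers a transmission of corrupted information, which is precisely the continuous-triggering misbehavior of Definition 7.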
Note that continuous-triggering misbehavior can completely ruin the advantage of event-triggered mechanisms and turn them into time-driven mechanisms, which significantly increases the communication burden. Since nodes in WSNs are usually powered by batteries with limited energy, the attacker can drain the sensors' limited resources by designing the above-discussed attack signals to achieve continuous-triggering misbehavior, rendering the sensors inoperative and deteriorating the network performance.
Note that although we classified attacks into non-triggering and continuous-triggering misbehavior to analyze how the attacker can leverage the event-triggered mechanism, the following \textit{analysis, detection and mitigation approaches} are not restricted to either class of attacks.
\vspace{-0.3cm}
\section{ Attack Detection}
In this section, we present an entropy estimation-based attack detection approach for the event-triggered DKF.
The KL divergence is a non-negative measure of the relative entropy between two probability distributions which is defined as follows.
\noindent \textbf{Definition 8 (KL Divergence) \cite{c24}}. Let $X$ and $Z$ be two random variables with probability density function $P_{X}$ and $P_{Z}$, respectively. The KL divergence measure between $P_{X}$ and $P_{Z}$ is defined as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum937457}
D_{KL} (P_{X} ||P_{Z} )=\int _{\theta \in \Theta }P_{X} (\theta )\log \left(\frac{P_{X} (\theta )}{P_{Z} (\theta )} \right)d\theta ,
\end{equation}
with the following properties \cite{c32}
\begin{enumerate}
\item $D_{KL} (P_{X} ||P_{Z} )\ge 0;$
\item $D_{KL} (P_{X} ||P_{Z} )=0$ if and only if $P_{X} =P_{Z} ;$
\item $D_{KL} (P_{X} ||P_{Z} )\ne D_{KL} (P_{Z} ||P_{X} ).$
\end{enumerate}
In the existing resilient literature, entropy-based anomaly detectors need to know the probability density functions of the sequences, i.e., $P_{X}$ and $P_{Z}$ {in \eqref{ZEqnNum937457}}, to determine the relative entropy. In most cases, authors assume that the probability density function of the corrupted innovation sequence remains Gaussian (see \cite{c24} and \cite{c34} for instance). Since the attacker's input signal is unknown, it is restrictive to assume that the probability density function of the corrupted sequence remains Gaussian. To relax this \textit{restrictive assumption} on the probability density function of the corrupted sequence, we estimate the relative entropy between two random sequences $X$ and $Z$ using the \textit{$k-$nearest neighbor $(k-NN)$} based divergence estimator \cite{d5}.
{Let $\{ X_{1},\ldots,X_{n_{1} } \} $ and $\{ Z_{1} ,\ldots ,Z_{n_{2} } \} $ be i.i.d. samples drawn independently from $P_{X} $ and $P_{Z},$ respectively, with $X_{j},\,\, Z_{j} \in {\bf {\mathbb R}}^{m}$. Let $d_{k}^{X}(i)$ be the Euclidean distance between $X_{i}$ and its \textit{$k-NN$} in $\{ X_{l} \} _{l\ne i} .$ The \textit{$k-NN$} of a sample $s$ in $\{ s_{1} ,\ldots ,s_{n} \} $ is $s_{i(k)}$, where $i(1),\ldots,i(n)$ is the ordering of indices such that
\vspace{-0.2cm}
\begin{equation} \label{37)} \nonumber
\left\| s-s_{i(1)} \right\| \le \left\| s-s_{i(2)} \right\| \le \ldots \le \left\| s-s_{i(n)} \right\| .
\end{equation}
More specifically, the Euclidean distance $d_{k}^{X}(i)$ is given by \cite{d5a}
\begin{equation}
\nonumber
\begin{array}{l}
d_k^X(i) = \mathop {\min }\limits_{j = 1, \ldots ,{n_1},j \ne \{ i,{j_1},...,{j_{k - 1}}\} } \left\| {{X_i} - {X_j}} \right\|
\end{array}
\end{equation}}
The \textit{$k-NN$} based relative entropy estimator is given by \cite{d5}
\begin{equation} \label{ZEqnNum207466}
\hat{D}_{KL} (P_{X} ||P_{Z} )=\frac{m}{n_{1} } \sum _{i=1}^{n_{1} }\log \frac{d_{k}^{Z} (i)}{d_{k}^{X} (i)} +\log \frac{n_{2} }{n_{1} -1} .
\end{equation}
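A direct implementation of the estimator \eqref{ZEqnNum207466} is sketched below (brute-force pairwise distances; variable names are ours). It returns a near-zero divergence for two sample sets drawn from the same Gaussian and a large one for a shifted Gaussian, without assuming any parametric form for either density.

```python
import numpy as np

def knn_kl_estimate(X, Z, k=1):
    """k-NN divergence estimator of D(P_X || P_Z):
    (m/n1) * sum_i log(d_k^Z(i) / d_k^X(i)) + log(n2/(n1-1)),
    where d_k^X(i) is the distance from X_i to its k-th nearest
    neighbor in X \\ {X_i}, and d_k^Z(i) its k-th NN distance in Z."""
    X, Z = np.atleast_2d(X), np.atleast_2d(Z)
    n1, m = X.shape
    n2 = Z.shape[0]
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dX, np.inf)          # exclude X_i itself
    dZ = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=-1)
    dkX = np.sort(dX, axis=1)[:, k - 1]
    dkZ = np.sort(dZ, axis=1)[:, k - 1]
    return float((m / n1) * np.sum(np.log(dkZ / dkX)) + np.log(n2 / (n1 - 1)))

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=(500, 1))   # stands in for {r_i}
same    = rng.normal(0.0, 1.0, size=(500, 1))   # intact sequence
shifted = rng.normal(4.0, 1.0, size=(500, 1))   # corrupted sequence

same_est = knn_kl_estimate(same, nominal)       # small
shifted_est = knn_kl_estimate(shifted, nominal) # large (true KL is 8 nats)
print(round(same_est, 2), round(shifted_est, 2))
```

The gap between the two estimates is what the windowed statistics $\Phi_i(k)$ and $\Psi_{i,j}(k)$ below exploit for detection.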
The innovation sequences represent the deviation of the actual output of the system from the estimated one. It is known that innovation sequences approach steady state quickly, and thus it is reasonable to design innovation-based anomaly detectors to capture system abnormality \cite{c24}. Using the innovation sequence of each sensor and the innovation sequences that it estimates for its neighbors, we present an innovation-based divergence estimator and design detectors to capture the effect of attacks on the event-triggered DKF.
Based on innovation expression \eqref{ZEqnNum276515}, in the presence of attack, one can write the compromised innovation $r_{i}^{a} (k)$ for sensor node $i$ with disrupted measurement $y_{i}^{a} (k)$ in \eqref{ZEqnNum973066} and state estimation $\bar{x}_{i}^{a} \, $ based on \eqref{ZEqnNum120276} as
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum255093}
r_{i}^{a} (k)=y_{i}^{a} (k)-C_{i} \bar{x}_{i}^{a} (k).
\end{equation}
Let $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ be i.i.d. \textit{p}-dimensional samples of the corrupted and nominal innovation sequences with probability density functions $P_{r_{i}^{a} } $ and $P_{r_{i} },$ respectively. The nominal innovation sequence follows $r_{i}(k)$ defined in \eqref{ZEqnNum276515}. Using the \textit{$k-NN$} based relative entropy estimator \eqref{ZEqnNum207466}, one has \cite{d5}
\begin{equation} \label{ZEqnNum280433}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log \frac{d_{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} +\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Define the average of the estimated KL divergence over a time window of $T$ as
\begin{equation} \label{ZEqnNum738078}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } ) ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Now, in the following theorem, it is shown that the effect of attacks on the sensors can be captured using \eqref{ZEqnNum738078}.
\begin{theorem}
Consider the distributed sensor network \eqref{ZEqnNum820040}-\eqref{ZEqnNum687942} under attacks on sensors. Then,
\begin{enumerate}
\item in the absence of attack, $\Phi _{i} (k)=\log (w/(w-1)),\, \, \, \forall k;$
\item in the presence of attack, $\Phi _{i} (k)>\delta ,\, \, \forall k>l_{a},$ where $\delta $ and $l_{a}$ denote, respectively, a predefined threshold and the time instant at which the attack happens.
\end{enumerate}
\end{theorem}{}
\begin{proof}
In the absence of attack, the innovation sequences $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ coincide. Then, the Euclidean distances satisfy $d_{k}^{r_{i}^{a} } (j)=d_{k}^{r_{i} } (j),\, \, \forall j\in \{ 1,...,w\} $ and one has
\begin{equation} \label{ZEqnNum663932}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
Based on \eqref{ZEqnNum663932}, one has
\vspace{-0.15cm}
\begin{equation} \label{a43)}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\log \frac{w}{w-1} =\log \frac{w}{w-1} < \delta ,\, \, \forall i\in {\rm {\mathcal V}}.
\end{equation}
{where $\log (w/(w-1))$ in \eqref{a43)} depends on the sample size of the innovation sequence and satisfies $\log (w/(w-1))\le 0.1,\, \, \, \forall w\ge 10$. Therefore, the predefined threshold $\delta$ can be selected as some $\delta>0.1$ such that the condition in \eqref{a43)} is always satisfied.} This completes the proof of part 1.
In the presence of attack, the samples of the innovation sequences $\{ r_{i}^{a} (l),\ldots ,r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ differ, i.e., $d_{k}^{r_{i}^{a} } (j)\ne d_{k}^{r_{i} } (j),\, \, \forall j\in \{ 1,...,w\} $. More specifically, $d_{k}^{r_{i} } (j)>d_{k}^{r_{i}^{a} } (j), \, \, \forall j\in \{ 1,...,w\} $ due to the change in the corrupted innovation sequence. Therefore, based on \eqref{ZEqnNum280433}, the estimated relative entropy between the sequences becomes
\begin{equation} \label{ZEqnNum657988}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log (1+\frac{\Delta _{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} ) +\log \frac{w}{w-1} ,\, \forall i\in {\rm {\mathcal V}},
\end{equation}
with $\Delta _{k}^{r_{i} } (j)$ as the change in Euclidean distance due to corrupted innovation sequence. Based on \eqref{ZEqnNum657988}, one has
\begin{equation} \label{ZEqnNum750552}
\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{j=1}^{w}\log (1+\frac{\Delta _{k}^{r_{i} } (j)}{d_{k}^{r_{i}^{a} } (j)} ) +\log \frac{w}{w-1} \gg \log \frac{w}{w-1} .
\end{equation}
Thus, one has
\vspace{-0.2cm}
\begin{equation} \label{46)}
\Phi _{i} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )>\delta ,\, \, \forall i\in {\rm {\mathcal V}},
\end{equation}
where $T$ and $\delta $ denote the sliding window size and the predefined design threshold. This completes the proof.
\end{proof}
Based on Theorem 4, one can use the following condition for attack detection.
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum861796}
\left\{\begin{array}{l} {\Phi _{i} (k)\, <\delta :H_{0}, } \\ {\Phi _{i} (k)\, >\delta \, \, :H_{1}, } \end{array}\right.
\end{equation}
where $\delta $ denotes the designed threshold for detection, the null hypothesis $H_{0} $ represents the intact mode of sensor nodes and $H_{1}$ denotes the compromised mode of sensor nodes.
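The detection logic of \eqref{ZEqnNum738078}--\eqref{ZEqnNum861796} can be sketched as a sliding-window average over a stream of divergence estimates. In the toy run below the stream sits at the attack-free baseline $\log(w/(w-1))$ and then jumps when the attack begins at $k=30$; the values of $w$, $T$, and $\delta$ are illustrative choices, not prescribed by the paper.

```python
import math
from collections import deque

def detector(dkl_stream, T, delta):
    """Sliding-window average Phi_i(k) of KL estimates with the
    hypothesis test: flag H1 (compromised) iff Phi_i(k) > delta."""
    window, flags = deque(maxlen=T), []
    for d in dkl_stream:
        window.append(d)
        phi = sum(window) / len(window)
        flags.append(phi > delta)
    return flags

w, T, delta = 20, 5, 0.1
baseline = math.log(w / (w - 1))    # ~0.051, below delta for w >= 10
stream = [baseline] * 30 + [baseline + 2.0] * 10  # attack starts at k = 30
flags = detector(stream, T, delta)
print(flags[29], flags[34])  # False before the attack, True after it
```

Before the attack the statistic equals the baseline and stays under $\delta$ ($H_0$); once the divergence jumps, the windowed average crosses $\delta$ within a few steps and the detector declares $H_1$.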
\smallskip
\noindent \textbf{Remark 5.} {Note that in the absence of an attack, the innovation sequence has a known zero-mean Gaussian distribution due to the measurement noise. Based on prior system knowledge, one can therefore take the nominal innovation sequence to be zero-mean Gaussian with a predefined covariance. The bound on this predefined covariance can be determined during normal operation of the event-triggered DKF. This assumption of knowledge of the nominal innovation sequence for attack detection is standard in the existing literature (see \cite{c34} for instance). The designed threshold $\delta $ in \eqref{ZEqnNum861796} is a predefined parameter chosen appropriately for the detection of the attack signal. Moreover, selecting the detection threshold based on expert knowledge is standard in the existing literature; for example, several results on adversary detection and stealthiness have considered similar thresholds \cite{c24}-\cite{c26}. }
\begin{algorithm}[!ht]
\caption{Detecting attacks on sensors.}
\begin{enumerate}
\item [1:] Initialize with a time window $T$ and detection threshold $\delta$.
\item [2:] \textbf{procedure} $\forall i=1,\ldots ,N$
\item [3:] {Use samples of innovation sequences $\{ r_{i}^{a} (l),\ldots,$ \qquad $r_{i}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ based on \eqref{ZEqnNum255093} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum750552}.
\item [5:] Compute $\Phi _{i} (k)$ as \eqref{46)} and use condition in \eqref{ZEqnNum861796} to detect attacks on sensors.
\item [6:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
Based on the results presented in Theorem 4 and Algorithm 1, one can capture attacks on both sensors and communication links, but one cannot identify the specific compromised communication link as modelled in \eqref{ZEqnNum397788}. To detect the source of attacks, we present an estimated entropy-based detector to capture the effect of attacks on a specific communication channel. More specifically, the relative entropy between the innovation sequence estimated for each neighbor at a particular sensor node and the nominal innovation sequence of that sensor node is estimated using \eqref{ZEqnNum207466}.
Define the estimated innovation sequence $\zeta _{i,j}^{a}(k)$ for a neighbor $j$ under attacks on the communication channel, as seen from sensor node $i$, as
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum178443}
\zeta _{i,j}^{a} (k)=y_{i} (k)-C_{j} \tilde{x}_{j}^{a} (k),
\end{equation}
where $\tilde{x}_{j}^{a}(k)$ is the corrupted state estimate of neighbor $j$ communicated to sensor node $i$ at the last triggering instant.\par
Let $\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\}$ be i.i.d. \textit{p}-dimensional samples of the neighbor's estimated innovation at the sensor node $i$
with probability density function $P_{\zeta _{i,j}^{a} }.$ Using the \textit{$k-NN$} based relative entropy estimator \eqref{ZEqnNum207466}, one has
\begin{equation} \label{ZEqnNum691139}
\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )=\frac{p}{w} \sum _{s=1}^{w}\log \frac{d_{k}^{r_{i} } (s)}{d_{k}^{\zeta _{i,j}^{a} } (s)} +\log \frac{w}{w-1} ,\, \, \forall i\in {\rm {\mathcal V}},j\in N_{i} .
\end{equation}
Note that in the presence of attacks on the communication channels, the neighbor's actual innovation differs from the neighbor's estimated innovation at sensor $i$. {In the absence of attack, the mean values of all the sensor state estimates converge to the mean of the desired process state at steady state, and, therefore, the innovation sequences $r_{i}$ and $\zeta _{i,j}^{a}$ have the same zero-mean Gaussian distributions. In the presence of attack, however, as shown in Theorem 5 and Algorithm 2, their distributions diverge.}
Define the average of the KL divergence over a time window of $T$ as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum932962}
\Psi _{i,j} (k)=\frac{1}{T} \sum _{l=k-T+1}^{k}\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } ) ,\, \, \forall i\in {\rm {\mathcal V}},\, j\in N_{i} .
\end{equation}
\begin{theorem}
Consider the distributed sensor network \eqref{ZEqnNum820040}-\eqref{ZEqnNum687942} under attack on communication links \eqref{ZEqnNum397788}. Then, in the presence of an attack, $\Psi _{i,j} (k)>\delta ,\, \, \forall k$ where $\delta $ denotes a predefined threshold.
\end{theorem}
\begin{proof}
The result follows a similar argument as given in the proof of part $2$ of Theorem 4.
\end{proof}
\begin{algorithm}[!ht]
\caption{Detecting attack on a specific communication link.}
\begin{enumerate}
\item [1:] Initialize with a time window $T$ and detection threshold $\delta.$
\item [2:] \textbf{procedure} $\forall i=1,\ldots ,N$
\item [3:] {For each sensor node $j\in N_{i} $, use samples of innovation sequences$\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\} $ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\}$ based on \eqref{ZEqnNum178443} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum691139}.
\item [5:] Compute $\Psi _{i,j}(k)$ as \eqref{ZEqnNum932962} and use same argument in \eqref{ZEqnNum861796} to detect attacks on specific communication link.
\item [6:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
\vspace{-0.4cm}
\section{ Secure Distributed Estimation Mechanism}
This section presents a meta-Bayesian approach for secure event-triggered DKF that incorporates the outcome of the attack detection mechanism to perform second-order inference and consequently form beliefs over beliefs. That is, the second-order inference forms confidence and trust about the truthfulness or legitimacy of the sensor's own state estimate (i.e., the posterior belief of the first-order Bayesian inference) and of its neighbors' state estimates, respectively. Each sensor communicates its confidence to its neighbors. Sensors then incorporate their neighbors' confidence and their own trust about their neighbors into their posterior update laws to successfully discard corrupted information.
\vspace{-0.4cm}
\noindent
\subsection{Confidence of sensor nodes}
The second-order inference forms a confidence value for each sensor node, which determines the level of trustworthiness of the sensor about its own measurement and state estimate (i.e., the posterior belief of the first-order Bayesian inference). If a sensor node is compromised, the presented attack detector detects the adversary; the node then reduces the level of trustworthiness of its own understanding of the environment and communicates this confidence to its neighbors to inform them of the significance of its outgoing information and thus slow down the attack propagation.
To determine the confidence of the sensor node $i$, based on the divergence $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ from Theorem 4, we first define
\vspace{-0.15cm}
\begin{equation} \label{ZEqnNum125869}
\chi _{i} (k)=\frac{\Upsilon _{1} }{\Upsilon _{1} +\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )} ,
\end{equation}
where $0<\Upsilon _{1} <1$ is a predefined threshold to account for channel fading and other uncertainties. Then, in the following lemma, we formally present the results for the confidence of sensor node $i$.
\noindent \textbf{Lemma 1.} \textit{Let $\beta _{i} (k)$ be the confidence of the sensor node $i$ which is updated using
\begin{equation} \label{ZEqnNum359584}
\beta _{i} (k)=\sum _{l=0}^{k-1}(\kappa _{1} )^{k-l+1} \chi _{i} (l),
\end{equation}
where $\chi _{i}(k)$ is defined in \eqref{ZEqnNum125869}, and $0<\kappa _{1}<1$ is a discount factor. Then, $\beta _{i}(k)\in (0,1]$ and
\begin{enumerate}
\item $\beta _{i} (k)\to 0,\, \, \, \forall i\in {\rm {\mathcal V}}^{c} ;$
\item $\beta _{i} (k)\to 1,\, \, \, \forall i\in {\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c} .$
\end{enumerate}}
\begin{proof}
Based on the expression \eqref{ZEqnNum125869}, since $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\ge 0$, one has $\chi _{i} (k)\in (0,1]$. Then, using \eqref{ZEqnNum359584}, one can infer that $\beta _{i} (k)\in (0,1]$.
Now, according to Theorem 4, if the sensor node $i$ is under attack, then $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\gg \Upsilon _{1} $ in \eqref{ZEqnNum125869}, which makes $\chi _{i}(k)$ close to zero. Then, based on \eqref{ZEqnNum359584} with the discount factor $0<\kappa _{1} <1,$ the confidence $\beta _{i}(k)$ approaches zero, and thus the $i^{th} $ sensor's belief about the trustworthiness of its own information is low. This completes the proof of part 1.
On the other hand, based on Theorem 4, in the absence of attacks, $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )\to 0$ as $w\to \infty $, which makes $\chi _{i} (k)$ close to one and, consequently, $\beta _{i} (k)$ becomes close to one. This indicates that the $i^{th}$ sensor node is confident about its own state estimate. This completes the proof of part 2.
\end{proof}
\vspace{-0.1cm}
Note that the expression for the confidence of sensor node $i$ in \eqref{ZEqnNum359584} can be implemented using the following difference equation
\vspace{-0.3cm}
\begin{equation} \label{53)} \nonumber
\beta _{i} (k+1)=\kappa _{1} \beta _{i} (k)+(\kappa _{1} )^{2} \chi _{i} (k).
\end{equation}
Note also that the discount factor in \eqref{ZEqnNum359584} determines how much the current experience is valued relative to past experiences. It also guarantees that if the attack is not persistent and disappears after a while, or if the anomaly is caused by a short-lived non-malicious event (such as a packet dropout) rather than an attack, the belief recovers, since it depends mainly on the current circumstances.
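As a concrete illustration, the discounted confidence update of Lemma 1 can be written recursively (a form equivalent to the sum in \eqref{ZEqnNum359584}); the sketch below uses illustrative values $\kappa _{1} =\Upsilon _{1} =0.5$ and hypothetical divergence values, not ones taken from the paper's simulation:

```python
KAPPA, UPSILON = 0.5, 0.5   # discount factor and threshold (illustrative values)

def chi(d_kl, upsilon=UPSILON):
    # Instantaneous indicator: ~1 for small estimated KL divergence, ~0 for large.
    return upsilon / (upsilon + d_kl)

def update_confidence(beta, d_kl, kappa=KAPPA, upsilon=UPSILON):
    # Recursive form of the discounted sum beta_i(k) = sum_l kappa^(k-l+1) chi_i(l).
    return kappa * beta + kappa**2 * chi(d_kl, upsilon)

# Attack-free sensor: divergence stays near zero -> confidence stays high.
beta_ok = 0.0
for _ in range(50):
    beta_ok = update_confidence(beta_ok, d_kl=0.01)

# Compromised sensor: persistently large divergence -> confidence decays toward zero.
beta_bad = 0.0
for _ in range(50):
    beta_bad = update_confidence(beta_bad, d_kl=50.0)
```

Because of the discounting, a transient spike in the divergence (e.g., a packet dropout) only depresses the confidence temporarily; once the divergence returns to zero, the recursion recovers.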
\vspace{-0.35cm}
\noindent
\subsection{Trust of sensor nodes about their incoming information}
Similar to the previous subsection, the second-order inference forms the trust of each sensor node, representing its level of trust in its neighboring sensors' state estimates. Trust determines the usefulness of the neighboring information in the state estimation of sensor node $i$.
The trust of the sensor node $i$ on its neighboring sensor $j$ can be determined based on the divergence $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} })$ in \eqref{ZEqnNum178443} from Theorem 5, from which we define
\begin{equation} \label{ZEqnNum846884}
\theta _{i,j} (k)=\frac{\Lambda _{1} }{\Lambda _{1} +\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )} ,
\end{equation}
where $0<\Lambda _{1} <1$ is a predefined threshold that accounts for channel fading and other uncertainties. Then, in the following lemma, we formally present the results for the trust of the sensor node $i$ on its neighboring sensor $j.$
\smallskip
\noindent \textbf{Lemma 2.} \textit{Let $\sigma _{i,j}(k)$ be the trust of the sensor node $i$ on its neighboring sensor $j$ which is updated using
\begin{equation} \label{ZEqnNum805360}
\sigma _{i,j} (k)=\sum _{l=0}^{k-1}(\kappa _{2} )^{k-l+1} \theta _{i,j} (l),
\end{equation}
where $\theta _{i,j}(k)$ is defined in \eqref{ZEqnNum846884}, and $0<\kappa _{2} <1$ is a discount factor. Then, $\sigma _{i,j}(k)\in (0,1]$ and
\begin{enumerate}
\item $\sigma _{i,j} (k)\to 0,\, \, \, \forall j\in {\rm {\mathcal V}}^{c} \cap N_{i} ;$
\item $\sigma _{i,j} (k)\to 1,\, \, \, \forall j\in {\rm {\mathcal V}}\backslash {\rm {\mathcal V}}^{c} \cap N_{i} .$
\end{enumerate}}
\begin{proof}
The result follows a similar argument as given in the proof of Lemma 1.
\end{proof}
\vspace{-0.2cm}
Note that the trust of sensor node $i$ in \eqref{ZEqnNum805360} can be implemented using the following difference equation
\vspace{-0.2cm}
\begin{equation} \label{56)} \nonumber
\sigma _{i,j} (k+1)=\kappa _{2} \sigma _{i,j} (k)+(\kappa _{2} )^{2} \theta _{i,j} (k).
\end{equation}
Using the presented notion of trust, one can identify attacks on the communication channels and discard the contribution of compromised information to the state estimation.
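To illustrate how the products $\sigma _{i,j} (k)\beta _{j} (k)$ act as weights that discard compromised channels, the following sketch (with hypothetical trust and confidence values and an illustrative tolerance) keeps only neighbors whose weighted contribution is non-negligible:

```python
def trusted_neighbors(trust, beta, tol=0.1):
    # Keep neighbors j whose channel trust sigma_{i,j} times reported
    # confidence beta_j exceeds a tolerance; only these enter the
    # consensus term of the resilient estimator.
    return [j for j in trust if trust[j] * beta[j] > tol]

trust = {1: 0.95, 3: 0.02, 4: 0.90}   # hypothetical sigma_{i,j} for neighbors of node i
beta  = {1: 0.97, 3: 0.88, 4: 0.05}   # hypothetical confidence beta_j of each neighbor
keep  = trusted_neighbors(trust, beta)
```

Here neighbor 3 is dropped because its channel is untrusted, and neighbor 4 because it reports low confidence in its own estimate.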
\vspace{-0.35cm}
\subsection{Attack mitigation mechanism using confidence and trust of sensors}
This subsection incorporates the confidence and trust of sensors to design a resilient event-triggered DKF. To this end, using the presented confidence $\beta _{i}(k)$ in \eqref{ZEqnNum359584} and trust $\sigma _{i,j}(k)$ in \eqref{ZEqnNum805360}, we design the resilient form of the event-triggered DKF as
\begin{equation} \label{ZEqnNum565391}
\begin{array}{l} {\hat{x}_{i} (k)=\bar{x}_{i} (k)+K_{i} (k)(\beta _{i} (k)y_{i} (k)+(1-\beta _{i} (k))C_{i} m_{i} (k)-C_{i} \bar{x}_{i} (k))} \\ {\, \, \, \, \, \,\, \, \, \, \,\, \,\, \,\, \, \, \, \, \,\, \, \, \, +\gamma _{i} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)(\tilde{x}_{j} (k)-\tilde{x}_{i} (k)),} \end{array}
\end{equation}
where the weighted neighbor's state estimate $m_{i}(k)$ is defined as
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum466700}
\begin{array}{l} {m_{i} (k)=\frac{1}{\left|N_{i} \right|} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)\tilde{x}_{j} (k) \approx x(k)+\varepsilon _{i} (k),\, \, \, } \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \qquad \forall k\, \, \, \left\| \varepsilon _{i} (k)\right\| <\tau ,} \end{array}
\end{equation}
where $\varepsilon _{i}(k)$ denotes the deviation between the weighted neighbor state estimate $m_{i} (k)$ and the actual process state $x(k)$. Note that in \eqref{ZEqnNum466700} the weighted state estimate depends on the trust values $\sigma _{i,j} (k)$ and the confidence values $\beta _{j} (k),\, \, \forall j\in N_{i}.$ Since the weighted state estimate relies only on information from intact neighbors, one has $\left\| \varepsilon _{i} (k)\right\| <\tau$ for some $\tau >0,\, \, \forall k.$ For mathematical convenience, we approximate the weighted state estimate $m_{i}(k)$ in terms of the actual process state $x(k)$, i.e., $m_{i}(k)\approx x(k)+\varepsilon _{i} (k).$ We call this a meta-Bayesian inference: it integrates the first-order inference (state estimates) with second-order estimates or beliefs (trust and confidence in the trustworthiness of those state estimates).
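A minimal sketch of one step of the resilient update \eqref{ZEqnNum565391} is given below, assuming the gain $K_{i}(k)$, the prior $\bar{x}_{i}(k)$, and the neighbors' predictive estimates are available; all numerical values and variable names are illustrative:

```python
import numpy as np

def resilient_update(x_bar, y, K, C, beta, gamma, trust, beta_nb,
                     x_tilde_nb, x_tilde_self):
    # One resilient posterior update for node i.
    # x_tilde_nb: dict mapping neighbor j -> predictive estimate x~_j(k).
    N = len(x_tilde_nb)
    # Weighted neighbor estimate m_i(k): trust- and confidence-weighted average.
    m = sum(trust[j] * beta_nb[j] * x_tilde_nb[j] for j in x_tilde_nb) / N
    # Blend the (possibly attacked) own measurement with neighbor information.
    innovation = beta * y + (1.0 - beta) * (C @ m) - C @ x_bar
    # Consensus term, weighted by trust and neighbor confidence.
    consensus = sum(trust[j] * beta_nb[j] * (x_tilde_nb[j] - x_tilde_self)
                    for j in x_tilde_nb)
    return x_bar + K @ innovation + gamma * consensus

# Illustrative 2-state example with fully trusted, fully confident nodes.
x_nb = {1: np.array([1.0, 1.0]), 2: np.array([1.0, 1.0])}
x_hat = resilient_update(x_bar=np.zeros(2), y=np.array([1.0, 1.0]),
                         K=0.5 * np.eye(2), C=np.eye(2), beta=1.0, gamma=0.1,
                         trust={1: 1.0, 2: 1.0}, beta_nb={1: 1.0, 2: 1.0},
                         x_tilde_nb=x_nb, x_tilde_self=np.array([1.0, 1.0]))
```

With $\beta _{i}=1$ the update reduces to the standard Kalman correction; as $\beta _{i}\to 0$, the own measurement is replaced by the weighted neighbor estimate $m_{i}(k)$.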
Define the prior and predictive state estimation errors as
\begin{equation} \label{ZEqnNum250987}
\begin{array}{l} {\bar{\eta }_{i} (k)=x(k)-\bar{x}_{i} (k)} \\ {\tilde{\eta }_{i} (k)=x(k)-\tilde{x}_{i} (k),} \end{array}
\end{equation}
Using the threshold in triggering mechanism (\ref{eq3x}), one has
\begin{equation} \label{ZEqnNum528573}
\begin{array}{l} {\left\| \tilde{\eta }_{i} (k)\right\| -\left\| x(k+1)-x(k)+v_{i} (k+1)\right\| \le \alpha /\left\| C_{i} \right\| ,} \\ {\left\| \tilde{\eta }_{i} (k)\right\| \le \alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}},} \end{array}
\end{equation}
where ${\rm {\mathcal B}}$ denotes the bound on $\left\| x(k+1)-x(k)+v_{i} (k+1)\right\| .$
\noindent Other notations used in the following theorem are given by
\begin{equation} \label{ZEqnNum500695}
\begin{array}{l} {\bar{\eta }(k)=[\bar{\eta }_{1} (k),\ldots ,\bar{\eta }_{N} (k)],\, \, \, M(k)=diag[M_{1} (k),\ldots ,M_{N} (k)]} \\ {\Upsilon =diag[\gamma _{1} ,\ldots ,\gamma _{N} ],\, \, \Upsilon _{m} =\left\| \max \{ \gamma _{i} \} \right\| ,\, \, \forall i \in \mathcal{V}}, \\ {\bar{\beta }=(I_{N} -diag(\beta _{i} )),\, \, \, \, E(k)=[\varepsilon _{1} (k),\ldots ,\varepsilon _{N} (k)],} \\ {\tilde{\eta }(k)=[\tilde{\eta }_{1} (k),\ldots ,\tilde{\eta }_{N} (k)].} \end{array}
\end{equation}
\noindent
\textbf{Assumption 4.} At least $({\rm {\mathcal C}}(N_{i} )/2)+1$ neighbors of the sensor node $i$ are intact.
Assumption 4 is similar to assumptions found in the secure estimation and control literature \cite{c19}, \cite{c29}. A necessary and sufficient condition for any centralized or distributed estimator to resiliently estimate the actual state is that the number of attacked sensors be less than half of the total number of sensors.
\smallskip
\noindent
\noindent \textbf{Remark 6.} {Note that the proposed notion of trust and confidence for hybrid attacks on sensor networks for the event-triggered DKF can also be seen as the weighting in the covariance fusion approach. Although covariance intersection-based Kalman consensus filters have been widely used in the literature to deal with unknown correlations in sensor networks (for instance, see \cite{c10}-\cite{d10} and \cite{c310}-\cite{c312}), most of these results considered the time-triggered distributed state estimation problem with or without adversaries. Compared with the existing results, however, a novelty of this work lies in detecting and mitigating the effect of attacks on sensors and communication channels for the event-triggered DKF and providing a rigorous mathematical analysis for different triggering misbehaviors.}
\begin{theorem}
Consider the resilient event-triggered DKF \eqref{ZEqnNum565391} with the triggering mechanism (\ref{eq3x}). Let the time-varying graph be ${\rm {\mathcal G}}(k)$ such that at each time instant $k,$ Assumptions 3 and 4 are satisfied. Then,
\begin{enumerate}
\item The following uniform bound holds on state estimation error in \eqref{ZEqnNum250987}, despite attacks
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum232225}
\left\| \bar{\eta }(k)\right\| \le (A_{o} )^{k} \left\| \bar{\eta }(0)\right\| +\sum _{m=0}^{k-1}(A_{o} )^{k-m-1} B_{o} ,
\end{equation}
where
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum594295}
\begin{array}{l} {A_{o} =\sigma _{\max } ((I_{N} \otimes A)M(k)),} \\ {B_{o} =\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \sqrt{N} (\alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}})} \\ {\, \, \, \, \,\, \, \, \, \, \, \,\, \, \, \, \, \, \, \, +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau,} \end{array}
\end{equation}
where ${\rm {\mathcal L}}(k)$ denotes the confidence- and trust-dependent time-varying graph Laplacian matrix, and the bound $\tau $ is defined in \eqref{ZEqnNum466700};
\item The uniform bound on the state estimation error \eqref{ZEqnNum232225} becomes
\vspace{-0.15cm}
\begin{equation} \label{64)}
{\mathop{\lim }\limits_{k\to \infty }} \left\| \bar{\eta }(k)\right\| \le \frac{B_{o} }{1-A_{o} }.
\end{equation}
\end{enumerate}
Moreover, other notations used in \eqref{ZEqnNum594295} are defined in \eqref{ZEqnNum500695}.
\end{theorem}
\begin{proof}
Using the presented resilient estimator \eqref{ZEqnNum565391}, one has
\begin{equation} \label{ZEqnNum429555}
\begin{array}{l} {\bar{x}_{i} (k+1)=A\hat{x}_{i} (k)} \\ \, \, \,\, \, \, \,\,\, \, \, \, \, {=A(\bar{x}_{i} (k)+K_{i} (k)(\beta _{i} (k)y_{i} (k)+(1-\beta _{i} (k))C_{i} m_{i} (k)} \\ \, \, \,\, \, \, \,\,\, \, \quad {-C_{i} \bar{x}_{i} (k))\, +\gamma _{i} \sum _{j\in N_{i} }\sigma _{i,j} (k)\beta _{j} (k)(\tilde{x}_{j} (k)-\tilde{x}_{i} (k))),} \end{array}
\end{equation}
Substituting \eqref{ZEqnNum466700} into \eqref{ZEqnNum429555} and using \eqref{ZEqnNum250987}, the state estimation error dynamics becomes
\begin{equation} \label{ZEqnNum162461}
\begin{array}{l} {\bar{\eta }_{i} (k+1)=AM_{i} (k)\bar{\eta }_{i} (k)+A\gamma _{i} \sum _{j\in N_{i} }a_{ij} (k)(\tilde{\eta }_{j} (k)-\tilde{\eta }_{i} (k) )} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, -AK_{i} (k)(1-\beta _{i} (k))C_{i} \varepsilon _{i} (k),} \end{array}
\end{equation}
where $a_{ij} (k)=\sigma _{i,j} (k)\beta _{j} (k)$ and $M_{i} (k)=I-K_{i} (k)C_{i} $.
\noindent Using \eqref{ZEqnNum162461} and notations defined in \eqref{ZEqnNum500695}, the global form of error dynamics becomes
\begin{equation} \label{ZEqnNum454905}
\begin{array}{l} {\bar{\eta }(k+1)=(I_{N} \otimes A)M(k)\bar{\eta }(k)-(\Upsilon \otimes A){\rm {\mathcal L}}(k)\tilde{\eta }(k)} \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \,\, \, \, \,\,\, \, \,{-(\bar{\beta }\otimes A)(I_{nN} -M(k))E(k)).} \end{array}
\end{equation}
Note that Assumption 4 implies that the total number of compromised sensors is less than half of the total number of sensors in the network. That is, if $q$ neighbors of an intact sensor node are attacked and collude to send the same value to mislead it, there still exist $q+1$ intact neighbors that communicate values different from the compromised ones. Moreover, since at least half of each intact sensor's neighbors are intact, the sensor can update its beliefs to discard the compromised neighbors' state estimates. Furthermore, for the time-varying graph ${\rm {\mathcal G}}(k)$ that results from isolating the compromised sensors, Assumptions 3 and 4 guarantee that the entire network remains collectively observable. Using the trust and confidence of neighboring sensors, the incoming information from compromised communication channels is discarded.
Now, taking the norm of both sides of equation \eqref{ZEqnNum454905} and using the triangle inequality, one has
\begin{equation} \label{ZEqnNum800097}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le \left\| (I_{N} \otimes A)M(k)\bar{\eta }(k)\right\| +\left\| (\Upsilon \otimes A){\rm {\mathcal L}}(k)\tilde{\eta }(k)\right\| } \\\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, {+\left\| (\bar{\beta }\otimes A)(I_{nN} -M(k))E(k)\right\| .} \end{array}
\end{equation}
Using \eqref{ZEqnNum466700}, \eqref{ZEqnNum800097} can be rewritten as
\begin{equation} \label{ZEqnNum810116}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +\sigma _{\max } ({\rm {\mathcal L}}(k))\left\| (\Upsilon \otimes A)\tilde{\eta }(k)\right\| } \\ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, {+\left\| ((\bar{\beta }\otimes A)-(\bar{\beta }\otimes I_{n} )(I_{N} \otimes A)M(k))E(k)\right\| .} \end{array}
\end{equation}
{After some manipulations, equation \eqref{ZEqnNum810116} becomes}
\begin{equation} \label{ZEqnNum297239}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \left\| \tilde{\eta }(k)\right\| } \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \, +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau ,} \end{array}
\end{equation}
with $\Upsilon _{m}$ defined in \eqref{ZEqnNum500695}. Then, using \eqref{ZEqnNum528573}, one can write \eqref{ZEqnNum297239} as
\begin{equation} \label{ZEqnNum560131}
\begin{array}{l} {\left\| \bar{\eta }(k+1)\right\| \le A_{o} \left\| \bar{\eta }(k)\right\| +(\sigma _{\max } (A)+\sigma _{\max } (A_{o} ))\left\| \bar{\beta }\right\| \sqrt{N} \tau} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \,\, \, \, \, \, \,\, \,\, \, \, \, \, \, \, \, \,\, \,+\sigma _{\max } (A)\sigma _{\max } ({\rm {\mathcal L}}(k))\Upsilon _{m} \sqrt{N} (\alpha /\left\| C_{i} \right\| +{\rm {\mathcal B}})}, \end{array}
\end{equation}
Solving \eqref{ZEqnNum560131} recursively, one has
\vspace{-0.2cm}
\begin{equation} \label{ZEqnNum925065}
\left\| \bar{\eta }(k)\right\| \le (A_{o} )^{k} \left\| \bar{\eta }(0)\right\| +\sum _{m=0}^{k-1}(A_{o} )^{k-m-1} B_{o} ,
\end{equation}
where $A_{o}$ and $B_{o}$ are given in \eqref{ZEqnNum594295}. This completes the proof of part 1. Based on Assumption 3, the distributed sensor network is always collectively observable. Thus, based on the result provided in \cite{d6}, one can conclude that $A_{o}$ in \eqref{ZEqnNum925065} is always Schur, and the upper bound on the state estimation error becomes \eqref{64)}. This completes the proof.
\end{proof}
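The bound recursion in \eqref{ZEqnNum560131} can be checked numerically for illustrative scalar values of $A_{o}<1$ (Schur) and $B_{o}>0$ (hypothetical values, not taken from the simulation):

```python
# Worst-case propagation of the scalar bound ||eta(k+1)|| <= A_o ||eta(k)|| + B_o.
A_o, B_o = 0.6, 0.2
eta = 5.0                       # arbitrary initial error-norm bound
for _ in range(200):
    eta = A_o * eta + B_o       # one step of the bound recursion
steady = B_o / (1.0 - A_o)      # geometric-series limit of the bound
```

The transient $A_{o}^{k}\left\| \bar{\eta }(0)\right\|$ decays geometrically, and the accumulated term converges to the geometric-series limit, so the bound settles regardless of the initial error.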
\vspace{-0.15cm}
Based on the attack detection approach presented in Algorithms 1 and 2, one can detect the attacker's misbehavior and estimate the actual state using the result presented in Theorem 6 and Algorithm 3.
\begin{algorithm}[!ht]
\caption{Secure Distributed Estimation Mechanism (SDEM).}
\begin{enumerate}
\item [1:] Start with initial innovation sequences and design parameters $\Upsilon _{1} $ and $\Lambda _{1}$.
\item [2:] \textbf{procedure $\forall i=1,\ldots ,N$ }
\item [3:] {Use samples of innovation sequences $\{ r_{i}^{a} (l),\ldots, $ \qquad $r_{i}^{a} (l-1+w)\}$ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\} $ based on \eqref{ZEqnNum255093} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [4:] Estimate the $\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum750552}.
\item [5:] {Based on \eqref{ZEqnNum125869}-\eqref{ZEqnNum359584}, compute confidence $\beta _{i} (k)$ as
\begin{equation}\label{Alg1}
\beta _{i} (k)=\Upsilon _{1} \sum _{l=0}^{k-1}\frac{(\kappa _{1} )^{k-l+1} }{\Upsilon _{1} +\hat{D}_{KL} (P_{r_{i}^{a} } ||P_{r_{i} } )}.
\end{equation}}
\item [6:] {For each sensor node $j\in N_{i} $, use samples of innovation sequences $\{ \zeta _{i,j}^{a} (l),\ldots ,\zeta _{i,j}^{a} (l-1+w)\}$ and $\{ r_{i} (l),\ldots ,r_{i} (l-1+w)\}$ based on \eqref{ZEqnNum178443} and \eqref{ZEqnNum276515}, $\forall l\in k$.}
\item [7:] Estimate the $\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )$ using \eqref{ZEqnNum691139}.
\item [8:] {Using \eqref{ZEqnNum846884}-\eqref{ZEqnNum805360}, compute trust $\sigma _{i,j}(k)$ as
\begin{equation}\label{Alg2}
\sigma _{i,j} (k)=\Lambda _{1} \sum _{l=0}^{k-1}\frac{(\kappa _{2} )^{k-l+1} }{\Lambda _{1} +\hat{D}_{KL} (P_{\zeta _{i,j}^{a} } ||P_{r_{i} } )}.
\end{equation}}
\item [9:] {Using the sensor measurement $y_{i} (k)$ with the confidence $\beta _{i} (k)$ {in \eqref{Alg1}}, the trust on neighbor's $\sigma _{i,j} (k)$ {in \eqref{Alg2}} and neighbor's state estimates $\tilde{x}_{j} (k),\, \, \forall j\in N_{i} $, update the resilient state estimator in \eqref{ZEqnNum565391}.}
\item [10:] \textbf{end procedure}
\end{enumerate}
\end{algorithm}
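Steps 4 and 7 of the algorithm require sample-based estimates of the KL divergence. As a sketch only (the paper's estimator is the one defined in \eqref{ZEqnNum750552}; the implementation below is a standard $k$-nearest-neighbor estimator of $D_{KL}(P\| Q)$ assumed for illustration), such an estimate can be computed from two sample windows as:

```python
import numpy as np

def knn_kl_divergence(x, y, k=5):
    # k-NN estimate of D_KL(P||Q) from x: (n, d) samples ~ P and y: (m, d) samples ~ Q.
    n, d = x.shape
    m = y.shape[0]
    # rho: distance from each x_i to its k-th nearest neighbor within x (self excluded).
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dx, np.inf)
    rho = np.sort(dx, axis=1)[:, k - 1]
    # nu: distance from each x_i to its k-th nearest neighbor in y.
    dy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = np.sort(dy, axis=1)[:, k - 1]
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
same = knn_kl_divergence(rng.normal(0, 1, (500, 1)), rng.normal(0, 1, (500, 1)))
far  = knn_kl_divergence(rng.normal(0, 1, (500, 1)), rng.normal(5, 1, (500, 1)))
```

When the two innovation windows come from the same distribution the estimate is near zero, while a compromised window drives it up, which is exactly the behavior the confidence and trust updates exploit.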
\vspace{-0.2cm}
\section{ Simulation Results}
In this section, we discuss simulation results to demonstrate the efficacy of the presented attack detection and mitigation mechanism. The sensor network is assumed to have the undirected communication topology shown in Fig. 2, with the objective of tracking the desired process dynamics.
Consider the process dynamics in \eqref{ZEqnNum820040} for generating the target trajectory as
\begin{equation}
x(k+1) =\left[\begin{array}{cc} {\cos (\pi /200)} & {-\sin (\pi /200)} \\ {\sin (\pi /200)} & {\cos (\pi /200)} \end{array}\right]x(k) \, +\, w(k),
\label{72)}
\end{equation}
with the observation matrix $C_{i} $ in \eqref{ZEqnNum687942}, noise covariances and initial state as
\vspace{-0.2cm}
\begin{equation} \label{721)}
C_{i} =[5\, \, 0;0\, \, 2],\, \, \, \, \, Q=I_{2} ,\, \, \, \, \, R_{i} =\, I_{2} ,\, \, \, \, \, x_{0} =(0.5,0).
\vspace{-0.2cm}
\end{equation}
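The target trajectory in \eqref{72)} is a noisy rotation; a short simulation sketch (the horizon and random seed are illustrative) is:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.pi / 200
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation matrix of Eq. (72)
Q = np.eye(2)                                     # process noise covariance
x = np.array([0.5, 0.0])                          # initial state x_0

traj = [x]
for _ in range(400):
    w = rng.multivariate_normal(np.zeros(2), Q)   # process noise w(k)
    x = A @ x + w
    traj.append(x)
traj = np.array(traj)
```

Since the rotation matrix has unit-modulus eigenvalues, the noiseless dynamics trace a circle; the process noise perturbs the target around that path.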
\vspace{-0.1cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=1.45in,height=1.2in]{graph_ET.jpg}
\vspace{-5pt}\caption{Communication topology.}\label{fig2}
\captionsetup{justification=centering}
\end{center}
\end{figure}
\vspace{-0.7cm}
\noindent
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=29mm]{est_error.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=28mm]{event.pdf}
\caption{}
\end{subfigure}
\caption{Sensor network without any attack. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m1}
\vspace{-0.35cm}
\end{figure}
For the intact sensor network, {based on the process dynamics in \eqref{72)} with the noise covariances in \eqref{721)}}, the state estimates of the sensors converge to the desired process state in the mean-square sense, and the state estimation error goes to zero for each sensor node, as shown in Fig. 3a. The event generation based on the event-triggering mechanism in (\ref{eq3x}) with the triggering threshold $\alpha =1.8$ is shown in Fig. 3b. Then, we consider that sensor 2 of the network is compromised with the adversarial input $\delta _{2} (k)=2+10\sin (100k)$ after 20 seconds. Fig. 4a shows the attacker's effect on sensor 2: the compromised sensor and the other sensors in the network deviate from the desired target state, resulting in nonzero estimation errors driven by the attacker's input. Furthermore, the event generation based on the event-triggering mechanism in (\ref{eq3x}) in the presence of the attack is shown in Fig. 4b; after the injection of the attack on sensor 2, the event-triggered system becomes time-triggered and exhibits continuous-triggering misbehavior. This observation is consistent with the analysis presented for continuous-triggering misbehavior. In Fig. 5, we show the results of non-triggering misbehavior for sensor node 2, which also follow the presented analysis.
\vspace{-0.1cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{est_error_ua.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{event_ua_u.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under continuous-triggering misbehavior. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m2}
\vspace{-0.6cm}
\end{figure}
\vspace{-0.2cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{non-trig_est_error.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=31mm]{NT_event.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under non-triggering misbehavior. (a) State estimation errors (b) Transmit function for sensor 2}
\label{m3}
\vspace{-0.35cm}
\end{figure}
\vspace{-0.35cm}
\begin{figure}[H]
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{Detection_Cyb.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=1.1\linewidth, height=30mm]{confidence_cyb.pdf}
\caption{}
\end{subfigure}
\caption{Sensor node 2 under attack. (a) Estimated KL divergence (b) Confidence of sensors}
\label{m4}
\vspace{-0.35cm}
\end{figure}
\vspace{-0.4cm}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3.6in,height=1.3in]{final_save3.pdf}
\vspace{-10pt}\caption{State estimation errors under attack on sensor $2$ using proposed resilient state estimator.}\label{fig7}
\captionsetup{justification=centering}
\end{center}
\vspace{-0.6cm}
\end{figure}
\noindent
Now, we detect the effect of the attack on the sensors using the presented attack detection mechanism. Fig. 6a shows the result of the estimated KL-divergence-based attack detection mechanism: after the injection of the attack signal, the estimated KL divergence starts increasing for the compromised sensor node as well as for the sensor nodes that have a path from the compromised sensor. One can always design a threshold to detect the effect of the attack in the sensor network and then isolate the corrupted sensor to avoid propagation of the attack through the WSN.\par
The estimated divergence for the compromised sensor, i.e., sensor 2, grows after the attack injection at $k=20$, which follows the result presented in Theorem 4. The confidence of each sensor is evaluated based on Lemma 1 with the discount factor $\kappa _{1} =0.5$ and the uncertainty threshold $\Upsilon _{1} =0.5$. Fig. 6b shows the confidence of the sensors in the presence of the considered attack, which is close to one for healthy sensors and tends to zero for the compromised one. Then, the belief-based resilient estimator is implemented, and Fig. 7 shows the state estimation result using the resilient estimator \eqref{ZEqnNum565391}. After the injection of the attack, the sensors reach consensus on the state estimates within a few seconds, i.e., the state estimates of the sensors converge to the actual position of the target. The result in Fig. 7 follows Theorem 6.
\vspace{-0.25cm}
\section{ Conclusion}
In this paper, we first analyze the adverse effects of cyber-physical attacks on the event-triggered distributed Kalman filter (DKF). We show that an attacker can adversely affect the performance of the DKF. We also show that the event-triggered mechanism in the DKF can be leveraged by the attacker to cause a non-triggering misbehavior that significantly harms the network connectivity and its collective observability. Then, {to detect adversarial intrusions in the DKF, we relax the restrictive Gaussian assumption on the probability density functions of attack signals and estimate the Kullback-Leibler (KL) divergence via a $k$-nearest-neighbors approach. }Finally, to mitigate attacks, a meta-Bayesian approach is presented that incorporates the outcome of the attack detection mechanism to perform second-order inference and consequently form beliefs over beliefs, i.e., the confidence and trust of a sensor. Each sensor communicates its confidence to its neighbors. Sensors then incorporate the confidence of their neighbors and their own trust about their neighbors into their posterior update laws to successfully discard corrupted sensor information. Simulation results illustrate the performance of the presented resilient event-triggered DKF.
\vspace{-0.3cm}
\appendices
\section{Proof of Theorem 1}
Note that for notational simplicity, in the following proof we keep the sensor index $i$ but omit the time index $k$. Without the time index, we represent the prior at time $k+1$ as $\bar{x}_{i}^{a} (k+1)\buildrel\Delta\over= (\bar{x}_{i}^{a} )^{+} $ and follow the same convention for other variables.
Using the process dynamics in \eqref{ZEqnNum820040} and the corrupted prior state estimate in \eqref{ZEqnNum120276}, one has
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum533668}
(\bar{\eta }_{i}^{a} )^{+} =x^{+} -(\bar{x}_{i}^{a} )^{+} =A(x\, -\hat{x}_{i}^{a} )\, +\, w,
\end{equation}
where the compromised posterior state estimate $\hat{x}_{i}^{a} (k)$ follows the dynamics \eqref{ZEqnNum120276}. Similarly, using \eqref{ZEqnNum120276}, the corrupted posterior state estimation error becomes
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum240480}
\eta _{i}^{a} =x-\hat{x}_{i}^{a} =x-\bar{x}_{i}^{a} -K_{i}^{a} (y_{i} -C\bar{x}_{i}^{a} )-\gamma _{i} \sum _{j\in N_{i} }(\tilde{x}_{j}^{a} -\tilde{x}_{i}^{a} )-K_{i}^{a} f_{i} .
\end{equation}
\vspace{-0.12cm}
Then, one can write \eqref{ZEqnNum533668}-\eqref{ZEqnNum240480} as
\begin{equation} \label{ZEqnNum404232}
\left\{\begin{array}{l} {(\bar{\eta }_{i}^{a} )^{+} =A\eta _{i}^{a} \, +\, w,} \\ {\eta _{i}^{a} =(I_{n} -K_{i}^{a} C_{i} )\bar{\eta }_{i}^{a} -K_{i}^{a} v_{i} +u_{i}^{a} ,} \end{array}\right.
\end{equation}
where
\begin{equation} \label{ZEqnNum571757}
\vspace{-0.12cm}
u_{i}^{a} =\gamma _{i} \sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )-K_{i}^{a} f_{i} .
\end{equation}
Based on \eqref{ZEqnNum926700}, we define the predictive state estimation error, respectively, under attack as
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum896894}
\begin{array}{l}(\tilde{\eta }_{i}^{a} )^{+} =x^{+} -(\tilde{x}_{i}^{a} )^{+}\\ \, \, \, \, \,\, \, \, \, \, \,\, \, \, \, \, \,\,\, \, \, \,\,\, =\zeta _{i}^{+} (\bar{\eta }_{i}^{a} )^{+} +(1-\zeta _{i}^{+} )A\tilde{\eta }_{i}^{a} . \end{array}
\end{equation}
Using \eqref{ZEqnNum404232}, the corrupted covariance of the prior state estimation error becomes
\begin{equation} \label{ZEqnNum928831}
\begin{array}{l} {(\bar{P}_{i}^{a} )^{+} ={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} ((\bar{\eta }_{i}^{a} )^{+} )^{T} \right],} \\ {\, \, \, \, \,\, \, \, \, \, \, \,\,\, \, \, \, \, \, \, \, \, \, \, ={\bf {\rm E}}\left[(A\eta _{i}^{a} \, +\, w)(A\eta _{i}^{a} \, +\, w\, )^{T} \right]=A\hat{P}_{i}^{a} A^{T} +Q.} \end{array}
\end{equation}
Using the corrupted predictive state estimate error $\, (\tilde{\eta }_{i}^{a} )^{+} $ in \eqref{ZEqnNum896894} with $(\bar{P}_{i,j}^{a} )^{+} =A\hat{P}_{i,j}^{a} A^{T} +Q$, one can write the cross-correlated predictive state estimation error covariance $(\tilde{P}_{i,j}^{a} )^{+} $ as
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum354214}
\begin{array}{l} {(\tilde{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\tilde{\eta }_{i}^{a} )^{+} ((\tilde{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{i}^{+} (1-\zeta _{j}^{+} )A(\breve{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )\zeta _{j}^{+} (\stackrel{\frown}{P}_{i,j}^{a} )^{+} A^{T} } \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{+\zeta _{i}^{+} \zeta _{j}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )(1-\zeta _{j}^{+} )(A\tilde{P}_{i,j}^{a} A^{T} +Q),} \end{array}
\end{equation}
where $\stackrel{\frown}{P}_{i,j}^{a} $ and $\breve{P}_{i,j}^{a}$ are the cross-correlated estimation error covariances, whose updates are given in \eqref{ZEqnNum358063}-\eqref{ZEqnNum655968}.
The cross-correlated estimation error covariance $(\stackrel{\frown}{{P}}_{i,j}^{a} )^{+}$ in \eqref{ZEqnNum354214} is given by
\vspace{-0.12cm}
\begin{equation} \label{ZEqnNum358063}
\begin{array}{l} {(\stackrel{\frown}{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\tilde{\eta }_{i}^{a} )^{+} ((\bar{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{i}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{i}^{+} )A\sum _{r\in N_{i} }(\tilde{P}_{i,r}^{a} -\tilde{P}_{i,j}^{a} )(\gamma _{i} A)^{T} +} \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (1-\zeta _{i}^{+} )[A\stackrel{\frown}{P}_{i,j}^{a} M_{i}^{a} A^{T} +Q],} \end{array}
\end{equation}
where $\tilde{P}_{i,j}^{a}$ and $\breve{P}_{i,j}^{a}$ denote the cross-correlated estimation error covariances, which evolve according to \eqref{ZEqnNum354214} and \eqref{ZEqnNum655968}, respectively. Similarly, $(\breve{P}_{i,j}^{a} )^{+} $ is updated based on the expression given by
\begin{equation} \label{ZEqnNum655968}
\begin{array}{l} {(\breve{P}_{i,j}^{a} )^{+} ={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} ((\tilde{\eta }_{j}^{a} )^{+} )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{={\bf {\rm E}}\left[(\bar{\eta }_{i}^{a} )^{+} (\zeta _{j}^{+} (\bar{\eta }_{j}^{a} )^{+} +(1-\zeta _{j}^{+} )(A\tilde{\eta }_{j}^{a} +w)\, )^{T} \right]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{=\zeta _{j}^{+} (\bar{P}_{i,j}^{a} )^{+} +(1-\zeta _{j}^{+} )[A(M_{i}^{a} )^{T} \stackrel{\frown}{P}_{i,j}^{a} A^{T} +Q]} \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{+(1-\zeta _{j}^{+} )A\gamma _{i} \sum _{s\in N_{i} }(\tilde{P}_{s,j}^{a} -\tilde{P}_{i,j}^{a} ) A^{T} .} \end{array}
\end{equation}
Now, using \eqref{ZEqnNum240480}-\eqref{ZEqnNum896894}, one can write the covariance of the posterior estimation error $\hat{P}_{i}^{a} $ as
\begin{equation} \label{ZEqnNum893608}
\begin{array}{l} {\hat{P}_{i}^{a} ={\bf {\rm E}}[M_{i} \bar{\eta }_{i}^{a} (M_{i} \bar{\eta }_{i}^{a} )^{T} ]+{\bf {\rm E}}[K_{i}^{a} v_{i} (K_{i}^{a} v_{i} )^{T} ]-2{\bf {\rm E}}[(M_{i} \bar{\eta }_{i}^{a} )(K_{i}^{a} v_{i} )^{T} ]} \\ {\, \, \, \, \, \, \, \, \, \, \, -2{\bf {\rm E}}[K_{i}^{a} v_{i} (u_{i}^{a} )^{T} ]+{\bf {\rm E}}[u_{i}^{a} (u_{i}^{a} )^{T} ]+2{\bf {\rm E}}[(M_{i} \bar{\eta }_{i}^{a} )(u_{i}^{a} )^{T} ],} \end{array}
\end{equation}
Using \eqref{ZEqnNum928831} and measurement noise covariance, the first two terms of \eqref{ZEqnNum893608} become
\begin{equation} \label{87)}
\begin{array}{l} {{\bf {\rm E}}[M_{i} \bar{\eta }_{i}^{a} (M_{i} \bar{\eta }_{i}^{a} )^{T} ]=M_{i} \bar{P}_{i}^{a} M_{i}^{T},}\, \, \, \, \, \, {{\bf {\rm E}}[K_{i}^{a} v_{i} (K_{i}^{a} v_{i} )^{T} ]=K_{i}^{a} R_{i} (K_{i}^{a} )^{T}.} \end{array}
\end{equation}
According to Assumption 1, the measurement noise $v_{i} $ is i.i.d. and uncorrelated with the state estimation errors; therefore, the third and fourth terms in \eqref{ZEqnNum893608} are zero. Now, using $u_{i}^{a} $ in \eqref{ZEqnNum571757} and Assumption 1, the last two terms in \eqref{ZEqnNum893608} can be simplified as
\vspace{-0.2cm}
\begin{equation} \label{88)}
\begin{array}{l} {{\bf {\rm E}}[u_{i}^{a} (u_{i}^{a} )^{T} ]=\gamma _{i}^{2} {\bf {\rm E}}\left[\big[\sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )\big]\big[\sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )\big]^{T} \right]} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, +{\bf {\rm E}}[K_{i}^{a} f_{i} (K_{i}^{a} f_{i} )^{T} ]-2K_{i}^{a} {\bf {\rm E}}[f_{i} \sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )^{T} ]} \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, =\gamma _{i}^{2} \sum _{j\in N_{i} }(\tilde{P}_{j}^{a} -2\tilde{P}_{i,j}^{a} +\tilde{P}_{i}^{a} )+K_{i}^{a} \Sigma _{i}^{f} (K_{i}^{a} )^{T} } \\ {\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, -2K_{i}^{a} {\bf {\rm E}}[f_{i} \sum _{j\in N_{i} }(\tilde{\eta }_{j}^{a} -\tilde{\eta }_{i}^{a} )^{T} ],} \end{array}
\end{equation}
\vspace{-0.1cm}
and
\begin{equation} \label{ZEqnNum612155}
\begin{array}{l}
2{\rm E}[(u_{i}^{a})(M_{i}\bar{\eta}_{i}^{a})^{T}]=2{\rm E}\Big[\Big(\gamma_{i}\sum\limits_{j\in N_{i}}(\tilde{\eta}_{j}^{a}-\tilde{\eta}_{i}^{a})-K_{i}^{a}f_{i}\Big)(M_{i}\bar{\eta}_{i}^{a})^{T}\Big] \\
\qquad\qquad =2\gamma_{i}\sum\limits_{j\in N_{i}}(\stackrel{\frown}{P}_{i,j}^{a}-\stackrel{\frown}{P}_{i}^{a})M_{i}^{T}-2K_{i}^{a}{\rm E}[f_{i}(\bar{\eta}_{i}^{a})^{T}]M_{i}^{T},
\end{array}
\end{equation}
where the cross-correlated term $\stackrel{\frown}{P}_{i,j}^{a}$ is updated according to \eqref{ZEqnNum358063}. Using \eqref{ZEqnNum893608}--\eqref{ZEqnNum612155}, the posterior state estimation error covariance $\hat{P}_{i}^{a}$ under attacks is given by
\vspace{-0.2cm}
\begin{equation} \label{90)}\nonumber
\begin{array}{l}
\hat{P}_{i}^{a}=M_{i}^{a}\bar{P}_{i}^{a}(M_{i}^{a})^{T}+K_{i}^{a}[R_{i}+\Sigma_{i}^{f}](K_{i}^{a})^{T}-2K_{i}^{a}\Xi_{f} \\
\qquad +2\gamma_{i}\sum\limits_{j\in N_{i}}(\stackrel{\frown}{P}_{i,j}^{a}-\stackrel{\frown}{P}_{i}^{a})(M_{i}^{a})^{T}+\gamma_{i}^{2}\sum\limits_{j\in N_{i}}(\tilde{P}_{j}^{a}-2\tilde{P}_{i,j}^{a}+\tilde{P}_{i}^{a}),
\end{array}
\end{equation}
with $\Xi_{f}={\rm E}\big[f_{i}\sum_{j\in N_{i}}(\tilde{\eta}_{j}^{a}-\tilde{\eta}_{i}^{a})^{T}\big]+{\rm E}[f_{i}(\bar{\eta}_{i}^{a})^{T}](M_{i}^{a})^{T}$. This completes the proof.
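The derivation above repeatedly uses second-moment identities of the form ${\rm E}[K v (K v)^{T}] = K R K^{T}$ for zero-mean noise $v$ with covariance $R$. Such an identity can be checked numerically; a minimal Monte Carlo sketch in Python (the matrix values are illustrative, not taken from the paper):

```python
import numpy as np

def empirical_second_moment(K, R, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[(K v)(K v)^T] for v ~ N(0, R)."""
    rng = np.random.default_rng(seed)
    v = rng.multivariate_normal(np.zeros(R.shape[0]), R, size=n_samples)
    Kv = v @ K.T                      # each row is (K v)^T
    return Kv.T @ Kv / n_samples      # sample average of (K v)(K v)^T

K = np.array([[0.5, 0.1], [0.0, 0.3]])
R = np.array([[1.0, 0.2], [0.2, 0.5]])
assert np.allclose(empirical_second_moment(K, R), K @ R @ K.T, atol=2e-2)
```

With $2\times 10^{5}$ samples the sample moment agrees with $K R K^{T}$ to well within the stated tolerance.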
\vspace{-0.3cm}
\section{Introduction}
As overhead imagery captured via remote sensing becomes more abundant,
it is increasingly relied upon as an important source of information
for understanding locations and how they change over time. For
example, methods have been proposed for extracting
roads~\cite{batra2019improved}, detecting
buildings~\cite{bischke2019multi}, estimating land
cover~\cite{robinson2019large}, and interpreting the effects of
natural disasters~\cite{doshi2018satellite}. Unfortunately, various
artifacts contained in the captured imagery, such as clouds, snow, and
shadows, negatively impact the performance of these methods.
Clouds and their properties have long been researched due to their
impact on weather and climate processes~\cite{liou1986influence}. In
an empirical study Wylie et al.~\cite{wylie2005trends} analyze cloud
cover over a 22 year period using atmospheric sounding, finding that
approximately 75 percent off all observations indicated clouds. Given
their high frequency, clouds present persistent challenges for
interpreting overhead imagery and many methods have been proposed for
identifying them~\cite{li2015cloud,xie2017multilevel}.
The primary challenge is that the appearance of clouds can vary
dramatically and collecting manually annotated data is time consuming
and expensive. This issue is further compounded by the various sensor
types and resolutions of satellite imagery, as well as differences in
locations around the globe. Consider the scenario of transitioning to
a new sensor. Instead of collecting large amounts of new annotations,
a method is needed that can function with minimal supervision. In this
work we explore how recent advances in multi-image fusion can be
extended to support cloud detection.
First, we design an architecture for weakly-supervised multi-image
fusion that learns to estimate image quality. Then, we describe two
approaches which take advantage of the resulting quality network to
produce a cloud detector. To support the training and evaluation of
our methods, we collect a large dataset of overhead images captured at
varying timesteps and varying levels of cloud cover. Our contributions
include: 1) an analysis of multi-image fusion on real data, 2) two
approaches for identifying clouds that require limited supervision,
and 3) an extensive evaluation, achieving state-of-the-art results on
a benchmark dataset.
\section{Approach}
Our approach for identifying clouds uses multi-image fusion as a form
of bootstrapping, reducing the need for annotated training data. We
start by describing the architecture for multi-image fusion and then
describe how we extend this architecture for detecting clouds.
\subsection{Multi-Image Fusion}
\label{sec:fusion}
We apply multi-image fusion to take a stack of images over the same
region, $I = \{I_1,\ldots,I_K\}$, where $I_j \in R^{h \times w \times
3}$, and produce a fused image, $F = \phi(I)$, such that $F$ is free
of artifacts. Our approach is inspired by the recent work of Rafique
et al.~\cite{usman2019weakly}. There are two main steps: 1) estimating
a per-pixel quality mask for each image then using the qualities to
compute a fused image and 2) passing the fused image through a
segmentation network to produce a per-pixel semantic labeling. When
trained end-to-end, this architecture learns to estimate per-pixel
image qualities that can be used to produce a fused image with reduced
artifacts, without requiring explicit labels.
\subsubsection{Dataset}
To support the training of our methods, we collected Sentinel-2
imagery from the state of Delaware with varying levels of cloud cover.
Starting from a bounding box around the state, we generated a set of
non-overlapping tiles using the standard XYZ style spherical Mercator
tile. For each tile, we collected a semantic labeling from the
Chesapeake Land Cover dataset~\cite{robinson2019large}, removing tiles
without valid labels. For each remaining tile, we randomly downloaded
six Sentinel-2 images (RGB bands) from the year 2019 that satisfied
the constraint of having between 10\% and 50\% cloud cover in the
parent Sentinel-2 image strip. This process resulted in \num{1033}
unique locations and \num{6198} images (of size $512 \times 512$).
\figref{data} shows some example images from our dataset.
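The non-overlapping tiles follow the standard XYZ (slippy-map) spherical-Mercator scheme; the usual lon/lat-to-tile-index conversion can be sketched as follows (a generic formula, not the authors' collection code; the example coordinates are illustrative):

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """WGS84 lon/lat -> XYZ (spherical-Mercator) tile indices."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# zoom 0 is a single world tile; any in-range point maps to (0, 0)
assert deg2tile(39.0, -75.5, 0) == (0, 0)
# at zoom 1 the equator/prime-meridian corner falls in the (1, 1) tile
assert deg2tile(0.0, 0.0, 1) == (1, 1)
```

Enumerating all tiles at a fixed zoom inside a bounding box then amounts to sweeping the integer ranges between the corner tile indices.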
\subsubsection{Method}
Each image $I_j$ is first passed through a \emph{quality} network
which outputs a per-pixel quality mask $Q_j \in R^{h \times w}$,
such that $Q_j(p) \in [0,1]$ at each pixel $p$. Given quality masks
for each image, a relative quality score at each pixel is computed by
applying a softmax across images:
\begin{equation}
Q^{*}_j(p) = \frac{e^{Q_j(p)}}{\sum_{k=1}^{K} e^{Q_k(p)}}.
\end{equation}
The final fused image $F$ is obtained by averaging the images
weighted by the relative quality scores:
\begin{equation}
F(p) = \sum_{j=1}^{K} I_j(p) Q^{*}_j(p).
\end{equation}
The fused image $F$ is passed through a \emph{segmentation} network
to produce a per-pixel labeling. The entire architecture, both quality
network and segmentation network, are optimized using a cross-entropy
loss function.
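The two equations above can be sketched in a few lines of numpy (shapes and names are illustrative; in practice the quality scores come from the quality network):

```python
import numpy as np

def fuse(images, qualities):
    """images: (K, h, w, 3); qualities: (K, h, w) in [0, 1].
    Returns the quality-weighted fused image of shape (h, w, 3)."""
    q = np.exp(qualities)                       # softmax across the K images
    q_rel = q / q.sum(axis=0, keepdims=True)    # relative quality Q*_j(p)
    return (images * q_rel[..., None]).sum(axis=0)

K, h, w = 4, 8, 8
rng = np.random.default_rng(0)
imgs = rng.random((K, h, w, 3))
qual = rng.random((K, h, w))
F = fuse(imgs, qual)
assert F.shape == (h, w, 3)
# the weights sum to one per pixel, so F is a convex combination of the inputs
assert np.all(F <= imgs.max(axis=0) + 1e-9) and np.all(F >= imgs.min(axis=0) - 1e-9)
```

Because the fusion is differentiable, the quality network receives gradients through this weighted average during end-to-end training.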
\subsubsection{Architecture Details}
For the quality network, we use a slightly modified
U-Net~\cite{ronneberger2015u} with the same number of layers but a
quarter of the feature maps compared to the original work. The final
activation is a sigmoid. For the segmentation network, we build on
LinkNet~\cite{chaurasia2017linknet}, a modern, lightweight
segmentation architecture that follows an encoder/decoder approach.
Specifically, we use LinkNet-34, which is LinkNet with a
ResNet-34~\cite{he2016deep} encoder. We initialize the encoder with
weights from a network pretrained on ImageNet.
\subsection{Detecting Clouds}
\begin{figure}[t]
\centering
\setlength\tabcolsep{1pt}
\begin{tabular}{ccc}
\includegraphics[width=.32\linewidth]{dataset/0_label} &
\includegraphics[width=.32\linewidth]{dataset/5_label} &
\includegraphics[width=.32\linewidth]{dataset/31_label} \\
\includegraphics[width=.32\linewidth]{dataset/0_3} &
\includegraphics[width=.32\linewidth]{dataset/5_3} &
\includegraphics[width=.32\linewidth]{dataset/31_0} \\
\includegraphics[width=.32\linewidth]{dataset/0_4} &
\includegraphics[width=.32\linewidth]{dataset/5_5} &
\includegraphics[width=.32\linewidth]{dataset/31_1} \\
\includegraphics[width=.32\linewidth]{dataset/0_5} &
\includegraphics[width=.32\linewidth]{dataset/5_4} &
\includegraphics[width=.32\linewidth]{dataset/31_2} \\
\end{tabular}
\caption{Examples from our dataset for multi-image fusion. (top)
Land cover labeling from the Chesapeake Land Cover
dataset~\cite{robinson2019large}. (bottom) Images of the same
location with varying cloud cover.}
\label{fig:data}
\end{figure}
\begin{figure*}
\centering
\setlength\tabcolsep{1pt}
\begin{tabular}{cccc|ccc}
Image (1 of 6) & Quality & Image (2 of 6) & Quality & Fused & Target & Prediction \\
\includegraphics[width=.139\linewidth]{fusion/4_0_im} &
\includegraphics[width=.139\linewidth]{fusion/4_0_quality} &
\includegraphics[width=.139\linewidth]{fusion/4_2_im} &
\includegraphics[width=.139\linewidth]{fusion/4_2_quality} &
\includegraphics[width=.139\linewidth]{fusion/4_fused} &
\includegraphics[width=.139\linewidth]{fusion/4_target} &
\includegraphics[width=.139\linewidth]{fusion/4_pred} \\
\includegraphics[width=.139\linewidth]{fusion/28_0_im} &
\includegraphics[width=.139\linewidth]{fusion/28_0_quality} &
\includegraphics[width=.139\linewidth]{fusion/28_5_im} &
\includegraphics[width=.139\linewidth]{fusion/28_5_quality} &
\includegraphics[width=.139\linewidth]{fusion/28_fused} &
\includegraphics[width=.139\linewidth]{fusion/28_target} &
\includegraphics[width=.139\linewidth]{fusion/28_pred} \\
\end{tabular}
\caption{Qualitative examples of multi-image fusion. (left)
Example images and estimated quality masks. (right) The fused
image produced using the relative quality scores, the target
segmentation mask, and our prediction.}
\label{fig:fusion}
\end{figure*}
The quality network learns to identify artifacts in the training data
that negatively impact the final segmentation, for example clouds and
regions of no data. We describe two approaches which use the quality
network, trained for multi-image fusion, as a starting point for
learning a cloud detector (per-pixel binary classification). For these
methods, we use the dataset recently introduced by Liu et
al.~\cite{liu2019clouds} with 100 training images and 20 testing
images.
\subsubsection{Quality Calibration}
We apply Platt scaling (which we refer to as quality calibration) to
transform the outputs of the quality network into a distribution over
classes (cloud/not cloud). In practice, this means we fit a logistic
regression model:
\begin{equation}
P(y=1|Q_j(p)) = \frac{1}{1 + e^{\beta_0Q_j(p)+\beta_1}},
\end{equation}
where $\beta_0$ and $\beta_1$ are two learned parameters.
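With only the two scalars $\beta_0$ and $\beta_1$ to learn, the calibration can be fit by plain gradient descent on the binary cross-entropy; a minimal sketch on synthetic quality scores (the data and hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def platt(q, b0, b1):
    """P(y=1 | q) = 1 / (1 + exp(b0*q + b1)), as in the text."""
    return 1.0 / (1.0 + np.exp(b0 * q + b1))

def fit_platt(q, y, lr=1.0, steps=3000):
    """Fit (b0, b1) by gradient descent on binary cross-entropy."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g = y - platt(q, b0, b1)   # dBCE/dz for p = sigmoid(-z), z = b0*q + b1
        b0 -= lr * np.mean(g * q)
        b1 -= lr * np.mean(g)
    return b0, b1

# synthetic scores: low quality -> cloud (y = 1), high quality -> clear (y = 0)
rng = np.random.default_rng(1)
q = rng.random(2000)
y = (q + 0.1 * rng.standard_normal(2000) < 0.5).astype(float)
b0, b1 = fit_platt(q, y)
assert np.mean((platt(q, b0, b1) > 0.5) == (y == 1)) > 0.85
```

With the sign convention above, a positive $\beta_0$ makes high-quality pixels map to low cloud probability, as expected.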
\subsubsection{Fine-Tuning the Quality Network}
Alternatively, we employ transfer learning, freezing all layers of the
quality network except the final three convolutional layers (the last
upsampling block and the final $1\times1$ convolution). Then, we
fine-tune the network for cloud detection. We optimize the network
using the following loss function:
\begin{equation}
\mathcal{L} = \mathcal{L}_{bce} + (1 -\mathcal{L}_{dice})
\end{equation}
where $\mathcal{L}_{bce}$ is binary cross entropy, a standard loss
function used in binary classification tasks, and
$\mathcal{L}_{dice}$ is the dice coefficient, which measures spatial
overlap.
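The combined objective can be written compactly; a numpy sketch (the actual training code operates on PyTorch tensors, but the arithmetic is the same; the smoothing constant is an assumption for numerical stability):

```python
import numpy as np

def bce_dice_loss(pred, target, eps=1e-7):
    """L = L_bce + (1 - L_dice) for per-pixel cloud probabilities."""
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    dice = (2 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce + (1.0 - dice)

y = np.array([1.0, 0.0, 1.0, 0.0])
good = np.array([0.95, 0.05, 0.90, 0.10])
bad = np.array([0.40, 0.60, 0.30, 0.70])
assert bce_dice_loss(good, y) < bce_dice_loss(bad, y)
```

The dice term rewards spatial overlap directly, which helps when cloud pixels are a small fraction of the image and plain cross-entropy is dominated by the background class.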
\subsection{Implementation Details}
Our methods are implemented using the
PyTorch~\cite{paszke2017automatic} framework and optimized using
RAdam~\cite{liu2019variance} with Lookahead~\cite{zhang2019lookahead}
($k=5, \alpha=.5$). The learning rate is $\lambda = 10^{-4}$
($10^{-2}$ when fine-tuning). We train all networks with a batch size
of 10 for 100 epochs and train on random crops of size $416 \times
416$. For multi-image fusion, we randomly sample 4 images per location
during training.
\section{Evaluation}
We evaluate our methods both qualitatively and quantitatively through
a variety of experiments.
\subsection{Visual Analysis of Multi-Image Fusion on Real Data}
Previous work on multi-image fusion used training data augmented with
synthetic clouds~\cite{usman2019weakly}. In our work, we train and
evaluate our approach using real images with varying levels of cloud
cover. \figref{fusion} shows example output from our network
(described in \secref{fusion}), including: example images alongside
the estimated quality mask, the fused image using the relative quality
scores, the target label from the Chesapeake Land Cover
dataset~\cite{robinson2019large}, and our prediction. The estimated
quality masks clearly identify artifacts in the imagery, such as
clouds.
\subsection{Quantitative Analysis of Cloud Detection}
Using the dataset recently introduced by Liu et
al.~\cite{liu2019clouds}, we quantitatively evaluate our methods'
ability to detect clouds. \tblref{results} shows the results of this
experiment. We compare against a baseline, \emph{Ours (threshold)},
that na\"ively thresholds the quality masks at $.5$ (treating anything
below the threshold as a cloud). The baseline, which requires no
direct supervision, is able to correctly classify over 91\% of pixels.
Applying quality calibration, \emph{Ours (calibrate)}, to the output
of the quality network improves upon this result. Ultimately
fine-tuning, \emph{Ours (fine-tune)}, outperforms all baselines,
achieving better results than Liu et al.~\cite{liu2019clouds}. Next,
we evaluate the ability of our approach to identify clouds with
varying number of training images (\figref{acc_vs_size}). For this
experiment, we trained each model on a randomly selected subset of the
training data and fine-tuning was limited to 30 epochs. As observed,
our proposed approaches require very few annotated images to produce
reasonable cloud detection results. Finally, \figref{clouds} shows
some example predictions using our best method.
\begin{table}
\centering
\caption{Quantitative evaluation for cloud detection.}
\begin{tabular}{@{}lrrrr@{}}
\toprule
Method & TPR & TNR & mIoU & Accuracy \\
\midrule
Liu et al.~\cite{liu2019clouds} & 0.963 & 0.945 & 89.47\% & 95.87\% \\
Ours (threshold) & \textbf{0.982} & 0.878 & 81.78\% & 91.73\% \\
Ours (calibrate) & 0.933 & \textbf{0.967} & 88.50\% & 95.42\% \\
Ours (fine-tune) & 0.962 & \textbf{0.967} & \textbf{91.24\%} & \textbf{96.51\%} \\
\bottomrule
\end{tabular}
\label{tbl:results}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{acc_vs_size}
\caption{Evaluating the impact of the number of training images on
cloud detection accuracy.}
\label{fig:acc_vs_size}
\end{figure}
\begin{figure}
\centering
\setlength\tabcolsep{1pt}
\begin{tabular}{cccc}
Image & Target & Prediction & Error \\
\includegraphics[width=.24\linewidth]{qualitative/0_im} &
\includegraphics[width=.24\linewidth]{qualitative/0_label} &
\includegraphics[width=.24\linewidth]{qualitative/0_pred} &
\includegraphics[width=.24\linewidth]{qualitative/0_error} \\
\includegraphics[width=.24\linewidth]{qualitative/10_im} &
\includegraphics[width=.24\linewidth]{qualitative/10_label} &
\includegraphics[width=.24\linewidth]{qualitative/10_pred} &
\includegraphics[width=.24\linewidth]{qualitative/10_error} \\
\includegraphics[width=.24\linewidth]{qualitative/18_im} &
\includegraphics[width=.24\linewidth]{qualitative/18_label} &
\includegraphics[width=.24\linewidth]{qualitative/18_pred} &
\includegraphics[width=.24\linewidth]{qualitative/18_error} \\
\end{tabular}
\caption{Example cloud detection results using \emph{Ours
(fine-tune)}. The error image (right) shows false positives
(negatives) color coded as purple (yellow).}
\label{fig:clouds}
\end{figure}
\section{Conclusion}
We presented methods for detecting clouds that require minimal
supervision. Our key insight was to take advantage of multi-image
fusion, which learns to capture the notion of image quality, as a form
of pretraining. To support our methods, we introduced a large dataset
of images with varying levels of cloud cover and a corresponding
per-pixel land cover labeling. Using this dataset, we showed results
for multi-image fusion on real-world imagery. Finally, we presented a
quantitative evaluation of cloud detection, ultimately achieving
state-of-the-art results on an existing cloud detection benchmark
dataset.
\section{Introduction}
Recent advances in the field of machine learning have provided many opportunities for researching applications in the sciences. Fundamentally, the machine learning approach is motivated by the use of pattern recognition and statistical inference in a manner capable of isolating statistically significant features to produce meaningful predictions. This is in contrast to more traditional methods that require explicit instruction sets to elicit meaningful predictions from input data. In many cases, the appropriate instructions are not necessarily known, which renders a machine learning approach attractive for addressing such systems. Generally speaking, obtaining a prediction using a feed-forward artificial neural network involves tuning the weights and edges in a computational graph to minimize error in the network given an input and an output. Interestingly, it has been shown that there is an exact mapping between the variational renormalization group (RG) and the RBM, perhaps implying that machine learning approaches may employ a generalized RG-like scheme to extract relevant features from input data \cite{ml_rg}.
Identifying statistically significant features in physical systems to determine criteria for phase transitions is a well-known approach. One example is the widely-adopted Lindemann parameter, which is often used to characterize the melting of crystal structures by measuring the deviations of atoms from equilibrium positions \cite{lindemann}. For sufficiently complex systems, the identification of such a criterion can be inaccessible through traditional means, which opens an opportunity for machine learning approaches to accomplish such a task. This concept of hidden orderings has been proposed for some systems \cite{hel_ord,cup_hid_ord,hf_hid_ord}. Although inference methods, such as the maximum likelihood method and the maximum entropy method \cite{max_like,max_ent}, have been routinely applied to certain physical problems, applications which utilize other machine learning methods have not attracted much attention until recently. Considerable work has been done in recent years on detecting phase transitions and performing structure classification in various physical systems using machine learning \cite{ml_pom,ml_pt,struc_clss,ml_pt_lat,frust_spin_1,frust_spin_2,melting}. Additionally, some exciting work has been done to investigate the capacity for machine learning methods to accelerate numerical simulations of physical systems \cite{acc_mc}. For the Ising system specifically, there have been many such efforts. Boltzmann machines (BMs), deep belief networks (DBNs), and deep restricted Boltzmann machines (RBMs) have proven to be effective at generating Ising configurations with accurate thermodynamic properties \cite{ising_boltzmann,ising_boltzmann_2}. Variational autoencoders (VAEs) have also been used to successfully extract the Ising order parameter from the predicted latent representations in the vanishing field case \cite{ising_vae,ising_vae_2,ising_vae_3}.
Additionally, it has been shown that supervised classifier neural networks are indeed capable of providing an accurate estimation of the Ising critical point in the vanishing field case \cite{ising_class_small,ising_class_order}. Accurate classification can even be obtained with a rather small network, and the order parameter can even be extracted from the decision function of the classification network, albeit through the use of supervised learning with \textit{a priori} categorization labels.
Focus in this work will be placed on the 2-dimensional square Ising model, an oft-employed model for describing magnetic phenomena in the field of statistical mechanics. The model is notable for describing one of the simplest systems in statistical physics that demonstrates a phase transition. In this case, a second-order magnetic transition between the ferromagnetic and paramagnetic phases at the Curie temperature (also referred to as the critical point in the context of a second-order phase transition). The model has seen extensive use in investigating magnetic phenomena in condensed matter physics \cite{spin_glass_1,spin_glass_2,ising_app_1,ising_app_2,ising_app_3,ising_app_4,ising_app_5}. Additionally, the model can be equivalently expressed in the form of the lattice gas model. This is a simple model of density fluctuation and liquid-gas transformations used primarily in chemistry, albeit often with modifications \cite{ising_chem_1,ising_chem_2}. Furthermore, modified versions of lattice gas models have been applied to studying binding behavior in biology \cite{ising_bio_1,ising_bio_2,ising_bio_3}.
The purpose of this work is to explore the capability of machine learning to discriminate between different classes of microstates (the possible microscopic configurations of the thermodynamic system) exhibited by the 2-dimensional square Ising model in an unsupervised manner. The properties of these classes will then be compared to the expected properties of the ferromagnetic and paramagnetic phases, as well as the crossover regions. The ability to use unsupervised machine learning to detect phases of matter carries broad implications for possible applications in the study of physical systems with unknown phase diagrams. Furthermore, the detection of crossover regions opens a new avenue for the study of quantum critical points, where numerical data must be obtained for low, but finite temperatures that exhibit crossover instead of criticality. Examples include data from large scale numerical quantum Monte Carlo simulations for heavy fermion materials and high temperature superconducting cuprates for which quantum critical points are believed to play crucial roles for their interesting properties \cite{qcp,singular_fl,2D_DCA_QCP}. While prior research has utilized either principal component analysis (PCA) or VAEs to perform unsupervised representation learning of Ising configurations \cite{ising_vae,ising_vae_2,ising_vae_3}, this work is taking an alternative approach through the use of an information maximizing conditional generative adversarial network (InfoCGAN) \cite{infocgan}. The prior research has been focused on the extraction of particular physical properties of the Ising configurations such as the magnetization. The magnetization quantifies the order exhibited by the system across the phase transition and serves as a discrimination criterion between physical phases. Thus, such an approach is consistent with traditional approaches in statistical mechanics. 
By contrast, the InfoCGAN approach pursued in this research uses a direct classification scheme aided by conditioning the network on external physical conditions to predict a categorical latent space. This is similar to other research that has been done to classify Ising configurations in order to predict the critical point \cite{ising_class_small,ising_class_order}, but those approaches utilize supervised learning that requires \textit{a priori} knowledge of the physical phases the configurations belong to. In contrast, the research in this work utilizes unsupervised learning that does not require any prior knowledge of physical phases. In effect, the InfoCGAN approach provides a direct unsupervised phase classification scheme that may be useful for analyzing systems in which an order parameter that cleanly translates to a classification criterion is not obvious. Furthermore, this approach grants additional capability for conditioning the neural network on other known parameters that describe not just the Ising configurations, but the physical conditions the configurations are subject to, which should improve modeling of the distribution of the training data \cite{cgan}. Additionally, the adversarial loss function utilized by the InfoCGAN model as an optimization objective serves as a feature-wise loss function that has been shown to provide greater visual fidelity in outputs for computer vision applications when compared to those obtained with the element-wise reconstruction loss functions used in VAEs \cite{elem_feat}. Given that 2-dimensional square Ising configurations can be readily interpreted as images, it is reasonable to expect that an adversarial approach will perform well in the context of unsupervised learning of the Ising model by comparison to prior research that has relied on VAEs.
\section{The Ising Model}
The Ising model is mathematically expressed in the form of a 2-dimensional array of spins $\sigma_i$ indexed by $i$ that represent a discrete arrangement (lattice) of dipole moments of atomic spins (the magnetic orientation of the lattice sites) \cite{p_cond_mat}. The spins are restricted to spin-up or spin-down alignments represented as $\pm 1$ such that $\sigma_i \in \Bqty{-1,+1}$. The spins are allowed to interact with their nearest neighbors according to the pair-wise interaction strength $J_{ij}$ for neighbors $\sigma_i$ and $\sigma_j$. Additionally, each spin $\sigma_i$ will interact with an external magnetic field $H_i$ where the magnetic moment $\mu$ has been absorbed into the magnetic field $H_i$ at that lattice site. The magnetic moment is an expression of the magnetic strength and orientation of a material and serves to control the interaction strength of the lattice sites with the external field. The full Hamiltonian (an expression of the total energy) describing this system is thus expressed as
\begin{equation}
\mathcal{H} = -\sum_{\expval{i,j}}J_{ij}\sigma_i\sigma_j-\sum_i H_i\sigma_i.
\end{equation}
Where $\expval{i,j}$ indicates adjacent index pairs $i$ and $j$. For $J_{ij} > 0$, the interaction between spins $i$ and $j$ is ferromagnetic, for $J_{ij} < 0$, the interaction between spins $i$ and $j$ is antiferromagnetic, and for $J_{ij} = 0$, the spins $i$ and $j$ are necessarily non-interacting. Furthermore, for $H_i > 0$, the spin at lattice site $i$ tends to prefer a spin-up alignment, with the strength of the preference determined by the strength of the field. By contrast, for $H_i < 0$, the spin at lattice site $i$ tends to prefer a spin-down alignment. For $H_i = 0$, there is no external magnetic field influence on lattice site $i$. Typically, the Ising model is solved for the case $J_{ij} = J$ and $H_i = H$. This is the expression used in this work as follows
\begin{equation}
\mathcal{H} = -J\sum_{\expval{i,j}}\sigma_i\sigma_j-H\sum_i\sigma_i.
\end{equation}
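For concreteness, this Hamiltonian with uniform $J$ and $H$ can be evaluated directly on a spin configuration; a minimal numpy sketch, assuming periodic boundary conditions (an assumption made here for simplicity):

```python
import numpy as np

def ising_energy(spins, J=1.0, H=0.0):
    """H = -J * sum_<ij> s_i s_j - H * sum_i s_i on a periodic square lattice."""
    # count each nearest-neighbor bond once via the "up" and "left" shifts
    bonds = np.sum(spins * np.roll(spins, 1, axis=0)) \
          + np.sum(spins * np.roll(spins, 1, axis=1))
    return -J * bonds - H * np.sum(spins)

up = np.ones((4, 4))
assert ising_energy(up) == -32.0           # 2N = 32 aligned bonds
assert ising_energy(up, H=0.5) == -40.0    # field term adds -0.5 * 16
checker = (np.indices((4, 4)).sum(axis=0) % 2) * 2 - 1
assert ising_energy(checker) == 32.0       # every bond anti-aligned
```

The `np.roll` trick enumerates each of the $2N$ bonds of the periodic square lattice exactly once.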
The Ising model can also be re-expressed in the form of a lattice gas. The resulting modified Hamiltonian is represented in the following manner
\begin{equation}
\mathcal{H} = -4J\sum_{\expval{i,j}}n_i n_j - \mu\sum_i n_i.
\end{equation}
Where the external field strength $H$ is reinterpreted as the chemical potential $\mu$, $J$ retains its role as the interaction strength, and $n_i \in \Bqty{0, 1}$ represents the lattice site occupancy in which a lattice site may contain an atom (1) or not (0). The original Ising Hamiltonian can be recovered using the relation $\sigma_i = 2n_i-1$ up to a constant. In this model, the tendency for adjacent spins to align with one another can now be re-interpreted as the tendency for atoms on a lattice to be attracted to one another. Additionally, the chemical potential provides the energy cost associated with adding an atom to the system from a reservoir. This form of the model has many applications in chemistry and biology, as mentioned in the introduction.
The Ising spin configurations are Boltzmann distributed according to the energy and temperature of the system, which can be expressed as
\begin{equation}
p_n = \frac{e^{-\beta E_n}}{Z};\quad Z = \sum_{n} e^{-\beta E_n};\quad \beta = \frac{1}{T}.
\end{equation}
Where $p_n$ is the probability of a state existing in state $n$ at temperature $T$, $E_n$ is the energy of state $n$, $Z$ is the normalization constant referred to as the partition function, and $\beta$ is the inverse temperature. The Boltzmann constant $k_B$ was absorbed into the temperature $T$.
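Configurations distributed according to this Boltzmann weight are typically generated by Markov-chain Monte Carlo; a minimal single-spin-flip Metropolis sketch (a standard algorithm, assumed here for illustration rather than taken from the text):

```python
import numpy as np

def metropolis_sweep(spins, T, J=1.0, H=0.0, rng=None):
    """One sweep of single-spin-flip Metropolis updates at temperature T."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * spins[i, j] * (J * nn + H)   # energy cost of flipping s_ij
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
s = np.ones((16, 16))
for _ in range(50):
    metropolis_sweep(s, T=1.0, rng=rng)
assert abs(s.mean()) > 0.9   # T well below T_c ~ 2.27: the order survives
```

Accepting flips with probability $\min(1, e^{-\Delta E/T})$ is precisely what makes the chain sample the Boltzmann distribution above.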
For the Ising model, the vanishing field case $H=0$ is of particular interest. Under these conditions, the model exhibits a second-order phase transition characterized by a critical temperature $T_{C}$ for dimension $d\ge 2$. The 2-dimensional case was originally solved analytically by Onsager in 1944 \cite{onsager}. At low temperatures below the critical point, the system is dominated by nearest-neighbor interactions, which for a ferromagnetic system means that adjacent spins tend to align with one another. As the temperature is increased, thermal fluctuations will eventually overpower the interactions such that the magnetic ordering is destroyed and the orientations of adjacent spins can be taken to be uncorrelated. This is referred to as a paramagnet which will still tend to align with an external field. However, for sufficiently high temperatures, a sufficiently strong external field will be required to overcome the effects of thermal fluctuations. The ferromagnetic and paramagnetic phases are discriminated by an order parameter, which takes the form of the magnetization for the ferromagnetic Ising model. Oftentimes, the average magnetization is used to fulfill this role and for a system composed of $N$ lattice sites, it is expressed as
\begin{equation}
m = \frac{1}{N}\sum_{i=1}^N \sigma_i.
\end{equation}
In the vanishing field case for a system in the thermodynamic limit ($N \rightarrow \infty$, $V \rightarrow \infty$), the average magnetization vanishes at the critical temperature with power law decay behavior $m(T) \sim (T_C-T)^\beta$ with critical exponent $\beta = \frac{1}{8}$ for two dimensions. The magnetic susceptibility $\chi$ and the specific heat capacity $C$ additionally diverge at the critical temperature. They are expressed as follows
\begin{equation}
\chi = \frac{\expval{m^2}-\expval{m}^2}{T};\quad C = \frac{\expval{E^2}-\expval{E}^2}{T^2}.
\end{equation}
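Given sampled magnetizations and energies, these fluctuation formulas translate directly into code; a short sketch (normalization conventions for $\chi$ and $C$ vary in the literature; here the formulas are taken verbatim from the text, and the sample values are illustrative):

```python
import numpy as np

def observables(mags, energies, T):
    """chi = (<m^2> - <m>^2)/T and C = (<E^2> - <E>^2)/T^2 from samples."""
    m, E = np.asarray(mags), np.asarray(energies)
    chi = (np.mean(m ** 2) - np.mean(m) ** 2) / T
    C = (np.mean(E ** 2) - np.mean(E) ** 2) / T ** 2
    return chi, C

chi, C = observables([0.2, 0.8], [-12.0, -8.0], T=2.0)
assert chi > 0 and C > 0
# zero fluctuations give vanishing response functions
assert observables([0.5, 0.5], [-10.0, -10.0], T=2.0) == (0.0, 0.0)
```

Peaks in $\chi$ and $C$ computed this way over a temperature sweep are a standard finite-size estimate of the transition region.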
The system can be said to be paramagnetic for a vanishing average magnetization and ferromagnetic for a finite average magnetization (with the system categorized as ferromagnetic spin-up or spin-down according to the sign of the magnetization). However, in the presence of a non-vanishing external magnetic field, an alignment preference is introduced according to the sign of the field, which destroys the $Z_2$ symmetry. As a result, a first-order phase transition is observed at low temperatures ($T < T_C$) by varying the strength of the magnetic field from negative to positive values (or vice versa), which produces a discontinuous change in the average magnetization across the vanishing field line. Furthermore, in the presence of an external field, there is no longer a second-order phase transition, as the average magnetization no longer vanishes at a critical temperature. Rather, there is a region in which the system tends towards a disordered state from an ordered state. This region is referred to as the crossover region, which is not a thermodynamic phase. Generically, a crossover refers to a situation in which a system undergoes a change in phase without encountering a canonical phase transition characterized by a critical point, as there are no discontinuities in derivatives of the free energy (as determined by Ehrenfest classification) or symmetry-breaking mechanisms (as determined by Landau classification). A well-known example is the BEC-BCS crossover in an ultracold Fermi gas in which tuning the interaction strength (the s-wave scattering length) causes the system to cross over from a Bose-Einstein-condensate state to a Bardeen-Cooper-Schrieffer state \cite{bec_bcs}. Additionally, the Kondo Effect is important in certain metallic compounds with dilute concentrations of magnetic impurities that cross over from a weakly-coupled Fermi liquid phase to a local Fermi liquid phase upon reducing the temperature below some threshold \cite{kondo}.
Furthermore, examples of strong crossover phenomena have also been recently discovered in classical models of statistical mechanics such as the Blume-Capel model and the random-field Ising model \cite{crossover_bcm,crossover_rfim}. Additionally, for a finite-sized system, there is no true critical point since the system is not evaluated in the thermodynamic limit.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{spins}
\caption{\footnotesize This diagram depicts the spin configurations in $2$ dimensions.}
\label{fig:spins}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{mag_v}
\caption{\footnotesize This diagram depicts a sketch of the temperature dependence of the magnetization in the vanishing field case.}
\label{fig:mag_v}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{mag_nv}
\caption{\footnotesize This diagram depicts a sketch of the temperature dependence of the magnetization in the non-vanishing field case.}
\label{fig:mag_nv}
\end{figure}
Examples of the types of spin configurations that can be found in the vanishing field case are depicted in Fig.\ref{fig:spins}. Below the critical temperature, nearest neighbor interactions dominate the system, causing the spins to align with one another in either the spin-up or spin-down configuration. The behavior of the average magnetization as a function of temperature is additionally depicted for the vanishing field case and the non-vanishing field case in Fig.\ref{fig:mag_v} and Fig.\ref{fig:mag_nv} respectively. For the vanishing field case, the average magnetization can be seen to vanish in the vicinity of the critical temperature. By contrast, the average magnetization tends to smoothly decay to a small value in the non-vanishing field case, with sign determined by the external magnetic field.
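The fluctuation relations for $\chi$ and $C$ quoted above can be evaluated directly from sampled magnetizations and energies. The following is a minimal sketch using synthetic stand-in samples (the data and variable names are illustrative, not drawn from the simulations in this work):

```python
import numpy as np

def susceptibility(m_samples, T):
    # chi = (<m^2> - <m>^2) / T, estimated from magnetization samples
    return (np.mean(m_samples**2) - np.mean(m_samples)**2) / T

def specific_heat(e_samples, T):
    # C = (<E^2> - <E>^2) / T^2, estimated from energy samples
    return (np.mean(e_samples**2) - np.mean(e_samples)**2) / T**2

rng = np.random.default_rng(0)
m = rng.normal(0.0, 0.1, size=10000)    # stand-in magnetization samples
e = rng.normal(-1.8, 0.05, size=10000)  # stand-in energy samples
chi = susceptibility(m, T=2.0)
C = specific_heat(e, T=2.0)
```

In practice these estimators are evaluated separately at each sampled $(H, T)$ point, and their peaks along $H = 0$ locate the finite-size pseudo-critical temperature.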
\section{Generative Adversarial Networks and the InfoCGAN Model}
The InfoGAN class of artificial neural networks belongs to the larger group of generative adversarial networks (GAN) and is thus guided by the same principles \cite{infogan,gan}. A standard GAN is composed of two primary components, a generator network and a discriminator network, implemented as computational graphs with many adjustable parameters \cite{gan}. In this framework, the networks are interpreted to be contesting with one another in an adversarial game (albeit not always a zero-sum game) to solve a non-convex optimization of loss in the network outputs. The non-convex optimization problem arises in the high-dimensional space created by the large number of parameters within the neural networks, which tends to produce potentially many local minima, saddle points, flat regions, and widely varying curvature on the loss surface. As such, non-convex optimization can be rather difficult. From the perspective of an adversarial game, the discriminator learns to distinguish between ``real'' samples drawn from the distribution of the training data and ``fake'' samples provided by the generator. At the same time, the generator learns to ``fool'' the discriminator with ``fake'' samples. During training, the generator will learn to produce ``fake'' samples that increasingly resemble the ``real'' samples by using information provided by the training of the discriminator. More specifically, the discriminator network transforms an input sample into a measure of the probability that it belongs to the same statistical distribution as the training samples. The output of the discriminator network is a single value belonging to the interval $\hat{v} \in \qty[0,1]$, which is referred to as the validity of the sample. The magnitude of the validity score represents the probability that the input sample belongs to the same distribution as the training samples.
The generator network transforms an input Gaussian noise vector into a sample that ideally belongs to the same statistical distribution as the training samples. The output of the generator network is thus of the same shape as the training samples.
The procedure for training a GAN is done by first optimizing the weights of the discriminator network to predict a batch (subset) of the training samples as ``real'' (an output validity score of one) and a batch of samples obtained from the generator network as ``fake'' (an output validity score of zero). Then, the generator network weights are optimized such that given a batch of random Gaussian latent variables, the discriminator outputs a validity score corresponding to a ``real'' sample (with the discriminator weights held constant during this step). This procedure is repeated for each batch in the training set to constitute a training epoch. In this manner, the generator and the discriminator networks ``learn'' from each other, as the discriminator network is optimized to detect differences between the ``real'' training samples and the ``fake'' generated samples while the generator network is optimized to respond to information provided by updates to the discriminator network to generate more convincing ``fake'' samples that the discriminator network identifies as ``real.'' For the implementation used in this work, the minimax GAN (or MM GAN), the adversarial contest between the discriminator and generator networks is indeed a zero-sum game in which gains and losses are balanced between the networks. As with most feedforward artificial neural networks, the tuning of the weights in the networks is done by performing backpropagation to minimize an objective loss function. This is done by calculating the gradients of the loss functions with respect to the network parameters via the chain rule and updating the parameters in the direction of greatest descent in model loss. This is implemented with a chosen optimizer.
The ideal outcome of the training process for an MM GAN is then to produce a stable Nash equilibrium between the discriminator and generator networks in which the state of the objective loss function ensures that neither network will change its behavior regardless of what its opponent may do. For the specific case of the MM GAN, the cost function (sum of the model losses) takes the following form
\begin{align}
\min_{G}\max_{D} \mathcal{V}_{\mathrm{GAN}}(D, G) = &\mathbb{E}_{x\sim P(x)}\qty[\log D(x)]+\\\nonumber
&\mathbb{E}_{z\sim P(z)}\qty[\log\qty(1-D(G(z)))].
\end{align}
Here $x$ represents a training configuration, $z$ represents latent input to the generator network, $P(x)$ represents the distribution of the training samples, $P(z)$ represents the distribution of the latent input to the generator network, $D$ represents the discriminator network, and $G$ represents the generator network. This can be reinterpreted as objective loss functions for the respective networks in the following manner
\begin{align}
\mathcal{L}_D &= -\mathbb{E}_{x\sim P(x)}\qty[\log D(x)]-\mathbb{E}_{\hat{x}\sim P(\hat{x})}\qty[\log\qty(1-D(\hat{x}))] \\
\mathcal{L}_G &= \mathbb{E}_{\hat{x}\sim P(\hat{x})}\qty[\log\qty(1-D(\hat{x}))].
\end{align}
Here $\hat{x}$ has been adopted to represent a generated sample drawn from the distribution $P(\hat{x})$ provided by the generator network. Training a GAN is notoriously difficult, as the model is sensitive to many major problems. These include simple non-convergence to a Nash equilibrium, mode collapse in which the generator fails to produce a variety of samples, diminished gradients in which vanishing gradients from an effective discriminator fail to provide enough information to the generator, imbalance between the generator and discriminator resulting in overfitting, and sensitivity to training hyperparameters, to name a few.
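The objective losses $\mathcal{L}_D$ and $\mathcal{L}_G$ above can be sketched numerically as binary crossentropies over batches of validity scores; the scores used here are hypothetical placeholders for illustration only:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # L_D = -E[log D(x)] - E[log(1 - D(x_hat))]
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # L_G = E[log(1 - D(x_hat))]  (minimax form; minimized by the generator)
    return np.mean(np.log(1.0 - d_fake))

# hypothetical validity scores assigned by the discriminator to a batch
d_real = np.array([0.9, 0.8, 0.95])   # scores on training ("real") samples
d_fake = np.array([0.1, 0.2, 0.05])   # scores on generated ("fake") samples
L_D = discriminator_loss(d_real, d_fake)
L_G = generator_loss(d_fake)
```

A confident, accurate discriminator drives $\mathcal{L}_D$ towards zero while making $\mathcal{L}_G$ strongly negative, which is precisely the diminished-gradient regime mentioned above.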
With the motivations behind GANs in mind, it is easy to see the value that such a framework can provide for research on physical systems. Assuming a GAN is sufficiently trained on configurations drawn from a well-simulated physical system, the distribution of the microstates composing the simulation data can be ``learned,'' which can then be used to predict physical properties of interest. This is not unlike calculating physical properties from the partition function describing a system in statistical physics. However, while the standard GAN framework provides the necessary tools for generating convincing configurations as well as discriminating between ``real'' and ``fake'' ones, this is not necessarily of immediate interest to physical research. Indeed, there is an interpretability problem with GANs. While the important information required to draw a sample from the distribution of the training configurations is ostensibly contained in the latent input to the generator network, there is not always an obvious way to extract desired properties from the description provided by the latent input. That is to say that a standard GAN does not provide disentangled representations in the latent space from which clear features of the data can be interpreted, such as facial expression in computer vision applications. The InfoGAN model provides an encouraging attempt to address this interpretability problem by learning disentangled representations of configurations by maximizing mutual information between small subsets of the latent inputs to the generator network and the output of an auxiliary network to the discriminator network \cite{infogan}.
This is done by including an additional interpretable input $c$ (often referred to as the control codes) alongside the random latent input $z$ to the generator network in order to produce a configuration $\hat{x}$. Through the auxiliary network, inputs to the discriminator network provided by the generator network $\hat{x}$ are mapped to a prediction of the control codes $\hat{c}$ using the same extracted features that the discriminator uses to determine the validity $\hat{v}$ of the input. The auxiliary network is then optimized to minimize the loss between $c$ and $\hat{c}$. Thus, interpretable control codes can be predicted for configurations from the training set through the auxiliary network. This can be incorporated into the cost function when training the discriminator network through an additional mutual information term $I(c; G(c, z))$, where $c$ is distributed as $P(c)$ with entropy $H(c)$, $G(c, z)$ is the generator network, and $Q(c|x)$ is the auxiliary network. The mutual information term is estimated using an entropic approach. The InfoGAN cost function is expressed as
\begin{align}
\min_{G}\max_{D} \mathcal{V}_{\mathrm{InfoGAN}}(D, G) &= \mathcal{V}_{\mathrm{GAN}}(D, G) - \lambda I(c; G(c, z));\\
I(c; G(c, z)) &= H(c)-H(c|G(c, z)) \\\nonumber
&\ge \mathbb{E}_{c\sim P(c), x\sim G(c, z)}\qty[\log Q(c|x)]+\\
\nonumber
&\quad H(c).
\end{align}
Here $\lambda$ is a hyperparameter used to weigh the importance of optimizing the mutual information against the generative ability of the model to produce ``real'' samples and the discriminative ability of the model to distinguish ``real'' and ``fake'' samples. The control codes $c$ are free to take on many forms, but are most frequently taken to be uniform, Gaussian, or categorical variables. In this work, categorical variables are employed to allow the network to directly classify different kinds of Ising configurations in an unsupervised manner. The inputs used to condition the generator are not restricted to these control variables, however.
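The variational lower bound on the mutual information, $\mathbb{E}\qty[\log Q(c|x)] + H(c)$, can be sketched for categorical control codes as follows. The uniform five-category prior mirrors the setup in this work, while the probabilities assigned by $Q$ are hypothetical placeholders:

```python
import numpy as np

def categorical_entropy(p):
    # H(c) = -sum_i p_i log p_i for the categorical prior over control codes
    return -np.sum(p * np.log(p))

def mi_lower_bound(q_probs, true_codes, prior):
    # E_{c,x}[log Q(c|x)] + H(c): variational lower bound on I(c; G(c, z))
    log_q = np.log(q_probs[np.arange(len(true_codes)), true_codes])
    return np.mean(log_q) + categorical_entropy(prior)

prior = np.full(5, 0.2)                              # uniform prior, 5 categories
true_codes = np.array([0, 1, 2, 3, 4])               # codes fed to the generator
q_probs = np.full((5, 5), 0.05) + np.eye(5) * 0.75   # Q assigns 0.8 to each true code
bound = mi_lower_bound(q_probs, true_codes, prior)
```

Since $I(c; G(c, z)) \le H(c)$, the bound approaches $H(c) = \log 5$ only when $Q$ recovers the codes with near certainty, which is exactly what the $-\lambda I$ term encourages during training.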
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{gan}
\caption{\footnotesize This diagram depicts the structure of a GAN model.}
\label{fig:gan}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infogan}
\caption{\footnotesize This diagram depicts the structure of an InfoGAN model.}
\label{fig:infogan}
\end{figure}
The structures described for the GAN and InfoGAN models are respectively depicted in Fig.\ref{fig:gan} and Fig.\ref{fig:infogan}. For the GAN, the generator model $G(z)$ takes a noise vector $z$, which is usually normally distributed, as an input to generate a ``fake'' sample $\hat{x}$. The discriminator model $D(x)$ then takes either a ``real'' sample $x$ or a ``fake'' sample $\hat{x}$ as an input to output a validity score $\hat{v}$, which indicates the likelihood of a sample belonging to the same distribution as the ``real'' samples. The InfoGAN model is structured in a similar manner; however, the generator is now a function of both the noise vector $z$ as well as the control variables $c$. The control variables may belong to a distribution of choice, but are usually uniformly or categorically distributed. The discriminator model is identical to its GAN counterpart; however, it shares some of its layers with an additional auxiliary model $Q(c|x)$. The final layer that is shared between the two models is referred to as a hidden feature layer, which is not directly observed, but serves to contain information about both the validity of the sample as well as its control variable assignments. As such, the auxiliary model outputs a prediction of the control variables $\hat{c}$ just as the discriminator model outputs a prediction of the validity $\hat{v}$. The role of the hidden feature layer is realized through the training process that requires both the validity and control variable predictions to be derived from information contained within the layer.
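A minimal forward-pass sketch of the shared hidden feature layer feeding both the validity head and the auxiliary control-code head may clarify this structure; the layer sizes, weights, and activations here are arbitrary placeholders, not the architecture used in this work:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# a flattened {0,1} configuration of 27*27 spins and placeholder weights
x = rng.integers(0, 2, size=27 * 27).astype(float)
W_feat = rng.normal(0, 0.01, size=(64, 27 * 27))  # shared feature layer
w_v = rng.normal(0, 0.1, size=64)                 # discriminator (validity) head
W_c = rng.normal(0, 0.1, size=(5, 64))            # auxiliary (control-code) head

features = relu(W_feat @ x)      # hidden feature layer shared by D and Q
v_hat = sigmoid(w_v @ features)  # validity prediction in [0, 1]
c_hat = softmax(W_c @ features)  # predicted categorical control code
```

Both heads read from the same `features` vector, so optimizing the validity and control-code losses jointly forces that layer to encode both kinds of information, as described above.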
Additionally, the generator and the discriminator can be conditioned on additional known features. For the Ising configurations, this would be the known physical conditions that the configurations are subject to, namely the temperature and the external magnetic field strength. The inclusion of this information involves conditioning both the generator and discriminator networks on these inputs, denoted with $t$. This type of GAN is referred to as a conditional GAN (CGAN) \cite{cgan}. Through the inclusion of these conditioning variables, training can be stabilized, accelerated, or simply produce better results since the model has access to additional important information regarding the distribution of the training data \cite{cgan}. Furthermore, the conditional distributions are multi-modal, which can mitigate mode collapse in the generator output, though care must be taken to ensure that the generator does not collapse to a single output within the modes provided by the conditional information \cite{cgan_mode}. The cost function reads as
\begin{align}
\min_{G}\max_{D} \mathcal{V}_{\mathrm{CGAN}}(D, G) = &\mathbb{E}_{x\sim P(x)}\qty[\log D(x|t)]+\\\nonumber
&\mathbb{E}_{z\sim P(z)}\qty[\log\qty(1-D(G(z|t)|t))].
\end{align}
This can then be freely incorporated into the InfoGAN model to produce a so-called InfoCGAN model \cite{infocgan}. The resulting cost is expressed as
\begin{align}
\min_{G}\max_{D} \mathcal{V}_{\mathrm{InfoCGAN}}(D, G) &= \mathcal{V}_{\mathrm{CGAN}}(D, G) - \lambda I(c; G(c, z|t)) \\
I(c; G(c, z|t)) &= H(c)-H(c|G(c, z|t)) \\\nonumber
&\ge \mathbb{E}_{c\sim P(c), x\sim G(c, z|t)}\qty[\log Q(c|x,t)]+\\\nonumber
&\quad H(c).
\end{align}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{cgan}
\caption{\footnotesize This diagram depicts the structure of a CGAN model.}
\label{fig:cgan}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan}
\caption{\footnotesize This diagram depicts the structure of an InfoCGAN model.}
\label{fig:infocgan}
\end{figure}
The structures described for the CGAN and InfoCGAN models are respectively depicted in Fig.\ref{fig:cgan} and Fig.\ref{fig:infocgan}. For the CGAN model, the noise vector $z$ as well as the conditional variables $t$ are provided as input to the generator model $G(z|t)$. In addition to outputting a predicted sample $\hat{x}$, the conditional variables $t$ are also fed through as output. The discriminator model $D(x|t)$ additionally takes not just a ``real'' sample $x$ or a ``fake'' sample $\hat{x}$ as input, but also the conditional variables $t$ while outputting the validity prediction $\hat{v}$ as with the normal GAN model. The InfoCGAN model can then be constructed from the CGAN model in a similar way as the InfoGAN model was constructed from the GAN model through the inclusion of the control code $c$. The generator and auxiliary networks respectively become $G(c,z|t)$ and $Q(c|x,t)$.
\section{Methods}
Monte Carlo simulations of the 2-dimensional square Ising model with linear size $l=27$ were performed over a range of both temperatures and external fields through spin-flip attempts subject to a Metropolis criterion. A large number of configurations were collected for each combination of temperature and external field following thermal equilibration. After the configurations are collected, the Ising spins are rescaled such that $\Bqty{-1, +1} \rightarrow \Bqty{0, 1}$, which is a common approach in data science when considering binary-valued features. Physically, this is consistent with a lattice gas model, which is equivalent to the Ising model.
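A minimal sketch of a single-spin-flip Metropolis sweep with an external field, followed by the $\Bqty{-1, +1} \rightarrow \Bqty{0, 1}$ rescaling, is given below; the temperature and sweep count are placeholder values rather than the actual simulation settings used in this work:

```python
import numpy as np

def metropolis_sweep(spins, T, H, rng):
    """One sweep of single spin-flip attempts on a periodic square lattice (J = 1)."""
    l = spins.shape[0]
    for _ in range(l * l):
        i, j = rng.integers(0, l, size=2)
        # sum of the four nearest neighbors with periodic boundaries
        nn = (spins[(i + 1) % l, j] + spins[(i - 1) % l, j]
              + spins[i, (j + 1) % l] + spins[i, (j - 1) % l])
        dE = 2.0 * spins[i, j] * (nn + H)  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
l = 27
spins = np.ones((l, l))        # ordered (all spin-up) start
for _ in range(100):           # equilibration sweeps, well below T_C
    metropolis_sweep(spins, T=1.0, H=0.0, rng=rng)
m = spins.mean()               # magnetization stays near +1 at low temperature
config = (spins + 1) // 2      # rescale {-1, +1} -> {0, 1} as in the text
```

Repeating this over a grid of $(H, T)$ values and storing the rescaled configurations after equilibration yields a training set of the kind described above.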
These configurations obtained through Monte Carlo sampling compose the training data for an InfoCGAN model that learns to both classify these Ising configurations and generate convincing Ising configurations given a random vector input, a temperature and external field, and a classification. The trained auxiliary model is then used to classify the Ising configurations from the training set and the classifications are analyzed as functions of the temperature and external field to demonstrate a correspondence with the expected physical phases of the Ising model across the temperature and external field range. Five categorical control variables were used for the classifications, which are intended to correspond with the expected three distinct physical phases (ferromagnetic spin-up, ferromagnetic spin-down, and paramagnetic) as well as two classifications that consist of configurations that still show ferromagnetic ordering, albeit with some spins flipped. These two classifications are expected to contain ferromagnetic configurations that are not perfectly (or near-perfectly) ordered due to the finite probability of spins flipping in the Monte Carlo simulations in addition to configurations associated with crossover phenomena in the presence of an external field. These additional classifications were found to be necessary for reliably reproducing the categorical assignments, as using only three categorical assignments would result in these configurations either being absorbed into the classifications corresponding to perfectly (or near-perfectly) ordered ferromagnetic configurations or the classification corresponding to the high-temperature weak-field configurations. Sometimes, the training process would even predict these categorical assignments in a non-symmetric manner across the vanishing field line if only three categories were used.
During training, the generator and the discriminator reach a Nash equilibrium in which neither the discriminator nor the generator gains an advantage through continuing their adversarial ``game.'' Once the classifications of the Ising configurations are obtained from the auxiliary network, the average classifications with respect to temperature and the applied external field are compared with the expected behavior of the Ising model with respect to these physical conditions. Along the vanishing field line $(H = 0)$, the average classification of the Ising configurations into the class corresponding to high temperature samples as a function of temperature is interpreted as a probability of the Ising configurations belonging to the paramagnetic phase as a function of temperature. From this probability, an estimation of the critical point is obtained and compared to the behavior of the magnetic susceptibility and the specific heat capacity as functions of temperature. Correspondence of the estimated critical point derived from the auxiliary classifications to the peaks in these functions indicates correspondence between the paramagnetic phase and the class corresponding to the high temperature configurations obtained from the auxiliary network.
\section{Results}
The loss history of the InfoCGAN does indicate the tendency towards a Nash equilibrium, though with occasional classification errors in the later epochs that are nonetheless statistically reasonable. The discrimination loss is essentially equivalent for the ``real'' and ``fake'' samples and is consistently lower than the generation loss, but the results are well within expectations. Training stability was obtained rather quickly. All of the plots in this section are generated with the Matplotlib package using a perceptually uniform colormap \cite{matplotlib}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{loss_batch}
\caption{\footnotesize This diagram depicts the average batch loss history of the InfoCGAN model trained on the Ising configurations. The discriminator and generator losses are the validity predictions made by the discriminator while respectively training the discriminator and the generator. The categorical control loss is the categorical crossentropy loss of the categorical control code predictions made by the auxiliary network while training the generator.}
\label{fig:loss_batch}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{loss_epoch}
\caption{\footnotesize This diagram depicts the epoch loss history of the InfoCGAN model trained on the Ising configurations. The discriminator and generator losses are the validity predictions made by the discriminator while respectively training the discriminator and the generator. The categorical control loss is the categorical crossentropy loss of the categorical control code predictions made by the auxiliary network while training the generator.}
\label{fig:loss_epoch}
\end{figure}
The loss history with respect to the batches is depicted in Fig.\ref{fig:loss_batch} while the loss history with respect to the epochs is depicted in Fig.\ref{fig:loss_epoch}. The batch losses refer to the losses for each individual minibatch used during training, which consist of subsets of the total training set. The epoch losses are then the average losses across all samples in the training set for each epoch, in which every sample in the training set has had an opportunity to update the model parameters. The discriminator losses, both ``real'' and ``fake,'' are binary crossentropies between the validity predictions made by the discriminator model and the provided target validities while it is respectively trained on samples from the training set and samples from the generator model. The loss for the generator is also the binary crossentropy between the validity prediction provided by the static discriminator model and the provided target validities while the generator and auxiliary models are trained. The categorical control loss is then the categorical crossentropy of the predicted categorical controls from the auxiliary model with respect to the categorical codes provided to the generator model during training of the generator and auxiliary models.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan_0}
\caption{\footnotesize This diagram depicts the average classification probability for the first categorical variable with respect to temperature and the external magnetic field.}
\label{fig:infocgan_0}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan_1}
\caption{\footnotesize This diagram depicts the average classification probability for the second categorical variable with respect to temperature and the external magnetic field.}
\label{fig:infocgan_1}
\end{figure}
The predicted classification probabilities for the first two categorical control variables with respect to the temperature and the external magnetic field for the Ising configurations are depicted in Fig.\ref{fig:infocgan_0} and Fig.\ref{fig:infocgan_1}. The diagrams illustrate the probability $P_{S_i}(H, T)$ that samples located at an $(H, T)$ coordinate belong to class $S_i$, where Fig.\ref{fig:infocgan_0} and Fig.\ref{fig:infocgan_1} are for classes $S_0$ and $S_1$ respectively. The color bar defines what is meant by the coloration, where the darkest regions exhibit the smallest probability $P_{S_i}(H, T) = 0$ and the brightest regions exhibit the largest probability $P_{S_i}(H, T) = 1$. The results are exceptional, as the regions of high classification probabilities reflect the expected locations of the low-temperature ordered ferromagnetic phases, complete with symmetry across the vanishing field line. The classification probabilities are very close to 0.5 for each category along the low-temperature regime of the vanishing field, reflecting the fact that in the absence of an external field, there is no imposed preference on the type of magnetic order exhibited by the system. The boundaries of these high-probability regions for these categories are rather sharply defined. This is encouraging, as both the training Ising configurations and the generated configurations for these regions possess consistent thermodynamics, showing an average magnetization extremely close to either -1 or +1 across the regions depending on the external field (excluding the vanishing field case, of course). Outside of these predicted classification regions, there is a consistent departure from the perfectly (or near-perfectly) ordered states caused by thermal fluctuations.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan_2}
\caption{\footnotesize This diagram depicts the average classification probability for the third categorical variable with respect to temperature and the external magnetic field.}
\label{fig:infocgan_2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan_3}
\caption{\footnotesize This diagram depicts the average classification probability for the fourth categorical variable with respect to temperature and the external magnetic field.}
\label{fig:infocgan_3}
\end{figure}
The predicted classification probabilities for the next two categorical control variables with respect to the temperature and the external magnetic field for the Ising configurations are depicted in Fig.\ref{fig:infocgan_2} and Fig.\ref{fig:infocgan_3}. These classification regions capture the aforementioned configurations in which there are significant departures from the completely ordered ferromagnetic states, but the nearest-neighbor interactions and interactions with the external field (where applicable) are still strong enough that ferromagnetic ordering is readily apparent. Investigation into the configurations contained in these regions shows a smooth decay of the average magnetization with respect to increasing temperature in the presence of an external field, consistent with the expected characteristics of crossover phenomena. Furthermore, as the external field strength increases, the region shifts to greater temperatures, consistent with the expectation that higher temperatures are required for thermal fluctuations to overpower the influence imposed by the external field. Once again, there is an observed symmetry of the two regions across the vanishing field line as well as reasonably sharply defined region boundaries, similar to what was observed with the near-perfectly ordered ferromagnetic regions. However, note that there are non-zero probabilities for these classifications along the vanishing field line. In the thermodynamic limit, as the system size tends towards infinity $(N \rightarrow \infty)$, this is not a valid result. This behavior can be described as an artifact of the machine learning approach in which the configurations that are approaching the critical temperature demonstrate diminishing ferromagnetic ordering that is qualitatively similar to the properties of the configurations associated with crossover in the presence of an external field.
It is expected that this behavior would diminish with increasing system size, but that is beyond the scope of this study at the present time.
The crossover line can be roughly defined as the line in parameter space along which the correlation lengths driven by the two relevant parameters, in this case the external field and the temperature, are roughly comparable with one another. One pragmatic definition is the Widom line, which can be defined as the locus for the maximum of the heat capacity.
The obtained finite area, as opposed to a sharp line, is a reflection of the fact that the present machine learning approach does not pinpoint a sharp single line to separate the two regimes. Instead, it characterizes the line through the competition between the two relevant parameters. The growth of the area to encompass a larger temperature range as the external field grows in magnitude is perhaps a reflection of the weaker correlation at high temperature which leads to a larger uncertainty. This is perhaps unsurprising, as drawing the end of a crossover line is difficult and often not well defined.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{infocgan_4}
\caption{\footnotesize This diagram depicts the classification probability for the fifth categorical variable with respect to temperature and the external magnetic field.}
\label{fig:infocgan_4}
\end{figure}
The predicted classification probability for the final categorical control variable with respect to the temperature and the external magnetic field for the Ising configurations is depicted in Fig.\ref{fig:infocgan_4}. This classification region corresponds well to configurations at high temperature under the influence of a weak external field, as the configurations contained by it exhibit nearly zero average magnetization due to the effects of thermal fluctuations. Where the ferromagnetic regions failed to predict the critical temperature due to the artifacts of the machine learning categorization approach, this region compensates, as it begins in the immediate vicinity of the true critical point of $T_C \approx 2.269$ calculated by Kramers-Wannier duality \cite{kwd,kwd_2,kwd_3}. Consistent with the prior two sets of classification regions, there is a symmetry across the vanishing field. By contrast with the others, however, there is no need to partition the classification across the vanishing field, as the thermal fluctuations overpower the influence of the external field and thus the configurations only weakly respond to it.
In order to further investigate how well-approximated the critical point actually is, a logistic curve can be fit using orthogonal distance regression to the average classification probabilities of the category corresponding to the paramagnetic phase with respect to temperature along $H = 0$. This takes the form $P_{S_{\mathrm{paramagnetic}}}(T) = \frac{1}{1+e^{-k(T-\hat{T}_C)}}$ where $k$ is the logistic growth rate of the probability and $\hat{T}_C$ is the estimated critical temperature.
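As an illustrative sketch of such a fit, the following uses ordinary least squares via \texttt{scipy.optimize.curve\_fit} as a simpler stand-in for the orthogonal distance regression described above, applied to synthetic classification probabilities with a known critical temperature:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(T, k, Tc):
    # P(T) = 1 / (1 + exp(-k (T - Tc)))
    return 1.0 / (1.0 + np.exp(-k * (T - Tc)))

# synthetic paramagnetic-class probabilities with known k and Tc, plus noise
rng = np.random.default_rng(0)
T = np.linspace(1.0, 3.5, 40)
p = logistic(T, k=8.0, Tc=2.3) + rng.normal(0, 0.01, size=T.size)

(k_fit, Tc_fit), _ = curve_fit(logistic, T, p, p0=(1.0, 2.0))
```

The fitted $\hat{T}_C$ recovers the known value within the noise level, which is the same procedure (modulo the regression method) used to extract the estimate reported below.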
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{state}
\caption{\footnotesize This diagram depicts the average classification probabilities of the class assignments predicted by the auxiliary network with respect to temperature in the vanishing field case. Deep purple represents the classifications corresponding to configurations with near perfect ferromagnetic ordering, light purple represents the classifications corresponding to configurations with diminished ferromagnetic ordering, and pink represents the classifications corresponding to disordered configurations. The logistic curve is fit to the disordered probabilities and provides a critical temperature estimate of $\hat{T}_C \approx 2.308$.}
\label{fig:state}
\end{figure}
This is shown in Fig.\ref{fig:state}, where the dotted line indicates the critical temperature estimate at $\hat{T}_C \approx 2.308$. This provides an overestimate of $\approx 1.688\%$. Due to finite-size effects caused by the rather small linear size $l = 27$, an overestimate of the critical temperature is expected, so this result is consistent with expectations.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{magnetic_susceptibility}
\caption{\footnotesize This diagram depicts the magnetic susceptibility of the Ising configurations with respect to temperature in the vanishing field case. The dotted line corresponds to the estimated critical temperature of $\hat{T}_C \approx 2.308$. The grey regions correspond to the Monte Carlo error.}
\label{fig:ms}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{specific_heat}
\caption{\footnotesize This diagram depicts the specific heat capacity of the Ising configurations with respect to temperature in the vanishing field case. The dotted line corresponds to the estimated critical temperature of $\hat{T}_C \approx 2.308$. The grey regions correspond to the Monte Carlo error.}
\label{fig:sh}
\end{figure}
This estimation of the critical point is consistent with the peaks observed in the magnetic susceptibility $\chi$ and the specific heat capacity $C$ respectively depicted in Fig.\ref{fig:ms} and Fig.\ref{fig:sh}, which would be divergent in the thermodynamic limit. The Monte Carlo errors are calculated using the jackknife technique and are depicted with the grey regions in the figures. Note that the errors are more pronounced in the vicinity of the critical temperature, which accounts for the inconsistency of the location of the peaks between the two measured quantities. These errors are expected to be much lower with larger system sizes. This provides significant evidence of the consistency of the critical temperature estimation provided through the auxiliary network classification with the known physics.
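The jackknife error bars mentioned above can be sketched as follows; this Python fragment is illustrative only (the susceptibility estimator, system size, temperature, and toy magnetization samples are assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical sketch of the jackknife error bar used for chi (and C). For
# per-configuration magnetizations m at one temperature, the estimator
# chi = N/T * (<m^2> - <|m|>^2) is recomputed on leave-one-out subsamples;
# the spread of those estimates gives the Monte Carlo error.

def susceptibility(m, n_spins, temperature):
    return n_spins / temperature * (np.mean(m**2) - np.mean(np.abs(m))**2)

def jackknife_error(samples, estimator):
    n = len(samples)
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean())**2))

rng = np.random.default_rng(1)
m = rng.normal(0.0, 0.3, size=2000)        # toy magnetization samples
n_spins, T = 27 * 27, 2.3                  # l = 27 lattice, near T_C
chi = susceptibility(m, n_spins, T)
err = jackknife_error(m, lambda s: susceptibility(s, n_spins, T))
print(f"chi = {chi:.2f} +/- {err:.2f}")
```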
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{magnetization}
\caption{\footnotesize This diagram depicts the average magnetizations of the Ising configurations with respect to temperature in the vanishing field case, separated into plurality spin-up and spin-down varieties. The samples are colored according to their classification, with yellow and blue representing the classifications corresponding to configurations with near perfect ferromagnetic ordering, orange and purple representing the classifications corresponding to configurations with diminished ferromagnetic ordering, and pink representing the classifications corresponding to disordered configurations. The dotted line corresponds to the estimated critical temperature of $\hat{T}_C \approx 2.308$.}
\label{fig:magnetization}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{energy}
\caption{\footnotesize This diagram depicts the average energies of the Ising configurations with respect to temperature in the vanishing field case. The spin-up and spin-down varieties of the classifications have been combined. The samples are colored according to their classification, blue representing the classifications corresponding to configurations with near perfect ferromagnetic ordering, purple representing the classifications corresponding to configurations with diminished ferromagnetic ordering, and pink representing the classifications corresponding to disordered configurations. The dotted line corresponds to the estimated critical temperature of $\hat{T}_C \approx 2.308$.}
\label{fig:energy}
\end{figure}
The magnetizations $m$ and energies $E$ of the Ising configurations with respect to temperature are shown in Fig.\ref{fig:magnetization} and Fig.\ref{fig:energy}, respectively. The predicted critical temperature is consistent with expectations, as it is located where the strict separation of the spin-up and spin-down configurations breaks down for the magnetization and where the inflection occurs in the energy.
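For reference, the per-configuration observables plotted here can be computed directly from the spin arrays. The following Python sketch (with $J = 1$, $H = 0$, periodic boundaries, and an illustrative $l = 27$ lattice) is a minimal, hedged version of that bookkeeping:

```python
import numpy as np

# Per-configuration magnetization and energy for a periodic 2D Ising lattice
# (J = 1, H = 0). `spins` is a +/-1 array; the l = 27 lattice size is used
# here only as an illustration.

def magnetization(spins):
    return spins.mean()

def energy(spins):
    # Count each nearest-neighbor bond once via right and down shifts.
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -np.sum(spins * right + spins * down)

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(27, 27))   # disordered (high-T-like) sample
print(magnetization(spins), energy(spins))

ordered = np.ones((27, 27), dtype=int)       # ferromagnetic ground state
assert energy(ordered) == -2 * 27 * 27       # E = -2N for all-up with J = 1
```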
We emphasize that the present analysis is based on one system size. Performing proper finite-size scaling necessarily involves a proper characterization of the error from the machine learning approach, which is not well understood. For the conventional finite-size scaling approach, the error can be argued through various fitting criteria for the finite-size scaling ansatz. There is no simple analogy to this in the machine learning approach.
\section{Conclusions}
The InfoCGAN auxiliary classification network has performed extremely well in predicting phases of the 2-dimensional Ising model. It is important to note how well the approach agrees with known properties of the Ising model despite having no access to any physical \textit{a priori} information beyond the raw configurations, the temperatures, and the external field strengths. From this approach, classifications are obtained that correspond to configurations with near perfect ferromagnetic ordering, diminished ferromagnetic ordering, and disorder. Furthermore, an exceptional estimation of the critical point is provided by this approach given the influence of finite-size effects.
These results are encouraging and motivate further development of approaches to machine learn phases of matter. For many physical systems, order parameters have not been identified through conventional means and the phase diagrams elude discovery. However, simulation data consisting of configurational information with respect to thermal parameters is readily available. It is clear from this work that machine learning approaches are indeed capable of determining a structural criterion to partition physical samples into classes that correspond to different phases of matter and are even flexible enough to allow for conditioning of the model on additional physical parameters that may prove useful to that end. The use of the InfoCGAN model to perform this task provides a particular advantage in providing a direct unsupervised classification scheme of Ising configurations into classes that correspond to physical phases, whereas other research into unsupervised machine learning of the Ising model has instead mapped configurations to representations that can later be used as classification criteria. The direct unsupervised classification approach is applicable to situations in which latent representations may not be easily translated to classification criteria.
Beyond investigating more complex systems, there are many opportunities to improve this method that lie outside the scope of this work. For instance, finite-size scaling is an important approach towards addressing the limitations presented by finite-sized systems when investigating critical phenomena \cite{fsc}. Establishing correspondence between the categorical codes of different system sizes is a challenging proposition, as different InfoCGAN models will need to be trained for each system size, which in turn may require different hyperparameters and training iteration counts to provide similar results. Consequently, numerical difficulties can arise when performing finite-size scaling analysis, as the variation of predicted properties with respect to system size may be difficult to isolate from the variation of systematic errors due to different neural networks being used to extract said properties. Nevertheless, this would be a significant step towards improving InfoCGAN classification of physical configurations into classes that correspond to physical phases.
\section{Acknowledgments}
This work is funded by the NSF EPSCoR CIMM project under award OIA-1541079. Additional support (KMT) was provided by NSF Materials Theory grant DMR-1728457. An award of computer time was provided by the INCITE program. This research also used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
\section*{Methods}
\textbf{NMR experiments} were performed using variable temperature $^4$He cryostats and superconducting magnets. At $T\geq 4$~K, magnetic field and frequency were fixed according to the $^1$H resonance condition $\nu_0={}^1\gamma B$, with $^1\gamma=42.5774$~MHz/T, as the proton NMR spectrum was resolved within the experimental bandwidth. Due to excessive line broadening below 4~K, the NMR intensity started to drop below the expected $T^{-1}$ behavior, as illustrated by the blue circles in the inset of Fig.~\ref{spectra}a that correspond to the integrated spectral weight ($SW$) of the NMR spectrum.
For this reason, at $T<4$~K the $^1$H spectra were acquired by sweeping the external field in dense intervals around $B_0=0.9789$~T. As illustrated by the black diamonds in the inset of Fig.~\ref{spectra}a, the entire NMR intensity was recovered in these field-sweep measurements: in the inspected field range the total $SW$ of the $^1$H resonance lines up with the $T^{-1}$ behavior extrapolated from $T\geq 4$~K.
Zero-strain NMR measurements were performed using large single crystals ($3\times 2\times 1$~mm$^3$). The relaxation measurements (Fig.~\ref{T1-ambient}) for in-plane (measured at UCLA) and out-of-plane (acquired at TU Wien on a different sample) crystal axes show similar $T_1^{-1}$\ values. The relaxation rate $T_1^{-1}$\ was acquired at the central line ($\nu_0$, $B_0$ in Fig.~\ref{spectra}b). At $T=1.70$~K we measured $T_1^{-1}$\ at $\nu_0=41.68$~MHz also for the two satellite peaks $B_1$ and $B_2$, obtaining values approximately 50\% of $T_1^{-1}$\ at $B_0=0.9789$~T (orange symbols in Fig.~\ref{T1-linewidth}b).
\textbf{Uniaxial strain experiments} were performed using piezoelectric strain cells ($Razorbill$) in the same $^4$He cryostat as the zero-strain studies for in-plane magnetic fields. For that, the $\rm Y_3Cu_9(OH)_{19}Cl_8$\ sample (the single crystal shown in Fig.~\ref{strain}a-c was cut in dimensions $2.5\times 0.45\times 0.24$~mm$^3$ with the longest dimension parallel to the Cu chains of the kagome layer, i.e. $\varepsilon\perp a$) was glued at room temperature on the strain cell using Stycast and dried for several days. Here, the magnetic field direction was parallel to the $a$-axis, i.e. perpendicular to the in-plane fields in Fig.~\ref{spectra}. Still, we observed a similar peak in $T_1^{-1}$\ at $T_{\rm N}$ for both $B\parallel a$ (Fig.~\ref{strain}) and $B\parallel ab$ (Fig.~\ref{T1-ambient}),
hence our measurements of the relaxation rate were suitable to trace the strain dependence of the AFM transition.
The strain voltage was varied at 4~K and successively $T_1^{-1}$\ was measured upon cooling through the transition, both for increasing and decreasing strain as indicated by red and orange symbols in Fig.~\ref{strain}d, respectively. The compression was monitored \textit{in situ} by the built-in capacitive displacement sensor. Similar to previous strain studies~\cite{Luo2019,Chronister2022}, the applied strain $\varepsilon$ was calculated as the displacement divided by the length of the sample surface that remained free of epoxy -- for sample s2 this corresponds to 0.6~mm (Fig.~\ref{strain}a-c).
The linear increase of $T_{\rm N}$ with strain shown in the inset of Fig.~\ref{strain}d lines up for the two samples. Still, we do not exclude that the \textit{real} compression of the sample is somewhat lower than the present estimate using the built-in capacitive sensor, likely $\varepsilon_{real}\approx 1$\%; also in Sr$_2$RuO$_4$\ the strain of the van Hove singularity was first estimated as 0.6\% \cite{Steppke2017,Luo2019} and later corrected to 0.44\% by using high-precision stress-strain sensors \cite{Barber2019a}.
\acknowledgments We thank M. R. Norman, M. Dressel, H. Jeschke and I. I. Mazin for useful comments and discussions. Support with sample preparation by G. Untereiner is kindly appreciated. A. P. acknowledges support by the Alexander von Humboldt Foundation through the Feodor Lynen Fellowship. Work at UCLA was supported by the National Science Foundation (DMR-2004553).
\section{Introduction}
\IEEEPARstart{I}{nternet} of Things (IoT) is bound to explode with huge business requirements in the beyond fifth generation (B5G) era. With the increasing demand for wireless connection in IoT deployment, terahertz (THz) is expected to become an attractive candidate spectrum for B5G networks due to the scarcity of spectrum resources. THz wireless transmission rates can exceed 100 Gbit/s, which provides an effective solution for ultra-short-distance and ultra-high-speed wireless transmission. THz waves represent electromagnetic waves with a frequency spectrum between 0.1 and 10 THz, which is between millimeter wave (mmWave) and infrared light \cite{1,2,3}. Compared with mmWave and optical wireless communication, THz has many unique advantages. For example, THz communications support both line-of-sight (LoS) and non-LoS (NLoS) propagation, offer stronger anti-interference ability, better directionality, and higher confidentiality, and can act as a reliable substitute in extreme weather conditions such as rain, fog, and dust \cite{4,5,6}.
However, the THz band has not yet been fully developed and still faces various challenges in different environments, such as high path loss and molecular absorption, channel characteristics, antenna misalignment, and hardware imperfections \cite{7,8}. A major issue is the high path loss of the THz frequency band in the urban environment, which implies that THz communications have a limited propagation distance. To tackle this problem, appropriate path loss models for THz propagation were presented in \cite{9,10,11,12,13}. For example, a ray-tracing approach for modeling short-distance THz band propagation channels was studied in \cite{10}. A THz band transmission model was proposed in \cite{11}, which considered the path loss and molecular absorption noise suffered by waves in the case of short-range propagation. In particular, J. Kokkoniemi \emph{et al.} proposed a simplified path attenuation model for the 275--400 GHz frequency band \cite{12}. Subsequently, with the model proposed in [12], the performance of the THz system was studied in \cite{13}. Additionally, the impact of fading in THz propagation is another issue that cannot be neglected. In \cite{14}, the multipath fading of THz channels was modeled by Rician or Rayleigh or Nakagami-$m$ distributions. Furthermore, the shadowing effect has been experimentally verified in \cite{15}. Very recently, the THz channel was modeled by an $\alpha$-$\mu$ fading distribution to accommodate the multipath effect \cite{7}. The $\alpha$-$\mu$ distribution is a generalized model, including several important distributions as special cases, such as Gamma, Nakagami-$m$, Weibull, and Rayleigh \cite{16}. By taking the $\alpha$-$\mu$ fading and misalignment effects into consideration, the analytical expressions of the outage probability (OP) and capacity for the THz link were derived and the performance under different fading conditions was discussed.
Moreover, the misalignment effects between the transmitter and the receiver antennas, also known as pointing errors, become a key issue in THz communications because they lead to significant performance degradation \cite{2,8}. Until now, the impact of pointing errors on the free space optical (FSO) link has been widely studied in \cite{17,18,19}. More recently, many studies have been conducted on the effect of pointing errors on THz-based networks \cite{7,20,21}.
On the other hand, cooperative diversity has been proposed to alleviate the fading caused by the transmission distance as well as multipath effects. The relaying scheme becomes a viable option for obtaining higher link capacity and wider coverage by dividing a long link with poor quality into two or more short links with better quality. At present, the relaying technique has been extensively developed in radio frequency (RF) and FSO communication systems. Meanwhile, much effort has been made to investigate the performance of mixed dual-hop RF-RF or RF-FSO transmission systems employing decode-and-forward (DF) or amplify-and-forward (AF) relaying (see e.g., \cite{22,23,24,25} and references therein). The mixed RF-FSO approach allows multiple RF messages to be aggregated into a single FSO link to achieve the maximum possible capacity. Motivated by this, mixed THz-RF relaying systems have been presented in order to enable several RF links to feed one high-rate THz link, thereby obtaining considerable performance gains. For instance, by employing the DF protocol, the outage and error performance of a THz-RF relaying system has been analyzed in \cite{21}, where the THz and RF links experience $\alpha$-$\mu$ fading with pointing errors and Rayleigh fading, respectively. The authors in \cite{26,27} have investigated the performance of a mixed THz-RF system with the DF protocol, and the exact expressions of the OP and a lower bound on the ergodic capacity have been derived. In addition, considerable efforts have been devoted to evaluating the performance of dual-hop or multi-hop FSO transmission systems using AF or DF relaying (see e.g., \cite{28},\cite{29} and references therein). In such system models, the relaying scheme can effectively mitigate the performance loss caused by fading and pointing errors compared to a single direct link.
More recently, a dual-hop THz system was proposed in \cite{30} where both THz links experience the joint impacts of fading and misalignment effects. However, the authors in [30] only considered the DF case and analyzed the outage probability.
Similar to the model in \cite{30}, in this paper we comprehensively evaluate the performance of a dual-hop THz system with fixed-gain relays. To the best of the authors' knowledge, the performance of dual-hop THz relaying systems with the fixed-gain AF protocol has not been studied in the literature yet. Specifically, the analytical expressions for the cumulative distribution function (CDF) and probability density function (PDF) of the end-to-end (e2e) signal-to-noise ratio (SNR) are derived. Relying on these results, exact analytical expressions for the OP, average bit error rate (BER), and average channel capacity (ACC) of the considered system are derived in terms of the bivariate Fox's H-function (BFHF). To attain more useful insights, corresponding asymptotic results for the OP and average BER are investigated. Relying on the asymptotic results, we observe that the diversity order of the mixed dual-hop THz system depends on the fading parameters and pointing errors of both THz links. Results show that the performance of the considered system is better than that of a single THz link. Additionally, the performance of the mixed THz system deteriorates when the system operates under conditions of long propagation distance and/or strong fading and/or severe pointing errors. Another contribution of this paper is that we extend the single-relay system to a multi-relay network, and present the asymptotic symbol error rate (SER) analysis for both relay selection and all-relay employed schemes.
The remainder of this paper is organized as follows. In Section II, the system and channel models are introduced. The tractable expressions of the CDF and PDF for a single THz link are given in this section. In Section III, the CDF and PDF of the e2e SNR for the considered dual-hop system are derived. In addition, we obtain the asymptotic CDF to gain more useful insights. In Section IV, we derive the exact analytical expressions of the OP, average BER, and ACC. The asymptotic OP and average BER are also derived. Moreover, the asymptotic SER analysis of the multi-relay case is presented in Section V. Section VI presents illustrative numerical results supplemented by Monte Carlo simulations to verify the accuracy of the performance metrics. Finally, insightful discussions are drawn in Section VII.
\section{System and Channel Models}
We consider a dual-hop THz communication system where a source (S) is communicating with a destination (D) through a single relay (R) with the fixed-gain AF protocol. We assume that the two THz links (i.e. S-R and R-D) follow the $\alpha$-$\mu$ fading with pointing errors. Moreover, S, R, and D are assumed to be equipped with a single highly directive antenna. In addition, we assume that an ideal RF front-end is employed, therefore the impact of hardware imperfections is neglected.
By using the fixed-gain AF relaying, the overall instantaneous SNR $\gamma_o$ of the dual-hop mixed THz system can be given by \cite{31,32}
\begin{align*}
\gamma_o=\frac{\gamma_1\gamma_2}{\gamma_2+C},
\tag{1}\label{1}
\end{align*}
where $C$ is a constant related to the amplification gain [31], and $\gamma_i$ is the instantaneous received SNR of the $i$th hop, $i\in\{1, 2\}$. From [7, Eq. (26)], the PDF of $\gamma_i$ can be derived by applying [21, Eq. (9)] as
\begin{align*}
f_{\gamma_i}(\gamma_i)=A_i\overline\gamma_{i}^{-\frac{\phi_i}{2}} \gamma_{i}^{\frac{\phi_i}{2}-1}
\Gamma\left(\frac{\alpha_i\mu_i-\phi_i}{\alpha_i},B_i\left(\frac{\gamma_i}{\overline\gamma_i}\right)^{\frac{\alpha_i}{2}}\right),
\tag{2}\label{2}
\end{align*}
where $A_i=\frac{\phi_i\mu_i^{\frac{\phi_i}{\alpha_i}}h_{l,i}^{-\phi_i}}{2\hat{h}_{f,i}^{\alpha_i}A_{o,i}^{\phi_i}\Gamma(\mu_i)}$, $B_i=\frac{\mu_i}{(\hat{h}_{f,i}h_{l,i}A_{o,i})^{\alpha_{i}}}$, $\Gamma(\cdot)$ denotes the gamma function [33, Eq. (8.310)], $\Gamma(\cdot,\cdot)$ represents the incomplete gamma function [33, Eq. (8.350.2)], $\overline\gamma_i$ refers to the average SNR of the $i$th hop, $\alpha_i$ and $\mu_i$ stand for fading parameters of the $\alpha$-$\mu$ distribution, $\hat{h}_{f,i}$ holds for the $\alpha$-root mean value of the fading channel envelope, $A_{o,i}$ is the constant term that defines the pointing loss, and $\phi_i$ denotes the squared ratio between the equivalent beam radius and the pointing error displacement standard deviation $\sigma_{s,i}$ at the receiver [17, Eqs. (9) and (10)]. In addition, $h_{l,i}$ denotes the deterministic path loss of the $i$th THz channel which can be obtained as \cite{7,21}
\begin{align*}
h_{l,i}=\frac{c\sqrt{G_{t}G_{r}}}{4\pi fd_i}\exp \left(-\frac{1}{2}\beta(f)d_i\right),
\tag{3}\label{3}
\end{align*}
where $G_t$ and $G_r$ denote, respectively, the transmit and receive antenna gains of all nodes, $c$ refers to the speed of light, $f$ represents the operating frequency, $d_i$ represents the propagation distance of S-R and R-D links, and $\beta(f)$ stands for the absorption coefficient being function of the relative humidity $\varrho$, atmosphere pressure $p_a$, and temperature $T$ [7, Eqs. (8-17)].
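As a quick numeric illustration of (3), the sketch below evaluates the deterministic path gain for assumed values of frequency, distance, and antenna gains; the absorption coefficient $\beta(f)$ is replaced by a flat placeholder constant rather than the humidity/pressure/temperature model cited above.

```python
import numpy as np

# Hedged numeric sketch of the deterministic path gain in (3):
# h_l = c*sqrt(Gt*Gr)/(4*pi*f*d) * exp(-beta(f)*d/2).
# beta is an assumed flat value, not the molecular-absorption model.

def thz_path_gain(f_hz, d_m, g_tx, g_rx, beta=0.01):
    c = 3e8                                   # speed of light, m/s
    spreading = c * np.sqrt(g_tx * g_rx) / (4 * np.pi * f_hz * d_m)
    absorption = np.exp(-0.5 * beta * d_m)
    return spreading * absorption

# 300 GHz carrier, 10 m hop, 55 dBi antennas (all values illustrative).
g = 10 ** (55 / 10)
h_l = thz_path_gain(300e9, 10.0, g, g)
print(f"h_l = {h_l:.3e}  ({20 * np.log10(h_l):.1f} dB)")
```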
Based on (2), the PDF of $\gamma_i$ can be rewritten by employing [34, Eq. (8.4.16/2)] as
\begin{align*}
f_{\gamma_i}(\gamma_i)=A_i&\overline\gamma_{i}^{-\frac{\phi_i}{2}}\gamma_{i}^{\frac{\phi_i}{2}-1}\\ &\times
\, {\mathrm{G}}_{1,2}^{2,0}\left [{{B_i\left(\frac{\gamma_i}{\overline\gamma_i}\right)^{\frac{\alpha_i}{2}}}\left |{ \begin{matrix} {1}
\\ {0,\frac{\alpha_i\mu_i-\phi_i}{\alpha_i}} \\ \end{matrix} }\right . }\right ]\!,
\tag{4}\label{4}
\end{align*}
where ${\mathrm{G}}_{p,q}^{m,n}[\cdot]$ is the Meijer's G-function [33, Eq. (9.301)]. By using the primary definition of Meijer's G-function, the CDF of the THz link can be derived as
\begin{align*}
F_{\gamma_i}(\gamma_i)=&\frac{2A_i\overline\gamma_{i}^{-\frac{\phi_i}{2}} \gamma_{i}^{\frac{\phi_i}{2}}}{\alpha_i}\\ &\times
\, {\mathrm{G}}_{2,3}^{2,1}\left [{{B_i\left(\frac{\gamma_i}{\overline\gamma_i}\right)^{\frac{\alpha_i}{2}}}
\left |{ \begin{matrix} {1{-}\frac{\phi_i}{\alpha_i},1}
\\ {0,\frac{\alpha_i\mu_i-\phi_i}{\alpha_i},-\frac{\phi_i}{\alpha_i}} \\ \end{matrix} }\right . }\right ]\!.
\tag{5}\label{5}
\end{align*}
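As a sanity check on (2), the following hedged Python sketch compares a numerically integrated version of the PDF against a Monte Carlo draw of the underlying channel ($\alpha$-$\mu$ envelope times pointing loss). Normalized illustrative parameters $\hat{h}_{f,i}=h_{l,i}=A_{o,i}=1$ are assumed, so that $A=\phi\mu^{\phi/\alpha}/(2\Gamma(\mu))$ and $B=\mu$; the sampling recipes for the envelope and the pointing loss are standard constructions, not taken from the paper.

```python
import numpy as np
from scipy.special import gamma as Gamma, gammaincc
from scipy.integrate import quad

# Illustrative parameters; normalization hat_h_f = h_l = A_0 = 1 is assumed.
alpha, mu, phi, snr_bar = 2.0, 2.0, 2.5, 1.0
A = phi * mu ** (phi / alpha) / (2 * Gamma(mu))
B = mu
a = (alpha * mu - phi) / alpha            # positive for these parameters

def pdf(g):
    # Eq. (2): Gamma(a, x) = gammaincc(a, x) * Gamma(a) (regularized upper).
    return (A * snr_bar ** (-phi / 2) * g ** (phi / 2 - 1)
            * gammaincc(a, B * (g / snr_bar) ** (alpha / 2)) * Gamma(a))

def cdf(g):
    return quad(pdf, 0.0, g)[0]

# Monte Carlo: alpha-mu envelope via a gamma variate, pointing loss via U^(1/phi).
rng = np.random.default_rng(4)
n = 200_000
h_f = (rng.gamma(mu, 1.0, n) / mu) ** (1 / alpha)
h_p = rng.random(n) ** (1 / phi)
g_mc = snr_bar * (h_f * h_p) ** 2

for g in (0.2, 0.5, 1.0):
    print(g, cdf(g), np.mean(g_mc < g))
```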
\section{Statistical Analysis}
In this section, exact analytical expressions for the CDF and PDF of the e2e SNR are derived.
In addition, we derive the asymptotic CDF at high SNR regimes to get more useful insights.
\subsection{Cumulative Distribution Function}
\subsubsection{Exact Analysis}
From \cite{35,36}, the CDF of the e2e SNR $\gamma_o$ is obtained by taking a series of transformations as
\begin{align*}
F_{\gamma_o}(\gamma)=&\int_{0}^{\infty}
P\left[\frac{\gamma_1\gamma_2}{\gamma_2+C}
<\gamma \Bigg|\gamma_1\right]f_{\gamma_1}(\gamma_1)d\gamma_1 \\
=&F_{\gamma_1}(\gamma)+\int_{0}^{\infty}F_{\gamma_2}
\left(\frac{C\gamma}{x}\right)f_{\gamma_1}(x+\gamma)dx.
\tag{6}\label{6}
\end{align*}
By substituting (4) and (5) into (6), the CDF can be obtained as
\begin{align*}
&F_{\gamma_o}(\gamma)=\frac{2A_1\overline\gamma_{1}^{-\frac{\phi_1}{2}} \gamma^{\frac{\phi_1}{2}}}{\alpha_1}
\, {\mathrm{G}}_{2,3}^{2,1}\left [{{B_1\left(\frac{\gamma}{\overline\gamma_1}\right)^{\frac{\alpha_1}{2}}}
\left |{ \begin{matrix} {1{-}\frac{\phi_1}{\alpha_1},1}
\\ {0,\frac{\alpha_1\mu_1-\phi_1}{\alpha_1},-\frac{\phi_1}{\alpha_1}} \\ \end{matrix} }\right . }\right ]\!\\& +\frac{2A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}\overline\gamma_{2}^{-\frac{\phi_2}{2}}
C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}}}{\alpha_2}{\rm {H}_{1,0:4,2:2,2}^{0,1:1,3:0,2}}\\
&\left [{\!\!\left .{ \begin{matrix} \left({1{+}\frac{\phi_1}{2}{-}\frac{\phi_2}{2};-\frac{\alpha_2}{2},\frac{\alpha_1}{2}}\right)
\\ -\\ (1,1) \left(1{-}\mu_2{+}\frac{\phi_2}{\alpha_2},1\right) \left(\frac{\phi_2}{2},\frac{\alpha_2}{2}\right)
\left(1{+}\frac{\phi_2}{\alpha_2},1\right) \\
\left(\frac{\phi_2}{\alpha_2},1\right) (0,1)\\ (1,1) \left(1{-}\mu_1{+}\frac{\phi_1}{\alpha_1},1\right) \\
(0,1) \left(\frac{\phi_1}{2},\frac{\alpha_1}{2}\right) \end{matrix} }\right |\!
\frac{\overline\gamma_2^{\frac{\alpha_2}{2}}}{B_2C^{\frac{\alpha_2}{2}}}, \!\frac{\overline\gamma_{1}^{\frac{\alpha_1}{2}}}
{B_1\gamma^{\frac{\alpha_1}{2}}} \!\!}\right ],
\tag{7}\label{7}
\end{align*}
where ${\mathrm{H}}_{p1,q1:p2,q2:p3,q3}^{0,n1:m2,n2:m3,n3}[\cdot,\cdot]$ denotes the BFHF [37, Eq. (2.57)]. It is worth noting that the BFHF can be effectively evaluated in MATLAB or MATHEMATICA, and the available code implementations were elaborated in \cite{38} and \cite{39}.
\textit{Proof:} See Appendix A.
\emph{Special Case:} Consider $\alpha_1=\alpha_2=2$ corresponding to both THz links suffering from the Nakagami-$m$ fading. Then, we have
\begin{align*}
&F_{\gamma_o}^{N}(\gamma)=\zeta_1\overline\gamma_{1}^{-\frac{\phi_1}{2}} \gamma^{\frac{\phi_1}{2}}
\, {\mathrm{G}}_{2,3}^{2,1}\left [{{\frac{\zeta_2\gamma}{\overline\gamma_1}}
\left |{ \begin{matrix} {1{-}\frac{\phi_1}{2},1}
\\ {0,\mu_1{-}\frac{\phi_1}{2},-\frac{\phi_1}{2}} \\ \end{matrix} }\right . }\right ]\!\\& +\zeta_1\zeta_3\overline\gamma_{1}^{-\frac{\phi_1}{2}}\overline\gamma_{2}^{-\frac{\phi_2}{2}}
C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}}{\rm {H}_{1,0:4,2:2,2}^{0,1:1,3:0,2}}\\
&\left [{\!\!\left .{ \begin{matrix} \left({1{+}\frac{\phi_1}{2}{-}\frac{\phi_2}{2};-1,1}\right)
\\ -\\ (1,1) \left(1{-}\mu_2{+}\frac{\phi_2}{2},1\right) \left(\frac{\phi_2}{2},1\right)
\left(1{+}\frac{\phi_2}{2},1\right) \\
\left(\frac{\phi_2}{2},1\right) (0,1)\\ (1,1) \left(1{-}\mu_1{+}\frac{\phi_1}{2},1\right) \\
(0,1) \left(\frac{\phi_1}{2},1\right) \end{matrix} }\right |\!
\frac{\overline\gamma_2}{\zeta_4C}, \!\frac{\overline\gamma_{1}}{\zeta_2\gamma} \!\!}\right ],
\tag{8}\label{8}
\end{align*}
where $\zeta_1=\frac{\phi_1\mu_1^{\frac{\phi_1}{2}}h_{l,1}^{-\phi_1}}{2\hat{h}_{f,1}^{2}A_{o,1}^{\phi_1}\Gamma(\mu_1)}$,
$\zeta_2=\frac{\mu_1}{(\hat{h}_{f,1}h_{l,1}A_{o,1})^{2}}$, $\zeta_3=\frac{\phi_2\mu_2^{\frac{\phi_2}{2}}h_{l,2}^{-\phi_2}}{2\hat{h}_{f,2}^{2}A_{o,2}^{\phi_2}\Gamma(\mu_2)}$,
$\zeta_4=\frac{\mu_2}{(\hat{h}_{f,2}h_{l,2}A_{o,2})^{2}}$. Furthermore, (8) can be further simplified to the CDF that both links suffer from Rayleigh fading by setting $\mu_1=\mu_2=1$. Please note that these results are also novel and have not been presented in the literature yet.
\subsubsection{Asymptotic Analysis}
Since the analytical expression of the CDF given by (7) is a complex expression in terms of the BFHF, it can only provide limited physical insights. As such, the asymptotic analysis of the CDF at high SNRs is also developed. By letting $\overline\gamma_1\to \infty$ and $\overline\gamma_2\to \infty$, applying [40, Eq. (07.34.06.0040.01)] and [41, Eq. (1.8.4)], and doing some algebraic operations, the asymptotic CDF can be derived as (9), shown at the top of the next page. Therefore, one can see that the CDF is reduced to a very simple form, which can be used to get further insights.
\textit{Proof:} See Appendix B.
\subsection{Probability Density Function}
For the fixed-gain AF case, the PDF can be expressed by calculating the derivative of (6) with respect to $\gamma$ as [36, Eq. (51)]
\begin{align*}
f_{\gamma_{o}}(\gamma)=\int_{\gamma}^{\infty}
\frac{C\gamma_{1}}{(\gamma_{1}-\gamma)^2}
f_{\gamma_{2}}\left(\frac{C\gamma}{\gamma_{1}-\gamma}\right)
f_{\gamma_{1}}(\gamma_{1})d\gamma_{1}.
\tag{10}\label{10}
\end{align*}
By inserting (4) into (10) and doing some algebraic operations, the PDF of the dual-hop THz system is attained as
\begin{align*}
&f_{\gamma_o}(\gamma)=A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}\overline\gamma_{2}^{-\frac{\phi_2}{2}}
C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}-1}{\rm {H}_{1,0:3,1:2,2}^{0,1:0,3:0,2}}\\
&\left [{\!\!\left .{ \begin{matrix} \left({1{+}\frac{\phi_1}{2}{-}\frac{\phi_2}{2};-\frac{\alpha_2}{2},\frac{\alpha_1}{2}}\right)
\\ -\\ (1,1) \left(1{-}\mu_2{+}\frac{\phi_2}{\alpha_2},1\right) \left(1{+}\frac{\phi_2}{2},\frac{\alpha_2}{2}\right) \\
(0,1)\\ (1,1) \left(1{-}\mu_1{+}\frac{\phi_1}{\alpha_1},1\right) \\
(0,1) \left(1{+}\frac{\phi_1}{2},\frac{\alpha_1}{2}\right) \end{matrix} }\right |\!
\frac{\overline\gamma_2^{\frac{\alpha_2}{2}}}{B_2C^{\frac{\alpha_2}{2}}}, \!\frac{\overline\gamma_{1}^{\frac{\alpha_1}{2}}}
{B_1\gamma^{\frac{\alpha_1}{2}}} \!\!}\right ].
\tag{11}\label{11}
\end{align*}
\textit{Proof:} See Appendix C.
\begin{figure*}[t]
\begin{align*}
F_{\gamma_o}^{A}(\gamma)& \underset{\overline\gamma_{1},~\overline\gamma_{2} \to \infty}\approx \frac{2A_1\Gamma\left(\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}\right)\Gamma\left(\frac{\phi_1}{\alpha_1}\right)}
{\alpha_1\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}\right)}
\left(\frac{\gamma}{\overline\gamma_{1}}\right)^{\frac{\phi_1}{2}}
{+}\frac{2A_1B_{1}^{\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}}\Gamma\left(-\frac{\alpha_1\mu_1{-}\phi_1}
{\alpha_1}\right)}{\alpha_1\mu_1\Gamma\left(1{-}\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}\right)}
\left(\frac{\gamma}{\overline\gamma_{1}}\right)^{\frac{\alpha_1\mu_1}{2}}\\
&+\frac{4A_1A_2B_{1}^{\frac{\phi_2{-}\phi_1}{\alpha_1}}\Gamma\left(\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1{-}\phi_2}{\alpha_1}\right)\Gamma\left(\mu_1{-}\frac{\phi_2}{\alpha_1}\right)\Gamma\left(\frac{\phi_2}{2}\right)}
{\alpha_1\alpha_2\Gamma\left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1}\right)
\Gamma\left(1{+}\frac{\phi_2}{\alpha_2}\right)}\left(\frac{C\gamma}{\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\phi_2}{2}}\\
&+\frac{4A_1A_2B_{1}^{\frac{\alpha_2\mu_2{-}\phi_1}{\alpha_1}}B_{2}^{\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}}
\Gamma\left(-\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1{-}\alpha_2\mu_2}{\alpha_2}\right)\Gamma\left(\mu_1{-}\frac{\alpha_2\mu_2}{\alpha_1}\right)
\Gamma(\mu_2)}{\alpha_1\alpha_2\Gamma\left(1{-}\frac{\alpha_2\mu_2{-}\phi_2}
{\alpha_2}\right)\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}-\frac{\alpha_2\mu_2}{\alpha_1}\right)\Gamma(1{+}\mu_2)}
\left(\frac{C\gamma}{\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\alpha_2\mu_2}{2}}\\
&+\frac{4A_1A_2B_{2}^{\frac{\phi_1{-}\phi_2}{\alpha_2}}\Gamma\left(-\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\mu_2{-}\frac{\phi_1}{\alpha_2}\right)\Gamma\left(\mu_1{-}\frac{\phi_1}{\alpha_1}\right)
\Gamma\left(\frac{\phi_1}{\alpha_2}\right)}
{\alpha_{2}^2\Gamma\left(1{-}\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1}{-}\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{+}\frac{\phi_1}{\alpha_2}\right)}
\left(\frac{C\gamma}{\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\phi_1}{2}}\\
&+\frac{4A_1A_2B_{1}^{\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}}B_{2}^{\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}}
\Gamma\left(-\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}\right)\Gamma\left(\mu_2{-}\frac{\alpha_1\mu_1}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1}{\alpha_1}{-}\mu_1\right)\Gamma\left(\frac{\alpha_1\mu_1}{\alpha_2}\right)}
{\alpha_{2}^{2}\Gamma\left(1{-}\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}{-}\mu_1\right)
\Gamma\left(1{+}\frac{\alpha_1\mu_1}{\alpha_2}\right)}
\left(\frac{C\gamma}{\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\alpha_1\mu_1}{2}}.
\tag{9}\label{9}
\end{align*}
\hrulefill
\end{figure*}
\section{Performance Analysis}
In this section, important performance metrics of the dual-hop THz system are evaluated, namely the OP, average BER, and ACC. In addition, we provide the asymptotic analysis of the OP and average BER to obtain the considered system's diversity order.
\subsection{Outage Probability}
\subsubsection{Exact Analysis}
In general, if the received instantaneous SNR falls below a threshold $\gamma_{th}$, the dual-hop relaying system undergoes an outage. The OP of the considered AF relaying system can thus be written, using (7), as $P_{out}=\Pr[\gamma_o <\gamma_{th}]=F_{\gamma_o}(\gamma_{th})$.
\subsubsection{Asymptotic Analysis}
At high SNRs, the asymptotic OP follows directly from (9), i.e., $P_{out}\to F_{\gamma_o}^{A}(\gamma_{th})$. From \cite{42}, assuming $\overline\gamma_i\to \infty$, the OP admits the simple asymptotic form $P_{out}\approx (G_c\cdot \overline\gamma_i)^{-G_d}$, where $G_c$ denotes the coding gain and $G_d$ the diversity gain. By inspecting the asymptotic OP, the diversity order of the considered system is readily deduced as
\begin{align*}
G_d = \min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}.
\tag{12}\label{12}
\end{align*}
As can be seen from (12), the diversity gain of the dual-hop THz relaying system depends on the multipath fading parameters (i.e., $\alpha_1$, $\mu_1$, $\alpha_2$, and $\mu_2$) and on the pointing error parameters (i.e., $\phi_1$ and $\phi_2$) of both the S-R and R-D links. As far as the authors are aware, this observation has not been reported in the literature before.
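The diversity order in (12) is a simple function of the fading and pointing error parameters. The following minimal Python sketch evaluates it; the example parameter values are those used later for Fig. 9.

```python
def diversity_order(alpha1, mu1, alpha2, mu2, phi1, phi2):
    """Diversity order of the dual-hop THz AF system, Eq. (12)."""
    return min(phi1 / 2, alpha1 * mu1 / 2, phi2, alpha2 * mu2)

# Example: the Fig. 9 parameters; the pointing error of the first hop
# (phi1/2 = 0.5) dominates the diversity order here.
Gd = diversity_order(1.2, 3, 1.3, 2, 1.0, 3.6333)
```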
\subsection{Average Bit Error Rate}
\subsubsection{Exact Analysis}
From [22, Eq. (25)], the average BER for various modulation schemes is given by
\begin{align*}
\overline{P}_e = \frac{q^p}{2\Gamma(p)}\int_{0}^{\infty}\gamma^{p-1}e^{-q\gamma}F_{\gamma_o}(\gamma)d\gamma,
\tag{13}\label{13}
\end{align*}
where the parameters $p$ and $q$ select the modulation scheme; $\{p=0.5,q=1\}$ and $\{p=1,q=1\}$ correspond to binary phase shift keying (BPSK) and differential PSK (DPSK), respectively. Therefore, substituting (7) into (13) and employing [40, Eqs. (07.34.03.0228.01) and (07.34.21.0012.01)] and [33, Eq. (3.326.2)] yields
\begin{align*}
&\overline{P}_{e}=\frac{A_1(q\overline\gamma_{1})^{-\frac{\phi_1}{2}}}{\Gamma(p)\alpha_1}\\ &\times
\, {\mathrm{H}}_{3,3}^{2,2}\left [{{\frac{B_1}{(q\overline\gamma_1)^{\frac{\alpha_1}{2}}}}
\left |{ \begin{matrix} {\left(1{-}p{-}\frac{\phi_1}{\alpha_1},\frac{\alpha_1}{2}\right) \left(1{-}\frac{\phi_1}{\alpha_1},1\right) (1,1)}
\\ {(0,1) \left(\frac{\alpha_1\mu_1-\phi_1}{\alpha_1},1\right) \left(-\frac{\phi_1}{\alpha_1},1\right)} \\ \end{matrix} }\right . }\right ]\!\\& +\frac{A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}\overline\gamma_{2}^{-\frac{\phi_2}{2}}
C^{\frac{\phi_2}{2}}q^{-\frac{\phi_1}{2}}}{\alpha_2\Gamma(p)}{\rm {H}_{1,0:4,2:2,3}^{0,1:1,3:1,2}}\\
&\left [{\!\!\left .{ \begin{matrix} \left({1{+}\frac{\phi_1}{2}{-}\frac{\phi_2}{2};-\frac{\alpha_2}{2},\frac{\alpha_1}{2}}\right)
\\ -\\ \epsilon_2 \\
\left(\frac{\phi_2}{\alpha_2},1\right) (0,1)\\ (1,1) \left(1{-}\mu_1{+}\frac{\phi_1}{\alpha_1},1\right) \\
\left(\frac{\phi_1}{2}{+}p,\frac{\alpha_1}{2}\right) (0,1) \left(\frac{\phi_1}{2},\frac{\alpha_1}{2}\right) \end{matrix} }\right |\!
\frac{\overline\gamma_2^{\frac{\alpha_2}{2}}}{B_2C^{\frac{\alpha_2}{2}}}, \!\frac{(q\overline\gamma_{1})^{\frac{\alpha_1}{2}}}
{B_1} \!\!}\right ],
\tag{14}\label{14}
\end{align*}
where $\epsilon_2=\left\{(1,1) \left(1{-}\mu_2{+}\frac{\phi_2}{\alpha_2},1\right) \left(\frac{\phi_2}{2},\frac{\alpha_2}{2}\right)
\left(1{+}\frac{\phi_2}{\alpha_2},1\right)\right\}$ and ${\mathrm{H}}_{p,q}^{m,n}[\cdot]$ is the Fox H-function [37, Eq. (1.2)]. Note that the Fox H-function can be computed efficiently; a MATHEMATICA implementation is provided in \cite{43}.
\textit{Proof:} See Appendix D.
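The kernel of the BER integral (13) can be checked numerically. As an illustration, plugging the single-term power-law CDF $F(\gamma)=D\gamma^{v}/\overline\gamma^{G_d}$ (the leading term used later in (21)) into (13) gives, via [33, Eq. (3.326.2)], the closed form $\overline{P}_e = D\Gamma(p+v)/(2\Gamma(p)q^{v}\overline\gamma^{G_d})$. The sketch below verifies this identity with arbitrary illustrative values of $D$, $v$, $G_d$, and $\overline\gamma$ (these numbers are assumptions for the check only).

```python
import math
from scipy.integrate import quad

p, q = 0.5, 1.0                    # BPSK parameters in (13)
D, v, Gd, gbar = 0.8, 0.5, 0.5, 100.0   # illustrative power-law CDF values

# Numerically evaluate (13) with F(g) = D * g**v / gbar**Gd
num, _ = quad(lambda g: g**(p - 1) * math.exp(-q * g) * D * g**v / gbar**Gd,
              0, math.inf)
num *= q**p / (2 * math.gamma(p))

# Closed form from [33, Eq. (3.326.2)]
closed = D * math.gamma(p + v) / (2 * math.gamma(p) * q**v * gbar**Gd)
```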
\begin{figure*}[t]
\begin{align*}
\overline{P}_{e}^{A} &\underset{\overline\gamma_{1},~\overline\gamma_{2} \to \infty}\approx \frac{A_1\Gamma\left(\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}\right)\Gamma\left(\frac{\phi_1}{\alpha_1}\right)
\Gamma\left(p{+}\frac{\phi_1}{2}\right)}{\alpha_1\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}\right)\Gamma(p)}
\left(\frac{1}{q\overline\gamma_{1}}\right)^{\frac{\phi_1}{2}}+\frac{A_1B_{1}^{\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}}
\Gamma\left(-\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}\right)\Gamma\left(p{+}\frac{\alpha_1\mu_1}{2}\right)}
{\alpha_1\mu_1\Gamma\left(1{-}\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}\right)\Gamma(p)}
\left(\frac{1}{q\overline\gamma_{1}}\right)^{\frac{\alpha_1\mu_1}{2}}\\
&+\frac{2A_1A_2B_{1}^{\frac{\phi_2{-}\phi_1}{\alpha_1}}\Gamma\left(\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1{-}\phi_2}{\alpha_1}\right)\Gamma\left(\mu_1{-}\frac{\phi_2}{\alpha_1}\right)
\Gamma\left(\frac{\phi_2}{\alpha_2}\right)\Gamma\left(p{+}\frac{\phi_2}{2}\right)}
{\alpha_1\alpha_2\Gamma\left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1}\right)
\Gamma\left(1{+}\frac{\phi_2}{\alpha_2}\right)\Gamma(p)}
\left(\frac{C}{q\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\phi_2}{2}}\\
&+\frac{2A_1A_2B_{1}^{\frac{\alpha_2\mu_2{-}\phi_1}{\alpha_1}}B_{2}^{\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}}
\Gamma\left(-\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1{-}\alpha_2\mu_2}{\alpha_2}\right)\Gamma\left(\mu_1{-}\frac{\alpha_2\mu_2}{\alpha_1}\right)
\Gamma(\mu_2)\Gamma\left(p{+}\frac{\alpha_2\mu_2}{2}\right)}
{\alpha_1\alpha_2\Gamma\left(1{-}\frac{\alpha_2\mu_2{-}\phi_2}
{\alpha_2}\right)\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}-\frac{\alpha_2\mu_2}{\alpha_1}\right)\Gamma(1{+}\mu_2)\Gamma(p)}
\left(\frac{C}{q\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\alpha_2\mu_2}{2}}\\
&+\frac{2A_1A_2B_{2}^{\frac{\phi_1{-}\phi_2}{\alpha_2}}\Gamma\left(-\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(\mu_2{-}\frac{\phi_1}{\alpha_2}\right)\Gamma\left(\mu_1{-}\frac{\phi_1}{\alpha_1}\right)
\Gamma\left(\frac{\phi_1}{\alpha_2}\right)\Gamma\left(p{+}\frac{\phi_1}{2}\right)}
{\alpha_{2}^{2}\Gamma\left(1{-}\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1}{-}\frac{\phi_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{+}\frac{\phi_1}{\alpha_2}\right)\Gamma(p)}
\left(\frac{C}{q\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\phi_1}{2}}\\
&+\frac{2A_1A_2B_{1}^{\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1}}B_{2}^{\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}}
\Gamma\left(-\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}\right)\Gamma\left(\mu_2{-}\frac{\alpha_1\mu_1}{\alpha_2}\right)
\Gamma\left(\frac{\phi_1}{\alpha_1}{-}\mu_1\right)\Gamma\left(\frac{\alpha_1\mu_1}{\alpha_2}\right)
\Gamma\left(p{+}\frac{\alpha_1\mu_1}{2}\right)}
{\alpha_{2}^{2}\Gamma\left(1{-}\frac{\alpha_1\mu_1{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{+}\frac{\phi_1}{\alpha_1}{-}\mu_1\right)
\Gamma\left(1{+}\frac{\alpha_1\mu_1}{\alpha_2}\right)\Gamma(p)}
\left(\frac{C}{q\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\alpha_1\mu_1}{2}}.
\tag{15}\label{15}
\end{align*}
\hrulefill
\end{figure*}
\subsubsection{Asymptotic Analysis}
The exact expression of the average BER is also involved, being given in terms of the BFHF. By inserting (9) into (13) and employing [33, Eq. (3.326.2)], the asymptotic average BER can be obtained as (15), shown at the top of the next page. As a double check, the asymptotic average BER again yields the system diversity order $G_d=\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$, in agreement with the OP analysis.
\subsection{Average Channel Capacity}
From \cite{22,28}, the ACC can be formulated as
\begin{align*}
&\overline{C}=\frac{1}{2\ln(2)}\int_{0}^{\infty} \ln(1+\gamma)f_{\gamma_o}(\gamma)d\gamma.
\tag{16}\label{16}
\end{align*}
Substituting (11) into (16), converting $\ln (1+\gamma)$ into its Meijer G-function representation [34, Eq. (8.4.6/5)], and then following [37, Eqs. (2.9) and (2.57)], we have
\begin{align*}
&\overline{C}=\frac{A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}\overline\gamma_{2}^{-\frac{\phi_2}{2}}
C^{\frac{\phi_2}{2}}}{2\ln(2)}{\rm {H}_{1,0:3,1:3,3}^{0,1:0,3:1,3}}\\
&\left [{\!\!\left .{ \begin{matrix} \left({1{+}\frac{\phi_1}{2}{-}\frac{\phi_2}{2};-\frac{\alpha_2}{2},\frac{\alpha_1}{2}}\right)
\\ -\\ (1,1) \left(1{-}\mu_2{+}\frac{\phi_2}{\alpha_2},1\right) \left(1{+}\frac{\phi_2}{2},\frac{\alpha_2}{2}\right) \\
(0,1)\\ (1,1) \left(1{-}\mu_1{+}\frac{\phi_1}{\alpha_1},1\right) \left(1{+}\frac{\phi_2}{2},\frac{\alpha_1}{2}\right)\\
\left(1{+}\frac{\phi_1}{2},\frac{\alpha_1}{2}\right) (0,1) \left(\frac{\phi_1}{2},\frac{\alpha_1}{2}\right) \end{matrix} }\right |\!
\frac{\overline\gamma_2^{\frac{\alpha_2}{2}}}{B_2C^{\frac{\alpha_2}{2}}}, \!\frac{\overline\gamma_{1}^{\frac{\alpha_1}{2}}}{B_1} \!\!}\right ].
\tag{17}\label{17}
\end{align*}
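The BFHF expression (17) can be cross-checked against a direct Monte Carlo estimate of (16), $\overline{C}=\frac{1}{2}\mathbb{E}[\log_2(1+\gamma_o)]$. The sketch below does this for a toy dual-hop fixed-gain AF link; the $\alpha$-$\mu$ envelope is sampled as $h=G^{1/\alpha}$ with $G\sim\mathrm{Gamma}(\mu,1/\mu)$, and the SNR normalisation to $\overline\gamma$ is an illustrative assumption, not the paper's exact channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

def alpha_mu_snr(alpha, mu, gbar, n):
    """Sample instantaneous SNRs of one alpha-mu hop (illustrative model)."""
    h = rng.gamma(mu, 1.0 / mu, n) ** (1.0 / alpha)   # alpha-mu envelope
    return gbar * h**2 / np.mean(h**2)                # normalise mean SNR

n, C = 200_000, 1.7
g1 = alpha_mu_snr(2.0, 1.0, 100.0, n)
g2 = alpha_mu_snr(2.0, 1.0, 100.0, n)
go = g1 * g2 / (g2 + C)               # e2e SNR of the fixed-gain AF relay
acc = 0.5 * np.mean(np.log2(1.0 + go))  # Monte Carlo estimate of (16)
```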
\section{Extension to Multi-Relay Systems}
\subsection{System Models}
The previous results apply only to the single-relay system. In this section, we consider a more general multi-relay cooperative system, as shown in Fig. 1. It is worth noting that, unlike an RF network, a THz link radiates a narrow beam whose energy is concentrated in a small area. Therefore, we assume that $K$ transmitters are needed to complete the signal transmission. More specifically, the signal at $\mathrm{S}_{i}, i\in\{1,...,K\}$, is transmitted to the relay $\mathrm{R}_{i}$, and the signal at $\mathrm{R}_{i}$ is then amplified and retransmitted to D. The fixed-gain AF protocol is still employed in this section. We assume that the relay channels are orthogonal to each other in time. In addition, both the best-relay-selection (BRS) and the conventional all-relay-participating (ARP) schemes are considered. Subsequently, the asymptotic SER expressions of the two schemes are derived and their performance is compared. For a fair comparison, we assume that the BRS and ARP schemes have the same total power.
For the ARP scheme, by using maximum ratio combining, the received signals from all branches are weighted and then combined to maximize the output SNR. Therefore, the received SNR at D is [44][45]
\begin{align*}
\gamma_{o}^{ARP} = \frac{1}{2K}\sum_{i=1}^{K}\gamma_{o,i},
\tag{18}\label{18}
\end{align*}
where the factor $1/(2K)$ accounts for the $K$ sources and $K$ relays each using $1/(2K)$ of the total transmit power, and $\gamma_{o,i}=\frac{\gamma_{1,i}\gamma_{i,2}}{\gamma_{i,2}+C}$ is the e2e SNR of the $\mathrm{S}_i$-$\mathrm{R}_{i}$-D link, with $\gamma_{1,i}$ and $\gamma_{i,2}$ denoting the instantaneous SNRs of the $\mathrm{S}_i$-$\mathrm{R}_{i}$ and $\mathrm{R}_{i}$-D links, respectively.
While for the BRS scheme, the output SNR can be expressed as [45, Eq. (2)]
\begin{align*}
\gamma_{o}^{BRS} = \frac{1}{2}\operatorname*{max}\limits_{i} \gamma_{o,i},
\tag{19}\label{19}
\end{align*}
where the factor $1/2$ indicates that the selected $\mathrm{S}_{i}$ and $\mathrm{R}_{i}$ are each allocated $1/2$ of the total transmit power. Next, using the previously obtained statistics, we derive the asymptotic SER expressions of the multi-relay systems. From [46], the SER with $M$-ary PSK modulation is given by
\begin{align*}
P_{e}^{Q}= \frac{1}{\pi}\int_{0}^{\pi-\frac{\pi}{M}}M_{\gamma_{o}^{Q}}\left(\frac{\sin^2(\pi/M)}{\sin^2\theta}\right)d\theta,
\tag{20}\label{20}
\end{align*}
where $Q\in \{ARP,BRS\}$. For $M=2$, the above expression reduces to the SER of BPSK modulation.
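As a quick sanity check of the MGF-based SER formula (20), recall that for BPSK ($M=2$) over a single Rayleigh hop, $M_\gamma(s)=(1+s\overline\gamma)^{-1}$, and the $\theta$-integral reproduces the classical closed form $\frac{1}{2}\big(1-\sqrt{\overline\gamma/(1+\overline\gamma)}\big)$. The sketch below verifies this numerically (the Rayleigh MGF is a textbook substitute here, not the THz channel MGF).

```python
import math
from scipy.integrate import quad

gbar = 10.0
# (20) with M = 2: integrand 1/(1 + gbar/sin^2(theta)), rewritten to avoid
# the removable singularity at theta = 0
ser, _ = quad(lambda th: math.sin(th)**2 / (math.sin(th)**2 + gbar),
              0, math.pi / 2)
ser /= math.pi
closed = 0.5 * (1.0 - math.sqrt(gbar / (1.0 + gbar)))
```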
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig1.eps}
\caption{Multi-relay cooperative system model.}
\end{figure}
\subsection{ARP scheme}
From (9), assuming that $\overline\gamma_1=\overline\gamma_2=\overline\gamma$, the CDF of the single relay system can be asymptotically rewritten as
\begin{align*}
F_{\gamma_{o,i}}(\gamma)\to \frac{D\gamma^{v}}{\overline\gamma^{G_d}},
\tag{21}\label{21}
\end{align*}
where $D$ is a constant and $v=\min\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\frac{\phi_2}{2},\frac{\alpha_2\mu_2}{2}\}$. The moment generating function (MGF) of $\gamma_{o,i}$ can be obtained from the definition $M_{\gamma}(s)\triangleq s\int_{0}^{\infty} e^{-\gamma s}F_{\gamma}(\gamma) d\gamma$ as [28, Eq. (12)]
\begin{align*}
M_{\gamma_{o,i}}(s)=\frac{D\Gamma(v+1)}{\overline\gamma^{G_d}}s^{-v}.
\tag{22}\label{22}
\end{align*}
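The step from (21) to (22) is a one-line Mellin-type integral; it can be confirmed symbolically, e.g. with SymPy (used here purely as a verification tool):

```python
import sympy as sp

# Symbolic check of (22): M(s) = s * ∫_0^∞ e^{-s γ} F(γ) dγ with the
# power-law CDF (21), F(γ) = D γ^v / gbar^Gd.
g, s, D, v, gbar, Gd = sp.symbols('gamma s D v gbar Gd', positive=True)
F = D * g**v / gbar**Gd
M = sp.simplify(s * sp.integrate(sp.exp(-s * g) * F, (g, 0, sp.oo)))
expected = D * sp.gamma(v + 1) / gbar**Gd * s**(-v)
```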
From [47, Eq. (7)], the MGF of $\gamma_{o}^{ARP}$ is expressed as
\begin{align*}
M_{\gamma_{o}^{ARP}}(s)=\left[M_{\gamma_{o,i}}\left(\frac{s}{2K}\right)\right]^K.
\tag{23}\label{23}
\end{align*}
By substituting (23) into (20) and assuming that $M=2$, the asymptotic SER of the ARP scheme is derived as
\begin{align*}
P_{e}^{ARP}\to \frac{D^K(\Gamma(v+1))^K\Gamma(Kv+\frac{1}{2})(2K)^{Kv}}{2\sqrt{\pi}\overline\gamma^{KG_d}\Gamma(Kv+1)}.
\tag{24}\label{24}
\end{align*}
Equation (24) reveals that the diversity order of the ARP system is $G_{d}^{ARP}=K\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$, which depends on both the number of relays and the fading parameters.
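The factor $\Gamma(Kv+\frac{1}{2})/(2\sqrt{\pi}\,\Gamma(Kv+1))$ appearing in (24) (and again in (27)) comes from the BPSK $\theta$-integral of $s^{-Kv}$ with $s=\sin^{-2}\theta$, i.e. $\frac{1}{\pi}\int_0^{\pi/2}\sin^{2a}\theta\, d\theta = \Gamma(a+\frac{1}{2})/(2\sqrt{\pi}\,\Gamma(a+1))$. A quick numerical confirmation, with $a$ playing the role of $Kv$ (the value $a=1.5$ is arbitrary):

```python
import math
from scipy.integrate import quad

a = 1.5   # plays the role of K*v
lhs, _ = quad(lambda th: math.sin(th) ** (2 * a), 0, math.pi / 2)
lhs /= math.pi
rhs = math.gamma(a + 0.5) / (2 * math.sqrt(math.pi) * math.gamma(a + 1))
```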
\subsection{BRS scheme}
For the BRS scheme, the CDF of $\gamma_{o}^{BRS}$ can be written as $F_{\gamma_{o}^{BRS}}(\gamma)=[F_{\gamma_{o,i}}(\gamma)]^{K}$ [47, Eq. (13)]. With the aid of (19) and (21), the asymptotic CDF of the BRS system is given by
\begin{align*}
F_{\gamma_{o}^{BRS}}(\gamma)\to \frac{D^K2^{Kv}\gamma^{Kv}}{\overline\gamma^{KG_d}}.
\tag{25}\label{25}
\end{align*}
Therefore, the MGF of $\gamma_{o}^{BRS}$ is derived as
\begin{align*}
M_{\gamma_{o}^{BRS}}(s)= \frac{D^K2^{Kv}\Gamma(Kv+1)}{\overline\gamma^{KG_d}}s^{-Kv}.
\tag{26}\label{26}
\end{align*}
By inserting (26) into (20) and considering the BPSK modulation, the asymptotic SER of the BRS scheme can be formulated as
\begin{align*}
P_{e}^{BRS}\to \frac{D^K\Gamma(Kv+\frac{1}{2})2^{Kv}}{2\sqrt{\pi}\overline\gamma^{KG_d}}.
\tag{27}\label{27}
\end{align*}
From the above expression, it can be noticed that the BRS scheme achieves the same diversity order as the ARP scheme. As such, we obtain the ratio
\begin{align*}
\frac{P_{e}^{BRS}}{P_{e}^{ARP}}= \frac{\Gamma(Kv+1)}{[\Gamma(v+1)]^K}\left(\frac{1}{K}\right)^{Kv}.
\tag{28}\label{28}
\end{align*}
From (28), the ratio is always smaller than 1 when $K\geq 2$. Therefore, we can conclude that, under the total power constraint, the BRS scheme is superior to the ARP scheme in terms of SER performance. In fact, the BRS scheme tracks the fluctuations of all channels and assigns the best-conditioned channel for transmission. In addition, both S and R need transmit power to forward the information to the next node. If the total system power is limited to 1 and all sources and relays have the same transmit power, the transmit power at each node in the ARP scheme is $1/(2K)$, whereas in the BRS scheme the transmit power of the selected S and R is $1/2$ each. Since the selection diversity gain stems from the fluctuation of the channel fading, a larger transmit power at the source amplifies the channel variation and thereby increases the selection diversity gain. Similar work has also observed this phenomenon [48].
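The claim that the ratio (28) is below 1 for $K\geq 2$ (and equals 1 for $K=1$) is easy to verify numerically; the value $v=0.5$ below is an arbitrary illustrative choice:

```python
from math import gamma

def ser_ratio(K, v):
    """P_e^BRS / P_e^ARP from Eq. (28)."""
    return gamma(K * v + 1) / gamma(v + 1) ** K * (1.0 / K) ** (K * v)

# Equals 1 for a single relay, drops below 1 for every K >= 2
ratios = [ser_ratio(K, 0.5) for K in range(2, 7)]
```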
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig2.eps}
\caption{The OP of the dual-hop THz system versus $\overline\gamma$ with different
fading parameters and pointing errors.}
\end{figure}
\section{Numerical Results and Discussions}
In this section, Monte Carlo simulations are employed to verify the accuracy of the analytical results. Without loss of generality, a threshold $\gamma_{th}=2$ dB and a fixed relay gain $C=1.7$ are considered. In addition, we assume that the average SNRs, fading parameters, and pointing errors of the two THz hops are equal, that is, $\overline\gamma_1=\overline\gamma_2=\overline\gamma$, $\alpha_1=\alpha_2=\alpha$, $\mu_1=\mu_2=\mu$, and $\phi_1=\phi_2=\phi$, except in Fig. 3 and Fig. 6. Unless otherwise stated, the transmission distances of the two links are $d_1=d_2=d_o/2$, where $d_o$ is the total transmission distance of the dual-hop THz system; the frequency and antenna gains are set to $f=300$ GHz and $G_t=G_r=55$ dBi, respectively; and standard environmental conditions are considered, i.e., relative humidity $\varrho=50\%$, atmospheric pressure $p_a=101325$ Pa, and temperature $T=296$ K.
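A minimal Monte Carlo sketch of the OP simulation is shown below. The per-hop gain combines an $\alpha$-$\mu$ envelope with a pointing error factor sampled by inverse-CDF as $h_p=A_0 U^{1/\phi}$ (here $A_0=1$); normalising the squared gain to the target average SNR is an illustrative assumption, since the path loss model of Section II is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

def hop_snr(alpha, mu, phi, gbar, n):
    """Instantaneous SNRs of one hop: alpha-mu fading x pointing error."""
    hf = rng.gamma(mu, 1.0 / mu, n) ** (1.0 / alpha)  # alpha-mu envelope
    hp = rng.random(n) ** (1.0 / phi)                 # pointing error, A0 = 1
    h2 = (hf * hp) ** 2
    return gbar * h2 / np.mean(h2)                    # normalise mean SNR

n, C = 500_000, 1.7
gth = 10 ** (2 / 10)                                  # gamma_th = 2 dB
g1 = hop_snr(2.0, 1.0, 3.6333, 10 ** (30 / 10), n)    # gbar = 30 dB per hop
g2 = hop_snr(2.0, 1.0, 3.6333, 10 ** (30 / 10), n)
go = g1 * g2 / (g2 + C)                               # fixed-gain AF e2e SNR
op = np.mean(go < gth)                                # empirical OP
```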
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig3.eps}
\caption{The OP of the dual-hop THz system for varying fading parameters and
pointing errors along with the asymptotic results.}
\end{figure}
Fig. 2 presents the OP of the mixed dual-hop THz system versus $\overline\gamma$ for varying fading parameters and pointing errors. One can observe that the outage performance improves as the fading parameters increase, since larger values of $\alpha_1$, $\mu_1$, $\alpha_2$, and $\mu_2$ indicate less severe fading on the THz links. Additionally, pointing errors have a significant impact on the outage performance. The pointing error parameter $\phi_i$ is related to $\sigma_{s,i}$: the larger the value of $\sigma_{s,i}$, the smaller the value of $\phi_i$ and the stronger the pointing error effect, which leads to worse outage performance.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig4.eps}
\caption{The OP of single THz and dual-hop THz links with different
pointing errors for a total propagation distance of 100 m.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig5.eps}
\caption{The average BER for different modulation schemes of single THz and dual-hop THz links with a total length of 80 m.}
\end{figure}
To verify the diversity order obtained from the asymptotic OP, Fig. 3 plots the OP versus $\overline\gamma$ for different fading parameters and pointing errors. As can be noticed, the asymptotic results tightly match the analytical OP at high SNRs. In addition, the curves have the same or different slopes as the values of $\alpha_1$, $\alpha_2$, $\mu_1$, $\mu_2$, $\phi_1$, and $\phi_2$ change, which is consistent with the diversity order $G_d=\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$. Moreover, the OP decreases as these fading parameters increase. Therefore, a large diversity order is obtained under weak fading and pointing errors, while strong fading and pointing errors result in a small diversity order.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig6.eps}
\caption{The average BER versus $\overline\gamma$ with different fading parameters and
pointing errors along with the asymptotic results.}
\end{figure}
Fig. 4 shows the OP of the single-hop and dual-hop THz systems for a total transmission distance of $d_o=100$ m. The multipath fading parameters are set to $\alpha=2$ and $\mu=1$. For the dual-hop system, we consider $d_1=d_2=50$ m. One can see that, under the same fading conditions and pointing errors, the OP of the dual-hop relaying THz system is lower than that of the single THz link. Moreover, as the value of $\phi$ decreases, the impact of pointing errors becomes stronger and the outage performance worsens.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig7.eps}
\caption{The ACC versus $d_o$ with different multipath fading parameters and average SNR $\overline\gamma$ when $\phi=3.6333$.}
\end{figure}
In Fig. 5, considering BPSK and DPSK modulation, the average BER of the dual-hop system and the single link is presented. In this setup, the attenuation parameters are set to $\alpha=2$, $\mu=3$, $\phi=2.0437$, and $d_o=80$ m. The simulation results perfectly match the numerically evaluated ones for both modulation schemes, and the dual-hop scheme achieves better error performance than the single THz link under both. Moreover, BPSK offers a lower average BER than DPSK, as expected.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig8.eps}
\caption{The ACC of the dual-hop THz system with different transmit distance and pointing errors.}
\end{figure}
Fig. 6 plots the average BER versus $\overline\gamma$ for varying fading parameters and pointing errors, together with the asymptotic average BER given by (15). The proposed asymptotic result exhibits excellent tightness and accuracy at high SNRs. Also, the larger the value of $\alpha_1$ or $\mu_1$, the lower the average BER. Furthermore, as $\phi_1$ or $\phi_2$ increases, the error performance of the dual-hop system improves.
Fig. 7 illustrates the ACC versus the total propagation distance $d_o$ of the dual-hop system for different multipath fading parameters and $\overline\gamma$ when $\phi=3.6333$. The ACC decreases as $d_o$ increases, because a longer propagation distance incurs a larger path loss and thus reduces the capacity. In addition, the ACC for different $\overline\gamma$ is plotted; as expected, the average capacity grows with $\overline\gamma$. One can also see that the ACC takes larger values under weaker fading.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{Fig9.eps}
\caption{SER performance comparison: ARP versus BRS.}
\end{figure}
The curves of the ACC versus $\overline\gamma$ for different propagation distances and pointing errors are plotted in Fig. 8. In this setup, the fading parameters are set to $\alpha=2$ and $\mu=1$. The ACC is significantly affected by the transmission distance: as $d_o$ increases, the path attenuation grows, resulting in a significant degradation of the system capacity. In addition, this figure shows the impact of pointing errors on the ACC; for a fixed $d_o$, the ACC performance improves as the pointing error effect changes from strong to weak.
Fig. 9 depicts the SER performance comparison between the ARP and BRS schemes. In this setup, the fading parameters are set to $\alpha_1=1.2$, $\mu_1=3$, $\alpha_2=1.3$, $\mu_2=2$, $\phi_1=1$, $\phi_2=3.6333$, and the total transmission distance is $d_o=100$ m. The asymptotic results match the simulation results perfectly in the high-SNR regime. For $K=1$, the system has only a single relay, so the SER of the BRS scheme coincides with that of the ARP case, as expected. In addition, the two schemes have the same slopes, which implies that the BRS scheme retains the diversity order of the ARP case, namely $G_{d}^{Q}=K\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$. Moreover, as $K$ increases, the SER of both schemes decreases, with the BRS scheme performing better than the ARP scheme.
\section{Conclusion}
In this paper, we have investigated the performance of dual-hop THz systems with fixed-gain AF relays. Taking path loss, multipath fading, and pointing errors into account, exact analytical expressions of the OP, average BER, and ACC were derived, along with accurate and tight asymptotic results for the OP and average BER. The results demonstrated that multipath fading, pointing errors, and transmission distance significantly affect the performance of the dual-hop THz system. We also showed that the diversity order of the mixed dual-hop THz system is $G_d=\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$, i.e., it is determined by the multipath fading parameters and the pointing errors. In addition, for the multi-relay system, the asymptotic SER expressions of both the ARP and BRS schemes were derived. It was proven that BRS fixed-gain AF relaying achieves the same diversity order $G_{d}^{Q}=K\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$ as, and a lower SER than, the ARP relaying.
\appendices
\section{CDF of The E2E SNR}
From (6), by defining $\mathcal{I}_1=\int_{0}^{\infty}F_{\gamma_2}\left(\frac{C\gamma}{x}\right)f_{\gamma_1}(x+\gamma)dx$, (6) can be rewritten as
\begin{align*}
F_{\gamma_o}(\gamma)=F_{\gamma_1}(\gamma)+\mathcal{I}_{1},
\tag{A.1}\label{A.1}
\end{align*}
where $F_{\gamma_1}(\gamma)$ is given in (5). Substituting (4) and (5) into $\mathcal{I}_1$ and consequently applying [33, Eqs. (9.301), (3.194.3) and (8.384.1)], we have
\begin{align*}
\mathcal{I}_1 &= \frac{2A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}
\overline\gamma_{2}^{-\frac{\phi_2}{2}}C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}}}{\alpha_2}\\
&\times\frac{1}{(2\pi i)^2}{\int\limits_{\ell_{1}}}{\int\limits_{\ell_{2}}}
\Gamma\left(\frac{\phi_2}{2}{-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t{-}\frac{\alpha_2}{2}s\right)\\
&\times\frac{\Gamma(s)\Gamma\left(s{+}\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{-}\frac{\phi_2}{2}{+}\frac{\alpha_2}{2}s\right)\Gamma\left(\frac{\phi_2}{\alpha_2}{-}s\right)}
{\Gamma\left(s{+}1\right)\Gamma\left(1{+}\frac{\phi_2}{\alpha_2}{-}s\right)}\\
&\times\frac{\Gamma(t)\Gamma(t+\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1})}
{\Gamma(t{+}1)\Gamma\left(1{-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t\right)}\\&\times
\left(B_2\left(\frac{C}{\overline\gamma_2}\right)^{\frac{\alpha_2}{2}}\right)^{-s}
\left(B_1\left(\frac{\gamma}{\overline\gamma_{1}}\right)^{\frac{\alpha_1}{2}}\right)^{-t}dsdt,
\tag{A.2}\label{A.2}
\end{align*}
where $\ell_1$ and $\ell_2$ stand for the contours in the $s$-plane and the $t$-plane, respectively.
With the aid of (5) and (A.2) and by employing the definition of the BFHF, the CDF of the AF relaying system can be obtained as (7).
\section{Asymptotic CDF of The E2E SNR}
In this appendix, the asymptotic CDF of the considered system is derived. Specifically, when $\overline\gamma_1\to \infty$, the asymptotic expression of $F_{\gamma_1}(\gamma)$ is obtained by applying [40, Eq. (07.34.06.0040.01)] as
\begin{align*}
F_{\gamma_1}&(\gamma)\to \frac{2A_1\Gamma\left(\frac{\alpha_1\mu_1-\phi_1}{\alpha_1}\right)\Gamma\left(\frac{\phi_1}{\alpha_1}\right)
\gamma^{\frac{\phi_1}{2}}}{\alpha_1\Gamma\left(1+\frac{\phi_1}{\alpha_1}\right)}
\overline\gamma_{1}^{-\frac{\phi_1}{2}}\\
&+\frac{2A_1B_{1}^{\frac{\alpha_1\mu_1-\phi_1}{\alpha_1}}\Gamma\left(-\frac{\alpha_1\mu_1-\phi_1}{\alpha_1}\right)
\gamma^{\frac{\alpha_1\mu_1}{2}}}
{\alpha_1\mu_1\Gamma\left(1-\frac{\alpha_1\mu_1-\phi_1}{\alpha_1}\right)}\overline\gamma_{1}^{-\frac{\alpha_1\mu_1}{2}}.
\tag{B.1}\label{B.1}
\end{align*}
Moreover, relying on [37, Eq. (1.2)], $\mathcal{I}_1$ in (A.2) can be rewritten as
\begin{align*}
&\mathcal{I}_1 = \frac{2A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}
\overline\gamma_{2}^{-\frac{\phi_2}{2}}C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}}}{\alpha_2}\\
&\times\frac{1}{2\pi i}{\int\limits_{\ell_{1}}}\frac{\Gamma(s)\Gamma\left(s{+}\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{-}\frac{\phi_2}{2}{+}\frac{\alpha_2}{2}s\right)\Gamma\left(\frac{\phi_2}{\alpha_2}{-}s\right)}
{\Gamma\left(s{+}1\right)\Gamma\left(1{+}\frac{\phi_2}{\alpha_2}{-}s\right)}\\
&\times\, {\mathrm{H}}_{2,3}^{3,0}\left [{{\frac{B_1\gamma^{\frac{\alpha_1}{2}}}{\overline\gamma_{1}^{\frac{\alpha_1}{2}}}}
\left |{ \begin{matrix} {(1,1), \left(1{-}\frac{\phi_1}{2},\frac{\alpha_1}{2}\right)}
\\ {\left(\frac{\phi_2}{2}{-}\frac{\phi_1}{2}{-}\frac{\alpha_2}{2}s,\frac{\alpha_1}{2}\right) (0,1) \left(\frac{\alpha_1\mu_1-\phi_1}{\alpha_1},1\right)}
\\ \end{matrix} }\right . }\right ]\!\\&\times
\left(B_2C^{\frac{\alpha_2}{2}}\overline\gamma_{2}^{-\frac{\alpha_2}{2}}\right)^{-s}ds.
\tag{B.2}\label{B.2}
\end{align*}
Assuming $\overline\gamma_1\to \infty$ and using [41, Eq. (1.8.4)] yields
\begin{align*}
\, {\mathrm{H}}_{2,3}^{3,0}&\left [{{\frac{B_1\gamma^{\frac{\alpha_1}{2}}}{\overline\gamma_{1}^{\frac{\alpha_1}{2}}}}
\left |{ \begin{matrix} {(1,1), \left(1{-}\frac{\phi_1}{2},\frac{\alpha_1}{2}\right)}
\\ {\left(\frac{\phi_2}{2}{-}\frac{\phi_1}{2}{-}\frac{\alpha_2}{2}s,\frac{\alpha_1}{2}\right) (0,1) \left(\frac{\alpha_1\mu_1-\phi_1}{\alpha_1},1\right)}
\\ \end{matrix} }\right . }\right ]\! \\
\to & \frac{2}{\alpha_1}\frac{\Gamma\left(\frac{\phi_1{-}\phi_2}{\alpha_1}+\frac{\alpha_2}{\alpha_1}s\right)
\Gamma\left(\mu_1{-}\frac{\phi_2}{\alpha_1}{+}\frac{\alpha_2}{\alpha_1}s\right)}
{\Gamma\left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1}{+}\frac{\alpha_2}{\alpha_1}s\right)
\Gamma\left(1{-}\frac{\phi_2}{2}+\frac{\alpha_2}{2}s\right)}\\
&\times \left(B_1\overline\gamma_{1}^{-\frac{\alpha_1}{2}}
\gamma^{\frac{\alpha_1}{2}}\right)^{\frac{\phi_2{-}\phi_1{-}\alpha_2s}{\alpha_1}}.
\tag{B.3}\label{B.3}
\end{align*}
Plugging (B.3) into (B.2), using [37, Eq. (1.2)], and performing some algebraic manipulations, we obtain
\begin{align*}
\mathcal{I}_1 &\underset{\overline\gamma_{1} \to \infty}\approx \frac{4A_1A_2B_{1}^{\frac{\phi_2{-}\phi_1}{\alpha_1}}}{\alpha_1\alpha_2}\left(\frac{C\gamma}{\overline\gamma_{1}
\overline\gamma_{2}}\right)^{\frac{\phi_2}{2}}\\
&\times\, {\mathrm{H}}_{3,5}^{4,1}\left [{{B_{1}^{\frac{\alpha_2}{\alpha_1}}B_2
\left(\frac{C\gamma}{\overline\gamma_{1}\overline\gamma_{2}}\right)^{\frac{\alpha_2}{2}}}
\left |{ \begin{matrix} {\kappa_1}
\\ {\kappa_2}
\\ \end{matrix} }\right . }\right ]\!,
\tag{B.4}\label{B.4}
\end{align*}
where
$\kappa_1=\left\{\left(1{-}\frac{\phi_2}{\alpha_2},1\right) (1,1) \left(1{-}\frac{\phi_2{-}\phi_1}{\alpha_1},\frac{\alpha_2}{\alpha_1}\right)\right\}$,
$\kappa_2=\left\{(0,1) \left(\frac{\alpha_2\mu_2-\phi_2}{\alpha_2},1\right) \left(\frac{\phi_1{-}\phi_2}{2},\frac{\alpha_2}{\alpha_1}\right)
\left(\mu_1{-}\frac{\phi_2}{\alpha_1},\frac{\alpha_2}{\alpha_1}\right) \left(-\frac{\phi_2}{\alpha_2},1\right)\right\}$. Subsequently, assuming $\overline\gamma_2\to \infty$ and using [41, Eq. (1.8.4)], $\mathcal{I}_1$ can be further simplified. Finally, taking advantage of the asymptotic expression of $\mathcal{I}_1$ and (B.1), we get the asymptotic CDF of the considered system.
\section{PDF of The E2E SNR}
Plugging (4) into (10), applying [33, Eq. (9.301)], and performing a series of transformations, we have
\begin{align*}
f_{\gamma_o}&(\gamma) = A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}
\overline\gamma_{2}^{-\frac{\phi_2}{2}}C^{\frac{\phi_2}{2}}\gamma^{\frac{\phi_1}{2}{-}1}\\
&\times\frac{1}{(2\pi i)^2}{\int\limits_{\ell_{1}}}{\int\limits_{\ell_{2}}}
\Gamma\left(\frac{\phi_2}{2}{-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t{-}\frac{\alpha_2}{2}s\right)\\
&\times\frac{\Gamma(s)\Gamma\left(s{+}\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left({-}\frac{\phi_2}{2}{+}\frac{\alpha_2}{2}s\right)}
{\Gamma\left(s{+}1\right)}\\
&\times\frac{\Gamma(t)\Gamma(t+\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1})}
{\Gamma(t{+}1)\Gamma\left({-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t\right)}\\&\times
\left(B_2\left(\frac{C}{\overline\gamma_2}\right)^{\frac{\alpha_2}{2}}\right)^{-s}
\left(B_1\left(\frac{\gamma}{\overline\gamma_{1}}\right)^{\frac{\alpha_1}{2}}\right)^{-t}dsdt.
\tag{C.1}\label{C.1}
\end{align*}
By employing [37, Eqs. (2.56) and (2.57)], the PDF expression (C.1) can be rewritten as (11).
\section{Average BER of The E2E SNR}
For a single THz link, by inserting (5) into (13), the average BER $\overline{P}_{e1}$ can be derived by using
[40, Eqs. (07.34.03.0228.01) and (07.34.21.0012.01)] as
\begin{align*}
\overline{P}_{e1}&=\frac{A_1(q\overline\gamma_{1})^{-\frac{\phi_1}{2}}}{\Gamma(p)\alpha_1}\\ &\times
\, {\mathrm{H}}_{2,3}^{2,1}\left [{{\frac{B_1}{(q\overline\gamma_1)^{\frac{\alpha_1}{2}}}}
\left |{ \begin{matrix} {\left(1{-}p{-}\frac{\phi_1}{\alpha_1},\frac{\alpha_1}{2}\right), \left(1{-}\frac{\phi_1}{\alpha_1},1\right), (1,1)}
\\ {(0,1),\left(\frac{\alpha_1\mu_1-\phi_1}{\alpha_1},1\right), \left(-\frac{\phi_1}{\alpha_1},1\right)}
\\ \end{matrix} }\right . }\right ]\!.
\tag{D.1}\label{D.1}
\end{align*}
In addition, substituting (A.1) into (13) and applying [33, Eq. (3.326.2)] yields
\begin{align*}
\overline{P}_{e}& = \overline{P}_{e1}+\frac{A_1A_2\overline\gamma_{1}^{-\frac{\phi_1}{2}}
\overline\gamma_{2}^{-\frac{\phi_2}{2}}C^{\frac{\phi_2}{2}}q^{-\frac{\phi_1}{2}}}{\alpha_2\Gamma(p)}\\
&\times\frac{1}{(2\pi i)^2}{\int\limits_{\ell_{1}}}{\int\limits_{\ell_{2}}}
\Gamma\left(\frac{\phi_2}{2}{-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t{-}\frac{\alpha_2}{2}s\right)\\
&\times\frac{\Gamma(s)\Gamma\left(s{+}\frac{\alpha_2\mu_2{-}\phi_2}{\alpha_2}\right)
\Gamma\left(1{-}\frac{\phi_2}{2}{+}\frac{\alpha_2}{2}s\right)\Gamma\left(\frac{\phi_2}{\alpha_2}{-}s\right)}
{\Gamma\left(s{+}1\right)\Gamma\left(1{+}\frac{\phi_2}{\alpha_2}{-}s\right)}\\
&\times\frac{\Gamma(t)\Gamma(t+\frac{\alpha_1\mu_1{-}\phi_1}{\alpha_1})\Gamma\left(\frac{\phi_1}{2}{+}p{-}\frac{\alpha_1}{2}t\right)}
{\Gamma(t{+}1)\Gamma\left(1{-}\frac{\phi_1}{2}{+}\frac{\alpha_1}{2}t\right)}\\&\times
\left(B_2\left(\frac{C}{\overline\gamma_{2}}\right)^{\frac{\alpha_2}{2}}\right)^{-s}
\left(B_1\left(\frac{1}{\overline\gamma_{1}q}\right)^{\frac{\alpha_1}{2}}\right)^{-t}dsdt.
\tag{D.2}\label{D.2}
\end{align*}
Combining (D.1) and (D.2) with [37, Eq. (2.57)], the average BER of the dual-hop relaying system is obtained as in (14).
\section{Introduction}
\label{sec:introduction}
In terms of global market capitalization, 5G technologies are projected to be worth over USD 12.3 trillion by 2035 \cite{kim2018network}, and network slicing is seen as the key enabling technology that can bring up to 150\% increased revenues for operators, in comparison with the classical one-big-network concept \cite{EricssonAndBT-17}. The idea of network slicing in 5G came from the telecommunication industry alliance NGMN in February 2015 \cite{alliance20155g} and shortly afterwards was accepted by 3GPP \cite{3gpp2015-22-891} as an enabling technology that will bring new services and markets.
The role of network slicing is to enable functional and operational diversity on a common network infrastructure \cite{3gpp}.
The idea is to create multiple isolated networks, termed Network Slice Instances (NSIs), on a common physical infrastructure where physical and virtual resources of each NSI are customized to satisfy the requirements for a specific communication service.
Fig. \ref{lifecycle} presents the management phases of a NSI: 1. preparation; 2. commissioning; 3. operation; and 4. decommissioning.
The preparation phase includes all steps required before the creation of a NSI (creation and verification of network slice template, evaluation of network slice requirements, capacity planning). The lifecycle of a NSI starts with the second phase.
During the commissioning phase, the NSI is created and all resources for the NSI are allocated and instantiated.
In the operation phase, the NSI supports a communication service. The NSI is first activated; afterwards, performance reporting for KPI monitoring takes place, and the NSI may be modified and de-activated.
The last phase of the NSI lifecycle and NSI management includes termination of the NSI by releasing the dedicated resources and removing the NSI-specific configuration from the shared resources. After this phase, the NSI no longer exists.
\begin{figure}
\includegraphics[width=3.3in]{lifycycle}
\caption{Management aspects of network slice instance \cite{3gpp2}. We propose a model based on combinatorial designs for the creation and modification steps (represented with thick frames).}
\label{lifecycle}
\end{figure}
The slicing is performed end-to-end (E2E) \cite{DBLP:journals/corr/LiWPU16,8334921}. Thus, a NSI contains Network Slice Subnet Instances (NSSIs) in the New-Generation Radio Access Network (AN) and the Core Network (CN), referred to as AN and CN NSSIs in Fig. \ref{architecture}, and the interconnections between them. A NSSI is a set of network functions (NFs), which can be physical NFs or virtualized NFs. If the NFs are interconnected, the 3GPP management system contains the information relevant to the connections between these NFs, such as the topology of connections and individual link requirements. Fig. \ref{architecture} shows that one NSI may support a single (e.g. NSI 1) or multiple communication services (e.g. NSI 3). AN and CN NSSIs can be dedicated to one NSI (e.g. CN NSSI 1) or shared by two or more NSIs (e.g. CN NSSI 4).
\begin{figure}
\includegraphics[width=3.3in]{architecture}
\caption{Five services supported by seven NSIs. The NSIs contain NFs, belonging to CN and AN NSSIs, and the interconnection information between the NFs.}
\label{architecture}
\end{figure}
A tenant issues a communication service request, which is translated into a slice request (network functions and infrastructure requirements) for the Mobile Network Operator (MNO). The following management functions manage the NSIs to support communication services: the Communication Service Management Function (CSMF), the Network Slice Management Function (NSMF) and the Network Slice Subnet Management Function (NSSMF).
The CSMF receives the communication service related requirements from the tenant and converts them into network slice related requirements, which are sent to the NSMF. The NSMF manages and orchestrates the NSI. It configures the NSIs and knows which NSSIs are associated with each NSI (cross-domain management and orchestration (M\&O)). One NSSI can be associated with multiple NSIs, and the NSSMF manages and orchestrates the NSSIs.
The network slice is instantiated and configured by the NSMF, which also manages the interactions among the slice instances in terms of resource and feature sharing (cross-slice M\&O).
For instance, in Fig. \ref{architecture}, both AN NSSI 1 in the access part and CN NSSI 1 in the core part first have to be defined and instantiated. Then NSI 1 is instantiated by combining these two NSSIs.
In spite of the vast number of articles devoted to network slicing, it comes as a surprise that there are still no general, precise mathematical models for network slicing, and building such models is a challenging task, as suggested in \cite{ABIResearchAndIntel2018,kim2018network}.
Moreover, even the taxonomy used by different standardization organizations (for example, 3GPP and IETF) is not agreed upon, although they address the same slicing scenarios. For example, what is referred to as "hard slicing" by IETF is referred to as a non-shared network slice subnet instance by 3GPP (see Definition \ref{def:HardNetworkSlicingIETF} and Definition \ref{def:Non-shared3GPP} below). Similarly, "soft slicing" by IETF (Definition \ref{def:SoftNetworkSlicingIETF}) corresponds to "shared constituent of network slice instance" (Definition \ref{def:Shared3GPP}) by 3GPP.
\begin{definition} [IETF \cite{geng2017network}]\label{def:HardNetworkSlicingIETF}
\emph{Hard slicing} refers to the provision of resources in such a way that they are dedicated to a specific network slice instance.
\end{definition}
\begin{definition} [3GPP \cite{3gpp1}]\label{def:Non-shared3GPP}
A NSSI that is dedicated to one NSI and is not shared as a constituent by two or more NSSI(s) is called \emph{a non-shared NSSI}.
\end{definition}
\begin{definition} [IETF \cite{geng2017network}]\label{def:SoftNetworkSlicingIETF}
\emph{Soft slicing} refers to the provision of resources in such a way that whilst the slices are separated such that they cannot statically interfere with each other, they can interact dynamically, which means they may compete for some particular resource at some specific time.
\end{definition}
\begin{definition} [3GPP \cite{3gpp1}]\label{def:Shared3GPP}
A NSSI may be shared by two or more NSIs, this is called \emph{a shared constituent of NSI}. A NF may be shared by two or more NSSI(s), in which case it is called \emph{a shared constituent of NSSI}.
\end{definition}
\subsection{Related Work}
The ideas for network slicing originate from the areas of Cloud Computing \cite{regalado2011coined}, Software Defined Networks (SDN) proposed by IETF \cite{yang2004forwarding}, Network Functions Virtualisation (NFV) \cite{etsi2013001} and Information-Centric Networking (ICN) \cite{ghodsi2011information}. One of the major research problems is resource allocation across slices. Several works address the slicing of radio access network resources or cross-domain slicing at the VNF level. We mention here some of the most prominent mathematical models developed for network slicing.
Reference \cite{8329496} presents a mathematical model to construct network slice requests and to map them on the network infrastructure. The mapping is performed at the VNF level: first the VNFs are placed on the nodes in the network, and then the paths between the VNFs are selected and chained.
With the aim of maximizing the long-term network utility, reference \cite{8382171} uses a genetic algorithm to serve slice requests.
Network slicing brings new business models and interactions between the infrastructure providers, the tenants and the customers. This opens many directions for optimizations.
The algorithm for admission and allocation of network slices requests in \cite{8057045} maximizes the infrastructure provider's revenue and ensures that the service guarantees provided to tenants are satisfied.
\subsection{Our Contribution}
In this paper, we offer a mathematical model for the Network Slice Management Function (NSMF) based on combinatorial designs and their algebraic properties.
We see our contribution as one step closer to a general, precise and scalable mathematical model for network slicing. In particular, our mathematical model addresses the tasks of the NSMF in the creation and modification sub-phases of the NSI lifecycle (phases 2 and 3 in Fig. \ref{lifecycle}).
The model uses combinatorial objects known as Latin squares (or Latin rectangles) to describe communication services and the NSSIs. Combinatorial designs \cite{Colbourn:2006:HCD:1202540} have been used for a long time in communications, networking and cryptography. References \cite{10.1007/978-3-642-40552-5_15,6875415,7034478} apply combinatorial designs to network coding. The authors in \cite{Colbourn1999ApplicationsOC} listed thirteen application areas of combinatorial designs, and in this paper we extend the list with one more application, i.e., the configuration of network slices in 5G.
The mathematical properties of our model guarantee conflict resolution for services defined over network slices that compete for resources in CN and AN, as long as the configuration and modification of NSI and NSSI are performed within our model.
The next contribution of this paper is from an optimization point of view. We introduce the notion of a utilization ratio function, which aims to describe the functional dependencies between the number of used network resources and the waiting time for establishing the network slice. We present two strategies for the work of the NSMF, a non-optimized first-come-first-serve strategy and an optimal strategy, where the optimization objectives are: 1. to maximize the utilization of the network components; and 2. to decrease the average delay time from slice request to slice activation.
Finally, we show some simulation results. The optimal strategy, achieved by maximizing the utilization ratio function, performs more than twice as well in terms of both objectives compared to the non-optimized strategy.
The rest of this paper is organized as follows. In Section \ref{example}, we give examples of modeling network slicing with combinatorial designs. In Section \ref{sysModel}, we develop a general and extended combinatorial-design model for cross-domain end-to-end network slicing that includes both hard and soft slicing. In Section \ref{simul}, we instantiate our general model with several concrete attributes and present algorithms for simulation and optimization of a NSMF for that model. Section \ref{conc} concludes the paper.
\section{Examples of cross-domain network slices} \label{example}
Fig. \ref{architecture} shows five services that are provided on the same infrastructure. The resources in the access network part, such as bandwidth, computing and storage, are represented with 6 AN NSSIs, whereas the resources in the core network part are represented with 6 CN NSSIs. AN and CN NSSIs can be associated with one or multiple NSI(s).
\begin{table}[]
\caption{A rectangular scheme, with services as rows, AN NSSIs as columns, and CN NSSIs as table entries, representing the E2E slicing described in Fig. \ref{architecture}.}
\centering
\begin{tabular}{|c|l||c|c|c|c|c|c|}
\hline
& & \multicolumn{6}{c|}{AN NSSIs} \\ \cline{2-8}
\hline
& & $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ \\
\hline \hline
S & $s_1$ & $c_1$ & & & & & \\ \cline{2-8}
E & $s_2$ & & $c_2$ & $c_3$ & & & \\ \cline{2-8}
R & $s_3$ & & & $c_3$ & $c_4$ & & \\ \cline{2-8}
V & $s_4$ & & & & $c_6$ & $c_5$ & \\ \cline{2-8}
. & $s_5$ & & & & & & $c_4$ \\ \cline{2-8}
\hline
\end{tabular}
\label{table:UxA}
\end{table}
Let us denote the set of 5 services by $S = \{ s_1, \ldots, s_5 \} $, the set of 6 AN NSSIs by $A = \{ a_1, \ldots, a_6 \}$ and the set of 6 CN NSSIs by $C = \{ c_1, \ldots, c_6 \}$.
For this concrete example, we can represent the service/NSI/NSSI mapping as a $5 \times 6$ rectangular scheme given in Table \ref{table:UxA}. The services are modeled as rows, and the columns represent the network subnet slices of the access network part.
We fill in the rectangular scheme with elements from the set $C$. For instance, AN NSSI 6 with CN NSSI 4 forms an end-to-end slice (NSI 5) for service 5. We model this in the rectangular scheme by putting $c_4$ in row $s_5$ and column $a_6$. For service 4, two subnet slices are scheduled in the access network: $a_4$ is combined with the core network subnet slice $c_6$, and $a_5$ is combined with the core network subnet slice $c_5$. We model this by placing $c_6$ in row $s_4$ and column $a_4$, and $c_5$ in row $s_4$ and column $a_5$.
Note that this configuration is for time slot $t$. The mapping scheme might change at time slot $t+\Delta t$.
When we apply dedicated resource allocation, neither the same AN NSSI nor the same CN NSSI can be scheduled for more than one NSI, i.e., for more than one service. In terms of the rectangular scheme in Table \ref{table:UxA}, that means that no $c_i$ appears more than once in any column. In other words, a bundle of dedicated resources is allocated.
On the other hand, we can see that $c_3$ appears twice in the third column $a_3$, in rows $s_2$ and $s_3$. That means that service 2 and service 3 share the core network slice $c_3$ (as well as the access slice $a_3$). This is a situation with shared resources, i.e., soft slicing, where the users compete for the resources.
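This shared/dedicated distinction can be checked mechanically from the triplet representation of the rectangular scheme. The following Python sketch is ours (the triplet encoding and variable names are illustrative, not part of the model's formalism): it reads off the triplets of Table \ref{table:UxA} and flags every CN NSSI that is repeated within a column, i.e., every soft-sliced core component.

```python
from collections import defaultdict

# Triplet encoding (service, AN NSSI, CN NSSI) of the scheme above.
slices = [("s1", "a1", "c1"), ("s2", "a2", "c2"), ("s2", "a3", "c3"),
          ("s3", "a3", "c3"), ("s3", "a4", "c4"), ("s4", "a4", "c6"),
          ("s4", "a5", "c5"), ("s5", "a6", "c4")]

col_entries = defaultdict(list)
for s, a, c in slices:
    col_entries[a].append(c)

# A CN NSSI repeated within a column is a shared (soft) constituent.
shared = sorted({c for entries in col_entries.values()
                 for c in entries if entries.count(c) > 1})
print(shared)  # ['c3']: services s2 and s3 share CN NSSI c3 via column a3
```

Note that $c_4$ is not flagged: it appears in two different columns ($a_4$ and $a_6$), once in each, so the per-column condition for dedicated allocation is not violated.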
\begin{table}[]
\caption{A rectangular scheme, with core network resources as rows, RAN slices as columns, and services as table entries, representing the slicing described in Fig. \ref{architecture}.}
\centering
\begin{tabular}{|r|l||c|c|c|c|c|c|}
\hline
& & \multicolumn{6}{c|}{AN NSSIs} \\ \cline{2-8}
\hline
& & $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ \\
\hline \hline
\ & $c_1$ & $s_1$ & & & & & \\ \cline{2-8}
\ & $c_2$ & & $s_2$ & & & & \\ \cline{2-8}
C & $c_3$ & & & $s_2, s_3$ & & & \\ \cline{2-8}
N & $c_4$ & & & & $s_3$ & & $s_5$ \\ \cline{2-8}
\ & $c_5$ & & & & & $s_4$ & \\ \cline{2-8}
\ & $c_6$ & & & & $s_4$ & & \\ \cline{2-8}
\hline
\end{tabular}
\label{table:CxA}
\end{table}
Another way of modeling the network slicing architecture is to let the rows represent the core slices and the columns the access slices, with the services as entries in the table, as presented in Table \ref{table:CxA}. When we want exclusivity, for instance one NSI for a low-latency and ultra-reliable service, we allocate a specific subnet slice to only one service, i.e., each service is placed at most once in each row and in each column. We formalize this below with a precise theorem.
\begin{table}[]
\caption{A rectangular scheme, with service as rows, core network resources slices as columns, and RAN slices as table entries, representing the slicing described in Fig. \ref{architecture}.}
\centering
\begin{tabular}{|c|l||c|c|c|c|c|c|}
\hline
& & \multicolumn{6}{c|}{CN} \\ \cline{2-8}
\hline
& & $c_1$ & $c_2$ & $c_3$ & $c_4$ & $c_5$ & $c_6$ \\
\hline \hline
S & $s_1$ & $a_1$ & & & & & \\ \cline{2-8}
E & $s_2$ & & $a_2$ & $a_3$ & & & \\ \cline{2-8}
R & $s_3$ & & & $a_3$ & $a_4$ & & \\ \cline{2-8}
V & $s_4$ & & & & & $a_5$ & $a_4$ \\ \cline{2-8}
. & $s_5$ & & & & $a_6$ & & \\ \cline{2-8}
\hline
\end{tabular}
\label{table:UxC}
\end{table}
Finally, for completeness, we present the third rectangular scheme (conjugate to the previous two), with services as rows, CN NSSIs as columns, and AN NSSIs as table entries, in Table \ref{table:UxC}.
\section{Combinatorial model of network slicing} \label{sysModel}
We start with some basic definitions about Latin squares and related combinatorial structures.
\begin{definition}
\emph{A Latin square} of order $n$ is an $n \times n$ array in which each cell contains a single symbol from an $n$-set $S$, such that each symbol occurs exactly once in each row and exactly once in each column.
\label{def:Latinsquare}
\end{definition}
\begin{definition}
A $k\times n$ \emph{Latin rectangle} is a $k\times n$ array (where $k \leq n$) in which each cell contains a single symbol from an $n$-set $S$, such that each symbol occurs exactly once in each
row and at most once in each column.
\label{deLatinrectangle}
\end{definition}
\begin{definition}
\emph{A partial Latin square (rectangle)} is a square (rectangular) array $L$ with cells that are either empty or contain exactly one symbol such that no symbol occurs more than once in any row or column.
\label{PartialLatin}
\end{definition}
\begin{figure}[!h]
\minipage{0.17\textwidth}
$ \begin{bmatrix}
1 & 3 & 5 & 2 & 4 \\
4 & 2 & 3 & 1 & 5 \\
3 & 1 & 4 & 5 & 2 \\
5 & 4 & 2 & 3 & 1 \\
2 & 5 & 1 & 4 & 3 \\
\end{bmatrix}$
\endminipage
\minipage{0.17\textwidth}
$ \begin{bmatrix}
1 & 3 & 5 & 2 & 4 \\
4 & 2 & 3 & 1 & 5 \\
3 & 1 & 4 & 5 & 2 \\
\end{bmatrix}$
\endminipage
\minipage{0.17\textwidth}%
$ \begin{bmatrix}
1 & & & & 4 \\
& 2 & 3 & & \\
& & 4 & & \\
5 & 4 & & 3 & 1 \\
\end{bmatrix}$
\endminipage
\caption{A $5\times 5$ Latin Square, a $3\times 5$ Latin rectangle and a partial $4 \times 5$ Latin rectangle.}
\label{fig:LatinExamples}
\end{figure}
In Fig. \ref{fig:LatinExamples} we show an example of a $5\times 5$ Latin Square, a derived $3\times 5$ Latin rectangle and a derived partial $4 \times 5$ Latin rectangle.
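Definition \ref{PartialLatin} can be turned directly into a small validity check. The Python sketch below is ours (a hypothetical helper, not part of the paper's formal apparatus): it verifies that an array with empty cells is a partial Latin rectangle by checking that no symbol repeats within any row or any column, applied to the $4 \times 5$ partial Latin rectangle of Fig. \ref{fig:LatinExamples}.

```python
def is_partial_latin_rectangle(arr):
    # Cells are either empty (None) or hold one symbol; a symbol may not
    # occur more than once in any row or in any column.
    rows = [[x for x in row if x is not None] for row in arr]
    cols = [[row[j] for row in arr if row[j] is not None]
            for j in range(len(arr[0]))]
    return all(len(line) == len(set(line)) for line in rows + cols)

# The partial 4 x 5 Latin rectangle from the figure, empty cells as None.
P = [[1,    None, None, None, 4],
     [None, 2,    3,    None, None],
     [None, None, 4,    None, None],
     [5,    4,    None, 3,    1]]
print(is_partial_latin_rectangle(P))  # True
```

Duplicating any symbol within a row or a column makes the check fail, as required by the definition.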
\begin{definition}\label{def:conjugates}
Let $L$ be a $n \times n$ Latin square on symbol set $E_3$, with rows indexed by the elements of a $n$-set $E_1$ and columns indexed by the elements of a $n$-set $E_2$. Let us define a set of triplets $T = \{(x_1, x_2, x_3) : L(x_1, x_2) = x_3\}$. Let $\{a, b, c\} = \{1, 2, 3\}$. The $(a, b, c)$-\emph{conjugate} of $L$, $L_{(a,b,c)}$, has rows indexed by $E_a$, columns by $E_b$, and symbols by $E_c$, and is defined by $L_{(a,b,c)}(x_a, x_b) = x_c$ for each $(x_1, x_2, x_3) \in T$.
\end{definition}
Instead of using some general symbol sets $E_1$, $E_2$ and $E_3$ in Definition \ref{def:conjugates}, and in the rest of this paper let us use the set of services $E_1 \equiv S = \{ s_1, \ldots, s_{n_s} \} $, the set of AN NSSIs $E_2 \equiv A = \{ a_1, \ldots, a_{n_a} \}$ and the set of CN NSSIs $E_3 \equiv C = \{ c_1, \ldots, c_{n_c} \}$. In this context, we write $(S,A,C)-$conjugate instead of $(1,2,3)-$conjugate, $(S,C,A)-$conjugate instead of $(1,3,2)-$conjugate and $(C,A,S)-$conjugate instead of $(3,2,1)-$conjugate.
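For a hard slicing, where each (row, column) pair determines at most one symbol, the conjugates of Definition \ref{def:conjugates} amount to a simple reindexing of the triple set. A minimal Python sketch (our own illustration; the data and names are hypothetical):

```python
def conjugate(triples, order):
    # Reindex a set of (s, a, c) triples into the requested conjugate,
    # returned as a map (row, col) -> symbol. Assumes the mapping is
    # single-valued, i.e., a hard slicing.
    idx = {"S": 0, "A": 1, "C": 2}
    a, b, c = (idx[x] for x in order)
    return {(t[a], t[b]): t[c] for t in triples}

T = [("s1", "a1", "c1"), ("s2", "a2", "c2"), ("s5", "a6", "c4")]
print(conjugate(T, ("S", "A", "C")))  # rows: services, entries: CN NSSIs
print(conjugate(T, ("C", "A", "S")))  # rows: CN NSSIs, entries: services
```

For soft slicings a cell can hold several symbols, as in Table \ref{table:CxA}, so the map would have to collect sets of symbols instead of single values.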
In light of the introduced mathematical formalism based on the combinatorial objects of Latin squares and rectangles, instead of the descriptive Definition \ref{def:HardNetworkSlicingIETF} for hard slicing and its equivalent Definition \ref{def:Non-shared3GPP} for dedicated (non-shared) slice subnet instances, we offer alternative definitions of hard network slicing in the core and access parts.
\begin{definition}[Hard Core Network Slicing]\label{def:HardCoreNetworkSlicing}
\emph{Hard network slicing of $C$} is a set of triplets $T_{hard, C} = \{(s_i, a_j, c_k) : s_i \in S, a_j \in A, c_k \in C\},$ such that for any two triplets $(s_{i_1}, a_{j_1}, c_{k_1}), (s_{i_2}, a_{j_2}, c_{k_2}) \in T_{hard, C}$ it holds:
\begin{equation}\label{eq:HardCoreNetworkSlicing}
\left\{
\begin{array}{rl}
\text{if } s_{i_1} = s_{i_2} & \text{then } a_{j_1} \neq a_{j_2} \text{ and } c_{k_1} \neq c_{k_2},\\
\text{if } a_{j_1} = a_{j_2} & \text{then } s_{i_1} \neq s_{i_2} \text{ and } c_{k_1} \neq c_{k_2},\\
\end{array} \right.
\end{equation}
\end{definition}
\begin{definition}[Hard Access Network Slicing]\label{def:HardAccessNetworkSlicing}
\emph{Hard network slicing of $A$} is a set of triplets $T_{hard, A} = \{(s_i, a_j, c_k) : s_i \in S, a_j \in A, c_k \in C\},$ such that for any two triplets $(s_{i_1}, a_{j_1}, c_{k_1}), (s_{i_2}, a_{j_2}, c_{k_2}) \in T_{hard, A}$ it holds:
\begin{equation}\label{eq:HardAccessNetworkSlicing}
\left\{
\begin{array}{rl}
\text{if } s_{i_1} = s_{i_2} & \text{then } a_{j_1} \neq a_{j_2} \text{ and } c_{k_1} \neq c_{k_2},\\
\text{if } c_{k_1} = c_{k_2} & \text{then } s_{i_1} \neq s_{i_2} \text{ and } a_{j_1} \neq a_{j_2}.
\end{array} \right.
\end{equation}
\end{definition}
\begin{theorem}\label{Thm:CombinatorialDesignsHardCoreNetworkSlicing}
$T_{hard,C} = \{(s_i, a_j, c_k) : s_i \in S, a_j \in A, c_k \in C\}$ is a hard network slicing if and only if there exists a partial $(S',A',C')-$conjugate Latin rectangle, where $S' \subseteq S$, $A' \subseteq A$ and $C' \subseteq C$.
\end{theorem}
\textbf{Proof:}
If we are given a hard network slicing $T_{hard,C}$, then we can build an array $L$ as in Table \ref{table:UxA}, where the rows are indexed by the $s_i$ elements in $T_{hard,C}$, which form a subset $S' \subseteq S$, the columns are indexed by the $a_j$ elements in $T_{hard,C}$, which form a subset $A' \subseteq A$, and the entries are the $c_k$ elements in $T_{hard,C}$, which form a subset $C' \subseteq C$. Due to Equation (\ref{eq:HardCoreNetworkSlicing}) in Definition \ref{def:HardCoreNetworkSlicing}, it follows that the cells in $L$ are either empty or contain exactly one symbol, and no symbol occurs more than once in any row or column. Thus, the array obtained from $T_{hard,C}$ is a partial Latin rectangle.
Conversely, let $L$ be a partial $(S,A,C)-$conjugate Latin rectangle. Then we can build a set of triplets $T_{hard,C} = \{(s_i, a_j, c_k) : s_i \in S, a_j \in A, c_k \in C\}$ from the non-blank cells in $L$, such that Equation (\ref{eq:HardCoreNetworkSlicing}) holds.
$\blacksquare$
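The constructive direction of the proof is easy to exercise numerically. In the following Python sketch (ours, with illustrative data: the slices of Table \ref{table:UxA} minus the shared triplet $(s_3, a_3, c_3)$), a hard network slicing is packed into its $(S,A,C)$-conjugate array, which indeed contains no repeated symbol in any row or column:

```python
def to_array(triples, rows, cols):
    # Build the (S, A, C)-conjugate array: rows indexed by services,
    # columns by AN NSSIs, entries the CN NSSIs (None marks an empty cell).
    L = [[None] * len(cols) for _ in rows]
    r = {x: i for i, x in enumerate(rows)}
    c = {x: j for j, x in enumerate(cols)}
    for s, a, cn in triples:
        L[r[s]][c[a]] = cn
    return L

def no_repeats(L):
    # Partial-Latin-rectangle test: no symbol twice in any row or column.
    rows = [[x for x in row if x is not None] for row in L]
    cols = [[row[j] for row in L if row[j] is not None]
            for j in range(len(L[0]))]
    return all(len(v) == len(set(v)) for v in rows + cols)

rows = ["s1", "s2", "s3", "s4", "s5"]
cols = ["a1", "a2", "a3", "a4", "a5", "a6"]
hard = [("s1", "a1", "c1"), ("s2", "a2", "c2"), ("s2", "a3", "c3"),
        ("s3", "a4", "c4"), ("s4", "a4", "c6"), ("s4", "a5", "c5"),
        ("s5", "a6", "c4")]
print(no_repeats(to_array(hard, rows, cols)))  # True
```

Note that $c_4$ may legitimately appear in two different columns ($a_4$ and $a_6$): Definition \ref{PartialLatin} only forbids repetition within a single row or column. Adding back the shared triplet $(s_3, a_3, c_3)$ puts $c_3$ twice into column $a_3$, and the test fails, in line with the theorem.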
\shorten{
\begin{definition}[Demultiplexing]
Let $L$ be a partial Latin rectangle, where the row indexing is by the set $S = \{s_1,\ldots,s_{n_s}\}$, column indexing is by the set $A = \{a_1,\ldots,a_{n_a}\}$ and the entries in the array are from the set $C = \{c_1,\ldots,c_{n_c}\}$, i.e., by the set triplet $(S, A, C)$. Let $T_{hard} = \{(s, a, c) : s \in S, a \in A, c \in C\}$ be its hard slicing equivalent presentation. Let indexes $s_{i_1},\ldots, s_{i_l} \notin S $ and let for some row index $s_i \in S$ it is true that
\begin{equation}\label{eq:DemultiplexNonEmptySet}
\left\{
\begin{array}{rl}
\mathcal{A}_{s_i} = & \{a_{\alpha} : \exists (s_i, a_{\alpha}, c_{\beta}) \in T_{hard} \} \neq \emptyset,\\
\mathcal{C}_{s_i} = & \{c_{\beta} : \exists (s_i, a_{\alpha}, c_{\beta}) \in T_{hard} \} \neq \emptyset,
\end{array}
\right.
\end{equation}
and let
\begin{equation}\label{eq:DemultiplexingUnion}
\mathcal{C}_{s_i} = \bigcup_{\mu = 1}^{s_{i_l}} \mathcal{C}_{s_{i_\mu}},
\end{equation}
where $\mathcal{C}_{s_{i_\mu}} \subseteq \mathcal{C}_{s_i}$. We say that the array $L_{demux}$ is a \emph{demultiplex of $L$ obtained by demultiplexing $s_i$ to $\{s_{i_1},\ldots, s_{i_l}\}$} if $L_{demux}$ is indexed by the set triplet $(s_{demux}, A, C)$, where $s_{demux} = (S \setminus \{s_i\}) \cup \{s_{i_1},\ldots, s_{i_l}\}$ and where the following relation holds:
where $C_{s_i} = \{c_{\nu} : \forall (s_i, a_{?}, c_{\nu}) \in T_{hard} \}$
\end{definition}
\begin{example}
Let us have the following sets $S = \{ s_1, u, s_4, s_5 \} $, $A = \{ a_1, \ldots, a_6 \}$ and $C = \{ c_1, \ldots, c_6 \}$, and let a partial Latin rectangle $L$ is given by the following array:
\begin{tabular}{|r|l||c|c|c|c|c|c|}
\hline
& & $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ \\
\hline \hline
\ & $s_1$ & $c_1$ & & & & & \\ \cline{2-8}
\ & $u$ & & $c_2$ & $c_3$ & $c_4$ & & \\ \cline{2-8}
\ & $s_4$ & & & & $c_6$ & $c_5$ & \\ \cline{2-8}
\ & $s_5$ & & & & & & $c_4$ \\ \cline{2-8}
\hline
\end{tabular}\\
where row indexing is by $S$, column indexing is by $A$, and the array entries are from $C$.
\begin{tabular}{|r|l||c|c|c|c|c|c|}
\hline
& & $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ \\
\hline \hline
\ & $s_1$ & $c_1$ & & & & & \\ \cline{2-8}
\ & $s_2$ & & $c_2$ & $c_3$ & & & \\ \cline{2-8}
\ & $s_3$ & & & $c_3$ & $c_4$ & & \\ \cline{2-8}
\ & $s_4$ & & & & $c_6$ & $c_5$ & \\ \cline{2-8}
\ & $s_5$ & & & & & & $c_4$ \\ \cline{2-8}
\hline
\end{tabular}\\
with a new row indexing set $S = \{ s_1, s_2, s_3, s_4, s_5 \} $. The obtained demultiplex is actually the network slicing mapping given in Table \ref{table:UxA}.
\end{example}
}
Definition \ref{def:HardCoreNetworkSlicing}, Definition \ref{def:HardAccessNetworkSlicing} and Theorem \ref{Thm:CombinatorialDesignsHardCoreNetworkSlicing} address the modeling of hard core slicing with the $(S,A,C)$--conjugate. However, in practice network slices have components of mixed nature: sometimes a network slice has both its core network and access network components as hard components, but sometimes one or both of those components are shared. That situation is best modeled with the $(C,A,S)$--conjugate rectangles, as shown in the next theorem.
\begin{theorem}\label{Thm:CAS-Reordering}
Let all network slices be represented as a set of triplets $T = \{(c_i, a_j, s_k) : c_i \in C, a_j \in A, s_k \in S\}$, where $i\in\{1,\ldots,n_c\}$, $j\in\{1,\ldots,n_a\}$ and $k\in\{1,\ldots,n_s\}$. Then, there is a rectangular array $\mathcal{R}_{n_c \times n_a}$ of type $(C, A, S)$ and size $n_c \times n_a$
and there are values $1\le n_1 \le n_c$ and $1\le n_2 \le n_a$ such that the array is partitioned in four rectangular sub-arrays
\begin{equation}
\mathcal{R}_{n_c \times n_a} =
\begin{array}{|c||c|c|}
\hline
& \multicolumn{2}{c|}{$A$} \\ \cline{2-3}
\hline
\hline
\multirow{2}{*}{$C$} & \mathcal{R}_{1,1} & \mathcal{R}_{1,2} \\ \cline{2-3}
\ & \mathcal{R}_{2,1} & \mathcal{R}_{2,2} \\ \cline{2-3}
\hline
\end{array}
\end{equation}
where $\mathcal{R}_{1,1} \equiv \mathcal{R}_{n_1 \times n_2}$, $\mathcal{R}_{1,2} \equiv \mathcal{R}_{n_1 \times (n_a - n_2)}$, $\mathcal{R}_{2,1} \equiv \mathcal{R}_{(n_c - n_1) \times n_2}$, $\mathcal{R}_{2,2} \equiv \mathcal{R}_{(n_c - n_1) \times (n_a - n_2)}$, and the following holds:
\begin{enumerate}
\item every row and every column in $\mathcal{R}_{1,1}$ have at most one non-empty cell;
\item every row in $\mathcal{R}_{1,2}$ has at most one non-empty cell, but its columns can have none, one or several non-empty cells;
\item every column in $\mathcal{R}_{2,1}$ has at most one non-empty cell, but its rows can have none, one or several non-empty cells;
\item every column and every row in $\mathcal{R}_{2,2}$ can have none, one or several non-empty cells.
\end{enumerate}
\end{theorem}
\textbf{Proof:}
Let us reorder the elements of $C$ as follows: $C_{hard} = \{ c_1, \ldots, c_{n_1} \}$ are components from the core network part that can be used only as dedicated, i.e., for hard slicing, and $C_{soft} = \{ c_{n_1+1}, \ldots, c_{n_c} \}$ are components that can be shared among NSIs. Then it is clear that $C = C_{hard} \cup C_{soft}$ is represented as a disjoint union of dedicated and shared core network components. Let us apply the same reordering to the components in the access part, i.e., let us represent $A = A_{hard} \cup A_{soft}$, where $A_{hard} = \{ a_1, \ldots, a_{n_2} \}$ and $A_{soft} = \{ a_{n_2+1}, \ldots, a_{n_a} \}$. With this reordering, for every slice $(c_i, a_j, s_k) \in T$ it holds:
\begin{equation*}
\left\{
\begin{array}{rl}
s_k \in \mathcal{R}_{1,1} & \text{if } 1\le i \le n_1 \text{\ and } 1\le j \le n_2,\\
s_k \in \mathcal{R}_{1,2} & \text{if } 1\le i \le n_1 \text{\ and } n_2 + 1\le j \le n_a,\\
s_k \in \mathcal{R}_{2,1} & \text{if } n_1 + 1\le i \le n_c \text{\ and } 1\le j \le n_2,\\
s_k \in \mathcal{R}_{2,2} & \text{if } n_1 + 1\le i \le n_c \text{\ and } n_2 + 1\le j \le n_a.\\
\end{array} \right.
\end{equation*}
Thus, for $s_k \in \mathcal{R}_{1,1}$ we can apply both conditions (\ref{eq:HardCoreNetworkSlicing}) and (\ref{eq:HardAccessNetworkSlicing}) from Definitions \ref{def:HardCoreNetworkSlicing} and \ref{def:HardAccessNetworkSlicing}, and claim 1 of Theorem \ref{Thm:CAS-Reordering} follows. For the validity of claim 2, with $s_k \in \mathcal{R}_{1,2}$, we need only apply condition (\ref{eq:HardCoreNetworkSlicing}). Similarly, for the validity of claim 3, with $s_k \in \mathcal{R}_{2,1}$, we need only apply condition (\ref{eq:HardAccessNetworkSlicing}). The correctness of the remaining claim 4, when $s_k \in \mathcal{R}_{2,2}$, then follows.
$\blacksquare$
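The reordering in the proof can be sketched in a few lines of Python (ours; the hard/soft split below matches the reordered scheme of Table \ref{table:CxA-TheoremReordering}, and the block names $\mathcal{R}_{1,1},\ldots,\mathcal{R}_{2,2}$ are abbreviated R11, \ldots, R22):

```python
# Hard (dedicated) and soft (shareable) components, after reordering.
C_hard, C_soft = ["c1", "c2", "c5"], ["c3", "c4", "c6"]
A_hard, A_soft = ["a1", "a2", "a5"], ["a3", "a4", "a6"]

# All slices as (c, a, s) triplets of the running example (Fig. 2).
T = [("c1", "a1", "s1"), ("c2", "a2", "s2"), ("c5", "a5", "s4"),
     ("c3", "a3", "s2"), ("c3", "a3", "s3"), ("c4", "a4", "s3"),
     ("c4", "a6", "s5"), ("c6", "a4", "s4")]

# Route each slice into one of the four sub-arrays of the partition.
blocks = {"R11": [], "R12": [], "R21": [], "R22": []}
for c, a, s in T:
    key = "R" + ("1" if c in C_hard else "2") + ("1" if a in A_hard else "2")
    blocks[key].append((c, a, s))

print({k: len(v) for k, v in blocks.items()})
# {'R11': 3, 'R12': 0, 'R21': 0, 'R22': 5}
```

In this example every slice is either fully hard (landing in R11) or fully soft (landing in R22); the mixed blocks R12 and R21 stay empty, but in general they hold slices whose core and access components differ in nature.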
\begin{example}
Let us represent the network slicing case presented in Fig. \ref{architecture} and Table \ref{table:CxA} as a table following Theorem \ref{Thm:CAS-Reordering}.
\begin{table}[h!]
\caption{A rectangular scheme equivalent to Table \ref{table:CxA}.}
\centering
\begin{tabular}{|l||c|c|c||c|c|c|}
\hline
& $a_1$ & $a_2$ & $a_5$& $a_3$ & $a_4$ & $a_6$ \\
\hline \hline
$c_1$ & $s_1$ & & & & & \\ \cline{1-7}
$c_2$ & & $s_2$ & & & & \\ \cline{1-7}
$c_5$ & & & $s_4$& & & \\ \cline{1-7}
\hline
\hline
$c_3$ & & & & $s_2, s_3$ & & \\ \cline{1-7}
$c_4$ & & & & & $s_3$ & $s_5$ \\ \cline{1-7}
$c_6$ & & & & & $s_4$ & \\ \cline{1-7}
\hline
\end{tabular}
\label{table:CxA-TheoremReordering}
\end{table}
\end{example}
\begin{table*}[h!]
\caption{A list of all components used by an NSMF for network slices, given in the form of an extended $(C,A,S)$--conjugate as described by the attributes in expression (\ref{eq:NetworkSLice01})}
\centering
\begin{tabular}{|l|c|c|c|}
\parbox{1.0cm}{Notation} & \parbox{3.8cm}{\vspace{0.1cm}\centering Meaning\vspace{0.1cm}} & \parbox{4.8cm}{\centering Relations/functions} & \parbox{5.5cm}{\centering Comment} \\
\hline
\hline
$(c, a, s, t_{s}, t_{w})$ & \parbox{3.8cm}{Network slice} & \parbox{4.8cm}{\vspace{0.1cm} $c \in C$, $a \in A$, $s\in S$, \\ $t_{s}$ - remaining life time of the slice, \\$t_{w}$ - waiting time before slice was activated. \vspace{0.1cm}} & \parbox{5.5cm}{\vspace{0.1cm} Initial value of $t_s$ is a random variable with exponential distribution and average value of $\mu$ time units. In the simulation, $t_s$ is decreased by 1 in every time unit. \vspace{0.1cm}}\\
\hline
$\mu$ & \parbox{3.8cm}{An average life time of a network slice expressed in number of time units and modeled with an exponential distribution} & \parbox{4.8cm}{\vspace{0.1cm} $P(t_s \le x) = \left\{
\begin{array}{rl}
1-e^{-\frac{x}{\mu }} & \text{if } x \ge 0,\\
0 & \text{if } x < 0.
\end{array} \right.$ \\is the probability that the value of $t_s$ is less or equal to some value $x$ i.e. its cumulative distribution function. \vspace{0.1cm}} & \parbox{5.5cm}{\vspace{0.1cm} Expected value of $t_s$ is $E[t_s] = \mu$.\vspace{0.1cm}}\\
\hline
$C_{hard}$ & \parbox{3.8cm}{Set of hard slice core network components} & \parbox{4.8cm}{\vspace{0.1cm} $C_{hard} = \{ c_1, \ldots, c_{n_1} \}$,\ \ \ $|C_{hard}| = n_1$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}An important parameter for the set $C_{hard}$ is the number of its elements $n_1$.\vspace{0.1cm}} \\
\hline
$C_{soft}$ & \parbox{3.8cm}{Set of shared core network components} & \parbox{4.8cm}{\vspace{0.1cm} $C_{soft} = \{ c_{n_1+1}, \ldots, c_{n_c} \}$,\ \ \ $|C_{soft}| = n_c - n_1$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}If we denote the total number of core network components with $n_c$ then the number of elements in $C_{soft}$ is given as $n_c - n_1$.\vspace{0.1cm}} \\
\hline
$C$ & \parbox{3.8cm}{\vspace{0.1cm}Set of all core network components \vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $C = C_{hard} \cup C_{soft}$,\ \ \ $|C| = n_c$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}Total number of core network components is $n_c$.\vspace{0.1cm}} \\
\hline
$A_{hard}$ & \parbox{3.8cm}{Set of hard slice access network components} & \parbox{4.8cm}{\vspace{0.1cm} $A_{hard} = \{ a_1, \ldots, a_{n_2} \}$,\ \ \ $|A_{hard}| = n_2$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}An important parameter for the set $A_{hard}$ is the number of its elements $n_2$.\vspace{0.1cm}} \\
\hline
$A_{soft}$ & \parbox{3.8cm}{Set of shared access network components} & \parbox{4.8cm}{\vspace{0.1cm} $A_{soft} = \{ a_{n_2+1}, \ldots, a_{n_a} \}$,\ \ \ $|A_{soft}| = n_a - n_2$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}If we denote the total number of access network components with $n_a$ then the number of elements in $A_{soft}$ is given as $n_a - n_2$.\vspace{0.1cm}} \\
\hline
$A$ & \parbox{3.8cm}{Set of all access network components} & \parbox{4.8cm}{\vspace{0.1cm} $A = A_{hard} \cup A_{soft}$,\ \ \ $|A| = n_a$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}Total number of access network components is $n_a$.\vspace{0.1cm}} \\
\hline
$S$ & \parbox{3.8cm}{\vspace{0.1cm}Set of all established network slices.\vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $S = \{ s_1, \ldots, s_{n_s} \}$,\ \ \ $|S| = n_s$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}Number of active network slices in one particular moment $t$ is $n_s$. Notice, that in the next time moment $t + 1$ the number $n_s$ might change. \vspace{0.1cm}} \\
\hline
\parbox{1.0cm}{$req = (c, a, s, t_{s}, 1)$} & \parbox{3.8cm}{Initial request for a network slice} & \parbox{4.8cm}{\vspace{0.1cm} $c \in C$, $a \in A$, $s\in S$ \vspace{0.1cm}} & \parbox{5.5cm}{\vspace{0.1cm} If NSMF decides that there are no resources for this request, $t_w$ is increased by 1, and the request is put back to the waiting queue for the next time unit. \vspace{0.1cm}}\\
\hline
$p_c$ & \parbox{3.8cm}{\vspace{0.1cm}For a requested slice $req = (c, a, s, t_{s}, 0)$ the probability that $c \in C_{hard}$ \vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $P(c \in C_{hard}) = p_c$ \\
$P(c \in C_{soft}) = 1 - p_c$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}Since dedicated resources are more expensive to use, the probability for requests that will ask for dedicated core network components is usually $p_c<0.5$.\vspace{0.1cm}} \\
\hline
$p_a$ & \parbox{3.8cm}{\vspace{0.1cm}For a requested slice $req = (c, a, s, t_{s}, 0)$ the probability that $a \in A_{hard}$ \vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $P(a \in A_{hard}) = p_a$ \\
$P(a \in A_{soft}) = 1 - p_a$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm}The probability for requests that will ask for dedicated access network components is usually $p_a<0.5$.\vspace{0.1cm}} \\
\hline
$N_{req}$ & \parbox{3.8cm}{\vspace{0.1cm}Number of received requests for network slice in certain time moment $t$ \vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $Req = \{ req_1, \ldots, req_{\tiny N_{req}} \}$,\ \ \ $|Req| = N_{req}$ \vspace{0.1cm} } & \parbox{5.5cm}{\linespread{0.8}\vspace{0.1cm} $N_{req}$ is a random variable with Poisson distribution and average value of $\lambda$.\vspace{0.1cm}} \\
\hline
$\lambda$ & \parbox{3.8cm}{\vspace{0.1cm}An average number of received requests for network slices in certain time moment $t$, modeled with a Poisson distribution \vspace{0.1cm}} & \parbox{4.8cm}{\vspace{0.1cm} $P(N_{req} = k) = \frac{e^{-\lambda } \lambda ^k}{k!}$. \vspace{0.1cm}} & \parbox{5.5cm}{\vspace{0.1cm} Expected value of $N_{req}$ is $E[N_{req}] = \lambda$.\vspace{0.1cm}}\\
\hline
\hline
\end{tabular}
\label{table:all-components-for-NSMF}
\end{table*}
\begin{definition}\label{def:ExtendedConjugate}
We say that a network slice is represented in \emph{extended $(C,A,S)$--conjugate form} if it is given as tuple $(c, a, s, attr_1, \ldots, attr_l)$ where $c \in C$, $a \in A$ and $s \in S$ and $attr_{\nu}$ are some additional attributes that are considered as important features of the slice.
\end{definition}
\section{Simulation of NSMF with several optimization objectives} \label{simul}
Equipped with Theorem \ref{Thm:CAS-Reordering} and Definition \ref{def:ExtendedConjugate} we can implement and simulate any realistic scenario for NSMF.
We assume that requests for resources in the AN and CN parts for implementing slices with different requirements arrive according to a Poisson distribution with arrival rate $\lambda$ in each time unit.
NSMF checks if the pool of resources can support the creation of the slice. If not, then the request is re-queued for the next time unit. Upon acceptance, the NSMF creates a new NSI and allocates a corresponding resource bundle (NSSI AN and NSSI CN) to the new NSI.
We consider dynamic deployment, where slices have a life time of $\mu$ time units on average, exponentially distributed, and the resources allocated to a slice are released and added back to the resource pool when the slice is deactivated.
By choosing different types of attributes we have the opportunity to model different objectives (one or several) of the NSMF, such as:
\begin{enumerate}
\item to maximize the utilization of the network components;
\item to decrease the average delay time from slice request to slice activation;
\item to decrease the number of rejected slice requests;
\item to maximize network operator revenue;
\item to maximize the number of slices with high throughput.
\end{enumerate}
In this section we give simulation results of an implementation of a NSMF for simple network slicing described with the following attributes:
\begin{eqnarray}\label{eq:NetworkSLice01}
\text{High level abstraction of a Network Slice Instance} \nonumber\\
(c, a, s, t_{s}, t_{w}) \hspace{1.6cm}
\end{eqnarray}
where $t_{s}$ is the remaining life time of the slice and $t_{w}$ is the time passed from the slice request until the slice was activated. By default $t_{w}=1$ when the request is composed. A full description of all components necessary for the implementation of the NSMF is given in Table \ref{table:all-components-for-NSMF}.
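To make this concrete, the request-generation step can be sketched in Python; the function names and the dictionary representation of a request are our own illustration (not part of the model), and $N_{req}$ is drawn with Knuth's method since the Python standard library has no Poisson sampler:

```python
import math
import random

def poisson(lam):
    # Knuth's method for sampling a Poisson(lam) random variate
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def get_requests(lam, mu, p_c, p_a):
    """Draw N_req ~ Poisson(lam) requests; each asks for a hard core
    component with probability p_c, a hard access component with
    probability p_a, and has an exponentially distributed lifetime t_s
    with mean mu (the waiting time t_w starts at 1)."""
    reqs = []
    for _ in range(poisson(lam)):
        reqs.append({
            "c_hard": random.random() < p_c,
            "a_hard": random.random() < p_a,
            "t_s": max(1, round(random.expovariate(1.0 / mu))),
            "t_w": 1,
        })
    return reqs
```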
\textbf{Note:} With the attribute list described in expression (\ref{eq:NetworkSLice01}) we work with an NSMF model where all hard and soft resources from the core network and the access network are picked from a pool of resources. The NSMF in this model operates at a higher level of abstraction and does not take into account the specific capacity of the requested resources. Still, as we show further in this work, even with this very abstract model we can infer important conclusions about the functionality of the network slicing concept and the NSMF.
Nevertheless, our combinatorial model of network slicing can describe more detailed variants of NSMF. For example,
\begin{eqnarray}\label{eq:NetworkSLiceMoreDetails}
\text{A Network Slice with quantitative resources} \nonumber\\
(c, a, s, t_{s}, t_{w}, r_{c}, r_{a}) \hspace{1.3cm}
\end{eqnarray}
where $t_{s}$ is the remaining life time of the slice, $t_{w}$ is the time passed from the slice request until the time the slice was activated, $r_{c}$ is the quantitative value requested from the core network and $r_{a}$ is the quantitative value requested from the access network.
We now give the algorithm that simulates the work of the NSMF with network slices described by expression (\ref{eq:NetworkSLice01}), in a scenario where rejected requests are added to the waiting queue to be considered for scheduling in the next time unit. Those rejected requests then compete for the network resources with the newly arrived requests.
\begin{algorithm}
\caption{Simulation of NSMF with dynamic deployment and re-queuing of rejected requests.}\label{Alg:NSMF-Re-queuing}
\begin{algorithmic}[1]
\State $ActiveSlices \gets \emptyset$
\State $RejReq \gets \emptyset$
\State $n_s \gets 0$
\For{$t = 1$ to $TimeSimulation$}
\State $N_{req} \gets Poisson(\lambda)$
\State $Req \gets \mathsf{GetRequests}[N_{req}, \mu, p_c, p_a] \cup RejReq$
\State $RejReq \gets \emptyset$
\State $Req \gets \mathsf{HeuristicRearangement}[Req]$
\For{$req = (c, a, s, t_s, t_w) \in Req$ }
\State $(Found_C, Found_A) \gets \mathsf{Dispetch}[req, C, A]$
\If {$Found_C$ AND $Found_A$}
\State $ActiveSlices \gets ActiveSlices \cup \{req\} $
\State $C \gets C \setminus \{req.c\} $
\State $A \gets A \setminus \{req.a\} $
\Else
\State $req.t_w \gets req.t_w + 1$
\State $RejReq \gets RejReq \cup \{req\}$
\EndIf
\EndFor
\State $NewActive \gets \emptyset$
\For{$req = (c, a, s, t_s, t_w) \in ActiveSlices$ }
\State $req.t_s \gets req.t_s - 1$
\If {$req.t_s > 0 $}
\State $NewActive \gets NewActive \cup \{req\} $
\Else
\State $C \gets C \cup \{req.c\} $
\State $A \gets A \cup \{req.a\} $
\EndIf
\EndFor
\State $ActiveSlices \gets NewActive$
\EndFor
\end{algorithmic}
\end{algorithm}
In Algorithm \ref{Alg:NSMF-Re-queuing} we use several sub-functions that we comment on here. In Step 5 the variable $N_{req}$ gets a random value according to a Poisson distribution with parameter $\lambda$. In Step 6 the function $\mathsf{GetRequests}[N_{req}, \mu, p_c, p_a]$ returns a set of initial requests $Req$, according to the parameters $N_{req}$, $\mu$, $p_c$, $p_a$ as described in Table \ref{table:all-components-for-NSMF}.
In Step 8 there is a call to a procedure that rearranges the active list of requests, $Req \gets \mathsf{HeuristicRearangement}[Req]$. That rearrangement can return just the original list of requests if we have not developed any optimization strategy, or it can perform some heuristics in order to achieve better results with the subsequent subroutine $\mathsf{Dispetch}[req, C, A]$ called in Step 10. Based on the rearrangement described in Theorem \ref{Thm:CAS-Reordering} we have developed one very simple but effective heuristic, described in Algorithm \ref{Alg:HeuristicRearangement}. The idea can be briefly described as follows: give priority to requests that belong to the rectangle $\mathcal{R}_{1,1}$, then $\mathcal{R}_{1,2}$ and $\mathcal{R}_{2,1}$, and finally to the rectangle $\mathcal{R}_{2,2}$. Within the subsets of requests in these rectangles, give priority to the requests that will finish sooner rather than later (that is, sorting in ascending order of $t_s$ in Steps 16--19).
\begin{algorithm}
\caption{$\mathsf{HeuristicRearangement}[Req]$}\label{Alg:HeuristicRearangement}
\begin{algorithmic}[1]
\State $Req_{1,1} \gets \emptyset$, $Req_{1,2} \gets \emptyset$, $Req_{2,1} \gets \emptyset$, $Req_{2,2} \gets \emptyset$
\For{$req = (c, a, s, t_s, t_w) \in Req$ }
\If {$c\in C_{hard}$ AND $a \in A_{hard}$}
\State $Req_{1,1} \gets Req_{1,1} \cup req$
\EndIf
\If {$c\in C_{hard}$ AND $a \in A_{soft}$}
\State $Req_{1,2} \gets Req_{1,2} \cup req$
\EndIf
\If {$c\in C_{soft}$ AND $a \in A_{hard}$}
\State $Req_{2,1} \gets Req_{2,1} \cup req$
\EndIf
\If {$c\in C_{soft}$ AND $a \in A_{soft}$}
\State $Req_{2,2} \gets Req_{2,2} \cup req$
\EndIf
\EndFor
\State $Req_{1,1} \gets \mathsf{Sort Ascending}[Req_{1,1}, t_s]$
\State $Req_{1,2} \gets \mathsf{Sort Ascending}[Req_{1,2}, t_s]$
\State $Req_{2,1} \gets \mathsf{Sort Ascending}[Req_{2,1}, t_s]$
\State $Req_{2,2} \gets \mathsf{Sort Ascending}[Req_{2,2}, t_s]$
\State $Req \gets Req_{1,1} || Req_{1,2} || Req_{2,1} || Req_{2,2}$
\State \textbf{Return} $Req$
\end{algorithmic}
\end{algorithm}
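In the same illustrative Python representation (a request as a dictionary with hard/soft flags and its remaining lifetime $t_s$, our own convention), Algorithm \ref{Alg:HeuristicRearangement} amounts to a bucket-then-sort:

```python
def heuristic_rearrangement(reqs):
    """Priority order: R_{1,1} (hard,hard) > R_{1,2} (hard,soft)
    > R_{2,1} (soft,hard) > R_{2,2} (soft,soft); within each bucket,
    requests with smaller remaining lifetime t_s come first."""
    order = [(True, True), (True, False), (False, True), (False, False)]
    buckets = {key: [] for key in order}
    for r in reqs:
        buckets[(r["c_hard"], r["a_hard"])].append(r)
    out = []
    for key in order:
        out.extend(sorted(buckets[key], key=lambda r: r["t_s"]))
    return out
```

The same ordering can be obtained with a single sort, using the key `(not c_hard, not a_hard, t_s)`.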
If the $\mathsf{Dispetch}[\,]$ subroutine returns that there are resources both in the core network and in the access network, then the request is activated by adding it in Step 12 to the list of active slices, and the lists of core network and access network resources are updated in Steps 13 and 14. If the $\mathsf{Dispetch}[\,]$ subroutine returns that there are no available resources, then the waiting time for the request is increased by one, and the rejected request is added to the set of rejected requests.
Steps 20 to 30 update the state of the active slices by reducing all their $t_s$ values by 1. If a slice has a $t_s$ value that is still positive, it continues to be active in the next time unit. Otherwise, the slice is deactivated and its resources are released and added back to the pool of available resources (Steps 26 and 27).
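Putting the pieces together, one time unit of Algorithm \ref{Alg:NSMF-Re-queuing} can be sketched in Python as follows. Here the resource pools are modeled simply as counters of free hard/soft components (our own simplification of the sets $C$ and $A$), and the Step 8 heuristic is collapsed into a single sort key:

```python
def nsmf_step(state, new_reqs):
    """One time unit: dispatch queued and new requests (hard requests
    draw from the hard pools, soft requests from the soft pools), then
    age the active slices and release the resources of expired ones."""
    # Step 8 heuristic as one sort key: hard/hard first, then hard/soft,
    # soft/hard, soft/soft; within a class, shorter t_s first
    reqs = sorted(state["queue"] + new_reqs,
                  key=lambda r: (not r["c_hard"], not r["a_hard"], r["t_s"]))
    state["queue"] = []
    for r in reqs:
        c_pool = "c_hard_free" if r["c_hard"] else "c_soft_free"
        a_pool = "a_hard_free" if r["a_hard"] else "a_soft_free"
        if state[c_pool] > 0 and state[a_pool] > 0:   # dispatch succeeded
            state[c_pool] -= 1
            state[a_pool] -= 1
            state["active"].append(r)
        else:                                         # re-queue for the next time unit
            r["t_w"] += 1
            state["queue"].append(r)
    still_active = []
    for r in state["active"]:
        r["t_s"] -= 1
        if r["t_s"] > 0:
            still_active.append(r)
        else:                                         # slice expired: release resources
            state["c_hard_free" if r["c_hard"] else "c_soft_free"] += 1
            state["a_hard_free" if r["a_hard"] else "a_soft_free"] += 1
    state["active"] = still_active
```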
\begin{lemma}\label{Lemma:StabilityConditions}
Necessary conditions for Algorithm \ref{Alg:NSMF-Re-queuing} to reach a stationary ergodic stage are the following:
\begin{align}\label{eq:p_c}
p_c & < \frac{n_1}{n_c},\\
p_a & < \frac{n_2}{n_a}, \\
\mu \lambda & < \min\{n_c, n_a\}.
\end{align}
\end{lemma}
\textbf{Proof:}
(Sketch.) The proof proceeds by assuming that one of the given inequalities does not hold and showing that in that case Algorithm \ref{Alg:NSMF-Re-queuing} produces an ever-growing list of requests $Req$. For example, let us assume that $p_c \ge \frac{n_1}{n_c}$. This means that, on average, there will be more requests asking for hard core network components than there are available, so those requests are rejected, producing longer and longer request lists $Req$.
Similar reasoning applies if we suppose that $\mu \lambda \ge \min\{n_c, n_a\}$. In that case the number of requests times the number of time units necessary to finish their activity will surpass the minimum number of available resources either in the core part or in the access part. The rejected requests will then be added to the queue of requests, contributing to the ever-increasing length of the list of requests $Req$.
$\blacksquare$
We have an initial implementation of Algorithms \ref{Alg:NSMF-Re-queuing} and \ref{Alg:HeuristicRearangement} in Mathematica, and next we show several experimental results that confirm the claims of Lemma \ref{Lemma:StabilityConditions}, especially the effects of compliance vs.\ non-compliance with its conditions.
In Fig. \ref{Fig:ActivationDelay-Compliance099} and Fig. \ref{Fig:ReqQueueSizeComplieance099} we give the results of performing 10 simulations with the following parameters: $n_1 = 50$, $n_c = 350$, $p_c = 0.99 \frac{n_1}{n_c} = 0.141429$, $n_2 = 100$, $n_a = 500$, $p_a = 0.99 \frac{n_2}{n_a} = 0.198$, $\lambda = 10$, $\mu = \lfloor 0.99 \frac{\min\{n_c, n_a\}}{\lambda} \rfloor = 34$. The simulation was performed for 100,000 time units. As we can see in Fig. \ref{Fig:ActivationDelay-Compliance099}, there is a transition period of about 15,000 time units until the process becomes stationary ergodic with an average delay $\Delta$ of around 3.5 time units. In Fig. \ref{Fig:ReqQueueSizeComplieance099} we show the corresponding queue size for the same simulation. The size of the queue $|Req|$ is stationary ergodic and varies between 16 and 63 requests.
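These parameter choices can be checked directly against the conditions of Lemma \ref{Lemma:StabilityConditions}; in the short Python check below, $\mu$ is obtained by dividing $0.99\min\{n_c, n_a\}$ by the arrival rate $\lambda$:

```python
import math

n_1, n_c = 50, 350     # hard / total core network components
n_2, n_a = 100, 500    # hard / total access network components
lam = 10               # average number of requests per time unit

p_c = 0.99 * n_1 / n_c                       # = 0.141428...
p_a = 0.99 * n_2 / n_a                       # = 0.198
mu = math.floor(0.99 * min(n_c, n_a) / lam)  # = floor(34.65) = 34

# all three conditions of Lemma 1 hold strictly
assert p_c < n_1 / n_c
assert p_a < n_2 / n_a
assert mu * lam < min(n_c, n_a)              # 340 < 350
```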
\begin{figure}
\includegraphics[width=3.3in]{ActivationDelayConfirmingLemma1-01}
\put(-90,60){
$\begin{array}{r@{}c@{}l}
p_c & = & \frac{n_1}{n_c} - \varepsilon \vspace{0.2cm} \\
p_a & = & \frac{n_2}{n_a} - \varepsilon \vspace{0.2cm}\\
\mu \lambda & < & \min\{n_c, n_a\}
\end{array}
$}
\put(0,10){$t$}
\put(-240,150){$\Delta$}
\caption{An average activation delay simulating the work of NSMF for 100,000 time units. The average is taken over 10 experiments. After a transition phase of about 15,000 time units, the process becomes stationary ergodic and the average delay $\Delta$ is around 3.5 time units.}
\label{Fig:ActivationDelay-Compliance099}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in]{QueueSizeConfirmingLemma1-01}
\put(0,10){$t$}
\put(-240,150){$|Req|$}
\caption{An average request queue size simulating the work of NSMF for 100,000 time units. The average is taken over 10 experiments. The size of the requests queue $|Req|$ is stationary ergodic and varies between 16 and 63.}
\label{Fig:ReqQueueSizeComplieance099}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in]{QueueSizeConfirmingLemma1-02}
\put(-90,60){
$\begin{array}{r@{}c@{}l}
p_c & = & \frac{n_1}{n_c} \vspace{0.2cm} \\
p_a & = & \frac{n_2}{n_a} \vspace{0.2cm}\\
\mu \lambda & = & \min\{n_c, n_a\}
\end{array}
$}
\put(0,10){$t$}
\put(-240,150){$|Req|$}
\caption{An average request queue size simulating the work of NSMF for 10,000 time units. The average is taken over 10 experiments. With parameters at the upper bounds of Lemma \ref{Lemma:StabilityConditions}, the functioning of the NSMF is not a stable process, since the size of the request queue $|Req|$ keeps increasing.}
\label{Fig:ReqQueueSizeNonComplieance}
\end{figure}
In Fig. \ref{Fig:ReqQueueSizeNonComplieance} we show the results of 10 simulations with the values that are the upper bounds in Lemma \ref{Lemma:StabilityConditions}, i.e., $p_c = \frac{n_1}{n_c} = 0.142857$, $p_a = \frac{n_2}{n_a} = 0.2$ and $\mu \lambda = \min\{n_c, n_a\} = 350$. As we can see, the size of the request queue keeps increasing as time goes on, indicating that the parameters chosen by the network operator are not sustainable in this model. This simulation analysis also indicates that the network operator should either increase the pool size for the access and core network resources in order to avoid the strict equality, or introduce some rejection policy.
As mentioned before, we can pursue several optimization objectives within one model of the NSMF. Here we give results from simulations of the optimized and non-optimized NSMF given by the NSI expression (\ref{eq:NetworkSLice01}), where the optimization is performed in Step 8 of Algorithm \ref{Alg:NSMF-Re-queuing} and the optimization heuristic is given in Algorithm \ref{Alg:HeuristicRearangement}. The optimization objectives are: 1. to maximize the utilization of the network components; and 2. to decrease the average delay time from slice request to slice activation. We argue that by these objectives we are also indirectly achieving the objective of maximizing the network operator revenue.
\begin{definition}\label{Def:Utilization}
Let $U(t), t=1, \ldots$, be a function that denotes the number of network slice resources scheduled by the NSMF at time $t$. The average utilization $V_{[T_1,T_2]}$ of the network components for the NSMF model given by the NSI expression (\ref{eq:NetworkSLice01}), over the time period $[T_1, T_2]$, is defined as
\begin{equation}\label{eq:Utilization}
V_{[T_1,T_2]} = \frac{1}{T_2 - T_1}\sum_{t=T_1}^{T_2} U(t)
\end{equation}
\end{definition}
Without proof, we state the following corollary.
\begin{corollary}\label{col:UpperBoundOfUtilization}
For the NSMF model given by the NSI expression (\ref{eq:NetworkSLice01}) and any time interval $[T_1,T_2]$, $V_{[T_1,T_2]}$ is upper bounded by $\min (n_c, n_a)$, i.e.,
\begin{equation}\label{eq:UpperBoundOnUtilization}
V_{[T_1,T_2]} \le \min (n_c, n_a).
\end{equation} $\blacksquare$
\end{corollary}
Seeking optimization strategies that increase the average utilization of the network components is a desirable goal, but it is not the most rational optimization objective because it ignores the delay time between the request and the service delivery. Thus, it is much better to set another optimization objective, which we define next.
\begin{definition}\label{def:UtilizationRatio}
Let $U(t), t=1, \ldots$, be a function that denotes the number of network slice resources scheduled by the NSMF at time $t$, and let $\Delta(t), t=1, \ldots$, be a function that denotes the average number of delay units that network slice requests issued at time $t$ wait until their activation.
A utilization ratio function $W(t)$ is defined as:
\begin{equation}\label{eq:UtilizationRatioFunction}
W(t) = \frac{U(t)}{\Delta(t)}
\end{equation}
An average utilization ratio $W_{[T_1,T_2]}$ for the time period $[T_1, T_2]$ is defined as
\begin{equation}\label{eq:UtilizationRatio}
W_{[T_1,T_2]} = \frac{1}{T_2 - T_1}\sum_{t=T_1}^{T_2} W(t)
\end{equation}
\end{definition}
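A literal transcription of Definition \ref{def:UtilizationRatio} (with $U$ and $\Delta$ given as per-time-unit values) reads:

```python
def avg_utilization_ratio(U, Delta, T1, T2):
    """W_[T1,T2] = (1/(T2-T1)) * sum_{t=T1}^{T2} U(t)/Delta(t),
    following the definition verbatim (inclusive sum over t)."""
    return sum(U[t] / Delta[t] for t in range(T1, T2 + 1)) / (T2 - T1)
```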
Our objective is to define optimization strategies that maximize $W_{[T_1,T_2]}$ for any time interval $[T_1,T_2]$.
In Fig. \ref{Fig:UtilizationRatioComparison01} we show a comparison between two utilization ratio functions, one obtained without any optimization heuristics, i.e., with requests processed as they come in a first-come-first-served manner (the blue curve), and the other obtained with Algorithm \ref{Alg:HeuristicRearangement} (the orange curve). For the non-optimized version we get $W_{[4000, 10000]} = 42.248$, which means that at every moment the ratio between the number of used resources and the waiting time is, on average, 42.248. The optimized strategy, on the other hand, gives $W_{[4000, 10000]} = 98.865$, more than double that of the non-optimized strategy.
\begin{figure}
\centering
\includegraphics[width=3.3in]{UtilizationRatioComparison-01.pdf}
\put(-70,120){
$\begin{array}{r@{}c@{}l}
p_c & = & 0.95 \frac{n_1}{n_c} \vspace{0.2cm} \\
p_a & = & 0.95 \frac{n_2}{n_a} \vspace{0.2cm}\\
\mu & = & 10 \vspace{0.2cm}\\
\lambda & = & 34
\end{array}
$}
\put(0,10){$t$}
\put(-240,150){$W(t)$}
\caption{Utilization ratio function simulating the work of NSMF for 10,000 time units. The average is taken over 10 experiments. The orange curve is obtained by the optimal strategy in Algorithm \ref{Alg:HeuristicRearangement} and the blue curve is for simulation without any optimizations (requests are processed as they arrive).}
\label{Fig:UtilizationRatioComparison01}
\end{figure}
\section{Conclusion} \label{conc}
We proposed a mathematical model for network slicing based on combinatorial designs such as Latin squares and rectangles and their conjugate forms. These combinatorial designs allow us to model both soft and hard slicing in the access and core parts. Moreover, by introducing the extended attribute description, our model can offer different levels of abstraction for an NSMF that combines cross-domain NSSIs into one end-to-end NSI.
From the optimization point of view, we also introduced the notion of a utilization ratio function, which aims to describe the functional dependency between the number of used network resources and the waiting time for establishing a network slice. We then presented two strategies for the work of the NSMF: a non-optimized first-come-first-served strategy and an optimized strategy, where the objectives of the optimization are: 1. to maximize the utilization of the network components; and 2. to decrease the average delay time from slice request to slice activation. Simulation results presented in this work show that the optimized strategy, achieved by maximizing the utilization ratio function, performs more than twice as well in terms of both objectives.
Astronomical observations demonstrate a general trend, indicating that many objects in the universe were formed much
earlier than expected by theory, and possibly such old objects are not even allowed by the standard theory.
Among them there are stars in the Milky Way, older than the Galaxy and even
older than the universe within at least two sigma, and distant high redshift (${z\sim 10}$) objects, such as
early galaxies, QSO/supermassive BHs, gamma-bursters, and early supernovae.
If no explanation is found in the conventional frameworks, there are two
possible ways (maybe more) to explain these puzzling phenomena: \\
1. A novel mechanism of formation of stellar-type objects in very early universe~\cite{ad-sb}.\\
2. Modification of the cosmological expansion regime in such a way that the universe becomes older than calculated in the
frameworks of the standard model, as is done e.g. in ref.~\cite{ad-vh-it}.
Here we discuss the first possibility, more details of which can be found in our paper~\cite{ad-sb}.
First, let us present the expression of the universe age $t_U$,
as a function of the cosmological redshift $z$:
\be
t(z) = \frac{1}{H}\,\int_0^{{1}/({z+1)}} \frac{dx}
{\sqrt{1-\Omega_{tot} +{\Omega_m}/{x} + x^2\,\Omega_v } },
\label{t-U}
\ee
where $\Omega_m$ is the fractional energy density of matter (baryonic plus dark matter), $\Omega_v$ that of dark energy,
and $\Omega_{tot}$ the total cosmological energy density.
According to the Planck data, the present day values of these parameters are:
${\Omega_{tot} = 1}$, ${\Omega_m = 0.317}$, and
${\Omega_v = 0.683}$. There is some tension between the values of
the Hubble parameter measured by Planck, ${H_{pl} = 67.3}$ km/sec/Mpc, and those obtained by traditional astronomical methods,
which can lead to $H$ as large as ${H_{astr} = 74}$~km/sec/Mpc; see ref.~\cite{planck-prmtr} for discussion. We present a few examples
of the universe age in gigayears for different $z$; the first number corresponds to the Planck value of $H$, and the second, smaller one,
in brackets, to the larger astronomical value:
$t_U \equiv t(0) = 13.8\,(12.5)$; $t(3) = 2.14\,(1.94)$; $t(6) = 0.93\,(0.82)$; $t(10) = 0.47\,(0.43)$; and
$t(12) = 0.37\,(0.33)$.
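Readers wishing to reproduce these numbers can evaluate Eq.~(\ref{t-U}) numerically; the following Python sketch uses a simple midpoint rule (for a flat universe, $\Omega_{tot}=1$, the integrand vanishes at $x=0$, so the integral is well behaved):

```python
import math

def age_gyr(z, H=67.3, omega_m=0.317, omega_v=0.683, n=20000):
    """Universe age at redshift z in Gyr for a flat universe,
    evaluating Eq. (1) with a midpoint rule; H in km/sec/Mpc."""
    # Hubble time 1/H in Gyr: 1 Mpc = 3.0857e19 km, 1 Gyr = 3.1557e16 s
    hubble_time = 3.0857e19 / H / 3.1557e16
    upper = 1.0 / (z + 1.0)
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h                       # midpoint of the i-th subinterval
        total += h / math.sqrt(omega_m / x + omega_v * x * x)
    return hubble_time * total

# reproduces the quoted values for H = 67.3 km/sec/Mpc:
# age_gyr(0) ≈ 13.8, age_gyr(3) ≈ 2.14, age_gyr(10) ≈ 0.47
```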
\section{Old stars in the Milky Way \label{s-old-stars}}
Recently several stars whose ages are unexpectedly high have been discovered in the Galaxy.
Employing thorium and uranium abundances in comparison with each other and with several stable elements,
the age of the metal-poor halo star BD+17$^\circ$3248 was estimated as ${13.8\pm 4}$ Gyr, as argued in
ref.~\cite{bd-17}. For comparison, the estimated age of the inner halo of the Galaxy is ${11.4\pm 0.7}$ Gyr~\cite{halo}.
The age of a star in the galactic halo, HE 1523-0901, was estimated to be
about 13.2 Gyr~\cite{he-1523}.
In this work many different chronometers, such as the U/Th, U/Ir, Th/Eu, and Th/Os ratios,
have been employed for the first time to measure the star's age.
Most puzzling is probably the age determination of the metal-deficient high-velocity subgiant in the solar neighborhood,
HD 140283, whose age is ${14.46 \pm 0.31}$ Gyr~\cite{hd-1402}.
The central value of the age exceeds the universe age by two standard deviations if ${H= 67.3}$ km/sec/Mpc, and for the larger
$H = 74$ km/sec/Mpc the star is older than the universe by more than six standard deviations.
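The quoted tensions follow from simple arithmetic on the central values, using $t_U = 13.8$ Gyr for the Planck value of $H$ and $12.5$ Gyr for the larger astronomical value:

```python
t_star, sigma = 14.46, 0.31    # age of HD 140283 and its 1-sigma error, in Gyr

tension_planck = (t_star - 13.8) / sigma   # universe age for Planck H
tension_astr = (t_star - 12.5) / sigma     # universe age for H = 74 km/sec/Mpc

print(round(tension_planck, 1))  # 2.1
print(round(tension_astr, 1))    # 6.3
```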
\section{High redshift distant objects \label{s-hig-z} }
\subsection{Galaxies \label{ss-galaxies}}
Galaxies at high redshifts, $z \sim 10$, cannot be observed with the usual optical telescopes, which are not sensitive enough
for such distant objects. Fortunately, natural gravitational lens ``telescopes'' allow us to see them, if the ``telescope'' happens to
be on the light ray from the galaxy to terrestrial observers. In such a way a galaxy at ${z \approx 9.6}$ was
discovered~\cite{gal-500}. The galaxy was formed when the universe was about 500 million years old.
Even more striking, a galaxy at ${z \approx 11}$ has been observed~\cite{gal-300},
which was formed before the universe age was 0.41 Gyr (or even less with the larger $H$).
Star formation is inhibited in interstellar gas with low metallicity, because a higher fraction of metals enhances
gas cooling and facilitates gravitational capture. The interstellar medium is enriched by metals through supernova
explosions. So we need either abundant very early supernovae, or an unusual source of metallicity
and a new mechanism of galaxy formation.
To make things more puzzling, some observed early galaxies are indeed enriched with metals, see
subsection~\ref{ss-sn}.
Quoting ref.~\cite{melia}:
"Observations with WFC3/IR on the Hubble Space Telescope and the use of gravitational lensing techniques
have facilitated the discovery of galaxies as far back as z ~ 10-12, a truly remarkable achievement. However, this
rapid emergence of high-z galaxies, barely ~ 200 Myr after the transition from Population III star formation to Population II,
appears to be in conflict with the standard view of how the early Universe evolved." (The quotation continues at the top
of the next subsection.)
\subsection{Quasars and supermassive black holes at high $z$ and now \label{ss-qso-BH}}
Continuing the quotation from ref.~\cite{melia}:
"This problem is very reminiscent of the better known (and
probably related) premature appearance of supermassive
black holes at ${z\sim 6}$. It is difficult to understand how ${10^9 M_\odot}$ black holes
appeared so quickly after the big bang without invoking non-standard accretion physics
and the formation of massive seeds, both of which are not seen in the local Universe."
A quasar with maximum redshift ${z = 7.085}$ has been discovered~\cite{qso-7}, i.e., it was
formed at ${t< 0.75}$ Gyr. Its luminosity is ${6.3 \cdot 10^{13} L_\odot}$
and its mass is ${2 \cdot 10^9 M_\odot}$.
Quasars are supposed to be supermassive black holes (BHs),
and their formation in such a short time looks problematic with conventional mechanisms.
There are strong indications that every large galaxy, as well as some relatively small ones,
contains a central supermassive black hole.
The mass of the black hole may be larger than ten billion $M_\odot$ in giant elliptical
and compact lenticular galaxies, and about a few million $M_\odot$ in spiral galaxies like the Milky Way.
The mass of the BH is typically 0.1\% of the mass of the stellar bulge of the galaxy,
but some galaxies may have a huge BH: e.g., NGC 1277 has
a central BH of ${1.7 \times 10^{10} M_\odot}$, or ${60}$\% of its bulge mass~\cite{NGC1277}.
Another interesting example is the possible existence of a supermassive black hole in an Ultracompact Dwarf Galaxy,
M60-UCD1~\cite{udg}, with a mass of about 20 million solar
masses, which is 15\% of the object's total mass. According to the conclusion of the authors, the high black hole mass and
observed "that M60-UCD1's stellar mass is consistent with its
luminosity, implying many other UCDs may also host supermassive black holes.
This suggests a substantial population of previously unnoticed supermassive
black holes."
These facts create serious problems for the standard scenario of formation of central supermassive
BHs by accretion of matter in the central part of a galaxy.
An inverted picture looks more plausible, in which supermassive black holes were formed first and
attracted matter, serving as seeds for subsequent galaxy formation.
\subsection{Early Supenovae \label{ss-sn}}
The medium around the observed early quasars contains
a considerable amount of ``metals'' (elements heavier than He).
According to the standard picture, only elements up to $^4$He and traces of Li, Be, B
were formed in the early universe by BBN, while heavier elements were created
by stellar nucleosynthesis and dispersed in the interstellar space by supernova explosions.
If so, prior to QSO creation rapid star formation should have taken place.
These stars had to produce plenty of supernovae, which might enrich the interstellar space with metals.
Observations of high redshift gamma-ray bursts (GRBs)
also indicate a high abundance of supernovae at large redshifts, if GRBs
are produced by very early supernovae.
The highest redshift of an observed GRB is 9.4~\cite{GBR-max}, and there are a few more
GRBs with smaller but still high redshifts.
The star formation rate necessary to explain these early
GRBs is at odds with the canonical star formation theory.
A recent discovery~\cite{gal-10} of an ultra-compact dwarf galaxy
older than 10 Gyr, enriched with metals, and probably hosting a massive black hole in its center
seems to be at odds with the standard model as well.
The dynamical mass of this galaxy is ${2\times 10^8 M_\odot}$
and its radius is ${R \sim 24}$ pc, so the galaxy density is extremely high.
There is a variable central X-ray source with luminosity ${L_X \sim 10^{38}}$ erg/s, which may be
an AGN associated with a massive black hole or a low-mass X-ray binary.
\section{A model of formation of compact stellar-like objects and heavy PBH in the very early universe \label{s-model-creation}}
Quite probably the puzzling existence of very old objects in the early, high-metallicity universe, described above, will find an
explanation within the framework of conventional astrophysics. However, in the absence of such an explanation, a search for
mechanisms based on new physics is also desirable. We present here an explanation based on earlier works~\cite{ad-js},
where a simple generalization of the well-known Affleck-Dine~\cite{ad-bs} scenario of baryogenesis allows one to explain all
the observational data described above.
The modification of the Affleck-Dine (AD) scenario of baryogenesis, which gives rise to significant production of stellar-like
objects or heavy primordial black holes, can be achieved by a simple addition of a
general renormalizable coupling of the scalar baryon, ${\chi}$, to the inflaton field, ${\Phi}$:
\be
U(\chi, \Phi) = U_\chi (\chi) + U_\Phi (\Phi) + U_{\rm int} (\chi,\Phi).
\label{U-of-Phi-chi}
\ee
Here $ {U_\Phi (\Phi) }$ is the inflaton potential, ${ U_\chi (\chi)}$ is
the quartic Affleck-Dine potential, which generically has some flat directions (valleys).
The potential has the form:
\be
U_\chi (\chi) = [m_\chi^2 \chi^2 + \lambda_\chi (\chi^4 + |\chi|^4) + h.c.] +\lambda_2|\chi|^4\ln{\frac{|\chi|^2}{\sigma^2}},
\label{U-of-chi}
\ee
where the last term is the Coleman-Weinberg correction~\cite{CW},
which arises as a result of summation of one-loop diagrams
in scalar field theory with quartic interaction term.
In the classical AD scenario the field ${\chi}$ acquires a large expectation value along a flat
direction, e.g. during inflation, and evolves down later, when the Hubble parameter drops below $m_\chi$.
If the flat directions of the quadratic and quartic parts of the potential do not coincide, then on
approach to the minimum ${\chi}$ starts to ``rotate'' in the two-dimensional $\{\mathrm{Re}\, \chi, \mathrm{Im}\, \chi\}$-plane. Rotation means that ${\chi}$
acquires a (large) average baryonic number.
The additional interaction term of $\chi$ with the inflaton, $\Phi$, is taken in the form:
\be
U_{\rm int} (\chi, \Phi) = \lambda_1 |\chi|^2 \left( \Phi - \Phi_1\right)^2 ,
\label{U-int}
\ee
where ${\Phi_1}$ is some value of the inflaton field which it passes during inflation and ${\lambda_1}$ is a constant.
So there is a mild tuning, but otherwise this is a general renormalizable coupling between $\chi$ and $\Phi$, which
surely must exist. This term acts as a positive time-dependent mass and thus almost always keeps the gate to the valleys closed,
except for a short period when ${\Phi}$ is near ${\Phi_1}$.
So there is a small chance for $\chi$ to reach a high value and to create a large baryon asymmetry. The behavior of the
potential $ U_\chi (\chi) + U_{\rm int} (\chi, \Phi)$ for different values of the effective mass is presented in Fig.~\ref{fig:Potevolution}.
The potential evolves down from the upper to the lower curve, reaching the latter when $\Phi = \Phi_1$, and then
returns back to the higher curve when $\Phi$ drops below $\Phi_1$.
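The gate mechanism described in the text can be illustrated with a toy one-dimensional integration of the $\chi$ dynamics. In this sketch the briefly negative effective mass squared stands in for the opening of the gate; all numbers (couplings, the inflaton trajectory, the initial amplitude) are illustrative placeholders, not fitted model parameters.

```python
import numpy as np

# Toy one-dimensional sketch of the chi dynamics described in the text:
# chi'' + 3H chi' + m_eff^2(t) chi + lam chi^3 = 0, where the inflaton
# contribution lam1*(Phi(t) - Phi_1)^2 makes m_eff^2 briefly negative
# ("the gate opens").  All numbers here are illustrative placeholders.
H, lam, lam1 = 0.1, 0.5, 4.0
dt, T = 1e-3, 10.0

def m_eff2(t):
    Phi = 5.0 - t                  # inflaton crosses Phi_1 = 0 at t = 5
    return lam1 * Phi**2 - 4.0     # negative only for |t - 5| < 1

t_grid = np.arange(0.0, T, dt)
chi, v = 0.05, 0.0
history = np.empty_like(t_grid)
for i, t in enumerate(t_grid):     # semi-implicit (symplectic) Euler
    v += (-3.0*H*v - m_eff2(t)*chi - lam*chi**3) * dt
    chi += v * dt
    history[i] = chi
# |chi| is amplified only while the gate is open (t near 5), then decays
```

Running this reproduces the qualitative picture of Fig.~\ref{fig:Chimodevol}: the field stays small before the gate, is amplified while it is open, and relaxes afterwards.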
\begin{figure}
\centering
\includegraphics[scale=0.6]{Potevolution.eps}
\caption{ {Behavior of $U_\chi(\chi)$ for different values of
$m^2_{eff} (t)$.}}
\label{fig:Potevolution}
\end{figure}
Correspondingly, the field $\chi$ rolls down toward the deeper minimum, oscillates there following the evolution of the minimum,
rolls back to the origin, and starts to rotate around it, as shown in Fig.~\ref{fig:Chimodevol}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.0]{ChiModEvol.eps}
\caption{ Evolution of $|\chi|$ with time. }
\label{fig:Chimodevol}
\end{figure}
Since the inflaton opens the gate to the deeper minimum only for a short time,
the probability for ${\chi}$ to reach a high value is low, so in most of space baryogenesis
creates the normal tiny baryon asymmetry, but in some bubbles, which occupy a small fraction of the
whole volume, the baryon asymmetry may be huge.
After the QCD phase transition, the contrast in baryonic charge density transformed into
perturbations of the energy/mass density, and the bubbles with high ${B}$ formed
PBHs or compact stellar-like objects. The mass distribution of these high-B bubbles has
a practically model-independent form:
\be
\frac{dN}{dM} = C_M \exp{ \left[-\gamma\, \ln^2\frac{(M-M_1)^2}{ M_0^2} \right] } ,
\label{dN-dM}
\ee
with the model-dependent parameters $C_M$, $M_1$, and $M_0$.
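For orientation, the distribution can be evaluated directly; since $C_M$, $M_1$ and $M_0$ are model dependent, the numbers below are purely illustrative placeholders.

```python
import numpy as np

# Direct evaluation of the log-normal-type mass distribution dN/dM above.
# C_M, M_1, M_0 and gamma are model dependent; the numbers used here are
# illustrative placeholders (masses in solar-mass units).
def dN_dM(M, C_M=1.0, M_1=0.0, M_0=10.0, gamma=0.5):
    return C_M * np.exp(-gamma * np.log((M - M_1)**2 / M_0**2)**2)

masses = np.linspace(1.0, 100.0, 200)
spectrum = dN_dM(masses)
# The distribution peaks where (M - M_1)^2 = M_0^2, i.e. at M = M_1 + M_0,
# and falls off slowly at large M, producing a tail of very massive objects.
```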
The values of the parameters can be adjusted in such a way that superheavy
BHs formed at the tail of this distribution would be abundant enough to be present in every large
galaxy and in some small ones. Such heavy PBHs could be seeds for the galaxy formation.
As we mentioned above, there is no satisfactory mechanism for the creation of superheavy black holes
within the framework of conventional physics, while the mechanism considered here can successfully
achieve that.
This mass distribution naturally explains some unusual features
of stellar mass black holes in the Galaxy.
It was found that their masses are concentrated in the narrow range
${ (7.8 \pm 1.2) M_\odot }$~\cite{bh-78}.
This result agrees with another paper, where
a peak around ${8M_\odot}$, a paucity of sources with masses below
${5M_\odot}$, and a sharp drop-off above
${10M_\odot}$ are observed~\cite{bh-10}. These features are not explained in the standard model.
A modification of ${U_{\rm int}}$ leads to a more complicated mass spectrum of the early formed stellar-type objects;
e.g., if:
\be
U_{\rm int} = {\lambda_1 |\chi|^2 }
\left( \Phi - \Phi_1\right)^2 \left( \Phi - \Phi_2\right)^2 ,
\label{U-int-2}
\ee
we come to a two-peak mass distribution of the PBHs and compact stars, which is
probably observed~\cite{bh-two-peak}, but not yet explained.
The evolved chemistry of such early-formed QSOs can be explained, at least to some extent,
by a more efficient production of metals during BBN due to a much larger ratio
${\beta = N_B/ N_\gamma}$.
The standard BBN essentially stops at $^4$He due to the very
small ${\beta}$. However, in the model considered here ${\beta}$ is
much larger than the canonical value, even being close to or exceeding unity. In such conditions
much heavier primordial elements can be produced~\cite{jap-bbn}. It is possible that stars which
started out with more metals than usual could look older than they are, and if their age is
evaluated by the standard nuclear chronology, they might even look older than the universe.
\section{Conclusion \label{ss-concl}}
The scenario may be speculative, but it is not too unnatural and explains a lot: \\
1. Superheavy BH and early quasar formation, with plenty of metals around.\\
2. High abundance of supernovae and gamma-bursters at ${z\gg 1}$.\\
3. Existence of very old stars in the Galaxy and of very old galaxies.\\[2mm]
Additionally, new types of stellar-like objects from the very early universe and
probably abundant cosmic antimatter in the Galaxy are predicted~\cite{bambi-ad}.
A study of the astrophysics of such a new kind of stars is in order.\\[3mm]
{\bf Acknowledgement.} This work was supported by the grant of the Russian Federation government
11.G34.31.0047.
\section{Introduction}
The DP control system can continuously activate the vessel's propellers to balance external disturbances (wind, waves, currents, etc.) and automatically control the position/heading of the vessel in the horizontal plane. Recently, with the deepening of marine development, marine engineering technology has become more and more complex\cite{sorensen-SurveyDynamicPositioning-2011}. DP technology has been vigorously promoted because of its broad application prospects, such as drilling, pipe-laying, offloading, and diving support. However, DP system development and evaluation for offshore vessels are highly complex and time-consuming.
The performance of a DP system should be tested in simulation and model experiments before commissioning tests on the real vessel. Numerical and experimental validations are also required to determine the operational sea state for offshore dynamic positioning. While model tests are always constrained by significant consumption of time and money, time domain simulation is a common and convenient tool for the design, analysis and prediction of DP systems. Donnarumma et al. (2015) designed the DP controller structure by a model-based design approach using simulation techniques\cite{donnarumma-Numericalmodelsship}. Tannuri et al. (2003) developed a computational simulator for DP systems enabling the simulation of several DP operations, such as drilling station keeping, pipe-laying path following, and operations related to assisted offloading\cite{tannuri-DevelopmentDynamicPosition}. Martelli et al. (2022) designed a DP system and evaluated its dynamic performance using a ship dynamic simulator\cite{martelli-TimedomainMethodologyAssess-2022a}. Zhan et al. (2013) developed a numerical station-keeping simulation in waves for a simplified FPSO with two DP systems\cite{zhan-DynamicPositioningSimulation-2013}. However, most of the above studies use simple numerical models in simulation. These simple models only consider the linear superposition of a low-frequency ship maneuvering model and a wave-frequency model, ignoring the effects of fluid memory and frequency-dependent hydrodynamic parameters, so it is hard to capture the nonlinear response of the structures to external exciting forces. As a result, simulation results for platform motion and power consumption provide no reliable guidance for engineering practice.
As Fossen (2011) clearly emphasized, the simulation model should be able to reconstruct the motion response of the real physical system, including convincing environmental loads and the fluid-memory effects caused by the hydrodynamic coefficients\cite{fossenHANDBOOKMARINECRAFT}. Therefore, it is necessary to establish a DP simulator with a more accurate hydrodynamic environment simulation, rather than just a three degree-of-freedom (3 DOF) motion model using constant hydrodynamic parameters.
In addition, model experiments are an effective means to study the motion response of DP control for vessels. Based on similarity theory, researchers have carried out a large body of work on scaled model experiments for DP control. Loria et al. (2000) used a 1:70 scaled model ship to validate the separation principle for DP using noisy position measurements\cite{loria-SeparationPrincipleDynamic-2000}. Pettersen and Fossen carried out an experiment on underactuated DP of a ship using a 1:70 scale model\cite{pettersen-UnderactuatedDynamicPositioning-2000}. Tannuri et al. (2010) carried out an experiment on sliding mode control for a 1:150 scaled tanker\cite{tannuri-DynamicPositioningSystems-2010}. Hu et al. carried out a 1:37 scaled model DP experiment of a novel twin-lift decommissioning operation\cite{zhihuanhu-ExperimentalStudyLowspeed-2022}. A more common research and test method is the combination of numerical simulation and experiment: experimental tests are performed together with numerical analysis in order to validate the control algorithm. Leira et al. (2009) demonstrated the performance of their reliability-based DP system for surface vessels moored to the seabed both by numerical simulations and by laboratory experiments on a model vessel\cite{leira-ReliabilitybasedDynamicPosition-2009}. Tannuri and Morishita carried out a simplified experiment with a scaled model to pre-validate their simulator of a typical DP system\cite{tannuri-ExperimentalNumericalEvaluation-2006}. It is worth noting, however, that the experimental conditions and equipment are not easy to construct and obtain, and on-site commissioning of experiments is also complicated and difficult. Therefore, it is very meaningful to develop a hybrid simulation and experiment test platform, so that reliable hydrodynamic simulation can replace part of the experimental work by completing parameter adjustment in advance.
This paper attempts to find a convenient, efficient and accurate test and evaluation method for DP systems. To this end, a hybrid simulation and experiment test platform is built, combining more accurate numerical simulation, model experiments with parameter pre-adjustment, and a control module with switchable algorithms and parameters. The simulation environment is constructed in combination with hydrodynamic programs, including the calculation of frequency-dependent hydrodynamic coefficients and of the motion response, considering the fluid memory effect, under environmental loads of wind, waves and currents. Therefore, a more accurate time domain simulation of the DP system motion response is realized. In addition, the experimental environment is constructed as a hardware framework using a scaled model of the real vessel, based on similarity theory at equal Froude number. During the experiment, all data are converted to the real ship scale to ensure the consistency of the algorithm and control parameters across numerical simulation, experiment and the real ship. This consistency makes it possible to pre-adjust experimental parameters with simulation results. The DP controller, equipped with a switchable complete closed-loop control solution (i.e., reference filter, PID control, QP-based thrust allocation algorithm), has been developed to be compatible with both the simulation environment and the experiment environment.
The present paper is organized as follows: in Section \ref{sec:Overall}, the overall structure and characteristics of the hybrid simulation and experiment test platform are briefly introduced. In Section \ref{sec:Simulation}, the calculation of accurate hydrodynamics and motion response in the numerical simulation is introduced. In Section \ref{sec:Experiment}, the 1:50 scale experimental model and hardware equipment such as the thrusters and the observer are presented, and the scale conversion used is introduced. In Section \ref{sec:Control}, we show a modular controller with switchable algorithms and online parameter adjustment. The results of simulation and experiment are summarized in Section \ref{sec:Results}, and some concluding remarks are given at the end of the paper.
\section{Overall Structure of the hybrid platform}
\label{sec:Overall}
The framework of the hybrid simulation and experiment test platform mainly includes three parts: a hydrodynamic simulation module, a model experiment module and a DP controller module. The block diagram of the hybrid test platform is shown in Figure \ref{fig:overall_structure}. The hydrodynamic simulation module (a) uses hydrodynamic calculation programs to compute the hydrodynamics, environmental loads, and motion response of the DP ship; details are given in Section \ref{sec:Simulation}. The experiment module (b) is the scaled model experiment carried out in the laboratory basin, also used to test the performance of the DP system in agreement with the simulation; details are given in Section \ref{sec:Experiment}. The control module (c) is implemented in the Robot Operating System (ROS) environment to be easily extensible and switchable; details are given in Section \ref{sec:Control}. The signal interaction between the controller and the hydrodynamic simulation module or model experiment module is realized through the TCP/IP communication protocol over a local area network (LAN), i.e., receiving the ship's position/heading state and sending control commands.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figure/overall_structure}
\caption{Block diagram of the comprehensive simulation and experiment platform}
\label{fig:overall_structure}
\end{figure}
The design of this framework ensures the consistency of the algorithms used in experiments and simulations. In order to achieve the same control effect in the scaled model experiment using the full-scale control parameters obtained in simulation, a more accurate calculation of hydrodynamics and motion response is realized in simulation, and scale conversion is used in the model experiment so that the input and output data of each control loop are at full scale. Therefore, the pre-adjustment of control parameters can be completed in simulation, which provides a reference for the model test parameters and shortens the on-site tuning time in the experiment, tightly linking simulation and experiment together. This avoids the large differences in control parameters caused by different simulation and experiment scales mentioned by Ianagui et al.\cite{ianagui-RobustOutputFeedbackControl-2020}.
The control module in the framework has a modular, switchable design, which makes it easy to extend, monitor and adjust parameters online. These characteristics are realized through the use of ROS. ROS has become the standard platform approach for modern robots and is also used in the development of surface or underwater vehicles by other researchers \cite{henriklemckealfheim-DevelopmentDynamicPositioning-2018} \cite{chiaROSApproachMultimode2021}. The structure of ROS enables data to be transferred easily between modules through nodes and topics\cite{amandadattalo-ROSIntroductionROS-2018}. Topics are named buses over which nodes exchange messages. A node is a process that performs computation. Nodes are combined together into a graph and communicate with one another using streaming topics. In this paper, the modular development of the controller is realized, and each algorithm module can be switched independently. Online adjustment of parameters during program execution is also implemented, avoiding recompilation after each parameter adjustment and improving the speed of parameter tuning.
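The node/topic/parameter-server pattern can be mocked in a few lines of plain Python to show why online retuning works without recompilation. The topic and parameter names below are illustrative, and real nodes would use `rospy` publishers, subscribers and `get_param` instead of these dictionaries.

```python
# Minimal pure-Python mock of the ROS pattern used here: nodes exchange
# messages over named topics, and gains live in a shared parameter server
# so they can be changed at runtime.  Names are illustrative assumptions,
# not the actual topics or parameters of the platform.
param_server = {'/dp/pid/kp': 1.0}
topics = {}

def subscribe(topic, cb):
    topics.setdefault(topic, []).append(cb)

def publish(topic, msg):
    for cb in topics.get(topic, []):
        cb(msg)

def controller_node(eta_err):
    kp = param_server['/dp/pid/kp']   # re-read each step -> online tuning
    publish('/dp/tau_cmd', -kp * eta_err)

received = []
subscribe('/dp/tau_cmd', received.append)
controller_node(2.0)                  # kp = 1.0 -> command -2.0
param_server['/dp/pid/kp'] = 3.0      # retune while "running"
controller_node(2.0)                  # kp = 3.0 -> command -6.0
```

Because the gain is fetched from the shared dictionary on every call, changing the stored value immediately changes the next command, which is exactly the behavior the ROS parameter server provides across processes.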
\textbf{Coordinate systems used in this paper.} As shown in Figure \ref{fig:coordinate}, the \emph{North-East-Down} (NED) coordinate system $\{n\} = (x_n,y_n,z_n)$ is defined relative to the Earth as the earth-fixed (global) reference frame, and the body-fixed (local) reference frame $\{b\} = (x_b, y_b, z_b)$ is a moving coordinate frame that is fixed to the vessel. The seakeeping reference frame $\{s\}=(x_s, y_s, z_s)$ is not fixed to the vessel; it is fixed to the equilibrium state of the vessel. The $\{s\}$ frame is considered inertial and is therefore nonaccelerating and fixed in orientation with respect to the $\{n\}$ frame; $U$ is the average forward speed.
The positions and velocities considered in this paper use the following representation: $\eta = [x, y, z, \phi, \theta, \psi]^T \in \{n\}$ is the vector of position/Euler angles in the earth-fixed reference frame; $ v = [u, v, w, p, q, r]^T \in \{b\}$ is the vector of velocities in the body-fixed reference frame; $\delta \eta = [\delta x,\delta y,\delta z,\delta \phi,\delta \theta,\delta \psi]^T \in \{s\}$ is the vector of surge, sway and heave perturbations and roll, pitch and yaw perturbations in seakeeping coordinates, with corresponding perturbed velocities $\delta v = [\delta u, \delta v, \delta w, \delta p, \delta q, \delta r]^T \in \{s\}$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{figure/coordinate}
\caption{Coordinate system used in this paper}
\label{fig:coordinate}
\end{figure}
\section{Simulation Module}
\label{sec:Simulation}
The structural block diagram of the hydrodynamic simulation system is shown in Figure \ref{fig:overall_structure}(a). In this module, a panel model of the wetted surface is first established so that the Boundary Element Method (BEM) can be used to solve for the hydrodynamic parameters. The boundary integral equations for the diffraction/radiation potential on the vessel surface are derived with a Green function \cite{Newman-Algorithms-1985} and solved numerically. The method discretizes the integral equation on the mean wetted surface into a number of quadrilateral panels, so that the velocity potential of each element can be calculated. Then, frequency-dependent hydrodynamic parameters can be obtained, such as added mass, damping, and motion and load RAOs.
Furthermore, the equations of motion of the vessel in the time domain are obtained by applying Newton's second law:
\begin{equation}
\begin{aligned}
\dot{\eta} & =J(\Theta)v
\\
[M_{RB}+A(\infty)]\cdot \delta\dot{v}+\int_{0}^{\infty}{R(\tau)\delta v(t-\tau)\, d\tau}+ C\delta\eta & ={F}_{\text{thruster}}+{F}_{\text{environment}}
\\
\delta{v} \approx v-\overline{v}; \ \overline{v} & \approx U[1,-\delta \psi,\delta \theta,0,0,0]^T
\\
\delta\dot{v} \approx \dot v-\dot{\overline{v}}; \ \dot{\overline{v}} & \approx U[0,-\delta r,\delta q,0,0,0]^T
\label{eq:motion}
\end{aligned}
\end{equation}
where $J(\Theta)$ is the transformation matrix between $\{b\}$ and $\{n\}$; $v$ represents the velocity vector in $\{b\}$; $ \delta v, \ \delta\dot{v}$ are the perturbed velocities and accelerations in $\{s\}$; $\overline{v}, \ \dot{\overline{v}}$ represent the average velocities and accelerations; the mass matrix $M_{RB}\in \mathbb{R}^{6 \times 6}$ is the rigid-body inertia matrix and $A(\infty)\in \mathbb{R}^{6 \times 6}$ is the infinite-frequency added mass matrix, giving the vessel's instantaneous response to acceleration; $\tau$ is a time-lag integration variable; $R(\tau)$ is the matrix of retardation functions proposed by Cummins \cite{Cummins-IRF-1962}, representing the memory effect of the fluid; $C \in \mathbb{R}^{6 \times 6}$ is the hydrostatic restoring force matrix. The right side of the equation contains the external forces: $F_\text{thruster}$ is the vector of propulsion forces and moments provided by the propellers, and $F_\text{environment}\in \mathbb{R}^6$ is the vector of environmental loads including wave, wind and current loads.
The retardation functions $R(\tau)$ are calculated by Fourier integration of the frequency-dependent damping matrix $B(f)$ at frequency $f$, $R(\tau) = c(\tau)\int_{f=0}^{\infty}{[4B(f)\cos{(2\pi f \tau)}]df}$, where $c(\tau) = \exp{[-(3\tau / T_c)]}$ is a cutoff scaling function. $T_c$ is the cutoff time, after which the retardation function is truncated and assumed to be zero, because it decays to zero as $\tau \rightarrow \infty$. The \emph{Impulse Response Function} (IRF) approach applies the retardation functions at each time step through a convolution integral to account for the past motion of the vessel. The infinite-frequency added mass matrix $A(\infty)$, giving the vessel's instantaneous response to acceleration, can be obtained from the retardation functions $R(s)$ and the added mass matrix $A(f)$ at frequency $f$ from the equation: $A(\infty) = A(f) + (1/{2\pi f})\int_{s=0}^{T_c}{R(s)\sin{(2\pi f s)}ds}$ \cite{Orcina2015}.
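The Fourier construction of the retardation function can be sketched numerically. In practice $B(f)$ would come from the BEM solver; here an analytic toy curve $B(f)=e^{-f}$ stands in for it so the result can be checked against the closed-form transform.

```python
import numpy as np

# Numerical sketch of R(tau) = c(tau) * int_0^inf 4 B(f) cos(2 pi f tau) df,
# with c(tau) = exp(-3 tau / Tc).  B(f) would come from the BEM solver;
# the toy curve B(f) = exp(-f) is used here only so the integral can be
# checked against its closed form 4 / (1 + (2 pi tau)^2).
def trapz(y, x):
    # simple trapezoidal rule over a 1-D grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def retardation(tau, f, B, Tc=60.0):
    c = np.exp(-3.0 * tau / Tc)                    # cutoff scaling function
    return c * trapz(4.0 * B * np.cos(2.0 * np.pi * f * tau), f)

f = np.linspace(0.0, 30.0, 20001)                  # frequency grid
B = np.exp(-f)                                     # toy damping curve
R0 = retardation(0.0, f, B)                        # analytically 4.0
```

In the actual simulator, one such function is evaluated per matrix entry and the results are stored on a lag grid for the time-stepping convolution.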
The above motion response calculation is performed using the hydrodynamic software OrcaFlex. The Dynamic Link Library (DLL) of OrcaFlex and a socket are used to communicate with the DP control system running in Ubuntu via the TCP/IP communication protocol. This allows the two processes to start at the same time and keep the same time step. Within the time loop, the vessel position and heading at each time step are sent to the DP controller, which calculates the demanded forces according to the errors from the set point, allocates the demanded forces to the thrusters (the thrust command), and then sends the thrust command to the simulator. The simulator calculates a new vessel state according to the thrust command and sends the new state back to the DP controller, and the process repeats to achieve a continuous time domain simulation of dynamic positioning.
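The per-time-step handshake can be sketched with raw sockets. The message layout (three doubles each way), the loopback pairing, and the toy proportional law inside `controller_step` are all illustrative assumptions, not the actual protocol exchanged with the OrcaFlex DLL.

```python
import socket
import struct
import threading

# Hypothetical sketch of the per-time-step exchange described above: the
# simulator sends the vessel state (x, y, psi) and the controller replies
# with a thrust command (Fx, Fy, Mz).  The 3-double message layout and the
# toy proportional law are illustrative assumptions, not the real protocol.
STATE_FMT = '!3d'   # x [m], y [m], psi [rad]
CMD_FMT = '!3d'     # Fx [N], Fy [N], Mz [N*m]

def controller_step(state):
    x, y, psi = state
    kp, kpsi = 1.0e5, 1.0e6           # placeholder gains (full scale)
    return (-kp * x, -kp * y, -kpsi * psi)

def serve_one_step(conn):
    raw = conn.recv(struct.calcsize(STATE_FMT))
    conn.sendall(struct.pack(CMD_FMT, *controller_step(struct.unpack(STATE_FMT, raw))))

# Loopback demo standing in for the simulator side of the link
ctrl_sock, sim_sock = socket.socketpair()
t = threading.Thread(target=serve_one_step, args=(ctrl_sock,))
t.start()
sim_sock.sendall(struct.pack(STATE_FMT, 2.0, -1.0, 0.5))
cmd = struct.unpack(CMD_FMT, sim_sock.recv(struct.calcsize(CMD_FMT)))
t.join()
# cmd == (-200000.0, 100000.0, -500000.0)
```

In the platform this request/reply pair happens once per simulation time step, which is what keeps the two processes synchronized.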
\section{Experiment Module}
\label{sec:Experiment}
In addition to simulation, the scaled model experiment is a convincing and reliable method to study the motion response of DP control for vessels. The model-scale experiments (1:50) were conducted in the deep-water offshore basin at Shanghai Jiao Tong University, China. As shown in Figure \ref{fig:overall_structure}(b), the model experiment module is composed of the scale conversion, vessel model, thrusters, servo motors, onboard controller, observer, etc. The details are shown in Figure \ref{fig:experiment_structure}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{experiment_structure}
\caption{Block diagram of experiment platform}
\label{fig:experiment_structure}
\end{figure}
\subsection*{ $\bullet$ Scale Conversion}
In the experiment, the reduced-scale ship model moves under environmental and control loads in the laboratory basin to simulate the real DP ship. For the model experiment in this paper, the data interacting with the DP program are converted to the real ship scale by keeping the \textit{Froude number} constant:
\begin{equation}
\text{Fn}= V/\sqrt{Lg} = \text{const}
\end{equation}
where $V$ is the velocity of the ship, $L$ is the length of the ship, and $g$ is the acceleration of gravity. The following scalings are obtained according to the scale ratio $\lambda$: length $L_s/L_m = \lambda$, angle $\phi_s/\phi_m = 1$, linear velocity $V_s/V_m = \sqrt{\lambda}$, angular velocity $r_s/r_m = \sqrt{1/\lambda}$, density of water $\rho_s/\rho_m = \gamma$, force $F_s/F_m = \gamma \lambda^3$, moment $N_s/N_m =\gamma \lambda^4$, time $T_s/T_m = \sqrt{\lambda}$.
In each control loop, the model-scale position state obtained by the observer is converted to full scale and then sent to the control center. Similarly, the full-scale thrust commands issued by the controller are converted to model scale and then sent to the PLC and applied to the hull. Therefore, the control center always processes full-scale data, ensuring the consistency of the algorithm and control parameters across numerical simulation, experiment and the real ship.
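These conversions can be collected into a small helper applied on either side of the control loop; $\lambda = 50$ matches the model used here, and equal water density ($\gamma = 1$) is assumed for simplicity.

```python
# Sketch of the Froude-number scale conversions listed above, for
# lambda = 50 and gamma = 1 (equal water density, an assumption made
# here for simplicity).  Each entry is the multiplicative factor taking
# a model-scale quantity to full scale.
LAMBDA, GAMMA = 50.0, 1.0

FACTOR = {
    'length':   LAMBDA,
    'angle':    1.0,
    'velocity': LAMBDA**0.5,
    'ang_vel':  LAMBDA**-0.5,
    'force':    GAMMA * LAMBDA**3,
    'moment':   GAMMA * LAMBDA**4,
    'time':     LAMBDA**0.5,
}

def model_to_full(kind, value):
    """Convert a model-scale measurement to full scale (observer side)."""
    return value * FACTOR[kind]

def full_to_model(kind, value):
    """Convert a full-scale command back to model scale (thruster side)."""
    return value / FACTOR[kind]

# e.g. a 1 N thrust on the model corresponds to 125 kN at full scale
```

Applying `model_to_full` to the observer output and `full_to_model` to the thrust command is what keeps the control center working entirely in full-scale quantities.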
\subsection*{ $\bullet$ Vessel model}
The experimental vessel model used in this paper is a model of a shuttle tanker (ST) at scale $\lambda=1:50$. The principal dimensions of the ST are shown in Table \ref{tab:main_dimension}. The ST is equipped with four thrusters: two azimuth thrusters, a lateral thruster at the bow and a main thruster at the stern. The arrangement of the thrusters and their locations and performance parameters are shown in Figure \ref{fig:thrusters} and Table \ref{tab:location_thrusters}.
\begin{table}
\renewcommand{\arraystretch}{1.2}
\caption{Main Dimension of ST ($\lambda$=1:50)}
\label{tab:main_dimension}
\centering
\setlength{\tabcolsep}{3mm}{
\begin{tabular}{l c c c}
\toprule
Main Dimension & Unit & Full Scale & Model Scale \\
\midrule
Length Overall (Hull) & $m$ & 137 & 2.74 \\
Length between Perpendiculars & $m$ & 134.6 & 2.692 \\
Breadth Moulded & $m$ & 22.8 & 0.456 \\
Depth Moulded to Main Deck & $m$ & 12.5 & 0.25 \\
Draught & $m$ & 8.655 & 0.1731 \\
Displacement & $kg$ & 21402$\times$10$^3$ & 167.04 \\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=4in]{thrusters}
\caption{Arrangement of thrusters}
\label{fig:thrusters}
\end{figure}
\begin{table}
\renewcommand{\arraystretch}{1.2}
\caption{Location (relative to gravity center) \& capacity of thrusters}
\label{tab:location_thrusters}
\centering
\setlength{\tabcolsep}{8mm}{
\begin{tabular}{c c c c}
\toprule
Thruster & X [m] & Y [m] & Max Thrust [kN] \\
\midrule
No.1 & 60.59 & 0 & 246 \\
No.2 & 57.00 & 0 & 275 \\
No.3 & -40.34 & 0 & 275 \\
No.4 & -61.17 & 0 & 480 \\
\bottomrule
\end{tabular}}
\end{table}
The thrusters are driven by servo motors, and a belt steering unit with an additional motor is used to adjust the direction of each azimuth thruster. The speeds of the servo motors are controlled by a Programmable Logic Controller (PLC) running onboard. Since the shuttle tanker is equipped with 2 azimuth thrusters, 1 lateral thruster and 1 main thruster, 6 servo motors are required to drive these thrusters. Therefore, the ship model uses a 48V DC battery pack to power the servo motors and the PLC simultaneously.
\subsection*{ $\bullet$ Control and Communication Center}
The main body of the control center is a computer installed onshore running the DP programs, receiving the ship state and issuing thrust commands. The control center is wired to the observer for receiving the ship state. An industrial WLAN is used for communication between the PLC and the control center, consisting of one Access Point (AP) onshore connected to the control center and one onboard client module running in the PLC. The PLC receives the thruster commands and feeds the actual speed information back to the control center through the WLAN.
\subsection*{ $\bullet$ Observer}
The use of an optical motion capture system makes it possible to measure the position of the model in a non-contact manner, using the Qualisys Track Manager (QTM) software. The position and speed information of the model obtained by the optical motion capture system is sent to the control center running the DP control algorithm. After receiving the model state, the control center outputs the thruster commands after calculation, thus forming a closed control loop.
\section{Control Module}
\label{sec:Control}
The control module algorithm is based on the Robot Operating System (ROS) environment. The diagram of this control system is shown in Figure \ref{fig:ros_node}. In order to realize efficient testing and verification of DP algorithms, a loosely coupled modular design of the algorithms is needed. The modular design allows algorithms with different functions to be independent of each other, while different algorithms with the same function are switchable and can replace each other. Therefore, ROS is chosen as the basis of the control system, as it provides an operating-system-like capability for creating modular systems using nodes. In addition, in order to achieve online parameter adjustment, we need to be able to revalue some variables (e.g. $K_P , K_I, K_D$ in the PID controller) in real time during program execution. The ROS parameter server achieves this: it provides a shared, multi-variate dictionary that is accessible via network APIs. Nodes use this server to store and retrieve parameters at runtime, avoiding recompilation after each parameter adjustment \cite{bradmillerParameterServerROS2018}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{ros_node}
\caption{Diagram of DP control system}
\label{fig:ros_node}
\end{figure}
As shown in Figure \ref{fig:ros_node}, the TCP/IP communication protocol is used to implement data exchange between the control module and the simulation or experiment. In each closed-loop control time step, the control module receives the position and speed information of the ship model from the simulator running on a Windows system during numerical simulation, or from the optical motion capture system during the vessel model experiment. After the calculation is completed, the control module sends thrust commands to the simulator software during numerical simulation, or to the onboard PLC during the experiment. Note that all the state data sent to the control module and the thrust commands given by the controller are at full scale ($\lambda$=1:1), and the thrust commands are applied to the model test after scaling according to similarity theory at the corresponding scale ratio, ensuring the consistency of the control algorithm applied to simulation and experiment.
The nodes named (a) Reference Filter, (b) Motion Controller and (c) Thrust Allocation in the blue block of Figure \ref{fig:ros_node} are the core parts of the control module. The algorithm used in each of these modules can easily be swapped for another algorithm with the same function. The algorithms used in the examples of this paper are based on the theory presented in the following.
\subsection*{ $\bullet$ Reference Filter}
To plan a feasible reference path between the ship and the setpoint, the physical limitations on the ship's velocity and acceleration need to be fully considered. To process the step signal of the reference target, Fossen \cite{fossenHANDBOOKMARINECRAFT} constructed a ship position and heading reference model by connecting a first-order low-pass filter in series with a mass-damper-spring system, which is used to generate a smooth reference path.
\begin{equation}
\label{eq:reference}
\eta_{d}^{(3)}+(2\Delta +I)\Omega {{\ddot{\eta }}}_{d}+(2\Delta +I){{\Omega }}^{2}{{\dot{\eta }}}_{d}+{{\Omega }^{3}}{{\eta }_{d}}={{\Omega }^{3}}{{r}}
\end{equation}
where $\eta_{d}= [x_d, y_d, \psi_d]$ is the desired position and heading vector calculated from Eq. \ref{eq:reference}; $ r = [x_r, y_r, \psi_r]$ is the setpoint input by the operator; $\Delta$ and $\Omega$ are diagonal matrices of relative damping coefficients and natural frequencies, respectively.
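A minimal single-axis sketch of this reference model can be integrated with forward Euler. The values $\zeta=1$ and $\omega=1$ rad/s below are illustrative assumptions, not the values used for the ST; the point is only that the filtered output approaches a step setpoint smoothly rather than jumping.

```python
# One axis of the third-order reference model: with relative damping zeta
# and natural frequency omega, the dynamics per axis read
#   x''' + (2*zeta+1)*omega*x'' + (2*zeta+1)*omega^2*x' + omega^3*x = omega^3*r

def reference_filter(r, zeta=1.0, omega=1.0, h=0.01, steps=3000):
    x = v = a = 0.0  # eta_d and its first two derivatives, starting at rest
    for _ in range(steps):
        # jerk: x''' = omega^3*(r - x) - (2*zeta+1)*(omega*x'' + omega^2*x')
        j = omega**3 * (r - x) - (2*zeta + 1) * (omega * a + omega**2 * v)
        x, v, a = x + h*v, v + h*a, a + h*j  # forward Euler step
    return x

eta_d = reference_filter(r=10.0)   # after 30 s, settles near the setpoint 10
```

For $\zeta=1$, $\omega=1$ the characteristic polynomial is $(s+1)^3$, so the filter is critically damped and the output converges without overshoot.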
\subsection*{ $\bullet$ Motion Controller}
\label{par:PID}
The PID controller is a robust, industry-standard controller used in many applications. Since the goal of this paper is to validate the test platform, the classical PID controller is a solid choice. The PID controller attempts to minimise the error $e(k)$, the difference between the desired setpoint $\eta_d$ and the observed state $\eta_o = [x, y, \psi]$ in the body-frame coordinate system. The error can be expressed as:
\begin{equation}
e(k)=\eta_o(k)-{{\eta}_{d}}(k)
\end{equation}
To implement the PID controller on a computer, the control law is discretized, giving:
\begin{equation}
{\tau }_{PID}(k)={K}_{P}e(k)+{K}_{I}\sum\limits_{i}^{k}{e(i)}h+{K}_{D}\frac{1}{h}(e(k)-e(k-1))
\end{equation}
where ${K}_{P},{K}_{I},{K}_{D}$ are the proportional, integral and derivative gains respectively, and $h$ is the time step. When tuning the controller manually, the physical meaning of the parameters helps the operator choose their values. Under the above definition, the corresponding units are:
\begin{equation}
{\tau }_{PID}:\left[\begin{matrix}kN \\ kN \\ kNm \\ \end{matrix}\right],
{K}_{P}:\left[\begin{matrix}\frac{kN}{m} \\ \frac{kN}{m} \\ \frac{kNm}{rad} \\ \end{matrix}\right],
{K}_{I}:\left[\begin{matrix}\frac{kN}{m\cdot s} \\ \frac{kN}{m\cdot s} \\ \frac{kNm}{rad\cdot s} \\ \end{matrix}\right],
{K}_{D}:\left[\begin{matrix}\frac{kN}{m/s} \\ \frac{kN}{m/s} \\\frac{kNm}{rad/s} \\ \end{matrix}\right]
\end{equation}
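The discretized control law above transcribes directly into code. The gains and time step below are illustrative placeholders, not the tuned values of the platform.

```python
# Discrete PID for one DOF:
#   tau(k) = Kp*e(k) + Ki*sum(e(i))*h + Kd*(e(k) - e(k-1))/h

def make_pid(kp, ki, kd, h):
    state = {"sum": 0.0, "prev": None}
    def pid(e):
        state["sum"] += e
        de = 0.0 if state["prev"] is None else (e - state["prev"]) / h
        state["prev"] = e
        return kp * e + ki * state["sum"] * h + kd * de
    return pid

pid = make_pid(kp=400.0, ki=0.05, kd=2000.0, h=0.1)
tau1 = pid(2.0)   # first step: P and I terms only (derivative taken as 0)
tau2 = pid(1.5)   # later steps include the finite-difference D term
```

Note the large derivative contribution on the second step: the D term reacts to the change in error, which is exactly the damping action described in the tuning procedure below.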
A frequently used method of tuning the PID controller is to increase the proportional gain until oscillations around the setpoint appear, then add derivative gain, which acts as damping, to remove the oscillations, and finally add integral action to remove the steady-state offset \cite{henriklemckealfheim-DevelopmentDynamicPositioning-2018}. When adjusting the control parameters over the 3 DOF, maintaining the desired heading angle takes priority over position in order to produce the desired results \cite{shiCompositeFinitetimeAdaptive2022a}.
\subsection*{ $\bullet$ Thrust allocation}
The role of thrust allocation is to distribute the total desired control force and moment calculated by the motion controller among the actuators, so that the resultant force and moment applied by the actuators on the ship match the desired control force. The force acting on the ship can be written as:
\begin{equation}
{{\tau }_{thruster}}\text{=}B(\alpha )\cdot u
\end{equation}
\begin{equation}
{{B}_{3\times m}}(\alpha )=\left[
\begin{matrix}
\cos {{\alpha }_{1}} & \cdots & \cos {{\alpha }_{m}} \\
\sin {{\alpha }_{1}} & \cdots & \sin {{\alpha }_{m}} \\
-l_{y1}\cos{\alpha_1}+l_{x1}\sin{\alpha_1} & \cdots & -l_{ym}\cos{\alpha_m}+l_{xm}\sin{\alpha_m} \\
\end{matrix}
\right]
\end{equation}
where $u = (u_1, u_2, \cdots, u_m)^T \in \mathbb{R}^m $ represents the thrust of each propeller; $\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_m)^T \in \mathbb{R}^m $ contains the thrust direction of each propeller; and $l_{xi}$ and $l_{yi}$ are the X and Y coordinates of propeller $i$ in the body-frame coordinate system, respectively.
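The mapping $\tau = B(\alpha)u$ can be sketched directly with the standard library. The thruster geometry below is illustrative, not the actual layout of the ST: two stern thrusters placed symmetrically about the centreline and pointed forward should produce pure surge force, with their yaw moments cancelling.

```python
import math

# Thrust configuration matrix B(alpha) for m azimuth thrusters, and the
# resulting generalized force tau = B(alpha) u = (surge, sway, yaw moment).

def config_matrix(alphas, lx, ly):
    cols = [(math.cos(a),
             math.sin(a),
             -y*math.cos(a) + x*math.sin(a))
            for a, x, y in zip(alphas, lx, ly)]
    # transpose the list of columns into a 3 x m row-major matrix
    return [[c[i] for c in cols] for i in range(3)]

def apply(B, u):
    return [sum(B[i][j]*u[j] for j in range(len(u))) for i in range(3)]

# two stern thrusters, symmetric about the centreline, both at alpha = 0
B = config_matrix(alphas=[0.0, 0.0], lx=[-30.0, -30.0], ly=[-5.0, 5.0])
tau = apply(B, [100.0, 100.0])   # pure surge; sway and yaw cancel
```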
For thrust distribution problems, power consumption, angle change, speed change, etc., are frequently taken as optimization objectives to minimize the cost of generating thrust, and quadratic programming (QP) problems with linear constraints are constructed \cite{johansen-ConstrainedNonlinearControl-2004}. The QP problems can be solved very reliably and efficiently \cite{Convex-optimization-2004}. Singularity avoidance is also considered, see \cite{johansenConstrainedNonlinearControl2004a} for the details. The optimization objective function constructed is as follows:
\begin{equation}
J(\alpha , u, s)=\sum\limits_{i=1}^{m}{{{W}_{i}}({{u}_{i}})}+{{(\alpha -{{\alpha }_{0}})}^{T}}\Omega (\alpha -{{\alpha }_{0}})+{{s}^{T}}Qs +\frac{\rho }{\varepsilon +det(B(\alpha ){{B}^{T}}(\alpha ))}
\end{equation}
where ${{W}_{i}}({{u}_{i}}) = {\mid u_i \mid}^{3/2} $ is the power of propeller $i$; $(\alpha-\alpha_0)$ is the change in propeller angle between the present moment and the next time step; $s$ is the relaxation (slack) term of the thrust allocation, i.e., the allowable error in the resultant force; and $Q$ is the weight of the relaxation term. The last term avoids singularity of $B(\alpha)$, where $\rho>0$ is a weighting parameter and $\varepsilon$ is a small positive constant.
When solving the thrust allocation problem, the thruster angles, maximum thrusts and rates of change must be constrained. The linear constraints are as follows:
\begin{equation}
\begin{aligned}
& u_{\min}\le u \le u_{\max} \\
& \Delta u_{\min} \le u-u_0 \le \Delta u_{\max} \\
& \alpha_{\min}\le \alpha \le \alpha_{\max} \\
& \Delta \alpha_{\min} \le \alpha-\alpha_0 \le \Delta \alpha_{\max} \\
\end{aligned}
\end{equation}
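The objective $J$ can be evaluated directly for candidate allocations. The sketch below is illustrative only: the weights $\Omega$, $Q$, $\rho$, $\varepsilon$ and the three-thruster geometry are assumptions, and no QP solver is invoked. It shows how the last term of $J$ penalizes configurations in which $B(\alpha)$ loses rank, e.g. all thrusters aligned so that no sway force can be produced.

```python
import math

def config_matrix(alphas, lx, ly):
    cols = [(math.cos(a), math.sin(a), -y*math.cos(a) + x*math.sin(a))
            for a, x, y in zip(alphas, lx, ly)]
    return [[c[i] for c in cols] for i in range(3)]

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def cost(alphas, alphas0, u, s, lx, ly,
         Omega=1.0, Q=1000.0, rho=1.0, eps=1e-3):
    power = sum(abs(ui)**1.5 for ui in u)                     # sum of W_i(u_i)
    turn  = sum(Omega*(a - a0)**2 for a, a0 in zip(alphas, alphas0))
    slack = sum(Q*si*si for si in s)
    B = config_matrix(alphas, lx, ly)
    BBt = [[sum(B[i][k]*B[j][k] for k in range(len(u)))
            for j in range(3)] for i in range(3)]
    return power + turn + slack + rho/(eps + det3(BBt))       # singularity term

lx, ly = [40.0, -30.0, -30.0], [0.0, -5.0, 5.0]   # bow + two stern thrusters
u, s = [100.0]*3, [0.0]*3
aligned = [0.0, 0.0, 0.0]          # B(alpha) rank-deficient: no sway authority
spread  = [math.pi/2, 0.0, 0.0]    # bow thruster turned to produce sway
J_aligned = cost(aligned, aligned, u, s, lx, ly)
J_spread  = cost(spread, spread, u, s, lx, ly)
# J_aligned carries the full rho/eps singularity penalty; J_spread does not
```

A QP solver minimizing $J$ over $(\alpha, u, s)$ subject to the linear constraints above would therefore steer the azimuth angles away from such degenerate configurations.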
\subsection*{ $\bullet$ Graphical User Interface (GUI)}
The (d) GUI was programmed with Qt, integrated with ROS. The interface allows operators to control the vessel in the simulation or the experiment in the same way, making online adjustment of the system more convenient, e.g., changing the target state, re-setting controller parameters or switching control algorithms.
\section{Simulation and experiment results}
\label{sec:Results}
\subsection{Position keeping test}
\label{sec:Results:pk}
Benefiting from the algorithm-consistent design in this paper, the control parameters can first be tuned in simulation and then serve as a reference for the experiment. To tune the parameters and verify the control capability of the control module, numerical position-keeping tests under different environmental loads were carried out. In the simulations the DP ST is subjected to wave ($Tp = 8s, Hs = 1.5m$), wind ($v_{wind}=5m/s$) and current ($v_{current}=0.5m/s$) loads, all applied in the same direction. At the beginning of the test, following the PID tuning method of Section \ref{par:PID} and using the convenient online parameter adjustment, PID parameters with good performance were obtained, shown in Table \ref{tab:PID_gains}. A series of tests was then conducted to obtain the DP capability plots of the ST for maximum offset, heading angle and mean thrust under environmental loads from 0$\sim$360$^\circ$, presented in Figure \ref{fig:max_offset}.
\begin{table}
\renewcommand{\arraystretch}{1.2}
\caption{The PID gains.}
\label{tab:PID_gains}
\centering
\setlength{\tabcolsep}{3mm}{
\begin{tabular}{c c c c c c c c c}
\toprule
\multicolumn{3}{c}{$K_P$} & \multicolumn{3}{c}{$K_I$} & \multicolumn{3}{c}{$K_D$} \\
X & Y & $\Psi$ & X & Y & $\Psi$ & X & Y & $\Psi$ \\
\midrule
400 & 400 & 400000 & 0.05 & 0.05 & 2 & 2000 & 2000 & 200000 \\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}[ht]
\centering
\subfloat[{Maximum offset [m]}]{\includegraphics[width=0.4\textwidth]{max_pos_offset}}
\subfloat[{Maximum heading offset [$^{\circ}$]}]{\includegraphics[width=0.4\textwidth]{max_heading_offset}}
\caption{Maximum offset of the ST}
\label{fig:max_offset}
\end{figure}
\FloatBarrier
It can be clearly seen that the lateral environmental load has the greatest influence on position keeping, while the oblique environmental load has the greatest influence on heading control.
This is also illustrated by the control forces given by the controller under the action of environmental forces in different directions shown in Figure \ref{fig:mean_ctrl_force}. The controller gives a larger lateral thrust to balance the lateral environmental forces and a larger moment to resist the moment brought by the oblique environmental forces.
\begin{figure}[ht]
\centering
\subfloat[{Mean Thrust X [kN]}]{\includegraphics[width=0.3\textwidth]{mean_x_force}}
\subfloat[{Mean Thrust Y [kN]}]{\includegraphics[width=0.3\textwidth]{mean_y_force}}
\subfloat[{Mean Moment N [kN$\cdot$m]}]{\includegraphics[width=0.3\textwidth]{mean_z_moment}}
\caption{Mean control force of the ST}
\label{fig:mean_ctrl_force}
\end{figure}
\FloatBarrier
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{thruster_error_45}
\caption{Thrust allocation error under Env.1 of 45$^{\circ}$}
\label{fig:thrust_error}
\end{figure}
\FloatBarrier
A specific time history of the thrust error is used to demonstrate the performance of the thrust allocation algorithm. The thrust allocation error under the environmental force from the 45$^{\circ}$ direction is shown in Figure \ref{fig:thrust_error}; the error of the thrust in the X and Y directions is within 5\%, and the error of the moment is within 10\%.
At the same time, as shown in Figures \ref{fig:thrust_under_45_thrust} and \ref{fig:thrust_under_45_direction}, the thrust of most of the ship's propellers is around 30\% of the maximum thrust, and the angles of the azimuth thrusters stay within a limited range, showing good energy-saving performance.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{thruster_thrust_45}
\caption{Thrust of Thrusters under Env.1 of 45$^{\circ}$}
\label{fig:thrust_under_45_thrust}
\end{figure*}
\FloatBarrier
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{thruster_angle_45}
\caption{Direction of Thrusters under Env.1 of 45$^{\circ}$}
\label{fig:thrust_under_45_direction}
\end{figure*}
\FloatBarrier
\subsection{4-corner box manoeuvre test}
On the basis of the control parameters obtained in simulation (Section \ref{sec:Results:pk}), a 4-corner box manoeuvre test was carried out to evaluate the performance of these parameters in the model experiment and the motion capability of the ship under the DP control module. The test diagram is shown in Figure \ref{fig:4_box_target}. The test is realized by successively setting the ship's target coordinates to the four vertices of a square. Through this test, the ship's motion performance in all three degrees of freedom can be examined.
\begin{figure}[ht]
\centering
\subfloat[{Target poses}]{\includegraphics[height=0.3\textwidth]{4_box_target}}
\subfloat[{Time-lapse photography of experiment }]{\includegraphics[height=0.3\textwidth]{4_box_exp_path}}
\caption{Schematic plot of the 4-corner box manoeuvre test}
\label{fig:4_box_target}
\end{figure}
\FloatBarrier
\begin{figure}[ht]
\centering
\subfloat[{Simulation result}]{\includegraphics[width=0.8\textwidth]{4-box-sim-still}}
\\
\subfloat[{Experiment result}]{\includegraphics[width=0.8\textwidth]{4-box-exp-still}}
\caption{Time variation of planar motion in 4-corner box manoeuvre test}
\label{fig:4_box_Time}
\end{figure}
\FloatBarrier
The movement track of the shuttle tanker model in the simulation and experiment is shown in Figures \ref{fig:4_box_Time} and \ref{fig:4_box_Trajectory}.
It can be seen that the position error of the ship in both simulation and experiment stays within 1m while the ship is in transit; only when the motion state is switched does a position error of about 5m to 10m occur. The heading angle error is within 5 degrees during the whole process.
\begin{figure}[ht]
\centering
\subfloat[{Simulation result}]{\includegraphics[height=0.4\textwidth]{4-box-sim-still-path}}
\subfloat[{Experiment result}]{\includegraphics[height=0.4\textwidth]{4-box-exp-still-path}}
\caption{Trajectory of ST in 4-corner box manoeuvre test}
\label{fig:4_box_Trajectory}
\end{figure}
\FloatBarrier
\section{Conclusion}
A hybrid simulation and experiment test platform for DP vessels was successfully built and tested. First, online parameter adjustment was used for parameter tuning in the simulation environment. On this basis, a series of DP capability tests was carried out in both the simulation and the experiment environment. The results show that the control system is equally effective in the physical model experiment and the numerical simulation environment.
Through the above research, some conclusions can be drawn as follows:
\begin{itemize}
\item The accurate simulation model takes into account the fluid-memory effect and frequency dependence, which provides strong support for simulating the motion response in DP tasks and makes parameter tuning more accurate.
\item The modular controller, with switchable algorithms and online parameter adjustment, facilitates parameter tuning and can be extended to comparative testing of various algorithms.
\item In the scale-model experiment, the similarity principle is used to realize interactive control at full scale. The control parameters from the full-scale simulation provide an effective reference for the model experiment, saving time and cost in parameter tuning.
\end{itemize}
Future work on the simulation and experiment platform should include further refinement of the simulation model, especially the propeller part, so that the frictional resistance of the full-scale propeller, the ship wake and the thrust deduction can be taken into account. On this basis, dynamic positioning under propeller failure can be considered, to further improve the DP performance of ships in complex operating environments.
\section*{Acknowledgement}
This work is supported by the Major Science and Technology Projects of Hainan Province (No. ZDKJ2019001), the High-Tech Ship Research Project Supported by Ministry of Industry and Information Technology (No. MC-202030-H04) and in part by the State Key Laboratory of Ocean Engineering (Shanghai Jiao Tong University) (Grant No. 1915), to which the authors are most grateful.
\bibliographystyle{unsrt}
\section{Introduction}
The main goal of this monograph is to further the point of view that many
beautiful geometrical and analytical results valid on differentiable manifolds
hold on general metric spaces. Besides the wider relevance gained by
generalization, the foundations of the subject are clarified when the limits
of applicability are explored. This effort has a long and often disjointed
history, only one sliver of which is relevant here. The approach in this
paper, which has been used by several others, is to use the well-known
characterization of a vector in a tangent space as an equivalence class of
curves which are tangent to each other. A \textbf{curve} $c$ on a metric space
$\left( M,d\right) $ is a continuous map $c:\left( \alpha,\beta\right)
\rightarrow M$ where $\left( \alpha,\beta\right) \subset\mathbb{R}$. Two
curves $c_{i}:\left( \alpha_{i},\beta_{i}\right) \rightarrow M$ for $i=1,2$
are \textbf{tangent at }$t\in\left( \alpha_{1},\beta_{1}\right) \cap\left(
\alpha_{2},\beta_{2}\right) $ if%
\[
\underset{h\rightarrow0}{\lim}\frac{d\left( c_{1}\left( t+h\right)
,c_{2}\left( t+h\right) \right) }{h}=0\text{.}%
\]
In this way we may generalize a vector field (a family of vectors) on a
manifold as an arc field (a family of curves) on a metric space--Definition
\ref{ArcField}, below.
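The tangency quotient above is easy to probe numerically. A quick sanity check in the metric space $(\mathbb{R}^2, d_{\mathrm{Euclidean}})$, with curves chosen purely for illustration:

```python
import math

# c1(t) = (t, t^2) and c2(t) = (sin t, t^2) are tangent at t = 0, since
# d(c1(h), c2(h))/|h| = |h - sin h|/|h| ~ h^2/6 -> 0.  Against c3(t) = (2t, 0)
# the same quotient tends to 1, so c1 and c3 are NOT tangent at t = 0.

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def quotient(c1, c2, h):
    return d(c1(h), c2(h)) / abs(h)

c1 = lambda t: (t, t*t)
c2 = lambda t: (math.sin(t), t*t)
c3 = lambda t: (2*t, 0.0)

small = quotient(c1, c2, 1e-4)   # essentially zero: tangent curves
big   = quotient(c1, c3, 1e-4)   # near 1: non-tangent curves
```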
It has been said the three pillars of differential geometry are: $\left(
I\right) $ the Inverse Function Theorem, $\left( II\right) $ the Existence
Theorem for ordinary differential equations (ODEs) and $\left( III\right) $
Frobenius' Theorem. All of these classical theorems may be written with vector
fields on manifolds and so may also be written with arc fields on metric
spaces. We expect any result on manifolds which has a sufficiently
geometrically realized proof can be generalized to metric spaces using curves
in place of vectors. A metric space version of $\left( I\right) $ is
contained in \cite{Aubin2}, e.g.; and versions of $\left( II\right) $ have
been proven several times independently in e.g., \cite{Panasyuk2},
\cite{Aubin2}, and \cite{CalcBleecker}--see Theorem \ref{CL} below. A version
of $\left( III\right) $ is the main result of this paper, Theorem
\ref{FrobeniusThm}: an involutive distribution on a complete metric space is
integrable. Since the result is for complete metric spaces, it generalizes the
classical result on Banach manifolds (proven, e.g., in \cite{abrahamMarsden}).
Theorem \ref{FrobeniusThm} further generalizes the classical result by
assuming only Lipschitz-type regularity instead of smoothness, which is of
interest in, for example, control theory.
As far as I have been able to determine, this particular approach to the proof
of Frobenius' classical theorem has not been vetted in the literature--though
it uses basic, well-known ideas. We outline the approach in this paragraph,
simplified to vector fields on a manifold. The terminology and assumptions
will be clarified in the main body of the paper, and Figures
\ref{FigFrobProof1} and \ref{FigFrobProof2} from Section \ref{SectionFrobThm}
may aid intuition. The crux of the local Frobenius result in two dimensions is
as follows: Given two transverse vector fields $f,g:M\rightarrow TM$ there
exists an integral surface (tangent to linear combinations of $f$ and $g$)
through any point $x_{0}\in M$ when the Lie bracket satisfies $\left[
f,g\right] =af+bg$ for some choice of functions $a,b:M\rightarrow\mathbb{R}$
(involutivity of $f$ and $g$). To prove this, define
\[
S:=\left\{ F_{t}G_{s}\left( x_{0}\right) \in M:\left| s\right| ,\left|
t\right| <\delta\right\}
\]
where $F$ and $G$ are the local flows of $f$ and $g$ guaranteed to exist by
$\left( II\right) $. Since $f$ and $g$ are transverse, we may choose
$\delta>0$ small enough for $S$ to be a well-defined surface. $S$ will be
shown to be the desired integral surface through $x_{0}$. Notice $S$ is
tangent to $f$ by construction, but it is not immediately clear $S$ is tangent
to $a^{\prime}f+b^{\prime}g$ for arbitrarily chosen $a^{\prime},b^{\prime}%
\in\mathbb{R}$. Notice, though, that by construction $S$ is tangent to $g$ at
any point $x=G_{s}\left( x_{0}\right) $, and also $S$ is tangent to
$a^{\prime\prime}f+b^{\prime\prime}g$ at $x$ for functions $a^{\prime\prime}$
and $b^{\prime\prime}$. Therefore establishing%
\begin{equation}
\left( F_{t}\right) ^{\ast}\left( a^{\prime}f+b^{\prime}g\right)
=a^{\prime\prime}f+b^{\prime\prime}g\text{\quad at\quad}x=G_{s}\left(
x_{0}\right) \label{FrobOutline5}%
\end{equation}
for some functions $a^{\prime\prime}$ and $b^{\prime\prime}$, proves $S$ is
tangent to $a^{\prime}f+b^{\prime}g$ at an arbitrary point $z=F_{t}%
G_{s}\left( x_{0}\right) \in S$, since the push-forward $\left(
F_{t}\right) _{\ast}$ and the pull-back $\left( F_{t}\right) ^{\ast}$ are
inverse to each other and, being local lipeomorphisms, preserve tangency.
Next, since the Lie bracket equals the Lie derivative,%
\[
\underset{h\rightarrow0}{\lim}\frac{F_{h}^{\ast}\left( g\right) -g}%
{h}=\left[ f,g\right] =af+bg
\]
by involutivity so%
\[
F_{h}^{\ast}\left( g\right) =g+h\left( af+bg\right) +o\left( h\right)
=\widetilde{a}f+\widetilde{b}g+o\left( h\right) \text{.}%
\]
Using the fact that $F_{h}^{\ast}\left( f\right) =f$ for any $h$, and the
linearity of pullback for fixed $t$, we have for functions $a_{i}$ and
$b_{i}:M\rightarrow\mathbb{R}$%
\[
F_{t/n}^{\ast}\left( a_{i}f+b_{i}g\right) =\left( a_{i+1}f+b_{i+1}g\right)
+o\left( 1/n\right)
\]
for some functions $a_{i+1}$ and $b_{i+1}$. Then since%
\[
F_{t}^{\ast}=\underset{n\text{ times}}{\underbrace{F_{t/n}^{\ast}F_{t/n}%
^{\ast}...F_{t/n}^{\ast}}}=\left( F_{t/n}^{\ast}\right) ^{\left( n\right)
}%
\]
(where the superscript in round brackets denotes composition $n$ times) we get
$\left( \text{\ref{FrobOutline5}}\right) $ as follows:%
\begin{align*}
F_{t}^{\ast}\left( a_{0}f+b_{0}g\right) & =\underset{n\rightarrow\infty
}{\lim}\left( F_{t/n}^{\ast}\right) ^{\left( n\right) }\left(
a_{0}f+b_{0}g\right) \\
& =\underset{n\rightarrow\infty}{\lim}a_{n}f+b_{n}g+no\left( 1/n\right)
=a_{\infty}f+b_{\infty}g+0
\end{align*}
completing the sketch for manifolds.
A pivotal fact on which the metric space version relies is that arc fields
which satisfy certain Lipschitz-type conditions generate unique local flows
(proven in \cite{CalcBleecker} and reviewed in Section \ref{SectionReview}).
Also a natural linear structure may be associated with a metric space (though
it has no \textit{a priori} linear structure) using compositions of flows
which faithfully generalizes the linearity of vector fields; this was
introduced in \cite{Colombo}. We present this in Section
\ref{SectionBracket&linearity} along with the generalization of the Lie
bracket for vector fields which uses the well-known asymptotic
characterization of the Lie bracket; i.e., successively follow the flows
forward and backward for time $\sqrt{t}$. This investigation further clarifies
for us the surprising fact Sussman and others have noted: smoothness is not
necessary to define a geometrically meaningful Lie bracket. In Section
\ref{PushFwdSect}, the pull-back along a flow is shown to behave naturally
with linearity and the bracket, which mimics properties of the Lie derivative
on manifolds. Many more such algebraic properties are valid than are contained
in these sections, but in this monograph we present only the minimum machinery
directly relevant to proving Frobenius' Theorem in Section
\ref{SectionFrobThm}.
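The asymptotic bracket construction (follow the flows forward and backward for time $\sqrt{t}$) can be checked numerically. A sketch on $M=\mathbb{R}^2$, with fields chosen purely for illustration: take $f(x)=Ax$ where $A$ is the 90-degree rotation generator, and $g(x)=b=(1,0)$, both of whose flows are known in closed form. For this pair the rescaled displacement $(F_{-\sqrt{t}}\,G_{-\sqrt{t}}\,F_{\sqrt{t}}\,G_{\sqrt{t}}(x)-x)/t$ converges to $Ab=(0,1)$, the bracket of $f$ and $g$ up to sign convention.

```python
import math

def F(s, p):   # exact flow of f(x) = A x, A = [[0, -1], [1, 0]]
    c, si = math.cos(s), math.sin(s)
    return (c*p[0] - si*p[1], si*p[0] + c*p[1])

def G(s, p):   # exact flow of g(x) = b = (1, 0)
    return (p[0] + s, p[1])

def bracket_estimate(x, t):
    s = math.sqrt(t)
    y = F(-s, G(-s, F(s, G(s, x))))   # forward, forward, backward, backward
    return ((y[0] - x[0]) / t, (y[1] - x[1]) / t)

est = bracket_estimate((0.3, -0.7), 1e-6)   # close to A b = (0, 1)
```

Note that the limit is independent of the base point $x$ here because both fields have constant bracket; no derivatives of $f$ or $g$ are evaluated anywhere, only their flows, which is the point of the flow-based definition.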
Section \ref{SectionDist&Foliations} applies this local Frobenius theorem to
study foliations yielding a global theorem on metric spaces. A metric space
generalization of the Nagumo-Brezis Invariance Theorem is proven, which is
used to show integrable distributions are involutive. We do not discuss the
facet of the classical Global Frobenius Theorem which guarantees local
coordinates on which there exist coordinate vector fields tangent or
perpendicular to an involutive distribution. In light of these results,
however, this now seems ripe for exploration.
Section \ref{SectionCommut} proves a well-known result from Hamiltonian
dynamics is also valid for metric spaces: two flows commute if and only if the
bracket is $0$. This is not exactly a corollary of the metric space Frobenius
Theorem, but the proof is a mere simplification of that from Theorem
\ref{FrobeniusThm}.
Finally in Section \ref{SectionExs} an almost trivial example applying these
ideas has a result which astounded me: Any Lebesgue square-integrable function
may be approximated using successive compositions of two elementary flows,
starting from the constant zero function. In other words, $L^{2}\left(
\mathbb{R}\right) $ is controllable by two flows. You may skip straight to
this Example \ref{ExL2decomp} after perusing the following review and the
definitions in Section \ref{SectionBracket&linearity}. \cite{Sontag} is an
accessible text introducing the terminology of control theory with remarks and
references on infinite dimensional controllability.
\section{\label{SectionReview}Review of terminology and basic results}
The proofs of all of the results from this section are contained in
\cite{CalcBleecker} for forward flows, also called semi-flows. Minimal
changes, stated here, give us the corresponding results for (bidirectional) flows.
A \textbf{metric space} $\left( M,d\right) $ is a set of points $M$ with a
function $d:M\times M\rightarrow\mathbb{R}$ called the \textbf{metric} which
has the following properties:%
\[%
\begin{array}
[c]{lll}%
\left( \text{i}\right) & d(x,y)\geq0 & \text{\textbf{positivity}}\\
\left( \text{ii}\right) & d(x,y)=0\text{\quad\textit{iff\quad}}x=y &
\text{\textbf{nondegeneracy}}\\
\left( \text{iii}\right) & d(x,y)=d(y,x) & \text{\textbf{symmetry}}\\
\left( \text{iv}\right) & d(x,y)\leq d(x,z)+d(z,y) & \text{\textbf{triangle
inequality}}%
\end{array}
\]
for all $x,y,z\in M$. The open ball of radius $r$ about $x\in M$ is denoted by
$B\left( x,r\right) :=\left\{ y:d\left( x,y\right) <r\right\} $. We
assume throughout this paper that $\left( M,d\right) $ is a locally complete
metric space, i.e., there exists a complete neighborhood of each point in $M$.
A map $f:\left( M,d_{M}\right) \rightarrow\left( N,d_{N}\right) $ between
metric spaces is \textbf{Lipschitz} continuous if there exists $K_{f}\geq0$
such that%
\[
d_{N}\left( f\left( x_{1}\right) ,f\left( x_{2}\right) \right) \leq
K_{f}d_{M}\left( x_{1},x_{2}\right)
\]
for all $x_{1},x_{2}\in M$. A \textbf{lipeomorphism} is an invertible
Lipschitz map whose inverse is also Lipschitz (i.e., stronger than a
homeomorphism, weaker than a diffeomorphism).
The following definition is made in analogy with vector fields on manifolds,
where vectors are represented as curves on the manifold.
\begin{definition}
\label{ArcField}An \textbf{arc field} on $M$ is a continuous map
$X:M\times\left[ -1,1\right] \rightarrow M$ such that for all $x\in M$,
$X\left( x,0\right) =x$,%
\[
\rho\left( x\right) :=\sup_{s\neq t}\frac{d\left( X\left( x,s\right)
,X\left( x,t\right) \right) }{\left| s-t\right| }<\infty,
\]
$($i.e., $X\left( x,\cdot\right) $ is Lipschitz$)$, and the function
$\rho\left( x\right) $ is locally bounded so%
\[
\rho\left( x,r\right) :=\sup_{y\in B\left( x,r\right) }\left\{
\rho\left( y\right) \right\} <\infty,
\]
for $r>0$ sufficiently small.
A \textbf{solution curve }to $X$ is a curve $\sigma$ tangent to $X$, i.e.,
$\sigma:\left( \alpha,\beta\right) \rightarrow M$ for some open interval
$\left( \alpha,\beta\right) \subset\mathbb{R}$ has the following property
for each $t\in\left( \alpha,\beta\right) $%
\begin{equation}
\lim_{h\rightarrow0}\frac{d\left( \sigma\left( t+h\right) ,X\left(
\sigma\left( t\right) ,h\right) \right) }{h}=0\text{,} \label{solncond1}%
\end{equation}
i.e., $d\left( \sigma\left( t+h\right) ,X\left( \sigma\left( t\right)
,h\right) \right) =o\left( h\right) $.
\end{definition}
$\rho$ is a bound on the speed of the arcs. $\alpha$ and $\beta$ are members
of the extended reals $\mathbb{R}\cup\left\{ \pm\infty\right\} $.
The two variables for arc fields and flows which are usually denoted by $x$
and $t$ are often thought of as representing space and time. In this paper
$x,y,$ and $z$ are used for space variables, while $r,s,t,$ and $h$ may fill
the time variable slot. An arc field $X$ will often have its variables migrate
liberally between parentheses and subscripts%
\[
X\left( x,t\right) =X_{x}\left( t\right) =X_{t}\left( x\right)
\]
depending on which variable we wish to emphasize in a calculation. We also use
this convention for flows $F$ defined below.
The following conditions guarantee existence and uniqueness of solutions.
\bigskip\noindent\textbf{Condition E1:} For each $x_{0}\in M$, there are
positive constants $r,\delta$ and $\Lambda_{X}$ such that for all $x,y\in
B\left( x_{0},r\right) $ and $t\in\left( -\delta,\delta\right) $%
\[
d\left( X_{t}\left( x\right) ,X_{t}\left( y\right) \right) \leq d\left(
x,y\right) \left( 1+\left| t\right| \Lambda_{X}\right) \text{.}%
\]
\noindent\textbf{Condition E2}:%
\[
d\left( X_{s+t}\left( x\right) ,X_{t}\left( X_{s}\left( x\right)
\right) \right) =O\left( st\right)
\]
as $st\rightarrow0$ locally uniformly in $x$; in other words, for each
$x_{0}\in M$, there are positive constants $r,\delta$ and $\Omega_{X}$ such
that for all $x\in B\left( x_{0},r\right) $ and $s,t\in\left(
-\delta,\delta\right) $%
\[
d\left( X_{s+t}\left( x\right) ,X_{t}\left( X_{s}\left( x\right)
\right) \right) \leq\left| st\right| \Omega_{X}\text{.}%
\]
\bigskip%
\begin{figure}
[ptbh]
\begin{center}
\includegraphics[
height=1.8161in,
width=4.1381in
]%
{CondE1andE2.eps}%
\caption{Conditions E1 and E2}%
\end{center}
\end{figure}
\begin{theorem}
\label{CL}Let $X$ be an arc field satisfying E1 and E2 on a locally complete
metric space $M$. Then given any point $x\in M,$ there exists a unique
solution $\sigma_{x}:\left( \alpha_{x},\beta_{x}\right) \rightarrow M$ with
$\sigma_{x}\left( 0\right) =x$.
\end{theorem}
Several remarks are in order. Here, $x$ is called the \textbf{initial
condition} for the solution $\sigma_{x}$ in the above theorem. Uniqueness of
solutions means that for any $x\in M$, the curve $\sigma_{x}:\left(
\alpha_{x},\beta_{x}\right) \rightarrow M$ has maximal domain $\left(
\alpha_{x},\beta_{x}\right) $ in the sense that for any other solution
$\widehat{\sigma}_{x}:\left( \widehat{\alpha}_{x},\widehat{\beta}_{x}\right)
\rightarrow M$ also having initial condition $x$, we have $\left(
\widehat{\alpha}_{x},\widehat{\beta}_{x}\right) \subset\left( \alpha
_{x},\beta_{x}\right) $ and $\widehat{\sigma}_{x}=\sigma_{x}|_{\left(
\widehat{\alpha}_{x},\widehat{\beta}_{x}\right) }$ $($i.e., $\sigma_{x}$ is
the \textbf{maximal solution curve}$)$.
The proof of Theorem \ref{CL} is constructive and shows the \textbf{Euler
curves} $X_{t/n}^{\left( n\right) }\left( x\right) $ converge to the
solution. Here we are using $f^{\left( n\right) }$ to denote the composition
of a map $f:M\rightarrow M$ with itself $n$ times so%
\[
X_{\frac{t}{n}}^{\left( n\right) }\left( x\right) =\text{ }\underset
{n\text{ times}}{\underbrace{X_{\frac{t}{n}}\circ X_{\frac{t}{n}}\circ...\circ
X_{\frac{t}{n}}}}\left( x\right)
\]
and we have
\[
\underset{n\rightarrow\infty}{\lim}X_{\frac{t}{n}}^{\left( n\right) }\left(
x\right) =\sigma_{x}\left( t\right) \text{.}%
\]
for suitably small $\left| t\right| $. Other, slightly different
formulations of Euler curves also lead to the same result, $\sigma$, under
Conditions E1 and E2, e.g.,%
\[
\xi_{n}\left( t\right) :=X_{t-i\cdot2^{-n}}X_{2^{-n}}^{\left( i\right)
}\left( x\right) \text{\quad for \quad}i\cdot2^{-n}\leq t\leq\left(
i+1\right) 2^{-n}%
\]
also has%
\[
\underset{n\rightarrow\infty}{\lim}\xi_{n}\left( t\right) =\sigma_{x}\left(
t\right)
\]
for suitably small $\left| t\right| $.
Theorem \ref{CL} and those that follow are true under more general conditions
outlined in \cite{CalcBleecker} and \cite{Panasyuk2}. But throughout this
paper and in every application I've seen, E1 and E2 are satisfied and are
\underline{E}asier to use.
\begin{example}
\label{Banach Example}A \textbf{Banach space} $\left( M,\left\|
\cdot\right\| \right) $ is a normed vector space, complete in its norm
$($e.g., $\mathbb{R}^{n}$ with Euclidean norm$)$. A Banach space is an example
of a metric space with $d\left( u,v\right) :=\left\| u-v\right\| $. A
\textbf{vector field} on a Banach space $M$ is a map $f:M\rightarrow M$. A
\textbf{solution} to a vector field $f$ with \textbf{initial condition} $x$ is
a curve $\sigma_{x}:\left( \alpha,\beta\right) \rightarrow M$ defined on an
open interval $\left( \alpha,\beta\right) \subset\mathbb{R}$ containing $0 $
such that $\sigma_{x}\left( 0\right) =x$ and $\sigma_{x}^{\prime}\left(
t\right) =f\left( \sigma_{x}\left( t\right) \right) $ for all
$t\in\left( \alpha,\beta\right) $. The classical Picard-Lindel\"{o}f Theorem
guarantees unique solutions for any locally Lipschitz $f$. With a few tricks,
most differential equations can be represented as vector fields on a suitably
abstract space.
Every Lipschitz vector field $f:M\rightarrow M$ gives rise to an arc field
$X\left( x,t\right) :=x+tf\left( x\right) $ and it is easy to check $X$
satisfies E1 and E2 $($cf. \cite{CalcBleecker}$)$. Further the solutions to
the arc field are exactly the solutions to the vector field. Therefore Theorem
\ref{CL} is a generalization of the classical Picard-Lindel\"{o}f Theorem.
\end{example}
\begin{remark}
\label{RemUniformSolutions}Of prime import for this monograph, the proof of
Theorem \ref{CL} actually shows solutions are \textbf{locally uniformly
tangent} to $X$:
\[
d\left( X_{x}\left( t\right) ,\sigma_{x}\left( t\right) \right)
=o\left( t\right)
\]
locally uniformly for $x\in M$, i.e., for each $x_{0}\in M$ there exists a
constant $r>0$ such that for any $\varepsilon>0$ there exists a $\delta>0$
such that for all $x\in B\left( x_{0},r\right) $%
\[
\frac{d\left( X_{x}\left( t\right) ,\sigma_{x}\left( t\right) \right)
}{\left| t\right| }<\varepsilon
\]
whenever $0<\left| t\right| <\delta$.
More than this, the proof also shows solutions are tangent uniformly for all
arc fields $X$ which satisfy E1 and E2 for specified $\Lambda$ and
$\Omega$.
\end{remark}
We denote local uniform tangency of two arc fields $X$ and $Y$ by $X\sim Y$.
It is easy to check $\sim$ is an equivalence relation. E.g., transitivity
follows from the triangle inequality:%
\[
\frac{d\left( X_{t}\left( x\right) ,Z_{t}\left( x\right) \right)
}{\left| t\right| }\leq\frac{d\left( X_{t}\left( x\right) ,Y_{t}\left(
x\right) \right) }{\left| t\right| }+\frac{d\left( Y_{t}\left( x\right)
,Z_{t}\left( x\right) \right) }{\left| t\right| }\text{.}%
\]
We use the symbol $\sim$ in many contexts in this paper (particularly Section
\ref{SectionDist&Foliations}), but there is always a local-uniform-tangency
property associated with it.
\begin{corollary}
Assume the conditions of Theorem \ref{CL} and let $s\in\left( \alpha
_{x},\beta_{x}\right) $ and $y=\sigma_{x}(s)$. \ Then $\alpha_{y}=\alpha
_{x}-s$ and $\beta_{y}=\beta_{x}-s$ so%
\[
\left( \alpha_{y},\beta_{y}\right) =\left( \alpha_{\sigma_{x}(s)}%
,\beta_{\sigma_{x}(s)}\right) =\{t:\alpha_{x}-s<t<\beta_{x}-s\}\text{.}%
\]
Thus $t\in\left( \alpha_{y},\beta_{y}\right) $ if and only if $t+s\in\left(
\alpha_{x},\beta_{x}\right) ,$ and then we have
\[
\sigma_{\sigma_{x}(s)}(t)=\sigma_{x}(s+t)\text{.}%
\]
Defining $W\subset M\times\mathbb{R}$ by%
\[
W:=\{(x,t)\in M\times\mathbb{R}:t\in\left( \alpha_{x},\beta_{x}\right) \}
\]
and $F:W\rightarrow M$ by $F(x,t):=\sigma_{x}(t)$ we have:\bigskip
\noindent$\left( i\right) $\quad$M\times\{0\}\subset W$ and $F(x,0)=x$ for
all $x\in M$.
\noindent$\left( ii\right) $\quad For each (fixed) $x\in M$, $F(x,\cdot
):\left( \alpha_{x},\beta_{x}\right) \rightarrow M$ is the maximal solution
$\sigma_{x}$ to $X$.
\noindent$\left( iii\right) $\quad$F(F(x,s),t)=F(x,t+s)$.\bigskip
\end{corollary}
$F$ is called the \textbf{local flow} generated by the arc field $X$. Compare
Condition E2 with property $\left( iii\right) $ above to see why an arc
field might be thought of as a ``pre-flow''.
\begin{theorem}
\label{ExpGrowth}Let $\sigma_{x}:\left( \alpha_{x},\beta_{x}\right)
\rightarrow M$ and $\sigma_{y}:\left( \alpha_{y},\beta_{y}\right)
\rightarrow M$ be two solutions to an arc field $X$ which satisfies E1. Assume
$\left( \alpha_{x},\beta_{x}\right) \cap\left( \alpha_{y},\beta_{y}\right)
\supset I$ for some interval $I$, and assume $\Lambda_{X}$ holds on a set
containing%
\[
\left\{ \sigma_{x}\left( t\right) :t\in I\right\} \cup\left\{ \sigma
_{y}\left( t\right) :t\in I\right\} \text{.}%
\]
Then
\[
d\left( \sigma_{x}\left( t\right) ,\sigma_{y}\left( t\right) \right)
\leq e^{\Lambda_{X}\left| t\right| }d\left( x,y\right) \text{\ for all
}t\in I\text{,}%
\]
i.e.,%
\begin{equation}
d\left( F_{t}\left( x\right) ,F_{t}\left( y\right) \right) \leq
e^{\Lambda_{X}\left| t\right| }d\left( x,y\right) \text{.}
\label{CondExpGrowth}%
\end{equation}
\end{theorem}
\begin{theorem}
For $F$ and $W$ as above, $W$ is open in $M\times\mathbb{R}$ and $F$ is
continuous on $W$.
\end{theorem}
For fixed $t$ it is clear $F_{t}$ is a local lipeomorphism (a locally
bi-Lipschitz homeomorphism, defined precisely in Section \ref{PushFwdSect}),
when defined, by Theorem \ref{ExpGrowth}. Compare Condition E1 with line
$\left( \text{\ref{CondExpGrowth}}\right) $ to see why E1 may be thought of
as a local linearity property for $X$, needed for the continuity of $F$.
\begin{definition}
\label{LinSpeed}An arc field $X$ on a metric space $M$ is said to have
\textbf{linear speed growth} if there is a point $x\in M$ and positive
constants $c_{1}$ and $c_{2}$ such that for all $r>0$
\begin{equation}
\rho\left( x,r\right) \leq c_{1}r+c_{2}, \label{LinGro}%
\end{equation}
where $\rho\left( x,r\right) $ is the local bound on speed given in
Definition \ref{ArcField}.
\end{definition}
\begin{theorem}
\label{LongTime}Let $X$ be an arc field on a complete metric space $M$, which
satisfies E1 and E2 and has linear speed growth. Then $F$ is a $($full$)$
\textbf{flow} with domain $W=M\times\mathbb{R}$.
\end{theorem}
\begin{example}
\label{Flow<-->ArcField}Every local flow on a metric space is generated by an
arc field. Any local flow $F$ gives rise to an arc field $\overline{F}%
:M\times\left[ -1,1\right] \rightarrow M$ defined by%
\[
\overline{F}\left( x,t\right) :=\left\{
\begin{array}
[c]{ll}%
F\left( x,t\right) & \text{if }t\in\left( \frac{\alpha_{x}}{2}%
,\frac{\beta_{x}}{2}\right) \\
F\left( x,\frac{\alpha_{x}}{2}\right) & \text{if }t\in\left[
-1,\frac{\alpha_{x}}{2}\right] \\
F\left( x,\frac{\beta_{x}}{2}\right) & \text{if }t\in\left[ \frac{\beta
_{x}}{2},1\right] \text{.}%
\end{array}
\right.
\]
$($The issue here is that $F$, being a \textit{local} flow, may have
$\alpha_{x}>-1$ or $\beta_{x}<1$.$)$ Clearly the local flow generated by
$\overline{F} $ is $F$. Since all our concerns with arc fields are local, we
will never focus on $t\notin\left( \frac{\alpha_{x}}{2},\frac{\beta_{x}}%
{2}\right) $ and henceforth we will not notationally distinguish between
$\overline{F}$ and $F$ as arc fields.
\end{example}
With this identification of flows being arc fields (but not necessarily
\textit{vice}-\textit{versa}) we may simplify Remark \ref{RemUniformSolutions}
to: $X\sim F$ if $X$ satisfies E1 and E2.
\section{The bracket and linearity\label{SectionBracket&linearity}}
To simplify notation we drop parentheses for expressions such as $Y_{t}\circ
X_{s}\left( x\right) =Y_{t}\left( X_{s}\left( x\right) \right) $ and
write $Y_{t}X_{s}\left( x\right) $ since the composition of arbitrary maps
is associative.
\begin{definition}
The \textbf{bracket} of two arc fields $X$ and $Y$ is the map $\left[
X,Y\right] :M\times\left[ -1,1\right] \rightarrow M$ with%
\begin{equation}
\left[ X,Y\right] \left( x,t\right) :=\left\{
\begin{array}
[c]{c}%
Y_{-\sqrt{t}}X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}\left( x\right) \\
X_{-\sqrt{\left| t\right| }}Y_{-\sqrt{\left| t\right| }}X_{\sqrt{\left|
t\right| }}Y_{\sqrt{\left| t\right| }}\left( x\right)
\end{array}
\right.
\begin{array}
[c]{c}%
\text{for }t\geq0\\
\text{for }t<0\text{.}%
\end{array}
\label{bracketDef}%
\end{equation}
\end{definition}
There are many different equivalent characterizations of the Lie bracket on a
manifold. $\left( \ref{bracketDef}\right) $ uses the obvious choice of the
\textbf{asymptotic} characterization to generalize the concept to metric
spaces. $\left[ X,Y\right] \left( x,t\right) $ traces out a small
``parallelogram'' in $M$ starting at $x$, which hopefully almost returns to
$x$. The bracket measures the failure of $X$ and $Y$ to commute as will be
made clear in Theorems \ref{ThmCommute} and \ref{FrobeniusThm}.
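For orientation, a small numerical sketch (my own example; the matrices are hypothetical choices): for linear vector fields $f\left( x\right) =Ax$ and $g\left( x\right) =Bx$ on $\mathbb{R}^{2}$ with $A^{2}=B^{2}=0$, the arc fields $X_{t}\left( x\right) =x+tAx$ are already the flows $e^{tA}x$, and the parallelogram of $\left( \ref{bracketDef}\right) $ returns $x+t\left( BA-AB\right) x+O\left( t^{3/2}\right) $, recovering the classical Lie bracket $\left[ f,g\right] \left( x\right) =\left( BA-AB\right) x$.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # f(x) = A x;  A @ A = 0
B = np.array([[0.0, 0.0], [1.0, 0.0]])  # g(x) = B x;  B @ B = 0
I = np.eye(2)

def bracket(x, t):
    # [X, Y](x, t) = Y_{-s} X_{-s} Y_s X_s (x) with s = sqrt(t), t >= 0.
    # Since A and B are nilpotent, X_s(x) = (I + s A) x is the exact flow e^{sA} x.
    s = np.sqrt(t)
    return (I - s * B) @ (I - s * A) @ (I + s * B) @ (I + s * A) @ x

x = np.array([1.0, 2.0])
t = 1e-6
estimate = (bracket(x, t) - x) / t       # first-order behavior of the bracket
lie = (B @ A - A @ B) @ x                # classical Lie bracket [f, g](x)
```

Here `estimate` agrees with `lie` up to an error of order $\sqrt{t}$.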
\begin{definition}
We say $X$ \textbf{\&} $Y$ \textbf{close} if%
\[
d\left( Y_{s}X_{t}\left( x\right) ,X_{t}Y_{s}\left( x\right) \right)
=O\left( \left| st\right| \right)
\]
locally uniformly in $x$, i.e., if for each $x_{0}\in M$ there exist positive
constants $C_{XY},\delta,$ and $r$ such that for all $x\in B\left(
x_{0},r\right) $%
\[
d\left( Y_{s}X_{t}\left( x\right) ,X_{t}Y_{s}\left( x\right) \right)
\leq C_{XY}\left| st\right|
\]
for all $\left| s\right| ,\left| t\right| <\delta$.
\end{definition}
\begin{lemma}
\label{LemmaClose}If $X$ \& $Y$ close and satisfy E1 and E2 then%
\[
d\left( Y_{-t}X_{-t}Y_{t}X_{t}\left( x\right) ,x\right) =O\left(
t^{2}\right)
\]
locally uniformly for $x\in M$.
\end{lemma}
\begin{proof}%
\begin{align*}
& d\left( Y_{-s}X_{-t}Y_{s}X_{t}\left( x\right) ,x\right) \\
& \leq d\left( Y_{-s}X_{-t}Y_{s}X_{t}\left( x\right) ,Y_{-s}X_{-t}%
X_{t}Y_{s}\left( x\right) \right) +d\left( Y_{-s}X_{-t}X_{t}Y_{s}\left(
x\right) ,Y_{-s}Y_{s}\left( x\right) \right) +d\left( Y_{-s}Y_{s}\left(
x\right) ,x\right) \\
& \leq d\left( Y_{s}X_{t}\left( x\right) ,X_{t}Y_{s}\left( x\right)
\right) \left( 1+\left| s\right| \Lambda_{Y}\right) \left( 1+\left|
t\right| \Lambda_{X}\right) +t^{2}\Omega_{X}\left( 1+\left| s\right|
\Lambda_{Y}\right) +s^{2}\Omega_{Y}\\
& \leq C_{XY}\left| st\right| \left( 1+\left| s\right| \Lambda
_{Y}\right) \left( 1+\left| t\right| \Lambda_{X}\right) +t^{2}\Omega
_{X}\left( 1+\left| s\right| \Lambda_{Y}\right) +s^{2}\Omega_{Y}\leq
C\left( \left| st\right| +t^{2}+s^{2}\right)
\end{align*}
where%
\[
C:=\max\left\{ C_{XY}\left( 1+\Lambda_{Y}\right) \left( 1+\Lambda
_{X}\right) ,\Omega_{X}\left( 1+\Lambda_{Y}\right) ,\Omega_{Y}\right\} .
\]
Letting $s=t$ gives the result.
\end{proof}
\begin{proposition}
If $X$ \& $Y$ close and satisfy E1 and E2 then $\left[ X,Y\right] $ is an
arc field.
\end{proposition}
\begin{proof}
We establish the local bound on speed. The purpose of Lemma \ref{LemmaClose}
is to give $d\left( \left[ X,Y\right] \left( x,t\right) ,x\right)
=O\left( t\right) $ for $t\geq0$. Similarly, for $t<0$%
\begin{align*}
& d\left( X_{t}Y_{t}X_{-t}Y_{-t}\left( x\right) ,x\right) \\
& \leq d\left( X_{t}Y_{t}X_{-t}Y_{-t}\left( x\right) ,X_{t}X_{-t}\left(
x\right) \right) +d\left( X_{t}X_{-t}\left( x\right) ,x\right) \\
& \leq d\left( Y_{t}X_{-t}Y_{-t}\left( x\right) ,X_{-t}\left( x\right)
\right) \left( 1+\left| t\right| \Lambda_{X}\right) +t^{2}\Omega_{X}%
\end{align*}
which, using this trick again, gives%
\begin{align*}
& \leq d\left( X_{-t}Y_{-t}\left( x\right) ,Y_{-t}X_{-t}\left( x\right)
\right) \left( 1+\left| t\right| \Lambda_{X}\right) \left( 1+\left|
t\right| \Lambda_{Y}\right) \\
& +t^{2}\Omega_{Y}\left( 1+\left| t\right| \Lambda_{X}\right)
+t^{2}\Omega_{X}=O\left( t^{2}\right) \text{ since }X\text{ \& }Y\text{
close.}%
\end{align*}
Therefore
\[
d\left( \left[ X,Y\right] _{t}\left( x\right) ,x\right) =O\left(
t\right)
\]
for both positive and negative $t$, which gives the required local bound on
the speed of $\left[ X,Y\right] $ near $t=0$; away from $t=0$ the bound
follows since $\sqrt{\left| t\right| }$ is Lipschitz there.
\end{proof}
\begin{example}
As in Example \ref{Banach Example} let $f,g:B\rightarrow B$ be Lipschitz
vector fields on a Banach space $B$, and let $X$ and $Y$ be their
corresponding arc fields%
\begin{align*}
X\left( x,t\right) & :=x+tf\left( x\right) \\
Y\left( x,t\right) & :=x+tg\left( x\right)
\end{align*}
It is easy to check $X$ \& $Y$ close:%
\begin{align*}
& d\left( Y_{s}X_{t}\left( x\right) ,X_{t}Y_{s}\left( x\right) \right)
\\
& =\left\| x+tf\left( x\right) +sg\left( x+tf\left( x\right) \right)
-\left[ x+sg\left( x\right) +tf\left( x+sg\left( x\right) \right)
\right] \right\| \\
& \leq\left| t\right| \left\| f\left( x\right) -f\left( x+sg\left(
x\right) \right) \right\| +\left| s\right| \left\| g\left( x+tf\left(
x\right) \right) -g\left( x\right) \right\| \\
& \leq\left| t\right| K_{f}\left\| x-\left( x+sg\left( x\right)
\right) \right\| +\left| s\right| K_{g}\left\| x+tf\left( x\right)
-x\right\| \\
& \leq\left| st\right| \left( K_{f}\left\| g\left( x\right) \right\|
+K_{g}\left\| f\left( x\right) \right\| \right)
\end{align*}
so we may take $C_{XY}$ to be any local bound on $K_{f}\left\| g\left(
x\right) \right\| +K_{g}\left\| f\left( x\right) \right\| $, which is
finite on bounded sets since Lipschitz maps are bounded on bounded sets.
Therefore, even though the vector fields may not be smooth, so their Lie
bracket is undefined, their metric space bracket is meaningful and will give
us geometric information as we shall see in Theorem \ref{FrobeniusThm}.
\end{example}
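The bound above can be checked numerically; here is a sketch with example fields of my own choosing ($f=\sin$ and $g=\cos$ on $\mathbb{R}$, each with Lipschitz constant $1$), verifying the exact inequality $d\left( Y_{s}X_{t}\left( x\right) ,X_{t}Y_{s}\left( x\right) \right) \leq\left| st\right| \left( K_{f}\left| g\left( x\right) \right| +K_{g}\left| f\left( x\right) \right| \right) $ derived in the example.

```python
import math

def X(x, t):
    # arc field of f(x) = sin(x), Lipschitz with K_f = 1
    return x + t * math.sin(x)

def Y(x, t):
    # arc field of g(x) = cos(x), Lipschitz with K_g = 1
    return x + t * math.cos(x)

def commutation_defect(x, s, t):
    # d(Y_s X_t(x), X_t Y_s(x)) in the metric d(u, v) = |u - v|
    return abs(Y(X(x, t), s) - X(Y(x, s), t))

def closeness_bound(x, s, t):
    # |st| (K_f |g(x)| + K_g |f(x)|) with K_f = K_g = 1
    return abs(s * t) * (abs(math.cos(x)) + abs(math.sin(x)))

# the defect is bounded by C_XY |st| at every sampled point
samples = [(0.3 * k, 0.05 * k, -0.07 * k) for k in range(1, 20)]
checks = [commutation_defect(x, s, t) <= closeness_bound(x, s, t) + 1e-12
          for (x, s, t) in samples]
```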
\begin{definition}
If $X$ and $Y$ are arc fields on $M$ then define $X+Y$ to be the arc field on
$M$ given by%
\[
\left( X+Y\right) _{t}\left( x\right) :=Y_{t}X_{t}\left( x\right)
\text{.}%
\]
For any function $a:M\rightarrow\mathbb{R}$ define the arc field $aX$ by%
\begin{equation}
aX\left( x,t\right) :=X\left( x,a\left( x\right) t\right) \text{.}
\label{aX}%
\end{equation}
If $a$ is Lipschitz, then $aX$ is an arc field.
\end{definition}
To be fastidiously precise we need to define $aX_{x}\left( t\right) $ for
all $t\in\left[ -1,1\right] $ so technically we must specify%
\begin{equation}
aX\left( x,t\right) :=\left\{
\begin{array}
[c]{ll}%
X\left( x,a\left( x\right) t\right) & \text{if }a\left( x\right)
\neq0\text{ and }\left| t\right| \leq1/\left| a\left( x\right) \right| \\
X\left( x,\operatorname{sgn}a\left( x\right) \right) & \text{if }a\left(
x\right) \neq0\text{ and }t>1/\left| a\left( x\right) \right| \\
X\left( x,-\operatorname{sgn}a\left( x\right) \right) & \text{if }a\left(
x\right) \neq0\text{ and }t<-1/\left| a\left( x\right) \right| \\
x & \text{if }a\left( x\right) =0\text{, for all }-1\leq t\leq1
\end{array}
\right. \label{finickyArcDef}%
\end{equation}
using the trick from Example \ref{Flow<-->ArcField}. Again, we will not burden
ourselves with this detail; in all cases our concern with the properties of an
arc field $X_{x}\left( t\right) $ is only near $t=0$.
It is a simple definition check to prove $aX$ is an arc field when $a$ is
Lipschitz, since $aX_{x}\left( t\right) =X_{x}\left( a\left( x\right)
t\right) $ is Lipschitz in $t$ if $X_{x}\left( t\right) $ is: assuming
$a\left( x\right) \neq0$,
\begin{align*}
\rho_{aX}\left( x\right) & :=\sup_{s\neq t}\frac{d\left( X_{x}\left(
a\left( x\right) s\right) ,X_{x}\left( a\left( x\right) t\right)
\right) }{\left| s-t\right| }=\sup_{s\neq t}\frac{d\left( X_{x}\left(
s\right) ,X_{x}\left( t\right) \right) }{\left| \frac{s}{a\left(
x\right) }-\frac{t}{a\left( x\right) }\right| }\\
& =\left| a\left( x\right) \right| \sup_{s\neq t}\frac{d\left(
X_{x}\left( s\right) ,X_{x}\left( t\right) \right) }{\left| s-t\right|
}=\left| a\left( x\right) \right| \rho_{X}\left( x\right)
\end{align*}
so%
\begin{align*}
\rho_{aX}\left( x,r\right) & :=\sup_{y\in B\left( x,r\right) }\left\{
\rho_{aX}\left( y\right) \right\} =\sup_{y\in B\left( x,r\right)
}\left\{ \left| a\left( y\right) \right| \rho_{X}\left( y\right)
\right\} \\
& \leq\left( \left| a\left( x\right) \right| +rK_{a}\right) \rho
_{X}\left( x,r\right) <\infty\text{.}%
\end{align*}
Now we have the beginnings of a linear structure associated with $M$. For
instance, expressions such as $X-Y$ make sense:%
\[
X-Y:=X+(-1)Y
\]
where $-1$ is a constant function on $M$. Further, $0$ is an arc field defined
as the constant map%
\[
0\left( x,t\right) :=x
\]
and satisfies $0+X=X=X+0$ for any $X$. Notice from the definition, we have
$\left[ X,Y\right] =-\left[ Y,X\right] $. Another trivial definition check
shows this multiplication is associative and commutative:%
\[
\left( a\cdot b\right) X=a\left( bX\right) \qquad\text{and}\qquad\left(
a\cdot b\right) X=\left( b\cdot a\right) X
\]
where $\cdot$ denotes multiplication of functions.
\begin{proposition}
\label{PropX+Y E1&2}Assume $X$ \& $Y$ close and satisfy E1 and E2. Then their
sum $X+Y$ satisfies E1 and E2.
\end{proposition}
\begin{proof}
Checking Condition E1:%
\begin{align*}
& d\left( \left( X+Y\right) _{t}\left( x\right) ,\left( X+Y\right)
_{t}\left( y\right) \right) \\
& =d\left( Y_{t}X_{t}\left( x\right) ,Y_{t}X_{t}\left( y\right) \right)
\leq d\left( X_{t}\left( x\right) ,X_{t}\left( y\right) \right) \left(
1+\left| t\right| \Lambda_{Y}\right) \\
& \leq d\left( x,y\right) \left( 1+\left| t\right| \Lambda_{X}\right)
\left( 1+\left| t\right| \Lambda_{Y}\right) \leq d\left( x,y\right)
\left( 1+\left| t\right| \left( \Lambda_{X}+\Lambda_{Y}\right)
+t^{2}\Lambda_{X}\Lambda_{Y}\right) \\
& \leq d\left( x,y\right) \left( 1+\left| t\right| \Lambda_{X+Y}\right)
\end{align*}
where $\Lambda_{X+Y}:=\Lambda_{X}+\Lambda_{Y}+\Lambda_{X}\Lambda_{Y}<\infty$.
Condition E2:%
\begin{align}
& d\left( \left( X+Y\right) _{s+t}\left( x\right) ,\left( X+Y\right)
_{t}\left( X+Y\right) _{s}\left( x\right) \right) \nonumber\\
& =d\left( Y_{s+t}X_{s+t}\left( x\right) ,Y_{t}X_{t}Y_{s}X_{s}\left(
x\right) \right) \nonumber\\
& \leq d\left( Y_{s+t}X_{s+t}\left( x\right) ,Y_{t}Y_{s}X_{s+t}\left(
x\right) \right) +d\left( Y_{t}Y_{s}X_{s+t}\left( x\right) ,Y_{t}%
X_{t}Y_{s}X_{s}\left( x\right) \right) \nonumber\\
& \leq\left| st\right| \Omega_{Y}+d\left( Y_{s}X_{s+t}\left( x\right)
,X_{t}Y_{s}X_{s}\left( x\right) \right) \left( 1+\left| t\right|
\Lambda_{Y}\right) \nonumber\\
& \leq\left| st\right| \Omega_{Y}+\left[ d\left( Y_{s}X_{s+t}\left(
x\right) ,Y_{s}X_{t}X_{s}\left( x\right) \right) +d\left( Y_{s}%
X_{t}\left( y\right) ,X_{t}Y_{s}\left( y\right) \right) \right] \left(
1+\left| t\right| \Lambda_{Y}\right) \label{SumE12proof5}%
\end{align}
where $y:=X_{s}\left( x\right) $. Notice
\begin{align*}
d\left( Y_{s}X_{s+t}\left( x\right) ,Y_{s}X_{t}X_{s}\left( x\right)
\right) & \leq d\left( X_{s+t}\left( x\right) ,X_{t}X_{s}\left(
x\right) \right) \left( 1+\left| s\right| \Lambda_{Y}\right) \\
& \leq\left| st\right| \Omega_{X}\left( 1+\left| s\right| \Lambda
_{Y}\right) =O\left( \left| st\right| \right)
\end{align*}
and the last summand of $\left( \text{\ref{SumE12proof5}}\right) $ is also
$O\left( \left| st\right| \right) $ since $X$ \& $Y$ close, so E2 is satisfied.
\end{proof}
So in this case, the flow $H$ generated by $X+Y$ is computable with Euler
curves as%
\begin{equation}
H\left( x,t\right) =\underset{n\rightarrow\infty}{\lim}\left( X+Y\right)
_{t/n}^{\left( n\right) }\left( x\right) =\underset{n\rightarrow\infty
}{\lim}\left( Y_{t/n}X_{t/n}\right) ^{\left( n\right) }\left( x\right)
\text{.} \label{X+Y Flow}%
\end{equation}
Therefore, this definition of $X+Y$ using compositions is a direct
generalization of the concept of adding vector fields on a differentiable
manifold (see \cite[Section 4.1A]{abrahamMarsden}). One of the inspirations
for this paper, \cite{Colombo}, introduced the sum of semigroups on a metric
space in the same spirit as defined here, under comparable conditions.
When $X$ \& $Y$ close and satisfy E1 and E2, we also have $\left( X+Y\right)
\sim\left( Y+X\right) $ since%
\[
\left( Y_{t/n}X_{t/n}\right) ^{\left( n\right) }=Y_{t/n}\left(
X_{t/n}Y_{t/n}\right) ^{\left( n-1\right) }X_{t/n}%
\]
whence both arc fields $X+Y$ and $Y+X$ are (locally uniformly) tangent to the
flow $H$ using $\left( \ref{X+Y Flow}\right) $.
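Equation $\left( \ref{X+Y Flow}\right) $ is the metric-space analogue of the Lie-Trotter product formula. A numerical sketch (my own linear example, chosen so the limit flow is explicit): for $f\left( x\right) =Ax$ and $g\left( x\right) =Bx$ with $A+B$ a rotation generator, the Euler curves $\left( Y_{t/n}X_{t/n}\right) ^{\left( n\right) }\left( x\right) =\left( \left( I+\frac{t}{n}B\right) \left( I+\frac{t}{n}A\right) \right) ^{n}x$ converge to $e^{t\left( A+B\right) }x$.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # f(x) = A x
B = np.array([[0.0, 0.0], [-1.0, 0.0]])  # g(x) = B x;  A + B generates rotations
I = np.eye(2)

def sum_euler(x, t, n):
    # (Y_{t/n} X_{t/n})^{(n)}(x) for X_t(x) = x + tAx, Y_t(x) = x + tBx
    step = (I + (t / n) * B) @ (I + (t / n) * A)
    y = x
    for _ in range(n):
        y = step @ y
    return y

t = 1.0
x = np.array([1.0, 0.0])
# flow of f + g: e^{t(A+B)} is rotation by angle t
exact = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]]) @ x
approx = sum_euler(x, t, 2000)
```

The approximation error decays like $1/n$, as for ordinary Euler curves.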
\begin{proposition}
\label{Prop_aXE1&2}If $X$ satisfies E1 and E2 and $a:M\rightarrow\mathbb{R}$
is a Lipschitz function, then $aX$ satisfies E1 and E2.
\end{proposition}
\begin{proof}
E1:
\begin{align*}
& d\left( aX_{x}\left( t\right) ,aX_{y}\left( t\right) \right) \\
& =d\left( X_{x}\left( a\left( x\right) t\right) ,X_{y}\left( a\left(
y\right) t\right) \right) \\
& \leq d\left( X_{x}\left( a\left( x\right) t\right) ,X_{x}\left(
a\left( y\right) t\right) \right) +d\left( X_{x}\left( a\left(
y\right) t\right) ,X_{y}\left( a\left( y\right) t\right) \right) \\
& \leq\left| a\left( x\right) t-a\left( y\right) t\right| \rho\left(
x\right) +d\left( x,y\right) \left( 1+\left| a\left( y\right) \right|
\left| t\right| \Lambda_{X}\right) \\
& \leq d\left( x,y\right) \left( K_{a}\left| t\right| \rho\left(
x\right) +1+\left| a\left( y\right) \right| \left| t\right| \Lambda
_{X}\right) \leq d\left( x,y\right) \left( 1+\left| t\right| \Lambda
_{aX}\right)
\end{align*}
where $\Lambda_{aX}:=K_{a}\rho+A\Lambda_{X}<\infty$, with $\rho$ a local
bound on the speed of $X$ and $A$ a local bound on $\left| a\right| $.
E2: For all $x_{0}\in M$ and $\delta>0$ we know $a$ is bounded by some $A>0$
on $B\left( x_{0},\delta\right) $ since $a$ is Lipschitz.%
\begin{align*}
& d\left( aX_{x}\left( s+t\right) ,aX_{aX_{x}\left( s\right) }\left(
t\right) \right) \\
& =d\left( X_{x}\left( a\left( x\right) \left( s+t\right) \right)
,X_{X_{x}\left( a\left( x\right) s\right) }\left( a\left( X_{x}\left(
a\left( x\right) s\right) \right) t\right) \right) \\
& \leq d\left( X_{x}\left( a\left( x\right) \left( s+t\right) \right)
,X_{X_{x}\left( a\left( x\right) s\right) }\left( a\left( x\right)
t\right) \right) \\
& +d\left( X_{X_{x}\left( a\left( x\right) s\right) }\left( a\left(
x\right) t\right) ,X_{X_{x}\left( a\left( x\right) s\right) }\left(
a\left( X_{x}\left( a\left( x\right) s\right) \right) t\right) \right)
\\
& \leq\left| a\left( x\right) s\right| \cdot\left| a\left( x\right)
t\right| \Omega_{X}+\rho\cdot\left| a\left( x\right) t-a\left(
X_{x}\left( a\left( x\right) s\right) \right) t\right| \\
& \leq\left| st\right| \left[ a\left( x\right) \right] ^{2}\Omega
_{X}+\left| t\right| \rho K_{a}d\left( x,X_{x}\left( a\left( x\right)
s\right) \right) \\
& \leq\left| st\right| \left[ a\left( x\right) \right] ^{2}\Omega
_{X}+\left| st\right| \rho^{2}K_{a}\left| a\left( x\right) \right|
\leq\left| st\right| \Omega_{aX}%
\end{align*}
where $\Omega_{aX}:=A^{2}\Omega_{X}+\rho^{2}K_{a}A$.
\end{proof}
Combining these results gives
\begin{theorem}
\label{ThmaX+bYflow}If $a$ and $b$ are locally Lipschitz functions and $X$ \&
$Y$ close and satisfy E1 and E2, then $aX+bY$ is an arc field which satisfies
E1 and E2 and so has a unique local flow.
If in addition $a$ and $b$ are globally Lipschitz and $X$ and $Y$ have linear
speed growth, then $aX+bY$ generates a unique flow.
\end{theorem}
\begin{proof}
We haven't proven $aX$ and $bY$ close, but this is a straightforward
definition check, as is the fact that $aX+bY$ has linear speed growth.
\end{proof}
Local flows have the following useful linearity property:
\begin{proposition}
\label{FlowLinearity}If $F$ is a local flow then interpreting $F$ as an arc
field we can perform the following operations:
1. if $a$ and $b$ are constant then\quad$aF+bF=\left( a+b\right) F$
2. if $a$ and $b$ are real functions then $\left( aF+bF\right) _{t}\left(
x\right) =\left( a+b\circ\left( aF\right) _{t}\right) F_{t}\left(
x\right) $.
\end{proposition}
\begin{proof}
This is another obvious definition check:%
\[%
\begin{array}
[c]{l}%
2.\quad\left( aF+bF\right) _{t}\left( x\right) =\left( bF\right)
_{t}\left( aF\right) _{t}\left( x\right) =F_{b\left( \left( aF\right)
_{t}\left( x\right) \right) t}F_{a\left( x\right) t}\left( x\right) \\
\quad\quad\quad=F_{\left( a\left( x\right) +\left( b\circ\left(
aF\right) _{t}\right) \left( x\right) \right) t}\left( x\right)
=\left( a+b\circ\left( aF\right) _{t}\right) F_{t}\left( x\right)
\end{array}
\]
and 1. follows from 2.
\end{proof}
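A one-line numerical check of property 1 (a sketch using the explicit flow $F_{t}\left( x\right) =xe^{t}$ of $f\left( x\right) =x$ on $\mathbb{R}$, my own choice): with constants $a$ and $b$, $\left( aF+bF\right) _{t}\left( x\right) =F_{bt}F_{at}\left( x\right) =xe^{\left( a+b\right) t}=\left( \left( a+b\right) F\right) _{t}\left( x\right) $.

```python
import math

def F(x, t):
    # the (full) flow of the vector field f(x) = x on R
    return x * math.exp(t)

def scale(a):
    # the arc field aF for a constant a: (aF)_t(x) = F(x, a*t)
    return lambda x, t: F(x, a * t)

def plus(X, Y):
    # (X + Y)_t(x) := Y_t(X_t(x))
    return lambda x, t: Y(X(x, t), t)

a, b, x, t = 2.0, -0.5, 1.3, 0.7
lhs = plus(scale(a), scale(b))(x, t)   # (aF + bF)_t(x)
rhs = scale(a + b)(x, t)               # ((a + b)F)_t(x)
```

Both sides equal $xe^{at}e^{bt}=xe^{\left( a+b\right) t}$ up to rounding.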
\section{Contravariance\label{PushFwdSect}}
If $\phi:M_{1}\rightarrow M_{2}$ is a lipeomorphism (i.e., an invertible
Lipschitz map with Lipschitz inverse), then the pull-back of an arc field $X$
on $M_{2}$ is the arc field $\phi^{\ast}X$ on $M_{1}$ given by%
\[
\phi^{\ast}X\left( x,t\right) :=\phi^{-1}\left( X\left( \phi\left(
x\right) ,t\right) \right)
\]
or in other notation,%
\[
\left( \phi^{\ast}X\right) _{t}\left( x\right) =\phi^{-1}X_{t}\phi\left(
x\right)
\]
which is a direct analog of the pull-back of a vector field on a manifold
using curves to represent vectors. The definition for flows is identical,
replacing $X$ with $F$. The pull-back to $M_{1}$ of a solution $\sigma$ to an
arc field on $M_{2}$ is analogous:%
\[
\left( \phi^{\ast}\sigma\right) _{x}\left( t\right) :=\phi^{-1}\left(
\sigma_{\phi\left( x\right) }\left( t\right) \right) \text{.}%
\]
The pull-back of a function $a:M_{2}\rightarrow\mathbb{R}$ is the function
$\phi^{\ast}a:M_{1}\rightarrow\mathbb{R}$ defined as $\phi^{\ast}a\left(
x\right) :=a\left( \phi\left( x\right) \right) $.
\begin{proposition}
\label{PushFwd Sol'ns}If $\phi:M_{1}\rightarrow M_{2}$ is a lipeomorphism and
the arc field $X$ on $M_{2}$ has unique solutions then $\phi^{\ast}X$ has
unique solutions. The solutions to $\phi^{\ast}X$ are the pull-backs of
solutions to $X$.
\end{proposition}
\begin{proof}
This is obvious: if $F$ is a local flow for $X$ then%
\begin{align*}
& d\left( \phi^{\ast}X\left( \phi^{\ast}F\left( x,t\right) ,s\right)
,\phi^{\ast}F\left( x,t+s\right) \right) \\
& =d\left( \phi^{-1}X\left[ \phi\phi^{-1}F\left( \phi\left( x\right)
,t\right) ,s\right] ,\phi^{-1}F\left( \phi\left( x\right) ,t+s\right)
\right) \\
& =d\left( \phi^{-1}X\left[ F\left( \phi\left( x\right) ,t\right)
,s\right] ,\phi^{-1}F\left( \phi\left( x\right) ,t+s\right) \right) \\
& \leq K_{\phi}d\left( X\left[ F\left( \phi\left( x\right) ,t\right)
,s\right] ,F\left( \phi\left( x\right) ,t+s\right) \right) =K_{\phi
}o\left( s\right) =o\left( s\right)
\end{align*}
so $\phi^{\ast}F$ is a flow (solution) for $\phi^{\ast}X$.
Similarly if $\sigma$ is a solution to $\phi^{\ast}X$ then $\left( \phi
^{-1}\right) ^{\ast}\sigma$ is a solution to $X$ so by uniqueness there can
be only one such $\sigma$.
\end{proof}
The push-forward of any function, curve or flow is defined similarly, e.g.,%
\[
\phi_{\ast}F\left( x,t\right) :=\phi\left( F\left( \phi^{-1}\left(
x\right) ,t\right) \right) \text{.}%
\]
It is easy to check push-forward is covariant (i.e., $\left( \phi\circ
\psi\right) _{\ast}=\phi_{\ast}\circ\psi_{\ast}$) and pull-back is
contravariant (i.e., $\left( \phi\circ\psi\right) ^{\ast}=\psi^{\ast}%
\circ\phi^{\ast}$). It is also clear that push-forward and pull-back are
inverse operations and Proposition \ref{PushFwd Sol'ns} holds \textit{mutatis
mutandis} for push-forward in place of pull-back.
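Conjugation by a lipeomorphism preserves the flow property exactly, which is the computational content of Proposition \ref{PushFwd Sol'ns}. A sketch with a hypothetical lipeomorphism $\phi\left( x\right) =2x+1$ of $\mathbb{R}$ and the flow $F_{t}\left( x\right) =xe^{t}$:

```python
import math

def F(x, t):
    # flow of the vector field f(x) = x on R
    return x * math.exp(t)

phi = lambda x: 2.0 * x + 1.0       # a lipeomorphism of R (illustrative choice)
phi_inv = lambda y: (y - 1.0) / 2.0

def pullback_F(x, t):
    # (phi^* F)_t(x) = phi^{-1} F_t phi(x)
    return phi_inv(F(phi(x), t))

x, s, t = 0.4, 0.3, -0.8
lhs = pullback_F(pullback_F(x, s), t)   # (phi^* F)_t (phi^* F)_s (x)
rhs = pullback_F(x, s + t)              # (phi^* F)_{s+t} (x)
```

The two agree up to rounding, since $\phi^{-1}F_{t}\phi\,\phi^{-1}F_{s}\phi=\phi^{-1}F_{t+s}\phi$.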
\begin{proposition}
[Linearity of Pull-back]\label{PullbackLinear}If $X$ and $Y$ are arc fields on
$M_{2}$ and $\phi:M_{1}\rightarrow M_{2}$ is a lipeomorphism, then
$\left( i\right) $ $\phi^{\ast}\left( X+Y\right) =\phi^{\ast}\left(
X\right) +\phi^{\ast}\left( Y\right) $
$\left( ii\right) $ $\phi^{\ast}\left( aX\right) =\left( a\circ
\phi\right) \phi^{\ast}\left( X\right) =\phi^{\ast}\left( a\right)
\phi^{\ast}\left( X\right) $.
\end{proposition}
\begin{proof}
Trivial definition check.
\end{proof}
Since the pull-back and linearity are established for arc fields, we can now
explore another characterization of the bracket. In the context of $M$ being a
smooth manifold, let $F$ and $G$ be local flows generated by smooth vector
fields $f$ and $g$. There it is well known the following ``dynamic''
characterization of the Lie bracket is equivalent to the asymptotic
characterization%
\begin{equation}
\left[ f,g\right] =\left. \frac{d}{dt}\left( F_{t}\right) ^{\ast
}g\right| _{t=0}\text{.} \label{LieDerivative}%
\end{equation}
Using%
\[
\left. \frac{d}{dt}\left( F_{t}\right) ^{\ast}g\right| _{t=0}%
=\underset{t\rightarrow0}{\lim}\frac{\left( F_{t}\right) ^{\ast}g-g}%
{t}=\left[ f,g\right]
\]
for inspiration, we return to the context of metric spaces where, with $F$ and
$G$ viewed as arc fields, their bracket $\left[ F,G\right] $ is defined, and
then%
\begin{align}
F_{t}^{\ast}G_{t}\left( x\right) & =\left( t\left[ F,G\right]
+G\right) _{t}\left( x\right) \text{\qquad\qquad for }t\geq0\text{ and
}\label{LieD=BracketArcFieldversion}\\
F_{s}^{\ast}G_{s}\left( x\right) & =\left( -s\left[ -F,-G\right]
-G\right) _{-s}\left( x\right) \text{\qquad for }s<0
\label{LieD=BracketArcFieldversion2}%
\end{align}
which hold because%
\begin{gather*}
\left( t\left[ F,G\right] +G\right) _{t}\left( x\right) =G_{t}\left[
F,G\right] _{t^{2}}\left( x\right) \\
=G_{t}G_{-t}F_{-t}G_{t}F_{t}\left( x\right) =F_{-t}G_{t}F_{t}\left(
x\right) =F_{t}^{\ast}G_{t}\left( x\right)
\end{gather*}
and%
\begin{align*}
& \left( -s\left[ -F,-G\right] -G\right) _{-s}\left( x\right) \\
& =G_{s}\left[ -F,-G\right] _{s^{2}}\left( x\right) =G_{s}\left(
-G\right) _{-\left| s\right| }\left( -F\right) _{-\left| s\right|
}\left( -G\right) _{\left| s\right| }\left( -F\right) _{\left|
s\right| }\left( x\right) \\
& =G_{s}G_{\left| s\right| }F_{\left| s\right| }G_{-\left| s\right|
}F_{-\left| s\right| }\left( x\right) =F_{-s}G_{s}F_{s}\left( x\right)
=F_{s}^{\ast}G_{s}\left( x\right) \text{.}%
\end{align*}
These facts will be used in the heart of the proof of our main result, Theorem
\ref{FrobeniusThm}, as will the following
\begin{proposition}
\label{PullbackX=X}$\left( F_{s}\right) ^{\ast}X\sim X$.
\end{proposition}
\begin{proof}
Using the properties of flows $F_{t}=F_{-s+t+s}=F_{-s}F_{t}F_{s}$ and
$F_{s}^{-1}=F_{-s}$ we get%
\begin{align*}
& d\left( \left( \left( F_{s}\right) ^{\ast}X\right) _{t}\left(
x\right) ,X_{t}\left( x\right) \right) \\
& \leq d\left( F_{-s}X_{t}F_{s}\left( x\right) ,F_{-s}F_{t}F_{s}\left(
x\right) \right) +d\left( F_{t}\left( x\right) ,X_{t}\left( x\right)
\right) \\
& \leq e^{\left| s\right| \Lambda_{X}}d\left( X_{t}\left( y\right)
,F_{t}\left( y\right) \right) +o\left( t\right) =o\left( t\right)
\end{align*}
where $y:=F_{s}\left( x\right) $ and the exponential comes from Theorem
\ref{ExpGrowth}.
\end{proof}
\section{Local Frobenius Theorem\label{SectionFrobThm}}
\begin{definition}
Two arc fields $X$ and $Y$ are $($locally uniformly$)$ \textbf{transverse} if
for each $x_{0}\in M$ there exists a $\delta>0$ such that%
\[
d\left( X_{s}\left( x\right) ,Y_{t}\left( x\right) \right) \geq
\delta\left( \left| s\right| +\left| t\right| \right)
\]
for all $\left| s\right| ,\left| t\right| <\delta$ and all $x\in B\left( x_{0},\delta\right) $.
\end{definition}
\begin{example}
On the plane $\mathbb{R}^{2}$ with Euclidean norm $\left\| \cdot\right\| $
any two linearly independent vectors $u,v\in\mathbb{R}^{2}$ give us the
transverse arc fields%
\[
X_{t}\left( x\right) :=x+tu\qquad\text{and}\qquad Y_{t}\left( x\right)
:=x+tv\text{.}%
\]
To check this, it is easiest to define a new norm on $\mathbb{R}^{2}$ by%
\[
\left\| x\right\| _{uv}:=\left| x_{1}\right| +\left| x_{2}\right|
\]
where $x=x_{1}u+x_{2}v$ and $x_{1},x_{2}\in\mathbb{R}$. Since all norms on
$\mathbb{R}^{2}$ are metrically equivalent there must exist a constant $C>0$
such that $\left\| x\right\| _{uv}\leq C\left\| x\right\| $ for all
$x\in\mathbb{R}^{2}$. Then taking $\delta:=\frac{1}{C}$%
\[
d\left( X_{s}\left( x\right) ,Y_{t}\left( x\right) \right) =\left\|
su-tv\right\| \geq\delta\left\| su-tv\right\| _{uv}=\delta\left( \left|
s\right| +\left| t\right| \right) \text{.}%
\]
A localization argument shows any pair of continuous vector fields $f$ and $g$
on a differentiable manifold give transverse arc fields if $f$ and $g$ are
non-collinear at each point.
\end{example}
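The constant $C$ above can be bounded explicitly and the transversality inequality checked numerically. A sketch with vectors of my own choosing, $u=\left( 2,1\right) $ and $v=\left( 1,1\right) $: if $M=\left[ u\;v\right] $, then $\left\| x\right\| _{uv}=\left\| M^{-1}x\right\| _{1}\leq C\left\| x\right\| $ with $C$ the sum of the Euclidean norms of the rows of $M^{-1}$ (by Cauchy-Schwarz), and $\left\| su-tv\right\| \geq\frac{1}{C}\left( \left| s\right| +\left| t\right| \right) $ holds exactly.

```python
import math

u, v = (2.0, 1.0), (1.0, 1.0)            # linearly independent vectors in R^2
# M = [u v] has inverse M^{-1} = [[1, -1], [-1, 2]] (recovers x1, x2 from x)
Minv = ((1.0, -1.0), (-1.0, 2.0))

# |x1| + |x2| <= C ||x||_2 with C = sum of Euclidean row norms of M^{-1}
C = math.hypot(*Minv[0]) + math.hypot(*Minv[1])
delta = 1.0 / C

def transversality_gap(s, t):
    # ||su - tv||_2  -  delta * (|s| + |t|); nonnegative by the estimate above
    dist = math.hypot(s * u[0] - t * v[0], s * u[1] - t * v[1])
    return dist - delta * (abs(s) + abs(t))

gaps = [transversality_gap(0.1 * i - 1.0, 0.07 * j - 1.0)
        for i in range(21) for j in range(29)]
```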
A (2-dimensional) \textbf{surface} is a 2-dimensional topological manifold,
i.e., locally homeomorphic to $\mathbb{R}^{2}$.
For any subset $N\subset M$ and element $x\in M$ the \textbf{distance} from
$x$ to $N$ is defined as
\[
d\left( x,N\right) :=\inf\left\{ d\left( x,y\right) :y\in N\right\}
\text{.}%
\]
This function $d$ is not a metric, obviously, but it does satisfy the triangle
inequality:%
\[
d\left( x,N\right) \leq d\left( x,y\right) +d\left( y,N\right)
\]
for all $x,y\in M$.
\begin{definition}
A surface $S\subset M$ is an \textbf{integral surface} of two arc fields $X$
and $Y$ if, for any Lipschitz functions $a,b:M\rightarrow\mathbb{R}$, $S$
is \textbf{locally uniformly tangent} to $aX+bY$ for $x\in S$, i.e.,%
\[
d\left( \left( aX+bY\right) _{t}\left( x\right) ,S\right) =o\left(
t\right)
\]
locally uniformly in $x$. Locally uniform tangency is denoted $S\sim aX+bY$.
\end{definition}
\begin{theorem}
\label{FrobeniusThm}Assume $X$ \& $Y$ close, are transverse, and satisfy E1
and E2 on a locally complete metric space $M$. Let $F$ and $G$ be the local
flows of $X$ and $Y$. If $\left[ F,G\right] \sim aX+bY$ $($locally uniform
tangency$)$ for some Lipschitz functions $a,b:M\rightarrow\mathbb{R}$, then
for each $x_{0}\in M$ there exists an integral surface $S$ through $x_{0}$.
\end{theorem}
\begin{proof}
It may be beneficial to review the outline of this proof from the third
paragraph of the introduction. The metric space constructs of the previous
sections will now be inserted into the manifold outline. A rigorous
verification of the analytic estimates requires some tedious, but
straightforward, calculations detailed here.
Define
\[
S:=\left\{ F_{t}G_{s}\left( x_{0}\right) :\left| s\right| ,\left|
t\right| <\delta\right\}
\]
where $\delta>0$ is chosen small enough for $S$ to be a well-defined surface
(Figure \ref{FigFrobProof1}).
\begin{figure}
[ptbh]
\begin{center}
\includegraphics[
height=2.3722in,
width=3.6019in
]%
{FrobProof1.eps}%
\caption{integral surface $S$}%
\label{FigFrobProof1}%
\end{center}
\end{figure}
I.e., $F_{t_{1}}G_{s_{1}}\left( x_{0}\right) =F_{t_{2}}G_{s_{2}}\left(
x_{0}\right) $ implies $t_{1}=t_{2}$ and $s_{1}=s_{2}$ so%
\[
\phi:\left( -\delta,\delta\right) \times\left( -\delta,\delta\right)
\subset\mathbb{R}^{2}\rightarrow S\subset M
\]
defined by $\phi\left( s,t\right) :=F_{t}G_{s}\left( x_{0}\right) $ is a
homeomorphism. Finding such a $\delta$ is possible since $X$ and $Y$ are
transverse. To see this, assume the contrary. Then there are different choices
of $s_{i}$ and $t_{i}$ which give $F_{t_{1}}G_{s_{1}}\left( x_{0}\right)
=F_{t_{2}}G_{s_{2}}\left( x_{0}\right) $, which implies $G_{s_{1}}\left(
x_{0}\right) =F_{t_{3}}G_{s_{2}}\left( x_{0}\right) $ where $t_{3}%
:=t_{2}-t_{1}$, and letting $y:=G_{s_{2}}\left( x_{0}\right) $ we must also
then have
\begin{equation}
F_{t}\left( y\right) =G_{s}\left( y\right) \text{.}\label{FrobProof3}%
\end{equation}
If this contrary assumption were true, then for all $\varepsilon>0$ there
would exist $s$ and $t$, not both zero, with $\left| s\right| ,\left|
t\right| <\varepsilon$ such that $\left( \ref{FrobProof3}\right) $ holds.
Since $X$ and $Y$ are transverse, this cannot be so.
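To spell out the contradiction $($a sketch, using the local uniform tangency
of the flows to their arc fields$)$: if $F_{t}\left( y\right) =G_{s}\left(
y\right) $ then%
\[
d\left( X_{t}\left( y\right) ,Y_{s}\left( y\right) \right) \leq d\left(
X_{t}\left( y\right) ,F_{t}\left( y\right) \right) +d\left( G_{s}\left(
y\right) ,Y_{s}\left( y\right) \right) =o\left( \left| s\right| +\left|
t\right| \right) \text{,}%
\]
which for small enough $s$ and $t$, not both zero, is smaller than
$\delta\left( \left| s\right| +\left| t\right| \right) $, violating the
transversality of $X$ and $Y$ at $y$.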
We will show $S$ is the desired integral surface through $x_{0}$. Assume
$\delta$ is also chosen small enough so throughout $S$ the functions $\left|
a\right| $ and $\left| b\right| $ are bounded, while the constants
$\Lambda$, $\Omega$, and $\rho$ for $X$ and $Y$ hold uniformly, and that the
closure of $B\left( x,2\delta\left( \rho+1\right) \right) $ is complete.
This is possible because $F$ and $G$ have locally bounded speeds, since $X$
and $Y$ do.
Notice $S\sim X$ by construction, but it is not immediately clear that $S\sim
a^{\prime}X+b^{\prime}Y$ for arbitrarily chosen Lipschitz functions
$a^{\prime},b^{\prime}:M\rightarrow\mathbb{R}$. Notice we can use
\[
a^{\prime}X+b^{\prime}Y\sim a^{\prime}F+b^{\prime}G\sim b^{\prime}G+a^{\prime
}F\sim b^{\prime}Y+a^{\prime}X
\]
and so we will show $S\sim a^{\prime}F+b^{\prime}G$. We need to show this is
true for an arbitrary point $z\in S$, so assume $z:=F_{t}G_{s}\left(
x_{0}\right) $ for some $\left| s\right| ,\left| t\right| <\delta$. Notice
by the construction
of $S$ we have $S\sim a^{\prime\prime}F+b^{\prime\prime}G$ at $x:=G_{s}\left(
x_{0}\right) $ for an arbitrary choice of Lipschitz functions $a^{\prime
\prime}$ and $b^{\prime\prime}$ since $a^{\prime\prime}F+b^{\prime\prime}G\sim
b^{\prime\prime}G+a^{\prime\prime}F$ and%
\begin{align*}
& \left( b^{\prime\prime}G+a^{\prime\prime}F\right) _{h}\left( x\right) \\
& =F_{a^{\prime\prime}\left( G_{b^{\prime\prime}\left( x\right) h}\left(
x\right) \right) h}G_{b^{\prime\prime}\left( x\right) h}\left( x\right)
\\
& =F_{a^{\prime\prime}\left( G_{b^{\prime\prime}\left( x\right) h}\left(
x\right) \right) h}G_{b^{\prime\prime}\left( x\right) h}G_{s}\left(
x_{0}\right) \in S
\end{align*}
when $h$ is small.
($x_{0},x,z,s$ and $t$ are now fixed for the remainder of the proof; however,
we only explicitly check the case $t>0$, indicating the changes where needed
to check the $t<0$ case.)
If we prove%
\begin{equation}
\left( F_{t}\right) ^{\ast}\left( a^{\prime}F+b^{\prime}G\right) \sim
S\text{\qquad at\qquad}x=G_{s}\left( x_{0}\right) \label{FrobProof4}%
\end{equation}
this will prove $S\sim a^{\prime}F+b^{\prime}G$ at $z$, since the push-forward
$\left( F_{t}\right) _{\ast}$ and the pull-back $\left( F_{t}\right)
^{\ast}$ are mutually inverse local lipeomorphisms and so preserve tangency. See
Figure \ref{FigFrobProof2}.%
\begin{figure}
[ptbh]
\begin{center}
\includegraphics[
height=2.1689in,
width=3.9392in
]%
{FrobProofFig2.eps}%
\caption{pull-back to $G_{s}\left( x_{0}\right) $}%
\label{FigFrobProof2}%
\end{center}
\end{figure}
Restating $\left( \text{\ref{LieD=BracketArcFieldversion}}\right) $:%
\[
F_{t}^{\ast}G_{t}\left( x\right) =\left( t\left[ F,G\right] +G\right)
_{t}\left( x\right)
\]
so%
\begin{equation}
F_{t/n}^{\ast}G_{t/n}\left( x\right) =\left( \tfrac{t}{n}\left[
F,G\right] +G\right) _{t/n}\left( x\right) \label{FrobProof20}%
\end{equation}
for our previously fixed small $t\geq0$ and arbitrary positive integer
$n\in\mathbb{N}$. (For $t<0$ use $\left(
\text{\ref{LieD=BracketArcFieldversion2}}\right) $ instead.) For any arc
fields $Z$ and $\overline{Z}$ clearly%
\begin{gather}
d\left( Z_{t}\left( x\right) ,\overline{Z}_{t}\left( x\right) \right)
=o\left( t\right) \qquad\text{implies}\nonumber\\
d\left( \left( tZ\right) _{t}\left( x\right) ,\left( t\overline
{Z}\right) _{t}\left( x\right) \right) =d\left( \left( Z\right)
_{t^{2}}\left( x\right) ,\left( \overline{Z}\right) _{t^{2}}\left(
x\right) \right) =o\left( t^{2}\right) \label{FrobProof30}%
\end{gather}
and so%
\begin{gather}
\left[ F,G\right] \sim aF+bG\qquad\text{implies}\nonumber\\
d\left( \left( \tfrac{t}{n}\left[ F,G\right] \right) _{t/n}\left(
x\right) ,\left( \left( \tfrac{t}{n}\left( aF+bG\right) \right) \right)
_{t/n}\left( x\right) \right) =o\left( \tfrac{1}{n^{2}}\right)
\label{FrobProof40}%
\end{gather}
since $t$ is fixed.
We use these facts to establish $\left( \text{\ref{FrobProof4}}\right) $,
first checking%
\[
d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right) \right)
_{t/n}\left( x\right) ,S\right) =o\left( \tfrac{1}{n}\right)
\]
as $n\rightarrow\infty$. At the end of the proof we will replace $t/n$ by
arbitrary $r\rightarrow0$. Using the linearity of pull-back (Proposition
\ref{PullbackLinear}) we get
\begin{align}
& d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right)
\right) _{t/n}\left( x\right) ,S\right) \nonumber\\
& =d\left( \left( \left( a^{\prime}\circ F_{t}\right) F_{t}^{\ast}\left(
F\right) +\left( b^{\prime}\circ F_{t}\right) F_{t/n}^{\ast\left(
n\right) }\left( G\right) \right) _{t/n}\left( x\right) ,S\right)
\nonumber\\
& =d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n\right) }\left(
G\right) \right) _{t/n}\left( x\right) ,S\right) \nonumber
\end{align}
where $a_{0}:=a^{\prime}\circ F_{t}$ and $b_{0}:=b^{\prime}\circ F_{t}$,
using $F_{t}^{\ast}\left( F\right) =F$ since the flow $F$ commutes with
itself. Using $\left( \text{\ref{FrobProof20}}\right) $, this last estimate
is%
\begin{align}
& =d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left(
\tfrac{t}{n}\left[ F,G\right] +G\right) \right) _{t/n}\left( x\right)
,S\right) \nonumber\\
& \leq d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left(
\tfrac{t}{n}\left[ F,G\right] +G\right) \right) _{t/n}\left( x\right)
,\left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left( \tfrac{t}%
{n}\left( aF+bG\right) +G\right) \right) _{t/n}\left( x\right) \right)
\nonumber\\
& +d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left(
\tfrac{t}{n}\left( aF+bG\right) +G\right) \right) _{t/n}\left( x\right)
,S\right) \text{.} \label{FrobProof45}%
\end{align}
We estimate the first term as%
\begin{align*}
& d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left(
\tfrac{t}{n}\left[ F,G\right] +G\right) \right) _{t/n}\left( x\right)
,\left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left( \tfrac{t}%
{n}\left( aF+bG\right) +G\right) \right) _{t/n}\left( x\right) \right)
\\
& =d\left( \left( b_{0}F_{\left( n-1\right) t/n}^{\ast}\left( \tfrac
{t}{n}\left[ F,G\right] +G\right) \right) _{t/n}\left( y\right) ,\left(
b_{0}F_{\left( n-1\right) t/n}^{\ast}\left( \tfrac{t}{n}\left(
aF+bG\right) +G\right) \right) _{t/n}\left( y\right) \right)
\end{align*}
where $y:=\left( a_{0}F\right) _{t/n}\left( x\right) $%
\begin{align}
& =d\left( \left( F_{\left( n-1\right) t/n}^{\ast}\left( \tfrac{t}%
{n}\left[ F,G\right] +G\right) \right) _{b_{0}\left( y\right)
t/n}\left( y\right) ,\left( F_{\left( n-1\right) t/n}^{\ast}\left(
\tfrac{t}{n}\left( aF+bG\right) +G\right) \right) _{b_{0}\left( y\right)
t/n}\left( y\right) \right) \nonumber\\
& =d\left(
\begin{array}
[c]{c}%
\left( F_{-\left( n-1\right) t/n}\left( \tfrac{t}{n}\left[ F,G\right]
+G\right) \right) _{b_{0}\left( y\right) t/n}\left( F_{\left(
n-1\right) t/n}\left( y\right) \right) \\
,\left( F_{-\left( n-1\right) t/n}\left( \tfrac{t}{n}\left( aF+bG\right)
+G\right) \right) _{b_{0}\left( y\right) t/n}\left( F_{\left(
n-1\right) t/n}\left( y\right) \right)
\end{array}
\right) \nonumber\\
& =d\left(
\begin{array}
[c]{c}%
\left( F_{-\left( n-1\right) t/n}\left( \tfrac{t}{n}\left[ F,G\right]
+G\right) \right) _{b_{0}\left( y\right) t/n}\left( w\right) \\
,\left( F_{-\left( n-1\right) t/n}\left( \tfrac{t}{n}\left( aF+bG\right)
+G\right) \right) _{b_{0}\left( y\right) t/n}\left( w\right)
\end{array}
\right) \label{FrobProof60}%
\end{align}
where $w:=F_{\left( n-1\right) t/n}\left( y\right) $ $($a new point, not to
be confused with the point $z$ fixed above$)$. Then by Theorem
\ref{ExpGrowth}, $\left( \ref{FrobProof60}\right) $ is%
\begin{align}
& \leq d\left( \left( \tfrac{t}{n}\left[ F,G\right] +G\right)
_{b_{0}\left( y\right) t/n}\left( w\right) ,\left( \tfrac{t}{n}\left(
aF+bG\right) +G\right) _{b_{0}\left( y\right) t/n}\left( w\right)
\right) e^{\Lambda_{X}\left( n-1\right) t/n}\nonumber\\
& =d\left( G_{b_{0}\left( y\right) t/n}\left( \tfrac{t}{n}\left[
F,G\right] \right) _{b_{0}\left( y\right) t/n}\left( w\right)
,G_{b_{0}\left( y\right) t/n}\left( \tfrac{t}{n}\left( aF+bG\right)
\right) _{b_{0}\left( y\right) t/n}\left( w\right) \right)
e^{\Lambda_{X}\left( n-1\right) t/n}\nonumber\\
& \leq d\left( \left( \tfrac{t}{n}\left[ F,G\right] \right)
_{b_{0}\left( y\right) t/n}\left( w\right) ,\left( \tfrac{t}{n}\left(
aF+bG\right) \right) _{b_{0}\left( y\right) t/n}\left( w\right) \right)
e^{\Lambda_{X}\left( n-1\right) t/n}e^{\Lambda_{Y}b_{0}\left( y\right)
t/n}\nonumber\\
& \leq r\left( b_{0}\left( y\right) \left( \tfrac{t}{n}\right)
^{2}\right) e^{\Lambda_{X}\left( n-1\right) t/n+\Lambda_{Y}b_{0}\left(
y\right) t/n}=:o_{1}\left( \tfrac{1}{n^{2}}\right) \label{FrobProof65}%
\end{align}
where we define%
\[
r\left( s\right) :=d\left( \left[ F,G\right] _{s}\left( w\right)
,\left( aF+bG\right) _{s}\left( w\right) \right) \text{.}%
\]
By the main assumption of the theorem, $r\left( s\right) =o\left( s\right)
$, so $o_{1}\left( \tfrac{1}{n^{2}}\right) =o\left( \tfrac{1}{n^{2}%
}\right) $; but we need to keep a careful record of this estimate, as we
will be summing $n$ terms like it. The subscript distinguishes $o_{1}$ as a
specific function.
Substituting $\left( \ref{FrobProof65}\right) $ into $\left(
\ref{FrobProof45}\right) $ gives%
\begin{align}
& d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right)
\right) _{t/n}\left( x\right) ,S\right) \nonumber\\
& =d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n\right) }G\right)
_{t/n}\left( x\right) ,S\right) \label{FrobProof70}\\
& \leq d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n-1\right) }\left(
\tfrac{t}{n}\left( aF+bG\right) +G\right) \right) _{t/n}\left( x\right)
,S\right) +o_{1}\left( \tfrac{1}{n^{2}}\right) \nonumber\\
& =d\left( \left(
\begin{array}
[c]{c}%
a_{0}F+b_{0}\tfrac{t}{n}\left( a\circ F_{\left( n-1\right) t/n}\right) F\\
+b_{0}\cdot\left( \tfrac{t}{n}\left( b\circ F_{\left( n-1\right)
t/n}\right) +1\right) F_{t/n}^{\ast\left( n-1\right) }G
\end{array}
\right) _{t/n}\left( x\right) ,S\right) +o_{1}\left( \tfrac{1}{n^{2}%
}\right) \nonumber\\
& =d\left( \left(
\begin{array}
[c]{c}%
\left[ a_{0}+\left( b_{0}\tfrac{t}{n}\left( a\circ F_{\left( n-1\right)
t/n}\right) \right) \circ\left( a_{0}F_{t/n}\right) \right] F\\
+b_{0}\cdot\left( \tfrac{t}{n}\left( b\circ F_{\left( n-1\right)
t/n}\right) +1\right) F_{t/n}^{\ast\left( n-1\right) }G
\end{array}
\right) _{t/n}\left( x\right) ,S\right) +o_{1}\left( \tfrac{1}{n^{2}%
}\right) \nonumber\\
& =d\left( \left( a_{1}F+b_{1}F_{t/n}^{\ast\left( n-1\right) }G\right)
_{t/n}\left( x\right) ,S\right) +o_{1}\left( \tfrac{1}{n^{2}}\right)
\label{FrobProof80}%
\end{align}
where%
\begin{align*}
a_{1} & :=a_{0}+\left( b_{0}\tfrac{t}{n}\left( a\circ F_{\left(
n-1\right) t/n}\right) \right) \circ\left( a_{0}F_{t/n}\right)
\qquad\text{and}\\
b_{1} & :=b_{0}\cdot\left( \tfrac{t}{n}\left( b\circ F_{\left(
n-1\right) t/n}\right) +1\right) \text{.}%
\end{align*}
This painful calculation from the third line to the fourth line employs the
linearity of pull-back (Proposition \ref{PullbackLinear}); while the fifth
line is due to the linearity of $F$ (Proposition \ref{FlowLinearity}).
After toiling through these many complicated estimates we can relax a bit,
since the rest of the proof follows more mechanically by iterating the result
of lines $\left( \ref{FrobProof70}\right) $ and $\left( \ref{FrobProof80}%
\right) $:%
\begin{align}
& d\left( \left( a_{0}F+b_{0}F_{t/n}^{\ast\left( n\right) }G\right)
_{t/n}\left( x\right) ,S\right) \nonumber\\
& \leq d\left( \left( a_{1}F+b_{1}F_{t/n}^{\ast\left( n-1\right)
}G\right) _{t/n}\left( x\right) ,S\right) +o_{1}\left( \tfrac{1}{n^{2}%
}\right) \nonumber\\
& \leq d\left( \left( a_{2}F+b_{2}F_{t/n}^{\ast\left( n-2\right)
}G\right) _{t/n}\left( x\right) ,S\right) +o_{1}\left( \tfrac{1}{n^{2}%
}\right) +o_{2}\left( \tfrac{1}{n^{2}}\right) \nonumber\\
& \leq...\leq d\left( \left( a_{n}F+b_{n}G\right) _{t/n}\left( x\right)
,S\right) +\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) \label{FrobProof90}%
\end{align}
where
\begin{align*}
a_{2} & :=a_{1}+\left( b_{1}\tfrac{t}{n}\left( a\circ F_{\left(
n-2\right) t/n}\right) \right) \circ\left( a_{1}F_{t/n}\right) \\
b_{2} & :=b_{1}\cdot\left( \tfrac{t}{n}\left( b\circ F_{\left(
n-2\right) t/n}\right) +1\right) \text{\qquad\qquad and in general}\\
a_{i} & :=a_{i-1}+\left( b_{i-1}\tfrac{t}{n}\left( a\circ F_{\left(
n-i\right) t/n}\right) \right) \circ\left( a_{i-1}F_{t/n}\right) \\
b_{i} & :=b_{i-1}\cdot\left( \tfrac{t}{n}\left( b\circ F_{\left(
n-i\right) t/n}\right) +1\right) \text{.}%
\end{align*}
In the region of interest the functions $\left| a\right| $ and $\left|
a_{0}\right| $ are bounded by some $A\in\mathbb{R}$, and $\left| b\right| $
and $\left| b_{0}\right| $ are bounded by some $B\in\mathbb{R}$, so%
\begin{align*}
\left| b_{1}\right| & =\left| b_{0}\cdot\left( \tfrac{t}{n}\left(
b\circ F_{\left( n-1\right) t/n}\right) +1\right) \right| \leq B\left(
\tfrac{t}{n}B+1\right) \\
\left| b_{2}\right| & =\left| b_{1}\cdot\left( \tfrac{t}{n}\left(
b\circ F_{\left( n-2\right) t/n}\right) +1\right) \right| \leq B\left(
\tfrac{t}{n}B+1\right) ^{2}\\
\left| b_{i}\right| & \leq B\left( \tfrac{t}{n}B+1\right) ^{i}%
\text{\qquad and}%
\end{align*}%
\begin{align*}
\left| a_{1}\right| & =\left| a_{0}+b_{0}\tfrac{t}{n}\left( a\circ
F_{\left( n-1\right) t/n}\right) \right| \leq A+B\tfrac{t}{n}A\\
\left| a_{2}\right| & =\left| a_{1}+b_{1}\tfrac{t}{n}\left( a\circ
F_{\left( n-2\right) t/n}\right) \right| \leq\left( A+B\tfrac{t}%
{n}A\right) +B\left( \tfrac{t}{n}B+1\right) \tfrac{t}{n}A\\
\left| a_{3}\right| & =\left| a_{2}+b_{2}\tfrac{t}{n}\left( a\circ
F_{\left( n-3\right) t/n}\right) \right| \\
& \leq A+B\tfrac{t}{n}A+B\left( \tfrac{t}{n}B+1\right) \tfrac{t}%
{n}A+B\left( \tfrac{t}{n}B+1\right) ^{2}\tfrac{t}{n}A\\
\left| a_{i}\right| & \leq A+\tfrac{t}{n}AB\overset{i-1}{\underset{k=0}{%
{\textstyle\sum}
}}\left( \tfrac{t}{n}B+1\right) ^{k}=A+\tfrac{t}{n}AB\frac{\left( \tfrac
{t}{n}B+1\right) ^{i}-1}{\tfrac{t}{n}B}\\
& =A\left( \tfrac{t}{n}B+1\right) ^{i}\text{.}%
\end{align*}
Therefore%
\begin{align*}
\left| b_{n}\right| & \leq B\left( \tfrac{t}{n}B+1\right) ^{n}\leq
Be^{tB}\text{\qquad and}\\
\left| a_{n}\right| & \leq A\left( \tfrac{t}{n}B+1\right) ^{n}\leq
Ae^{tB}\text{.}%
\end{align*}
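The final inequality in each line is the elementary estimate $1+u\leq e^{u}$
with $u=\tfrac{t}{n}B$:%
\[
\left( \tfrac{t}{n}B+1\right) ^{n}\leq\left( e^{tB/n}\right) ^{n}%
=e^{tB}\text{.}%
\]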
Penultimately, we need to estimate the $o_{i}\left( \tfrac{1}{n^{2}}\right)
$. Remember from line $\left( \ref{FrobProof65}\right) $%
\[
o_{1}\left( \tfrac{1}{n^{2}}\right) :=r\left( b_{0}\left( y\right)
\left( \tfrac{t}{n}\right) ^{2}\right) e^{\Lambda_{X}\left( n-1\right)
t/n+\Lambda_{Y}b_{0}\left( y\right) t/n}%
\]
where $r\left( s\right) =o\left( s\right) $, so%
\begin{align*}
o_{2}\left( \tfrac{1}{n^{2}}\right) & =r\left( b_{1}\left( y\right)
\left( \tfrac{t}{n}\right) ^{2}\right) e^{\Lambda_{X}\left( n-2\right)
t/n+\Lambda_{Y}b_{1}\left( y\right) t/n}\\
& \leq B\left( \tfrac{t}{n}B+1\right) o\left( \left( \tfrac{t}{n}\right)
^{2}\right) e^{\Lambda_{X}\left( n-2\right) t/n+\Lambda_{Y}B\left(
\tfrac{t}{n}B+1\right) t/n}\\
o_{i}\left( \tfrac{1}{n^{2}}\right) & =r\left( b_{i-1}\left( y\right)
\left( \tfrac{t}{n}\right) ^{2}\right) e^{\Lambda_{X}\left( n-i\right)
t/n+\Lambda_{Y}b_{i-1}\left( y\right) t/n}\text{.}%
\end{align*}
Consequently%
\begin{align*}
\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) & \leq\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}r\left( b_{i-1}\left( y\right) \left( \tfrac{t}{n}\right) ^{2}\right)
e^{\Lambda_{X}\left( n-i\right) t/n+\Lambda_{Y}B\left( \tfrac{t}%
{n}B+1\right) ^{i-1}t/n}\\
& \leq o\left( \left( \tfrac{t}{n}\right) ^{2}\right) Be^{tB}\overset
{n}{\underset{i=1}{%
{\textstyle\sum}
}}e^{\Lambda_{X}\left( n-i\right) t/n+\Lambda_{Y}B\left( \tfrac{t}%
{n}B+1\right) ^{i-1}t/n}%
\end{align*}
since $r\left( b_{i-1}\left( y\right) \left( \tfrac{t}{n}\right)
^{2}\right) \leq o\left( \left( \tfrac{t}{n}\right) ^{2}\right) Be^{tB}$
for all $i$. Therefore%
\[
\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) \leq o\left( \left( \tfrac{t}%
{n}\right) ^{2}\right) Be^{tB}ne^{\Lambda_{X}t+\Lambda_{Y}Be^{tB}%
t/n}=o\left( \tfrac{1}{n}\right)
\]
as $n\rightarrow\infty$. Putting this into $\left( \ref{FrobProof90}\right)
$ gives%
\[
d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right) \right)
_{t/n}\left( x\right) ,S\right) \leq d\left( \left( a_{n}F+b_{n}G\right)
_{t/n}\left( x\right) ,S\right) +o\left( \tfrac{1}{n}\right) =o\left(
\tfrac{1}{n}\right)
\]
because of the uniform bound on $\left| a_{n}\right| $ and $\left|
b_{n}\right| $. To see this notice%
\[
d\left( \left( a_{\ast}F+b_{\ast}G\right) _{t/n}\left( x\right)
,S\right) =o\left( \tfrac{1}{n}\right)
\]
uniformly for bounded $a_{\ast}$ and $b_{\ast}$ since $a_{\ast}F+b_{\ast}G\sim
b_{\ast}G+a_{\ast}F$ and as before $\left( b_{\ast}G+a_{\ast}F\right)
_{t}\left( x\right) \in S$ using the uniform $\Lambda$ and $\Omega$ derived
in the proofs of Propositions \ref{PropX+Y E1&2} and \ref{Prop_aXE1&2} (cf.
Remark \ref{RemUniformSolutions}).
Finally we need to check%
\[
d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right) \right)
_{r}\left( x\right) ,S\right) =o\left( r\right)
\]
when $r$ is not necessarily $t/n$. We may assume $0<t<1$ and $0<r<t$ so that
$t=nr+\varepsilon$ for some $0\leq\varepsilon<r$ and integer $n$ with
$\frac{t}{r}-1<n\leq\frac{t}{r}$. Therefore the above calculations give%
\begin{align*}
& d\left( \left( F_{t}^{\ast}\left( a^{\prime}F+b^{\prime}G\right)
\right) _{r}\left( x\right) ,S\right) =d\left( \left( F_{\varepsilon
}^{\ast}F_{r}^{\ast\left( n\right) }\left( a^{\prime}F+b^{\prime}G\right)
\right)
_{r}\left( x\right) ,S\right) \\
& \leq d\left( F_{\varepsilon}^{\ast}\left( a_{n}F+b_{n}G\right)
_{r}\left( x\right) ,S\right) +o\left( r\right) =o\left( r\right)
\text{.}%
\end{align*}
\end{proof}
The $n$-dimensional corollary of this $2$-dimensional version is given in the
next section.
\begin{remark}
\label{Rem2ndObracket}In the assumptions of Theorem \ref{FrobeniusThm}
$\left[ F,G\right] $ can be replaced with $\left[ X,Y\right] $ when they
are tangent. Since the brackets use $\sqrt{t}$ we have $\left[ F,G\right]
\sim\left[ X,Y\right] $ when $X$ and $Y$ are $2$nd-order tangent to their
flows, i.e.,
\begin{align*}
d\left( X_{t}\left( x\right) ,F_{t}\left( x\right) \right) & =O\left(
t^{2}\right) \qquad\qquad\text{and}\\
d\left( Y_{t}\left( x\right) ,G_{t}\left( x\right) \right) & =O\left(
t^{2}\right)
\end{align*}
locally uniformly. We denote $2$nd-order local uniform tangency by $X\approx
F$. This holds, for example, when $X$ comes from a twice continuously
differentiable vector field by Taylor's theorem. But in formulating our
theorem for the nonsmooth case, the two brackets are not interchangeable.
Beware: 2nd-order tangency is ``big oh'' of $t^{2}$, not ``little oh''.
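For instance $($a standard computation$)$, if $f$ is a $C^{2}$ vector field
on $\mathbb{R}^{n}$ with flow $F$ and $X_{t}\left( x\right) :=x+tf\left(
x\right) $ is the associated arc field, then Taylor's theorem gives%
\[
F_{t}\left( x\right) =x+tf\left( x\right) +\tfrac{t^{2}}{2}Df\left(
x\right) f\left( x\right) +o\left( t^{2}\right)
\]
so $d\left( X_{t}\left( x\right) ,F_{t}\left( x\right) \right) =O\left(
t^{2}\right) $ locally uniformly, i.e., $X\approx F$.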
We might have chosen to define the bracket $\left[ X,Y\right] $ using the
flows instead of the arc fields to simplify the statements of Theorem
\ref{FrobeniusThm} and those below. However it is often easier to calculate
the bracket and to check closure using arc fields instead of the flows.
In light of this remark, Theorem \ref{FrobeniusThm} gives:
\end{remark}
\begin{corollary}
Assume $X$ \& $Y$ close, are transverse, and satisfy E1 and E2 on a locally
complete metric space $M$. Further assume $X$ and $Y$ are 2nd-order tangent to
their local flows $F$ and $G$. If $\left[ X,Y\right] \sim aX+bY$ for some
Lipschitz functions $a,b:M\rightarrow\mathbb{R}$, then for each $x_{0}\in M$
there exists an integral surface $S$ through $x_{0}$.
\end{corollary}
\section{Global Frobenius Theorem\label{SectionDist&Foliations}}
The goal of this section is to recast Theorem \ref{FrobeniusThm} in the
language of distributions and foliations, and so we begin with several
definitions. $M$ is, as ever, a locally complete metric space.
\begin{definition}
A \textbf{distribution} $\Delta$ on $M$ is a set of curves in $M$.
\end{definition}
\begin{example}
A single arc field $X$ gives a distribution by forgetting $X$ is continuous in
its space variable $x$, and defining $\Delta=\left\{ X\left( x,\cdot\right)
:x\in M\right\} $. Any union of arc fields similarly gives a distribution.
Given two arc fields $X$ and $Y$, their \textbf{linear span} is a
distribution:%
\[
\Delta\left( X,Y\right) =\left\{ \left( aX+bY\right) \left(
x,\cdot\right) :a,b\in\mathbb{R},x\in M\right\} \text{.}%
\]
\end{example}
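As a concrete illustration, take $M$ a normed linear space with the
translation arc fields $X_{s}\left( x\right) :=x+su$ and $Y_{t}\left(
x\right) :=x+tv$ for linearly independent vectors $u$ and $v$. Then%
\[
\Delta\left( X,Y\right) =\left\{ t\mapsto x+t\left( au+bv\right)
:a,b\in\mathbb{R},x\in M\right\} \text{,}%
\]
the set of straight-line curves through points of $M$ with velocities in the
linear span of $u$ and $v$.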
The direct sum of an arbitrary collection of arc fields similarly gives a
distribution, defined with finite summands. All of the following definitions
can, of course, be made with arbitrary indexing sets; but we will only use
finite sets of generators in the theorems of this paper.
Denote $\Delta_{x}:=\left\{ c\in\Delta:c\left( 0\right) =x\right\} $.
\begin{definition}
$X$ is $($locally uniformly$)$ \textbf{transverse} to $\Delta$ if for all
$x_{0}\in M$ there exists a $\delta>0$ such that for all $x\in B\left(
x_{0},\delta\right) $ we have
\[
d\left( X_{x}\left( t\right) ,c\left( s\right) \right) \geq\delta\left(
\left| s\right| +\left| t\right| \right)
\]
for all $c\in\Delta_{x}$ and all $\left| s\right| ,\left| t\right|
<\delta$. The arc fields $\overset{1}{X},$ $\overset{2}{X},$ $...,$
$\overset{n}{X}$ are \textbf{transverse} to each other if for each
$i\in\left\{ 1,...,n\right\} $ we have $\overset{i}{X}$ transverse to%
\[
\Delta\left( \overset{1}{X},\overset{2}{X},...,\overset{i-1}{X},\overset
{i+1}{X},...,\overset{n}{X}\right) \text{.}%
\]
\end{definition}
For $y\in M$ define%
\[
d\left( y,\Delta_{x}\right) :=\inf\left\{ d\left( y,c\left( t\right)
\right) :c\in\Delta_{x}\text{ and }t\in dom\left( c\right) \right\}
\text{.}%
\]
If $\Delta_{x}=\emptyset$ then, as usual, the distance is $\infty$ by
definition. So if $X$ is transverse to $\Delta$ then for all $x_{0}\in M$
there exists a $\delta>0$ such that for all $x\in B\left( x_{0}%
,\delta\right) $ we have
\[
d\left( X_{x}\left( t\right) ,\Delta_{x}\right) \geq\delta\left| t\right|
\]
for all $\left| t\right| <\delta$, since $d\left( X_{x}\left( t\right)
,c\left( s\right) \right) \geq\delta\left( \left| s\right| +\left|
t\right| \right) \geq\delta\left| t\right| $ and the infimum over
$c\in\Delta_{x}$ and $s$ preserves this bound.
\begin{definition}
$X$ is \textbf{tangent} to $\Delta$ if for each $x\in M$
\[
d\left( X_{x}\left( t\right) ,\Delta_{x}\right) =o\left( t\right)
\text{.}%
\]
If this distance is $o\left( t\right) $ locally uniformly in $x\in M$ then
$X$ is \textbf{locally uniformly tangent} to $\Delta$, denoted $X\sim\Delta$.
Two distributions $\Delta$ and $\widetilde{\Delta}$ are \textbf{tangent} if
for each $c\in\Delta$ there exists $\widetilde{c}\in\widetilde{\Delta}$ such
that $\widetilde{c}$ is tangent to $c$ $($at $t=0),$ and \textit{vice-versa,}
for each $\widetilde{c}\in\widetilde{\Delta}$ there exists $c\in\Delta$ such
that $c$ is tangent to $\widetilde{c}$. Local uniform tangency is defined in
the obvious way, and denoted $\Delta\sim\widetilde{\Delta}$. Again, $\sim$ is
an equivalence relation.
\end{definition}
\begin{definition}
A distribution $\Delta$ is $n$-\textbf{dimensional} if there exists a set of
$n$ transverse arc fields $\left\{ \overset{1}{X},\overset{2}{X}%
,...,\overset{n}{X}\right\} $ which all mutually close and satisfy E1 and E2
such that $\Delta\sim\Delta\left( \overset{1}{X},\overset{2}{X}%
,...,\overset{n}{X}\right) $.
\end{definition}
Given $X$, if there exist Lipschitz functions $a_{k}:M\rightarrow\mathbb{R}$
such that $X\sim\overset{n}{\underset{k=1}{\sum}}a_{k}\overset{k}{X}$ then
clearly $X\sim\Delta\left( \overset{1}{X},\overset{2}{X},...,\overset{n}%
{X}\right) $.
\begin{definition}
An $n$-dimensional distribution $\Delta\sim\Delta\left( \overset{1}%
{X},\overset{2}{X},...,\overset{n}{X}\right) $ is \textbf{involutive} if for
each choice of $i,j\in\left\{ 1,...,n\right\} $ we have%
\[
\left[ \overset{i}{X},\overset{j}{X}\right] \sim\Delta\text{.}%
\]
\end{definition}
\begin{definition}
A \textbf{surface} $S$ in $M$ is an $n$-dimensional topological manifold
$S\subset M$. A surface is \textbf{locally uniformly tangent} to an arc field
$X$, denoted $X\sim S$, if $d\left( X_{t}\left( x\right) ,S\right)
=o\left( t\right) $ locally uniformly in $x$.
A surface is said to be an \textbf{integral surface} for an $n$-dimensional
distribution $\Delta\sim\Delta\left( \overset{1}{X},\overset{2}%
{X},...,\overset{n}{X}\right) $ if $\overset{n}{\underset{k=1}{\sum}}%
a_{k}\overset{k}{X}\sim S$ for any choice of Lipschitz functions
$a_{k}:M\rightarrow\mathbb{R}$.
A distribution is said to be \textbf{integrable} if there exists an integral
surface through every point in $M$.
\end{definition}
Theorem \ref{FrobeniusThm} has the following corollary:
\begin{theorem}
\label{ThmInvol=>Integ}An $n$-dimensional involutive distribution is integrable.
\end{theorem}
\begin{proof}
$n=1$ is Theorem \ref{CL}. $n=2$ is Theorem \ref{FrobeniusThm}. Now proceed by
induction. We do enough of the case $n=3$ to suggest the path; and much of
this is copied from the proof of Theorem \ref{FrobeniusThm}.
Choose $x_{0}\in M$. Let $X,Y,$ and $Z$ be the transverse arc fields
guaranteed in the definition of a 3-dimensional distribution. If we find an
integral surface $S$ for $\Delta\left( X,Y,Z\right) $ through $x_{0}$ then
obviously $S$ is an integral surface for $\Delta$. Let $F,G$, and $H$ be the
local flows of $X,Y$, and $Z$ and define%
\[
S:=\left\{ F_{t}G_{s}H_{r}\left( x_{0}\right) :\left| r\right| ,\left|
s\right| ,\left| t\right| <\delta\right\}
\]
with $\delta>0$ chosen small enough as in the proof of Theorem
\ref{FrobeniusThm} so that $S$ is a three-dimensional manifold. Again we may
assume $\delta$ is also chosen small enough so that throughout $S$ the
functions $\left| a_{k}\right| $ are bounded by $A$, the constants $\Lambda
$, $\Omega$, and $\rho$ for $X,Y$ and $Z$ hold uniformly, and the closure of
$B\left( x,3\delta\left( \rho+1\right) \right) $ is complete. Notice
\[
\underline{S}:=\left\{ G_{s}H_{r}\left( x_{0}\right) :\left| r\right|
,\left| s\right| <\delta\right\}
\]
is an integral surface through $x_{0}$ for $\Delta\left( Y,Z\right) $ by the
proof of Theorem \ref{FrobeniusThm}. Notice $S\sim X$ by construction, but it
is not immediately clear that $S\sim a^{\prime}X+b^{\prime}Y+c^{\prime}Z$ for
arbitrarily chosen Lipschitz functions $a^{\prime},b^{\prime},c^{\prime
}:M\rightarrow\mathbb{R}$. Again we
really only need to show $S\sim a^{\prime}F+b^{\prime}G+c^{\prime}H$ for an
arbitrary point $z:=F_{t}G_{s}H_{r}\left( x_{0}\right) \in S$, and again it
is sufficient to prove%
\[
\left( F_{t}\right) ^{\ast}\left( a^{\prime}F+b^{\prime}G+c^{\prime
}H\right) \sim S\text{\qquad at\qquad}y=G_{s}H_{r}\left( x_{0}\right)
\]
by the construction of $S$. Continue as above adapting the same tricks from
the proof of Theorem \ref{FrobeniusThm} to the extra dimension.
\end{proof}
Similar to the definition for a surface, an arbitrary set $S\subset M$ is
defined to be \textbf{locally uniformly tangent} to $X$ if%
\[
d\left( X_{t}\left( y\right) ,S\right) =o\left( t\right)
\]
locally uniformly for $y\in S$, denoted $S\sim X$.
\begin{lemma}
\label{LemmaNagumo}Let $\sigma_{x}:\left( \alpha,\beta\right) \rightarrow
U\subset M$ be a solution to $X$ which meets Condition E1 with uniform
constant $\Lambda$ on a neighborhood $U$. Assume $S\subset U$ is a closed set
with $S\sim X$. Then%
\[
d\left( \sigma_{x}\left( t\right) ,S\right) \leq e^{\Lambda\left|
t\right| }d\left( x,S\right) \text{\ for all }t\in\left( \alpha
,\beta\right) \text{.}%
\]
\end{lemma}
\begin{proof}
(Adapted from the proof of Theorem \ref{ExpGrowth}.)
We check only $t>0$. Define%
\[
g\left( t\right) :=e^{-\Lambda t}d\left( \sigma_{x}\left( t\right)
,S\right) \text{.}%
\]
For $h\geq0$, we have
\begin{align*}
& g\left( t+h\right) -g\left( t\right) \\
& =e^{-\Lambda\left( t+h\right) }d\left( \sigma_{x}\left( t+h\right)
,S\right) -e^{-\Lambda t}d\left( \sigma_{x}\left( t\right) ,S\right) \\
& \leq e^{-\Lambda\left( t+h\right) }\left[ d\left( \sigma_{x}\left(
t+h\right) ,X_{h}\left( \sigma_{x}\left( t\right) \right) \right)
+d\left( X_{h}\left( \sigma_{x}\left( t\right) \right) ,X_{h}\left(
y\right) \right) +d\left( X_{h}\left( y\right) ,S\right) \right] \\
& -e^{-\Lambda t}d\left( \sigma_{x}\left( t\right) ,S\right)
\end{align*}
for any $y\in S,$ which in turn is%
\begin{align*}
& \leq e^{-\Lambda\left( t+h\right) }\left[ d\left( X_{h}\left(
\sigma_{x}\left( t\right) \right) ,X_{h}\left( y\right) \right)
+o\left( h\right) \right] -e^{-\Lambda t}d\left( \sigma_{x}\left(
t\right) ,S\right) \\
& \leq e^{-\Lambda t}e^{-\Lambda h}d\left( \sigma_{x}\left( t\right)
,y\right) \left( 1+\Lambda h\right) -e^{-\Lambda t}d\left( \sigma
_{x}\left( t\right) ,S\right) +o\left( h\right) \\
& =\left[ e^{-\Lambda h}\left( 1+\Lambda h\right) d\left( \sigma
_{x}\left( t\right) ,y\right) -d\left( \sigma_{x}\left( t\right)
,S\right) \right] e^{-\Lambda t}+o\left( h\right) \text{.}%
\end{align*}
Therefore%
\[
g\left( t+h\right) -g\left( t\right) \leq\left[ e^{-\Lambda h}\left(
1+\Lambda h\right) -1\right] e^{-\Lambda t}d\left( \sigma_{x}\left(
t\right) ,S\right) +o\left( h\right)
\]
since $y$ was arbitrary in $S$. Thus%
\begin{align*}
& g\left( t+h\right) -g\left( t\right) \\
& \leq o\left( h\right) e^{-\Lambda t}d\left( \sigma_{x}\left( t\right)
,S\right) +o\left( h\right) =o\left( h\right) \left( g\left( t\right)
+1\right) .
\end{align*}
Hence, the upper forward derivative of $g\left( t\right) $ is nonpositive;
i.e.,
\[
D^{+}g\left( t\right) :=\overline{\lim_{h\rightarrow0^{+}}}\,\left(
\frac{g\left( t+h\right) -g\left( t\right) }{h}\right) \leq0.
\]
Consequently, $g\left( t\right) \leq g\left( 0\right) $ or
\[
d\left( \sigma_{x}\left( t\right) ,S\right) \leq e^{\Lambda t}d\left(
\sigma_{x}\left( 0\right) ,S\right) =e^{\Lambda t}d\left( x,S\right) .
\]
\end{proof}
Choosing $x\in S$ in Lemma \ref{LemmaNagumo} gives the following metric space
generalization of the Nagumo-Brezis Invariance Theorem (Example \ref{Banach
Example} shows how this generalizes the Banach space setting). We state and
prove only the bidirectional case; the case for forward flows is easily
adapted \textit{mutatis mutandis}. Cf. \cite{Motreanu Pavel} for an exposition
on general invariance theorems.
\begin{theorem}
\label{ThmNagumo}Let $X$ satisfy E1 and E2 and assume a closed set $S\subset
M$ has $S\sim X$. Then for any $x\in S$ we have $F_{t}\left( x\right) \in S$
for all $t\in\left( \alpha_{x},\beta_{x}\right) $. I.e., $S$ is an
\textbf{invariant set} under the flow $F$.
\end{theorem}
\begin{theorem}
\label{ThmUniqueIntSurfs}The integral surfaces guaranteed by Theorem
\ref{ThmInvol=>Integ} are unique in the following sense: if $S_{1}$ and
$S_{2}$ are integral surfaces through $x\in M$, then $S_{1}\cap S_{2}$ is an
integral surface.
\end{theorem}
\begin{proof}
The case $n=1$ is true by the uniqueness of integral curves.
For higher dimensions $n$, Theorem \ref{ThmNagumo} guarantees $S_{1}$ and
$S_{2}$ contain local integral curves for $\overset{n}{\underset{k=1}{\sum}%
}a_{k}\overset{k}{X}$ for all choices of $a_{k}\in\mathbb{R}$ with initial
condition $x$. Since the $\overset{k}{X}$ are transverse, there is a small
neighborhood of $x$ on which all the choices of the parameters $a_{k}$ give
local non-intersecting curves in $M$ which fill up $n$ dimensions.
\end{proof}
Therefore, by continuation we have a unique maximal connected integral surface
through each point.
\begin{definition}
A \textbf{foliation} partitions $M$ into a set of subsets $\Phi:=\left\{
\mathcal{L}_{i}\right\} _{i\in I}$ for some indexing set $I$, where the
subsets $\mathcal{L}_{i}\subset M$ $\left( \text{called \textbf{leaves}%
}\right) $ are disjoint, connected topological manifolds each having the same dimension.
A foliation $\Phi$ is \textbf{tangent} to a distribution $\Delta$ if the
leaves are integral surfaces. When a foliation exists which is tangent to a
distribution $\Delta$ we say $\Delta$ \textbf{foliates} $M$.
\end{definition}
\begin{theorem}
\label{ThmInvol=>Foliat}An $n$-dimensional involutive distribution has a
unique tangent foliation.
\end{theorem}
\begin{proof}
Theorems \ref{ThmInvol=>Integ} and \ref{ThmUniqueIntSurfs} guarantee the
existence of the leaves, i.e., the unique maximal integral surfaces.
\end{proof}
The converse of this result is easy to prove in the classical context on a
Banach space. I do not believe it is true here. Instead we have the following
partial converse. Cf. Remark \ref{Rem2ndObracket}.
\begin{proposition}
Let $\Delta\sim\Delta\left( \overset{1}{X},\overset{2}{X},...,\overset{n}%
{X}\right) $ be an $n$-dimensional distribution with $\overset{i}{X}%
\approx\overset{i}{F}$ where $\overset{i}{F}$ is the local flow for
$\overset{i}{X}$. If $\Delta$ foliates $M$ then $\Delta$ is involutive.
\end{proposition}
\begin{proof}
Remark \ref{Rem2ndObracket} gives $\left[ \overset{i}{X},\overset{j}%
{X}\right] \sim\left[ \overset{i}{F},\overset{j}{F}\right] $ and Theorem
\ref{ThmNagumo} gives $\left[ \overset{i}{F},\overset{j}{F}\right]
_{t}\left( x\right) \in\mathcal{L}_{i}$ if $x\in\mathcal{L}_{i}$ so $\left[
\overset{i}{F},\overset{j}{F}\right] \sim\Delta$.
\end{proof}
Collecting all these results we have the following version of the Global
Frobenius Theorem.
\begin{theorem}
\label{ThmGlobFrob}Let $\Delta\sim\Delta\left( \overset{1}{X},\overset{2}%
{X},...,\overset{n}{X}\right) $ be an $n$-dimensional distribution on a
locally complete metric space $M$, with $\overset{i}{X}\approx\overset{i}{F} $
where $\overset{i}{F}$ is the local flow for $\overset{i}{X}$. The following
are equivalent$:$
1. $\Delta$ is involutive
2. $\Delta$ is integrable
3. $\Delta$ foliates $M$.
\end{theorem}
\section{Commutativity of Flows\label{SectionCommut}}
\begin{theorem}
\label{ThmCommute}Assume $X$ and $Y$ satisfy E1 and E2 on a locally complete
metric space $M$. Let $F$ and $G$ be the local flows of $X$ and $Y$. Then
$\left[ F,G\right] \sim0$ if and only if $F$ and $G$ commute, i.e.,%
\[
F_{t}G_{s}\left( x\right) =G_{s}F_{t}\left( x\right) \text{,\qquad
i.e.,\qquad}F_{t}^{\ast}\left( G\right) =G\text{.}%
\]
\end{theorem}
\begin{proof}
The assumption $\left[ F,G\right] \sim aX+bY$ with $a=b=0$ allows us to copy
the approach in the proof of Theorem \ref{FrobeniusThm}. Let $\delta>0$ be
chosen small enough so
1. the functions $\left| a\right| $ and $\left| b\right| $ are bounded
2. the constants $\Lambda$, $\Omega$, and $\rho$ for $X$ and $Y$ hold uniformly
3. $\left[ F,G\right] \sim0$ uniformly
\noindent all on $S:=B\left( x,2\delta\left( \rho+1\right) \right) $, with
$\delta$ small enough that $S$ is also complete. We check $t>0$. Since $F_{t}^{\ast}\left(
G\right) $ and $G$ are both local flows, we only need to show they are
tangent to each other and then they must be equal by uniqueness of solutions.
Imagine being in the context of differentiable manifolds. There, for vector
fields $f$ and $g$ with local flows $F$ and $G$, we would have%
\[
\underset{h\rightarrow0}{\lim}\frac{F_{h}^{\ast}\left( g\right) -g}%
{h}=\mathcal{L}_{f}g=\left[ f,g\right] =0
\]
so we expect%
\[
F_{h}^{\ast}\left( g\right) =g+o\left( h\right) \text{.}%
\]
We might use this idea as before with the linearity of pull-back (Proposition
\ref{PullbackLinear}) to get%
\[
F_{t}^{\ast}\left( g\right) =\underset{n\rightarrow\infty}{\lim}%
F_{t/n}^{\ast\left( n\right) }\left( g\right) =\underset{n\rightarrow
\infty}{\lim}g+no\left( 1/n\right) =g
\]
as desired.
Now in our context of metric spaces with $t>0$, line $\left(
\text{\ref{LieD=BracketArcFieldversion}}\right) $ again gives%
\[
F_{t/n}^{\ast}\left( G\right) _{t/n}\left( x\right) =\left( \tfrac{t}%
{n}\left[ F,G\right] +G\right) _{t/n}\left( x\right) \text{.}%
\]
For $t<0$ one would use $\left( \text{\ref{LieD=BracketArcFieldversion2}%
}\right) $. Also we again have%
\begin{gather*}
\left[ F,G\right] \sim0\qquad\text{implies}\\
d\left( \left( \tfrac{t}{n}\left[ F,G\right] \right) _{t/n}\left(
x\right) ,x\right) =o\left( \tfrac{1}{n^{2}}\right) \text{.}%
\end{gather*}
Using these tricks (and Theorem \ref{ExpGrowth} in the fourth line following)
gives%
\begin{align*}
& d\left( \left( F_{t}^{\ast}\left( G\right) \right) _{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right) =d\left( \left( F_{t/n}%
^{\ast\left( n-1\right) }F_{t/n}^{\ast}\left( G\right) \right)
_{t/n}\left( x\right) ,G_{t/n}\left( x\right) \right) \\
& =d\left( F_{t/n}^{\ast\left( n-1\right) }\left( \tfrac{t}{n}\left[
F,G\right] +G\right) _{t/n}\left( x\right) ,G_{t/n}\left( x\right)
\right) \\
& \leq d\left( F_{t/n}^{\ast\left( n-1\right) }\left( G_{t/n}\tfrac{t}%
{n}\left[ F,G\right] _{t/n}\left( x\right) \right) ,F_{t/n}^{\ast\left(
n-1\right) }G_{t/n}\left( x\right) \right) +d\left( F_{t/n}^{\ast\left(
n-1\right) }G_{t/n}\left( x\right) ,G_{t/n}\left( x\right) \right) \\
& \leq d\left( G_{t/n}\tfrac{t}{n}\left[ F,G\right] _{t/n}\left(
y\right) ,G_{t/n}\left( y\right) \right) e^{\Lambda_{X}\frac{t\left(
n-1\right) }{n}}+d\left( F_{t/n}^{\ast\left( n-1\right) }G_{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right)
\end{align*}
where $y:=F_{\left( n-1\right) t/n}\left( x\right) $%
\[
\leq d\left( \tfrac{t}{n}\left[ F,G\right] _{t/n}\left( y\right)
,y\right) e^{\Lambda_{Y}t/n}e^{\Lambda_{X}\frac{t\left( n-1\right) }{n}%
}+d\left( F_{t/n}^{\ast\left( n-1\right) }G_{t/n}\left( x\right)
,G_{t/n}\left( x\right) \right)
\]
and so%
\begin{align*}
& d\left( \left( F_{t}^{\ast}\left( G\right) \right) _{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right) \\
& \leq d\left( F_{t/n}^{\ast\left( n-1\right) }G_{t/n}\left( x\right)
,G_{t/n}\left( x\right) \right) +e^{\Lambda_{Y}t/n+\Lambda_{X}%
\frac{t\left( n-1\right) }{n}}o_{1}\left( \tfrac{1}{n^{2}}\right)
\end{align*}
where $o_{1}\left( \tfrac{1}{n^{2}}\right) :=d\left( \tfrac{t}{n}\left[
F,G\right] _{t/n}\left( y\right) ,y\right) $.
Iterating this result gives%
\begin{align*}
& d\left( \left( F_{t/n}^{\ast n}\left( G\right) \right) _{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right) \\
& \leq d\left( F_{t/n}^{\ast\left( n-1\right) }G_{t/n}\left( x\right)
,G_{t/n}\left( x\right) \right) +e^{\Lambda_{Y}t/n+\Lambda_{X}%
\frac{t\left( n-1\right) }{n}}o_{1}\left( \tfrac{1}{n^{2}}\right) \\
& \leq d\left( F_{t/n}^{\ast\left( n-2\right) }G_{t/n}\left( x\right)
,G_{t/n}\left( x\right) \right) +e^{\Lambda_{Y}t/n+\Lambda_{X}%
\frac{t\left( n-2\right) }{n}}o_{2}\left( \tfrac{1}{n^{2}}\right)
+e^{\Lambda_{Y}t/n+\Lambda_{X}\frac{t\left( n-1\right) }{n}}o_{1}\left(
\tfrac{1}{n^{2}}\right) \\
& \leq...\leq d\left( F_{t/n}^{0}G_{t/n}\left( x\right) ,G_{t/n}\left(
x\right) \right) +e^{\Lambda_{Y}t/n}\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) e^{\Lambda_{X}\frac{t\left(
n-i\right) }{n}}\\
& =e^{\Lambda_{Y}t/n}\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) e^{\Lambda_{X}\frac{t\left(
n-i\right) }{n}}%
\end{align*}
where $o_{i}\left( \tfrac{1}{n^{2}}\right) :=d\left( \tfrac{t}{n}\left[
F,G\right] _{t/n}\left( y_{i}\right) ,y_{i}\right) $ and $y_{i}%
:=F_{\left( n-i\right) t/n}\left( x\right) $. Since $d\left( \tfrac{t}%
{n}\left[ F,G\right] _{t/n}\left( y\right) ,y\right) =o\left( \tfrac
{1}{n^{2}}\right) $ uniformly for $y\in B\left( x,2\delta\left(
\rho+1\right) \right) $ we have%
\begin{align*}
& d\left( \left( F_{t/n}^{\ast n}\left( G\right) \right) _{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right) \\
& \leq e^{\Lambda_{Y}t/n}\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}o_{i}\left( \tfrac{1}{n^{2}}\right) e^{\Lambda_{X}\frac{t\left(
n-i\right) }{n}}=o\left( \tfrac{1}{n^{2}}\right) e^{\Lambda_{Y}t/n}%
\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}e^{\Lambda_{X}\frac{t\left( n-i\right) }{n}}\\
& =o\left( \tfrac{1}{n^{2}}\right) e^{\Lambda_{Y}t/n}e^{\Lambda_{X}%
t}\overset{n}{\underset{i=1}{%
{\textstyle\sum}
}}\left( e^{-\Lambda_{X}\frac{t}{n}}\right) ^{i}\leq o\left( \tfrac{1}%
{n^{2}}\right) e^{\Lambda_{Y}t/n+\Lambda_{X}t}\frac{1-\left( e^{-\Lambda
_{X}\frac{t}{n}}\right) ^{n+1}}{1-\left( e^{-\Lambda_{X}\frac{t}{n}%
}\right) }\text{.}%
\end{align*}
So%
\[
d\left( \left( F_{t}^{\ast}\left( G\right) \right) _{t/n}\left(
x\right) ,G_{t/n}\left( x\right) \right) =o\left( \tfrac{1}{n}\right)
\]
and $F_{t}^{\ast}\left( G\right) \sim G$ by the same argument as in the last
paragraph of the proof of Theorem \ref{FrobeniusThm}.
The converse is trivial.
\end{proof}
Using Example \ref{Banach Example}, this theorem applies to the non-locally
compact setting with nonsmooth vector fields. \cite{RampSuss}, another paper
which inspires this monograph, obtains similar results with a very different approach.
\section{Examples\label{SectionExs}}
\begin{example}
Let $M$ be a Banach space. First let $X$ and $Y$ be translations in the
directions of $u$ and $v\in M$%
\[
X_{t}\left( x\right) :=x+tu\text{\qquad}Y_{t}\left( x\right) :=x+tv
\]
then $F=X$ and $G=Y$ for $\left| t\right| \leq1$. Obviously $\left[
F,G\right] =0$ and the flows commute.
Next consider the dilations $X$ and $Y$ about $u$ and $v\in M$%
\[
X_{t}\left( x\right) :=\left( 1+t\right) \left( x-u\right)
+u\text{\qquad}Y_{t}\left( x\right) :=\left( 1+t\right) \left(
x-v\right) +v\text{.}%
\]
The flows are computable with little effort using Euler curves, e.g.,%
\[
F_{t}\left( x\right) =\underset{n\rightarrow\infty}{\lim}X_{t/n}^{\left(
n\right) }\left( x\right) =e^{t}x-\left( e^{t}-1\right) u\text{.}%
\]
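As a quick numerical sanity check (an added illustration, with the numbers
chosen arbitrarily), the Euler curves for the dilation about $u$ do converge to
$e^{t}x-\left( e^{t}-1\right) u$:

```python
import math

# Euler-curve computation of the dilation flow about u (here in M = R):
# X_t(x) = (1 + t)(x - u) + u, with claimed flow F_t(x) = e^t x - (e^t - 1) u.
def X(t, x, u):
    return (1.0 + t) * (x - u) + u

def euler_curve(x, t, u, n):
    # X_{t/n} composed with itself n times
    for _ in range(n):
        x = X(t / n, x, u)
    return x

u, x0, t = 2.0, 5.0, 0.7
approx = euler_curve(x0, t, u, 200000)
exact = math.exp(t) * x0 - (math.exp(t) - 1.0) * u
```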
Then for $t\geq0$%
\begin{align*}
& \left[ F,G\right] _{t^{2}}\left( x\right) \\
& =G_{-t}F_{-t}G_{t}F_{t}\left( x\right) \\
& =e^{-t}\left[ e^{-t}\left( e^{t}\left[ e^{t}x-\left( e^{t}-1\right)
u\right] -\left( e^{t}-1\right) v\right) -\left( e^{-t}-1\right)
u\right] -\left( e^{-t}-1\right) v\\
& =x-u+e^{-t}u-e^{-t}v+e^{-2t}v-e^{-2t}u+e^{-t}u-e^{-t}v+v\\
& =x+\left( v-u\right) \left( e^{-t}-1\right) ^{2}%
\end{align*}
so $\left[ F,G\right] \sim Z$ where $Z$ is the translation $Z_{t}\left(
x\right) :=x+t\left( v-u\right) $ since, for instance with $t>0$%
\begin{align*}
& d\left( \left[ F,G\right] _{t}\left( x\right) ,Z_{t}\left( x\right)
\right) \\
& =\left| v-u\right| \left| \left( e^{-\sqrt{t}}-1\right) ^{2}-t\right|
=\left| t\right| \left| v-u\right| \left| \left( \frac{e^{-\sqrt{t}}%
-1}{\sqrt{t}}\right) ^{2}-1\right| =o\left( t\right) \text{.}%
\end{align*}
Hence the distribution $\Delta\left( X,Y\right) $ is not involutive.
However, this shows that three dilations about linearly independent points
$u,v,w$ generate translations using their brackets. Using the same tricks we've
just employed, it is easy to check the bracket of a dilation and a translation
is tangent to a translation, e.g., if $F_{t}\left( x\right) :=x+tu$ and
$G_{t}\left( x\right) :=e^{t}x$ $($dilation about $0)$ then $\left[
F,G\right] \sim F$ since for $t>0$%
\[
\left[ F,G\right] _{t^{2}}\left( x\right) =G_{-t}F_{-t}G_{t}F_{t}\left(
x\right) =e^{-t}\left[ e^{t}\left[ x+tu\right] -tu\right] =x+tu\left(
1-e^{-t}\right)
\]
and so
\[
d\left( \left[ F,G\right] _{t}\left( x\right) ,F_{t}\left( x\right)
\right) =\left| tu\right| \left| \tfrac{1-e^{-\sqrt{t}}}{\sqrt{t}%
}-1\right| =o\left( t\right) \text{.}%
\]
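Both closed forms just derived can be checked numerically. The sketch below
(an added illustration with arbitrary numeric choices) verifies $\left[
F,G\right] _{t^{2}}\left( x\right) =x+tu\left( 1-e^{-t}\right) $ for the
translation/dilation pair, and that the tangency defect $d\left( \left[
F,G\right] _{s}\left( x\right) ,F_{s}\left( x\right) \right) /s$ tends to
$0$:

```python
import math

# F_t(x) = x + t*u (translation), G_t(x) = e^t x (dilation about 0).
# Closed form: [F,G]_{t^2}(x) = G_{-t} F_{-t} G_t F_t(x) = x + t*u*(1 - e^{-t}).
def F(t, x, u):
    return x + t * u

def G(t, x):
    return math.exp(t) * x

def bracket_sq(t, x, u):
    # the composition G_{-t} F_{-t} G_t F_t, i.e. [F,G] at parameter t^2
    return G(-t, F(-t, G(t, F(t, x, u)), u))

u, x0, t = 1.5, 3.0, 0.01
closed_form = x0 + t * u * (1.0 - math.exp(-t))
computed = bracket_sq(t, x0, u)

# tangency defect d([F,G]_s(x), F_s(x)) / s, which should tend to 0 (o(s))
def defect(s, x, u):
    return abs(bracket_sq(math.sqrt(s), x, u) - F(s, x, u)) / s

ratios = [defect(10.0 ** (-k), x0, u) for k in (2, 4, 6)]
```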
To summarize:%
\begin{equation}%
\begin{array}
[c]{ll}%
\Delta\left( translations\right) & involutive\\
\Delta\left( dilations\right) & \text{not involutive}\\
\Delta\left( dilations,translations\right) & involutive\text{.}%
\end{array}
\label{ExSummary}%
\end{equation}
\end{example}
The previous example holds with minor modifications on the metric space
$\left( H\left( \mathbb{R}^{n}\right) ,d_{H}\right) $ where $H\left(
\mathbb{R}^{n}\right) $ is the set of non-void compact subsets of
$\mathbb{R}^{n}$ and $d_{H}$ is the Hausdorff metric. Theorem
\ref{ThmInvol=>Foliat} gives foliations.
\begin{example}
[two parameter decomposition of $L^{2}$]\label{ExL2decomp}Now let $M$ be real
Hilbert space $L^{2}\left( \mathbb{R}\right) $. Since $M$ is Banach the
results of the previous example hold. Denote translation by the function $h\in
L^{2}\left( \mathbb{R}\right) $ by%
\[
X_{t}\left( f\right) :=f+th\text{.}%
\]
Now however, there is another obvious candidate for an elementary flow:
translation with respect to the variable $x$, i.e.,%
\[
Y_{t}\left( f\right) \left( x\right) :=f\left( x+t\right) \text{.}%
\]
Unlike dilation and translation, the dynamic engendered by $Y$ seemingly has
nothing to do with the vector space structure of $L^{2}\left( \mathbb{R}%
\right) $. In fact, despite appearances, $Y$ is a nonsmooth flow: notice for
example with the characteristic function $\chi$ as initial condition,%
\[
\left. \frac{d}{dt}Y_{t}\left( \chi_{\left[ 0,1\right] }\right) \right|
_{t=0}\notin L^{2}\left( \mathbb{R}\right) \text{.}%
\]
Interpreted as a flow on a metric space, however, this is no obstacle. We
refer to $X$ as \textbf{vector space translation} and $Y$ as \textbf{function
translation}. Notice $X$ and $Y$ are their own flows $($for $\left| t\right|
\leq1)$. It is straightforward to check $X$ \& $Y$ close when, for example,
$h\in C^{1}\left( \mathbb{R}\right) $ with derivative $h^{\prime}\in
L^{2}\left( \mathbb{R}\right) :$%
\begin{align*}
& d\left( Y_{s}X_{t}\left( f\right) ,X_{t}Y_{s}\left( f\right) \right)
\\
& =\sqrt{\int\left( f\left( x+s\right) +th\left( x+s\right) -\left[
f\left( x+s\right) +th\left( x\right) \right] \right) ^{2}dx}\\
& =\left| st\right| \sqrt{\int\left( \frac{h\left( x+s\right) -h\left(
x\right) }{s}\right) ^{2}dx}\\
& =O\left( \left| st\right| \right)
\end{align*}
uniformly. Since they obviously satisfy E1 and E2, Theorem \ref{ThmaX+bYflow}
promises a unique flow for their sum. This was introduced by Colombo and Corli
in \cite[section 5.2]{Colombo} with other interesting function space examples,
which they also characterize with partial differential equations.
Let us now compute the bracket. We check $t>0$ explicitly, skipping the case
$t\leq0$ though this is just as easy.%
\begin{align*}
& \left[ X,Y\right] _{t^{2}}\left( f\right) \left( x\right) \\
& =Y_{-t}X_{-t}Y_{t}X_{t}\left( f\right) \left( x\right) =Y_{-t}%
X_{-t}\left[ f\left( x+t\right) +th\left( x+t\right) \right] \\
& =f\left( x\right) +th\left( x\right) -th\left( x-t\right) =f\left(
x\right) +t^{2}\left[ \frac{h\left( x\right) -h\left( x-t\right) }%
{t}\right] \text{.}%
\end{align*}
Defining a new arc field $Z_{t}\left( f\right) :=f+th^{\prime}$ we therefore
have%
\[
d\left( \left[ X,Y\right] _{t}\left( f\right) ,Z_{t}\left( f\right)
\right) =\left| t\right| \sqrt{\underset{\mathbb{R}}{\int}\left(
\frac{h\left( x\right) -h\left( x-\sqrt{t}\right) }{\sqrt{t}}-h^{\prime
}\left( x\right) \right) ^{2}dx}=o\left( t\right)
\]
when $h\in C^{1}\left( \mathbb{R}\right) $ with $h^{\prime}\in L^{2}\left(
\mathbb{R}\right) $. Thus $\left[ X,Y\right] \sim Z$.
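Since $\left[ X,Y\right] _{t^{2}}\left( f\right) \left( x\right) =f\left(
x\right) +th\left( x\right) -th\left( x-t\right) $ is an explicit
composition, the computation is easy to check pointwise. The sketch below is an
added illustration, taking $h$ the Gaussian and an arbitrary smooth $f$, and
confirms the $f+t^{2}h^{\prime}$ behaviour on a sample grid:

```python
import math

# X_t(f) = f + t*h (vector space translation) and Y_t(f)(x) = f(x + t)
# (function translation), acting on functions represented as callables.
h = lambda x: math.exp(-x * x)              # the perturbation function
hp = lambda x: -2.0 * x * math.exp(-x * x)  # h'
f = lambda x: math.sin(x)                   # an arbitrary smooth initial f

def X(t, g):
    return lambda x: g(x) + t * h(x)

def Y(t, g):
    return lambda x: g(x + t)

def bracket_sq(t, g):
    # [X,Y]_{t^2} = Y_{-t} X_{-t} Y_t X_t, which evaluates pointwise to
    # f(x) + t*h(x) - t*h(x - t) = f(x) + t^2 h'(x) + O(t^3)
    return Y(-t, X(-t, Y(t, X(t, g))))

t = 1e-3
grid = [k * 0.1 for k in range(-30, 31)]
err = max(abs(bracket_sq(t, f)(x) - (f(x) + t * t * hp(x))) for x in grid)
```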
This has remarkable consequences. Using the idea of Chow's Theorem from
control theory $\left( \text{also called the Chow-Rashevsky Theorem or
Hermes' Theorem}\right) $, if the $\left( n+1\right) $-st derivative
$h^{\left[ n+1\right] }$ is not contained in $span\left\{ h^{\left[
i\right] }:0\leq i\leq n\right\} $ then iterating the process of bracketing
$X$ and $Y$ generates a large space reachable via repeated compositions of $X$
and $Y$. Denoting%
\begin{equation}
\overset{n}{Z}_{t}\left( f\right) :=f+th^{\left[ n\right] }
\label{ExL2dec10}%
\end{equation}
successive brackets of $X$ and $Y$ are%
\begin{gather}
\left[ X,Y\right] \sim Z=:\overset{1}{Z}\nonumber\\
\left[ X\overset{2}{,}Y\right] :=\left[ \left[ X,Y\right] ,Y\right]
\sim\overset{2}{Z}\nonumber\\
\left[ X\overset{n}{,}Y\right] :=\underset{n\text{ times}}{\underbrace
{\left[ \left[ ...\left[ \left[ X,Y\right] ,Y\right] ,...,Y\right]
,Y\right] }}\sim\overset{n}{Z}\text{.} \label{ExL2dec5}%
\end{gather}
For notational purposes we set $\left[ X\overset{0}{,}Y\right] :=X$. In the
particular case $h\left( x\right) :=e^{-x^{2}}$ a dense subset of $L^{2}\left(
\mathbb{R}\right) $ is reachable by $X$ and $Y$.
To see this we apply the theory of orthogonal functions with the
Hermite\footnote{We may of course use other orthogonal families with a
different choice of $h$, particularly when the domain of interest is other
than $\mathbb{R}$; e.g., scaled Chebyshev polynomials for $\left[
0,2\pi\right) $, etc. We expect many choices of $h$ give controllable systems
whether the brackets generate orthogonal sets or not.} polynomials%
\[
H_{n}\left( x\right) :=\left( -1\right) ^{n}e^{x^{2}}\frac{d^{n}}{dx^{n}%
}e^{-x^{2}}=\left( -1\right) ^{n}e^{x^{2}}h^{\left[ n\right] }\left(
x\right)
\]
which have dense span in $L^{2}\left( \mathbb{R}\right) $ when multiplied by
$e^{-x^{2}/2}$. Those familiar with orthogonal expansions can predict the
rest; we review some of the details.%
\[
\left\{ \dfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}H_{n}\left( x\right)
e^{-x^{2}/2}:n\in\mathbb{N}\right\}
\]
is a basis of $L^{2}\left( \mathbb{R}\right) $ and is orthonormal since%
\begin{equation}
\underset{\mathbb{R}}{%
{\textstyle\int}
}H_{m}\left( x\right) H_{n}\left( x\right) e^{-x^{2}}dx=n!2^{n}\sqrt{\pi
}\delta_{mn}\text{.} \label{ExHermite2}%
\end{equation}
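The normalization $\left( \ref{ExHermite2}\right) $ can be verified
numerically with Gauss--Hermite quadrature, which integrates $p\left(
x\right) e^{-x^{2}}$ exactly for polynomials $p$ of degree at most
$2\cdot\mathrm{deg}-1$ (an added illustration):

```python
import math
import numpy as np

# Check int H_m H_n e^{-x^2} dx = n! 2^n sqrt(pi) delta_{mn} with Gauss-Hermite
# quadrature (exact for polynomial integrands of degree <= 2*deg - 1).
deg = 20
nodes, weights = np.polynomial.hermite.hermgauss(deg)

def H(n, x):
    # physicists' Hermite polynomial H_n evaluated at x
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite.hermval(x, c)

def inner(m, n):
    # quadrature approximation of int H_m(x) H_n(x) e^{-x^2} dx
    return float(np.sum(weights * H(m, nodes) * H(n, nodes)))

diag_ok = all(
    abs(inner(n, n) - math.factorial(n) * 2 ** n * math.sqrt(math.pi))
    < 1e-8 * inner(n, n)
    for n in range(6)
)
offdiag = max(abs(inner(m, n)) for m in range(6) for n in range(6) if m != n)
```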
The Hermite polynomials also satisfy some useful relations%
\begin{equation}
H_{n+1}\left( x\right) =2xH_{n}\left( x\right) -2nH_{n-1}\left( x\right)
\qquad\text{and}\qquad H_{n}^{\prime}\left( x\right) =2nH_{n-1}\left(
x\right) \text{.} \label{ExHermite6}%
\end{equation}
Given any $g\in L^{2}\left( \mathbb{R}\right) $ it is possible to write%
\begin{equation}
g\left( x\right) =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}H_{n}\left( x\right) e^{-x^{2}/2}
\label{ExHermite10}%
\end{equation}
$($equality in the $L^{2}$ sense$)$ where%
\[
a_{n}:=\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}\underset{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) H_{n}\left( x\right) e^{-x^{2}/2}dx\in\mathbb{R}\text{.}%
\]
The necessity of this formula for $a_{n}$ can easily be checked by multiplying
both sides of $\left( \ref{ExHermite10}\right) $ by $H_{n}\left( x\right)
e^{-x^{2}/2}$, integrating and applying $\left( \ref{ExHermite2}\right) $.
However, we want%
\[
g=\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}h^{\left[ n\right] }%
\]
so apply the above process to $g\left( x\right) e^{x^{2}/2}$
instead\footnote{The function $g\left( x\right) e^{x^{2}/2}$ is no longer
necessarily $L^{2}$, of course, but here we lapse into the habit of ignoring
convergence issues; they matter for the theoretical proof that a dense subset
of $L^{2}\left( \mathbb{R}\right) $ is reachable with $X$ and $Y$, but they
are not central to this demonstration. This theoretical lapse is easily
remedied by multiplying by the characteristic function $\chi_{\left[
-m,m\right] }$ to guarantee all of the following integrals converge, then
letting $m\rightarrow\infty$ at the end.}. Then%
\begin{align*}
g\left( x\right) e^{x^{2}/2} & =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}H_{n}\left( x\right) e^{-x^{2}%
/2}\text{\qquad so}\\
g & =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}h^{\left[ n\right] }%
\end{align*}
where%
\begin{align*}
b_{n} & :=\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}\underset{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) e^{x^{2}/2}H_{n}\left( x\right) e^{-x^{2}/2}%
dx\text{\qquad so that}\\
c_{n} & :=\tfrac{\left( -1\right) ^{n}}{n!2^{n}\sqrt{\pi}}\underset
{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) H_{n}\left( x\right) dx=\tfrac{1}{n!2^{n}\sqrt{\pi}%
}\underset{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) h^{\left[ n\right] }\left( x\right) e^{x^{2}}dx
\end{align*}
using $H_{n}=\left( -1\right) ^{n}e^{x^{2}}h^{\left[ n\right] }$ and the
orthogonality relation $\left( \ref{ExHermite2}\right) $.
Therefore when $N$ is large, $g$ is approximated by
\[
\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}h^{\left[ n\right] }=F_{1}\left( 0\right)
\]
where $F$ is the flow of the arc field%
\[
\widetilde{X}:=\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\left[ X\overset{n}{,}Y\right]
\]
which we follow for unit time starting with initial condition $0\in
L^{2}\left( \mathbb{R}\right) $. $F$ can, of course, be approximated by
Euler curves%
\[
F_{1}\left( 0\right) =\underset{n\rightarrow\infty}{\lim}\widetilde{X}%
_{1/n}^{\left( n\right) }\left( 0\right)
\]
and since $\widetilde{X}$ is merely a $($complicated$)$ composition of $X$ and
$Y$, this gives us a simple algorithm for approximating any function $g$ with
only two simple flows.
Let us compute a basic example to illustrate this surprising fact. Choosing at
random $g\left( x\right) :=\chi_{\left[ 0,1\right] }\left( x\right) $,
the characteristic function of the unit interval, we have%
\begin{align*}
c_{n} & :=\tfrac{\left( -1\right) ^{n}}{n!2^{n}\sqrt{\pi}}\underset
{0}{\overset{1}{%
{\textstyle\int}
}}H_{n}\left( x\right) dx=\tfrac{\left( -1\right) ^{n}}{2\left(
n+1\right) n!2^{n}\sqrt{\pi}}\left[ H_{n+1}\left( 1\right) -H_{n+1}%
\left( 0\right) \right] \qquad\text{so, e.g.,}\\
c_{0} & =\tfrac{1}{\sqrt{\pi}},\text{ }c_{1}=\tfrac{-1}{2\sqrt{\pi}},\text{
}c_{2}=\tfrac{-1}{12\sqrt{\pi}},\text{ }c_{3}=\tfrac{1}{12\sqrt{\pi}},\text{
}c_{4}=\tfrac{-1}{480\sqrt{\pi}},\text{ etc.}%
\end{align*}
by $\left( \ref{ExHermite6}\right) $. Then stopping for the purposes of
illustration at $N=3$ our function $g$ is approximated by%
\[
\overset{3}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}h^{\left[ n\right] }\text{.}%
\]
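As a cross-check (an added illustration), the coefficients can be computed
exactly by integrating the Hermite polynomials over $\left[ 0,1\right] $,
using the orthogonality constant $n!2^{n}\sqrt{\pi}$ stated in $\left(
\ref{ExHermite2}\right) $; both the $\left( \ref{ExHermite6}\right) $-based
shortcut and the resulting first values can then be verified:

```python
import math
import numpy as np

# Exact computation of c_n = (-1)^n / (n! 2^n sqrt(pi)) * int_0^1 H_n(x) dx
# via polynomial antiderivatives, plus a check of the shortcut
# int_0^1 H_n dx = [H_{n+1}(1) - H_{n+1}(0)] / (2(n+1)), from H'_{n+1} = 2(n+1)H_n.
def H_poly(n):
    # power-basis coefficients of the physicists' Hermite polynomial H_n
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite.herm2poly(c)

def H_at(n, x):
    return float(np.polynomial.polynomial.polyval(x, H_poly(n)))

def integral_01(n):
    antider = np.polynomial.polynomial.polyint(H_poly(n))
    return float(np.polynomial.polynomial.polyval(1.0, antider)
                 - np.polynomial.polynomial.polyval(0.0, antider))

recurrence_ok = all(
    abs(integral_01(n) - (H_at(n + 1, 1.0) - H_at(n + 1, 0.0)) / (2 * (n + 1)))
    < 1e-9
    for n in range(6)
)

c = [(-1) ** n / (math.factorial(n) * 2 ** n * math.sqrt(math.pi))
     * integral_01(n) for n in range(5)]
```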
Notice the flow of $\overset{i}{Z}$ from $\left( \text{\ref{ExL2dec10}%
}\right) $ is locally the same as $\overset{i}{Z}$ since it is just vector
space translation, so we will use the same symbol. All vector space
translations commute under $($arc field$)$ addition, and the arc field%
\[
\widetilde{Z}_{t}\left( f\right) :=\left( c_{0}\overset{0}{Z}+c_{1}%
\overset{1}{Z}+c_{2}\overset{2}{Z}+c_{3}\overset{3}{Z}\right) _{t}\left(
f\right)
\]
is locally equal to its flow. Obviously%
\[
\widetilde{Z}_{1}\left( 0\right) =\overset{3}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}h^{\left[ n\right] }%
\]
and $\widetilde{Z}\sim\widetilde{X}$ where%
\begin{align*}
\widetilde{X}_{t}\left( f\right) & :=\left( c_{0}X+c_{1}\left[
X,Y\right] +c_{2}\left[ \left[ X,Y\right] ,Y\right] +c_{3}\left[ \left[
\left[ X,Y\right] ,Y\right] ,Y\right] \right) _{t}\left( f\right) \\
& =\left( \overset{3}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\left[ X\overset{n}{,}Y\right] \right) _{t}\left( f\right)
\text{.}%
\end{align*}
Remember the arc field bracket and the arc field sum are defined as nothing
more than compositions of arc fields, e.g.,%
\begin{align*}
& \left( c_{0}X+c_{1}\left[ X,Y\right] +c_{2}\left[ \left[ X,Y\right]
,Y\right] \right) _{t}\\
& =\left[ \left[ X,Y\right] ,Y\right] _{c_{2}t}\left[ X,Y\right]
_{c_{1}t}X_{c_{0}t}%
\end{align*}
and, e.g., when $t>0$%
\begin{align*}
& c_{2}\left[ \left[ X,Y\right] ,Y\right] _{t}\\
& =Y_{-\sqrt{c_{2}t}}\left( X_{-\sqrt{c_{2}t}}Y_{-\sqrt{c_{2}t}}%
X_{\sqrt{c_{2}t}}Y_{\sqrt{c_{2}t}}\right) Y_{\sqrt{c_{2}t}}(Y_{-\sqrt{c_{2}%
t}}X_{-\sqrt{c_{2}t}}Y_{\sqrt{c_{2}t}}X_{\sqrt{c_{2}t}})\text{.}%
\end{align*}
Therefore this approximation of $g$ is achieved by computing the Euler curves
for $\widetilde{X}$ which is a complicated process $($with a simple formula$)$
of composing the elementary operations of function translation $(Y)$ and
vector space translation by the Gaussian $(X)$.
Continuing the example, for choices of $h$ other than the Gaussian it may be
the case that $h^{\left[ n+1\right] }\in span\left\{ h^{\left[ i\right]
}:0\leq i\leq n\right\} $. Then the space reachable by $X$ and $Y$ is
precisely limited. E.g., when $h$ is a trigonometric function from the
orthogonal Fourier decomposition of $L^{2}$ the parameter space is
two-dimensional, or when $h$ is an $n$-th order polynomial in the context of
$M=L^{2}\left[ a,b\right] $ then the parameter space is $\left( n+1\right)
$-dimensional.
Restating these results in different terminology: controlling amplitude and
phase, the 2-parameter system is holonomically constrained. Controlling phase
and superposition perturbation $(Y$ and $X)$ generates a larger space of
signals; how much $Y$ and $X$ deviate from holonomy depends on the choice of
perturbation function $h$. Consequently, a result for signal analysis is:
controlling two parameters is enough to generate any signal.
\end{example}
We collect some of the results of the previous example. Denote the
\textbf{reachable set} of $X$ and $Y$ by%
\[
R\left( X,Y\right) :=\left\{ Y_{s_{n}}X_{t_{n}}Y_{s_{n-1}}X_{t_{n-1}%
}...Y_{s_{1}}X_{t_{1}}\left( 0\right) \in L^{2}\left( \mathbb{R}\right)
:s_{i},t_{i}\in\mathbb{R},\text{ }n\in\mathbb{N}\right\}
\]
where $0\in L^{2}\left( \mathbb{R}\right) $ is the constant function.
$R\left( X,Y\right) $ is the set of all finite compositions of $X$ and $Y$.
\begin{theorem}
Let $h\in L^{2}\left( \mathbb{R}\right) $ be the Gaussian $h\left(
x\right) :=e^{-x^{2}}$ and define
\[
X_{t}\left( f\right) :=f+th\qquad\text{and}\qquad Y_{t}\left( f\right)
\left( x\right) :=f\left( x+t\right) \text{.}%
\]
Then $R\left( X,Y\right) $ is dense in $L^{2}\left( \mathbb{R}\right) $.
\end{theorem}
\begin{algorithm}
Let $g\in L^{2}\left( \mathbb{R}\right) $ be such that $\underset
{\mathbb{R}}{\int}\left[ g\left( x\right) e^{x^{2}/2}\right] ^{2}%
dx<\infty$. Then
\[
g=\underset{n\rightarrow\infty}{\lim}\widetilde{X}_{1/n}^{\left( n\right)
}\left( 0\right)
\]
where%
\begin{gather*}
\widetilde{X}:=\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\left[ X\overset{n}{,}Y\right] \text{\qquad with}\qquad c_{n}%
:=\tfrac{1}{n!2^{n}\sqrt{\pi}}\underset{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) h^{\left[ n\right] }\left( x\right) e^{x^{2}}dx\\
\text{and}\qquad\left[ X\overset{n}{,}Y\right] :=\underset{n\text{ times}%
}{\underbrace{\left[ \left[ ...\left[ \left[ X,Y\right] ,Y\right]
,...,Y\right] ,Y\right] }}%
\end{gather*}
and%
\[
\left[ X,Y\right] \left( f,t\right) :=\left\{
\begin{array}
[c]{c}%
Y_{-\sqrt{t}}X_{-\sqrt{t}}Y_{\sqrt{t}}X_{\sqrt{t}}\left( f\right) \\
X_{-\sqrt{\left| t\right| }}Y_{-\sqrt{\left| t\right| }}X_{\sqrt{\left|
t\right| }}Y_{\sqrt{\left| t\right| }}\left( f\right)
\end{array}
\right.
\begin{array}
[c]{c}%
\text{for }t\geq0\\
\text{for }t<0
\end{array}
\]
for any $f\in L^{2}\left( \mathbb{R}\right) $.
\end{algorithm}
\begin{example}
Let us continue Example \ref{ExL2decomp} with $M=L^{2}\left( \mathbb{R}%
\right) $ and%
\[
X_{t}\left( f\right) :=f+th\qquad\text{and}\qquad Y_{t}\left( f\right)
\left( x\right) :=f\left( x+t\right)
\]
which are vector space translation and function translation. Define the arc
fields%
\[
V_{t}\left( f\right) :=e^{t}f\qquad\text{and}\qquad W_{t}\left( f\right)
\left( x\right) :=f\left( e^{t}x\right)
\]
which may be thought of as \textbf{vector space dilation} $($about the point
$0\in M)$ and \textbf{function dilation} $($about the point $0\in\mathbb{R})
$. Again, $V$ and $W$ are coincident with their own flows. Using the same
approach as in Example \ref{ExL2decomp} it is easy to check the brackets
satisfy%
\[%
\begin{array}
[c]{ll}%
\left[ X,Y\right] _{t}\left( f\right) =f+th^{\prime}+o\left( t\right) &
\qquad\left[ X,V\right] _{t}\left( f\right) =f+th+o\left( t\right) \\
\left[ X,W\right] _{t}\left( f\right) \left( x\right) =f\left(
x\right) +txh^{\prime}\left( x\right) +o\left( t\right) & \qquad\left[
Y,V\right] =0\\
\left[ Y,W\right] _{t}\left( f\right) \left( x\right) =f\left(
x-t\right) +o\left( t\right) & \qquad\left[ V,W\right] =0
\end{array}
\]
assuming for the $\left[ X,Y\right] $ and $\left[ X,W\right] $
calculations that $h\in C^{1}\left( \mathbb{R}\right) $ and $h^{\prime}\in
L^{2}\left( \mathbb{R}\right) $. Consequently%
\[%
\begin{array}
[c]{ll}%
\Delta\left( X,Y\right) & \text{may be highly non-involutive depending on
}h\text{,}\\
\Delta\left( X,V\right) & \text{is involutive, but }X\text{ and }V\text{ do
\textbf{not} commute,}\\
\Delta\left( X,W\right) & \text{may be highly non-involutive depending on
}h\text{,}\\
\Delta\left( Y,V\right) & \text{is involutive; }Y\text{ and }V\text{
commute,}\\
\Delta\left( Y,W\right) & \text{is involutive, but }Y\text{ and }W\text{ do
\textbf{not} commute,}\\
\Delta\left( V,W\right) & \text{is involutive; }V\text{ and }W\text{
commute.}%
\end{array}
\]
When $h$ is chosen correctly, $X$ and $W$ control many function spaces,
similarly to $X$ and $Y$. The four involutive distributions foliate
$L^{2}\left( \mathbb{R}\right) $.
\end{example}
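The closed forms behind two of these brackets can be checked by direct
composition of the four maps. The sketch below (an added illustration, with an
arbitrary smooth $f$ and the Gaussian $h$) verifies $\left( W_{-t}%
Y_{-t}W_{t}Y_{t}f\right) \left( x\right) =f\left( x-t\left( e^{t}-1\right)
\right) $ and $\left( V_{-t}X_{-t}V_{t}X_{t}f\right) \left( x\right)
=f\left( x\right) +t\left( 1-e^{-t}\right) h\left( x\right) $:

```python
import math

# X_t(f) = f + t*h, Y_t(f)(x) = f(x+t), V_t(f) = e^t f, W_t(f)(x) = f(e^t x),
# acting on callables.  Direct composition gives
#   (W_{-t} Y_{-t} W_t Y_t f)(x) = f(x - t(e^t - 1))        -> f(x - t) at first order
#   (V_{-t} X_{-t} V_t X_t f)(x) = f(x) + t(1 - e^{-t})h(x) -> f(x) + t h(x)
h = lambda x: math.exp(-x * x)
f = lambda x: math.sin(x)

X = lambda t, g: (lambda x: g(x) + t * h(x))
Y = lambda t, g: (lambda x: g(x + t))
V = lambda t, g: (lambda x: math.exp(t) * g(x))
W = lambda t, g: (lambda x: g(math.exp(t) * x))

t, x0 = 1e-3, 0.7
yw = W(-t, Y(-t, W(t, Y(t, f))))(x0)          # [Y,W] at parameter t^2
yw_closed = f(x0 - t * (math.exp(t) - 1.0))
xv = V(-t, X(-t, V(t, X(t, f))))(x0)          # [X,V] at parameter t^2
xv_closed = f(x0) + t * (1.0 - math.exp(-t)) * h(x0)
```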
\section{Introduction}
From contact tracing devices that help contain epidemic spread to vehicular networks and smart cities, \emph{cyber-physical systems} (CPS) are pervasive information and communication technologies that augment human perception of, and control over, the physical world. CPS consist of engineering, physical and biological systems that interact tightly with computational entities through sensors and actuators. CPS are networked at every scale and are connected to the Internet (Internet of Things), enabling humans and other software systems to interoperate with them through the World Wide Web.
A prominent example can be found in modern automotive systems, where the extensive integration of sensor networks and embedded computational units has led to the development of various driving assistance features that support the driver during repetitive maneuvers and protect the passengers from hazardous situations. Furthermore, the upcoming 5G networks will soon also enable vehicles to exchange information with each other and with the road infrastructure about the position and speed of vehicles, driving conditions on a particular road, accidents, or traffic jams.
Thus, this dynamic network infrastructure promises to further enhance autonomous driving applications, to reduce traffic congestion and to improve safety.
The rise of this disruptive and revolutionary technology comes at a price: these systems are becoming so ubiquitous that unexpected failures can manifest, potentially causing fatal accidents and diminishing public trust.
Their safety-critical nature~\cite{RatasichKGGSB19} requires engineers to verify their correct execution with respect to rigorously defined spatial and temporal requirements.
Detecting anomalies and failures at design time is extremely challenging. Exhaustive
verification techniques such as model checking
are limited to very small instances due to state-space explosion. A more practical approach is to simulate a digital replica of the CPS (its digital twin)
and to test its behaviour under different scenarios.
Requirements are generally
expressed in a formal specification language that can be
monitored online (during the simulation) or offline over the simulation traces.
This approach, also called
\emph{specification-based monitoring}~\cite{bartocci2018specification}, is nowadays the core functionality of several other computer-aided verification and design techniques for CPS such as statistical model checking~\cite{YounesS06,BortolussiMS2016ic,DavidLLMW11},
falsification analysis~\cite{SilvettiPB17,YaghoubiF2017acc,sankaranarayanan_model-based_2017,AbbasFSIG13tecs}, failure explanation/debugging~\cite{BartocciFMN18,BartocciMMMN19,BartocciMMMNP20} and parameter synthesis~\cite{DonzeKR09hscc,DonzeCLL09,Bortolussi2015,TCS2015}.
The majority of specification languages and tools available for CPS support only the monitoring of temporal properties. Examples are Metric Temporal Logic (MTL)~\cite{Koymans90},
Signal Temporal Logic (STL)~\cite{mn13,maler2016runtime}, Timed Regular Expressions (TRE)~\cite{AsarinCM02} and Shape Expressions~\cite{BartocciDGMNQ20}.
However, spatio-temporal patterns play a key role in understanding how emergent behaviors can arise
from local interactions in such complex systems of systems.
An important problem is thus how to specify spatio-temporal requirements in a formal language~\cite{NenziBBLV20}, and how to efficiently detect/monitor them on the actual CPS or on the simulation of its digital twin.
In this paper, we present the Spatio-Temporal Reach and Escape Logic (STREL), a spatio-temporal specification language originally introduced in~\cite{Bartocci17memocode} and recently supported by the \textsc{MoonLight}~\cite{BartocciBLNS20} tool. STREL enables the specification of spatio-temporal requirements and their monitoring over the execution of mobile and spatially distributed components.
In this framework, space is represented as a weighted graph, describing the topological configuration in which the
components (nodes of the graph) are arranged. Both nodes and edges have attributes modelling physical and logical quantities that can change in time.
STREL extends Signal Temporal Logic~\cite{Maler2004} with two spatial operators, \emph{reach} and \emph{escape}, from which it is possible to derive other spatial modalities such as \emph{everywhere}, \emph{somewhere} and \emph{surround}. These operators enable a monitoring procedure where the satisfaction of the property at each location depends only on the satisfaction of its neighbours. Furthermore, we show how STREL can be interpreted according to different semantics (Boolean, real-valued) based on constraint semirings, an algebraic structure suitable for constraint satisfaction and optimisation.
The use of STREL is by no means restricted to CPS as application domain, but it is capable of capturing many interesting notions in other spatio-temporal systems, including biological systems (e.g. Turing patterns~\cite{BartocciBMNS15, bartocci2014,NenziBCLM18}), epidemic spreading scenarios (in real space or on networks)~\cite{network_epidemic2015}, or ecological systems. In these cases, monitoring algorithms typically act as key subroutines in statistical model checking
~\cite{BartocciBMNS15}.
This paper extends our preliminary work~\cite{Bartocci17memocode} as follows:
\begin{itemize}
\item we guide the reader throughout the paper with a running example that facilitates the comprehension of our framework at each step;
\item we simplify the definition of dynamical spatial model and of the spatio-temporal semantics;
\item we extend spatial STREL operators to support interval constraints on distances;
\item we propose new monitoring algorithms that are more efficient and able to work with interval constraints. We also provide correctness proofs and discuss their algorithmic complexity in detail;
\item we consider a second case study where we monitor
spatio-temporal requirements in STREL on a simulated epidemic spreading model for COVID-19;
\end{itemize}
The rest of the paper is organized as follows. We discuss
the related work in Section~\ref{sec:related}. In Section~\ref{sec:runningExample} we introduce a running example, while in Section~\ref{sec:definitions} we present the considered model of space and the type of signals. The STREL specification language is presented in Section~\ref{sec:ReachSTL} and the offline monitoring algorithms are discussed in Section~\ref{sec:alg}. In Sections~\ref{sec:ZigBee} and \ref{sec:epidemic} we discuss the two case studies: the ZigBee protocol and the COVID-19 epidemic spreading. Finally, Section~\ref{sec:conclusion} presents the conclusions.
\section{Related Work}
\label{sec:related}
Monitoring spatial-temporal properties over CPS executions was initially addressed in~\cite{Talcott08} where Talcott introduced the notion of spatial-temporal event-based model.
In this model, actions (e.g., the exchange of messages, or physical changes) are labeled with time and space stamps and trigger events that are further processed by a monitor.
In~\cite{TVG09}
the model was further extended to enable different space representations. Although the approaches in~\cite{Talcott08,TVG09}
provide a valuable algorithmic framework,
they lack a specification language and the monitors cannot be automatically generated.
In the context of \emph{collective adaptive systems}~\cite{CianciaLLM16},
several mathematical structures such as \emph{topological spaces}, \emph{closure spaces}, \emph{quasi-discrete closure spaces} and \emph{finite graphs}~\cite{NenziBCLM15}
have been employed to reason about spatial relations (e.g., \emph{closeness} and \emph{neighborhood}) of interacting agents. Another line of research~\cite{GrosuSCWEB09,spatel,bartocci2014} proposed the use of \emph{quad-tree} spatial structures~\cite{FinkelB74} to reason about fractal-like spatial patterns or spatial
superposition properties in a grid, such as electrical spiral formation in cardiac tissues~\cite{GrosuSCWEB09} or power management requirements in a smart grid~\cite{spatel}. However, quad-trees are spatial data structures that are not invariant with respect to isometric transformations such as translation, rotation and reflection:
two spatial patterns that are equivalent modulo an isometric transformation
are usually represented by two different quad-trees. To overcome this limitation, in our
approach we have considered weighted graphs
where the edges represent the spatial relations between the involved entities.
Spatial logics have been the subject of theoretical investigation for at least two decades~\cite{handbookSP}.
The work in~\cite{handbookSP} focuses on theoretical aspects, expressivity and decidability, often in continuous space. Less attention has been devoted to more practical aspects, especially to verification procedures.
For example, model checking techniques for spatial models have been considered more recently.
In~\cite{ciancia2014}, the authors introduce a {\it Spatial Logic for Closure Spaces} (SLCS) that leverages a discrete and topological notion of space, based on closure spaces~\cite{Gal99}. An extension of the SLCS with temporal aspects, as ``snapshot'' models, was proposed later in~\cite{CianciaGGLLM15}. This extends SLCS with the temporal modality of the branching logic {\it Computation Tree Logic}~\cite{EmersonH83}. However, the algorithms to check snapshot models are computationally very expensive and are susceptible to state-space explosion problems because the spatial formulae need to be recomputed at every state.
It is also worth mentioning \textsc{VoxLogicA}~\cite{BelmonteCLM19,BuonamiciBCLM20}, a recently developed spatial model checking tool for image analysis. This tool, however, is not suitable to monitor CPS, because it is specialized for medical imaging and does not take time into consideration.
Relevant works are also those on spatial logics for process algebra with locations such as in~\cite{CC04,Cardelli2000mobile}, or spatial logic for rewrite theories~\cite{rew1}.
Other logic-based formalisms have been introduced
to reason about the topological~\cite{BC02} or directional~\cite{BS10} aspects of locally
interacting components.
In the topological approach~\cite{BC02}, the entities are sets of points in the space and the relation between them is preserved under translation, scaling and rotation.
If the relation between objects depends on their relative position, then the spatial model supports
directional reasoning. These logics are highly computationally complex~\cite{BS10} or even undecidable~\cite{MR99}, and thus impractical to use.
Monitoring spatial-temporal behaviors has recently started to receive more attention
with several works such as {\it Spatial-Temporal Logic} (SpaTeL)~\cite{spatel}, {\it Signal Spatio-Temporal Logic} SSTL~\cite{NenziBCLM15}, {\it Spatial Aggregation Signal Temporal Logic}
(SaSTL)~\cite{MaBLS020,sastl2021} and {\it Spatio-Temporal Reach and Escape Logic} STREL~\cite{Bartocci17memocode}.
SpaTeL is the unification of
{\it Signal Temporal Logic}~\cite{Maler2004} (STL) and {\it Tree Spatial Superposition Logic} (TSSL)
introduced in~\cite{bartocci2014} to classify and detect spatial patterns that are expressed using quad trees~\cite{FinkelB74}. This allows one to capture very complex spatial structures, but at the price of a complex formulation of spatial properties, which are in practice only learned from some template images.
SSTL instead combines STL with two spatial modalities, one expressing that something is true \emph{somewhere} nearby and the other capturing the notion of being \emph{surrounded} by a region that satisfies a given spatio-temporal property. SSTL has
two possible semantics: a Boolean one and a real-valued one.
SSTL~\cite{NenziBCLM18} operates over a static topological space, while STREL, on the contrary, can monitor entities over a dynamic topological space.
Furthermore, STREL generalizes the SSTL spatial modalities with the \emph{reach} and \emph{escape} operators, simplifying the monitoring, which can be computed locally with respect to each node. SaSTL~\cite{MaBLS020,sastl2021} is a
recently proposed specification language that augments STL with two new logical operators expressing spatial aggregation and spatial counting characteristics, which are typical in monitoring spatio-temporal requirements in a smart city. Similarly to SSTL, SaSTL also operates only on a static topological space. Finally, another key characteristic of STREL with respect to all the aforementioned
spatio-temporal specification languages is the possibility to define the semantics using constraint semiring algebraic structures. This provides an elegant, unified monitoring framework for both the qualitative and the quantitative semantics (similarly to~\cite{JaksicBGN18} for the STL case). Moreover, it opens our framework to new semantics for STREL, obtained by simply defining and plugging in a new semiring algebraic structure, without the need to redefine the monitoring algorithm.
\section{Running Example: A Mobile ad hoc sensor network}
\label{sec:runningExample}
A mobile ad hoc sensor network (MANET) can consist of up to ten thousand mobile devices connected wirelessly, usually deployed to monitor environmental changes such as pollution, humidity, light and temperature.
Each sensor node is equipped with a sensing transducer, a data processor, a radio transceiver and an embedded battery. A node can move independently in any direction and can therefore change its links to other devices frequently.
Two nodes can communicate with each other if their Euclidean distance is at most their communication range, as depicted in Fig.~\ref{fig:proxconnect}~(right).
Moreover, the nodes can be of different type and their behaviour and communication can depend on their types. In the next section we consider the simplest MANET with all nodes of the same type, while in Section~\ref{sec:ReachSTL} we will differentiate them to describe more complex behaviours.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{img/p1}
\includegraphics[scale=0.5]{img/c1}
\caption{Proximity graph (left) and Connectivity graph (right)}
\label{fig:proxconnect}
\end{figure}
\section{Spatial Models, Signals and Traces}
\label{sec:definitions}
In this section, we introduce the model of space we consider,
and the type of signals that the logic specifies.
\subsection{Constraint Semirings}
An elegant and general way to represent the result of monitoring is based
on \emph{constraint semiring}. This is an algebraic structure that consists
of a domain and two operations named \emph{choose} and \emph{combine}.
Constraint semirings are a subclass of semirings which have been shown
to be very flexible, expressive and convenient for a wide range of problems,
in particular for optimisation and for solving problems with soft constraints
and multiple criteria~\cite{BMR97}, and in model checking~{\protect{\cite{LM05}}}.
\begin{definition}[semiring]
A \emph{semiring} is a tuple $\langle A, \oplus, \otimes, \bot, \top \rangle$ composed by
a set $A$, two operators $\oplus$, $\otimes$ and two constants $\bot$,
$\top$ such that:
\begin{itemize}
\item $\oplus : A \times A \rightarrow A$ is an associative, commutative operator to ``choose'' among values\footnote{We let
$x\oplus y$ denote $\oplus(\{ x , y\})$.},
with $\bot$ as unit element ($\bot \oplus a = a, \forall a \in A$).
\item $\otimes : A \times A \rightarrow A$ is an associative operator to ``combine'' values with $\top$ as unit element ( $\top \otimes a = a, \forall a \in A$) and $\bot$ as absorbing element ($\bot \otimes a = \bot, \forall a \in A$ )
\item $\otimes$ distributes over $\oplus$;
\end{itemize}
\end{definition}
\begin{definition}[constraint semiring]
A \emph{constraint semiring}
is a semiring $\langle A, \oplus, \otimes, \bot, \top \rangle$ such that:
\begin{itemize}
\item $\oplus$ is defined over $2^A$, idempotent (for all $a\in A$, $a\oplus a=a$) and has $\top$ as absorbing element ($\top \oplus a = \top$)
\item $\otimes$ is commutative
\item $\sqsubseteq$, which is defined as $a\sqsubseteq b$ iff {$a\oplus b=b$},
provides a complete lattice $\langle A , \sqsubseteq , \bot, \top \rangle$.
\end{itemize}
We say that a \emph{constraint semiring} $A$ is \emph{idempotent} if and only if the combine operator $\otimes$ is also idempotent, i.e., $a\otimes a =a$. Moreover, we say that a \emph{semiring}
$A$ is \emph{total} when $\sqsubseteq$ is a {\emph{total order}}.
\end{definition}
With an abuse of notation we sometimes refer to a semiring
$\langle A, \oplus,\otimes , \bot, \top \rangle$ with the carrier $A$
and to its components by subscripting them with the carrier, i.e.,
$\oplus_A$, $\otimes_A$, $\bot_A$ and $\top_A$. For the sake of a lighter
notation we drop the subscripts when clear from the context.
\begin{example}\label{ex:semirings}
Typical examples of semirings are\footnote{We use $\mathbb{R}^{\infty}$ (resp. $\mathbb{N}^{\infty}$) to denote $\mathbb{R}\cup\{-\infty,+\infty\}$ (resp. $\mathbb{N}\cup\{\infty\}$).}:
\begin{itemize}
\item the Boolean semiring $\langle \{\mathit{true},\mathit{false}\}, \vee, \wedge, \mathit{false}, \mathit{true} \rangle$;
\item the tropical semiring $\langle \mathbb{R}_{\geq 0}^{\infty},\emph{min},+,+\infty,0 \rangle$;
\item the max/min semiring: $\langle \mathbb{R}^{\infty}, \emph{max},\emph{min}, -\infty, +\infty \rangle$ ;
\item the integer semiring: $\langle \mathbb{N}^{\infty}, \emph{max},\emph{min}, 0, +\infty \rangle$.
\end{itemize}
Boolean, max/min and integer semirings are \emph{idempotent} while tropical semiring is not. All the above semirings are \emph{total}.
\end{example}
One of the advantages of \emph{semirings} is that they can be easily composed. For instance, if $A$ and $B$ are two semirings, one can consider the \emph{cartesian product} $\langle A\times B, \oplus, \otimes, (\bot_A,\bot_B), (\top_A,\top_B)\rangle$ where the operations are applied elementwise.
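To make the algebraic structure concrete, the semirings of Example~\ref{ex:semirings} and their cartesian product can be sketched in a few lines of Python. This is purely illustrative: the names \texttt{Semiring}, \texttt{choose} and \texttt{combine} are our own and not part of any tool.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    """A semiring <A, choose, combine, bot, top>; names are illustrative."""
    choose: Callable[[Any, Any], Any]   # the "oplus" operator
    combine: Callable[[Any, Any], Any]  # the "otimes" operator
    bot: Any                            # unit of choose
    top: Any                            # unit of combine

# The semirings of Example 1:
BOOLEAN = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
TROPICAL = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)
MAXMIN = Semiring(max, min, float("-inf"), float("inf"))

def product(r: Semiring, s: Semiring) -> Semiring:
    """Cartesian product semiring: operations are applied elementwise."""
    return Semiring(
        lambda a, b: (r.choose(a[0], b[0]), s.choose(a[1], b[1])),
        lambda a, b: (r.combine(a[0], b[0]), s.combine(a[1], b[1])),
        (r.bot, s.bot),
        (r.top, s.top),
    )
```

Note how each unit element behaves as in the definition: e.g., in the tropical semiring, $\bot = +\infty$ is the unit of $\min$ and $\top = 0$ is the unit of $+$.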
\subsection{Spatial model}
Space is represented via a graph with edges having a weight from a set $A$.
We consider directed graphs (undirected graphs can be represented via a symmetric relation).
\begin{definition}[$A-$spatial model]
An $A-$\emph{spatial model} $\mathcal{S}$ is a pair $\langle L, \mathbf{W}\rangle$ where:
\begin{itemize}
\item $L$ is a set of \emph{locations}, also named \emph{space universe};
\item $\mathbf{W}\subseteq L\times A\times L$ is a \emph{proximity function} associating at most one label $w \in A$ with each distinct pair $\ell_1,\ell_2\in L$.
\end{itemize}
\end{definition}
We will use $\mathbb{S}_{A}$ to denote the set of $A$-\emph{spatial models}, while $\mathbb{S}^{L}_{A}$ indicates the set of $A$-\emph{spatial models} having $L$ as a set of locations. In the following, we will equivalently write $(\ell_1,w,\ell_2)\in \mathbf{W}$ as $\mathbf{W}(\ell_1,\ell_2)=w$ or $\nextto{\ell_1}{w}{\ell_2}$, saying that $\ell_1$ is \emph{next to} $\ell_2$ with weight $w \in A$.
\begin{example} An $\mathbb{R}_{\geq 0}^{\infty}$-spatial model on the \emph{tropical semiring} (see Example~\ref{ex:semirings}) can be used to represent standard {\it weighted graphs}, as in Figure~\ref{fig:spmodel}. $L$ is the set of nodes and the proximity function $\mathbf{W}$ defines the weight of the edges, e.g. $\mathbf{W}(\ell_2,\ell_7)= \mathbf{W}(\ell_7,\ell_2) =5$.
{\small
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}
[scale=.6,auto=left,every node/.style={circle,thick,inner sep=0pt,minimum size=6mm}]
\node (1) [draw = black] at (-1,-1) {$\ell_1$};
\node (2) [draw = black] at ( -1,1) {$\ell_2$};
\node (3) [draw = black] at ( -2,3) {$\ell_3$};
\node (4) [draw = black] at ( -5, 1){$\ell_4$};
\node (5) [draw = black] at ( 2, 2) {$\ell_5$};
\node (6) [draw = black] at (-4, 2) {$\ell_6$};
\node (7) [draw = black] at (1, 0) {$\ell_7$};
\node (8) [draw = black] at (-3,0) {$\ell_8$};
\node (9)[draw = black] at (4,1) {$\ell_9$};
\draw [-] (1) -- (2) node[midway] {2};
\draw [-] (1) -- (8) node[midway] {1};
\draw [-] (2) -- (7) node[midway] {5};
\draw [-] (2) -- (3) node[midway] {1};
\draw [-] (4) -- (6) node[midway] {4};
\draw [-] (8) -- (6) node[midway] {3};
\draw [-] (7) -- (9) node[midway] {7};
\draw [-] (7) -- (5) node[midway] {2};
\draw [-] (3) -- (5) node[midway] {3};
\end{tikzpicture}
\end{center}
\caption{Example of a weighted undirected graph; e.g. $\mathbf{W}(\ell_2,\ell_7)= \mathbf{W}(\ell_7,\ell_2) =5$ }
\label{fig:spmodel}
\end{figure}
}
\end{example}
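As an illustration only (not part of the formal development), the weighted graph of Figure~\ref{fig:spmodel} can be encoded by storing the proximity function as a map from ordered pairs of locations to weights; a Python sketch:

```python
# Undirected weighted graph of Figure 2 as a proximity function W:
# each undirected edge is stored in both directions.
edges = {(1, 2): 2, (1, 8): 1, (2, 7): 5, (2, 3): 1, (4, 6): 4,
         (8, 6): 3, (7, 9): 7, (7, 5): 2, (3, 5): 3}
W = {**edges, **{(j, i): w for (i, j), w in edges.items()}}
L = {l for pair in W for l in pair}   # the space universe
```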
A special class of spatial models are the ones based on \emph{Euclidean spaces}.
\begin{definition}[Euclidean spatial model]
\label{def:Euclidean}
Let $L$ be a set of locations, $R\subseteq L\times L$ a (reflexive) relation and $\lambda: L\rightarrow \mathbb{R}^{2}$ a function mapping each location to a point in $\mathbb{R}^{2}$, we let $\mathcal{E}(L,R,\lambda)$ be the $\mathbb{R}^{\infty}\times\mathbb{R}^{\infty}$-spatial model\footnote{$\mathbb{R}^{\infty}$ is the \emph{max/min} semiring considered in Example~\ref{ex:semirings}.} $\langle L, \mathbf{W}^{\lambda, R}\rangle$ such that:
\begin{displaymath}
\mathbf{W}^{\lambda,R}=\{ (\ell_1,\lambda(\ell_1)-\lambda(\ell_2),\ell_2) | (\ell_1,\ell_2)\in R \}
\end{displaymath}
\label{def:euclisomod}
\end{definition}
Note that we label edges with a 2-dimensional vector $w$ describing how to reach $\ell_2$ from $\ell_1$, i.e., $\lambda(\ell_1) - w = \lambda(\ell_2)$. This obviously allows us to compute the Euclidean distance between $\ell_1$ and $\ell_2$ as $\| w \|_2$, but, as we will see, it also allows us to compute the Euclidean distance of any pair of locations connected by any path, not necessarily by a line in the plane.
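A hedged sketch of Definition~\ref{def:Euclidean} in Python (function and variable names are our own): each related pair is labelled with the displacement vector, from which the Euclidean distance is recovered as its norm.

```python
import math

def euclidean_model(R, lam):
    """Label each pair (l1, l2) in R with lambda(l1) - lambda(l2)."""
    return {(l1, l2): (lam[l1][0] - lam[l2][0], lam[l1][1] - lam[l2][1])
            for (l1, l2) in R}

lam = {"a": (0.0, 0.0), "b": (3.0, 4.0)}
W = euclidean_model({("a", "b"), ("b", "a")}, lam)
dist_ab = math.hypot(*W[("a", "b")])   # ||w||_2, the Euclidean distance
```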
\begin{example}
\label{ex:manet}
When considering a MANET, we can easily define different proximity functions for the same set of locations, where each location represents a mobile device.
Given a set of $n$ reference points in a two-dimensional Euclidean plane, a Voronoi
diagram~\cite{Aurenhammer1991} partitions the plane into a set of $n$ regions, one per reference point,
assigning each point of the plane to the region corresponding to the closest reference point.
The dual of the Voronoi diagram is the proximity graph or Delaunay triangulation~\cite{Delaunay1934}.
In Figure~\ref{fig:proxconnect} (left) we can see an example of Voronoi diagram (in blue) and proximity graph (in red).
The proximity function can then be defined with respect to the Cartesian coordinates, as in Definition~\ref{def:euclisomod}:
\begin{math}
\mathbf{W}^{\mu,R}(\ell_i,\ell_j)=\mu(\ell_i)-\mu(\ell_j)=(x_i,y_i)-(x_j,y_j)= (x_i - x_j , y_i -y_j)
\end{math}, where
$(x_i,y_i)$ are the plane coordinates of the location $\ell_i$.
The proximity function can also take a value that depends on other specific characteristics or behaviors of the nodes. For instance, Fig.~\ref{fig:proxconnect}~(right) represents the connectivity graph of the MANET. In this case a location $\ell_i$ is next to a location $\ell_j$ if and only if they are within their communication range.
\end{example}
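A connectivity graph like the one in Fig.~\ref{fig:proxconnect}~(right) can be sketched as follows; this is a simplified illustration, and the coordinates and communication range below are invented:

```python
import math

def connectivity(pos, comm_range):
    """Proximity function linking nodes within communication range,
    labelled with their Euclidean distance."""
    return {(i, j): math.dist(pos[i], pos[j])
            for i in pos for j in pos
            if i != j and math.dist(pos[i], pos[j]) <= comm_range}

pos = {1: (0.0, 0.0), 2: (0.0, 3.0), 3: (10.0, 0.0)}
G = connectivity(pos, 5.0)   # nodes 1 and 2 are linked, 3 is isolated
```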
Given an $A$-spatial model we can define \emph{routes}.
\begin{definition}[route]
Let $\mathcal{S}=\langle L,\mathbf{W}\rangle$, a \emph{route} $\tau$
is an infinite sequence $\ell_0 \ell_1\cdots \ell_k\cdots$ in $L^{\omega}$ such that for any $i\geq 0$ there exists $w\in A$ with $\nextto{\ell_i}{w}{\ell_{i+1}}$.
\end{definition}
Let $\tau=\ell_0 \ell_1\cdots \ell_k\cdots$ be a route, $i\in \mathbb{N}$ and $\ell \in L$; we use:
\begin{itemize}
\item $\tau[i]$ to denote the $i$-th node $\ell_i$ in $\tau$;
\item $\tau[i..]$ to indicate the suffix route $\ell_i \ell_{i+1} \cdots$;
\item $\ell \in \tau$ when there exists an index $i$ such that $\tau[i]=\ell$, while we use $\ell\not\in \tau$ if this index does not exist;
\item $\tau(\ell)$ to denote the first occurrence of $\ell$ in $\tau$:
\[
\tau(\ell)=\left\{
\begin{array}{ll}
\min\{ i | \tau[i]=\ell \} & \mbox{if $\ell\in \tau$}\\
\infty & \mbox{otherwise} \\
\end{array}
\right.
\]
\end{itemize}
We also use $Routes(\mathcal{S})$ to denote the set of routes in $\mathcal{S}$, while $Routes(\mathcal{S},\ell)$ denotes the set of routes starting from $\ell \in L$.
We can use routes to define the \emph{distance} between two locations in a \emph{spatial model}. This distance is computed via an appropriate function $f$ that combines all the weights in a route into a value taken from an appropriate \emph{totally ordered} monoid $B$.
\begin{definition}[Distance Domain]
\label{def:distDom}
We say that $(B,\bot_B,\top_B,+_{B},\leq_{B})$ is a \emph{distance domain} whenever $\leq_{B}$ is a total order relation over $B$, where $\bot_{B}$ is the minimum, $\top_{B}$ is the maximum, and $(B,\bot_B,+_{B})$ is a monoid. Given a distance domain $B$, we will use $\bot_{B}$, $\top_{B}$, $+_B$ and $\leq_B$ to denote its elements.
\end{definition}
\begin{definition}[Distance Function and Distance over paths]
Let $\mathcal{S}=\langle L,\mathbf{W}\rangle$
be an $A$-spatial model, $\tau$ a route in $\mathcal{S}$, and $\langle B,\bot_{B},\top_{B},+_{B},\leq_{B} \rangle$ a \emph{distance domain}; we call $f:A\rightarrow B$ a \emph{distance function}, associating elements of $A$ with the distance domain $B$. The distance $d_{\tau}^{f}[i]$ up-to index $i\in \mathbb{N}^{\infty}$ is defined as follows:
\[
d_{\tau}^{f}[i]= \begin{cases}
\bot_{B} & i=0 \\
\top_{B} & i=\infty \\
f(w) +_{B} d_{\tau[1..]}^{f}[i-1] & (i>0)\mbox{ and } \nextto{\tau[0]}{w}{\tau[1]}
\end{cases} \\
\]
\noindent
Given a location $\ell\in L$, the distance over $\tau$ up-to $\ell$ is then $d_{\tau}^{f}[\ell] = d_{\tau}^{f}[\tau(\ell)]$ if $\ell\in \tau$, while it is $\top_{B}$ if $\ell\not\in \tau$.
\end{definition}
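Operationally, the distance $d_{\tau}^{f}$ over a finite prefix of a route is just a fold of the edge weights through $f$ and the monoid operation $+_B$. A Python sketch (names are ours; the proximity function is assumed to be defined on consecutive route locations):

```python
def route_distance(route, W, f, plus, bot):
    """Fold f(w) over consecutive edges of the route, starting from bot."""
    d = bot
    for a, b in zip(route, route[1:]):
        d = plus(d, f(W[(a, b)]))
    return d

W = {(1, 2): 2, (2, 3): 1, (3, 5): 3}
# hop count: f(w) = 1 in the monoid (N, 0, +)
hops = route_distance([1, 2, 3, 5], W, lambda w: 1, lambda x, y: x + y, 0)
# weight sum: f(w) = w in the same monoid
total = route_distance([1, 2, 3, 5], W, lambda w: w, lambda x, y: x + y, 0)
```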
\begin{example}
\label{ex:distancefunction}
Considering again a MANET, one could be interested in different types of distances, e.g.,
\emph{counting} the number of \emph{hops}, or distances induced by the weights of the Euclidean space structure.
\noindent
To count the number of hops, we can simply use the function
$hop: A \rightarrow \mathbb{N}^{\infty}$, taking values in the distance domain $\langle \mathbb{N}^{\infty}, 0, \infty, +, \leq \rangle$:
\[
hop(w)=1
\]
and in this case $d^{hop}_\tau[i]=i$.
Considering the proximity function $\mathbf{W}^{\mu,R}(\ell_i,\ell_j)$ computed from the Cartesian coordinates and the distance domain $\langle \mathbb{R}^{\infty}, 0, \infty, +, \leq \rangle$, we can use the Euclidean distance $\Delta(x,y)= \| (x,y) \|$, where $(x,y)$ are the coordinates of the vectors returned by $\mathbf{W}^{\mu,R}$.
It is easy to see that for any route $\tau$ and for any location $\ell \in L$ in $\tau$, the function $d_{\tau}^{\Delta}[\ell]$ yields the sum of the lengths of the edges in $\mathbb{R}^{2}$ connecting $\ell$ to $\tau[0]$.
Given a distance function $f:A\rightarrow B$, the distance between two locations $\ell_1$ and $\ell_2$ in an $A$-spatial model is obtained by choosing the minimum distance along all possible routes starting from $\ell_1$ and ending in $\ell_2$:
\[d^{f}_{\mathcal{S}}[\ell_1,\ell_2] = \min\left\{ d^{f}_{\tau}[\ell_2] | \tau\in Routes(\mathcal{S},\ell_1) \right\}.
\]
\begin{example}
\label{ex:distanceLocations}
Consider again the distance functions defined for a MANET. For \emph{hop}, we are taking the minimum hop-length over all paths connecting $\ell_1$ and $\ell_2$, resulting in the shortest path distance.
\end{example}
\end{example}
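For monotone distance functions such as $hop$ or $\Delta$, the minimum over all routes can be computed by a standard shortest-path algorithm rather than by enumerating routes. A Dijkstra-style sketch over the graph of Figure~\ref{fig:spmodel}; this is an illustration of the definition, not the monitoring algorithm presented later:

```python
import heapq

def min_distance(W, f, src, dst):
    """Minimum f-distance over all routes from src to dst (Dijkstra)."""
    adj = {}
    for (a, b), w in W.items():
        adj.setdefault(a, []).append((b, w))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + f(w)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist.get(dst, float("inf"))  # top of the domain if unreachable

edges = {(1, 2): 2, (1, 8): 1, (2, 7): 5, (2, 3): 1, (4, 6): 4,
         (8, 6): 3, (7, 9): 7, (7, 5): 2, (3, 5): 3}
W = {**edges, **{(j, i): w for (i, j), w in edges.items()}}
```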
\subsection{Spatio-Temporal Signals}
\begin{definition}
A
{\emph{signal domain}} is a tuple $\langle D, \oplus,\otimes, \odot,\bot,\top\rangle$ where:
\begin{itemize}
\item $\langle D, \oplus,\otimes,\bot, \top\rangle$, is an \emph{idempotent semiring};
\item $\odot: D\rightarrow D$, is a \emph{negation function} such that:
\begin{itemize}
\item $\odot\top =\bot$;
\item $\odot\bot = \top$;
\item $\odot(v_1\oplus v_2)=(\odot v_1)\otimes (\odot v_2)$
\item $\odot(v_1\otimes v_2)=(\odot v_1)\oplus (\odot v_2)$
\item for any $v\in D$, $\odot ( \odot v ) = v$.
\end{itemize}
\end{itemize}
\end{definition}
In this paper we will consider two \emph{signal domains}:
\begin{itemize}
\item the Boolean signal domain $\langle \{ \top , \bot \}, \vee, \wedge, \neg, \bot, \top \rangle$ for qualitative monitoring;
\item {the max/min signal domain $\langle \mathbb{R}^{\infty}, \max, \min, -, -\infty, +\infty \rangle$} for quantitative monitoring.
\end{itemize}
For signal domains we will use the same notation and notational conventions introduced for semirings.
\begin{definition} Let $\mathbb{T}=[0, T] \subseteq \mathbb{R}_{\geq 0}$ be a time domain and $\langle D, \oplus,\otimes, \odot ,\bot ,\top \rangle$ a \emph{signal domain}; a \emph{temporal $D$-signal} $\nu$ is a function
$\nu: \mathbb{T}\rightarrow D$.
\noindent
Consider a finite sequence:
\[
\tilde{\tsign} = [(t_{0}, d_0),\ldots,(t_{n}, d_{n})]
\]
such that for all $i$, $t_i\in \mathbb{T}$, $t_i<t_{i+1}$ and $d_i\in D$.
We let $\tilde{\tsign}$ denote a \emph{piecewise constant temporal $D$-signal} in $\mathbb{T}=[t_0, T]$, that is
\[
\tilde{\tsign}(t) = \begin{cases}
& d_i \quad \text{ for } t_{i} \leq t < t_{i+1}, \\
& d_n \quad \text{ for } t_{n} \leq t \leq T;
\end{cases} \\
\]
\end{definition}
Given a \emph{piecewise constant temporal signal} $\tilde{\tsign}=[(t_{0}, d_0),\ldots,(t_{n}, d_{n})]$ we will use $\mathcal{T}( \tilde{\tsign} )$ to denote the set $\{ t_0,\ldots, t_n \}$ of \emph{time steps} in $\tilde{\tsign}$; $start(\tilde{\tsign})$ to denote $t_0$; while we will say that $\tilde{\tsign}$ is \emph{minimal} if and only if for any $i$, $d_i\not=d_{i+1}$.
We will also let $\tilde{\tsign}[ t=d ]$ to denote the signal obtained from $\tilde{\tsign}$ by adding the element $(t,d)$.
Finally, if $\nu_1$ and $\nu_2$ are two $D$-temporal signals, and $op: D\times D\rightarrow D$, $\nu_1~op~\nu_2$ denotes the signal associating with each time $t$ the value $\nu_1(t)~op~\nu_2(t)$. Similarly, if $op:D_1 \rightarrow D_2$, $op~\nu_1$ denotes the $D_2-$signal associating with $t$ the value $op~ \nu_1(t)$.
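A piecewise constant temporal signal and the pointwise lifting of an operator can be sketched as follows (an illustration of the definitions above, not the internal representation of any tool; both signals are assumed to start at the same $t_0$):

```python
import bisect

def eval_signal(sig, t):
    """Value of a piecewise constant signal [(t0, d0), ..., (tn, dn)]
    at time t: the d_i of the last time step t_i <= t."""
    times = [ti for ti, _ in sig]
    return sig[bisect.bisect_right(times, t) - 1][1]

def lift(op, s1, s2):
    """Pointwise nu1 op nu2, sampled on the union of the time steps."""
    times = sorted({t for t, _ in s1} | {t for t, _ in s2})
    return [(t, op(eval_signal(s1, t), eval_signal(s2, t))) for t in times]

nu1 = [(0.0, 1), (2.0, 5)]
nu2 = [(0.0, 3), (1.0, 0)]
```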
\begin{definition}[Spatial $D$-signal] Let $L$ be a \emph{space universe}, and $\langle D, \oplus,\otimes, \odot ,\bot ,\top\rangle$ a signal domain. A \emph{spatial $D$-signal} is a function $\mathbf{s}: L\rightarrow D$.
\end{definition}
\begin{definition}[Spatio-temporal $D$-signal]
Let $L$ be a space universe, $\mathbb{T}=[0, T]$ a time domain, and $\langle D, \oplus,\otimes, \odot,\bot,\top\rangle$ a signal domain, a spatio-temporal $D$-signal is a function
\[ \sigma: L \rightarrow \mathbb{T} \rightarrow D \]
\noindent
such that $\sigma(\ell)=\nu$ is a temporal signal that returns a value $\nu(t) \in {D}$ for each time $t \in \mathbb{T}$. We say that $\sigma$ is \emph{piecewise constant} when, for any $\ell$, $\sigma(\ell)$ is a \emph{piecewise constant temporal signal}. \emph{Piecewise constant} signals are denoted by $\tilde{\sigma}$. Moreover, we will use $\mathcal{T}(\tilde{\sigma})$ to denote $\bigcup_{\ell}\mathcal{T}(\tilde{\sigma}(\ell))$. Finally, we let $\tilde{\sigma}@t$ denote the spatial signal associating each location $\ell$ with $\tilde{\sigma}(\ell,t)$.
\end{definition}
Given a spatio-temporal signal $\sigma$, we will use $\sigma@t$
to denote the \emph{spatial signal} at time $t$, i.e. the signal $\mathbf{s}$ such that $\mathbf{s}(\ell)=\sigma(\ell)(t)$, for any $\ell \in L$.
Different kinds of signals can be considered depending on the signal domain $D$. Signals with $D= \{ true , false \}$ are called \emph{Boolean signals}; signals with $D = \mathbb{R}^{\infty}$ are called real-valued or \emph{quantitative signals}.
\begin{definition}[$D$-Trace]
Let $L$ be a space universe, a {\it spatio-temporal $D$-trace} is a function
$\vec x: L \rightarrow \mathbb{T} \rightarrow D_1 \times \cdots \times D_n$
such that for any $\ell\in L$ it yields a vector of temporal signals $\vec{x}(\ell)=(\nu_1,\ldots,\nu_n)$.
In the rest of the paper we will use $\vec{x}(\ell,t)$ to denote $\vec{x}(\ell)(t)$.
\end{definition}
\begin{example}
We can consider a $(\mathbb{R} \times \mathbb{R})$-spatio-temporal trace of our sensor network as $\vec x: L \rightarrow \mathbb{T} \rightarrow \mathbb{R} \times \mathbb{R}$ that associates a set of temporal signals $\vec x(\ell)= (\nu_B,\nu_T)$ with each location, where $\nu_B$ and $\nu_T$ respectively correspond to the temporal signals of the battery and the temperature in location $\ell$, and each signal has domain $\langle \mathbb{R}, \max, \min, -, \bot, \top \rangle$.
\end{example}
We work with spatial models that can dynamically change their configuration. For this reason, we need to define a function that returns the spatial configuration at each time.
\begin{definition}[Dynamical $A$-Spatial Model]
Let $L$ be a spatial universe, a {\it dynamical $A$-spatial model} is a function $\mathcal{S} : \mathbb{T}\rightarrow \mathbb{S}^{L}_{A}$ associating each element in the time domain $\mathbb{T}$ with an $A$-spatial model $\mathcal{S}(t)=\langle L, \mathbf{W}\rangle$ that describes the spatial configuration of locations.
\end{definition}
With an abuse of notation we use $\mathcal{S}$ both for a dynamical spatial model and for a static spatial model, where for any $t$, $\mathcal{S} =\mathcal{S}(t)$.
\begin{example}
Let us consider a MANET with a proximity graph.
Figure~\ref{fig:voronoimobility} shows two different snapshots, $\mathcal{S}(t_1)=\langle L,\mathbf{W}_1 \rangle$ and $\mathcal{S}(t_2)=\langle L,\mathbf{W}_2 \rangle$, of the dynamical spatial model $\mathcal{S}$ at times $t_1$ and $t_2$. We can see that locations $\ell_1$ and $\ell_2$ change their position, which also changes the Voronoi diagram and the proximity graph.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{img/voronoi1}
\includegraphics[scale=0.4]{img/voronoi2}
\caption{Two snapshots of a spatial model with 7 locations $\ell_1,\dots,\ell_7$ that move in a 2D Euclidean space. The plane is partitioned using a Voronoi Diagram ({\color{blue} blue}). In {\color{red} red} we have the proximity graph.}
\label{fig:voronoimobility}
\end{figure}
\end{example}
\section{Spatio-temporal Reach and Escape Logic}
\label{sec:ReachSTL}
In this section, we present the {\it Spatio-Temporal Reach
and Escape Logic}~(STREL), an extension of {\it Signal Temporal Logic}.
We define the syntax and the semantics of STREL, describing
in detail the spatial operators and their expressiveness.
The syntax of STREL is given by
%
{\small
\[
\varphi :=
\mu \mid
\neg \varphi \mid
\varphi_{1} \wedge \varphi_{2} \mid
\varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2} \mid
\varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2} \mid
\varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2} \mid
\escape{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi
\]
}
where $\mu$ is an {\it atomic predicate} ($AP$), {\it negation} $\neg$ and {\it conjunction} $\wedge$ are the standard Boolean connectives, $\until{[t_{1},t_{2}]}$ and $\since{[t_{1},t_{2}]}$ are the {\it Until} and the {\it Since} temporal modalities, with $[t_{1},t_{2}]$ a real positive closed interval. For more details about the temporal operators, we refer the reader to~\cite{MalerN13, Maler2004, Donze2013}.
The spatial modalities are the {\it reachability} $\reach{[d_{1},d_{2}]}{f:A\rightarrow B}$ and the {\it escape} $\escape{[d_{1},d_{2}]}{f:A\rightarrow B}$ operators, with $f:A \rightarrow B$ a \emph{distance function} (see Definition~\ref{ex:distancefunction}), $B$ a \emph{distance domain}, and $d_{1},d_{2}\in B$ with $d_1\leq_{B} d_2$.
In what follows, we omit the type information of function $f$ when it is clear from the context or does not play any role.
The reachability operator $\phi_1 \reach{[d_1, d_2]}{f}\phi_2$ describes the behaviour of reaching a location that satisfies property $\phi_2$, at a distance from the initial location belonging to $[d_1, d_2]$, passing only through locations that satisfy $\phi_1$.
The escape operator $\escape{[d_1, d_2]}{f} \phi$, instead, describes the possibility of escaping from a certain region via a route passing only through locations that satisfy $\phi$, where the distance between the starting location of the path and the last one belongs to the interval $[d_1, d_2]$.
As customary, we can derive the {\it disjunction} operator $\vee$ and the future {\it eventually} $\ev{[t_{1},t_{2}]}$ and {\it always} $\glob{[t_{1},t_{2}]}$ operators from the until temporal modality, and the corresponding past variants from the since temporal modality, see~\cite{MalerN13} for details.
We can also define three other derived spatial operators: the {\it somewhere} $\somewhere{\leq d}{f:A\rightarrow B}\phi$ and the {\it everywhere} $\everywhere{\leq d}{f:A\rightarrow B}\phi$, which describe the behaviour of some or of all locations at a certain distance from a specific point, and the {\it surround}, which expresses the topological notion of being surrounded by a $\phi_2$-region while being in a $\phi_1$-region, with additional metric constraints. A more thorough discussion of the spatial operators will be given after introducing the semantics.
\subsection{Semantics}
The semantics of STREL is evaluated point-wise at each time and each location. We stress that each STREL formula $\varphi$ abstracts from the specific domain used to express the satisfaction value of $\varphi$.
These, of course, are needed to define the semantics. In the following, we assume that $D_1$ is the domain of the spatio-temporal traces, $D_2$ is the semiring where the logic is evaluated and $B$ is a distance domain as defined in Definition~\ref{def:distDom}.
\begin{definition} [Semantics]
\label{generalsemantics}
Let $\mathcal{S}$ be a dynamical $A$-spatial model, $D_1$ and $D_2$ be two signal domains, and $\vec x$ be a {\it spatio-temporal $D_1$-trace} for $L$.
The $D_2$-monitoring function $\mathbf{m}$ of $\vec x$ is recursively defined in Table~\ref{tab:monitoring}.
\end{definition}
\begin{table*}
\begin{center}
\begin{tabular}{rcl}
\small
$\mathbf{m}( \mathcal{S}, \vec{x}, \mu, t, \ell)$ & $=$ & $\iota(\mu,\vec{x}(\ell,t))$ \\[.2cm]
$\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)$ & $=$ & $\odot_{D_{2}} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi, t, \ell)$ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell)$ & $=$ & $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \otimes_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell)$ & $=$ & ${\bigoplus_{D_2}}_{t' \in [t+t_{1}, t+t_{2}]} \big (\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t', \ell) \otimes_{D_2} {\bigotimes_{D_2}}_{t'' \in [t, t']} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) $ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell)$ & $=$ &
${\bigoplus_{D_2}}_{t' \in [t-t_{2}, t-t_{1}]} \big (\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t', \ell) \otimes_{D_2} {\bigotimes_{D_2}}_{t'' \in [t', t]} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) $ \\[.2cm]
$\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2}, t, \ell)$ & $=$ \\
\multicolumn{3}{r}{
${\bigoplus_{D_2}}_{\tau\in Routes(\mathcal{S}(t),\ell)}
~~{ \bigoplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \tau[i])
\otimes_{D_{2}}
{\bigotimes_{D_2}}_{j < i}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \tau[j])
\right)$} \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \escape{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi, t, \ell)$ & $=$ &
${\bigoplus_{D_2}}_{\tau\in Routes(\mathcal{S}(t),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{\mathcal{S}(t)}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{i \leq \tau(\ell')}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \tau[i])$ \\[.2cm]
\end{tabular}
\end{center}
\caption{Monitoring function.}
\label{tab:monitoring}
\end{table*}
Given a formula $\phi$, the function $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)$ corresponds to the evaluation of the formula at time $t$ in the location $\ell$.
The procedure is exactly the same for different choices of the formula evaluation domain; the operators just have to be interpreted according to the chosen semiring and signal domain.
In particular, the choice of the signal domain $D_2$ produces different types of semantics.
In this paper, we consider two signal domains, $\mathbb{B}$ and $\mathbb{R}^{\infty}$, giving rise to qualitative and quantitative monitoring, corresponding respectively to a Boolean answer and a real satisfaction value.
For the
Boolean signal domain ($D_2 = \langle \{ \top , \bot \}, \vee, \wedge,\neg \rangle $ ),
we say that $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies a formula $\phi$ iff $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)= \top$.
For the max/min signal domain $\langle \mathbb{R}^{\infty}, \max, \min, -, \bot, \top \rangle$ we say that $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies a formula $\phi$ iff
$\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell) > 0$.
In the following, we will use $\tilde{\sigma}^{\mathcal{S},\vec{x}}_{\phi}$ to denote the spatio-temporal $D_2$-signal such that, for any $t$ and $\ell$, $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)=\tilde{\sigma}^{\mathcal{S},\vec{x}}_{\phi}(\ell,t)$.
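The two signal domains can be packaged as interchangeable algebraic structures, which is exactly how the monitoring function abstracts from them. The following Python sketch (the names `SignalDomain`, `BOOLEAN`, `QUANTITATIVE` are our own illustration, not part of the formalism) shows that one piece of monitoring code can run over either domain.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class SignalDomain:
    """A signal domain packaged as (oplus, otimes, negation, bottom, top)."""
    oplus: Callable[[Any, Any], Any]   # disjunction-like aggregation
    otimes: Callable[[Any, Any], Any]  # conjunction-like aggregation
    neg: Callable[[Any], Any]
    bottom: Any
    top: Any

# Boolean domain <{True, False}, or, and, not>: qualitative monitoring.
BOOLEAN = SignalDomain(lambda a, b: a or b, lambda a, b: a and b,
                       lambda a: not a, False, True)

# Max/min domain <R^inf, max, min, ->: quantitative (robustness) monitoring.
INF = float("inf")
QUANTITATIVE = SignalDomain(max, min, lambda a: -a, -INF, INF)

# The same monitoring code runs over either domain:
def conjunction(dom, v1, v2):
    return dom.otimes(v1, v2)

print(conjunction(BOOLEAN, True, False))     # False
print(conjunction(QUANTITATIVE, 0.3, -0.1))  # -0.1
```

With this design, swapping qualitative for quantitative monitoring amounts to passing a different `SignalDomain` value, with no change to the recursive monitoring code.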
We now describe the semantics in detail, using the sensor network example as the system on which we specify our properties; in particular, we use the graph in Figure~\ref{fig:spprop} to illustrate the spatial operators.
\begin{example}[ZigBee protocol]
\label{ex:zigbee}
In Fig.~\ref{fig:spprop}, the graph represents a MANET. In particular, we consider nodes with three different roles, such as the ones implemented in the ZigBee protocol: {\it coordinator}, {\it router} and {\it EndDevice}. The Coordinator node $( {\color{green!45!black!70!} coord } )$, represented in green in the graph, is unique in each network and is responsible for initializing the network. After the initialisation of the network has started, the coordinator behaves as a router.
The Router node $(\router)$, represented in red in the graph, acts as an intermediate router, passing on data from other devices. The EndDevice node $( {\color{blue!45!black!70!} end\_dev } )$, represented in blue, can communicate
only with a parent node (either the Coordinator or a Router) and is unable to relay data from other devices.
Nodes move in space and the figure corresponds to the spatial configuration at a fixed time $t$.
As spatio-temporal trace, let us consider a $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \} \times \mathbb{R}$-trace $\vec x: L \rightarrow \mathbb{T} \rightarrow \{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \} \times \mathbb{R}$ denoting the pair (kind of node, level of battery), i.e. $\vec x (\ell, t)= ( {\color{green!45!black!70!} coord } , b)$ if $\ell$ is a coordinator, $\vec x (\ell, t)= ( \router,b) $ if $\ell$ is a router, and $\vec x (\ell, t)= ( {\color{blue!45!black!70!} end\_dev } ,b) $ if $\ell$ is an end device, where $b$ is the level of the battery.
\end{example}
\noindent{\bf Atomic Proposition.}
The function $\iota: AP\times D_1^{n} \rightarrow D_2$ is the \emph{signal interpretation function}; it translates the input trace into a different ${D}_2$-spatio-temporal signal for each atomic proposition in $AP$, which will be the input of the monitoring procedure.
Different types of atomic propositions and signal interpretations are admissible. E.g., we can simply consider a finite set
$\{p_1, \dots, p_n \}=AP$ and an interpretation function $\iota(p_i,\vec x(\ell,t))=\top$ iff $x_i(\ell,t)=\top$. In Fig.~\ref{fig:spprop}, we can consider atomic propositions describing the type of node, i.e., the Boolean propositions $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \}$ are true if the node is of the corresponding type. In case of real-valued signals and of a quantitative interpretation of the logic ($D_2$ being in this case the real-valued max/min semiring), we can consider inequalities $\mu=(g(\vec{x})\geq 0)$ for some real function $g$ and define $\iota(\mu,\vec{x}(\ell,t))=g(\vec{x}(\ell,t))$, e.g. $b - 0.5 \geq 0$, meaning ``the level of the battery is greater than $50\%$''.
\noindent{\bf Negation.} The negation operator is interpreted with the negation function $\odot_{D_{2}}$ of the chosen signal domain; e.g.
$\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)= \neg \mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \ell)$ for the Boolean signal domain and $\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)= - \mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \ell)$ for the quantitative one.
\noindent{\bf Conjunction and Disjunction.} The conjunction operator $\varphi_1 \wedge \varphi_2$ is interpreted with $\otimes_{D_2}$: $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell) = \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \otimes_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$, which corresponds to the $\wedge$ operator for the Boolean semantics. This means that $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell)=\top$ iff both $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell)$ and
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \ell)$ are equal to $\top$.
For the quantitative semantics, $\otimes_{D_2}$ is interpreted as the minimum operator, so $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell) = \min(\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) , \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell))$. Similarly, the disjunction $\varphi_1 \vee \varphi_2$ is interpreted through the $\oplus_{D_2}$ operator, i.e. $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \vee \varphi_2, t, \ell) = \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \oplus_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$, which corresponds to $\vee$ for the Boolean semantics and to $\max$ for the quantitative one.
In the rest of the description, we focus on the Boolean semantics, i.e. $D_2 = \langle \{ \top , \bot \}, \vee, \wedge,\neg \rangle $; the quantitative semantics can be derived by substituting $\vee, \wedge$ with $\max, \min$, as seen for conjunction and disjunction.
\noindent{\bf Until.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell) = \bigvee_{t' \in [t+t_{1}, t+t_{2}]} \linebreak \big (\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t', \ell) \wedge \bigwedge_{t'' \in [t, t']} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) .\]
As customary, $(\mathcal{S}, \vec{x}(\ell,t))$ satisfies $\varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}$ iff it satisfies $\varphi_{1}$ from $t$ until, in a time between $t_{1}$ and $t_{2}$ time units in the future, $\varphi_{2}$ becomes true. Note how the temporal operators are evaluated in each location separately.
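To make the domain parametrization concrete, the following Python sketch evaluates the until semantics at a single location over an integer-sampled signal. The sampling at integer time points and the helper names are our own illustration under simplifying assumptions (finite discrete time, already-monitored subformula signals `s1`, `s2`), not part of the formal definition.

```python
def until(s1, s2, t1, t2, oplus, otimes, bottom):
    """m(phi1 U_[t1,t2] phi2, t) at one location, time samples 0..n-1."""
    n = len(s1)
    out = []
    for t in range(n):
        best = bottom
        for tp in range(t + t1, min(t + t2, n - 1) + 1):
            v = s2[tp]
            for tpp in range(t, tp + 1):   # OTIMES over t'' in [t, t']
                v = otimes(v, s1[tpp])
            best = oplus(best, v)          # OPLUS over t' in [t+t1, t+t2]
        out.append(best)
    return out

# Boolean semantics (oplus = or, otimes = and):
s1 = [True, True, True, False]
s2 = [False, False, True, False]
print(until(s1, s2, 0, 2, lambda a, b: a or b, lambda a, b: a and b, False))
# -> [True, True, True, False]

# Quantitative semantics (oplus = max, otimes = min) on robustness values:
print(until([0.5, 0.5, 0.5, -1.0], [-1.0, -1.0, 0.2, -1.0], 0, 2,
            max, min, float("-inf")))
# -> [0.2, 0.2, 0.2, -1.0]
```

The same loop structure serves both semantics; only the three domain operations change, which is the point of defining the monitoring function over an abstract semiring.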
\noindent{\bf Since.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell) = \bigvee_{t' \in [t-t_{2}, t-t_{1}]} \linebreak \big (\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t', \ell) \wedge \bigwedge_{t'' \in [t', t]} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big)\]
$(\mathcal{S}, \vec{x}(\ell,t))$ satisfies $\varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}$ iff $\varphi_{2}$ was true at some time between $t_{1}$ and $t_{2}$ time units in the past and $\varphi_{1}$ has held from then until now.
Except for the interpretation function, the semantics of the Boolean and the temporal operators is directly derived from and coincident with that of STL (qualitative for Boolean signal domain and quantitative for an $\mathbb{R}^\infty$ signal domain), see~\cite{Maler2004, Donze2013} for details.
\noindent{\bf Reachability.} \[\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_{1} \: \reach{[d_{1},d_{2}]}{f} \: \varphi_{2}, t, \ell)=
\bigvee_{\tau\in Routes(\mathcal{S}(t),\ell)} \linebreak
\bigvee_{i:\left(d_{\tau}^{f}[i]
\in [d_{1},d_{2}]\right)}
( \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \tau[i])
\wedge
\bigwedge_{j < i} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \tau[j]) )\]
\noindent $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies
$\varphi_{1} \: \reach{[d_1,d_2]}{f} \: \varphi_{2}$ iff it satisfies $\varphi_2$ in a location $\ell'$ reachable from $\ell$ through a route $\tau$ with a length $d_{\tau}^{f}[\ell']$ belonging to the interval $[d_1, d_2]$, such that $\tau[0]=\ell$ and all the elements of $\tau$ with index less than $\tau(\ell')$ satisfy $\varphi_1$.
In Figure~\ref{fig:spprop}, we report an example of a reachability property, considering $f$ as the $hop$ function described in Example~\ref{ex:distancefunction}. In the graph, the location $\ell_6$ (meaning the trajectory $\vec{x}$ at time $t$ in position $\ell_6$) satisfies $ {\color{blue!45!black!70!} end\_dev } \: \reach{[0,1]}{hop} \: \router$. Indeed, there exists a route $\tau = \ell_6\ell_5$ such that $d_{\tau}^{hop}[1]=1$, where $\tau[0]=\ell_6$, $\tau[1]=\ell_5$, $\tau[1]$ satisfies the red property (it is a router) and all the other elements of the route satisfy the blue property (they are end-devices). Instead, for example, the location $\ell_8$ does not satisfy the property because it does not satisfy the blue (end-device) property.
\noindent{\bf Escape.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \escape{[d_1, d_2]}{f} \: \varphi, t, \ell)=
\bigvee_{\tau\in Routes(\mathcal{S}(t),\ell)} \linebreak
\bigvee_{\ell'\in \tau:\left(d_{\mathcal{S}(t)}^{f}[\ell,\ell'] \in [d_1, d_2]\right) }
~\bigwedge_{i \leq \tau(\ell')}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \tau[i]).\]
\noindent $(\mathcal{S}, \vec{x}(\ell,t))$
satisfies $\escape{[d_1,d_2]}{f} \: \varphi$ if and only if there exist a route $\tau$ and a location $\ell'\in \tau$ such that $\tau[0]=\ell$ and $d_{\mathcal{S}(t)}^{f}[\tau[0],\ell']$ belongs to the interval $[d_1, d_2]$, while $\ell'$ and all the elements $\tau[0],\dots,\tau[k-1]$ (with $\tau(\ell')=k$) satisfy $\varphi$.
In Fig.~\ref{fig:spprop}, we report an example of an escape property. In the graph, the location $\ell_{10}$ satisfies $ \escape{[2, \infty]}{hop} \: \neg {\color{blue!45!black!70!} end\_dev } $. Indeed, there exists a route $\tau = \ell_{10}\ell_7\ell_8$ such that $\tau[0]=\ell_{10}$, $\tau[2]=\ell_8$, $d_S^{hop}[\ell_{10},\ell_8]=2$ and $\ell_{10}$, $\ell_7$ and $\ell_8$ do not satisfy the blue property, i.e. they are not end-devices. Note that the route $\ell_{10}\ell_{9}$ is not a good route to satisfy the property because the distance $d_S^{hop}[\ell_{10},\ell_{9}]=1$.
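The reach and escape examples above can be checked directly against the path-based semantics by brute force. The Python sketch below is our transcription of the edges and node types of the graph in Figure~\ref{fig:spprop}, with $f$ fixed to the hop distance; it enumerates simple routes, so it is only an illustration of the semantics on a small graph, not the efficient monitoring algorithm presented later.

```python
from collections import deque

# Transcription of the graph in the figure (nodes 1..16).
EDGES = [(1, 8), (2, 7), (8, 6), (8, 7), (7, 10), (7, 5), (3, 10), (6, 5),
         (10, 11), (10, 9), (11, 15), (11, 12), (9, 14), (10, 14), (10, 16),
         (11, 16), (13, 16), (8, 4)]
END_DEV = {1, 2, 3, 4, 6, 12, 13, 14, 15}   # blue
ROUTER = {5, 7, 8, 9, 11, 16}               # red
COORD = {10}                                # green

adj = {n: set() for n in range(1, 17)}
for a, b in EDGES:
    adj[a].add(b)
    adj[b].add(a)

def simple_paths(start):
    """Enumerate all simple routes starting at `start` (graph is small)."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        yield path
        for nxt in adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))

def reach(ell, phi1, phi2, d1, d2):
    # exists a route tau and index i with hop distance i in [d1, d2],
    # phi2 at tau[i] and phi1 at tau[j] for every j < i
    return any(phi2(p[i]) and all(phi1(p[j]) for j in range(i))
               for p in simple_paths(ell)
               for i in range(len(p)) if d1 <= i <= d2)

def hop_dist(src):
    """Shortest hop distances from src (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return dist

def escape(ell, phi, d1, d2):
    # exists a route tau and tau[k] with shortest hop distance from ell
    # in [d1, d2], and phi at tau[i] for every i <= k
    dist = hop_dist(ell)
    return any(d1 <= dist[p[k]] <= d2 and all(phi(p[i]) for i in range(k + 1))
               for p in simple_paths(ell) for k in range(len(p)))

is_end, is_router = END_DEV.__contains__, ROUTER.__contains__
print(reach(6, is_end, is_router, 0, 1))                        # l6: True
print(escape(10, lambda n: n not in END_DEV, 2, float("inf")))  # l10: True
```

Note that escape measures the shortest distance $d_{\mathcal{S}(t)}^{f}[\ell,\ell']$ between the endpoints, not the length of the route itself, which is why the helper computes BFS distances separately from the route enumeration.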
\begin{figure}[ht]
{\small
\center
\begin{tikzpicture}
[scale=.6,auto=left,every node/.style={circle,thick,inner sep=0pt,minimum size=6mm}]
\node (1) [fill=blue!45!black!70!, draw = black] at (-3,-3) {\wh{1}};
\node (2) [fill=blue!45!black!70!, draw = black] at ( 1,-2) {\wh{2}};
\node (3) [fill=blue!45!black!70!, draw = black] at ( 3,-1) {\wh{3}};
\node (4) [fill=blue!45!black!70!, draw = black] at ( -3, 1) {\wh{4}};
\node (5) [fill=red!50!black!70!, draw = black] at ( 1, 2) {\wh{5}};
\node (6) [fill=blue!45!black!70!, draw = black] at (-1, 2) {\wh{6}};
\node (7) [fill=red!50!black!70!, draw = black] at (0, 0) {\wh{7}};
\node (8) [fill=red!50!black!70!, draw = black] at (-2,-1) {\wh{8}};
\node (9) [fill=red!50!black!70!, draw = black] at (3,3) {\wh{9}};
\node (10) [fill=green!45!black!70!, draw = black] at (4,1) {\wh{10}};
\node (11) [fill=red!50!black!70!, draw = black] at (5,0) {\wh{11}};
\node (12) [fill=blue!45!black!70!, draw = black] at (6,-2) {\wh{12}};
\node (13) [fill=blue!45!black!70!, draw = black] at (8,1) {\wh{13}};
\node (14) [fill=blue!45!black!70!, draw = black] at (5,3) {\wh{14}};
\node (15) [fill=blue!45!black!70!, draw = black] at (8,-1) {\wh{15}};
\node (16) [fill=red!50!black!70!, draw = black] at (6.5,1.8) {\wh{16}};
\draw [-] (1) -- (8) node[midway] {};
\draw [-] (2) -- (7) node[midway] {};
\draw [-] (8) -- (6) node[midway] {};
\draw [-] (8) -- (7) node[midway] {};
\draw [-] (7) -- (10) node[midway] {};
\draw [-] (7) -- (5) node[midway] {};
\draw [-] (3) -- (10) node[midway] {};
\draw [-] (6) -- (5) node[midway] {};
\draw [-] (10) -- (11) node[midway] {};
\draw [-] (10) -- (9) node[midway] {};
\draw [-] (11) -- (15) node[midway] {};
\draw [-] (11) -- (12) node[midway] {};
\draw [-] (9) -- (14) node[midway] {};
\draw [-] (10) -- (14) node[midway] {};
\draw [-] (10) -- (16) node[midway] {};
\draw [-] (11) -- (16) node[midway] {};
\draw [-] (13) -- (16) node[midway] {};
\draw [-] (8) -- (4) node[midway] {};
\end{tikzpicture}
\caption{
Example of spatial properties. {\bf Reachability:} $ {\color{blue!45!black!70!} end\_dev } \: \reach{[0,1]}{hop} \: \router$. {\bf Escape:} $ \escape{[2, \infty]}{hop} \: \neg {\color{blue!45!black!70!} end\_dev } $.
{\bf Somewhere}: $\somewhere{[0, 4] }{hop} {\color{green!45!black!70!} coord } $. {\bf Everywhere}: $\everywhere{[0, 2] }{hop} \router$. {\bf Surround:} $ ( {\color{green!45!black!70!} coord } \vee \router ) \surround{[0,3]}{hop} \: {\color{blue!45!black!70!} end\_dev } $. }
\label{fig:spprop}
}
\end{figure}
We can also derive three other spatial operators: {\it somewhere}, {\it everywhere} and {\it surround}.
\noindent{\bf Somewhere.} $ \somewhere{ [0, d] }{f} \varphi := true \reach{[0, d]}{f} \varphi $
is satisfied by $(\mathcal{S} , \vec{x}(\ell,t))$
iff there exists a location satisfying $\varphi$ that is reachable from $\ell$ via a route $\tau$ whose length, computed via the function $f$, belongs to the interval $[0, d]$. In Fig.~\ref{fig:spprop}, all the locations satisfy the property $\somewhere{[0,4]}{hop} {\color{green!45!black!70!} coord } $ because, for every $\ell_i$, there is a path $\tau = \ell_i \dots \ell_{10}$ with length $d_\tau^{hop}(k)\leq 4$, where $\tau[0]=\ell_{i}$, $\tau[k]=\ell_{10}$, and $\ell_{10}$ satisfies the green property.
\noindent{\bf Everywhere.} $ \everywhere{[0,d]}{f} \varphi := \neg \somewhere{[0,d]}{f} \neg \varphi $
is satisfied by $(\mathcal{S}, \vec{x}(\ell,t))$ iff all the locations reachable from $\ell$ via a path, with length belonging to the interval $[0, d]$, satisfy $\varphi$. In Fig.~\ref{fig:spprop}, no location satisfies the property $\everywhere{[0,2]}{hop} \router$ because for each location $\ell_i$ there is a path $\tau=\ell_i\ell_j$ s.t. $\ell_j$ does not satisfy the red property.
\noindent{\bf {Surround}.} $\varphi_{1} \surround{[0, d]}{f} \varphi_{2} := \varphi_{1} \wedge \neg (\varphi_{1}\reach{[0, d]}{f} \neg (\varphi_1 \vee \varphi_{2})) \wedge \neg (\escape{[d, \infty]}{f} \varphi_{1}) $ expresses the topological notion of being surrounded by a $\varphi_2$-region, while being in a $\varphi_{1}$-region, with an additional metric constraint. The operator was introduced in~\cite{CianciaLLM16} as a basic operator, while here it is a derived one. The idea is that one cannot escape from a $\varphi_{1}$-region without passing through a location that satisfies $\varphi_2$ and, in any case, one has to reach a $\varphi_2$-location via a path with a length less than or equal to $d$. In Fig.~\ref{fig:spprop}, the location $\ell_{10}$ satisfies the property $ ( {\color{green!45!black!70!} coord } \: \vee \: \router ) \surround{[0,3]}{hop} \: {\color{blue!45!black!70!} end\_dev } $. In fact, it satisfies the green property, it cannot reach a location satisfying neither the blue nor the green/red property via a path with length less than or equal to 3, and it cannot escape through a path of locations satisfying the green or red properties to a distance of 3 or more.
The operators can be arbitrarily composed to specify complex properties, as we will see in Sections~\ref{sec:ZigBee} and~\ref{sec:epidemic}.
\subsection{Invariance properties of the Euclidean spatial model}
The properties we consider with respect to the Euclidean spatial model are typically local and depend on the relative distance and position among nodes in the plane. As such, they should be invariant with respect to changes of coordinates, i.e. with respect to isometric transformations of the plane. This class of transformations includes translations, rotations, and reflections, and can be described by matrix multiplications of the form (with $\beta, \gamma \in \{-1,+1\}$ accounting for reflections)
\[
\begin{bmatrix}
x'_{\ell} \\
y'_{\ell} \\
1 \\
\end{bmatrix}
=
\begin{bmatrix}
\beta \cos (\alpha) & - \beta \sin (\alpha) & \beta t_x \\
\gamma \sin (\alpha) & \gamma \cos (\alpha) & \gamma t_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x_{\ell} \\
y_{\ell} \\
1 \\
\end{bmatrix}
\]
Invariance of satisfaction of spatial properties holds in STREL, for the Euclidean space model of Definition \ref{def:Euclidean}. Consider more specifically a Euclidean space model $\mathcal{E}(L,\mu, R) = \langle L, \mathbf{W}^{\mu, R} \rangle$ and $\mathcal{E}(L,\mu', R)= \langle L, \mathbf{W}^{\mu', R} \rangle$, obtained by applying an isometric transformation $A$: $\mu'(\ell) = A(\mu(\ell))$. For invariance to hold, we need to further require that the distance predicates used in spatial operators are invariant under isometric transformations. More specifically, for any isometry $A$, we require a distance predicate $d$ on the semiring $\mathbb{R}^{\infty}\times\mathbb{R}^{\infty}$ to satisfy $d((x,y)) = d(A((x,y)))$. This is the case for the norm-based predicates used in the examples, of the form $d((x,y)) = \|(x,y)\|_2 \leq r$.
Notice that the path structure is preserved (the edges given by $R$ are the same), and the truth of isometry-invariant distance predicates along paths in $\mathcal{E}(L,\mu, R)$ and $\mathcal{E}(L,\mu', R)$ is also the same. This straightforwardly implies that the truth value of spatial operators is unchanged by isometry.
\begin{proposition}[Equisatisfiability under isometry] Let $\mathcal{E}(L,\mu, R) = \langle L, \mathbf{W}^{\mu, R} \rangle$
be a Euclidean spatial model and $\mathcal{E}(L,\mu', R)= \langle L, \mathbf{W}^{\mu', R} \rangle$ an isometric transformation of the former. Consider a spatial formula $\varphi_{1} \: \reach{ d}{f} \: \varphi_{2}$ or $\escape{d}{f} \: \varphi_{1}$, where $d$ is an isometry-preserving predicate.
Assume $\mathbf{m} ( \mathcal{S}, \vec{x}, \varphi_{j}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \varphi_{j}, t, \ell)$, $j=1,2$, where $\mathbf{m}$ and $\mathbf{m}'$ are the monitoring functions for the two spatial models. Then it holds that
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \reach{ d}{f} \: \varphi_{2}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \varphi_{1} \: \reach{ d}{f} \: \varphi_{2}, t, \ell)$ and $\mathbf{m}(\mathcal{S}, \vec{x}, \escape{d}{f} \: \varphi_{1}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \escape{d}{f} \: \varphi_{1}, t, \ell)$, for all $\ell$ and $t$.
\end{proposition}
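The key ingredient of the proposition, that norm-based distance predicates are preserved by isometries, can be checked numerically. The following Python sketch applies a rotation-plus-translation (the coordinates and parameters are invented for illustration) and verifies that all pairwise Euclidean distances, and hence any predicate of the form $\|(x,y)\|_2 \leq r$, are unchanged.

```python
from math import cos, isclose, sin, sqrt

# Hypothetical node coordinates and an isometry (rotation + translation).
points = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0), (-2.0, 2.5)]
alpha, tx, ty = 0.7, 5.0, -3.0

def transform(p):
    """Rotate by alpha, then translate by (tx, ty): an isometry of the plane."""
    x, y = p
    return (cos(alpha) * x - sin(alpha) * y + tx,
            sin(alpha) * x + cos(alpha) * y + ty)

def dist(p, q):
    return sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

moved = [transform(p) for p in points]

# Every pairwise distance -- hence every norm-based distance predicate
# used in reach/escape -- is preserved by the isometry.
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        assert isclose(dist(points[i], points[j]), dist(moved[i], moved[j]))
print("all pairwise distances preserved")
```

Since the edge relation $R$ is untouched by the transformation, preserving distances is exactly what makes the monitoring output identical on the two models.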
\section{Monitoring STREL}
\label{sec:alg}
In this section we present a monitoring algorithm that can be used to check whether a given signal satisfies a STREL property.
The proposed algorithm follows an \emph{offline} approach: it takes as input the complete spatio-temporal signal together with the property we want to monitor.
At the end of this section we also briefly discuss a possible alternative approach that can lead to a distributed and \emph{online} monitoring procedure.
In that case, the spatio-temporal signal is not known at the beginning, but is discovered as data are collected from the running system.
\subsection{Offline monitor}
Offline monitoring is performed via a function $\mathsf{monitor}$
that takes as inputs a dynamical spatial model $\mathcal{S}$, a trace $\vec{x}$
and a formula $\phi$ and returns the \emph{piecewise constant
spatio-temporal signal} $\tilde{\sigma}$ representing the monitoring
of $\phi$.
Function $\mathsf{monitor}$ is defined by induction on the syntax of the formula (Algorithm \ref{algo:part1}).
The spatio-temporal signal resulting from the monitoring of an atomic proposition $\mu$ is obtained by applying function $\iota(\mu)$ to the trace $\vec{x}$. The spatio-temporal signals associated with $\neg\varphi$ and $\varphi_1\wedge \varphi_2$ are obtained by applying operators $\odot_{D_2}$ and $\otimes_{D_2}$ to the signals resulting from the monitoring of $\varphi$, and of $\varphi_1$ and $\varphi_2$, respectively, where $\oplus_{D_2}$, $\otimes_{D_2}$ and $\odot_{D_{2}}$ depend on the \emph{signal domain} used to represent satisfaction values.
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%
\begin{algorithm}[tbp]
\caption{Monitoring algorithm}
\label{algo:part1}
\vspace{1mm}
\begin{multicols}{2}
\begin{algorithmic}[1]
\Function{Monitor}{$\mathcal{S}$, $\vec{x}$, $\psi$}
\Case{$\psi=\mu$}
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t\in \mathcal{T}(\vec{x}(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\iota(\mu)(\vec{x}(\ell,t))$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\neg\psi_1$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\odot_{D_2} \tilde{\sigma}_1(\ell,t)$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \wedge \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1(\ell))\cup \mathcal{T}(\tilde{\sigma}_2(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\tilde{\sigma}_1(\ell,t) \otimes_{D_2} \tilde{\sigma}_2(\ell,t)$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \until{[t1,t2]} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\State $\tilde{\sigma}(\ell)=\Call{Until}{t1,t2,\tilde{\sigma}_1(\ell),\tilde{\sigma}_2(\ell)}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \since{[t1,t2]} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\State $\tilde{\sigma}(\ell)=\Call{Since}{t1,t2,\tilde{\sigma}_1(\ell),\tilde{\sigma}_2(\ell)}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \reach{[d_1,d_2]}{f} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1)\cup\mathcal{T}(\tilde{\sigma}_2)$ }
\State $\tilde{\sigma}@t=\Call{Reach}{\mathcal{S}(t),f,d_1,d_2,\tilde{\sigma}_1@t,\tilde{\sigma}_2@t}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \escape{d}{f} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1)\cup\mathcal{T}(\tilde{\sigma}_2)$ }
\State $\tilde{\sigma}@t=\Call{Escape}{\mathcal{S}(t), f, d,\tilde{\sigma}_1@t,\tilde{\sigma}_2@t}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\EndFunction
\end{algorithmic}
\end{multicols}
\end{algorithm}
Monitoring of temporal properties, namely $\varphi_1 \until{[t_{1}, t_{2}]}\varphi_2$ and $\varphi_1 \since{[t_{1}, t_{2}]} \varphi_2$, relies on functions \textproc{Until} and \textproc{Since}. These are defined by using the same approach of~\cite{Donze2013} and~\cite{MalerN13}. However, while their monitoring relies on classical Boolean and arithmetic operators, here the procedure is parametrised with respect to operators $\oplus_{D_2}$ and $\otimes_{D_2}$ of the considered semiring.
\begin{algorithm}[tbp]
\caption{Monitoring function for \emph{reach} operator}
\label{algo:reachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{Reach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$, $d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\If{$d_2\not=\top_{B}$}
\State{} \Return \Call{BoundedReach}{$(L,\mathbf{W})$, $f$, $d_1$, $d_2$ , $\mathbf{s}_1$, $\mathbf{s}_2$}
\Else{}
\State{} \Return \Call{UnboundedReach}{$(L,\mathbf{W})$, $f$, $d_1$, $\mathbf{s}_1$, $\mathbf{s}_2$}
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
To monitor $\varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2}$
we first compute
the signals $\mathbf{s}_1$ and $\mathbf{s}_2$, resulting from the monitoring of $\varphi_1$ and $\varphi_2$. After that, the final result is obtained by aggregating the spatial signals $\mathbf{s}_1@t$ and $\mathbf{s}_2@t$ at each time $t\in \mathcal{T}(\mathbf{s}_1)\cup \mathcal{T}(\mathbf{s}_2)$ by using function \textproc{Reach} defined in Algorithm~\ref{algo:reachmonitoring}. In this function two cases are distinguished: $d_2\not=\top_{B}$ or $d_2=\top_{B}$. In the first case, the resulting monitoring value is calculated via function \textproc{BoundedReach} defined in Algorithm~\ref{algo:boundedreachmonitoring}. Conversely, when $d_2=\top_{B}$, monitoring is performed by relying on function \textproc{UnboundedReach} defined in Algorithm~\ref{algo:unboundedreachmonitoring}.
\noindent \paragraph{\bf \textproc{BoundedReach}} Function \textproc{BoundedReach}, defined in Algorithm~\ref{algo:boundedreachmonitoring}, takes as parameters the spatial model $\langle L,\mathbf{W} \rangle$ at time $t$, the function $f:A\rightarrow B$, used to compute the distances over paths, and the interval $[d_1,d_2]$, describing the reachability bound.
The proposed algorithm is based on a flooding procedure that computes the output signal $\mathbf{s}$.
At line $3$, we initialize the set $Q$; it contains all triples $(\ell, \mathbf{s}_2[\ell], \bot_B)$, where $\bot_B$ is the minimum element of the distance domain $B$ (e.g., if $B=\mathbb{R}_{\geq 0}$, then $\bot_B=0$).
Let us denote $Q_i$ the value of $Q$ after $i$ iterations of loop starting at line $4$.
$Q_i$ contains a triple $(\ell, v, d)$ if and only if there exists a path such that with $i$-steps we reach a distance $d <_{B} d_2$ and a monitored value $v$.
To compute the values in $Q_{i+1}$, for each element $(\ell,v,d)\in Q_i$ we consider the locations $\ell'$ next to $\ell$ at a distance $w$ ($\nextto{\ell'}{w}{\ell}$) to compute the items: $v' = v\otimes \mathbf{s}_1(\ell')$ and $d' = d+_{B}f(w)$. The element $(\ell', v',d')$ is added to $Q_{i+1}$ if $d'<_{B} d_2$, i.e. if the sum of the current distance plus the distance between $\ell$ and $\ell'$ is still less than $d_2$.
When $d' \in [d_1,d_2]$, $\mathbf{s}(\ell')$ is updated to take into account the newly computed value. We recall that $\mathbf{s}$ stores the value of the semantics of the reach operator.
\begin{algorithm}[tbp]
\caption{Monitoring bounded reachability}
\label{algo:boundedreachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{BoundedReach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$,$d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\State $\forall \ell\in L. \mathbf{s}[\ell] = \left\{\begin{array}{ll}
\mathbf{s}_2[\ell] & d_1=\bot_{B}\\
\bot_{D_2} & \mbox{otherwise}
\end{array}\right.
$
\State $Q=\{ (\ell, \mathbf{s}_2[\ell], \bot_{B}) | \ell\in L \}$
\While{ $Q\not=\emptyset$ }
\State{$Q'=\emptyset$}
\ForAll{$(\ell,v,d) \in Q$}
\ForAll{ $\ell': \nextto{\ell'}{w}{\ell}$ }
\State $v'=v~\otimes_{D_2}~\mathbf{s}_1[\ell']$
\State $d'=d~+_{B}~f(w)$
\If{$(d_1\leq d'\leq d_2)$}
\State $\mathbf{s}[\ell'] = \mathbf{s}[\ell']\oplus_{D_2} v'$
\EndIf
\If{$d'< d_2$}
\If{$\exists (\ell',v'',d')\in Q'$}
\State $Q' = (Q'-\{ (\ell',v'',d') \})\cup \{ (\ell',v'\oplus_{D_2} v'',d') \}$
\Else{}
\State $Q' = Q'\cup \{ (\ell',v',d') \}$
\EndIf
\EndIf
\EndFor
\EndFor
\State{$Q=Q'$}
\EndWhile
\State \Return{ $\mathbf{s}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
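To make the flooding concrete, the following Python fragment sketches \textproc{BoundedReach} instantiated on the Boolean semiring ($\oplus_{D_2}=\vee$, $\otimes_{D_2}=\wedge$, $\bot_{D_2}=\mathit{false}$) with the $hop$ distance ($f(w)=1$, $\bot_B=0$). It is an illustrative sketch under our own graph encoding and naming, not the paper's implementation.

```python
# Illustrative sketch of BoundedReach on the Boolean semiring (join = or,
# times = and) with the hop distance f(w) = 1.
# adj: {location: [(neighbour, weight), ...]}, assumed symmetric.
def bounded_reach(adj, d1, d2, s1, s2):
    # output signal: initialised to s2 when d1 is the bottom distance (0), else to False
    s = {l: (s2[l] if d1 == 0 else False) for l in adj}
    # the set Q, keyed by (location, distance); duplicate entries merge with "or"
    q = {(l, 0): s2[l] for l in adj}
    while q:
        q_next = {}
        for (l, d), v in q.items():
            for l_next, _w in adj[l]:
                v_new = v and s1[l_next]      # "times" with the phi1-value at l_next
                d_new = d + 1                 # hop distance: each edge adds 1
                if d1 <= d_new <= d2:
                    s[l_next] = s[l_next] or v_new   # "join"-update of the output
                if d_new < d2:                # keep flooding strictly below d2
                    key = (l_next, d_new)
                    q_next[key] = q_next.get(key, False) or v_new
        q = q_next
    return s
```

On a line graph $a$--$b$--$c$--$d$ with $\mathbf{s}_2$ true only at $d$, the value floods back from $d$ one hop per iteration, conjoining $\mathbf{s}_1$ at every traversed location, exactly as in lines 6--19 of the pseudocode.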
\noindent \paragraph{\bf \textproc{UnboundedReach}} Function \textproc{UnboundedReach}, defined in Algorithm~\ref{algo:unboundedreachmonitoring}, is used when the interval in the \emph{reach} formula is unbounded. In this case the function takes as parameters the spatial model $\langle L,\mathbf{W}\rangle $ at time $t$, the function $f:A\rightarrow B$, used to compute the distances over paths, and the lower bound $d_1$, and returns a \emph{spatial signal} $\mathbf{s}$. If $d_1 = \bot_B$, i.e. when we are considering a totally unbounded reach with no constraints, we initialize $\mathbf{s}=\mathbf{s}_2$. Otherwise, when $d_1\not=\bot_B$, we first have to call function \textproc{BoundedReach}, passing as upper bound $d_1+d_{max}$, where $d_{max}$ is the maximum value that function $f$ can associate with a single edge in $\mathbf{W}$.
After that, $\mathbf{s}$ will contain the \emph{reachability} value computed up to the bound $[d_1,d_1+d_{max}]$ (lines (5)-(6)).
Hence, the computed values are \emph{back propagated} until a fixed point is reached.
This guarantees that, for each location, only the values of $\mathbf{s}_2$ at a path distance in $[d_1,\top_{B}]$ are considered in the computation of the reachability values.
\begin{algorithm}[tbp]
\caption{Monitoring unbounded reachability}
\label{algo:unboundedreachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{UnboundedReach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\If{$d_1=\bot_{B}$}
\State $\mathbf{s}=\mathbf{s}_2$
\Else
\State $d_{max}=\max\{ f(w) | \exists \ell,\ell'\in L: \nextto{\ell}{w}{\ell'} \}$
\State $\mathbf{s}=\Call{BoundedReach}{(L,\mathbf{W}), f, d_1, d_1+_{B} d_{max},\mathbf{s}_1,\mathbf{s}_2}$
\EndIf
\State $T=L$
\While{$T\not=\emptyset$}
\State $T'=\emptyset$
\ForAll{$\ell\in T$}
\ForAll{ $\ell': \nextto{\ell'}{w}{\ell}$ }
\State $v' = (\mathbf{s}[\ell]\otimes_{D_2} \mathbf{s}_1[\ell'])\oplus_{D_2} \mathbf{s}[\ell']$
\If{$v'\not= \mathbf{s}[\ell']$}
\State{$\mathbf{s}[\ell']=v'$}
\State $T'=T'\cup\{ \ell' \}$
\EndIf
\EndFor
\EndFor
\State $T=T'$
\EndWhile
\State \Return{ $\mathbf{s}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
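For the totally unbounded case $d_1=\bot_B$, the back-propagation phase alone suffices, since $\mathbf{s}$ is initialised to $\mathbf{s}_2$. The sketch below instantiates it on the max/min robustness semiring ($\oplus_{D_2}=\max$, $\otimes_{D_2}=\min$); as before, the graph encoding is an illustrative assumption of ours.

```python
# Illustrative sketch of the back-propagation phase of UnboundedReach for the
# case d1 = bottom (initial signal s = s2), on the max/min robustness semiring.
def unbounded_reach(adj, s1, s2):
    s = dict(s2)                 # d1 = bottom: the initial signal is s2
    frontier = set(adj)          # T = L
    while frontier:
        frontier_next = set()
        for l in frontier:
            for l_next, _w in adj[l]:
                # v' = (s[l] "times" s1[l']) "join" s[l'], as in line 14
                v = max(min(s[l], s1[l_next]), s[l_next])
                if v != s[l_next]:
                    s[l_next] = v
                    frontier_next.add(l_next)
        frontier = frontier_next
    return s
```

Updates are monotone with respect to the semiring order and values range over a finite set of max/min combinations of the inputs, so the frontier eventually empties and the fixed point is reached.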
\noindent \paragraph{\bf \textproc{Escape}} The monitoring algorithm for $\escape{[d_{1},d_{2}]}{f:A\rightarrow B}{\varphi}$ is reported in Algorithm~\ref{algo:escapemonitoring}. Given a spatial model $\langle L,\mathbf{W}\rangle $ at time $t$, a distance function $f:A\rightarrow B$, and an interval $[d_1,d_2]$, it computes the spatial signal representing the monitoring value of $\escape{[d_1,d_2]}{f}{\varphi}$ at time $t$.
Function \textproc{Escape} first computes the \emph{distance matrix} $D$ (line 2), obtained from the given space model and distance function $f$. After that, a matrix $e:L\times L\rightarrow D_2$ is computed.
The matrix $e$ is initialised so that all the elements $e[\ell,\ell]$ in the diagonal are equal to $\mathbf{s}_1(\ell)$, while all the other elements are set to $\bot_{D_2}$ (lines 3-4).
After that, the elements of $e$ are iteratively updated by considering the values of the neighbours of each location (lines 6-20). A value $e[\ell_1',\ell_2]$ is updated iff
$\mathbf{s}_1(\ell_1') \otimes_{D_2} e[\ell_1,\ell_2] >_{D_2} e[\ell_1',\ell_2] $, where $\ell_1'$ is a neighbour of $\ell_1$.
The update ends when a fixed point is reached\footnote{We prove that the loop always terminates in Lemma~\ref{lemma:escapecorrectness}.}.
At the end of the loop, the element $e[\ell_1,\ell_2]$ contains the \emph{escape} value from $\ell_1$ to $\ell_2$, as defined in the semantics without the distance constraint. The latter is taken into account at line 23, where the final monitored value $\mathbf{s}$ is computed. For each $\ell$, the expression $\bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \})$ aggregates the values $e[\ell,\ell']$ over all $\ell'$ that satisfy the distance constraint, i.e. such that $ D[\ell,\ell']\in [d_1,d_2]$.
\begin{algorithm}[tbp]
\caption{Monitoring \emph{escape}}
\label{algo:escapemonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{Escape}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$,$d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$}
\State $D = \Call{MinDistance}{L,\mathbf{W},f}$
\State $\forall \ell,\ell'\in L. e[\ell,\ell'] = \bot_{D_2}$
\State $\forall \ell\in L. e[\ell,\ell]=\mathbf{s}_1(\ell)$
\State $T=\{ (\ell,\ell) | \ell\in L \}$
\While{ $T\not= \emptyset$ }
\State $e'=e$
\State $T'=\emptyset$
\ForAll{ $(\ell_1,\ell_2) \in T$ }
\ForAll{ $\ell_1': \nextto{\ell_1'}{w}{\ell_1}$ }
\State{ $v = e[\ell_1',\ell_2]\oplus_{D_2}(\mathbf{s}_1(\ell_1') \otimes_{D_2} e[\ell_1,\ell_2])$}
\If{$v\not=e[\ell_1',\ell_2]$}
\State{$T'=T'\cup \{ (\ell_1',\ell_2) \}$}
\State{$e'[\ell_1',\ell_2]=v$}
\EndIf
\EndFor
\EndFor
\State{$T=T'$}
\State{$e=e'$}
\EndWhile
\State $\mathbf{s}=[]$
\ForAll{ $\ell\in L$ }
\State $\mathbf{s}(\ell)=\bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \})$
\EndFor{}
\State \Return $\mathbf{s}$
\EndFunction
\end{algorithmic}
\end{algorithm}
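The following Python sketch instantiates \textproc{Escape} on the Boolean semiring with the hop distance. For \textproc{MinDistance} we use a simple Floyd--Warshall pass, purely for brevity; a Dijkstra-based implementation would match the $O(m\log |L|)$ cost assumed later. Encoding and names are illustrative assumptions.

```python
# Illustrative sketch of Escape on the Boolean semiring with the hop distance:
# e[l1][l2] becomes True iff some path from l1 reaches l2 through locations
# (endpoints included) that all satisfy s1; the final aggregation keeps only
# targets whose *minimum* distance from l1 lies in [d1, d2].
def escape(adj, d1, d2, s1):
    locs = list(adj)
    inf = float('inf')
    # MinDistance via Floyd-Warshall on hop counts
    D = {l: {l2: (0 if l == l2 else inf) for l2 in locs} for l in locs}
    for l in locs:
        for l2, _w in adj[l]:
            D[l][l2] = 1
    for k in locs:
        for i in locs:
            for j in locs:
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    # e initialised with s1 on the diagonal, bottom elsewhere (lines 3-4)
    e = {l: {l2: (s1[l] if l == l2 else False) for l2 in locs} for l in locs}
    frontier = {(l, l) for l in locs}
    while frontier:                       # fixed-point loop (lines 6-20)
        nxt = set()
        for (l1, l2) in frontier:
            for l1p, _w in adj[l1]:       # l1p is a neighbour of l1
                v = e[l1p][l2] or (s1[l1p] and e[l1][l2])
                if v != e[l1p][l2]:
                    e[l1p][l2] = v
                    nxt.add((l1p, l2))
        frontier = nxt
    # aggregate over targets within the distance bound (line 23)
    return {l: any(e[l][l2] for l2 in locs if d1 <= D[l][l2] <= d2) for l in locs}
```

On a line graph $a$--$b$--$c$--$d$ where only $d$ violates $\mathbf{s}_1$, location $b$ has no $\mathbf{s}_1$-satisfying target at minimum distance $\geq 2$, while $a$ and $c$ do.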
\subsection{Correctness}
In this subsection we discuss the correctness of the algorithms.
\begin{lemma}[BoundedReach correctness]
\label{lemma:boundreachcorrectness}
Given an $A$-spatial model $\mathcal{S}=(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$, and $d_2\not=\top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$ such that $\Call{BoundedReach}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. For any $\ell\in L$, we have that:
\[
\mathbf{s}(\ell)={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{lemma}
\begin{proof}
Let us denote by $\mathbf{s}_{i}$ and $Q_i$ the value of variables $\mathbf{s}$ and $Q$ respectively in Algorithm~\ref{algo:boundedreachmonitoring} after $i$ iterations of the \emph{while-loop} at line $(4)$. The statement follows directly from the following properties and by observing that (since $f(w)>0$ for any $w\in A$) the algorithm terminates after a finite number of iterations:
\begin{enumerate}
\item[$P1$:] if $(\ell,v,d)\in Q_i$ then $d\leq_{B} d_2$;
\item[$P2$:] if $(\ell,v_1,d)\in Q_i$ and $(\ell,v_2,d)\in Q_i$ then $v_1=v_2$;
\item[$P3$:] $(\ell,v,d)\in Q_i$ if and only if
\[
v={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell): d_{\tau}^{f}[i]=d}
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\item[$P4$:] for any $\ell\in L$:
\[
\mathbf{s}_i[\ell]={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{k : k\leq i\wedge \left(d_{\tau}^{f}[k] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{s}_2(\tau[k])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < k}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{enumerate}
Properties $P1$ and $P2$ are direct consequences of instructions at lines $(3)$ and $(13)-(19)$ of Algorithm~\ref{algo:boundedreachmonitoring}. Indeed, $(\ell,v,d)\in Q_0$ if and only if $d=\bot_{B}$ (line $(3)$), while $(\ell,v,d)$ is inserted in $Q'=Q_{i+1}$ if and only if $d<_{B} d_2$ (line $(13)$) and no other $(\ell,v',d)$ is already in $Q'$ (lines $(14)-(18)$).
Property $P3$ can be proved by induction on $i$ by observing that the property is satisfied by $Q_0$ and that for any $i$:
\[
(\ell',v',d')\in Q_{i+1} \Leftrightarrow d'<_{B} d_2\mbox{ and }
v'={\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)=d'}
\left(
\mathbf{s}_1(\ell')
\otimes_{D_2}
v
\right)
\]
From the above, and from inductive hypothesis, we have that:
\[ \small
\begin{array}{rcl}
v' & = & {\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)=d'}
\left(
\mathbf{s}_1(\ell')
\otimes_{D_2}
{\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell): d_{\tau}^{f}[i]=d}
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\right)\\
& = &
{\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell'): d_{\tau}^{f}[i+1]=d'}
\mathbf{s}_2(\tau[i+1])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i+1}
\mathbf{s}_1(\tau[j])
\right)
\end{array}
\]
That is the statement of $P3$.
Finally, we can prove $P4$ by induction on $i$ by using $P3$ and by observing that:
\[
\mathbf{s}_{i+1}[\ell']=
{\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)\in [d_1,d_2]} \mathbf{s}_{i}[\ell']\oplus_{D_2}\left(
\mathbf{s}_1[\ell'] \otimes v\right)
\]
\end{proof}
\begin{lemma}[UnboundedReach correctness]
\label{lemma:unboundreachcorrectness}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, a value $d_1\in B$ ($d_1\not= \top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$ such that $\Call{UnboundedReach}{(L,\mathbf{W}),f,d_1,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. For any $\ell\in L$:
\[
\mathbf{s}(\ell)={\oplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{ \oplus_{D_2}}_{\ell'\in\tau : \left(d_{\tau}^{f}[\ell'] \geq d_{1}\right)}
\left(
\mathbf{s}_2(\ell')
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < \tau(\ell')}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{lemma}
\begin{proof}
Directly from the pseudo code in Algorithm~\ref{algo:unboundedreachmonitoring} and from Lemma~\ref{lemma:boundreachcorrectness}, we can observe that the value $\mathbf{s}$ computed by function $\Call{UnboundedReach}{}$ is the limit ($\mathbf{s} = \lim_{i\rightarrow\infty }\mathbf{s}_{i}
$) of the sequence of signals $\mathbf{s}_i$ such that for any $\ell\in L$:
\[
\mathbf{s}_{i+1}[\ell]=\bigoplus_{\ell'\in L:\nextto{\ell}{w}{\ell'}} \left(\mathbf{s}_{i}(\ell)\oplus (\mathbf{s}_1(\ell)\otimes \mathbf{s}_{i}(\ell'))\right)
\]
The initial spatial signal is $\mathbf{s}_{0}=\mathbf{s}_2$, if $d_1=\bot_{B}$, while it is:
\[
\mathbf{s}_0[\ell]={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{1}+d_{max}]\right)}
\left(
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\noindent
when $d_1\not=\bot_{B}$ and $d_{max}=\max\{ f(w) | \exists \ell,\ell'\in L: \nextto{\ell}{w}{\ell'} \}$. In both cases, the statement follows by applying standard algebra and the properties of $\oplus$ and $\otimes$.
\end{proof}
\begin{lemma}[Escape correctness]
\label{lemma:escapecorrectness}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$, and $d_2\not=\top_{B}$), and a spatial signal $\mathbf{s}_1: L\rightarrow D_2$ such that $\Call{Escape}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1}=\mathbf{s}$. For any $\ell\in L$:
\[
\mathbf{s}(\ell)={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{(L,\mathbf{W})}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{i \leq \tau(\ell')} \mathbf{s}_1(\tau[i])
\]
\begin{proof}
Let us denote by $e_{i}$ the content of data structures $e$ after $i$ iterations of the loop at line $(6)$ of Algorithm~\ref{algo:escapemonitoring}.
We have only to prove the following properties:
\begin{itemize}
\item[$P1$] For any $\ell_1,\ell_2$, $D[\ell_1,\ell_2]=d_{(L,\mathbf{W})}^{f}[\ell_1,\ell_2]$
\item[$P2$] For any $i$:
\[
e_{i}[\ell_1,\ell_2]={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell_1)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell_2)\wedge j\leq i} \mathbf{s}_1(\tau[j])
\]
\item[$P3$] The loop terminates after at most $k=|L|$ iterations and
\[
e_{k}[\ell_1,\ell_2]={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell_1)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell_2)} \mathbf{s}_1(\tau[j])
\]
\end{itemize}
Property $P1$ follows directly from the definition of $d_{(L,\mathbf{W})}^{f}[\ell,\ell']$ and from the fact that $\Call{MinDistance}{L,\mathbf{W},f}$ computes the matrix of minimum distances in $\langle L,\mathbf{W}\rangle$ via $f$. Property $P2$ can be proved by induction on $i$ and follows directly from the code of Algorithm~\ref{algo:escapemonitoring}. Finally, $P3$ is a consequence of the fact that after at most $|L|$ iterations a fixed point is reached, since all the locations have been taken into account.
We can conclude that the statement of the lemma follows directly from properties $P1$, $P2$ and $P3$ above by observing that:
\[
\begin{array}{rcl}
\mathbf{s}(\ell) & = & \bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \}) \\
& = & \bigoplus_{D_2}(\{ e_{k}[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \}) \\
& = & {\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{(L,\mathbf{W})}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell')} \mathbf{s}_1(\tau[j])
\end{array}
\]
\end{proof}
\end{lemma}
\begin{theorem}
Given a dynamical spatial model $\mathcal{S}$, a trace $\vec{x}$
and a formula $\phi$,
\[
\Call{Monitor}{\mathcal{S},\vec{x},\phi}=\tilde{\sigma}_{\phi}^{\mathcal{S},\vec{x}}
\]
\end{theorem}
\begin{proof}
The proof easily follows by induction on $\phi$ by using Lemma~\ref{lemma:boundreachcorrectness}, Lemma~\ref{lemma:unboundreachcorrectness} and Lemma~\ref{lemma:escapecorrectness}.
\end{proof}
\subsection{Complexity}
In this subsection we discuss the complexity of each algorithm.
\begin{proposition}[BoundedReach complexity]
\label{prop:reachboundcomplexity}
Given an $A$-spatial model $\mathcal{S}=(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$, and $d_2\not=\top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$ such that $\Call{BoundedReach}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. We define $d_{min}=\min_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$ and $k=\min\{ i | i*d_{min} > d_{2} \}$, where $i*d_{min}$ indicates the sum of $i$ copies of $d_{min}$. Then the algorithm terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps, where $m$ is the number of edges and $\beta_{d_2}$ is an integer counting the \emph{different distances} accumulated after $k$ steps\footnote{ This value is in practice a constant and depends on the weights associated with edges and on the bound $d_2$. For instance, in the case of function $hop$ of Example~\ref{ex:distancefunction}, $\beta_{d_2}=1$. In general, $\beta_{d_2}$ has the same order of magnitude as the $k$ considered above.}.
\end{proposition}
\begin{proof}
First, we need to compute the upper bound on the number of iterations of the while loop starting at line (4).
Let us denote by $Q_k$ the value of $Q$ after $k$ iterations. If $Q_k=\emptyset$, the loop stops after at most $k$ iterations, and $Q_k$ is empty if no elements are added at that iteration.
An element $(\ell', v', d')$ is not added to $Q_k$ iff $d' \geq d_2$, where $d'=d +_{B} f(w)\geq d +_{B} d_{min}$. Note that, in $Q_0$, $d = \bot_B$. At each iteration, we add a value greater than or equal to $d_{min}$; this means that after $k$ iterations $d' \geq k*d_{min}$, and $k*d_{min} > d_2$ by definition. $Q$ can have at most $\beta_{d_2}\cdot | L |$ elements and, at each iteration of the while loop, for each element in $Q$, we consider the locations connected to it.
This implies that function \Call{BoundedReach}{} terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps.
\end{proof}
\begin{proposition}[UnboundedReach complexity]
\label{prop:reachunboundcomplexity}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, a value $d_1\in B$ ($d_1\not= \top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$ such that $\Call{UnboundedReach}{(L,\mathbf{W}),f,d_1,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$.
Let $d_{min}=\min_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$, $d_{max}=\max_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$ and $k=\min\{ i | i*d_{min} > d_{1}+d_{max} \}$. The algorithm stops after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps, where $m$ is the number of edges and $\beta_{d_2}$ is an integer counting the \emph{different distances} accumulated after $k$ steps. Furthermore, when $d_1=\bot_{B}$, this complexity reduces to $O(|L|\cdot m)$.
\end{proposition}
\begin{proof}
We have already observed in Proposition~\ref{prop:reachboundcomplexity} that the first part of Algorithm~\ref{algo:unboundedreachmonitoring} terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps. We can observe here that the second part of the algorithm (lines $(9)-(21)$) needs at most $|L|\cdot m$ steps. This is because the for-loop at lines $(12)-(18)$ consists of at most $O(m)$ steps (each edge is considered at most twice), and a location can be inserted in $T$ at most $|L|$ times.
Concluding, the algorithm terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)+O(|L|\cdot m)=O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps. When $d_1=\bot_{B}$, the call to \Call{BoundedReach}{} at line $(6)$ is skipped, and the algorithm
terminates after $O(|L|\cdot m)$ steps.
\end{proof}
\begin{proposition}[Escape complexity]
\label{prop:escapecomplexity}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$, and $d_2\not=\top_{B}$), and a spatial signal $\mathbf{s}_1: L\rightarrow D_2$ such that $\Call{Escape}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1}=\mathbf{s}$. The algorithm terminates in at most $O(|L|\cdot m )$ steps, where $m$ is the number of edges.
\end{proposition}
\begin{proof}
The computation of function $\Call{MinDistance}{L,\mathbf{W},f}$ needs $O( m \log(|L|))$ steps.
Moreover, from property $P3$ in the proof of Lemma~\ref{lemma:escapecorrectness}, we have that the loop at line (6) terminates after at most $|L|$ iterations. In each of these iterations, each edge is taken into account at most twice (once for each of the two connected locations). This means that the loop terminates after $O(|L|\cdot m)$ steps.
Finally, to compute the resulting spatial signal, $O(|L|)$ steps are needed for the loop at line $(22)$.
Concluding, the algorithm terminates in $O( m \log(|L|))+O(|L|\cdot m)+O(|L|)$ steps, that is, $O(|L|\cdot m)$.
\end{proof}
We can conclude this section by observing that the number of steps needed to evaluate function $\Call{Monitor}{}$ in Algorithm~\ref{algo:part1} is linear in the size of $\phi$, in the length of the signal, and in the number of \emph{edges} of the spatial model, and quadratic in the number of locations.
\section{Case study: ZigBee protocol monitoring}
\label{sec:ZigBee}
In this section, we consider the running example used in the previous sections and discuss some properties that show the expressiveness and potential of STREL.
Given a MANET with a ZigBee protocol (Example~\ref{ex:zigbee}),
we consider as spatial models both its proximity and connectivity graphs computed with respect to the Cartesian coordinates (Example \ref{ex:manet}). Nodes have three kinds of roles: {\it coordinator}, {\it router} and {\it EndDevice}, as described in Example \ref{ex:zigbee}. Moreover, each device is also equipped with a sensor to monitor its battery level ($X_B$), the humidity ($X_H$) and the pollution ($X_P$) at its position.
The semiring is the union of the \emph{max/min} semiring $\mathbb{R}^{\infty}$ (for the proximity graph) and the \emph{integer} semiring $\mathbb{N}^{\infty}$ (for the connectivity graph). We will also use two types of distances: the ${\it hop}$ and $\Delta$ distances described in Example~\ref{ex:distancefunction}.
Atomic propositions $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \}$ describe the type of nodes. We also consider inequalities on the values that are read from sensors, plus special propositions $@_\ell$ which encode the address of a specific location, i.e. they are true only in the location $\ell$.
In the following, we describe several properties of these ZigBee MANET networks that are easily captured by STREL logic, to exemplify its expressive power.
A class of properties naturally encoded in STREL is related to the connectivity of the network. First, we may be interested in knowing whether a node is properly connected, meaning that it can reach the coordinator through a path of routers:
\begin{equation}
\phi_{connect} = device \reach{[0,1]}{hop} (router \reach{ }{hop} coord )
\end{equation}
This property states that an end device reaches in one step a router that is connected to the coordinator via a path of routers.
We may also want to know if there is a path to the coordinator which is reliable in terms of battery levels, for instance such that all routers on it have a battery level above 50\%:
\begin{eqnarray}
&\phi_{reliable\_router} = ((battery > 0.5) \wedge router) \reach{ }{hop} coord &\nonumber \\
&\phi_{reliable\_connect} = device \reach{[0,1] }{hop} (\phi_{reliable\_router} )&
\end{eqnarray}
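As a sanity check of $\phi_{reliable\_connect}$ in the Boolean case on the connectivity graph, the reach through reliable routers amounts to a backward flooding from the coordinator. The Python sketch below is an illustrative toy under our own encoding (roles as strings, unweighted adjacency), not the monitoring tool itself.

```python
# Toy Boolean check of phi_reliable_connect on a connectivity graph:
# an end device satisfies it iff some neighbour is a "good" router
# (battery > 0.5) from which the coordinator is reachable through good routers.
def reliable_connect(adj, role, battery):
    good = {n for n in adj if role[n] == 'router' and battery[n] > 0.5}
    # good routers adjacent to the coordinator satisfy phi_reliable_router ...
    sat = {n for n in good if any(role[m] == 'coord' for m in adj[n])}
    changed = True
    while changed:                      # ... and the flooding extends this set
        changed = False
        for n in good - sat:
            if any(m in sat for m in adj[n]):
                sat.add(n)
                changed = True
    # an end device is reliably connected iff it is next to such a router
    return {n: role[n] == 'end_dev' and any(m in sat for m in adj[n]) for n in adj}
```

For instance, a device behind a low-battery router fails the property even if it is topologically connected to the coordinator.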
These properties focus on spatial connectivity at a fixed time. We can also add temporal requirements, for instance asking that a broken connection is restored within $h$ time units:
\begin{equation}
\phi_{connect\_restore} = \glob{} (\neg \phi_{connect} \rightarrow \ev{[0,h]}\phi_{connect} )
\end{equation}
Another class of properties of interest concerns the acyclicity of transmissions. To this end, we need to force the connectivity graph to be directed, with edges pointing in the direction of the coordinator (i.e. transmission reduces the distance from the coordinator). With STREL, we can easily detect the absence of a cycle for a fixed location $\ell$. This is captured by
$\phi^{\ell}_{acyclic} = \neg \phi^{\ell}_{cycle}$, where
\begin{equation}
\phi^{\ell}_{cycle} = @_\ell \reach{[0,1]}{hop} (\neg @_\ell \wedge \somewhere{}{hop}@_\ell)
\end{equation}
In order to characterize the whole network as acyclic, we need to take the conjunction of the previous formulae for all locations (or at least for routers, enforcing end devices to be connected only with routers). This is necessary as STREL is interpreted locally, on each location, which forbids us from expressing properties of the whole network with location-unaware formulae. This is the price to pay for efficient monitoring, as global properties of networks require more expressive and computationally expensive logics.
However, we can use the parametrization of STREL and the properties of the Voronoi diagram to specify the global connectivity or the acyclicity of the graph. Indeed, the proximity graph always connects all the locations of the system; hence the property $\everywhere{}{\Delta} \phi$, verified on the proximity graph, holds iff $\phi$ holds in all the locations of the system.
Up to now we have presented qualitative properties, depending on the type of node. If we express properties of sensor measurements, we can also consider a quantitative semantics, returning a measure of robustness of (dis)satisfaction. As an example, we can monitor \eqref{eq:f1} whether in each location a high value of pollution eventually implies, within $T$ time units, a high value of humidity, or \eqref{eq:f2} in which locations it is possible to find a `safe' route, where both the humidity and the pollution are below a certain threshold. We can also check \eqref{eq:f3} whether a location which is not safe is at distance at most $d$ from a location which is safe. Finally \eqref{eq:f4}, we can check whether a target device (identified by $X_S=1$) is reachable from all the locations in less than $d$ hops.
\begin{eqnarray}
&\phi_{PH} = (X_P > 150) \Rightarrow \ev{[0,T]} (X_H > 100) \label{eq:f1} &\\
&\phi_{Safe} =\glob{[0,T]} \escape{[d, \infty]}{\Delta} \: {(X_H < 90) \wedge (X_P < 150) } \label{eq:f2} &\\
&\phi_{some} = \somewhere{[0,d]}{\Delta} \phi_{Safe} \label{eq:f3} &\\
&\phi_{target} = \everywhere{}{hop} \somewhere{[0,d]}{hop}\: { (X_S = 1) } \label{eq:f4} &
\end{eqnarray}
\section{Case study: epidemic spreading}
\label{sec:epidemic}
In this section, we investigate a case study based on an epidemic spreading model for a population with a disease transmitted via direct contact, like flu or COVID-19. The simplest models of epidemic spreading are based on a mean-field assumption and treat the population as a homogeneous entity, assuming that any two individuals have the same probability of entering into contact \cite{epidemiology2019}. A more accurate description, instead, models potential contacts in a more explicit way: the population is represented as a network of interacting agents \cite{network_epidemic2015}, in which nodes are the agents and links are the potential contacts. Such a network can be static (links are fixed) or dynamic (links change during the simulation) and possibly adaptive \cite{network_epidemic2015}.
These kinds of models are particularly useful when dealing with scenarios in which the number of potentially infective contacts, and thus of secondary infections, can vary a lot between individuals, the so-called super-spreading scenario \cite{superspreading_2005}, which seems to be the relevant one also to capture the spreading of the COVID-19 disease \cite{superspreading_COVID_2020}.
In our case study, we consider a discrete-time model composed of two contact networks: a static one, describing contacts with work colleagues, family, close relatives and friends, and a dynamic one, modelling less frequent interaction events, like going to the restaurant, the cinema, or the disco.
The static network is defined by a degree distribution, assumed to be the same for each node, and modelled as a lognormal distribution with cutoff (mean 10, 99 percentile 50, cutoff at 200).\footnote{Contact distributions are constructed to resemble contact data collected by a regional government in Italy, which is not publicly available.} To generate the network, we sample a degree for each node and then sample the network relying on the \textsc{expected\_degree\_graph} method of networkX Python library \cite{networkX}.
This network is sampled once and not modified during simulations.
The dynamic event network, instead, is resampled at every simulation step (essentially corresponding to a day). Here, we additionally choose a subset of nodes which will be affected by the events. Each node is assigned a random probability of taking part in an event (chosen uniformly among the following frequencies: once every month, once every two weeks, once every week, twice per week) and, at each step, the node is included in the event network with that probability. Then, each active node receives a degree sampled from a different degree distribution with a longer tail (lognormal with
mean 10 and 99 percentile 100, cutoff at 1000), to model super-spreading effects.\footnote{Note that, as we rely on distributions with cut-off, there is no practical difference in using a lognormal or a heavier-tailed distribution like a power law.}
In order to simulate our discrete-time model, with each step corresponding to one day, we further give each agent one of four different states (\textbf{S}usceptible, \textbf{E}xposed but not infective, \textbf{I}nfective, \textbf{R}ecovered), and sample the duration in days of the transitions from E to I and from I to R according to a gamma distribution taken from the COVID-19 literature \cite{merler_2020}. Infection of a Susceptible node can happen if it is in contact with an Infective node, with a probability $p$ which is edge dependent and sampled for each edge according to a Beta distribution with mean 0.05 (which roughly gives an R0 close to 2, as observed in the second phase of the COVID-19 epidemic in Lombardia, Italy). We assume independence among different edges when modelling the spreading of infection.
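The daily update just described can be sketched as follows. This is an illustrative Python fragment, not the simulator used for the experiments; the gamma shape parameters and the data layout are our own assumptions, not the calibrated values from \cite{merler_2020}.

```python
# Illustrative one-day update of the network SEIR model: infection crosses each
# S-I edge independently with its edge probability p.
import random

def step(adj_p, state, days_left):
    """adj_p: {node: [(neighbour, p), ...]};  state: {node: 'S'|'E'|'I'|'R'};
    days_left: remaining days in the current E or I compartment."""
    new_state = dict(state)
    for n, st in state.items():
        if st == 'S':
            for m, p in adj_p[n]:
                if state[m] == 'I' and random.random() < p:
                    new_state[n] = 'E'
                    # E-duration: assumed gamma(3, 1) shape, for illustration only
                    days_left[n] = max(1, round(random.gammavariate(3.0, 1.0)))
                    break
        elif st in ('E', 'I'):
            days_left[n] -= 1
            if days_left[n] <= 0:
                new_state[n] = 'I' if st == 'E' else 'R'
                if st == 'E':
                    # I-duration: same assumed distribution
                    days_left[n] = max(1, round(random.gammavariate(3.0, 1.0)))
    return new_state
```

Iterating this function over the (static plus resampled dynamic) contact network yields trajectories like the one shown in Figure~\ref{fig:simulation}.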
At each time step, the spatial model of the epidemic network is defined by the pair $\langle L, \mathbf{W}\rangle$, where the set of locations $L$ corresponds to the set of agents and the proximity function $\mathbf{W}$ is such that $(\ell_i,w,\ell_j)\in \mathbf{W}$ if and only if there is a probability greater than zero that $\ell_i$ and $\ell_j$
are in contact.
The weight $w$ is derived from the sampled probability described above, for both the static and the dynamic network.
More specifically, $w=-\ln(p_{\ell_i,\ell_j}(t))$, where $p_{\ell_i,\ell_j}(t)$ is the infection probability at time $t$.
Hence, the higher $w$ is, the lower the probability that agent $\ell_i$ is infected by agent $\ell_j$. We consider two types of distances here: the $hop$ distance, counting the number of hops, and the $weight$ distance, summing the edge weights $w$.
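Since the weight distance sums $w=-\ln p$ along a path, it equals $-\ln$ of the product of the edge infection probabilities; a weight radius $r$ therefore corresponds to a path probability of $e^{-r}$. A tiny sketch of this identity:

```python
# The weight distance of a path sums the edge weights w = -ln(p), so it equals
# -ln of the product of the edge infection probabilities along the path.
import math

def path_weight(probabilities):
    return sum(-math.log(p) for p in probabilities)
```

For example, a single edge with $p=0.05$ has weight $-\ln 0.05 \approx 3$, which is why a radius $r=3$ corresponds to the mean edge probability used below.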
The spatio-temporal trace of our epidemic network is $x: L \rightarrow \mathbb{T} \rightarrow \mathbb{Z}$, with a single signal associating with each agent $\ell$, at each time $t$, the state $x(\ell, t) \in \mathbb{S} = \{\textbf{Susceptible}, \textbf{Exposed}, \textbf{Infected}, \textbf{Recovered} \}=\{\textbf{S}, \textbf{E}, \textbf{I}, \textbf{R} \}$.
To give an idea of the behaviour of this model, we plot in Figure~\ref{fig:simulation} the number of nodes in each state at each time for a random simulation.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{img/simulation.pdf}
\caption{Number of nodes in each state at each time for a random simulation.}
\label{fig:simulation}
\end{figure}
The first property that we consider is:
\begin{equation}
\phi_{dangerous\_days} = \glob{} (\textbf{Susceptible} \reach{[0,1]}{hop} (\ev{[0,2]}(\textbf{Infected})) \Rightarrow \ev{[0,7]} \textbf{Infected} )
\end{equation}
$\phi_{dangerous\_days}$ is satisfied in a location when it is always true (at each time step) that, if a susceptible individual is directly connected with an individual that will eventually be infected within the next 2 days, then it will eventually be infected within the next $7$ days. Considering only the static network, this property is satisfied by $447\pm12.5$ nodes of the network (over 500 experiments); considering only the dynamic network, it is satisfied by $350\pm70.5$ nodes. As expected, daily contacts are more dangerous than casual contacts, and the dynamic network shows more variability than the static one.
The second property that we consider is:
\begin{equation}
\phi_{safe\_radius} = \glob{} ( \everywhere{[0,r]}{weight} (\neg \textbf{Infected}) \Rightarrow \glob{[0,T]} (\neg \textbf{Infected}) )
\end{equation}
$\phi_{safe}$ holds in a location if it is always true that, when all the connected locations at a weight distance less than $r$ (i.e. with infection probability $\leq 10^{-r}$) are not infected, implies that this location will remain not infected for the next T days. With this property we can study the relation between the probability of being infected from connected nodes and being actually an infected individual after a number of days. If a location satisfies the property, it means that being in a radius $r$ of not infected individuals it prevents from infection.
If a location does not satisfy the property, it means that there is some infected node connected with it at distance greater than $r$ that causes its infection within the next $T$ days. Setting $T=7$ days, we study the variation of $r$ versus the number of nodes that satisfy the property (in a network with 500 nodes). Figure~\ref{fig:safe_radius} shows the results. We also report a second scale with the corresponding probability value. We can see that
with $r=3$, which corresponds to a connection probability equal to $0.05$ (the mean of our Beta distribution), only half of the nodes satisfy the property, and to have almost all nodes satisfy the property we need to consider a very large radius. This means that even edges with very large values of $r$ in the network will not prevent the spread of the disease.
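The qualitative effect of the radius can be reproduced in miniature with a hypothetical Python sketch (the three-node graph, the infection traces and the single-time-step reading of the premise below are illustrative simplifications of ours, not the simulation behind Figure~\ref{fig:safe_radius}):

```python
import heapq

def weight_dist(graph, src):
    """Dijkstra over edge weights w = -log10(p):
    small distance = high infection probability."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def satisfies_safe_radius(graph, infected_at, node, r, T):
    """Simplified reading of phi_safe_radius: if every location within
    weight distance <= r is uninfected at time 0, then `node` must stay
    uninfected up to time T (the premise is checked only at t = 0 here)."""
    dist = weight_dist(graph, node)
    premise = all(not infected_at[0][v] for v, d in dist.items() if d <= r)
    if not premise:
        return True  # implication holds vacuously
    return all(not infected_at[t][node] for t in range(T + 1))

# Toy network: a --1-- b --5-- c, with c initially infected.
graph = {"a": [("b", 1.0)], "b": [("a", 1.0), ("c", 5.0)], "c": [("b", 5.0)]}
infected_at = [
    {"a": False, "b": False, "c": True},   # t = 0
    {"a": False, "b": True,  "c": True},   # t = 1
    {"a": True,  "b": True,  "c": True},   # t = 2
]
# With a small radius the infected node c lies outside the checked
# neighbourhood, yet a still gets infected: the property fails.
assert satisfies_safe_radius(graph, infected_at, "a", r=2, T=2) is False
# Only a radius large enough to "see" c makes the premise false.
assert satisfies_safe_radius(graph, infected_at, "a", r=7, T=2) is True
```

A node failing the property for small $r$ but satisfying it for large $r$ mirrors the trend discussed above: only large radii account for all potentially infectious links.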
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{img/phi1.png}
\caption{Number of nodes that satisfy property $\phi_{safe\_radius}$ versus parameter $r$.}
\label{fig:safe_radius}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We presented STREL, a formal specification language to express and to monitor spatio-temporal requirements over a dynamic network of spatially-distributed CPS. Our monitoring framework considers the CPS topological configuration as a weighted graph, with the nodes modeling the CPS entities and the edges representing their arrangement according to predefined
spatial relations (e.g. proximity relation, connectivity, euclidean distance, etc.). Both nodes and edges contain attributes that model physical and logical quantities that can change over time.
STREL combines the Signal Temporal Logic~\cite{Maler2004} with two spatial operators \emph{reach} and \emph{escape} that are interpreted over
the weighted graph. Other spatial modalities such as \emph{everywhere}, \emph{somewhere} and \emph{surround} can also be derived from them.
We demonstrated how STREL can be interpreted according to different (Boolean, real-valued) semantics, defining a unified framework capturing all of them, based on constraint semirings. We further provided a generic offline monitoring algorithm based on such a semiring formulation. We showed several examples of requirements that we can monitor in our framework, considering two different case studies.
As future work, we plan to investigate an algorithm for online monitoring of STREL properties. Furthermore, we aim to extend our framework with new features such as the possibility to automatically synthesize spatio-temporal controllers from the STREL specification, or to automatically provide an explanation of a failure, enabling us to detect the responsible components when a STREL requirement is violated.
\section*{Acknowledgment}
This research has been partially supported by the Austrian FWF projects ZK-35 and W1255-N23, by the Italian PRIN project ``SEDUCE'' n. 2017TWRCNB and by the Italian PRIN project ``IT-MaTTerS'' n. 2017FTXR7S.
\bibliographystyle{alpha}
\section{Introduction}
From contact tracing devices preventing the epidemic spread to vehicular networks and smart cities, \emph{cyber-physical systems} (CPS) are pervasive information and communication technologies augmenting human perception and control over the physical world. CPS consist of engineering, physical and biological systems that are tightly interacting with computational entities through sensors and actuators. CPS are networked at every scale and they are connected to the Internet (Internet of Things), enabling humans and other software systems to interoperate with them through the World Wide Web.
A prominent example can be found in modern automotive systems, where the extensive integration of sensor networks and embedded computational units has led to the development of various driving assistance features that support the driver during repetitive maneuvers and protect the passengers from hazardous situations. Furthermore, the upcoming 5G networks will soon also empower vehicles with the possibility to exchange information with each other and with the road infrastructure about the position and speed of vehicles, driving conditions on a particular road, accidents, or traffic jams.
Thus, this dynamic network infrastructure promises to enhance further autonomous driving applications, to reduce traffic congestion and to improve safety.
The rise of this disruptive and revolutionary technology comes at a price: these systems are becoming so ubiquitous that unexpected failures can potentially manifest, causing fatal accidents and diminishing the public's trust.
Their safety-critical nature~\cite{RatasichKGGSB19} requires the engineers to verify their correct execution with respect to rigorously defined spatial and temporal requirements.
Detecting anomalies and failures at design time is extremely challenging. Exhaustive
verification techniques such as model checking
are limited to very small instances due to state-space explosion. A more practical approach is to simulate a digital replica of the CPS (its digital twin)
and to test the behaviour under different scenarios.
Requirements are generally
expressed in a formal specification language that can be
monitored online (during the simulation) or offline over the simulation traces.
This approach, also called
\emph{specification-based monitoring}~\cite{bartocci2018specification}, is nowadays the core functionality of several other computer-aided verification and design techniques for CPS such as statistical model checking~\cite{YounesS06,BortolussiMS2016ic,DavidLLMW11},
falsification analysis~\cite{SilvettiPB17,YaghoubiF2017acc,sankaranarayanan_model-based_2017,AbbasFSIG13tecs}, failure explanation/debugging~\cite{BartocciFMN18,BartocciMMMN19,BartocciMMMNP20} and parameter synthesis~\cite{DonzeKR09hscc,DonzeCLL09,Bortolussi2015,TCS2015}.
The majority of specification languages and tools available for CPS supports only the monitoring of temporal properties. Examples are Metric Temporal Logic (MTL)~\cite{Koymans90},
Signal Temporal Logic (STL)~\cite{mn13,maler2016runtime}, Timed Regular Expressions (TRE)~\cite{AsarinCM02} and Shape Expressions~\cite{BartocciDGMNQ20}.
However, spatio-temporal patterns play a key role in the understanding of how emergent behaviors can arise
from local interactions in such complex systems of systems.
Thus, an important problem is then how to specify in a formal language spatio-temporal requirements~\cite{NenziBBLV20}, and how to efficiently detect/monitor them on the actual CPS or on the simulation of its digital twin.
In this paper, we present the Spatio-Temporal Reach and Escape Logic (STREL), a spatio-temporal specification language originally introduced in~\cite{Bartocci17memocode} and recently supported by the \textsc{MoonLight}~\cite{BartocciBLNS20} tool. STREL enables the specification of spatio-temporal requirements and their monitoring over the execution of mobile and spatially distributed components.
In this framework, space is represented as a weighted graph, describing the topological configurations in which the single
components (nodes of the graph) are arranged. Both nodes and edges have attributes modelling physical and logical quantities that can change in time.
STREL extends Signal Temporal Logic~\cite{Maler2004} with two spatial operators \emph{reach} and \emph{escape} from which it is possible to derive other spatial modalities such as \emph{everywhere}, \emph{somewhere} and \emph{surround}. These operators enable a monitoring procedure where the satisfaction of the property at each location depends only on the satisfaction of its neighbours. Furthermore, we show how STREL can be interpreted according to different (Boolean, real-valued) semantics based on constraint semirings, an algebraic structure suitable for constraint satisfaction and optimisation.
The use of STREL is by no means restricted to CPS as application domain, but it is capable of capturing many interesting notions in other spatio-temporal systems, including biological systems (e.g. Turing patterns~\cite{BartocciBMNS15, bartocci2014,NenziBCLM18}), epidemic spreading scenarios (in real space or on networks)~\cite{network_epidemic2015}, or ecological systems. In these cases, monitoring algorithms typically act as key subroutines in statistical model checking
~\cite{BartocciBMNS15}.
This paper extends our preliminary work~\cite{Bartocci17memocode} as follows:
\begin{itemize}
\item we guide the reader through the whole paper using a running example to facilitate the comprehension of our framework at each step;
\item we simplify the definition of dynamical spatial model and of the spatio-temporal semantics;
\item we extend spatial STREL operators to support interval constraints on distances;
\item we propose new monitoring algorithms, more efficient and able to work with interval constraints. We also provide correctness proofs and discuss their algorithmic complexity in detail;
\item we consider a second case study where we monitor
spatio-temporal requirements in STREL on a simulated epidemic spreading model for COVID-19.
\end{itemize}
The rest of the paper is organized as follows. We discuss
the related work in Section~\ref{sec:related}. In Section~\ref{sec:runningExample} we present a running example, while in Section~\ref{sec:definitions} we introduce the considered model of space and the type of signals. The STREL specification language is presented in Section~\ref{sec:ReachSTL} and the offline monitoring algorithms are discussed in Section~\ref{sec:alg}. In Sections~\ref{sec:ZigBee} and \ref{sec:epidemic} we discuss the two case studies: the ZigBee protocol and the COVID-19 epidemic spreading. Finally, Section~\ref{sec:conclusion} presents the conclusions.
\section{Related Work}
\label{sec:related}
Monitoring spatial-temporal properties over CPS executions was initially addressed in~\cite{Talcott08} where Talcott introduced the notion of spatial-temporal event-based model.
In this model, actions (e.g. the exchange of messages, or physical changes) are labeled with time and space stamps and they trigger events that are further processed by a monitor.
In~\cite{TVG09}
the model was further extended to enable different space representations. Although the approaches in~\cite{Talcott08,TVG09}
provide a valuable algorithmic framework,
they lack a specification language and the monitors cannot be automatically generated.
In the context of \emph{collective adaptive systems}~\cite{CianciaLLM16},
several mathematical structures such as \emph{topological spaces}, \emph{closure spaces}, \emph{quasi-discrete closure spaces} and \emph{finite graphs}~\cite{NenziBCLM15}
have been employed to reason about spatial relations (e.g. \emph{closeness} and \emph{neighborhood}) of interacting agents. Another line of research~\cite{GrosuSCWEB09,spatel,bartocci2014} proposed the use of \emph{quad-tree} spatial structures~\cite{FinkelB74} to reason about fractal-like spatial patterns or spatial
superposition properties in a grid, such as electrical spiral formation in cardiac tissues~\cite{GrosuSCWEB09} or power management requirements in a smart grid~\cite{spatel}. However, quad-trees are spatial data structures that are not invariant with respect to isometric transformations such as translation, rotation and reflection:
two spatial patterns that are equivalent modulo an isometric transformation
are usually represented by two different quad-trees. To overcome this limitation, in our
approach we have considered weighted graphs
where the edges represent the spatial relations between the involved entities.
Spatial logics have been the subject of theoretical investigation for at least a couple of decades~\cite{handbookSP}.
The work in~\cite{handbookSP} focuses on theoretical aspects, expressivity and decidability, often in continuous space. Less attention has been devoted to more practical aspects, especially to verification procedures.
For example, model checking techniques for spatial models have been considered more recently.
In~\cite{ciancia2014}, the authors introduce a {\it Spatial Logic for Closure Spaces} (SLCS) that leverages a discrete and topological notion of space, based on closure spaces~\cite{Gal99}. An extension of the SLCS with temporal aspects, as ``snapshot'' models, was proposed later in~\cite{CianciaGGLLM15}. This extends SLCS with the temporal modality of the branching logic {\it Computation Tree Logic}~\cite{EmersonH83}. However, the algorithms to check snapshot models are very computationally expensive and are susceptible to state-space explosion problems because the spatial formulae need to be recomputed at every state.
It is also worth mentioning \textsc{VoxLogicA}~\cite{BelmonteCLM19,BuonamiciBCLM20}, a recently developed spatial model checking tool for image analysis. However, this tool is not suitable for monitoring CPS, because it is specialized for medical imaging and does not take time into consideration.
Relevant works are also those on spatial logics for process algebra with locations such as in~\cite{CC04,Cardelli2000mobile}, or spatial logic for rewrite theories~\cite{rew1}.
Other logic-based formalisms have been introduced
to reason about the topological~\cite{BC02} or directional~\cite{BS10} aspects of locally
interacting components.
In the topological approach~\cite{BC02}, the entities are sets of points in the space and the relation between them is preserved under translation, scaling and rotation.
If the relation between objects depends on their relative position, then the spatial model supports directional reasoning. These logics are highly computationally complex~\cite{BS10} or even undecidable~\cite{MR99}, and indeed impractical to use.
Monitoring spatial-temporal behaviors has recently started to receive more attention
with several works such as {\it Spatial-Temporal Logic} (SpaTeL)~\cite{spatel}, {\it Signal Spatio-Temporal Logic} SSTL~\cite{NenziBCLM15}, {\it Spatial Aggregation Signal Temporal Logic}
(SaSTL)~\cite{MaBLS020,sastl2021} and {\it Spatio-Temporal Reach and Escape Logic} STREL~\cite{Bartocci17memocode}.
SpaTeL is the unification of
{\it Signal Temporal Logic}~\cite{Maler2004} (STL) and {\it Tree Spatial Superposition Logic} (TSSL)
introduced in~\cite{bartocci2014} to classify and detect spatial patterns that are expressed using quad trees~\cite{FinkelB74}. This allows one to capture very complex spatial structures, but at the price of a complex formulation of spatial properties, which are in practice only learned from some template images.
SSTL instead combines STL with two spatial modalities, one expressing that something is true \emph{somewhere} nearby, and the other capturing the notion of being \emph{surrounded} by a region that satisfies a given spatio-temporal property. SSTL has
two possible semantics: a Boolean and a real-valued one.
SSTL~\cite{NenziBCLM18} operates over a static topological space while STREL on the contrary can monitor entities over a dynamic topological space.
Furthermore, STREL generalizes SSTL spatial modalities with the \emph{reach} and \emph{escape} operators, simplifying the monitoring, which can be computed locally with respect to each node. SaSTL~\cite{MaBLS020,sastl2021} is a
recently proposed specification language that augments STL with two new logical operators for expressing spatial aggregation and spatial counting characteristics that are typical in monitoring spatio-temporal requirements in a smart city. Similarly to SSTL, SaSTL also operates only on a static topological space. Finally, another important key characteristic of STREL with respect to all the aforementioned
spatio-temporal specification languages is the possibility to define the semantics using constraint semiring algebraic structures. This provides the possibility to elegantly define a unified monitoring framework for both the qualitative and quantitative semantics (similarly to~\cite{JaksicBGN18} for the STL case). Moreover, it opens our framework to the possibility to define new semantics for STREL by just defining and plugging in a new semiring algebraic structure, without the need to redefine the monitoring algorithm.
\section{Running Example: A Mobile ad hoc sensor network}
\label{sec:runningExample}
A mobile ad-hoc sensor network (MANET) can consist of up to ten thousand mobile devices connected wirelessly, usually deployed to monitor environmental changes such as pollution, humidity, light and temperature.
Each sensor node is equipped with a sensing transducer, a data processor, a radio transceiver and an embedded battery. A node can move independently in any direction and can therefore change its links to other devices frequently.
Two nodes can communicate with each other if their Euclidean distance is at most their communication range, as depicted in Fig.~\ref{fig:proxconnect}~(right).
Moreover, the nodes can be of different type and their behaviour and communication can depend on their types. In the next section we consider the simplest MANET with all nodes of the same type, while in Section~\ref{sec:ReachSTL} we will differentiate them to describe more complex behaviours.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{img/p1}
\includegraphics[scale=0.5]{img/c1}
\caption{Proximity graph (left) and Connectivity graph (right)}
\label{fig:proxconnect}
\end{figure}
\section{Spatial Models, Signals and Traces}
\label{sec:definitions}
In this section, we introduce the model of space we consider,
and the type of signals that the logic specifies.
\subsection{Constraint Semirings}
An elegant and general way to represent the result of monitoring is based
on \emph{constraint semiring}. This is an algebraic structure that consists
of a domain and two operations named \emph{choose} and \emph{combine}.
Constraint semirings are subclass of semirings which have been shown
to be very flexible, expressive and convenient for a wide range of problems,
in particular for optimisation and solving problems with soft constraints
and multiple criteria~\cite{BMR97}, and in model checking~{\protect{\cite{LM05}}}.
\begin{definition}[semiring]
A \emph{semiring} is a tuple $\langle A, \oplus, \otimes, \bot, \top \rangle$ composed by
a set $A$, two operators $\oplus$, $\otimes$ and two constants $\bot$,
$\top$ such that:
\begin{itemize}
\item $\oplus : A \times A \rightarrow A$ is an associative, commutative operator to ``choose'' among values\footnote{We let
$x\oplus y$ to denote $\oplus(\{ x , y\})$.},
with $\bot$ as unit element ($\bot \oplus a = a, \forall a \in A$).
\item $\otimes : A \times A \rightarrow A$ is an associative operator to ``combine'' values with $\top$ as unit element ( $\top \otimes a = a, \forall a \in A$) and $\bot$ as absorbing element ($\bot \otimes a = \bot, \forall a \in A$ )
\item $\otimes$ distributes over $\oplus$;
\end{itemize}
\end{definition}
\begin{definition}[constraint semiring]
A \emph{constraint semiring}
is a semiring $\langle A, \oplus, \otimes, \bot, \top \rangle$ such that:
\begin{itemize}
\item $\oplus$ is defined over $2^A$, idempotent ($a \oplus a = a$ for all $a\in A$) and has $\top$ as absorbing element ($\top \oplus a = \top$)
\item $\otimes$ is commutative
\item $\sqsubseteq$, which is defined as $a\sqsubseteq b$ iff {$a\oplus b=b$},
provides a complete lattice $\langle A , \sqsubseteq , \bot, \top \rangle$.
\end{itemize}
We say that a \emph{constraint semiring} $A$ is \emph{idempotent} if and only if the combine operator $\otimes$ is also idempotent, i.e. $a\otimes a = a$. Moreover, we say that a \emph{semiring}
$A$ is \emph{total} when $\sqsubseteq$ is a {\emph{total order}}.
\end{definition}
With an abuse of notation we sometimes refer to a semiring
$\langle A, \oplus,\otimes , \bot, \top \rangle$ with the carrier $A$
and to its components by subscripting them with the carrier, i.e.,
$\oplus_A$, $\otimes_A$, $\bot_A$ and $\top_A$. For the sake of a lighter
notation we drop the subscripts when clear from the context.
\begin{example}\label{ex:semirings}
Typical examples of semirings are\footnote{We use $\mathbb{R}^{\infty}$ (resp. $\mathbb{N}^{\infty}$) to denote $\mathbb{R}\cup\{-\infty,+\infty\}$ (resp. $\mathbb{N}\cup\{\infty\}$).}:
\begin{itemize}
\item the Boolean semiring $\langle \{\mathit{true},\mathit{false}\}, \vee, \wedge, \mathit{false}, \mathit{true} \rangle$;
\item the tropical semiring $\langle \mathbb{R}_{\geq 0}^{\infty},\emph{min},+,+\infty,0 \rangle$;
\item the max/min semiring: $\langle \mathbb{R}^{\infty}, \emph{max},\emph{min}, -\infty, +\infty \rangle$ ;
\item the integer semiring: $\langle \mathbb{N}^{\infty}, \emph{max},\emph{min}, 0, +\infty \rangle$.
\end{itemize}
Boolean, max/min and integer semirings are \emph{idempotent}, while the tropical semiring is not. All the above semirings are \emph{total}.
\end{example}
One of the advantages of \emph{semirings} is that they can be easily composed. For instance, if $A$ and $B$ are two semirings, one can consider the \emph{cartesian product} $\langle A\times B, \oplus, \otimes, (\bot_A,\bot_B), (\top_A,\top_B)\rangle$ where the operations are applied elementwise.
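To make the algebraic interface concrete, the example semirings can be encoded generically as follows (an illustrative Python sketch of ours; the \texttt{Semiring} class and constant names are not part of any STREL tool API):

```python
from dataclasses import dataclass
from typing import Any, Callable
import math

@dataclass(frozen=True)
class Semiring:
    """A semiring <A, choose, combine, bot, top>."""
    choose: Callable[[Any, Any], Any]   # "oplus": selects among values
    combine: Callable[[Any, Any], Any]  # "otimes": composes values
    bot: Any                            # unit element of choose
    top: Any                            # unit element of combine

BOOLEAN  = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
TROPICAL = Semiring(min, lambda a, b: a + b, math.inf, 0.0)
MAXMIN   = Semiring(max, min, -math.inf, math.inf)

# Unit laws from the definition: bot (+) a = a and top (x) a = a.
assert TROPICAL.choose(TROPICAL.bot, 3.0) == 3.0
assert TROPICAL.combine(TROPICAL.top, 3.0) == 3.0
assert BOOLEAN.choose(BOOLEAN.bot, True) is True
```

The cartesian product of two such semirings is obtained by pairing the carriers and applying each operation componentwise, exactly as described above.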
\subsection{Spatial model}
Space is represented via a graph with edges having a weight from a set $A$.
We consider directed graphs (undirected graphs can be obtained by using a symmetric proximity function).
\begin{definition}[$A-$spatial model]
An $A-$\emph{spatial model} $\mathcal{S}$ is a pair $\langle L, \mathbf{W}\rangle$ where:
\begin{itemize}
\item $L$ is a set of \emph{locations}, also named \emph{space universe};
\item $\mathbf{W}\subseteq L\times A\times L$ is a \emph{proximity function} associating at most one label $w \in A$ with each distinct pair $\ell_1,\ell_2\in L$.
\end{itemize}
\end{definition}
We will use $\mathbb{S}_{A}$ to denote the set of $A$-\emph{spatial models}, while $\mathbb{S}^{L}_{A}$ indicates the set of $A$-\emph{spatial models} having $L$ as a set of locations. In the following, we will equivalently write $(\ell_1,w,\ell_2)\in \mathbf{W}$ as $\mathbf{W}(\ell_1,\ell_2)=w$ or $\nextto{\ell_1}{w}{\ell_2}$, saying that $\ell_1$ is \emph{next to} $\ell_2$ with weight $w \in A$.
\begin{example} An $\mathbb{R}_{\geq 0}^{\infty}$-spatial model on the \emph{tropical semiring} (see Example~\ref{ex:semirings}) can be used to represent standard {\it weighted graphs} as in Figure~\ref{fig:spmodel}. $L$ is the set of nodes and the proximity function $\mathbf{W}$ defines the weight of the edges, e.g. $\mathbf{W}(\ell_2,\ell_7)= \mathbf{W}(\ell_7,\ell_2) =5$.
{\small
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}
[scale=.6,auto=left,every node/.style={circle,thick,inner sep=0pt,minimum size=6mm}]
\node (1) [draw = black] at (-1,-1) {$\ell_1$};
\node (2) [draw = black] at ( -1,1) {$\ell_2$};
\node (3) [draw = black] at ( -2,3) {$\ell_3$};
\node (4) [draw = black] at ( -5, 1){$\ell_4$};
\node (5) [draw = black] at ( 2, 2) {$\ell_5$};
\node (6) [draw = black] at (-4, 2) {$\ell_6$};
\node (7) [draw = black] at (1, 0) {$\ell_7$};
\node (8) [draw = black] at (-3,0) {$\ell_8$};
\node (9)[draw = black] at (4,1) {$\ell_9$};
\draw [-] (1) -- (2) node[midway] {2};
\draw [-] (1) -- (8) node[midway] {1};
\draw [-] (2) -- (7) node[midway] {5};
\draw [-] (2) -- (3) node[midway] {1};
\draw [-] (4) -- (6) node[midway] {4};
\draw [-] (8) -- (6) node[midway] {3};
\draw [-] (7) -- (9) node[midway] {7};
\draw [-] (7) -- (5) node[midway] {2};
\draw [-] (3) -- (5) node[midway] {3};
\end{tikzpicture}
\end{center}
\caption{Example of a weighted undirected graph; e.g. $\mathbf{W}(\ell_2,\ell_7)= \mathbf{W}(\ell_7,\ell_2) =5$ }
\label{fig:spmodel}
\end{figure}
}
\end{example}
A special class of spatial models are the ones based on \emph{Euclidean spaces}.
\begin{definition}[Euclidean spatial model]
\label{def:Euclidean}
Let $L$ be a set of locations, $R\subseteq L\times L$ a (reflexive) relation and $\lambda: L\rightarrow \mathbb{R}^{2}$ a function mapping each location to a point in $\mathbb{R}^{2}$, we let $\mathcal{E}(L,R,\lambda)$ be the $\mathbb{R}^{\infty}\times\mathbb{R}^{\infty}$-spatial model\footnote{$\mathbb{R}^{\infty}$ is the \emph{max/min} semiring considered in Example~\ref{ex:semirings}.} $\langle L, \mathbf{W}^{\lambda, R}\rangle$ such that:
\begin{displaymath}
\mathbf{W}^{\lambda,R}=\{ (\ell_1,\lambda(\ell_1)-\lambda(\ell_2),\ell_2) | (\ell_1,\ell_2)\in R \}
\end{displaymath}
\label{def:euclisomod}
\end{definition}
Note that we label edges with a 2-dimensional vector $w$ describing how to reach $\ell_2$ from $\ell_1$, i.e., $\lambda(\ell_1) - w = \lambda(\ell_2)$. This allows us to compute the Euclidean distance between $\ell_1$ and $\ell_2$ as $\| w \|_2$ and, as we will see, also the Euclidean distance of any pair of locations connected by a path, not necessarily by a line in the plane.
\begin{example}
\label{ex:manet}
When considering a MANET, we can easily define different proximity functions for the same set of locations, where each location represents a mobile device.
Given a set of $n$ reference points in a two-dimensional Euclidean plane, a Voronoi
diagram~\cite{Aurenhammer1991} partitions the plane into a set of $n$ regions, one per reference point,
assigning each point of the plane to the region corresponding to the closest reference point.
The dual of the Voronoi diagram is the proximity graph or Delaunay triangulation~\cite{Delaunay1934}.
In Figure~\ref{fig:proxconnect} (left) we can see an example of Voronoi diagram (in blue) and proximity graph (in red).
The proximity function can then be defined with respect to the Cartesian coordinates, as in Definition~\ref{def:euclisomod}:
\begin{math}
\mathbf{W}^{\mu,R}(\ell_i,\ell_j)=\mu(\ell_i)-\mu(\ell_j)=(x_i,y_i)-(x_j,y_j)= (x_i - x_j , y_i -y_j)
\end{math}, where
$(x_i,y_i)$ are the plane coordinates of the location $\ell_i$.
The proximity function can be also equal to a value that depends of other specific characteristics or behaviors of our nodes. For instance, Fig.~\ref{fig:proxconnect}~(right) represents the connectivity graph of MANET. In this case a location $\ell_i$ is next to a location $\ell_j$ if and only if they are within their communication range.
\end{example}
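Definition~\ref{def:euclisomod} can be made concrete with a small sketch: each related pair of locations is labelled with the difference of their coordinates, and the Euclidean distance is recovered as the norm of that label (illustrative Python of ours, not tool code):

```python
import math

def euclidean_spatial_model(coords, relation):
    """Build W^{lambda,R}: label each related pair (l1, l2)
    with the coordinate difference lambda(l1) - lambda(l2)."""
    W = {}
    for (l1, l2) in relation:
        (x1, y1), (x2, y2) = coords[l1], coords[l2]
        W[(l1, l2)] = (x1 - x2, y1 - y2)
    return W

coords = {"l1": (0.0, 0.0), "l2": (3.0, 4.0)}
W = euclidean_spatial_model(coords, [("l1", "l2"), ("l2", "l1")])
dx, dy = W[("l1", "l2")]
assert math.hypot(dx, dy) == 5.0  # ||w||_2 is the Euclidean distance
```

Note that, as in the definition, the label encodes how to reach $\ell_2$ from $\ell_1$, so opposite edges carry opposite vectors.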
Given an $A$-spatial model we can define \emph{routes}.
\begin{definition}[route]
Let $\mathcal{S}=\langle L,\mathbf{W}\rangle$ be an $A$-spatial model; a \emph{route} $\tau$
is an infinite sequence $\ell_0 \ell_1\cdots \ell_k\cdots$ in $L^{\omega}$ such that for any $i\geq 0$ there exists $w\in A$ with $\nextto{\ell_i}{w}{\ell_{i+1}}$.
\end{definition}
Let $\tau=\ell_0 \ell_1\cdots \ell_k\cdots$ be a route, $i\in \mathbb{N}$ and $\ell \in L$; we use:
\begin{itemize}
\item $\tau[i]$ to denote the $i$-th node $\ell_i$ in $\tau$;
\item $\tau[i..]$ to indicate the suffix route $\ell_i \ell_{i+1} \cdots$;
\item $\ell \in \tau$ when there exists an index $i$ such that $\tau[i]=\ell$, while we use $\ell\not\in \tau$ if this index does not exist;
\item $\tau(\ell)$ to denote the first occurrence of $\ell$ in $\tau$:
\[
\tau(\ell)=\left\{
\begin{array}{ll}
\min\{ i | \tau[i]=\ell \} & \mbox{if $\ell\in \tau$}\\
\infty & \mbox{otherwise} \\
\end{array}
\right.
\]
\end{itemize}
We also use $Routes(\mathcal{S})$ to denote the set of routes in $\mathcal{S}$, while $Routes(\mathcal{S},\ell)$ denotes the set of routes starting from $\ell \in L$.
We can use routes to define the \emph{distance} between two locations in a \emph{spatial model}. This distance is computed via an appropriate function $f$ that combines all the weights in a route into a value taken from an appropriate \emph{totally ordered} monoid $B$.
\begin{definition}[Distance Domain]
\label{def:distDom}
We say that $(B,\bot_B,\top_B,+_{B},\leq_{B})$ is a \emph{distance domain} whenever $\leq_{B}$ is a total order relation over $B$ where $\bot_{B}$ is the minimum, $\top_{B}$ is the maximum, and $(B,\bot_B,+_{B})$ is a monoid. Given a distance domain $B$, we will use $\bot_{B}$, $\top_{B}$, $+_B$ and $\leq_B$ to denote its elements.
\end{definition}
\begin{definition}[Distance Function and Distance over paths]
Let $\mathcal{S}=\langle L,\mathbf{W}\rangle$ be an $A$-spatial model, $\tau$ a route in $\mathcal{S}$, and $\langle B,\bot_{B},\top_{B},+_{B},\leq_{B} \rangle$ a \emph{distance domain}; we call $f:A\rightarrow B$ a \emph{distance function}, associating elements of $A$ with the distance domain $B$. The distance $d_{\tau}^{f}[i]$ up to index $i\in \mathbb{N}^{\infty}$ is defined as follows:
\[
d_{\tau}^{f}[i]= \begin{cases}
\bot_{B} & i=0 \\
\top_{B} & i=\infty \\
f(w) +_{B} d_{\tau[1..]}^{f}[i-1] & (i>0)\mbox{ and } \nextto{\tau[0]}{w}{\tau[1]}
\end{cases} \\
\]
\noindent
Given a location $\ell\in L$, the distance over $\tau$ up to $\ell$ is then $d_{\tau}^{f}[\ell] = d_{\tau}^{f}[\tau(\ell)]$ if $\ell\in \tau$, while it is $\top_{B}$ if $\ell\not\in \tau$.
\end{definition}
\begin{example}
\label{ex:distancefunction}
Considering again a MANET, one could be interested in different types of distances, e.g.,
\emph{counting} the number of \emph{hops}, or distances induced by the weights of the Euclidean space structure.
\noindent
To count the number of hops, we can simply use the function
$hop: A \rightarrow \mathbb{N}^{\infty}$, taking values in the distance domain $\langle \mathbb{N}^{\infty}, 0, \infty, +, \leq \rangle$:
\[
hop(w)=1
\]
and in this case $d^{hop}_\tau[i]=i$.
Considering the proximity function $\mathbf{W}^{\mu,R}(\ell_i,\ell_j)$ computed from the Cartesian coordinates and the distance domain $\langle \mathbb{R}^{\infty}, 0, \infty, +, \leq \rangle$, we can use the Euclidean distance $\Delta(x,y)= \| (x,y) \|$, where $(x,y)$ are the coordinates of the vectors returned by $\mathbf{W}^{\mu,R}$.
It is easy to see that for any route $\tau$ and for any location $\ell \in L$ in $\tau$, the function $d_{\tau}^{\Delta}(\ell)$ yields the sum of the lengths of the edges in $\mathbb{R}^{2}$ connecting $\ell$ to $\tau[0]$.
Given a distance function $f:A\rightarrow B$, the distance between two locations $\ell_1$ and $\ell_2$ in an $A$-spatial model is obtained by choosing the minimum distance along all possible routes starting from $\ell_1$ and ending in $\ell_2$:
\[d^{f}_{\mathcal{S}}[\ell_1,\ell_2] = \min\left\{ d^{f}_{\tau}[\ell_2] | \tau\in Routes(\mathcal{S},\ell_1) \right\}.
\]
\begin{example}
\label{ex:distanceLocations}
Consider again the distance functions defined for a MANETS. For \emph{hop}, we are taking the minimum hop-length over all paths connecting $\ell_1$ and $\ell_2$, resulting in the shortest path distance.
\end{example}
\end{example}
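Both distance functions, together with the minimisation over routes, can be sketched with a Dijkstra-style search over the mapped edge labels (an illustrative Python sketch of ours, under the assumption that $+_B$ is monotone, which holds for the hop and Euclidean domains):

```python
import heapq
import math

def distance(W, f, src):
    """d^f_S[src, l]: minimum over routes of the f-mapped, +-combined
    edge labels; W is a list of (l1, w, l2) triples."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for (a, w, b) in W:
            if a == u:
                nd = d + f(w)
                if nd < dist.get(b, math.inf):
                    dist[b] = nd
                    heapq.heappush(pq, (nd, b))
    return dist

# Edge labels are coordinate-difference vectors, as in the Euclidean model.
W = [("l1", (3.0, 4.0), "l2"), ("l2", (0.0, 2.0), "l3"), ("l1", (9.0, 0.0), "l3")]
hop = lambda w: 1                 # hop distance: every edge counts 1
delta = lambda w: math.hypot(*w)  # Euclidean length of the edge vector

assert distance(W, hop, "l1")["l3"] == 1     # the direct edge l1 -> l3
assert distance(W, delta, "l1")["l3"] == 7.0 # via l2: 5.0 + 2.0 < 9.0
```

The two assertions show that different distance functions can select different optimal routes between the same pair of locations.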
\subsection{Spatio-Temporal Signals}
\begin{definition}
A
{\emph{signal domain}} is a tuple $\langle D, \oplus,\otimes, \odot,\bot,\top\rangle$ where:
\begin{itemize}
\item $\langle D, \oplus,\otimes,\bot, \top\rangle$, is an \emph{idempotent semiring};
\item $\odot: D\rightarrow D$, is a \emph{negation function} such that:
\begin{itemize}
\item $\odot\top =\bot$;
\item $\odot\bot = \top$;
\item $\odot(v_1\oplus v_2)=(\odot v_1)\otimes (\odot v_2)$
\item $\odot(v_1\otimes v_2)=(\odot v_1)\oplus (\odot v_2)$
\item for any $v\in D$, $\odot ( \odot v ) = v$.
\end{itemize}
\end{itemize}
\end{definition}
In this paper we will consider two \emph{signal domains}:
\begin{itemize}
\item the Boolean signal domain $\langle \{ \top , \bot \}, \vee, \wedge, \neg, \bot, \top \rangle$ for qualitative monitoring;
\item {the max/min signal domain $\langle \mathbb{R}^{\infty}, \max, \min, -, \bot, \top \rangle$} for quantitative monitoring.
\end{itemize}
For signal domains we will use the same notation and notational conventions introduced for semirings.
\begin{definition} Let $\mathbb{T}=[0, T] \subseteq \mathbb{R}_{\geq 0}$ be a time domain and $\langle D, \oplus,\otimes, \odot ,\bot ,\top \rangle$ a \emph{signal domain}; a \emph{temporal $D$-signal} $\nu$ is a function
$\nu: \mathbb{T}\rightarrow D$.
\noindent
Consider a finite sequence:
\[
\tilde{\tsign} = [(t_{0}, d_0),\ldots,(t_{n}, d_{n})]
\]
such that for all $i$: $t_i\in \mathbb{T}$, $t_i<t_{i+1}$ and $d_i\in D$.
We let $\tilde{\tsign}$ denote a \emph{piecewise constant temporal $D$-signal} in $\mathbb{T}=[t_0, T]$, that is
\[
\tilde{\tsign}(t) = \begin{cases}
d_i & \text{for } t_{i} \leq t < t_{i+1}, \\
d_n & \text{for } t_{n} \leq t \leq T.
\end{cases}
\]
\end{definition}
Given a \emph{piecewise constant temporal signal} $\tilde{\tsign}=[(t_{0}, d_0),\ldots,(t_{n}, d_{n})]$ we will use $\mathcal{T}( \tilde{\tsign} )$ to denote the set $\{ t_0,\ldots, t_n \}$ of \emph{time steps} in $\tilde{\tsign}$; $start(\tilde{\tsign})$ to denote $t_0$; while we will say that $\tilde{\tsign}$ is \emph{minimal} if and only if for any $i$, $d_i\not=d_{i+1}$.
We will also let $\tilde{\tsign}[ t=d ]$ denote the signal obtained from $\tilde{\tsign}$ by adding the element $(t,d)$.
Finally, if $\nu_1$ and $\nu_2$ are two $D$-temporal signals, and $op: D\times D\rightarrow D$, $\nu_1~op~\nu_2$ denotes the signal associating with each time $t$ the value $\nu_1(t)~op~\nu_2(t)$. Similarly, if $op:D_1 \rightarrow D_2$, $op~\nu_1$ denotes the $D_2-$signal associating with $t$ the value $op~ \nu_1(t)$.
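A piecewise constant temporal signal can be stored as its finite list of time steps; sampling and the pointwise lifting of an operator $op$ then look as follows (a Python sketch with hypothetical names; time steps are assumed sorted and the result is not minimised):

```python
import bisect

def at(signal, t):
    """Value of a piecewise constant signal [(t0, d0), ..., (tn, dn)]
    at time t: the value attached to the last time step <= t."""
    times = [ti for ti, _ in signal]
    i = bisect.bisect_right(times, t) - 1
    return signal[max(i, 0)][1]

def pointwise(op, s1, s2):
    """nu1 op nu2: sample both signals at the union of their time
    steps and apply `op` pointwise."""
    steps = sorted({t for t, _ in s1} | {t for t, _ in s2})
    return [(t, op(at(s1, t), at(s2, t))) for t in steps]
```

Since both inputs are piecewise constant, sampling at the union of their time steps is enough to represent the combined signal exactly.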
\begin{definition}[Spatial $D$-signal] Let $L$ be a \emph{space universe}, and $\langle D, \oplus,\otimes, \odot ,\bot ,\top\rangle$ a signal domain. A \emph{spatial $D$-signal} is a function $\mathbf{s}: L\rightarrow D$.
\end{definition}
\begin{definition}[Spatio-temporal $D$-signal]
Let $L$ be a space universe, $\mathbb{T}=[0, T]$ a time domain, and $\langle D, \oplus,\otimes, \odot,\bot,\top\rangle$ a signal domain. A spatio-temporal $D$-signal is a function
\[ \sigma: L \rightarrow \mathbb{T} \rightarrow D \]
\noindent
such that $\sigma(\ell)=\nu$ is a temporal signal that returns a value $\nu(t) \in {D}$ for each time $t \in \mathbb{T}$. We say that $\sigma$ is \emph{piecewise constant} when, for any $\ell$, $\sigma(\ell)$ is a \emph{piecewise constant temporal signal}. \emph{Piecewise constant} signals are denoted by $\tilde{\sigma}$. Moreover, we will use $\mathcal{T}(\tilde{\sigma})$ to denote $\bigcup_{\ell}\mathcal{T}(\tilde{\sigma}(\ell))$. Finally, we let $\tilde{\sigma}@t$ denote the spatial signal associating each location $\ell$ with $\tilde{\sigma}(\ell,t)$.
\end{definition}
Given a spatio-temporal signal $\sigma$, we will use $\sigma@t$
to denote the \emph{spatial signal} at time $t$, i.e. the signal $\mathbf{s}$ such that $\mathbf{s}(\ell)=\sigma(\ell)(t)$, for any $\ell \in L$.
Different kinds of signals can be obtained by changing the signal domain $D$. Signals with $D= \{ true , false \}$ are called \emph{Boolean signals}; signals with $D = \mathbb{R}^{\infty}$ are called real-valued or \emph{quantitative signals}.
\begin{definition}[$D$-Trace]
Let $L$ be a space universe, a {\it spatio-temporal $D$-trace} is a function
$\vec x: L \rightarrow \mathbb{T} \rightarrow D_1 \times \cdots \times D_n$
such that, for any $\ell\in L$, $\vec{x}(\ell)=(\nu_1,\ldots,\nu_n)$ is a vector of temporal signals.
In the rest of the paper we will use $\vec{x}(\ell,t)$ to denote $\vec{x}(\ell)(t)$.
\end{definition}
\begin{example}
We can consider a $(\mathbb{R} \times \mathbb{R})$-spatio-temporal trace of our sensor network as $\vec x: L \rightarrow \mathbb{T} \rightarrow \mathbb{R} \times \mathbb{R}$ that associates a pair of temporal signals $\vec x(\ell)= (\nu_B,\nu_T)$ with each location, where $\nu_B$ and $\nu_T$ respectively correspond to the temporal signals of the battery and the temperature in location $\ell$, and each signal has domain $\langle \mathbb{R}, \max, \min, -, \bot, \top\rangle$.
\end{example}
We work with spatial models that can dynamically change their configuration. For this reason, we need to define a function that returns the spatial configuration at each time.
\begin{definition}[Dynamical $A$-Spatial Model]
Let $L$ be a spatial universe; a {\it dynamical $A$-spatial model} is a function $\mathcal{S} : \mathbb{T}\rightarrow \mathbb{S}^{L}_{A}$ associating each element of the time domain $\mathbb{T}$ with an $A$-spatial model $\mathcal{S}(t)=\langle L, \mathbf{W}\rangle$ that describes the spatial configuration of locations.
\end{definition}
With an abuse of notation, we use $\mathcal{S}$ both for a dynamical spatial model and for a static spatial model, where, for any $t$, $\mathcal{S} =\mathcal{S}(t)$.
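Operationally, a dynamical spatial model can be represented as a step function over a finite set of snapshots, e.g. (a Python sketch; `snapshots` is a hypothetical sorted list of (time, adjacency) pairs):

```python
def dynamical_model(snapshots):
    """S: T -> spatial model, piecewise constant over the snapshots:
    S(t) is the last spatial configuration recorded at or before t."""
    def S(t):
        current = snapshots[0][1]
        for ti, adj in snapshots:
            if ti <= t:
                current = adj
            else:
                break
        return current
    return S
```

A static model is then simply the special case of a single snapshot.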
\begin{example}
Let us consider a MANET with a proximity graph.
Figure~\ref{fig:voronoimobility} shows two different snapshots, $\mathcal{S}(t_1)=\langle L,\mathbf{W}_1 \rangle$ and $\mathcal{S}(t_2)=\langle L,\mathbf{W}_2 \rangle$, of the dynamical spatial model $\mathcal{S}$ at times $t_1$ and $t_2$. We can see that locations $\ell_1$ and $\ell_2$ change their positions; this also changes the Voronoi diagram and the proximity graph.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{img/voronoi1}
\includegraphics[scale=0.4]{img/voronoi2}
\caption{Two snapshots of a spatial model with 7 locations $\ell_1,\dots,\ell_7$ that move in a 2D Euclidean space. The plane is partitioned using a Voronoi Diagram ({\color{blue} blue}). In {\color{red} red} we have the proximity graph.}
\label{fig:voronoimobility}
\end{figure}
\end{example}
\section{Spatio-temporal Reach and Escape Logic}
\label{sec:ReachSTL}
In this section, we present the {\it Spatio-Temporal Reach
and Escape Logic}~(STREL), an extension of the {\it Signal Temporal Logic}.
We define the syntax and the semantics of STREL, describing
in detail the spatial operators and their expressiveness.
The syntax of STREL is given by
%
{\small
\[
\varphi :=
\mu \mid
\neg \varphi \mid
\varphi_{1} \wedge \varphi_{2} \mid
\varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2} \mid
\varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2} \mid
\varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2} \mid
\escape{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi
\]
}
where $\mu$ is an {\it atomic predicate} ($AP$), {\it negation} $\neg$ and {\it conjunction} $\wedge$ are the standard Boolean connectives, $\until{[t_{1},t_{2}]}$ and $\since{[t_{1},t_{2}]}$ are the {\it Until} and the {\it Since} temporal modalities, with $[t_{1},t_{2}]$ a real positive closed interval. For more details about the temporal operators, we refer the reader to~\cite{MalerN13, Maler2004, Donze2013}.
The spatial modalities are the {\it reachability} $\reach{[d_{1},d_{2}]}{f:A\rightarrow B}$ and the {\it escape} $\escape{[d_{1},d_{2}]}{f:A\rightarrow B}$ operators, with $f:A \rightarrow B$ a \emph{distance function} (see Example~\ref{ex:distancefunction}), $B$ a \emph{distance domain}, and $d_{1},d_{2}\in B$ with $d_1\leq_{B} d_2$.
In what follows we will omit the type info about function $f$ when it is clear from the context or where it does not play any role.
The reachability operator $\phi_1 \reach{[d_1, d_2]}{f}\phi_2$ describes the behaviour of reaching a location satisfying property $\phi_2$, at a distance from the initial location that belongs to $[d_1, d_2]$, passing only through locations that satisfy $\phi_1$.
The escape operator $\escape{[d_1, d_2]}{f}\phi$, instead, describes the possibility of escaping from a certain region via a route passing only through locations that satisfy $\phi$, with the distance between the starting location of the path and the last one belonging to the interval $[d_1, d_2]$.
As customary, we can derive the {\it disjunction} operator $\vee$ and the future {\it eventually} $\ev{[t_{1},t_{2}]}$ and {\it always} $\glob{[t_{1},t_{2}]}$ operators from the until temporal modality, and the corresponding past variants from the since temporal modality, see~\cite{MalerN13} for details.
We can also define three other derived spatial operators: the {\it somewhere} $\somewhere{\leq d}{f:A\rightarrow B}\phi$ and the {\it everywhere} $\everywhere{\leq d}{f:A\rightarrow B}\phi$, which describe the behaviour of some or of all locations at a certain distance from a specific point, and the {\it surround}, which expresses the topological notion of being surrounded by a $\phi_2$-region, while being in a $\phi_1$-region, with additional metric constraints. A more thorough discussion of the spatial operators will be given after introducing the semantics.
\subsection{Semantics}
The semantics of STREL is evaluated point-wise at each time and each location. We stress that each STREL formula $\varphi$ abstracts from the specific domain used to express the satisfaction value of $\varphi$.
These, of course, are needed to define the semantics. In the following, we assume that $D_1$ is the domain of the spatio-temporal traces, $D_2$ is the semiring where the logic is evaluated and $B$ is a distance domain as defined in Definition~\ref{def:distDom}.
\begin{definition} [Semantics]
\label{generalsemantics}
Let $\mathcal{S}$ be a dynamical $A$-spatial model, $D_1$ and $D_2$ be two signal domains, and $\vec x$ be a {\it spatio-temporal $D_1$-trace} for $L$.
The $D_2$-monitoring function $\mathbf{m}$ of $\vec x$ is recursively defined in Table~\ref{tab:monitoring}.
\end{definition}
\begin{table*}
\begin{center}
\begin{tabular}{rcl}
\small
$\mathbf{m}( \mathcal{S}, \vec{x}, \mu, t, \ell)$ & $=$ & $\iota(\mu,\vec{x}(\ell,t))$ \\[.2cm]
$\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)$ & $=$ & $\odot_{D_{2}} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi, t, \ell)$ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell)$ & $=$ & $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \otimes_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell)$ & $=$ & ${\bigoplus_{D_2}}_{t' \in [t+t_{1}, t+t_{2}]} \big (\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t', \ell) \otimes_{D_2} {\bigotimes_{D_2}}_{t'' \in [t, t']} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) $ \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell)$ & $=$ &
${\bigoplus_{D_2}}_{t' \in [t-t_{2}, t-t_{1}]} \big (\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t', \ell) \otimes_{D_2} {\bigotimes_{D_2}}_{t'' \in [t', t]} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) $ \\[.2cm]
$\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2}, t, \ell)$ & $=$ \\
\multicolumn{3}{r}{
${\bigoplus_{D_2}}_{\tau\in Routes(\mathcal{S}(t),\ell)}
~~{ \bigoplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \tau[i])
\otimes_{D_{2}}
{\bigotimes_{D_2}}_{j < i}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \tau[j])
\right)$} \\[.2cm]
$\mathbf{m}( \mathcal{S}, \vec{x}, \escape{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi, t, \ell)$ & $=$ &
${\bigoplus_{D_2}}_{\tau\in Routes(\mathcal{S}(t),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{\mathcal{S}(t)}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{i \leq \tau(\ell')}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \tau[i])$ \\[.2cm]
\end{tabular}
\end{center}
\caption{Monitoring function.}
\label{tab:monitoring}
\end{table*}
Given a formula $\phi$, the function $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)$ corresponds to the evaluation of the formula at time $t$ in the location $\ell$.
The procedure is exactly the same for different choices of the formula evaluation domain; only the operators have to be interpreted according to the chosen semirings and signal domains.
In particular, the choice of the signal domain $D_2$ produces different types of semantics.
In this paper, we consider two signal domains: $\mathbb{B}$ and $\mathbb{R^\infty}$, giving rise to qualitative and quantitative monitoring, corresponding respectively to a Boolean answer value and real satisfaction value.
For the
Boolean signal domain ($D_2 = \langle \{ \top , \bot \}, \vee, \wedge, \neg, \bot, \top \rangle$),
we say that $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies a formula $\phi$ iff $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)= \top$.
For the {max/min signal domain $\langle \mathbb{R}^{\infty}, \max, \min, -, \bot, \top\rangle$}, we say that $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies a formula $\phi$ iff
$\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell) > 0$.
In the following, we will use $\tilde{\sigma}^{\mathcal{S},\vec{x}}_{\phi}$ to denote the spatio-temporal $D_2$-signal such that, for any $t$ and $\ell$, $\mathbf{m}( \mathcal{S}, \vec{x}, \phi, t, \ell)=\tilde{\sigma}^{\mathcal{S},\vec{x}}_{\phi}(\ell)(t)$.
We now describe the semantics in detail, using
the sensor network example as the system on which we specify our properties; in particular, we will use the graph in Figure~\ref{fig:spprop} to describe the spatial operators.
\begin{example}[ZigBee protocol]
\label{ex:zigbee}
In Fig.~\ref{fig:spprop}, the graph represents a MANET. In particular, we consider nodes with three different roles, such as the ones implemented in the ZigBee protocol: {\it coordinator}, {\it router} and {\it EndDevice}. The Coordinator node $( {\color{green!45!black!70!} coord } )$, represented in green in the graph, is unique in each network and is responsible for initializing the network. After the initialisation of the network has started, the coordinator behaves as a router.
The Router node $(\router)$, represented in red in the graph, acts as an intermediate router, passing on data from other devices. The EndDevice node $( {\color{blue!45!black!70!} end\_dev } )$, represented in blue, can communicate
only with a parent node (either the Coordinator or a Router) and it is unable to relay data from other devices.
Nodes move in space and the figure corresponds to the spatial configuration at a fixed time $t$.
As spatio-temporal trace, let us consider a $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \} \times \mathbb{R}$-trace $\vec x: L \rightarrow \mathbb{T} \rightarrow \{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \} \times \mathbb{R}$ denoting the pair (kind of node, level of battery), i.e. $\vec x (\ell, t)= ( {\color{green!45!black!70!} coord } , b)$ if $\ell$ is a coordinator, $\vec x (\ell, t)= ( \router,b) $ if $\ell$ is a router, and $\vec x (\ell, t)= ( {\color{blue!45!black!70!} end\_dev } ,b) $ if $\ell$ is an end device, where $b$ is the level of the battery.
\end{example}
\noindent{\bf Atomic Proposition.}
The function $\iota: AP\times D_1^{n} \rightarrow D_2$ is the \emph{signal interpretation function}: it translates the input trace into a different ${D}_2$-spatio-temporal signal for each atomic proposition in $AP$, which will be the input of the monitoring procedure.
Different types of atomic propositions and signal interpretations are admissible. E.g., we can simply consider a finite set
$AP=\{p_1, \dots, p_n \}$ and an interpretation function $\iota(p_i,\vec x(\ell,t))=\top$ iff $x_i(\ell,t)=\top$. In Fig.~\ref{fig:spprop}, we can consider atomic propositions describing the type of node, i.e., the Boolean propositions $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \}$ are true if the node is of the corresponding type. In case of real valued signals and of a quantitative interpretation of the logic ($D_2$ being in this case the real valued max/min semiring), we can consider inequalities $\mu=(g(\vec{x})\geq 0)$ for some real function $g$ and define $\iota(\mu,\vec{x}(\ell,t))=g(\vec{x}(\ell,t))$, e.g. $b > 0.5$, which means ``the level of the battery is greater than $50\%$''.
\noindent{\bf Negation.} The negation operator is interpreted with the negation function $\odot_{D_{2}}$ of the chosen signal domain; e.g.
$\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)= \neg \mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \ell)$ for the Boolean signal domain and $\mathbf{m}(\mathcal{S}, \vec{x}, \neg\varphi, t, \ell)= - \mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \ell)$ for the quantitative ones.
\noindent{\bf Conjunction and Disjunction.} The conjunction operator $\varphi_1 \wedge \varphi_2$ is interpreted with $\otimes_{D_2}$, i.e. $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell) = \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \otimes_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$, which corresponds to the $\wedge$ operator for the Boolean semantics. This means that $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell)=\top$ iff both $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell)$ and
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \ell)$ are equal to $\top$.
For the quantitative semantics, $\otimes_{D_2}$ is interpreted as the minimum operator, so $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \wedge \varphi_2, t, \ell) = \min(\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) , \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell))$. Similarly, the disjunction $\varphi_1 \vee \varphi_2$ is interpreted through the $\oplus_{D_2}$ operator, i.e. $\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1 \vee \varphi_2, t, \ell) = \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \ell) \oplus_{D_2} \mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t, \ell)$, which corresponds to $\vee$ for the Boolean semantics and to $\max$ for the quantitative one.
In the rest of the description, we focus on the Boolean semantics, i.e. $D_2 = \langle \{ \top , \bot \}, \vee, \wedge, \neg, \bot, \top \rangle$; the quantitative semantics can be derived by substituting $\vee, \wedge$ with $\max, \min$, as seen for conjunction and disjunction.
\noindent{\bf Until.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell) = \bigvee_{t' \in [t+t_{1}, t+t_{2}]} \linebreak \big(\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t', \ell) \wedge \bigwedge_{t'' \in [t, t']} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big) .\]
As customary, $(\mathcal{S}, \vec{x}(\ell,t))$ satisfies $\varphi_{1} \: \until{[t_{1},t_{2}]} \: \varphi_{2}$ iff it satisfies $\varphi_{1}$ from $t$ until, in a time between $t_{1}$ and $t_{2}$ time units in the future, $\varphi_{2}$ becomes true. Note how the temporal operators are evaluated in each location separately.
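For intuition, the Boolean until can be evaluated directly on discrete, uniformly sampled signals. The brute-force Python sketch below mirrors the formula above (one sample per time unit, hypothetical names); it is meant for illustration only, not as the interval-based algorithm used by the offline monitor:

```python
def until(interval, s1, s2):
    """Boolean until on uniformly sampled signals (lists of bools).
    interval = (a, b) in whole time units.  out[t] is True iff s2
    holds at some t' in [t+a, t+b] and s1 holds on all of [t, t']."""
    a, b = interval
    n = len(s1)
    out = []
    for t in range(n):
        sat = False
        for tp in range(t + a, min(t + b, n - 1) + 1):
            if s2[tp] and all(s1[k] for k in range(t, tp + 1)):
                sat = True
                break
        out.append(sat)
    return out
```

Note that, as in the formula, the inner conjunction over $[t, t']$ includes $t'$ itself, and samples past the end of the trace are simply not considered.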
\noindent{\bf Since.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}, t, \ell) = \bigvee_{t' \in [t-t_{2}, t-t_{1}]} \linebreak \big (\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_2, t', \ell) \wedge \bigwedge_{t'' \in [t', t]} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t'', \ell) \big)\]
$(\mathcal{S}, \vec{x}(\ell,t))$ satisfies $\varphi_{1} \: \since{[t_{1},t_{2}]} \: \varphi_{2}$ iff it satisfies $\varphi_{1}$ from now since, in a time between $t_{1}$ and $t_{2}$ time units in the past, $\varphi_{2}$ was true.
Except for the interpretation function, the semantics of the Boolean and the temporal operators is directly derived from and coincident with that of STL (qualitative for Boolean signal domain and quantitative for an $\mathbb{R}^\infty$ signal domain), see~\cite{Maler2004, Donze2013} for details.
\noindent{\bf Reachability.} \[\mathbf{m}(\mathcal{S}, \vec{x}, \varphi_{1} \: \reach{[d_{1},d_{2}]}{f} \: \varphi_{2}, t, \ell)=
\bigvee_{\tau\in Routes(\mathcal{S}(t),\ell)} \linebreak
\bigvee_{i:\left(d_{\tau}^{f}[i]
\in [d_{1},d_{2}]\right)}
( \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_2, t, \tau[i])
\wedge
\bigwedge_{j < i} \mathbf{m}( \mathcal{S}, \vec{x}, \varphi_1, t, \tau[j]) )\]
\noindent $(\mathcal{S} , \vec{x}(\ell,t))$ satisfies
$\varphi_{1} \: \reach{[d_{1},d_{2}]}{f} \: \varphi_{2}$ iff it satisfies $\varphi_2$ in a location $\ell'$ reachable from $\ell$ through a route $\tau$ with a length $d_{\tau}^{f}[\ell']$ belonging to the interval $[d_1, d_2]$, and such that $\tau[0]=\ell$ and all its elements with index less than $\tau(\ell')$ satisfy $\varphi_1$.
In Figure~\ref{fig:spprop}, we report an example of a reachability property, considering $f$ as the $hop$ function described in Example~\ref{ex:distancefunction}. In the graph, the location $\ell_6$ (meaning the trajectory $\vec{x}$ at time $t$ in position $\ell_6$) satisfies $ {\color{blue!45!black!70!} end\_dev } \: \reach{[0,1]}{hop} \: \router$. Indeed, there exists a route $\tau = \ell_6\ell_5$ such that $d_{\tau}^{hop}[1]=1$, where $\tau[0]=\ell_6$, $\tau[1]=\ell_5$, $\tau[1]$ satisfies the red property (it is a router) and all the other elements of the route satisfy the blue property (they are end devices). Instead, for example, the location $\ell_8$ does not satisfy the property because it does not satisfy the blue (end-device) property.
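For the $hop$ distance, the Boolean reachability check reduces to a bounded breadth-first search. The Python sketch below (hypothetical names; a static graph, with the satisfaction of $\varphi_1$ and $\varphi_2$ given as predicates on locations) follows the semantics above: a $\varphi_2$-location counts if its hop index along the route lies in $[d_1,d_2]$ and all strictly earlier locations on the route satisfy $\varphi_1$:

```python
from collections import deque

def reach_hop(adj, sat1, sat2, start, d1, d2):
    """Boolean reach with hop distance: is there a route from `start`
    reaching a sat2-location at hop distance in [d1, d2], crossing
    only sat1-locations before it?"""
    frontier = deque([(start, 0)])
    seen = {(start, 0)}
    while frontier:
        node, d = frontier.popleft()
        if d1 <= d <= d2 and sat2(node):
            return True
        # extend the route only through sat1-locations, within the bound
        if d < d2 and sat1(node):
            for succ in adj.get(node, []):
                if (succ, d + 1) not in seen:
                    seen.add((succ, d + 1))
                    frontier.append((succ, d + 1))
    return False
```

States are (location, hop count) pairs, since the same location may occur at several hop distances along different routes.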
\noindent{\bf Escape.} \[\mathbf{m}( \mathcal{S}, \vec{x}, \escape{[d_1, d_2]}{f} \: \varphi, t, \ell)=
\bigvee_{\tau\in Routes(\mathcal{S}(t),\ell)} \linebreak
\bigvee_{\ell'\in \tau:\left(d_{\mathcal{S}(t)}^{f}[\ell,\ell'] \in [d_1, d_2]\right) }
~\bigwedge_{i \leq \tau(\ell')}
\mathbf{m}( \mathcal{S}, \vec{x}, \varphi, t, \tau[i]).\]
\noindent $(\mathcal{S}, \vec{x}(\ell,t))$
satisfies $\escape{[d_{1},d_{2}]}{f} \: \varphi$ if and only if there exists a route $\tau$ and a location $\ell'\in \tau$ such that $\tau[0]=\ell$ and $d^{f}_{\mathcal{S}(t)}[\tau[0],\ell']$ belongs to the interval $[d_1, d_2]$, while $\ell'$ and all the elements $\tau[0],\ldots,\tau[k-1]$ (with $\tau(\ell')=k$) satisfy $\varphi$.
In Fig.~\ref{fig:spprop}, we report an example of an escape property. In the graph, the location $\ell_{10}$ satisfies $ \escape{[2, \infty]}{hop} \: \neg {\color{blue!45!black!70!} end\_dev } $. Indeed, there exists a route $\tau = \ell_{10}\ell_7\ell_8$ such that $\tau[0]=\ell_{10}$, $\tau[2]=\ell_8$, $d_{\mathcal{S}}^{hop}[\ell_{10},\ell_8]=2$ and $\ell_{10}$, $\ell_7$ and $\ell_8$ do not satisfy the blue property, i.e. they are not end devices. Note that the route $\ell_{10}\ell_{9}$ is not a good route to satisfy the property because the distance $d_{\mathcal{S}}^{hop}[\ell_{10},\ell_{9}]=1$.
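Escape with the $hop$ distance can likewise be checked by exploring only $\varphi$-satisfying locations while comparing, for each location reached, its unconstrained shortest-path distance from the start. A Python sketch (hypothetical names; `dist` is assumed to be the precomputed shortest-path distance of the spatial model, which need not coincide with the length of the route taken):

```python
def escape_hop(adj, sat, start, d1, d2, dist):
    """Boolean escape: is there a route through sat-locations from
    `start` to some l' with graph distance dist(start, l') in [d1, d2]?"""
    if not sat(start):
        return False  # every location on the route must satisfy phi
    stack, seen = [start], {start}
    while stack:
        node = stack.pop()
        if d1 <= dist(start, node) <= d2:
            return True
        for succ in adj.get(node, []):
            if succ not in seen and sat(succ):
                seen.add(succ)
                stack.append(succ)
    return False
```

The distinction from reach is visible in the code: the distance constraint is checked on the model distance `dist(start, node)`, not on the number of steps taken along the explored route.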
\begin{figure}[ht]
{\small
\center
\begin{tikzpicture}
[scale=.6,auto=left,every node/.style={circle,thick,inner sep=0pt,minimum size=6mm}]
\node (1) [fill=blue!45!black!70!, draw = black] at (-3,-3) {\wh{1}};
\node (2) [fill=blue!45!black!70!, draw = black] at ( 1,-2) {\wh{2}};
\node (3) [fill=blue!45!black!70!, draw = black] at ( 3,-1) {\wh{3}};
\node (4) [fill=blue!45!black!70!, draw = black] at ( -3, 1) {\wh{4}};
\node (5) [fill=red!50!black!70!, draw = black] at ( 1, 2) {\wh{5}};
\node (6) [fill=blue!45!black!70!, draw = black] at (-1, 2) {\wh{6}};
\node (7) [fill=red!50!black!70!, draw = black] at (0, 0) {\wh{7}};
\node (8) [fill=red!50!black!70!, draw = black] at (-2,-1) {\wh{8}};
\node (9) [fill=red!50!black!70!, draw = black] at (3,3) {\wh{9}};
\node (10) [fill=green!45!black!70!, draw = black] at (4,1) {\wh{10}};
\node (11) [fill=red!50!black!70!, draw = black] at (5,0) {\wh{11}};
\node (12) [fill=blue!45!black!70!, draw = black] at (6,-2) {\wh{12}};
\node (13) [fill=blue!45!black!70!, draw = black] at (8,1) {\wh{13}};
\node (14) [fill=blue!45!black!70!, draw = black] at (5,3) {\wh{14}};
\node (15) [fill=blue!45!black!70!, draw = black] at (8,-1) {\wh{15}};
\node (16) [fill=red!50!black!70!, draw = black] at (6.5,1.8) {\wh{16}};
\draw [-] (1) -- (8) node[midway] {};
\draw [-] (2) -- (7) node[midway] {};
\draw [-] (8) -- (6) node[midway] {};
\draw [-] (8) -- (7) node[midway] {};
\draw [-] (7) -- (10) node[midway] {};
\draw [-] (7) -- (5) node[midway] {};
\draw [-] (3) -- (10) node[midway] {};
\draw [-] (6) -- (5) node[midway] {};
\draw [-] (10) -- (11) node[midway] {};
\draw [-] (10) -- (9) node[midway] {};
\draw [-] (11) -- (15) node[midway] {};
\draw [-] (11) -- (12) node[midway] {};
\draw [-] (9) -- (14) node[midway] {};
\draw [-] (10) -- (14) node[midway] {};
\draw [-] (10) -- (16) node[midway] {};
\draw [-] (11) -- (16) node[midway] {};
\draw [-] (13) -- (16) node[midway] {};
\draw [-] (8) -- (4) node[midway] {};
\end{tikzpicture}
\caption{
Example of spatial properties. {\bf Reachability:} $ {\color{blue!45!black!70!} end\_dev } \: \reach{[0,1]}{hop} \: \router$. {\bf Escape:} $ \escape{[2, \infty]}{hop} \: \neg {\color{blue!45!black!70!} end\_dev } $.
{\bf Somewhere}: $\somewhere{[0, 4] }{hop} {\color{green!45!black!70!} coord } $. {\bf Everywhere}: $\everywhere{[0, 2] }{hop} \router$. {\bf Surround:} $ ( {\color{green!45!black!70!} coord } \vee \router ) \surround{[0,3]}{hop} \: {\color{blue!45!black!70!} end\_dev } $. }
\label{fig:spprop}
}
\end{figure}
We can also derive other three spatial operators: {\it somewhere}, {\it everywhere} and {\it surround}.
\noindent{\bf Somewhere.} $ \somewhere{ [0, d] }{f} \varphi := true \reach{[0, d]}{f} \varphi $
is satisfied by $(\mathcal{S} , \vec{x}(\ell,t))$
iff there exists a location that satisfies $\varphi$ reachable from $\ell$ via a route $\tau$ with a distance belonging to the interval $[0, d]$. This length is computed via the function $f$. In Fig.~\ref{fig:spprop}, all the locations satisfy the property $\somewhere{[0,4]}{hop} {\color{green!45!black!70!} coord } $ because, for all $\ell_i$, there is always a path $\tau = \ell_i \dots \ell_{10}$ with a length $d_\tau^{hop}[k]\leq 4$, where $\tau[0]=\ell_{i}$, $\tau[k]=\ell_{10}$, and $\ell_{10}$ satisfies the green property.
\noindent{\bf Everywhere.} $ \everywhere{[0,d]}{f} \varphi := \neg \somewhere{[0,d]}{f} \neg \varphi $
is satisfied by $(\mathcal{S}, \vec{x}(\ell,t))$ iff all the locations reachable from $\ell$ via a path, with length belonging to the interval $[0, d]$, satisfy $\varphi$. In Fig.~\ref{fig:spprop}, there are no locations that satisfy the property $\everywhere{[0,2]}{hop} \router$ because for all the locations $\ell_i$ there is a path $\tau=\ell_i\ell_j$ s.t. $\ell_j$ does not satisfy the red property.
\noindent{\bf {Surround}.} $\varphi_{1} \surround{[0, d]}{f} \varphi_{2} := \varphi_{1} \wedge \neg (\varphi_{1}\reach{[0, d]}{f} \neg (\varphi_1 \vee \varphi_{2})) \wedge \neg (\escape{[d, \infty]}{f} \varphi_{1}) $ expresses the topological notion of being surrounded by a $\varphi_2$-region, while being in a $\varphi_{1}$-region, with an additional metric constraint. The operator was introduced in~\cite{CianciaLLM16} as a basic operator, while here it is a derived one. The idea is that one cannot escape from a $\varphi_{1}$-region without passing through a location that satisfies $\varphi_2$ and, in any case, one has to reach a $\varphi_2$-location via a path with a length less than or equal to $d$. In Fig.~\ref{fig:spprop}, the location $\ell_{10}$ satisfies the property $ ( {\color{green!45!black!70!} coord } \: \vee \: \router ) \surround{[0,3]}{hop} \: {\color{blue!45!black!70!} end\_dev } $. In fact, it satisfies the green property, it cannot reach a location that satisfies neither the blue nor the red property via a path with length less than or equal to 3, and it cannot escape through green or red locations to a distance of 3 or more.
The operators can be arbitrarily composed to specify complex properties, as we will see in Sections~\ref{sec:ZigBee} and~\ref{sec:epidemic}.
\subsection{Invariance properties of the Euclidean spatial model}
The properties we consider with respect to the Euclidean spatial model are typically local and depend on the relative distance and position among nodes in the plane. As such, they should be invariant with respect to a change of coordinates, i.e. with respect to isometric transformations of the plane. This class of transformations includes translations, rotations, and reflections, and can be described by matrix multiplications of the form
\[
\begin{bmatrix}
x'_{\ell} \\
y'_{\ell} \\
1 \\
\end{bmatrix}
=
\begin{bmatrix}
\beta \cos (\alpha) & - \beta \sin (\alpha) & \beta t_x \\
\gamma \sin (\alpha) & \gamma \cos (\alpha) & \gamma t_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x_{\ell} \\
y_{\ell} \\
1 \\
\end{bmatrix}
\]
Invariance of satisfaction of spatial properties holds in the STREL logic for the Euclidean space model of Definition \ref{def:Euclidean}. Consider more specifically a Euclidean space model $\mathcal{E}(L,\mu, R) = \langle L, \mathbf{W}^{\mu, R} \rangle$ and $\mathcal{E}(L,\mu', R)= \langle L, \mathbf{W}^{\mu', R} \rangle$, obtained by applying an isometric transformation $A$: $\mu'(\ell) = A(\mu(\ell))$. For invariance to hold, we need to further require that the distance predicates used in spatial operators are invariant under isometric transformations. More specifically, for any isometry $A$, we require a distance predicate $d$ on the semiring $\mathbb{R}^{\infty}\times\mathbb{R}^{\infty}$ to satisfy $d((x,y)) = d(A((x,y)))$. This is the case for the norm-based predicates used in the examples, of the form $d((x,y)) = \|(x,y)\|_2 \leq r$.
Notice that the path structure is preserved (the set of edges given by $R$ is the same), and the truth of isometry-invariant distance predicates along paths in $\mathcal{E}(L,\mu, R)$ and $\mathcal{E}(L,\mu', R)$ is also the same. This straightforwardly implies that the truth value of spatial operators is unchanged by isometry.
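The invariance can also be checked numerically. The sketch below (plain Python, hypothetical names) applies the homogeneous transformation above with $\beta,\gamma\in\{+1,-1\}$, which keeps the map an isometry, and one can then verify that pairwise Euclidean distances between node positions are preserved; values with $|\beta|\neq 1$ or $|\gamma|\neq 1$ would rescale the plane and break the invariance:

```python
import math

def apply_isometry(points, alpha, tx, ty, beta=1.0, gamma=1.0):
    """Map each (x, y) position by the homogeneous-coordinate matrix
    of the text: rotation by alpha, translation (tx, ty), and axis
    flips via beta, gamma in {+1, -1} (reflections)."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [(beta * (c * x - s * y + tx),
             gamma * (s * x + c * y + ty)) for x, y in points]

def dist(p, q):
    """Euclidean distance between two positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

With $\beta=-1$ the linear part has determinant $-1$ (a reflection) but remains orthogonal, so all pairwise distances, and hence the norm-based distance predicates, are unchanged.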
\begin{proposition}[Equisatisfiability under isometry] Let $\mathcal{E}(L,\mu, R) = \langle L, \mathbf{W}^{\mu, R} \rangle$
be a Euclidean spatial model and $\mathcal{E}(L,\mu', R)= \langle L, \mathbf{W}^{\mu', R} \rangle$ an isometric transformation of the former. Consider a spatial formula $\varphi_{1} \: \reach{ d}{f} \: \varphi_{2}$ or $\escape{d}{f} \: \varphi_{1}$, where $d$ is an isometry-preserving predicate.
Assume $\mathbf{m} ( \mathcal{S}, \vec{x}, \varphi_{j}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \varphi_{j}, t, \ell)$, $j=1,2$, where $\mathbf{m}$ and $\mathbf{m}'$ are the monitoring functions for the two spatial models. Then it holds that
$\mathbf{m}( \mathcal{S}, \vec{x}, \varphi_{1} \: \reach{ d}{f} \: \varphi_{2}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \varphi_{1} \: \reach{ d}{f} \: \varphi_{2}, t, \ell)$ and $\mathbf{m}(\mathcal{S}, \vec{x}, \escape{d}{f} \: \varphi_{1}, t, \ell) = \mathbf{m}'( \mathcal{S}, \vec{x}, \escape{d}{f} \: \varphi_{1}, t, \ell)$, for all $\ell$ and $t$.
\end{proposition}
\section{Monitoring STREL}
\label{sec:alg}
In this section we present a
monitoring algorithm that can be used to check whether a given
signal satisfies a STREL property.
The proposed algorithm follows an \emph{offline} approach: it takes as input the complete spatio-temporal signal together with the property we want to monitor.
At the end of this section we will also briefly discuss a possible alternative approach that can lead to a distributed and \emph{online} monitoring procedure.
In that case, the spatio-temporal signal is not known at the beginning, but is discovered as data are collected from the system while it is running.
\subsection{Offline monitor}
Offline monitoring is performed via a function $\mathsf{monitor}$
that takes as inputs a dynamical spatial model $\mathcal{S}$, a trace $\vec{x}$
and a formula $\phi$ and returns the \emph{piecewise constant
spatio-temporal signal} $\tilde{\sigma}$ representing the monitoring
of $\phi$.
Function $\mathsf{monitor}$ is defined by induction on the syntax of the formula (Algorithm \ref{algo:part1}).
The spatio-temporal signal resulting from the monitoring of an atomic proposition $\mu$ is obtained by applying function $\iota(\mu)$ to the trace $\vec{x}$. The spatio-temporal signals associated with $\neg\varphi$ and $\varphi_1\wedge \varphi_2$ are obtained by applying operators $\odot_{D_2}$ and $\otimes_{D_2}$ to the signals resulting from the monitoring of $\varphi$ and of $\varphi_1$ and $\varphi_2$, where $\oplus_{D_2}$, $\otimes_{D_2}$ and $\odot_{D_{2}}$ depend on the \emph{signal domain} used to represent satisfaction values.
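The inductive structure of the monitor can be sketched for the propositional fragment as follows (Python, hypothetical names and a simplified data layout: a trace maps each location to a dict from time steps to values, and formulas are nested tuples; the Boolean domain is hard-coded for brevity):

```python
def monitor(trace, formula):
    """Recursive offline monitor for the atom / not / and fragment.
    Returns a spatio-temporal Boolean signal with the same layout as
    the trace: location -> {time -> truth value}."""
    kind = formula[0]
    if kind == "atom":
        iota = formula[1]  # signal interpretation function for this atom
        return {l: {t: iota(v) for t, v in sig.items()}
                for l, sig in trace.items()}
    if kind == "not":
        sub = monitor(trace, formula[1])
        return {l: {t: not v for t, v in sig.items()}
                for l, sig in sub.items()}
    if kind == "and":
        s1 = monitor(trace, formula[1])
        s2 = monitor(trace, formula[2])
        return {l: {t: s1[l][t] and s2[l][t] for t in s1[l]}
                for l in s1}
    raise ValueError(f"unsupported operator: {kind}")
```

The temporal and spatial cases of the full algorithm follow the same pattern: monitor the subformulas first, then combine the resulting signals with the operator-specific routine.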
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%
\begin{algorithm}[tbp]
\caption{Monitoring algorithm}
\label{algo:part1}
\vspace{1mm}
\begin{multicols}{2}
\begin{algorithmic}[1]
\Function{Monitor}{$\mathcal{S}$, $\vec{x}$, $\psi$}
\Case{$\psi=\mu$}
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t\in \mathcal{T}(\vec{x}(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\iota(\mu)(\vec{x}(\ell,t))$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\neg\psi_1$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\odot_{D_2} \tilde{\sigma}_1(\ell,t)$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \wedge \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1(\ell))\cup \mathcal{T}(\tilde{\sigma}_2(\ell))$ }
\State $\tilde{\sigma}(\ell,t)=\tilde{\sigma}_1(\ell,t) \otimes_{D_2} \tilde{\sigma}_2(\ell,t)$
\EndFor
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \until{[t1,t2]} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\State $\tilde{\sigma}(\ell)=\Call{Until}{t1,t2,\tilde{\sigma}_1(\ell),\tilde{\sigma}_2(\ell)}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \since{[t1,t2]} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $\ell \in L$ }
\State $\tilde{\sigma}(\ell)=\Call{Since}{t1,t2,\tilde{\sigma}_1(\ell),\tilde{\sigma}_2(\ell)}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \reach{[d_1,d_2]}{f} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1)\cup\mathcal{T}(\tilde{\sigma}_2)$ }
\State $\tilde{\sigma}@t=\Call{Reach}{\mathcal{S}(t),f,d_1,d_2,\tilde{\sigma}_1@t,\tilde{\sigma}_2@t}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\Case{$\psi=\psi_1 \escape{d}{f} \psi_2$}
\State $\tilde{\sigma}_1=\Call{Monitor}{\mathcal{S},\vec{x},\psi_1}$
\State $\tilde{\sigma}_2=\Call{Monitor}{\mathcal{S},\vec{x},\psi_2}$
\State $\tilde{\sigma}=[]$
\ForAll{ $t \in \mathcal{T}(\tilde{\sigma}_1)\cup\mathcal{T}(\tilde{\sigma}_2)$ }
\State $\tilde{\sigma}@t=\Call{Escape}{\mathcal{S}(t), f, d,\tilde{\sigma}_1@t,\tilde{\sigma}_2@t}$
\EndFor
\State \Return{ $\tilde{\sigma}$ }
\EndCase
\EndFunction
\end{algorithmic}
\end{multicols}
\end{algorithm}
Monitoring of temporal properties, namely $\varphi_1 \until{[t_{1}, t_{2}]}\varphi_2$ and $\varphi_1 \since{[t_{1}, t_{2}]} \varphi_2$, relies on functions \textproc{Until} and \textproc{Since}. These are defined following the same approach as~\cite{Donze2013} and~\cite{MalerN13}. However, while the monitoring procedures in those works rely on classical Boolean and arithmetic operators, here the procedure is parametrised with respect to the operators $\oplus_{D_2}$ and $\otimes_{D_2}$ of the considered semiring.
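For instance, over the max/min semiring a discrete-time version of the \textproc{Until} computation can be sketched as follows. This is a naive quadratic sketch on integer-sampled signals, using one common convention in which $\varphi_1$ is required on the closed interval $[t,t']$; the actual procedure works on piecewise-constant signals over dense time and is more efficient.

```python
# Naive discrete-time sketch of s1 U_[t1,t2] s2 over the max/min semiring:
# ⊕ is max, ⊗ is min. Signals are lists indexed by integer time points.

def until(t1, t2, s1, s2):
    n = len(s1)
    out = []
    for t in range(n):
        best = float('-inf')                       # bottom of the max/min domain
        for tp in range(t + t1, min(t + t2, n - 1) + 1):
            v = s2[tp]                             # value of s2 at t'
            for j in range(t, tp + 1):             # s1 must hold on [t, t']
                v = min(v, s1[j])
            best = max(best, v)
        out.append(best)
    return out
```

Replacing `min`/`max` with the Boolean operators yields the qualitative semantics on signals with values in $\{\mathit{true},\mathit{false}\}$.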
\begin{algorithm}[tbp]
\caption{Monitoring function for \emph{reach} operator}
\label{algo:reachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{Reach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$, $d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\If{$d_2\not=\top_{B}$}
\State{} \Return \Call{BoundedReach}{$(L,\mathbf{W})$, $f$, $d_1$, $d_2$ , $\mathbf{s}_1$, $\mathbf{s}_2$}
\Else{}
\State{} \Return \Call{UnboundedReach}{$(L,\mathbf{W})$, $f$, $d_1$, $\mathbf{s}_1$, $\mathbf{s}_2$}
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
To monitor $\varphi_{1} \: \reach{[d_{1},d_{2}]}{f:A\rightarrow B} \: \varphi_{2}$
we first compute
the signals $\mathbf{s}_1$ and $\mathbf{s}_2$, resulting from the monitoring of $\varphi_1$ and $\varphi_2$. After that, the final result is obtained by aggregating the spatial signals $\mathbf{s}_1@t$ and $\mathbf{s}_2@t$ at each time $t\in \mathcal{T}(\mathbf{s}_1)\cup \mathcal{T}(\mathbf{s}_2)$ by using function \textproc{Reach} defined in Algorithm~\ref{algo:reachmonitoring}. In this function two cases are distinguished: $d_2\not=\top_{B}$ or $d_2=\top_{B}$. In the first case, the resulting monitoring value is calculated via function \textproc{BoundedReach} defined in Algorithm~\ref{algo:boundedreachmonitoring}. Conversely, when $d_2=\top_{B}$, monitoring is performed by relying on function \textproc{UnboundedReach} defined in Algorithm~\ref{algo:unboundedreachmonitoring}.
\noindent \paragraph{\bf \textproc{BoundedReach}} Function \textproc{BoundedReach}, defined in Algorithm~\ref{algo:boundedreachmonitoring}, takes as parameters the spatial model $\langle L,\mathbf{W} \rangle$ at time $t$, the function $f:A\rightarrow B$, used to compute the distances over paths, and the interval $[d_1,d_2]$, describing the reachability bound.
The proposed algorithm is based on a flooding procedure that computes the output signal $\mathbf{s}$.
At line $3$, we initialise the set $Q$ with all triples $(\ell, \mathbf{s}_2[\ell], \bot_B)$, where $\bot_B$ is the minimum element of the distance domain $B$ (e.g. if $B=\mathbb{R}_{\geq 0}$, $\bot_B=0$).
Let us denote by $Q_i$ the value of $Q$ after $i$ iterations of the loop starting at line $4$.
$Q_i$ contains a triple $(\ell, v, d)$ if and only if there exists a path that in $i$ steps reaches distance $d <_{B} d_2$ with monitored value $v$.
To compute the values in $Q_{i+1}$, for each element $(\ell,v,d)\in Q_i$ we consider the locations $\ell'$ next to $\ell$ at distance $w$ ($\nextto{\ell'}{w}{\ell}$) and compute $v' = v\otimes_{D_2} \mathbf{s}_1(\ell')$ and $d' = d+_{B}f(w)$. The element $(\ell', v',d')$ is added to $Q_{i+1}$ if $d'<_{B} d_2$, i.e. if the sum of the current distance and the distance between $\ell$ and $\ell'$ is still less than $d_2$.
When $d' \in [d_1,d_2]$, $\mathbf{s}(\ell')$ is updated to take the newly computed value into account. We recall that $\mathbf{s}$ stores the value of the semantics of the reach operator.
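To make the flooding concrete, here is an illustrative Python sketch specialised to the Boolean semiring ($\oplus$ as \emph{or}, $\otimes$ as \emph{and}) and hop distances ($f(w)=1$), on a graph given as an adjacency list. The names are ours, not the tool's implementation.

```python
# Boolean/hop-distance instance of the BoundedReach flooding. graph maps
# each location to its neighbours; s1, s2 map locations to Booleans.

def bounded_reach(graph, d1, d2, s1, s2):
    # initialise the output signal (line (2) of the algorithm)
    s = {ell: (s2[ell] if d1 == 0 else False) for ell in graph}
    # Q holds (location, distance) -> accumulated value (line (3))
    q = {(ell, 0): s2[ell] for ell in graph}
    while q:                                   # the while-loop (line (4))
        nxt = {}
        for (ell, d), v in q.items():
            for ell2 in graph[ell]:            # ell2 next to ell
                v2 = v and s1[ell2]            # v' = v ⊗ s1(ell')
                d_new = d + 1                  # d' = d + f(w), with f ≡ 1
                if d1 <= d_new <= d2:
                    s[ell2] = s[ell2] or v2    # s[ell'] ⊕ v'
                if d_new < d2:
                    key = (ell2, d_new)        # merge values at equal distance
                    nxt[key] = nxt.get(key, False) or v2
        q = nxt
    return s
```

On a line graph $a$--$b$--$c$ with $\mathbf{s}_2$ true only at $c$ and $\mathbf{s}_1$ true everywhere, $a$ satisfies the reach within two hops but not within one.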
\begin{algorithm}[tbp]
\caption{Monitoring bounded reachability}
\label{algo:boundedreachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{BoundedReach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$,$d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\State $\forall \ell\in L. \mathbf{s}[\ell] = \left\{\begin{array}{ll}
\mathbf{s}_2[\ell] & d_1=\bot_{B}\\
\bot_{D_2} & \mbox{otherwise}
\end{array}\right.
$
\State $Q=\{ (\ell, \mathbf{s}_2[\ell], \bot_{B}) | \ell\in L \}$
\While{ $Q\not=\emptyset$ }
\State{$Q'=\emptyset$}
\ForAll{$(\ell,v,d) \in Q$}
\ForAll{ $\ell': \nextto{\ell'}{w}{\ell}$ }
\State $v'=v~\otimes_{D_2}~\mathbf{s}_1[\ell']$
\State $d'=d~+_{B}~f(w)$
\If{$(d_1\leq d'\leq d_2)$}
\State $\mathbf{s}[\ell'] = \mathbf{s}[\ell']\oplus_{D_2} v'$
\EndIf
\If{$d'< d_2$}
\If{$\exists (\ell',v'',d')\in Q'$}
\State $Q' = (Q'-\{ (\ell',v'',d') \})\cup \{ (\ell',v'\oplus_{D_2} v'',d') \}$
\Else{}
\State $Q' = Q'\cup \{ (\ell',v',d') \}$
\EndIf
\EndIf
\EndFor
\EndFor
\State{$Q=Q'$}
\EndWhile
\State \Return{ $\mathbf{s}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\noindent \paragraph{\bf \textproc{UnboundedReach}} Function \textproc{UnboundedReach}, defined in Algorithm~\ref{algo:unboundedreachmonitoring}, is used when the interval in the \emph{reach} formula is unbounded. In this case the function takes as parameters the spatial model $\langle L,\mathbf{W}\rangle $ at time $t$, the function $f:A\rightarrow B$, used to compute the distances over paths, and the lower bound $d_1$, and returns a \emph{spatial signal} $\mathbf{s}$. If $d_1 = \bot_B$, i.e. when we are considering a totally unbounded reach with no constraints, we initialise $\mathbf{s}=\mathbf{s}_2$. Otherwise, when $d_1\not=\bot_B$, we first call function \textproc{BoundedReach} passing as upper bound $d_1+d_{max}$, where $d_{max}$ is the maximum value that function $f$ can associate with a single edge in $\mathbf{W}$.
After that, $\mathbf{s}$ contains the \emph{reachability} value computed up to the bound $[d_1,d_1+d_{max}]$ (lines (5)-(6)).
Hence, the computed values are \emph{back propagated} until a fixed point is reached.
This guarantees that, for each location, only the values of $\mathbf{s}_2$ at a path distance in $[d_1,\top_{B}]$ are considered in the computation of the reachability values.
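For the case $d_1=\bot_B$, the back-propagation phase can be sketched as follows, again for the Boolean semiring. This is illustrative code with our own names, not the tool's implementation.

```python
# Boolean instance of the back-propagation loop: starting from s = s2,
# repeatedly propagate values backwards along edges until no location
# changes, i.e. until the fixed point is reached.

def unbounded_reach(graph, s1, s2):
    s = dict(s2)
    frontier = set(graph)
    while frontier:
        nxt = set()
        for ell in frontier:
            for ell2 in graph[ell]:                   # ell2 next to ell
                # v' = (s[ell] ⊗ s1[ell']) ⊕ s[ell']
                v = (s[ell] and s1[ell2]) or s[ell2]
                if v != s[ell2]:
                    s[ell2] = v
                    nxt.add(ell2)
        frontier = nxt
    return s
```

On a line graph $a$--$b$--$c$--$d$ with $\mathbf{s}_2$ true only at $d$, the value propagates back through every location where $\mathbf{s}_1$ holds and stops at the first location where it does not.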
\begin{algorithm}[tbp]
\caption{Monitoring unbounded reachability}
\label{algo:unboundedreachmonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{UnboundedReach}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$, $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$}
\If{$d_1=\bot_{B}$}
\State $\mathbf{s}=\mathbf{s}_2$
\Else
\State $d_{max}=\max\{ f(w) | \exists \ell,\ell'\in L: \nextto{\ell}{w}{\ell'} \}$
\State $\mathbf{s}=\Call{BoundedReach}{(L,\mathbf{W}), f, d_1, d_1+_{B} d_{max},\mathbf{s}_1,\mathbf{s}_2}$
\EndIf
\State $T=L$
\While{$T\not=\emptyset$}
\State $T'=\emptyset$
\ForAll{$\ell\in T$}
\ForAll{ $\ell': \nextto{\ell'}{w}{\ell}$ }
\State $v' = (\mathbf{s}[\ell]\otimes_{D_2} \mathbf{s}_1[\ell'])\oplus_{D_2} \mathbf{s}[\ell']$
\If{$v'\not= \mathbf{s}[\ell']$}
\State{$\mathbf{s}[\ell']=v'$}
\State $T'=T'\cup\{ \ell' \}$
\EndIf
\EndFor
\EndFor
\State $T=T'$
\EndWhile
\State \Return{ $\mathbf{s}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\noindent \paragraph{\bf \textproc{Escape}} The monitoring algorithm for $\escape{[d_{1},d_{2}]}{f:A\rightarrow B}{\varphi}$ is reported in Algorithm~\ref{algo:escapemonitoring}. Given a spatial model $\langle L,\mathbf{W}\rangle $ at time $t$, a distance function $f:A\rightarrow B$, and an interval $[d_1,d_2]$, it computes the spatial signal representing the monitoring value of $\escape{[d_1,d_2]}{f} \varphi$ at time $t$.
Function \textproc{Escape} first computes the \emph{distance matrix} $D$ (line 2), obtained from the given spatial model and distance function $f$. After that, a matrix $e:L\times L\rightarrow D_2$ is computed.
The matrix $e$ is initialised so that all the elements $e[\ell,\ell]$ on the diagonal are equal to $\mathbf{s}_1(\ell)$, while all the other elements are set to $\bot_{D_2}$ (lines 3-4).
After that, the elements of $e$ are iteratively updated by considering the values at the neighbours of each location (lines 6-20). A value $e[\ell_1',\ell_2]$ is updated iff
$\mathbf{s}_1(\ell_1') \otimes_{D_2} e[\ell_1,\ell_2] >_{D_2} e[\ell_1',\ell_2]$, where $\ell_1'$ is a neighbour of $\ell_1$.
The update ends when a fixed point is reached\footnote{We prove that the loop always terminates in Lemma~\ref{lemma:escapecorrectness}.}.
At the end of the loop, the element $e[\ell_1,\ell_2]$ contains the \emph{escape} value from $\ell_1$ to $\ell_2$, as defined in the semantics but without the distance constraint. The latter is taken into account at line 23, where the final monitored value $\mathbf{s}$ is computed. For each $\ell$, the expression $\bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \})$ aggregates the values $e[\ell,\ell']$ over all $\ell'$ that satisfy the distance constraint, i.e. such that $ D[\ell,\ell']\in [d_1,d_2]$.
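The computation of the matrix $e$ can be sketched as follows for the Boolean semiring, with hop distances supplied as a precomputed all-pairs distance matrix. This is illustrative code with our own names, not the tool's implementation.

```python
# Boolean instance of the Escape computation. dist[(l1, l2)] plays the role
# of the precomputed minimum-distance matrix D.

def escape(graph, dist, d1, d2, s1):
    # e[(l1, l2)]: aggregated value of routes from l1 whose endpoint is l2
    e = {(l1, l2): False for l1 in graph for l2 in graph}
    for l in graph:
        e[(l, l)] = s1[l]                  # diagonal initialised to s1
    todo = {(l, l) for l in graph}
    while todo:                            # fixed-point loop
        todo2, e2 = set(), dict(e)
        for (l1, l2) in todo:
            for l1p in graph[l1]:          # l1p next to l1
                v = e[(l1p, l2)] or (s1[l1p] and e[(l1, l2)])
                if v != e[(l1p, l2)]:
                    e2[(l1p, l2)] = v
                    todo2.add((l1p, l2))
        todo, e = todo2, e2
    # aggregate over endpoints satisfying the distance constraint
    return {l1: any(e[(l1, l2)] for l2 in graph if d1 <= dist[(l1, l2)] <= d2)
            for l1 in graph}
```

For example, on a line graph $a$--$b$--$c$ where $\mathbf{s}_1$ holds at $a$ and $b$ only, both $a$ and $b$ can escape to a location at hop distance at least $1$, while $c$ cannot (every route from $c$ starts in a location where $\mathbf{s}_1$ is false).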
\begin{algorithm}[tbp]
\caption{Monitoring \emph{escape}}
\label{algo:escapemonitoring}
\vspace{1mm}
\begin{algorithmic}[1]
\Function{Escape}{$(L,\mathbf{W})$, $f: A\rightarrow B$, $d_1\in B$,$d_2\in B$, $\mathbf{s}_1: L\rightarrow D_2$}
\State $D = \Call{MinDistance}{L,\mathbf{W},f}$
\State $\forall \ell,\ell'\in L. e[\ell,\ell'] = \bot_{D_2}$
\State $\forall \ell\in L. e[\ell,\ell]=\mathbf{s}_1(\ell)$
\State $T=\{ (\ell,\ell) | \ell\in L \}$
\While{ $T\not= \emptyset$ }
\State $e'=e$
\State $T'=\emptyset$
\ForAll{ $(\ell_1,\ell_2) \in T$ }
\ForAll{ $\ell_1': \nextto{\ell_1'}{w}{\ell_1}$ }
\State{ $v = e[\ell_1',\ell_2]\oplus_{D_2}(\mathbf{s}_1(\ell_1') \otimes_{D_2} e[\ell_1,\ell_2])$}
\If{$v\not=e[\ell_1',\ell_2]$}
\State{$T'=T'\cup \{ (\ell_1',\ell_2) \}$}
\State{$e'[\ell_1',\ell_2]=v$}
\EndIf
\EndFor
\EndFor
\State{$T=T'$}
\State{$e=e'$}
\EndWhile
\State $\mathbf{s}=[]$
\ForAll{ $\ell\in L$ }
\State $\mathbf{s}(\ell)=\bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \})$
\EndFor{}
\State \Return $\mathbf{s}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Correctness}
In this subsection we discuss the correctness of the proposed algorithms.
\begin{lemma}[BoundedReach correctness]
\label{lemma:boundreachcorrectness}
Given an $A$-spatial model $\mathcal{S}=(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$ and $d_2\not=\top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$, let $\Call{BoundedReach}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. For any $\ell\in L$, we have that:
\[
\mathbf{s}(\ell)={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{lemma}
\begin{proof}
Let us denote by $\mathbf{s}_{i}$ and $Q_i$ the values of variables $\mathbf{s}$ and $Q$, respectively, in Algorithm~\ref{algo:boundedreachmonitoring} after $i$ iterations of the \emph{while-loop} at line $(4)$. The statement follows directly from the following properties and by observing that (since $f(w)>0$ for any $w\in A$) the algorithm terminates after a finite number of iterations:
\begin{enumerate}
\item[$P1$:] if $(\ell,v,d)\in Q_i$ then $d\leq_{B} d_2$;
\item[$P2$:] if $(\ell,v_1,d)\in Q_i$ and $(\ell,v_2,d)\in Q_i$ then $v_1=v_2$;
\item[$P3$:] $(\ell,v,d)\in Q_i$ if and only if
\[
v={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell): d_{\tau}^{f}[i]=d}
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\item[$P4$:] for any $\ell\in L$:
\[
\mathbf{s}_i[\ell]={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{k : k\leq i\wedge \left(d_{\tau}^{f}[k] \in [d_{1},d_{2}]\right)}
\left(
\mathbf{s}_2(\tau[k])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < k}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{enumerate}
Properties $P1$ and $P2$ are direct consequences of instructions at lines $(3)$ and $(13)-(19)$ of Algorithm~\ref{algo:boundedreachmonitoring}. Indeed, $(\ell,v,d)\in Q_0$ if and only if $d=\bot_{B}$ (line $(3)$), while $(\ell,v,d)$ is inserted in $Q'=Q_{i+1}$ if and only if $d<_{B} d_2$ (line $(13)$) and no other $(\ell,v',d)$ is already in $Q'$ (lines $(14)-(18)$).
Property $P3$ can be proved by induction on $i$ by observing that the property is satisfied by $Q_0$ and that for any $i$:
\[
(\ell',v',d')\in Q_{i+1} \Leftrightarrow d'<_{B} d_2\mbox{ and }
v'={\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)=d'}
\left(
\mathbf{s}_1(\ell')
\otimes_{D_2}
v
\right)
\]
From the above, and from inductive hypothesis, we have that:
\[ \small
\begin{array}{rcl}
v' & = & {\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)=d'}
\left(
\mathbf{s}_1(\ell')
\otimes_{D_2}
{\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell): d_{\tau}^{f}[i]=d}
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\right)\\
& = &
{\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell'): d_{\tau}^{f}[i+1]=d'}
\mathbf{s}_2(\tau[i+1])
\otimes_{D_{2}}
\left(
{\otimes_{D_2}}_{j < i+1}
\mathbf{s}_1(\tau[j])
\right)
\end{array}
\]
That is the statement of $P3$.
Finally, we can prove $P4$ by induction on $i$ by using $P3$ and by observing that:
\[
\mathbf{s}_{i+1}[\ell']=
{\oplus_{D_2}}_{(\ell,v,d)\in Q_i:\nextto{\ell'}{w}{\ell}\wedge d+f(w)\in [d_1,d_2]} \mathbf{s}_{i}[\ell']\oplus_{D_2}\left(
\mathbf{s}_1[\ell'] \otimes v\right)
\]
\end{proof}
\begin{lemma}[UnboundedReach correctness]
\label{lemma:unboundreachcorrectness}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, a value $d_1\in B$ ($d_1\not= \top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$, let $\Call{UnboundedReach}{(L,\mathbf{W}),f,d_1,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. For any $\ell\in L$:
\[
\mathbf{s}(\ell)={\oplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{ \oplus_{D_2}}_{\ell'\in\tau : \left(d_{\tau}^{f}[\ell'] \geq d_{1}\right)}
\left(
\mathbf{s}_2(\ell')
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < \tau(\ell')}
\mathbf{s}_1(\tau[j])
\right)
\]
\end{lemma}
\begin{proof}
Directly from the pseudo code in Algorithm~\ref{algo:unboundedreachmonitoring} and from Lemma~\ref{lemma:boundreachcorrectness}, we can observe that the value $\mathbf{s}$ computed by function $\Call{UnboundedReach}{}$ is the limit ($\mathbf{s} = \lim_{i\rightarrow\infty }\mathbf{s}_{i}
$) of the sequence of signals $\mathbf{s}_i$ such that for any $\ell\in L$:
\[
\mathbf{s}_{i+1}[\ell]=\mathbf{s}_{i}[\ell]\oplus_{D_2}{\bigoplus_{D_2}}_{\ell'\in L:\nextto{\ell}{w}{\ell'}} \left(\mathbf{s}_{i}[\ell']\otimes_{D_2} \mathbf{s}_1[\ell]\right)
\]
The initial spatial signal is $\mathbf{s}_{0}=\mathbf{s}_2$, if $d_1=\bot_{B}$, while it is:
\[
\mathbf{s}_0[\ell]={\oplus_{D_2}}_{\tau\in Routes(\mathcal{S},\ell)}
~~{ \oplus_{D_2}}_{i : \left(d_{\tau}^{f}[i] \in [d_{1},d_{1}+d_{max}]\right)}
\left(
\mathbf{s}_2(\tau[i])
\otimes_{D_{2}}
{\otimes_{D_2}}_{j < i}
\mathbf{s}_1(\tau[j])
\right)
\]
\noindent
when $d_1\not=\bot_{B}$ and $d_{max}=\max\{ f(w) | \exists \ell,\ell'\in L: \nextto{\ell}{w}{\ell'} \}$. In both cases, the statement follows by applying standard algebra and the properties of $\oplus$ and $\otimes$.
\end{proof}
\begin{lemma}[Escape correctness]
\label{lemma:escapecorrectness}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$ and $d_2\not=\top_{B}$), and a spatial signal $\mathbf{s}_1: L\rightarrow D_2$, let $\Call{Escape}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1}=\mathbf{s}$. For any $\ell\in L$:
\[
\mathbf{s}(\ell)={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{(L,\mathbf{W})}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{i \leq \tau(\ell')} \mathbf{s}_1(\tau[i])
\]
\begin{proof}
Let us denote by $e_{i}$ the content of the data structure $e$ after $i$ iterations of the loop at line $(6)$ of Algorithm~\ref{algo:escapemonitoring}.
We only have to prove the following properties:
\begin{itemize}
\item[$P1$] For any $\ell_1,\ell_2\in L$, $D[\ell_1,\ell_2]=d_{(L,\mathbf{W})}^{f}[\ell_1,\ell_2]$
\item[$P2$] For any $i$:
\[
e_{i}[\ell_1,\ell_2]={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell_1)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell_2)\wedge j\leq i} \mathbf{s}_1(\tau[j])
\]
\item[$P3$] The loop terminates after at most $k=|L|$ iterations and
\[
e_{k}[\ell_1,\ell_2]={\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell_1)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell_2)} \mathbf{s}_1(\tau[j])
\]
\end{itemize}
Property $P1$ follows directly from the definition of $d_{(L,\mathbf{W})}^{f}[\ell,\ell']$ and from the fact that $\mathsf{MinDistance(L,\mathbf{W},f)}$ computes the matrix of minimum distances in $\langle L,\mathbf{W}\rangle$ via $f$. Property $P2$ can be proved by induction on $i$ and follows directly from the code of Algorithm~\ref{algo:escapemonitoring}. Finally, $P3$ is a consequence of the fact that after at most $|L|$ iterations a fixed point is reached, since all the locations have been taken into account.
We can conclude that the statement of the lemma follows directly from properties $P1$, $P2$ and $P3$ above by observing that:
\[
\begin{array}{rcl}
\mathbf{s}(\ell) & = & \bigoplus_{D_2}(\{ e[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \}) \\
& = & \bigoplus_{D_2}(\{ e_{k}[\ell,\ell'] | D[\ell,\ell']\in [d_1,d_2] \}) \\
& = & {\bigoplus_{D_2}}_{\tau\in Routes((L,\mathbf{W}),\ell)}
~~{\bigoplus_{D_2}}_{\ell'\in \tau : \left(d_{(L,\mathbf{W})}^{f}[\ell,\ell'] \in [d_{1},d_{2}]\right)}
~~{\bigotimes_{D_2}}_{j \leq \tau(\ell')} \mathbf{s}_1(\tau[j])
\end{array}
\]
\end{proof}
\end{lemma}
\begin{theorem}
Given a dynamical spatial model $\mathcal{S}$, a trace $\vec{x}$
and a formula $\phi$,
\[
\Call{Monitor}{\mathcal{S},\vec{x},\phi}=\tilde{\sigma}_{\phi}^{\mathcal{S},\vec{x}}
\]
\end{theorem}
\begin{proof}
The proof easily follows by induction on $\phi$ by using Lemma~\ref{lemma:boundreachcorrectness}, Lemma~\ref{lemma:unboundreachcorrectness} and Lemma~\ref{lemma:escapecorrectness}.
\end{proof}
\subsection{Complexity}
In this subsection we discuss the complexity of each algorithm.
\begin{proposition}[BoundedReach complexity]
\label{prop:reachboundcomplexity}
Given an $A$-spatial model $\mathcal{S}=(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$ and $d_2\not=\top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$, let $\Call{BoundedReach}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$. Let $d_{min}=\min_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$ and $k=\min\{ i | i*d_{min} > d_{2} \}$, where $i*d_{min}$ denotes the sum of $i$ copies of $d_{min}$. Then the algorithm terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps, where $m$ is the number of edges and $\beta_{d_2}$ is an integer counting the \emph{different distances} accumulated after $k$ steps\footnote{ This value is in practice a constant and depends on the weights associated with edges and on the bound $d_2$. For instance, in the case of function $hop$ of Example~\ref{ex:distancefunction}, $\beta_{d_2}=1$. In general, $\beta_{d_2}$ has the same order of magnitude as the $k$ considered above.}.
\end{proposition}
\begin{proof}
First, we need to compute an upper bound on the number of iterations of the while loop starting at line (4).
Let us denote by $Q_k$ the value of $Q$ after $k$ iterations. If $Q_k=\emptyset$, the loop stops after at most $k$ iterations; $Q_k$ is empty if no elements are added at that iteration.
An element $(\ell', v', d')$ is not added to $Q_k$ iff $d' \geq d_2$, where $d'=d +_{B} f(w)\geq d +_{B} d_{min}$. Note that at the first iteration ($Q_0$) we have $d = \bot_B$. At each iteration we add a value greater than or equal to $d_{min}$, which means that after $k$ iterations $d' \geq k*d_{min}$, and $k*d_{min} > d_2$ by definition. Moreover, $Q$ can contain at most $\beta_{d_2}\cdot | L |$ elements and, at each iteration of the while loop, for each element in $Q$ we consider its connected locations.
This implies that function \Call{BoundedReach}{} terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps.
\end{proof}
\begin{proposition}[UnBoundedReach complexity]
\label{prop:reachunboundcomplexity}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, a value $d_1\in B$ ($d_1\not= \top_{B}$), and two spatial signals $\mathbf{s}_1: L\rightarrow D_2$, $\mathbf{s}_2:L\rightarrow D_2$, let $\Call{UnboundedReach}{(L,\mathbf{W}),f,d_1,\mathbf{s}_1,\mathbf{s}_2}=\mathbf{s}$.
Let $d_{min}=\min_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$, $d_{max}=\max_{(\ell, w, \ell') \in \mathbf{W}}(f(w))$ and $k=\min\{ i | i*d_{min} > d_{1}+d_{max} \}$. The algorithm stops after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps, where $m$ is the number of edges and $\beta_{d_2}$ is an integer counting the \emph{different distances} accumulated after $k$ steps. Furthermore, when $d_1=\bot_{B}$, this complexity reduces to $O(|L|\cdot m)$.
\end{proposition}
\begin{proof}
We have already observed in Proposition~\ref{prop:reachboundcomplexity} that the first part of Algorithm~\ref{algo:unboundedreachmonitoring} terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps. We can here observe that the second part of the algorithm (lines $(9)-(21)$) needs at most $|L|\cdot m$ steps. This is because the for-loop at lines $(12)-(18)$ consists of at most $O(m)$ steps (indeed, each edge is considered at most twice), and a location can be inserted in $T$ at most $|L|$ times.
Concluding, the algorithm terminates after $O(k\cdot \beta_{d_2}\cdot|L|\cdot m)+O(|L|\cdot m)=O(k\cdot \beta_{d_2}\cdot|L|\cdot m)$ steps. When $d_1=\bot_{B}$, lines $(9)-(21)$ are not executed, and the algorithm
terminates after $O(|L|\cdot m)$ steps.
\end{proof}
\begin{proposition}[Escape complexity]
\label{prop:escapecomplexity}
Given an $A$-spatial model $(L,\mathbf{W})$, a function $f: A\rightarrow B$, an interval $[d_1, d_2]$ (with $d_1,d_2\in B$, $d_1\leq_{B} d_2$ and $d_2\not=\top_{B}$), and a spatial signal $\mathbf{s}_1: L\rightarrow D_2$, let $\Call{Escape}{(L,\mathbf{W}),f,d_1,d_2,\mathbf{s}_1}=\mathbf{s}$. The algorithm terminates in at most $O(|L|\cdot m )$ steps, where $m$ is the number of edges.
\end{proposition}
\begin{proof}
The computation of function $\Call{MinDistance}{L,\mathbf{W},f}$ needs $O( m \log(|L|))$ steps.
Moreover, from property $P3$ in the proof of Lemma~\ref{lemma:escapecorrectness}, we have that the loop at line (6) terminates after at most $|L|$ iterations. In each of these iterations, each edge is taken into account at most twice (once for each of the connected locations). This means that the loop terminates after $O(|L|\cdot m)$ steps.
Finally, to compute the resulting spatial signal, $O(|L|)$ steps are needed for the loop at line $(22)$.
Concluding, the algorithm terminates in $O( m \log(|L|))+O(|L|\cdot m)+O(|L|)$ steps, that is $O(|L|\cdot m)$.
\end{proof}
We can conclude this section by observing that the number of steps needed to evaluate function $\Call{Monitor}{}$ in Algorithm~\ref{algo:part1} is linear in the size of $\phi$, in the length of the signal and in the number of \emph{edges} in the spatial model and it is quadratic in the number of locations.
\section{Case study: ZigBee protocol monitoring}
\label{sec:ZigBee}
In this section, we consider the running example used in the previous sections and discuss some properties that show the expressiveness and potential of STREL.
Given a MANET with a ZigBee protocol, (Example~\ref{ex:zigbee}),
we consider as spatial models both its proximity and connectivity graphs, computed with respect to the Cartesian coordinates (Example \ref{ex:manet}). Nodes can play three roles: {\it coordinator}, {\it router} and {\it EndDevice}, as described in Example \ref{ex:zigbee}. Moreover, each device is equipped with sensors to monitor its battery level ($X_B$), the humidity ($X_H$) and the pollution ($X_P$) at its position.
The semiring is the union of the \emph{max/min} semiring $\mathbb{R}^{\infty}$ (for the proximity graph) and the \emph{integer} semiring $\mathbb{N}^{\infty}$ (for the connectivity graph). We also use two types of distances: the ${\it hop}$ and the $\Delta$ distances described in Example~\ref{ex:distancefunction}.
Atomic propositions $\{ {\color{green!45!black!70!} coord } , \router, {\color{blue!45!black!70!} end\_dev } \}$ describe the type of nodes. We also consider inequalities on the values read from sensors, plus special propositions $@_\ell$ which encode the address of a specific location, i.e. $@_\ell$ is true only at location $\ell$.
In the following, we describe several properties of these ZigBee MANET networks that are easily captured by STREL logic, to exemplify its expressive power.
A class of properties naturally encoded in STREL is related to the connectivity of the network. First, we may want to know whether a node is properly connected, meaning that it can reach the coordinator through a path of routers:
\begin{equation}
\phi_{connect} = device \reach{[0,1]}{hop} (router \reach{ }{hop} coord )
\end{equation}
The meaning of this property is that an end device reaches in at most one hop a node which is a router and which is connected to the coordinator via a path of routers.
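On a connectivity graph given as an adjacency list, $\phi_{connect}$ amounts to a simple reachability check. The following self-contained sketch (with our own names, Boolean semantics) makes this explicit:

```python
# Toy Boolean check of phi_connect: an end device satisfies it iff some
# neighbour is a router from which the coordinator is reachable through a
# path of routers.

def phi_connect(graph, role):
    # routers reaching the coordinator via routers (router reach_hop coord)
    ok = {l for l in graph if role[l] == 'coord'}
    changed = True
    while changed:
        changed = False
        for l in graph:
            if role[l] == 'router' and l not in ok \
                    and any(n in ok for n in graph[l]):
                ok.add(l)
                changed = True
    # device reach_{[0,1],hop} (...): some neighbour is such a router
    return {l: any(n in ok and role[n] == 'router' for n in graph[l])
            for l in graph if role[l] == 'device'}
```

A device hanging off a chain of routers that ends at the coordinator satisfies the property; a device attached only to other devices does not.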
We may also want to know whether there is a path to the coordinator which is reliable in terms of battery levels, for instance one along which all routers have a battery level above 50\%:
\begin{eqnarray}
&\phi_{reliable\_router} = ((battery > 0.5) \wedge router) \reach{ }{hop} coord &\nonumber \\
&\phi_{reliable\_connect} = device \reach{[0,1] }{hop} (\phi_{reliable\_router} )&
\end{eqnarray}
These properties focus on spatial connectivity at a fixed time. We can also add temporal requirements, for instance asking that a broken connection is restored within $h$ time units:
\begin{equation}
\phi_{connect\_restore} = \glob{} (\neg \phi_{connect} \rightarrow \ev{[0,h]}\phi_{connect} )
\end{equation}
Another class of properties of interest is the acyclicity of transmissions. To this end, we need to force the connectivity graph to be directed, with edges pointing in the direction of the coordinator (i.e. transmission reduces the distance from the coordinator). With STREL, we can easily detect the absence of a cycle for a fixed location $\ell$. This is captured by
$\phi^{\ell}_{acyclic} = \neg \phi^{\ell}_{cycle}$, where
\begin{equation}
\phi^{\ell}_{cycle} = @_\ell \reach{[0,1]}{hop} (\neg @_\ell \wedge \somewhere{}{hop}@_\ell)
\end{equation}
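On a directed connectivity graph, $\phi^{\ell}_{cycle}$ amounts to checking whether some successor $\ell'\neq\ell$ of $\ell$ can reach $\ell$ back. An illustrative, self-contained sketch:

```python
# Graph-theoretic reading of phi_cycle on a directed graph (adjacency list):
# ell lies on a cycle iff a successor ell' != ell reaches ell again.
from collections import deque

def on_cycle(graph, ell):
    for succ in graph[ell]:
        if succ == ell:
            continue                      # the formula requires ¬@ℓ first
        seen, queue = {succ}, deque([succ])
        while queue:                      # BFS: somewhere_hop @ℓ from succ
            u = queue.popleft()
            if u == ell:
                return True
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return False
```

Note that, matching the $\neg @_\ell$ conjunct, a pure self-loop at $\ell$ is not counted as a cycle by this check.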
In order to characterize the whole network as acyclic, we need to take the conjunction of the previous formulae for all locations (or at least for routers, enforcing end devices to be connected only with routers). This is necessary because STREL is interpreted locally, on each location, which prevents us from expressing properties of the whole network with location-unaware formulae. This is the price for efficient monitoring, as global properties of networks require more expressive and computationally more expensive logics.
However, we can use the parametrisation of STREL and the properties of the Voronoi diagram to specify global connectivity or acyclicity of the graph. Indeed, the proximity graph always connects all the locations of the system; hence the property $\everywhere{}{\Delta} \phi$, verified on the proximity graph, holds iff $\phi$ holds in all the locations of the system.
Up to now we have presented qualitative properties, depending on the type of node. If we express properties of sensor measurements, we can also consider a quantitative semantics, returning a measure of robustness of (dis)satisfaction. As an example, we can monitor \eqref{eq:f1} whether in each location a high value of pollution eventually implies, within $T$ time units, a high value of humidity, or \eqref{eq:f2} in which locations it is possible to find a `safe' route, where both the humidity and the pollution are below a certain threshold. We can also check \eqref{eq:f3} whether a location which is not safe is at distance at most $d$ from a location which is safe. Finally \eqref{eq:f4}, we can check whether a target device (identified by $X_S=1$) is reachable from all the locations in less than $d$ hops.
\begin{eqnarray}
&\phi_{PH} = (X_P > 150) \Rightarrow \ev{[0,T]} (X_H > 100) \label{eq:f1} &\\
&\phi_{Safe} =\glob{[0,T]} \escape{[d, \infty]}{\Delta} \: {(X_H < 90) \wedge (X_P < 150) } \label{eq:f2} &\\
&\phi_{some} = \somewhere{[0,d]}{\Delta} \phi_{Safe} \label{eq:f3} &\\
&\phi_{target} = \everywhere{}{hop} \somewhere{[0,d]}{hop}\: { (X_S = 1) } \label{eq:f4} &
\end{eqnarray}
\section{Case study: epidemic spreading}
\label{sec:epidemic}
In this section, we investigate a case study based on a model of the spreading of an epidemic disease transmitted via direct contact, like flu or COVID-19, in a population. The simplest models of epidemic spreading are based on a mean-field assumption and treat the population as a homogeneous entity, assuming equal probability that any two individuals enter into contact \cite{epidemiology2019}. A more accurate description, instead, models potential contacts explicitly, so the population is represented as a network of interacting agents \cite{network_epidemic2015}, in which nodes are the agents and links are the potential contacts. Such a network can be static (links are fixed) or dynamic (links change during the simulation) and possibly adaptive \cite{network_epidemic2015}.
These kinds of models are particularly useful when dealing with scenarios in which the number of potentially infective contacts, and thus of secondary infections, can vary widely between individuals, the so-called super-spreading scenario \cite{superspreading_2005}, which seems to be relevant also for capturing the spreading of the COVID-19 disease \cite{superspreading_COVID_2020}.
In our case study, we consider a discrete-time model composed of two contact networks, one static, describing contacts within work colleagues, family, closed relatives and friends, and another one dynamic, modelling less frequent interaction events, like going to the restaurant, the cinema, or the disco.
The static network is defined by a degree distribution, assumed to be the same for each node, and modelled as a lognormal distribution with cutoff (mean 10, 99 percentile 50, cutoff at 200).\footnote{Contact distributions are constructed to resemble contact data collected by a regional government in Italy, which is not publicly available.} To generate the network, we sample a degree for each node and then sample the network relying on the \textsc{expected\_degree\_graph} method of the networkX Python library \cite{networkX}.
This network is sampled once and not modified during simulations.
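The sampling step just described can be sketched as follows; the snippet mirrors the Chung--Lu model underlying \textsc{expected\_degree\_graph} in plain NumPy, and all distribution parameters are merely illustrative (the actual fit to the contact data is not public):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Truncated lognormal degrees (illustrative parameters, not the paper's fit)
degrees = np.minimum(rng.lognormal(np.log(10), 1.0, size=n), 200)

# Chung-Lu model, as in networkX's expected_degree_graph:
# edge (i, j) appears with probability min(d_i * d_j / S, 1), S = sum of degrees
S = degrees.sum()
p = np.minimum(np.outer(degrees, degrees) / S, 1.0)
upper = np.triu(rng.random((n, n)) < p, k=1)  # sample the upper triangle only
adj = upper | upper.T                         # symmetric, no self-loops
```

The resulting adjacency matrix is sampled once and then kept fixed for the whole simulation.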
The dynamic event network, instead, is resampled at every simulation step (essentially corresponding to a day). Here, we additionally choose a subset of nodes which will be affected by the events. Each node is assigned a random probability of taking part in an event (chosen uniformly among the following frequencies: once a month, once every two weeks, once a week, twice a week) and, at each step, the node is included in the event network with that probability. Then, each active node receives a degree sampled from a different degree distribution with a longer tail (lognormal with
mean 10 and 99 percentile 100, cutoff at 1000), to model super-spreading effects.\footnote{Note that, as we rely on distributions with cutoff, there is no practical difference in using a lognormal or a heavier-tailed distribution like a power law.}
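The daily resampling of the event network can be sketched in the same way (again with illustrative parameters; the per-node frequency is fixed once, while activation and degrees are redrawn every day):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Fixed per-node participation frequency: once a month, once every two
# weeks, once a week, or twice a week, expressed as a daily probability.
freq = rng.choice([1 / 30, 1 / 14, 1 / 7, 2 / 7], size=n)

def sample_event_degrees(rng, freq):
    """One simulation day: active nodes get a longer-tailed degree."""
    active = rng.random(freq.size) < freq
    # Heavier-tailed lognormal (illustrative parameters), cut off at 1000.
    deg = np.minimum(rng.lognormal(np.log(10), 1.5, size=freq.size), 1000)
    return np.where(active, deg, 0.0)

day_deg = sample_event_degrees(rng, freq)
```

The vector `day_deg` would then feed the same Chung--Lu sampler used for the static network, restricted to the active nodes.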
In order to simulate our discrete-time model, with each step corresponding to one day, we further give each agent one of four different states (\textbf{S}usceptible, \textbf{E}xposed but not infective, \textbf{I}nfective, \textbf{R}ecovered), and sample the duration in days of the transitions from E to I and from I to R according to gamma distributions taken from the COVID-19 literature \cite{merler_2020}. Infection of a Susceptible node can happen if it is in contact with an Infective node, with a probability $p$ which is edge dependent and sampled for each edge according to a Beta distribution with mean 0.05 (which roughly gives an $R_0$ close to 2, as observed in the second phase of the COVID-19 epidemic in Lombardia, Italy). We assume independence among different edges when modelling the spreading of the infection.
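A minimal sketch of one simulation day follows; the gamma shapes, contact density, and initial condition here are placeholders rather than the fitted values from \cite{merler_2020}:

```python
import numpy as np

rng = np.random.default_rng(2)
S, E, I, R = 0, 1, 2, 3
n = 200

state = np.full(n, S)
timer = np.zeros(n)          # days left in the current E or I phase
state[0], timer[0] = I, 5    # one initial infective (5 infective days)

# Static contact network plus per-edge infection probabilities:
# Beta(1, 19) has mean 1/20 = 0.05.
p_edge = rng.beta(1.0, 19.0, size=(n, n))
upper = np.triu(rng.random((n, n)) < 0.05, k=1)
contact = upper | upper.T

def step(state, timer, rng):
    # each Susceptible-Infective contact transmits independently
    hit = (contact & (state == I)[None, :]) & (rng.random((n, n)) < p_edge)
    newly = (state == S) & hit.any(axis=1)
    # progress the ongoing E and I phases by one day
    timer[(state == E) | (state == I)] -= 1
    to_I = (state == E) & (timer <= 0)
    to_R = (state == I) & (timer <= 0)
    state[to_R] = R
    state[to_I] = I
    timer[to_I] = np.maximum(1, np.round(rng.gamma(4.0, 2.0, size=int(to_I.sum()))))
    state[newly] = E
    timer[newly] = np.maximum(1, np.round(rng.gamma(3.0, 1.0, size=int(newly.sum()))))
    return state, timer

for _ in range(30):
    state, timer = step(state, timer, rng)
```

Each daily step first collects the new exposures, then advances the ongoing E and I phases, drawing a fresh gamma-distributed duration whenever a phase starts.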
At each time step, the spatial model of the epidemic network is given by the pair $\langle L, \mathbf{W}\rangle$, where the set of locations $L$ corresponds to the set of agents and the proximity function $\mathbf{W}$ is such that $(\ell_i,w,\ell_j)\in \mathbf{W}$ if and only if there is a probability greater than zero that $\ell_i$ and $\ell_j$
are in contact.
The value of the weight $w$ is derived from the sampled probability described above, both for the static and the dynamic network.
More specifically, $w=-\ln(p_{\ell_i,\ell_j}(t))$, where $p_{\ell_i,\ell_j}(t)$ is the infection probability at time $t$.
Hence, the higher $w$, the lower the probability that agent $\ell_i$ is infected by agent $\ell_j$. We consider two types of distances here: the $hop$ distance, counting the number of hops, and the $weight$ distance, summing the values $w$ of the edges.
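The weight/probability correspondence can be checked in a couple of lines: weights add along a path exactly when probabilities multiply, and the value $0.05$ below is the mean of the Beta distribution used for the edge probabilities.

```python
import numpy as np

# Edge weight from infection probability, as in the text: w = -ln(p).
def weight(p):
    return -np.log(p)

# Weights add along a path exactly when probabilities multiply, so a
# weight distance r corresponds to a cumulative probability of exp(-r).
p1, p2 = 0.05, 0.5
r = weight(p1) + weight(p2)
assert np.isclose(np.exp(-r), p1 * p2)

# In particular a radius r = 3 corresponds to exp(-3) ~ 0.05, the mean
# of the Beta distribution used for the edge probabilities.
print(round(float(np.exp(-3)), 3))  # prints 0.05
```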
The spatio-temporal trace of our epidemic network is $x: L \rightarrow \mathbb{T} \rightarrow \mathbb{S}$, with a single signal associating with each agent $\ell$, at each time $t$, the state $x(\ell, t) \in \mathbb{S} = \{\textbf{Susceptible}, \textbf{Exposed}, \textbf{Infected}, \textbf{Recovered} \}=\{\textbf{S}, \textbf{E}, \textbf{I}, \textbf{R} \}$.
To give an idea of the behaviour of this model, we plot in Figure~\ref{fig:simulation} the number of nodes in each state at each time for a random simulation.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{img/simulation.pdf}
\caption{Number of nodes in each state at each time for a random simulation.}
\label{fig:simulation}
\end{figure}
The first property that we consider is:
\begin{equation}
\phi_{dangerous\_days} = \glob{} (\textbf{Susceptible} \reach{[0,1]}{hop} (\ev{[0,2]}(\textbf{Infected})) \Rightarrow \ev{[0,7]} \textbf{Infected} )
\end{equation}
$\phi_{dangerous\_days}$ is satisfied in a location when it is always true (at each time step) that if a susceptible individual is directly connected with an individual that will eventually be infected in the next 2 days, then it will eventually be infected within the next $7$ days. Over 500 experiments, if we consider only the static network, this property is satisfied on $447\pm12.5$ nodes of the network, whereas, considering only the dynamic network, the property is satisfied by $350\pm70.5$ nodes. As expected, daily contacts are more dangerous than casual ones, and the dynamic network shows more variability than the static one.
The second property that we consider is:
\begin{equation}
\phi_{safe\_radius} = \glob{} ( \everywhere{[0,r]}{weight} (\neg \textbf{Infected}) \Rightarrow \glob{[0,T]} (\neg \textbf{Infected}) )
\end{equation}
$\phi_{safe\_radius}$ holds in a location if it is always true that, whenever all the connected locations at a weight distance less than $r$ (i.e.~with infection probability greater than $e^{-r}$) are not infected, this location will remain uninfected for the next $T$ days. With this property we can study the relation between the probability of being infected by connected nodes and actually becoming infected after a number of days. If a location satisfies the property, it means that being within a radius $r$ of uninfected individuals prevents infection.
If a location does not satisfy the property, it means that there is some infected node connected with it at distance greater than $r$ that causes its infection within the next $T$ days. Setting $T=7$ days, we study the variation of $r$ versus the number of nodes that satisfy the property (in a network with 500 nodes). Figure~\ref{fig:safe_radius} shows the results; we also report a second scale with the corresponding probability value. We can see that
with $r=3$, which corresponds to a connection probability equal to $0.05$ (the mean of our Beta distribution), only half of the nodes satisfy the property, and to have almost all nodes satisfying the property we need to consider a very large radius. This means that even having edges with very large values of $w$ in the network will not prevent the spread of the disease.
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{img/phi1.png}
\caption{Number of nodes that satisfy property $\phi_{safe\_radius}$ versus the parameter $r$.}
\label{fig:safe_radius}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We presented STREL, a formal specification language to express and to monitor spatio-temporal requirements over a dynamic network of spatially-distributed CPS. Our monitoring framework considers the CPS topological configuration as a weighted graph with the nodes modeling the CPS entities while the edges representing their arrangement according to predefined
spatial relations (e.g. proximity relation, connectivity, euclidean distance, etc.). Both nodes and edges contain attributes that model physical and logical quantities that can change over time.
STREL combines the Signal Temporal Logic~\cite{Maler2004} with two spatial operators \emph{reach} and \emph{escape} that are interpreted over
the weighted graph. Other spatial modalities such as \emph{everywhere}, \emph{somewhere} and \emph{surround} can also be derived from them.
We demonstrated how STREL can be interpreted according to different semantics (Boolean, real-valued), defining a unified framework, based on constraint semirings, that captures all of them. We further provided a generic offline monitoring algorithm based on such semiring formulation. We showed several examples of requirements that we can monitor in our framework, considering two different case studies.
As future work, we plan to investigate an algorithm for online monitoring of STREL properties. Furthermore, we aim to extend our framework with new features such as the possibility to synthesize automatically spatio-temporal controllers from the STREL specification or to provide
automatically an explanation of the failure, enabling to detect the responsible components when a STREL requirement is violated.
\section*{Acknowledgment}
This research has been partially supported by the Austrian FWF projects ZK-35 and W1255-N23, by the Italian PRIN project ``SEDUCE'' n. 2017TWRCNB and by the Italian PRIN project ``IT-MaTTerS'' n. 2017FTXR7S.
\bibliographystyle{alpha}
\section{Introduction}
\noindent
In this work we consider the problem of the existence of commutative $C^*$-algebras that are generated by families of Toeplitz operators on weighted Bergman spaces over irreducible bounded symmetric domains. More precisely, we are interested in the case where the Toeplitz operators are those given by symbols invariant by some closed subgroup of the group of biholomorphisms. This problem has turned out to be a quite interesting one thanks in part to the application of representation theory.
An important particular case is given when one considers the subgroup fixing some point in the domain, in other words, a maximal compact subgroup of the group of biholomorphisms. In \cite{DOQ}, we proved that for such maximal compact subgroups, the corresponding $C^*$-algebra is commutative. On the other hand, there is another interesting family of subgroups to consider: the maximal tori in the group of biholomorphisms. By the results from \cite{DOQ} it is straightforward to check that the $C^*$-algebra generated by the Toeplitz operators whose symbols are invariant under a fixed maximal torus is commutative if and only if the irreducible bounded symmetric domain is biholomorphically equivalent to some unit ball.
These results have inspired Nikolai Vasilevski to pose the following question. Let $D$ be an irreducible bounded symmetric domain that is not biholomorphically equivalent to a unit ball (that is, it is not of rank one), $K$ a maximal compact subgroup and $T$ a maximal torus in the group of biholomorphisms of $D$. Does there exist a closed subgroup $H$ such that $T \subsetneq H \subsetneq K$ for which the $C^*$-algebras (for all weights) generated by Toeplitz operators with $H$-invariant symbols are commutative? The goal of this work is to give a positive answer to this question for the classical Cartan domain of type $I$ of $2\times 2$ matrices. In the rest of this work we will denote this domain simply by $D$.
The group of biholomorphisms of $D$ is realized by the Lie group $\mathrm{U}(2,2)$ acting by fractional linear transformations. A maximal compact subgroup is given by $\mathrm{U}(2)\times\mathrm{U}(2)$, which contains the maximal torus $\mathbb{T}^2\times\mathbb{T}^2$, where $\mathbb{T}^2$ denotes the group of $2\times2$ diagonal matrices with diagonal entries in $\mathbb{T}$. We prove that there are exactly two subgroups properly between $\mathrm{U}(2)\times\mathrm{U}(2)$ and $\mathbb{T}^2\times\mathbb{T}^2$, and these are $\mathrm{U}(2)\times\mathbb{T}^2$ and $\mathbb{T}^2\times\mathrm{U}(2)$ (see Proposition~\ref{prop:subgroupsT4U2U2}), for which it is also proved that the corresponding $C^*$-algebras generated by Toeplitz operators are unitarily equivalent (see Proposition~\ref{prop:U2T2vsT2U2}). In Section~\ref{sec:U(2)T2} we study the properties of $\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols. The main result here is Theorem~\ref{thm:Toeplitz-U2T2}, where we prove the commutativity of the $C^*$-algebras generated by Toeplitz operators whose symbols are $\mathrm{U}(2)\times\mathbb{T}^2$-invariant. As a first step to understand the structure of these $C^*$-algebras we provide in Section~\ref{sec:spectra} a computation of the spectra of the Toeplitz operators. The main result here is Theorem~\ref{thm:coefficients}.
We would like to use this opportunity to thank Nikolai Vasilevski, to whom this work is dedicated. Nikolai has been a very good friend and an excellent collaborator. He has provided us all with many ideas to work with.
\section{Preliminaries}\label{sec:preliminaries}
\noindent
Let us consider the classical Cartan domain given by
\[D = \{ Z \in M_{2\times2}(\mathbb{C}) : Z Z^* < I_2 \},
\]
where $A<B$ means that $B-A $ is positive definite. This domain is sometimes denoted by either $D^I_{2,2}$ or $D_{2,2}$.
We consider the Lie groups
\[
\mathrm{U}(2,2) = \{ M \in \mathrm{GL}(4,\mathbb{C}) : M^* I_{2,2} M = I_{2,2} \},\]
where
\[
I_{2,2} =
\begin{pmatrix}
I_2 & 0 \\
0 & -I_2
\end{pmatrix},
\]
and the Lie group
\[ \mathrm{SU} (2,2) =\{M\in\mathrm{U} (2,2) : \det M =1\}.\]
Then $\mathrm{SU} (2,2)$, and hence also $\mathrm{U}(2,2)$, act transitively on $D$ by
\[
\begin{pmatrix}
A & B \\
C & D
\end{pmatrix}\cdot Z = (AZ+B)(CZ+D)^{-1},
\]
where we have a block decomposition by matrices of size $2\times2$. The group $\mathrm{SU}(2,2)$ is, up to covering, the group of biholomorphic isometries of $D$, and its action is locally faithful.
We observe that the action of $\mathrm{U}(2,2)$ on $D$ is not faithful; more precisely, the kernel of its action is the subgroup of matrices of the form $tI_4$, where $t \in \mathbb{T}$.
The maximal compact subgroup of $\mathrm{U}(2,2)$ that fixes the origin $0$ in $D$ is given by
\[
\mathrm{U}(2)\times\mathrm{U}(2) = \left\{
\begin{pmatrix}
A & 0 \\
0 & B
\end{pmatrix} : A\in\mathrm{U}(2), B\in\mathrm{U}(2)
\right\}.
\]
For simplicity, we write the elements of $\mathrm{U}(2)\times\mathrm{U}(2)$ as $(A,B)$ instead of using their
block diagonal representation. A maximal torus of $\mathrm{U}(2)\times\mathrm{U}(2)$ is given by
\[
\mathbb{T}^4 = \{ (D_1, D_2) \in \mathrm{U}(2)\times\mathrm{U}(2) :
D_1, D_2 \text{ diagonal} \}.
\]
The corresponding maximal compact subgroup and maximal torus in $\mathrm{SU} (2,2)$ are given by
\begin{align*}
\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2)) &= \{(A,B) \in \mathrm{U}(2)\times\mathrm{U}(2) :
\det(A)\det(B) =1 \}, \\
\mathbb{T}^3 &= \{(D_1,D_2) \in \mathbb{T}^4 : \det(D_1)\det(D_2) = 1\}.
\end{align*}
For every $\lambda > 3$ we will consider the weighted measure $v_\lambda$ on $D$ given by
\[
\dif v_\lambda(Z) = c_\lambda \det(I_2 - Z Z^*)^{\lambda - 4}\dif Z
\]
where the constant $c_\lambda$ is chosen so that $v_\lambda$ is a probability measure. In particular, we have,
see \cite[Thm. 2.2.1]{Hua63}:
\[
c_\lambda =
\frac{ (\lambda -3)(\lambda-2)^2(\lambda-1)}{\pi^4}, \quad \lambda >3.\]
The Hilbert space inner product defined by $v_\lambda$ will be denoted by $\left<\cdot,\cdot\right>_\lambda$. We will from now on always assume that $\lambda >3$. The weighted Bergman space $\mathcal{H}^2_\lambda(D)$ is the Hilbert space of holomorphic functions that belong to $L^2(D,v_\lambda)$. This is a reproducing kernel Hilbert space with Bergman kernel given by
\[
k_\lambda(Z,W) = \det(I_2 - Z W^*)^{-\lambda},
\]
which yields the Bergman projection $B_\lambda : L^2(D,v_\lambda) \rightarrow \mathcal{H}^2_\lambda(D)$ given by
\[B_\lambda f(Z)=\int_D f(W)k_\lambda (Z,W)\dif v_\lambda (W).\]
We recall that the space of holomorphic polynomials $\mathcal{P}(M_{2\times2}(\mathbb{C}))$ is dense on every weighted Bergman space. Furthermore, it is well known that one has, for every $\lambda > 3$, the decomposition
\[
\mathcal{H}^2_\lambda(D) = \bigoplus_{d=0}^\infty \mathcal{P}^d(M_{2\times2}(\mathbb{C}))
\]
into a direct sum of Hilbert spaces, where $\mathcal{P}^d(M_{2\times2}(\mathbb{C}))$ denotes the subspace of homogeneous holomorphic polynomials of degree $d$.
For every essentially bounded symbol $\varphi \in L^\infty(D)$ and for every $\lambda > 3$ we define the corresponding Toeplitz operator by
\[
T^{(\lambda)}_\varphi(f) = B_\lambda(\varphi f), \quad f\in\mathcal{H}_\lambda^2(D).\]
In particular, these Toeplitz operators are given by the following expression
\[
T^{(\lambda)}_\varphi(f)(Z) =
c_\lambda\int_{D}
\frac{\varphi(W)f(W)\det(I_2-WW^*)^{\lambda-4}}{\det(I_2-ZW^*)^\lambda}\, \dif W.
\]
On the other hand, for every $\lambda > 3$ there is an irreducible unitary representation of the universal covering group $\widetilde{\mathrm{U}}(2,2)$ of $\mathrm{U}(2,2)$ acting
on $\mathcal{H}_\lambda^2(D)$ given by
\begin{align*}
\pi_\lambda : \widetilde{\mathrm{U}}(2,2) \times \mathcal{H}^2_\lambda(D) &\rightarrow
\mathcal{H}^2_\lambda(D) \\
(\pi_\lambda(g)f)(Z) &= j(g^{-1},Z)^\frac{\lambda}{4} f(g^{-1}Z),
\end{align*}
where $j(g,Z)$ denotes the complex Jacobian of the transformation $g$ at the point $Z$.
We note that every $g \in \mathrm{U}(2)\times\mathrm{U}(2)$ defines a linear unitary transformation of $D$ that preserves all the measures $\dif v_\lambda$.
If $\lambda/4$ is not an integer, then $j(g,Z)^{\lambda/4}$ is not always well defined, which makes it necessary to consider a covering of $\mathrm{U}(2,2)$. We therefore consider the universal covering group $\widetilde{\mathrm{U}}(2,2)$ of $\mathrm{U}(2,2)$ and its subgroup $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$, the universal covering group of $\mathrm{U}(2)\times\mathrm{U}(2)$. Here the covering map is given by
\[
(x,A,y,B) \mapsto (e^{ix}A,e^{iy}B).
\]
Hence, the action of $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$ on $D$ is given by the expression
\[
(x,A,y,B)Z = e^{i(x-y)}AZB^{-1}.
\]
It follows that the restriction of $\pi_\lambda$ to the subgroup $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$ is given by the expression
\[
(\pi_\lambda(x,A,y,B)f)(Z) = e^{i\lambda(y-x)}f(e^{i(y-x)}A^{-1}ZB).
\]
It is well known that this restriction is multiplicity-free for every $\lambda>3$ (see \cite{DOQ} and \cite{K08}).
It is useful to consider as well the representation
\begin{align*}
\pi_\lambda' : (\mathrm{U}(2)\times\mathrm{U}(2)) \times \mathcal{H}^2_\lambda(D) &\rightarrow
\mathcal{H}^2_\lambda(D) \\
(\pi_\lambda'(g)f)(Z) &= f(g^{-1}Z),
\end{align*}
which is well-defined and unitary as a consequence of the previous remarks.
Note that the representations $\pi_\lambda$ and $\pi_\lambda'$ are defined on groups that differ by a covering, but they also differ by the factor $e^{i\lambda(y-x)}$. It follows that $\pi_\lambda'$ is multiplicity-free with the same isotypic decomposition as that of $\pi_\lambda$.
\section{Toeplitz operators invariant under subgroups of $\mathrm{U}(2)\times\mathrm{U}(2)$}
\noindent
For a closed subgroup $H \subset \mathrm{U}(2)\times\mathrm{U}(2)$ we will denote by $\mathcal{A}^H$ the complex vector space of essentially bounded symbols $\varphi$ on $D$ that are $H$-invariant, i.e.~such that for every $h \in H$ we have
\[
\varphi(hZ) = \varphi(Z)
\]
for almost every $Z \in D$. Denote by $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ the $C^*$-algebra generated by Toeplitz operators with symbols in $\mathcal{A}^H$ acting on the weighted Bergman space $\mathcal{H}^2_\lambda(D)$. We have $\mathrm{U}(2)\times \mathrm{U}(2)= \mathbb{T} I_4\,\mathrm{S}(\mathrm{U}(2)\times \mathrm{U}(2))$, where the central circle $\mathbb{T} I_4$ acts trivially on $D$. We now single out the special case that will be the main topic of this article.
Let us denote
\[
\mathrm{U}(2) \times \mathbb{T} = \left\{
(A, t) = \left(A,
\begin{pmatrix}
t & 0 \\
0 & \overline{t}
\end{pmatrix}
\right) : A \in \mathrm{U}(2), t \in \mathbb{T}
\right\}.
\]
We now prove that $\mathrm{U}(2)\times\mathbb{T}$-invariance is equivalent to $\mathrm{U}(2)\times\mathbb{T}^2$-invariance.
\begin{lemma}\label{lem:U2T-invariance}
The groups $\mathrm{U}(2)\times\mathbb{T}^2$ and $\mathrm{U}(2)\times\mathbb{T}$ have the same orbits. In other words, for every $Z \in D$, we have
\[
(\mathrm{U}(2)\times\mathbb{T}) Z = (\mathrm{U}(2)\times\mathbb{T}^2) Z.
\]
In particular, an essentially bounded symbol $\varphi$ is $\mathrm{U}(2)\times\mathbb{T}^2$-invariant if and only if it is $\mathrm{U}(2)\times\mathbb{T}$-invariant.
\end{lemma}
\begin{proof}
We observe that $\mathrm{U}(2)\times\mathbb{T}^2$ is generated as a group by $\mathrm{U}(2)\times\mathbb{T}$ and the subgroup
\[
\{I_2\}\times\mathbb{T} I_2.
\]
But for every $t \in \mathbb{T}$ and $Z \in D$ we have
\[
(I_2,tI_2)Z = \overline{t}Z = (\overline{t}I_2,I_2)Z
\]
which is a biholomorphism of $D$ already realized by elements of $\mathrm{U}(2)\times\mathbb{T}$. Hence, $\mathrm{U}(2)\times\mathbb{T}^2$ and $\mathrm{U}(2)\times\mathbb{T}$ yield the same transformations of $D$, and so the result follows.
\end{proof}
The following is now a particular case of \cite[Thm. 6.4]{DOQ}
and can be proved directly in exactly the same way.
\begin{theorem}\label{thm:H-commutativeC*}
For a closed subgroup $H$ of $\mathrm{U}(2)\times\mathrm{U}(2)$ the following conditions are equivalent for every $\lambda > 3$:
\begin{enumerate}
\item The $C^*$-algebra $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ is commutative.
\item The restriction $\pi_\lambda|_H$ is multiplicity-free.
\end{enumerate}
\end{theorem}
As noted in Section~\ref{sec:preliminaries}, the unitary representation $\pi_\lambda$ is multiplicity-free on $\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$ and thus the $C^*$-algebra generated by Toeplitz operators with $\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$-invariant symbols is commutative for every weight $\lambda > 3$. Such operators are also known as radial Toeplitz operators.
On the other hand, it follows from Example~6.5 from \cite{DOQ} that the restriction $\pi_\lambda|_{\mathbb{T}^3}$ is not multiplicity-free, where $\mathbb{T}^3$ is the maximal torus of $\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$ described in Section~\ref{sec:preliminaries}. Hence, we conclude that $\mathcal{T}^{(\lambda)}(\mathcal{A}^{\mathbb{T}^3})$ is not commutative for any $\lambda > 3$.
We now consider subgroups $H$ such that $\mathbb{T}^3 \subset H \subset \mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$ or, equivalently, subgroups $H$ such that $\mathbb{T}^4 \subset H \subset \mathrm{U}(2)\times\mathrm{U}(2)$. For simplicity, we will assume that $H$ is connected.
\begin{proposition}\label{prop:subgroupsT4U2U2}
Let $\mathbb{T}^4$ denote the subgroup of diagonal matrices in $\mathrm{U}(2)\times\mathrm{U}(2)$. Then the only connected subgroups strictly between $\mathrm{U}(2)\times\mathrm{U}(2)$ and $\mathbb{T}^4$ are $\mathrm{U}(2)\times\mathbb{T}^2$ and $\mathbb{T}^2\times\mathrm{U}(2)$. In particular, the only connected subgroups strictly between $\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$ and $\mathbb{T}^3$ are $\mathrm{S}(\mathrm{U}(2)\times\mathbb{T}^2)$ and $\mathrm{S}(\mathbb{T}^2\times\mathrm{U}(2))$.
\end{proposition}
\begin{proof}
It is enough to prove the first claim for the corresponding Lie algebras.
First note that, for $x_1, x_2 \in \mathbb{R}$ and $z \in \mathbb{C}$,
\[
\left[
\begin{pmatrix}
ix_1 & 0 \\
0 & ix_2
\end{pmatrix},
\begin{pmatrix}
0 & z \\
-\overline{z} & 0
\end{pmatrix}
\right]
=
\begin{pmatrix}
0 & i(x_1-x_2)z \\
-\overline{i(x_1-x_2)z} & 0
\end{pmatrix},
\]
which proves that the space
\[
V =
\left\{
\begin{pmatrix}
0 & z \\
-\overline{z} & 0
\end{pmatrix} : z \in \mathbb{C}
\right\}
\]
is an irreducible $i\mathbb{R}^2$-submodule of $\mathfrak{u}(2)$. Hence, the decomposition of $\mathfrak{u}(2)\times\mathfrak{u}(2)$ into irreducible $i\mathbb{R}^4$-submodules is given by
\[
\mathfrak{u}(2)\times\mathfrak{u}(2) = i\mathbb{R}^4 \oplus V\times\{0\}
\oplus \{0\} \times V.
\]
We conclude that $\mathfrak{u}(2)\times i\mathbb{R}^2$ and $i\mathbb{R}^2\times\mathfrak{u}(2)$ are the only $i\mathbb{R}^4$-submodules strictly between $\mathfrak{u}(2)\times\mathfrak{u}(2)$ and $i\mathbb{R}^4$, and both are Lie algebras.
\end{proof}
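The bracket identity used in the proof can also be verified numerically for arbitrary sample values; a short NumPy check:

```python
import numpy as np

# Check [diag(i x1, i x2), [[0, z], [-conj(z), 0]]]
#     = [[0, i(x1-x2) z], [-conj(i(x1-x2) z), 0]]
# for arbitrary real x1, x2 and complex z.
x1, x2, z = 0.7, -1.3, 0.4 + 0.9j
H = np.diag([1j * x1, 1j * x2])
X = np.array([[0, z], [-np.conj(z), 0]])

lhs = H @ X - X @ H
w = 1j * (x1 - x2) * z
rhs = np.array([[0, w], [-np.conj(w), 0]])
assert np.allclose(lhs, rhs)
```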
There is a natural biholomorphism
\begin{align*}
F : D &\rightarrow D \\
Z &\mapsto Z^\top
\end{align*}
that clearly preserves all the weighted measures $\dif v_\lambda$. Hence, $F$ induces a unitary map
\begin{align*}
F^* : L^2(D,v_\lambda) &\rightarrow L^2(D,v_\lambda) \\
F^*(f) &= f\circ F^{-1}
\end{align*}
that preserves $\mathcal{H}^2_\lambda(D)$. The same expression
\[
\varphi \mapsto F^*(\varphi) = \varphi\circ F^{-1}
\]
defines an isometric isomorphism on the space $L^\infty(D)$ of essentially bounded symbols.
Furthermore, we consider the automorphism $\rho \in \mathrm{Aut}(\mathrm{U}(2)\times\mathrm{U}(2))$ given by
$\rho(A,B) = (\overline{B},\overline{A})$. Thus, we clearly have
\[
F((A,B)Z) = F(AZB^{-1}) = \overline{B}Z^\top \overline{A}^{-1}
= \rho(A,B) F(Z),
\]
for all $(A,B) \in \mathrm{U}(2)\times\mathrm{U}(2)$ and $Z \in D$. In other words, the map $F$ intertwines the $\mathrm{U}(2)\times\mathrm{U}(2)$-action with that of the image of $\rho$.
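This intertwining relation is easy to verify numerically; a short NumPy check with a random unitary pair (the `random_unitary` helper is only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unitary(rng):
    # QR of a complex Gaussian matrix yields a unitary Q
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, _ = np.linalg.qr(M)
    return Q

A, B = random_unitary(rng), random_unitary(rng)
Z = 0.3 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

lhs = (A @ Z @ np.linalg.inv(B)).T               # F((A, B) Z)
rhs = B.conj() @ Z.T @ np.linalg.inv(A.conj())   # rho(A, B) F(Z)
assert np.allclose(lhs, rhs)
```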
We observe that $\rho(\mathrm{U}(2)\times\mathbb{T}^2) = \mathbb{T}^2\times\mathrm{U}(2)$. Hence, the previous constructions can be used to prove that both groups define equivalent $C^*$-algebras from invariant Toeplitz operators.
\begin{proposition}\label{prop:U2T2vsT2U2}
The isomorphism of $L^\infty(D)$ given by $F^*$ maps $\mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2}$ onto $\mathcal{A}^{\mathbb{T}^2\times\mathrm{U}(2)}$. Furthermore, for every weight $\lambda > 3$ and for every $\varphi \in \mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2}$ we have
\[
T^{(\lambda)}_{F^*(\varphi)} = F^*\circ T^{(\lambda)}_\varphi
\circ (F^*)^{-1}.
\]
In particular, the $C^*$-algebras $\mathcal{T}^{(\lambda)}(\mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2})$ and $\mathcal{T}^{(\lambda)}(\mathcal{A}^{\mathbb{T}^2\times\mathrm{U}(2)})$ are unitarily equivalent for every $\lambda > 3$.
\end{proposition}
\begin{proof}
From the above computations, for a given $\varphi \in L^\infty(D)$ we have
\[
\varphi \circ (A,B) \circ F^{-1}
= \varphi \circ F^{-1} \circ \rho(A,B)
\]
for every $(A,B) \in \mathrm{U}(2)\times\mathrm{U}(2)$. Hence, $\varphi$ is $\mathrm{U}(2)\times\mathbb{T}^2$-invariant if and only if $F^*(\varphi)$ is $\mathbb{T}^2\times\mathrm{U}(2)$-invariant. This proves the first part.
On the other hand, we use that the map $F^*$ is unitary on $L^2(D,v_\lambda)$ to conclude that for every $f,g \in \mathcal{H}^2_\lambda(D)$ we have
\begin{align*}
\left<T^{(\lambda)}_{F^*(\varphi)}(f),g\right>_\lambda
&= \left<F^*(\varphi)f,g\right>_\lambda \\
&= \left<(\varphi\circ F^{-1})f,g\right>_\lambda \\
&= \left<\varphi (f\circ F),g\circ F\right>_\lambda \\
&= \left<T^{(\lambda)}_{\varphi} \circ (F^*)^{-1}(f),
(F^*)^{-1}g\right>_\lambda \\
&= \left<F^* \circ T^{(\lambda)}_{\varphi} \circ (F^*)^{-1}(f),
g\right>_\lambda,
\end{align*}
and this completes the proof.
\end{proof}
\section{$\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols}\label{sec:U(2)T2}
\noindent
As noted in Section~\ref{sec:preliminaries}, the subgroup $\mathrm{U}(2)\times\mathrm{U}(2)$ does not act faithfully. Hence, it is convenient to consider suitable subgroups for which the action is at least locally faithful. This is particularly important when describing the orbits of the subgroups considered. We also noted before that the most natural choice is to consider subgroups of $\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(2))$, however for our setup it will be useful to consider other subgroups.
For the case of the subgroup $\mathrm{U}(2)\times\mathbb{T}^2$ it turns out that $\mathrm{U}(2)\times\mathbb{T}^2$-invariance is equivalent to $\mathrm{S}(\mathrm{U}(2)\times\mathbb{T}^2)$-invariance. This holds for the action through biholomorphisms on $D$ and so for every induced action on function spaces over $D$.
To understand the structure of the $\mathrm{U}(2)\times\mathbb{T}$-orbits, the next result provides a choice of a canonical element on each orbit.
\begin{proposition}\label{prop:U2T-orbits}
For every $Z \in M_{2\times2}(\mathbb{C})$ there exists $r \in [0,\infty)^3$ and $(A,t) \in \mathrm{U}(2)\times\mathbb{T}$ such that
\[
(A,t) Z =
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}.
\]
Furthermore, if $Z = (Z_1,Z_2)$ satisfies $\det(Z), \left<Z_1,Z_2\right> \not= 0$, then $r$ is unique and $(A,t)$ is unique up to a sign.
\end{proposition}
\begin{proof}
First assume that $\det(Z) = 0$, so that we can write $Z = (au, bu)$ for some unit vector $u \in \mathbb{C}^2$ and some $a,b \in \mathbb{C}$. For $Z = 0$ the claim is trivial. If either $a$ or $b$ is zero, but not both, then we can choose $A \in \mathrm{U}(2)$ that maps the only nonzero column into a positive multiple of $e_1$ and the result follows. Finally, we assume that $a$ and $b$ are both non-zero. In this case, choose $A \in \mathrm{U}(2)$ such that $A(au) = |a|e_1$ and $t \in \mathbb{T}$ such that
\[
t^2 = \frac{a|b|}{b|a|}.
\]
Then, one can easily check that
\[
(tA,t)Z =
\begin{pmatrix}
|a| & |b| \\
0 & 0
\end{pmatrix}.
\]
Let us now assume that $\det(Z) \not= 0$. From the unit vector
\[
\begin{pmatrix}
a \\
b
\end{pmatrix} = \frac{Z_1}{|Z_1|},
\]
we define
\[
A =
\begin{pmatrix}
\overline{a} & \overline{b} \\
-b & a
\end{pmatrix} \in \mathrm{SU}(2).
\]
Then, it follows easily that we have
\[
AZ =
\begin{pmatrix}
|Z_1| & \frac{1}{|Z_1|}\left<Z_2,Z_1\right> \\
0 & \frac{1}{|Z_1|} \det(Z)
\end{pmatrix}.
\]
If $s,t \in \mathbb{T}$ are given, then we have
\[
\left(
\begin{pmatrix}
t & 0 \\
0 & s
\end{pmatrix}A, t\right)Z =
\begin{pmatrix}
|Z_1| & \frac{t^2}{|Z_1|}\left<Z_2,Z_1\right> \\
0 & \frac{st}{|Z_1|} \det(Z)
\end{pmatrix}.
\]
Hence, it is enough to choose $s,t \in \mathbb{T}$ so that $r_2 = t^2\left<Z_2,Z_1\right>$ and $r_3 = st\det(Z)$ are both non-negative to complete the existence part with $r_1 = |Z_1|$.
For the uniqueness, let us assume that $\det(Z),\left<Z_1,Z_2\right> \not= 0$ and, besides the identity in the statement, assume that we also have
\[
(A',t') Z =
\begin{pmatrix}
r_1' & r_2' \\
0 & r_3'
\end{pmatrix},
\]
with the same restrictions. Then, we obtain the identity
\begin{equation}\label{eq:U2T-r}
(A'A^{-1},t'\overline{t})
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix} =
\begin{pmatrix}
r_1' & r_2' \\
0 & r_3'
\end{pmatrix}.
\end{equation}
This implies that $A'A^{-1}$ is a diagonal matrix of the form
\[
\begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix}
\]
with $a,b \in \mathbb{T}$. Then, taking the determinant of \eqref{eq:U2T-r} we obtain $abr_1r_3 = r_1'r_3'$, which implies that $ab = 1$. If we now use the identities from the entries in \eqref{eq:U2T-r}, then one can easily conclude that $r = r'$ and $(A',t') = \pm (A,t)$.
\end{proof}
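The constructive steps of this proof can also be checked numerically for a generic $Z$ (for which $\det(Z)$ and $\left<Z_1,Z_2\right>$ are nonzero almost surely); a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
Z = 0.3 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# A0 maps the first column Z_1 to |Z_1| e_1, as in the proof
a, b = Z[:, 0] / np.linalg.norm(Z[:, 0])
A0 = np.array([[np.conj(a), np.conj(b)], [-b, a]])

c = np.vdot(Z[:, 0], Z[:, 1])        # <Z_2, Z_1>
d = np.linalg.det(Z)
t = np.sqrt(np.conj(c) / abs(c))     # then t^2 c >= 0
s = np.conj(t * d) / abs(t * d)      # then s t d >= 0

A = np.diag([t, s]) @ A0
W = A @ Z @ np.diag([np.conj(t), t])   # action of (A, diag(t, conj(t)))

assert abs(W[1, 0]) < 1e-8             # upper triangular
assert np.abs(W.imag).max() < 1e-8     # real entries ...
assert W.real.min() > -1e-8            # ... and nonnegative
```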
The following result is an immediate consequence.
\begin{corollary}
Let $\varphi \in L^\infty(D)$ be given. Then, $\varphi$ is $\mathrm{U}(2)\times\mathbb{T}^2$-invariant if and only if for a.e.~$Z \in D$ we have
\[
\varphi(Z) =
\varphi\left(
\begin{matrix}
r_1 & r_2 \\
0 & r_3
\end{matrix}
\right)
\]
where $r=(r_1,r_2,r_3)$ are the (essentially) unique values obtained from $Z$ in Proposition~\ref{prop:U2T-orbits}.
\end{corollary}
\section{Toeplitz operators with $\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols}
\noindent
As noted in Section~\ref{sec:preliminaries}, for every $\lambda > 3$ the restriction of $\pi_\lambda$ to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$ is multiplicity-free. We start this section by providing an explicit description of the corresponding isotypic decomposition.
Let us consider the following set of indices
\[
\overrightarrow{\mathbb{N}}^2 = \{\nu=(\nu_1,\nu_2) \in \mathbb{Z}^2 : \nu_1 \geq \nu_2 \geq 0 \}.
\]
Then, for every $\nu \in \overrightarrow{\mathbb{N}}^2$, we let $F_\nu$ denote the complex irreducible $\mathrm{SU}(2)$-module with dimension $\nu_1-\nu_2 + 1$. For example, $F_\nu$ can be realized as the $\mathrm{SU}(2)$-module given by $\mathrm{Sym}^{\nu_1-\nu_2}(\mathbb{C}^2)$ or by the space of homogeneous polynomials in two complex variables and degree $\nu_1-\nu_2$. Next, we let the center $\mathbb{T} I_2$ of $\mathrm{U}(2)$ act on the space $F_\nu$ by the character $t \mapsto t^{\nu_1 + \nu_2}$. It is easy to check that the actions on $F_\nu$ of $\mathrm{SU}(2)$ and $\mathbb{T} I_2$ are the same on their intersection $\{\pm I_2\}$. This turns $F_\nu$ into a complex irreducible $\mathrm{U}(2)$-module. We note (and will use without further remarks) that the $\mathrm{U}(2)$-module structure of $F_\nu$ can be canonically extended to a module structure over $\mathrm{GL}(2,\mathbb{C})$.
We observe that the dual $F_\nu^*$ as $\mathrm{U}(2)$-module is realized by the same space with the same $\mathrm{SU}(2)$-action but with the action of the center $\mathbb{T} I_2$ now given by the character $t \mapsto t^{-\nu_1-\nu_2}$.
If $V$ is any $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$-module, then for every $\lambda$ we consider a new $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$-module given by the action
\begin{equation}\label{eq:Vlambda}
(x,A,y,B)\cdot v = e^{i\lambda(y-x)} (x,A,y,B)v
\end{equation}
where $(x,A,y,B) \in \mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$, $v \in V$ and the action of $(x,A,y,B)$ on $v$ on the left-hand side is given by the original structure of $V$. We will denote by $V_\lambda$ this new $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$-module structure.
In particular, for every $\nu \in \overrightarrow{\mathbb{N}}^2$ the space $F_\nu^*\otimes F_\nu$ is an irreducible module over $\mathrm{U}(2)\times\mathrm{U}(2)$ and, for every $\lambda > 3$, the space $(F_\nu^*\otimes F_\nu)_\lambda$ is an irreducible module over $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$. Note that two such modules defined for $\nu, \nu' \in \overrightarrow{\mathbb{N}}^2$ are isomorphic (over the corresponding group) if and only if $\nu=\nu'$.
\begin{proposition}\label{prop:U2U2-isotypic}
For every $\lambda > 3$, the isotypic decomposition of the restriction of $\pi_\lambda$ to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$ is given by
\[
\mathcal{H}^2_\lambda(D) \cong
\bigoplus_{\nu \in \overrightarrow{\mathbb{N}}^2} (F_\nu^*\otimes F_\nu)_\lambda,
\]
and this decomposition is multiplicity-free. With respect to this isomorphism and for every $d \in \mathbb{N}$, the subspace $\mathcal{P}^d(M_{2\times2}(\mathbb{C}))$ corresponds to the sum of the terms for $\nu$ such that $|\nu| = d$.
Furthermore, for the Cartan subalgebra given by the diagonal matrices of $\mathfrak{u}(2)\times\mathfrak{u}(2)$ and a suitable choice of positive roots, the irreducible $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$-submodule of $\mathcal{H}^2_\lambda(D)$ corresponding to $(F_\nu^*\otimes F_\nu)_\lambda$ has a highest weight vector given by
\[
p_\nu(Z) = z_{11}^{\nu_1-\nu_2}\det(Z)^{\nu_2},
\]
for every $\nu \in \overrightarrow{\mathbb{N}}^2$.
\end{proposition}
\begin{proof}
By the remarks in Section~\ref{sec:preliminaries} we can consider the representation $\pi_\lambda'$. Furthermore, it was already mentioned in that section that $\mathcal{P}^d(M_{2\times2}(\mathbb{C}))$ is $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$-invariant and so we compute its decomposition into irreducible submodules. In what follows we consider both $\pi_\lambda$ and $\pi_\lambda'$ always restricted to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$. We also recall that for $\pi_\lambda'$ we already have an action for $\mathrm{U}(2)\times\mathrm{U}(2)$ without the need of passing to the universal covering group.
Note that the representation $\pi_\lambda'$ on each $\mathcal{P}^d(M_{2\times2}(\mathbb{C}))$ naturally extends with the same expression from $\mathrm{U}(2)\times\mathrm{U}(2)$ to $\mathrm{GL}(2,\mathbb{C})\times\mathrm{GL}(2,\mathbb{C})$. This action is regular in the sense of representations of algebraic groups. By the Zariski density of $\mathrm{U}(2)$ in $\mathrm{GL}(2,\mathbb{C})$ it follows that invariance and irreducibility of subspaces as well as isotypic decompositions with respect to either $\mathrm{U}(2)$ or $\mathrm{GL}(2,\mathbb{C})$ are the same for $\pi_\lambda'$ in $\mathcal{P}^d(M_{2\times2}(\mathbb{C}))$. Hence, we can apply Theorem~5.6.7 from \cite{GW} (see also \cite{Johnson80}) to conclude that
\[
\mathcal{P}^d(M_{2\times2}(\mathbb{C})) \cong
\bigoplus_{\substack{\nu \in \overrightarrow{\mathbb{N}}^2 \\ |\nu| = d}} F_\nu^*\otimes F_\nu
\]
as $\mathrm{U}(2)\times\mathrm{U}(2)$-modules for the representation $\pi_\lambda'$. Since the representations $\pi_\lambda$ and $\pi_\lambda'$ differ by the factor $e^{i\lambda(y-x)}$ for elements of the form $(x,A,y,B)$, taking the sum over $d \in \mathbb{N}$ we obtain the isotypic decomposition of $\mathcal{H}^2_\lambda(D)$ as stated. This is multiplicity-free as a consequence of the remarks in this section.
Finally, the claim on highest weight vectors is contained in the proof of Theorem~5.6.7 from \cite{GW}, and it can also be found in \cite{Johnson80}.
\end{proof}
We now consider the subgroup $\mathrm{U}(2)\times\mathbb{T}^2$. Note that the subgroup of $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathrm{SU}(2)$ corresponding to $\mathrm{U}(2)\times\mathbb{T}^2$ is realized by $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ with covering map given by the expression
\[
(x,A,y,t) \mapsto
\left(e^{ix}A,
e^{iy}
\begin{pmatrix}
t & 0 \\
0 & \overline{t}
\end{pmatrix}
\right).
\]
In particular, the action of $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ on $D$ is given by
\[
(x,A,y,t) Z = e^{i(x-y)}AZ
\begin{pmatrix}
t & 0 \\
0 & \overline{t}
\end{pmatrix},
\]
and the representation $\pi_\lambda$ restricted to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ is given by
\[
(\pi_\lambda(x,A,y,t)f)(Z) = e^{i\lambda(y-x)}
f\left(e^{i(y-x)}A^{-1}Z
\begin{pmatrix}
t & 0 \\
0 & \overline{t}
\end{pmatrix}\right).
\]
We recall that for any Cartan subgroup of $\mathrm{U}(2)$ we have a weight space decomposition
\[
F_\nu = \bigoplus_{j=0}^{\nu_1-\nu_2} F_\nu(\nu_1-\nu_2 - 2j),
\]
where $F_\nu(k)$ denotes the $1$-dimensional weight space corresponding to the weight $k = -\nu_1+\nu_2, -\nu_1+\nu_2 +2, \dots, \nu_1-\nu_2 - 2, \nu_1-\nu_2$. For simplicity, we will always consider the Cartan subgroup $\mathbb{T}^2$ of $\mathrm{U}(2)$ given by its subset of diagonal matrices. We conclude that $F_\nu(k)$ is isomorphic, as a $\mathbb{T}^2$-module, to the $1$-dimensional representation corresponding to the character $(t_1,t_2)\mapsto t_1^{\nu_2}t_2^k$. We will denote by $\mathbb{C}_{(m_1,m_2)}$ the $1$-dimensional $\mathbb{T}^2$-module defined by the character $(t_1,t_2) \mapsto t_1^{m_1}t_2^{m_2}$, where $(m_1,m_2) \in \mathbb{Z}^2$. In particular, we have $F_\nu(k) \cong \mathbb{C}_{(\nu_2,k)}$ for every $k = -\nu_1+\nu_2, -\nu_1+\nu_2 +2, \dots, \nu_1-\nu_2 - 2, \nu_1-\nu_2$.
Using the previous notations and remarks we can now describe the isotypic decomposition for the restriction of $\pi_\lambda$ to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$. As before, for a module $V$ over the group $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ we will denote by $V_\lambda$ the module over the same group obtained by the expression \eqref{eq:Vlambda}.
\begin{proposition}\label{prop:U2T2-isotypic}
For every $\lambda > 3$, the isotypic decomposition of the restriction of $\pi_\lambda$ to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ is given by
\[
\mathcal{H}^2_\lambda(D) \cong
\bigoplus_{\nu \in \overrightarrow{\mathbb{N}}^2}
\bigoplus_{j=0}^{\nu_1-\nu_2}
(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)})_\lambda,
\]
and this decomposition is multiplicity-free.
Furthermore, for the Cartan subalgebra given by the diagonal matrices of $\mathfrak{u}(2)\times i\mathbb{R}^2$ and a suitable choice of positive roots, the irreducible $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$-submodule of $\mathcal{H}^2_\lambda(D)$ corresponding to $(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)})_\lambda$ has a highest weight vector given by
\[
p_{\nu,j}(Z) = z_{11}^{\nu_1-\nu_2-j}z_{12}^j\det(Z)^{\nu_2},
\]
for every $\nu \in \overrightarrow{\mathbb{N}}^2$ and $j = 0, \dots, \nu_1-\nu_2$.
\end{proposition}
\begin{proof}
We build from Proposition~\ref{prop:U2U2-isotypic} and its proof so we follow their notation.
As noted above in this section we have a weight space decomposition
\[
F_\nu = \bigoplus_{j=0}^{\nu_1-\nu_2} F_\nu(\nu_1-\nu_2 - 2j)
\cong \bigoplus_{j=0}^{\nu_1-\nu_2} \mathbb{C}_{(\nu_2,\nu_1-\nu_2 - 2j)},
\]
where the isomorphism holds term by term as modules over the Cartan subgroup $\mathbb{T}^2$ of diagonal matrices of $\mathrm{U}(2)$. It follows from this and Proposition~\ref{prop:U2U2-isotypic} that we have an isomorphism
\[
\mathcal{H}^2_\lambda(D) \cong
\bigoplus_{\nu \in \overrightarrow{\mathbb{N}}^2}
\bigoplus_{j=0}^{\nu_1-\nu_2}
F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)},
\]
of modules over $\mathrm{U}(2)\times\mathbb{T}^2$ for the restriction of $\pi_\lambda'$ to this subgroup. Hence, with the introduction of the factor $e^{i\lambda(y-x)}$ from \eqref{eq:Vlambda} we obtain the isomorphism of modules over $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ for the restriction of $\pi_\lambda$ to this subgroup. This proves the first part of the statement.
We also note that the modules $(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)})_\lambda$ are clearly irreducible over $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ and non-isomorphic for different values of $\nu$ and $j$. Hence, the restriction of $\pi_\lambda$ to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$ is multiplicity-free.
On the other hand, the proof of Theorem~5.6.7 from \cite{GW}, on which that of Proposition~\ref{prop:U2U2-isotypic} is based, considers the Cartan subalgebra defined by diagonal matrices in $\mathfrak{u}(2)\times\mathfrak{u}(2)$ and the order on roots for which the positive roots correspond to matrices of the form $(X,Y)$ with $X$ lower triangular and $Y$ upper triangular. With these choices, for every $\nu \in \overrightarrow{\mathbb{N}}^2$, the highest weight vector $p_\nu(Z)$ from Proposition~\ref{prop:U2U2-isotypic} lies in the subspace corresponding to the tensor product of two highest weight spaces. Hence, $p_\nu(Z)$ lies in the subspace corresponding to $(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2)})_\lambda$. In particular, $p_\nu(Z)$ is a highest weight vector for $(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2)})_\lambda$.
It is well known from the description of the representations of $\mathfrak{sl}(2,\mathbb{C})$ that the element
\[
Y =
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix} \in \mathfrak{sl}(2,\mathbb{C})
\]
acts on $F_\nu$ so that it maps
\[
F_\nu(\nu_1-\nu_2-2j) \rightarrow F_\nu(\nu_1-\nu_2-2j-2)
\]
isomorphically for every $j = 0, \dots, \nu_1-\nu_2-1$. This holds for the order where the upper triangular matrices in $\mathfrak{sl}(2,\mathbb{C})$ define positive roots. Since the action of $\mathrm{U}(2)\times\{I_2\}$ commutes with that of $Y$ it follows that the element $(0,Y) \in \mathfrak{sl}(2,\mathbb{C})\times\mathfrak{sl}(2,\mathbb{C})$ maps a highest weight vector of $F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)}$ onto a highest weight vector of $F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j-2)}$. Hence, a straightforward computation that applies $j$-times the element $(0,Y)$ starting from $p_\nu(Z)$ shows that the vector
\[
p_{\nu,j}(Z) = z_{11}^{\nu_1-\nu_2-j}z_{12}^j\det(Z)^{\nu_2}
\]
defines a highest weight vector for the submodule corresponding to the space $F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)}$ for the representation $\pi_\lambda'$ restricted to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$. Again, it is enough to consider the factor from \eqref{eq:Vlambda} to conclude the claim on the highest weight vectors for $\pi_\lambda$ restricted to $\mathbb{R}\times\mathrm{SU}(2)\times\mathbb{R}\times\mathbb{T}$.
\end{proof}
As a consequence we obtain the following result.
\begin{theorem}\label{thm:Toeplitz-U2T2}
For every $\lambda > 3$, the $C^*$-algebra $\mathcal{T}^{(\lambda)}(\mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2})$ generated by Toeplitz operators with essentially bounded $\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols is commutative. Furthermore, if $H$ is a connected subgroup between $\mathbb{T}^4$ and $\mathrm{U}(2)\times\mathrm{U}(2)$ such that $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ is commutative, then $H$ is one of $\mathrm{U}(2)\times\mathrm{U}(2)$, $\mathrm{U}(2)\times\mathbb{T}^2$ or $\mathbb{T}^2\times\mathrm{U}(2)$. Also, for the last two choices of $H$, the corresponding $C^*$-algebras $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ are unitarily equivalent.
\end{theorem}
\begin{proof}
The commutativity of $\mathcal{T}^{(\lambda)}(\mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2})$ follows from Proposition~\ref{prop:U2T2-isotypic} and Theorem~\ref{thm:H-commutativeC*}. The possibilities for the choice of $H$ follow from Proposition~\ref{prop:subgroupsT4U2U2} and the remarks from Section~\ref{sec:preliminaries}. The last claim is the content of Proposition~\ref{prop:U2T2vsT2U2}.
\end{proof}
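The $\mathbb{T}$-equivariance underlying Proposition~\ref{prop:U2T2-isotypic} can also be checked directly: under the right action $Z \mapsto Z\,\mathrm{diag}(t,\overline{t})$ with $t\in\mathbb{T}$, the polynomial $p_{\nu,j}$ is multiplied exactly by $t^{\nu_1-\nu_2-2j}$. The following Python sketch (not part of the argument; the test matrix, the phase and the choices of $\nu$ are arbitrary) verifies this numerically.

```python
import numpy as np

def p(Z, nu1, nu2, j):
    # p_{nu,j}(Z) = z11^(nu1-nu2-j) * z12^j * det(Z)^nu2
    return Z[0, 0]**(nu1 - nu2 - j) * Z[0, 1]**j * np.linalg.det(Z)**nu2

rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
t = np.exp(0.7j)
T = np.diag([t, np.conj(t)])          # right action by diag(t, conj(t))

for nu1, nu2 in [(3, 1), (5, 0), (4, 4)]:
    for j in range(nu1 - nu2 + 1):
        lhs = p(Z @ T, nu1, nu2, j)
        rhs = t**(nu1 - nu2 - 2 * j) * p(Z, nu1, nu2, j)
        assert np.isclose(lhs, rhs)   # p picks up the character t^(nu1-nu2-2j)
print("ok")
```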
We also obtain the following orthogonality relations for the polynomials $p_{\nu,j}$.
\begin{proposition}\label{prop:pnuj_Schur_relations}
Let $\nu \in \overrightarrow{\mathbb{N}}^2$ be fixed. Then, we have
\[
\int_{\mathrm{U}(2)} p_{\nu,j}(A) \overline{p_{\nu,k}(A)} \dif A
= \frac{\delta_{jk}}{\nu_1-\nu_2+1} \binom{\nu_1-\nu_2}{j}^{-1}
\]
for every $j,k = 0, \dots, \nu_1-\nu_2$.
\end{proposition}
\begin{proof}
We recall that the irreducible $\mathrm{U}(2)$-module $F_\nu$ can be realized as the space of homogeneous polynomials of degree $\nu_1-\nu_2$ in two complex variables. For this realization, the $\mathrm{U}(2)$-action is given by
\[
(\pi_\nu(A)p)(z) = \det(A)^{\nu_1}p(A^{-1} z)
\]
for $A \in \mathrm{U}(2)$ and $z \in \mathbb{C}^2$.
Also, the computation of orthonormal bases on Bergman spaces on the unit ball (see for example \cite{Zhu2005}) implies that there is a $\mathrm{U}(2)$-invariant inner product $\left<\cdot,\cdot\right>$ on $F_\nu$ for which the basis
\[
\left\{ v_j(z_1,z_2) = \binom{\nu_1-\nu_2}{j}^{\frac{1}{2}} z_1^{\nu_1-\nu_2-j}z_2^{j} : j=0,1,\ldots, \nu_1-\nu_2\right\},
\]
is orthonormal. We fix the inner product and this orthonormal basis for the rest of the proof.
With these choices it is easy to see that the map given by
\[
Z \mapsto \left<\pi_\nu(Z)v_j, v_0\right>,
\]
for $Z \in \mathrm{GL}(2,\mathbb{C})$, is polynomial and is a highest weight vector for the $\mathrm{U}(2)\times\mathbb{T}^2$-module corresponding to $F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)}$ in the isomorphism given by Proposition~\ref{prop:U2T2-isotypic}. Hence there is a complex number $\alpha_{\nu,j}$ such that
\[
p_{\nu,j}(Z) =
\alpha_{\nu,j} \left<\pi_\nu(Z)v_j, v_0\right>
\]
for all $Z \in \mathrm{GL}(2,\mathbb{C})$ and $j = 0, \dots, \nu_1-\nu_2$.
By Schur's orthogonality relations we conclude that
\[
\int_{\mathrm{U}(2)} p_{\nu,j}(Z) \overline{p_{\nu,k}(Z)} \dif Z
= \frac{\delta_{jk}|\alpha_{\nu,j}|^2}{\nu_1-\nu_2+1}
\]
for every $j,k = 0, \dots, \nu_1-\nu_2$.
Next we choose
\[
A_0 =
\begin{pmatrix}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{pmatrix}
\in \mathrm{SU}(2)
\]
and evaluate at this matrix to compute the constant $\alpha_{\nu,j}$.
First, we compute
\begin{align*}
(\pi_\nu (A_0^{-1}) v_0)(z_1,z_2)
&= v_0 \left(
\begin{pmatrix}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{pmatrix}
\begin{pmatrix}
z_1\\
z_2
\end{pmatrix}
\right) \\
&= v_0\left(\frac{1}{\sqrt{2}}(z_1-z_2),\frac{1}{\sqrt{2}}(z_1+z_2)\right) \\
&= \frac{1}{\sqrt{2^{\nu_1-\nu_2}}} (z_1-z_2)^{\nu_1-\nu_2} \\
&= \frac{1}{\sqrt{2^{\nu_1-\nu_2}}}
\sum_{j=0}^{\nu_1-\nu_2} (-1)^j
\binom{\nu_1-\nu_2}{j}z_1^{\nu_1-\nu_2-j}z_2^j,
\end{align*}
which implies that
\[
\left<\pi_\nu(A_0)v_j, v_0\right> =
\left<v_j, \pi_\nu(A_0^{-1})v_0\right> =
\frac{(-1)^j}{\sqrt{2^{\nu_1-\nu_2}}} \binom{\nu_1-\nu_2}{j}^{\frac{1}{2}}.
\]
Meanwhile,
\[
p_{\nu,j}(A_0)
= \left(\frac{1}{\sqrt{2}}\right)^{\nu_1-\nu_2-j} \left(-\frac{1}{\sqrt{2}}\right)^{j} \det(A_0)^{\nu_2}
= \frac{(-1)^j}{\sqrt{2^{\nu_1-\nu_2}}},
\]
thus implying that
\[
\alpha_{\nu,j} = \binom{\nu_1-\nu_2}{j}^{-\frac{1}{2}}.
\]
This completes our proof.
\end{proof}
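These relations can be cross-checked by Monte Carlo integration over $\mathrm{U}(2)$. The Python sketch below draws Haar-random unitaries via the standard QR-with-phase-correction recipe; since $|\det(A)| = 1$, the diagonal integral reduces to the classical sphere moment $\int_{\mathrm{U}(2)} |a_{11}|^{2(\nu_1-\nu_2-j)}|a_{12}|^{2j}\dif A = (\nu_1-\nu_2-j)!\,j!/(\nu_1-\nu_2+1)!$ (the sample size and tolerances are illustrative choices).

```python
import numpy as np
from math import factorial

def haar_u2(rng):
    # Haar-random element of U(2): QR of a complex Gaussian plus phase correction
    Z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(1)
nu1, nu2 = 3, 1                        # so n = nu1 - nu2 = 2 and j, k in {0, 1, 2}
n = nu1 - nu2
N = 100_000
a11 = np.empty(N, dtype=complex)
a12 = np.empty(N, dtype=complex)
det = np.empty(N, dtype=complex)
for i in range(N):
    A = haar_u2(rng)
    a11[i], a12[i], det[i] = A[0, 0], A[0, 1], np.linalg.det(A)

pvals = [a11**(n - j) * a12**j * det**nu2 for j in range(n + 1)]
for j in range(n + 1):
    for k in range(n + 1):
        est = np.mean(pvals[j] * np.conj(pvals[k]))
        if j != k:
            assert abs(est) < 1e-2     # off-diagonal integrals vanish
        else:                          # diagonal: (n-j)! j! / (n+1)!
            assert abs(est - factorial(n - j) * factorial(j) / factorial(n + 1)) < 1e-2
print("ok")
```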
\section{The spectra of Toeplitz operators with $\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols}\label{sec:spectra}
\noindent
We recall that the Haar measure $\mu$ on $\mathrm{GL}(2,\mathbb{C})$ is given by
\[
\dif\mu(Z) = |\det(Z)|^{-4} \dif Z = \det(ZZ^*)^{-2} \dif Z,
\]
where $\dif Z$ denotes the Lebesgue measure on the Euclidean space
$M_{2\times2}(\mathbb{C})$. Furthermore, we have the following expression for the Haar measure:
\begin{lemma}\label{lem:Haar_foliated}
For every function $f \in C_c(\mathrm{GL}(2,\mathbb{C}))$ we have
\[
\int_{\mathrm{GL}(2,\mathbb{C})} f(Z) \dif\mu(Z)
= \int_\mathbb{C} \int_{(0,\infty)^2} \int_{\mathrm{U}(2)}
f\left( A
\left( \begin{matrix}
a_1 & z\\
0 & a_2
\end{matrix}\right)
\right)
a_2^{-2} \dif A \dif a \dif z.
\]
\end{lemma}
\begin{proof}
For the moment let
\[ n_z =
\begin{pmatrix} 1 & z\\ 0 & 1\end{pmatrix}. \]
We start with the Iwasawa decomposition of $\mathrm{GL}(2,\mathbb{C})$ that allows us to decompose any $Z \in \mathrm{GL}(2,\mathbb{C})$ as
\[
Z = A \diag(a_1,a_2)\, n_z, \]
where $A \in \mathrm{U}(2)$, $a_1, a_2 > 0$ and $z \in \mathbb{C}$. Then, using
\cite[Prop.~8.43]{Knapp2002} and some changes of coordinates, we obtain the result as follows.
\begin{align*}
\int_{\mathrm{GL}(2,\mathbb{C})} f(Z) &\dif\mu(Z) \\
&= \int_\mathbb{C} \int_0^\infty \int_0^\infty \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & 0 \\
0 & a_2
\end{pmatrix} n_z
\right)
a_1^2 a_2^{-2} \dif A \dif a_1 \dif a_2 \dif z \\
&= \int_\mathbb{C} \int_0^\infty \int_0^\infty \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & a_1 z\\
0 & a_2
\end{pmatrix}
\right)
a_1^2 a_2^{-2} \dif A \dif a_1 \dif a_2 \dif z \\
&= \int_\mathbb{C} \int_0^\infty \int_0^\infty \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & z\\
0 & a_2
\end{pmatrix}
\right)
a_2^{-2} \dif A \dif a_1 \dif a_2 \dif z.\qedhere
\end{align*}
\end{proof}
By the remarks above, the weighted measure $v_\lambda$ on $D$ can be written in terms of the Haar measure on $\mathrm{GL}(2,\mathbb{C})$ as follows
\begin{align}
\dif v_{\lambda}(Z) &= c_\lambda | \det(Z)|^4\det(I_2-ZZ^*)^{\lambda-4} \dif\mu(Z)\label{eq:vlambda_Haar}\\
&=c_\lambda \det(ZZ^*)^2\det(I_2-ZZ^*)^{\lambda-4} \dif\mu(Z) .\nonumber
\end{align}
We use this and Lemma~\ref{lem:Haar_foliated} to write down the measure $v_\lambda$ in terms of measures associated to the foliation on $M_{2\times2}(\mathbb{C})$ given by the action of $\mathrm{U}(2)\times \mathbb{T}^2$ (see Proposition \ref{prop:U2T-orbits}). The next result applies only to suitably invariant functions, but this is enough for our purposes.
\begin{proposition}\label{prop:vlambda_U2T2}
Let $\lambda > 3$ be fixed. If $f \in C_c(M_{2\times2}(\mathbb{C}))$ is a function that satisfies $f(t_\theta Z t_\theta^{-1}) = f(Z)$ for every $Z \in M_{2\times2}(\mathbb{C})$ where
\[
t_\theta =
\begin{pmatrix}
e^{2\pi i \theta} & 0 \\
0 & e^{-2\pi i \theta}
\end{pmatrix}, \quad \theta\in\mathbb{R} ,
\]
then we have
\[
\int_{M_{2\times2}(\mathbb{C})} f(Z) \dif v_\lambda(Z) =
2\pi c_\lambda \int_{(0,\infty)^3} \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
r_1 & r_2\\
0 & r_3
\end{pmatrix}
\right) r_1^4 r_2 r_3^2
b(r)^{\lambda-4}
\dif A \dif r,
\]
where $b(r) = 1-r_1^2-r_2^2-r_3^2+r_1^2 r_3^2$ for $r \in (0,\infty)^3$.
\end{proposition}
\begin{proof}
First we observe that for every $A \in \mathrm{U}(2)$, $a_1, a_2 > 0$ and $z \in \mathbb{C}$ we have
\begin{align*}
\det\left(I_2 - A
\begin{pmatrix}
a_1 & z \\
0 & a_2
\end{pmatrix}
\begin{pmatrix}
a_1 & 0 \\
\overline{z} & a_2
\end{pmatrix}
A^*\right)
&=
\det\left(I_2 -
\begin{pmatrix}
a_1^2 + |z|^2 & a_2 z \\
a_2 \overline{z} & a_2^2
\end{pmatrix}
\right) \\
&=
\det\left(
\begin{pmatrix}
1-a_1^2 - |z|^2 & -a_2 z \\
-a_2 \overline{z} & 1-a_2^2
\end{pmatrix}\right) \\
&=
1-a_1^2-a_2^2-|z|^2+a_1^2 a_2^2 \\
&=
b(a_1,|z|,a_2),
\end{align*}
where $b$ is defined as in the statement. Using this last identity, \eqref{eq:vlambda_Haar} and Lemma~\ref{lem:Haar_foliated} we compute the following for $f$ as in the statement. We apply some changes of coordinates and use the bi-invariance of the Haar measure of $\mathrm{U}(2)$.
\begin{align*}
\int_{M_{2\times2}(\mathbb{C})} &f(Z) \dif v_\lambda(Z) \\
=&\,
c_\lambda \int_\mathbb{C} \int_{(0,\infty)^2} \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & z\\
0 & a_2
\end{pmatrix}
\right) \\
&\times a_1^4 a_2^{2}
b(a_1,|z|,a_2)^{\lambda-4}
\dif A \dif a \dif z \\
=&\,
2\pi c_\lambda \int_0^1 \int_{(0,\infty)^3} \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & re^{2\pi i \theta}\\
0 & a_2
\end{pmatrix}
\right) \\
&\times a_1^4 a_2^{2} r
b(a_1,r,a_2)^{\lambda-4}
\dif A \dif a \dif r \dif \theta \\
=&\,
2\pi c_\lambda \int_0^1 \int_{(0,\infty)^3} \int_{\mathrm{U}(2)}
f\left(A t_{\theta/2}
\begin{pmatrix}
a_1 & r\\
0 & a_2
\end{pmatrix} t_{\theta/2}^{-1}
\right) \\
&\times a_1^4 a_2^{2} r
b(a_1,r,a_2)^{\lambda-4}
\dif A \dif a \dif r \dif \theta \\
=&\,
2\pi c_\lambda \int_0^1 \int_{(0,\infty)^3} \int_{\mathrm{U}(2)}
f\left(t_{\theta/2}^{-1}A t_{\theta/2}
\begin{pmatrix}
a_1 & r\\
0 & a_2
\end{pmatrix}
\right) \\
&\times a_1^4 a_2^{2} r
b(a_1,r,a_2)^{\lambda-4}
\dif A \dif a \dif r \dif \theta \\
=&\,
2\pi c_\lambda \int_0^1 \int_{(0,\infty)^3} \int_{\mathrm{U}(2)}
f\left(A
\begin{pmatrix}
a_1 & r\\
0 & a_2
\end{pmatrix}
\right) \\
&\times a_1^4 a_2^{2} r
b(a_1,r,a_2)^{\lambda-4}
\dif A \dif a \dif r \dif \theta.
\end{align*}
\end{proof}
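The determinant identity at the start of the proof can also be confirmed numerically: since $I_2 - ARR^*A^* = A(I_2-RR^*)A^*$ for unitary $A$, the determinant is independent of $A$. A short Python check, with an arbitrary choice of unitary and of the parameters:

```python
import numpy as np

def b(a1, r, a2):
    return 1 - a1**2 - r**2 - a2**2 + a1**2 * a2**2

# an arbitrary unitary A in U(2); it drops out since I - ARR*A* = A(I - RR*)A*
A = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
a1, a2, z = 0.4, 0.5, 0.2 + 0.1j
R = np.array([[a1, z], [0, a2]])
M = A @ R
lhs = np.linalg.det(np.eye(2) - M @ M.conj().T)
assert np.isclose(lhs, b(a1, abs(z), a2))   # det(I - ARR*A*) = b(a1, |z|, a2)
print("ok")
```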
In view of Proposition~\ref{prop:vlambda_U2T2} the following formula will be useful.
\begin{lemma}\label{lem:pnuj_asum}
For every $\nu \in \overrightarrow{\mathbb{N}}^2$ and $j = 0, \dots, \nu_1-\nu_2$ we have
\[
p_{\nu,j}\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix} \right)
= \sum_{k=0}^j \binom{j}{k}
p_{\nu,k}(A) r_1^{\nu_1-j}r_2^{j-k} r_3^{\nu_2 + k}
\]
for every $A \in \mathrm{U}(2)$ and $r \in (0,\infty)^3$.
\end{lemma}
\begin{proof}
Let $A \in \mathrm{U}(2)$ be given and write
\[
A = \begin{pmatrix}
\alpha & \beta \\
-\gamma \overline{\beta} & \gamma \overline{ \alpha}
\end{pmatrix},
\]
where $\alpha,\beta,\gamma\in\mathbb{C}$ with $|\alpha|^2 + |\beta|^2 = 1$ and $|\gamma| = 1$. Hence, we have
\[
A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix} =
\begin{pmatrix}
\alpha r_1 & \alpha r_2 + \beta r_3 \\
* & *
\end{pmatrix},
\]
and so we conclude that
\begin{align*}
p_{\nu,j}&\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\right) \\
&= (\alpha r_1)^{\nu_1-\nu_2-j} (\alpha r_2 + \beta r_3)^j
\det
\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\right)^{\nu_2} \\
&= (\alpha r_1)^{\nu_1-\nu_2-j}
\sum_{k=0}^j \binom{j}{k}(\alpha r_2)^{j-k} (\beta r_3)^k
\det
\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\right)^{\nu_2} \\
&= \sum_{k=0}^j \binom{j}{k} \alpha ^{\nu_1-\nu_2-k} \beta^{k}
\det(A)^{\nu_2} r_1^{\nu_1-j}r_2^{j-k} r_3^{\nu_2 + k} \\
&= \sum_{k=0}^j \binom{j}{k}
p_{\nu,k}(A) r_1^{\nu_1-j}r_2^{j-k} r_3^{\nu_2 + k}.
\end{align*}
Note that in the last line we have used the expression obtained in the first line.
\end{proof}
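The expansion in the lemma is elementary but index-heavy, so a direct numerical check is reassuring. The Python sketch below evaluates both sides for an arbitrary $A\in\mathrm{U}(2)$, written with first row $(\alpha,\beta)$ as in the proof, and arbitrary choices of $r$ and $\nu$:

```python
import numpy as np
from math import comb

def p(Z, nu1, nu2, j):
    # p_{nu,j}(Z) = z11^(nu1-nu2-j) * z12^j * det(Z)^nu2
    return Z[0, 0]**(nu1 - nu2 - j) * Z[0, 1]**j * np.linalg.det(Z)**nu2

# an arbitrary A in U(2), written as in the proof with first row (alpha, beta)
alpha, beta, gamma = 0.6 * np.exp(0.3j), 0.8 * np.exp(-1.1j), np.exp(0.5j)
A = np.array([[alpha, beta], [-gamma * np.conj(beta), gamma * np.conj(alpha)]])
r1, r2, r3 = 0.3, 0.5, 0.7
R = np.array([[r1, r2], [0, r3]])

nu1, nu2 = 4, 1                        # arbitrary choice of nu
for j in range(nu1 - nu2 + 1):
    lhs = p(A @ R, nu1, nu2, j)
    rhs = sum(comb(j, k) * p(A, nu1, nu2, k) * r1**(nu1 - j) * r2**(j - k) * r3**(nu2 + k)
              for k in range(j + 1))
    assert np.isclose(lhs, rhs)        # both sides of the expansion agree
print("ok")
```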
We now apply the previous results to compute the spectra of the Toeplitz operators with $\mathrm{U}(2)\times\mathbb{T}^2$-invariant symbols.
\begin{theorem}\label{thm:coefficients}
Let $\lambda > 3$ and $\varphi \in \mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2}$ be given. With the notation of Proposition~\ref{prop:U2T2-isotypic}, the Toeplitz operator $T_\varphi$ acts on the subspace of $\mathcal{H}^2_\lambda(D)$ corresponding to $(F_\nu^*\otimes \mathbb{C}_{(\nu_2,\nu_1-\nu_2-2j)})_\lambda$ as a multiple of the identity by the constant
\begin{align*}
\gamma(\varphi,\nu,j) &=
\frac{\left<\varphi p_{\nu,j},p_{\nu,j}\right>_\lambda}%
{\left<p_{\nu,j},p_{\nu,j}\right>_\lambda} = \\
&\frac{\displaystyle\sum_{k=0}^j \binom{j}{k}^2\binom{\nu_1-\nu_2}{k}^{-1}
\int_\Omega
\varphi
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3 \\
\end{pmatrix}
a(r,\nu, j, k) b(r)^{\lambda-4} \dif r}%
{\displaystyle\sum_{k=0}^j \binom{j}{k}^2\binom{\nu_1-\nu_2}{k}^{-1}
\int_\Omega
a(r,\nu, j, k) b(r)^{\lambda-4} \dif r}
\end{align*}
for every $\nu\in\overrightarrow{\mathbb{N}}^2$ and $j=0,\dots,\nu_1-\nu_2$, where
\[
\Omega = \left\{ r \in (0,\infty)^3 :
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix} \in D \right\}.
\]
with the functions $a(r,\nu, j, k)= r_1^{2(\nu_1-j)+4}r_2^{2(j-k)+1} r_3^{2(\nu_2+k)+2}$, for $0 \leq k \leq j$, and $b(r) = 1-r_1^2-r_2^2-r_3^2+r_1^2 r_3^2$ for $r \in (0,\infty)^3$.
\end{theorem}
\begin{proof}
Let $\varphi \in \mathcal{A}^{\mathrm{U}(2)\times\mathbb{T}^2}$ be given and fix $\nu \in \overrightarrow{\mathbb{N}}^2$ and $j = 0, \dots, \nu_1-\nu_2$. First, we observe that we have
\[
|p_{\nu,j}(tZt^{-1})|^2 = |p_{\nu,j}(Z)|^2
\]
for all $Z\in M_{2\times2}(\mathbb{C})$ and $t\in\mathbb{T}^2$. Being $\mathrm{U}(2)\times\mathbb{T}^2$-invariant, the symbol $\varphi$ satisfies the same invariance. Hence, we can apply Proposition~\ref{prop:vlambda_U2T2} to $\varphi |p_{\nu,j}|^2$ to compute as follows
\begin{align*}
\left<\varphi p_{\nu,j},p_{\nu,j}\right>_\lambda
&= \int_D \varphi(Z) |p_{\nu,j}(Z)|^2 \dif v_\lambda(Z) \\
=&\, 2\pi c_\lambda
\int_\Omega \int_{\mathrm{U}(2)}
\varphi\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\right)
\left|p_{\nu,j}\left(A
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\right)\right|^2 \\
&\times r_1^4 r_2 r_3^2 b(r)^{\lambda-4}
\dif A \dif r \\
=&\, 2\pi c_\lambda
\sum_{k=0}^j \binom{j}{k}^2 \int_\Omega
\varphi
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}
\int_{\mathrm{U}(2)} |p_{\nu,k}(A)|^2 \dif A \\
&\times a(r,\nu,j,k) b(r)^{\lambda-4} \dif r \\
=&\, \frac{2\pi c_\lambda}{\nu_1-\nu_2+1}
\sum_{k=0}^j \binom{j}{k}^2\binom{\nu_1-\nu_2}{k}^{-1}
\int_\Omega
\varphi
\begin{pmatrix}
r_1 & r_2 \\
0 & r_3
\end{pmatrix}\\
& \times
a(r,\nu,j,k) b(r)^{\lambda-4} \dif r.
\end{align*}
The second identity applies Proposition~\ref{prop:vlambda_U2T2}. For the third identity we apply Proposition~\ref{prop:pnuj_Schur_relations} and the invariance of $\varphi$. In the last identity we apply again the orthogonality relations from Proposition~\ref{prop:pnuj_Schur_relations}.
The proof is completed by taking $\varphi \equiv 1$ in the above computation.
\end{proof}
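Since $\gamma(\varphi,\nu,j)$ is a quotient of integrals of $\varphi$ against a positive weight, its value always lies between the essential infimum and supremum of the symbol on $\Omega$. The Python sketch below illustrates this with a crude Monte Carlo evaluation over $\Omega$ for the symbol $\varphi = r_1^2$; the choices $\lambda = 5$, $\nu = (3,1)$, $j = 1$ and the sample size are arbitrary, and the bound holds for any choice of positive weights $w_k$.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
lam, (nu1, nu2), j = 5.0, (3, 1), 1    # arbitrary admissible choices (lam > 3)
n = nu1 - nu2

def in_Omega(r):
    # r lies in Omega iff the triangular matrix R(r) lies in the matrix ball D
    R = np.array([[r[0], r[1]], [0.0, r[2]]])
    return bool(np.all(np.linalg.eigvalsh(np.eye(2) - R @ R.T) > 0))

def b(r):
    return 1 - r[0]**2 - r[1]**2 - r[2]**2 + r[0]**2 * r[2]**2

def a(r, k):
    return r[0]**(2*(nu1 - j) + 4) * r[1]**(2*(j - k) + 1) * r[2]**(2*(nu2 + k) + 2)

num = den = 0.0
for _ in range(50_000):
    r = rng.uniform(0.0, 1.0, 3)
    if not in_Omega(r):
        continue
    # positive weights; the bound below holds for any choice of positive weights
    w = sum(comb(j, k)**2 / comb(n, k) * a(r, k) for k in range(j + 1)) * b(r)**(lam - 4)
    num += r[0]**2 * w                 # symbol phi = r1^2
    den += w
gamma = num / den
assert 0.0 < gamma < 1.0               # gamma lies between inf and sup of the symbol
print(round(gamma, 3))
```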
\label{sec:intro}
The four 8.2m telescopes of the Very Large Telescope (VLT) at the European Southern Observatory (ESO)
form the world’s most scientifically productive ground-based observatory in the visible and infrared. However, looking at the future of the VLT, there is a long-standing need for an optimised ultraviolet (UV) spectrograph \citep{Barbuy2014} with a large increase of efficiency with respect to existing instruments (UVES and X-Shooter).
The European Extremely Large Telescope (ELT), under construction in northern Chile by ESO, with a primary aperture of 39m, will be unprecedented in its light-gathering power, coupled with exquisite angular resolution via correction for atmospheric turbulence by adaptive optics (AO). At variance with current large telescopes such as the VLT, AO is an integral part of the ELT, which has a five-mirror design including a large adaptive mirror (M4) and a fast tip-tilt mirror (M5). The choice of protected silver (Ag+Al) for the ELT mirror coatings (apart from M4) ensures a durable, proven surface with excellent performance across a wide wavelength range. However, the performance drops significantly in the blue-UV part of the spectrum compared to bare aluminium. ESO is actively researching alternative coatings, but in the short-medium term we can assume that the performance of the ELT in the blue-UV will be limited. Indeed, during the Phase A study of the MOSAIC multi-object spectrograph \citep{Evans2016} it was concluded that a blue-optimised instrument on the VLT could potentially be competitive with the ELT at wavelengths shorter than 400 nm (Fig.\,\ref{fig:cubelt}). In addition, this spectral range is complementary to the ELT and JWST. Motivated by this, in 2018 we revisited \citep{Evans2018} the Phase A study undertaken in 2012 of the Cassegrain U-band Brazilian-ESO Spectrograph. The past study investigated an $R\sim 20$k spectrograph operating at ‘ground UV’ wavelengths (spanning 300-400 nm) to open up exciting new scientific opportunities compared to the (then) planned instrumentation suite for Paranal Observatory \citep{Barbuy2014,Bristow2014}.
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{CUBES_Fig3B.jpg}}
\caption{\footnotesize
{\bf Comparison of the total (instrument+telescope+sky) CUBES efficiency in the $R>20$\,K mode with respect to UVES.}}
\label{fig:cubelt}
\end{figure*}
In January 2020 ESO issued a Call for Proposals for a Phase A study of a UV spectrograph to be installed at a Cassegrain focus of the VLT, with the goal of high efficiency ($>40$\%) and intermediate resolving power ($R\sim 20$k) in the ground-UV domain (305-400 nm requirement, 300-420 nm goal).
In May 2020 the CUBES (Cassegrain U-Band Efficient Spectrograph) Consortium, led by INAF\footnote{Istituto Nazionale di AstroFisica}, was selected to carry out the study.
The CUBES project completed a Phase A conceptual design study in June 2021.
After the endorsement by the ESO Council at the end of 2021,
Phase B started in February 2022 with the signature of the Construction Agreement between ESO and the leading institute of the CUBES Consortium, opening the detailed design and construction phase. Here we report the present status of the project, which will provide a world-leading UV capability for ESO from 2028 well into the ELT era. More detailed information about the project is reported in \citet{Cristiani2022SPIE}.
\section{Science with CUBES}
\label{sec:science}
The CUBES science case covers a wide range of astrophysical topics. We propose below a brief highlight of the main key cases across the Solar System, Galactic, and extra-galactic fields \citep[see also ][]{Evansetal2022}.
\subsection{Searching for water in the asteroid belt}
The search for water in our solar system is a long-standing problem \citep{Opitom2022}. It is a difficult task with ground-based facilities, given the water content of Earth's atmosphere. The typical diagnostic of water outgassing from small bodies is the OH emission at 308 nm. Observation of the OH line has been possible so far for a few active comets while they are near the Sun and Earth, with severe limitations. We still lack knowledge of water production around their orbits and the role of seasonal effects that the Rosetta mission revealed to be important. In general, most comets are simply too faint to be studied with current facilities. Main-belt comets, bodies in asteroidal orbits, can undergo activity thought to arise from sublimation. Constraining the OH emission of these objects is well beyond our current capabilities. Since main-belt comets show a size distribution similar to the general population in the asteroid belt, the detection of outgassing water with CUBES would point to a potentially large population of icy bodies. This could imply a large reservoir of water, a parameter of considerable interest in models of the formation and evolution of the inner solar system.
\subsection{Accretion, winds \& outflows in YSOs}
The evolution of circumstellar disks, mass accretion onto young stars, and outflows and winds are fundamental aspects of the formation of protoplanets. Observations of these phenomena require multi-wavelength studies of stars during the first 10\,Myr of their evolution, in particular of Classical T Tauri stars (CTTS). These young, low- to solar-mass stars are actively accreting mass from planet-forming disks. Spectroscopic surveys of CTTS in nearby star-forming regions have been carried out to study the often complex relationships between accretion, jets and disk structure. CUBES, both through its increased UV sensitivity and its coverage of a critical wavelength range, will enable more detailed studies of the accretion processes and wind-outflows than currently possible, as well as studies of CTTS in star-forming regions at larger distances.
\subsection{Bulk composition of exo-planets}
In the past few decades, we have learned that it is normal for stars to have planets, and the study of exoplanet formation and evolution has become a major astrophysical topic. The best approach available at present for estimating the bulk composition of exo-planet systems is based on spectroscopic analysis of evolved white dwarf (WD) stars accreting debris from tidally-disrupted planetesimals. WDs are hot, so most of their abundance diagnostics are in the near-UV (e.g. Sc, Ti, V, Cr, Mn, Fe, Ni). However, WDs are also intrinsically faint, and only about twenty systems have precise abundances so far. CUBES will transform this exciting area of exo-planet research by increasing the sample of known exo-planetesimal compositions, providing precise constraints on the next generation of planet-formation models.
\subsection{Stellar nucleosynthesis}
The spectral features of more than a quarter of the chemical elements are only observable in the near UV, but the low efficiency of instruments in this domain severely restricted previous studies. Advancements in the field require high-resolution, near-UV spectroscopy of a large number and diversity of stars. Three main CUBES science cases deal with this topic:
i) {\it Metal-poor stars and light elements}. A key case is to probe the early chemical evolution of the Galaxy, via chemical abundance patterns in the oldest, low-mass stars that formed from material enriched by the first supernovae. The so-called Carbon-enhanced metal-poor (CEMP) stars are the perfect probes to investigate nucleosynthesis by the first stars. CUBES will enable quantitative spectroscopy for large samples of metal-poor stars, providing direct estimates for a broad range of heavy elements, as well as valuable constraints on CNO elements.
ii) {\it Heavy-element nucleosynthesis}. Stellar abundances from CUBES will provide critical tests of the various production channels of heavy elements, for both r- and s-process elements. Determining the abundances of neutron-capture elements in metal-poor stars is fundamental to understanding the physics of these processes, the chemical evolution of the Galaxy, and the origin of the Galactic halo. Since lines of many of these elements lie in the UV domain (e.g. Hf, Sn, Ag, Bi) and have only been measured for a very restricted number of stars, CUBES will play a critical role in filling this gap.
iii) {\it Beryllium}. Beryllium is one of the lightest and simplest elements; nevertheless, questions remain about its production in the early Universe. Recent results are consistent with no primordial production, but larger samples are required to investigate this further. Only $\sim 200$ stars have Be abundances so far \citep[limited to $V$ $\sim$ 12 mag in a few hrs with UVES, ][]{UVES2000}. CUBES will provide large homogeneous samples of Be abundances in stars belonging to different populations, up to three magnitudes deeper, providing new insights into its production and tracing the star-formation history of the Galaxy (see Fig.\,\ref{fig:cubesbe}).
\begin{figure*}[t!]
\resizebox{6.9cm}{!}{\includegraphics[clip=true]{be3.png}} \resizebox{6.9cm}{!}{\includegraphics[clip=true]{be2.png}} \\
\center{ \resizebox{7.4cm}{!}{\includegraphics[clip=true]{be1.png}}}
\caption{\footnotesize
{\bf Results of fitting four simulated observations of a bright (V = 12.5\,mag) subgiant star (T$_{\rm eff}$ = 5600\,K and $\log g$ = 3.4\,dex) with [Fe/H] = $-3.5$ and $\log$(Be/H) = $-13.5$. The simulated observations have different noise realizations at SNR = 340 and were computed for $R$ = 23\,K and a sampling of 2.35\,px.
{\it Top Left:} One of the best-fit synthetic spectra is shown in the plot; it has the same Be abundance as the simulated observation.
{\it Top Right:} The best fit is chosen using the coefficient of determination, $R^2$, which involves the ratio between the residual sum of squares and the total sum of squares. {\it Bottom:} The boxplot displays the best-fitting Be abundances for each of the four simulated observations (with results deviating by at most 0.05\,dex from the input Be abundance).}}
\label{fig:cubesbe}
\end{figure*}
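The model-selection step described in Fig.\,\ref{fig:cubesbe}, i.e. picking the synthetic spectrum with the highest coefficient of determination, can be sketched in a few lines. This is an illustrative toy with a made-up model grid, not the project's actual analysis code:

```python
import numpy as np

def r_squared(observed, model):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - model) ** 2)   # residual sum of squares
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def best_fit(observed, synthetic_grid):
    """Return the index of the synthetic spectrum with the highest R^2,
    together with the full list of scores."""
    scores = [r_squared(observed, model) for model in synthetic_grid]
    return int(np.argmax(scores)), scores
```

Applied to a grid of synthetic spectra differing only in the assumed Be abundance, the index returned by `best_fit` identifies the abundance that best reproduces the observation.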
\subsection{Primordial deuterium abundance}
Within the Standard Model of particle physics and cosmology there is still no accepted model for dark energy and dark matter, or why the Universe contains baryons instead of antibaryons, or even why the Universe contains baryons at all. We are also missing crucial properties of neutrinos (e.g. their hierarchy, why they change flavour, and the number that existed during the earliest phases of the Universe).
Some of these questions can be investigated by measuring the nuclides produced a few minutes after the Big Bang. The primordial deuterium (D/H) abundance is currently our most reliable probe of Big Bang Nucleosynthesis \citep{Cooketal2014}. CUBES will provide a large, reliable sample of D/H estimates from quasar absorption spectra. Its significant gain at the shortest wavelengths compared to existing facilities will enable observations at lower redshifts (less contamination by the Lyman-alpha forest) giving more absorption-line systems from which to estimate D/H and smaller uncertainties.
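In practice, D/H in each absorber is estimated from the ratio of the D I and H I column densities, with the uncertainties of the two log columns combined in quadrature. A minimal bookkeeping sketch (the numbers in the usage note are illustrative, chosen near the measured value of log(D/H) of about $-4.6$):

```python
import math

def dh_ratio(logN_DI, logN_HI):
    """D/H from log10 column densities (cm^-2)."""
    return 10.0 ** (logN_DI - logN_HI)

def dh_with_error(logN_DI, err_DI, logN_HI, err_HI):
    """log10(D/H) and its uncertainty (dex), assuming independent errors
    on the two log column densities, combined in quadrature."""
    log_dh = logN_DI - logN_HI
    err = math.hypot(err_DI, err_HI)
    return log_dh, err
```

For example, logN(D I) = 14.2 and logN(H I) = 18.8 give log(D/H) = $-4.6$, close to the precision measurements of \citet{Cooketal2014}.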
\subsection{The missing baryonic mass at the cosmic noon}
Remarkable progress on the missing baryon problem has recently been made at low redshifts by studying the dispersion measure of Fast Radio Bursts (FRBs) \citep{Macquart2020} and at $z > 1.5$ by observations and simulations of the Lyman forest \citep[e.g., ][]{Weinberg1997}. Still, we have insufficient knowledge of how baryonic matter is distributed among the different gaseous components, and we need to better constrain the mechanisms (stellar and AGN feedback, accretion, etc.) that determine the observed distribution. A UV-efficient spectrograph with relatively high resolution offers the possibility to dig into the complex nature of the inter- and circum-galactic gas at $z \sim$ 1.5 to 3, via two experiments with quasar absorption lines:
i) The baryons in the diffuse IGM are studied through the detection and analysis of Lyman-$\alpha$ lines at $z \simeq 1.5$ to 2.3. This redshift range, immediately after the era of peak star formation in the Universe, is poorly investigated due to observational difficulties, such as the low efficiency of ground-based spectrographs in the UV, but is critical to connect the low- and high-redshift results.
ii) Observing the O VI absorption lines at $1.9 < z < 2.9$ to trace the warm-hot gas at $T > 10^5$ K, associated with the IGM or with the CGM \citep{Lehner2014}. In both cases, spectroscopy at $\lambda > 400$ nm is necessary to complement the information on the neutral IGM component derived from the Lyman forest, by checking the associated metal absorption lines (in particular C IV and Si IV) and deriving the contribution of the ionised gas, and to complete the coverage of the associated H I and metal transitions. To this aim, spectra of the same targets obtained at higher resolution with, e.g., UVES/VLT (simultaneously via a fiber link, or retrieved from the archives) will be needed.
\subsection{Cosmic UV background}
Galaxies are likely able to produce most of the UV emissivity needed for cosmic reionisation at high redshift, but quasars possibly contribute as well. Estimates of the escape fraction (f$_{\rm esc}$) of hydrogen-ionising photons able to escape a galaxy are close to 100\% for quasars. However, the volume density of low- and intermediate-luminosity quasars at $z$ $>$ 4 is still uncertain, so it is unclear if they are the dominant source of ionisation. In contrast, star-forming galaxies are more numerous, but estimates of f$_{\rm esc}$ from observations of the Lyman continuum ($\lambda_{\rm rest}$ $<$ 91.2 nm) have uncertainties of tens of percent and are limited to a handful of systems at $z$ = 2.5 to 4. To be detectable from Earth, escaping photons have to survive absorption along the line of sight by the intergalactic medium, which becomes stronger with redshift and varies significantly between sightlines. Given these competing factors, the ideal redshift range for ground-based observations of the Lyman continuum of a galaxy is $z$ = 2.4 to 3.5, i.e. from about 410\,nm down to the atmospheric cut-off. For this reason, CUBES could be an asset for this science case thanks to its high throughput. Furthermore, since the galaxies to be observed are extremely faint, this science case is also one of the main drivers of the low-resolution mode.
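The quoted redshift window follows from simple arithmetic on the redshifted Lyman limit, as this sketch shows:

```python
LYMAN_LIMIT_NM = 91.2  # rest-frame wavelength of the Lyman limit

def observed_lyman_limit_nm(z):
    """Observed wavelength of the Lyman limit for a source at redshift z."""
    return LYMAN_LIMIT_NM * (1.0 + z)
```

At $z = 3.5$ the limit falls at about 410\,nm, and at $z = 2.4$ at about 310\,nm, just above the atmospheric cut-off.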
\subsection{Transient astronomy}
Time-domain astronomy is one of the most active branches of modern astrophysics. In a few years, new observational facilities, specifically designed with the goal of securing high-cadence observations of large fractions of the nightly sky, will become operational. Equally important, ``big-data'' algorithms are increasingly being developed and applied to manage the large amount of data provided by these facilities. The discovery space opened by rare or peculiar transients is very large, involving all categories of sources. For both low- and high-redshift objects, a highly efficient UV spectrograph can shed light on a variety of physical processes and, in this context, the possible synergy of the CUBES and UVES spectrographs could open the exciting prospect of continuous UV-blue spectral coverage.
\section{From Science to Requirements}
\label{sec:requirements}
The science cases of interest for the CUBES community defined the reference for the development of the Top level Requirements (TLR):
\begin{itemize}
\item Spectral range: CUBES must provide a spectrum of the target over the entire wavelength range of 305 – 400 nm in a single exposure (goal: 300 – 420 nm).
\item Efficiency: The efficiency of the spectrograph, from slit to detector (included), has to be $>40$\% for 305 – 360 nm (goal $>45$\%, with $>50$\% at 313 nm), and $>37$\% (goal $40$\%) between 360 and 400 nm.
\item Resolving power ($R$): In any part of the spectrum, $R$ needs to be $>19$\,K, with an average value $>20$\,K.
\item Signal-to-noise (S/N) ratio: In a 1 hr exposure the spectrograph must be able to obtain, for an A0-type star of $U$ = 17.5 mag (goal $U \ge 18$ mag), a S/N = 20 at 313\,nm for a 0.007\,nm wavelength pixel.
\end{itemize}
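The S/N requirement can be sanity-checked with a back-of-the-envelope photon budget. The sketch below is purely illustrative: the flux zero point, collecting area, and loss factors are rough assumptions rather than the project's ETC, and sky background and detector noise are ignored:

```python
import math

# Rough assumptions (illustrative only, not ETC values):
F0 = 4.2e-9     # erg / cm^2 / s / Angstrom, approximate U-band flux zero point
AREA = 5.0e5    # cm^2, roughly the 8.2 m VLT collecting area
EFF = 0.40      # slit-to-detector efficiency, from the TLRs
LOSSES = 0.5    # assumed combined atmosphere + telescope + slit losses
HC = 1.986e-8   # erg * Angstrom (h * c)

def snr_photon_limited(u_mag, t_exp_s, wavelength_A=3130.0, dlam_A=0.07):
    """Source-noise-limited S/N in one wavelength bin of width dlam_A
    (0.07 Angstrom = the 0.007 nm pixel of the TLR)."""
    f_lambda = F0 * 10.0 ** (-0.4 * u_mag)       # erg / cm^2 / s / Angstrom
    photon_energy = HC / wavelength_A            # erg per photon
    rate = f_lambda * AREA * EFF * LOSSES * dlam_A / photon_energy
    n_photons = rate * t_exp_s
    return math.sqrt(n_photons)
```

With these assumptions, a $U$ = 17.5 mag star in a 1 hr exposure yields a source-noise-only S/N of a few tens at 313\,nm, consistent in order of magnitude with the S/N = 20 requirement once the neglected noise terms and additional losses are included.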
During the Phase A study an important addition was identified: the provision of a second, lower resolving power ($R \sim 7$\,K) to enable background-limited observations of faint sources where spectral resolution is less critical.
There are also studies for a fiber link to UVES, in order to provide simultaneous observations at longer wavelengths. This would considerably broaden the scientific capabilities of the project.
\section{Instrument Design Overview}
\label{sec:design}
The instrument is designed to be used alone or combined with the fiber-feed to UVES. Two resolution modes (HR, $R > 20$\,K; LR, $R \sim 7$\,K) are enabled by the exchange of two independent image slicers. An Active Flexure Compensation (AFC) system is part of the baseline. All the top-level requirements (TLRs), in particular those related to efficiency, are met (see Fig.\,\ref{fig:whcubes} for a view of the whole instrument).
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{cubes_scheme_PDR_v2.png}}
\caption{\footnotesize
{\bf Functional design of CUBES. Light path is from the top to the bottom. The shown acronyms are: FL cam (fibre-link A\&G camera),
ND filters (Neutral Density filters), WL filter (Wavelength filter),
ADC (Atmospheric Dispersion Corrector), P (single Pin-hole mask), Ps (multi pin-holes mask), A\&G (Acquisition and Guiding), Cal mirror (Calibration mirror), DC (Dichroic), AFC (Active Flexure Compensation),
LR (Low Resolution), HR (High Resolution), Cryo (Cryocooler),
FF LED (Flat-Field LED).}}
\label{fig:whcubes}
\end{figure*}
\subsection{Instrument sub-systems and Operations}
The current baseline design of CUBES includes a
calibration unit that provides the light sources necessary to register frames for flat fielding, wavelength calibration, alignment, and the AFC. A foreoptics sub-system includes an atmospheric dispersion corrector (ADC) and Acquisition and Guiding functionalities. There are two user-selectable image slicers (to enable different spectral resolutions), followed by a dichroic beamsplitter that feeds two spectrograph arms, each consisting of a Fused Silica single-lens collimator, a first-order transmission grating with a groove density up to 3600 l/mm, and a 3-lens all-silica camera. Each arm has its own detector cryostat, with a 9k or 10k CCD detector (we also have an option to increase the sampling by 11\% using the STA 10k 9$\mu m$ detector instead of the E2V 9k 10$\mu m$), read-out electronics, and cryo-vacuum components (both hardware and specific control electronics). The Instrument Control Electronics are based on PLCs that are compliant with the latest ELT design standards, and control all optomechanical and cryo/vacuum functions. The Scientific Detector system is controlled by ESO's new NGC2 standard. The Instrument Software package comprises instrument control software and data-reduction and simulation tools (see Sect.\,\ref{sec:Software} for details). Finally, a Fiber Link unit provides the option of simultaneous observations with UVES by injecting red light into optical fibers (1 object, 3 sky) that subtend 1 arcsec on the sky and run approximately 40 m to UVES on the Nasmyth platform \citep{UVES2000}.
\subsection{Optics and Dispersing Elements}
Using two lens doublets and a number of folding prisms, the foreoptics relays a FoV of 6”x10” for LR and 1.5”x10” for HR at the telescope focus to the entrance plane of the spectrograph. Zero-deviation ADC prisms in the parallel beam between the doublets provide atmospheric dispersion correction over the range 300-405\,nm for zenith angles of 0-60$^\circ$. By inserting a dichroic just below the telescope focal plane, light redward of 420\,nm may be directed to the UVES fiber feed.
At the magnified telescope focal plane produced by the fore-optics (0.5\,arcsec/mm plate scale), two user-selectable reflective image slicers reformat the field of view. The first component, an array of six spherical slicer mirrors, decomposes the rectangular FoV into six slices, which are reimaged by six spherical camera mirrors to form the spectrograph entrance slit, defined by a slit mask. The slicer efficiency is expected to be $>90\%$ (goal 94\%). The output slit mask has six slitlets, corresponding to the six slices, each measuring 0.25”x10” on the sky for the HR slicer ($R \sim 20$K) and 1”x10” for the LR slicer ($R \sim 7$K). Further slit mask apertures are illuminated by a ThAr fiber source for use by the AFC system.
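As a quick consistency check, the physical slice sizes follow from the quoted 0.5\,arcsec/mm plate scale. This is simple arithmetic for illustration, not design values taken from the project documentation:

```python
PLATE_SCALE = 0.5  # arcsec per mm at the magnified focal plane

def on_sky_to_mm(arcsec):
    """Convert an on-sky angle to a physical size at the slicer plane."""
    return arcsec / PLATE_SCALE

# HR slice: 0.25" x 10" on sky -> 0.5 mm x 20 mm at the slicer
# LR slice: 1.00" x 10" on sky -> 2.0 mm x 20 mm at the slicer
```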
The light coming from the slit mask is folded by a Total Internal Reflection (TIR) prism and then reaches a dichroic which splits the light by reflecting the Blue-Arm passband (300–352.3\,nm) and transmitting the Red-Arm passband (346.3–405\,nm).
In order to achieve a high ($>20$\,K) resolution without the efficiency losses associated with cross-dispersed echelles, CUBES uses state-of-the-art first-order dispersing elements. Binary transmission gratings produced by E-beam microlithography with an Atomic Layer Deposition (ALD) overcoat have been identified as a suitable technology \citep[see ][]{Zeitner2022}. Their theoretical average (RCWA) diffraction efficiency is $>90\%$, and studies carried out through simulations and prototyping show that the measured efficiency is compliant with expectations, demonstrating the feasibility of the instrument in terms of light throughput.
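The geometry of such a first-order grating follows from the grating equation, $m\lambda = d(\sin\alpha + \sin\beta)$. The sketch below evaluates the Littrow angle for a 3600 l/mm grating; the Littrow configuration ($\alpha = \beta$) is an assumption for illustration, not the actual instrument geometry:

```python
import math

def littrow_angle_deg(groove_density_per_mm, wavelength_nm, order=1):
    """Littrow angle where alpha = beta: 2 d sin(theta) = m * lambda."""
    d_nm = 1.0e6 / groove_density_per_mm      # groove spacing in nm
    s = order * wavelength_nm / (2.0 * d_nm)  # sin(theta)
    return math.degrees(math.asin(s))
```

For 3600 l/mm at 350\,nm (mid-band) this gives a Littrow angle near 39$^\circ$, illustrating the steep first-order geometry needed to reach the required dispersion without an echelle.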
\subsection{Mechanics}
To achieve high resolution, CUBES requires a fairly large beam diameter of 160 mm and a collimator focal length of 3\,m. In order to limit the total mass, a light-weight construction principle has been adopted. The optical layout contains several folds and was optimized such that all optical elements of the spectrograph, from slit to detector, lie in a single plane, so all spectrograph optics can be mounted on a single optical bench. This is arguably the most stable configuration, since the dispersion direction of CUBES is parallel to the stiff surface plane of the optical bench. A general focus of the mechanical design is, in fact, to minimize the effects of gravitational bending of the instrument.
In the current design, the CUBES main mechanical structure is divided into three main components: 1. A telescope adapter, that provides a stiff connection between the Cassegrain telescope flange and the optical bench assembly; 2. An optical bench that provides a stable platform for the spectrograph optics as well as for the foreoptics; 3. An assembly to provide support for auxiliary equipment such as electronic racks, the calibration unit and vacuum equipment. In Fig.\,\ref{fig:cubesmech} the mechanical concept of CUBES is shown.
\begin{figure*}[t!]
\resizebox{6.9cm}{!}{\includegraphics[clip=true]{mech1.png}} \resizebox{6.9cm}{!}{\includegraphics[clip=true]{mech2.png}} \\
\center{ \resizebox{7.4cm}{!}{\includegraphics[clip=true]{structure.png}}}
\caption{\footnotesize
{\bf Mechanical concept for CUBES: The general layout of the main mechanical components is shown. For reference, the Telescope Adapter has a diameter of about 1.5 m. The Fore-Optics Assembly is located near the center of the bench, while the spectrograph opto-mechanics are ‘hanging’ on the bottom side of the same bench.}}
\label{fig:cubesmech}
\end{figure*}
\subsection{Software}
\label{sec:Software}
The CUBES instrument is designed including a {\it ``software ecosystem''}, whose individual packages cooperate to support the users from the proposal preparation to the science-grade spectra: i) the Exposure Time Calculator (ETC), a web application used to predict the CUBES
exposure time required to achieve a given SNR;
ii) the Observation Preparation Software (OPS): a set of tools that help users identify the best instrument settings to achieve a scientific goal;
iii) the Instrument Control Software (ICS) and the Detector Control Software (DCS): devoted to the instrument devices and detectors control;
iv) the Data Reduction Software (DRS): a collection of recipes that remove the instrument signature from the science exposures and produce the final calibrated 2D and 1D spectra;
v) the End-to-end Simulator (E2E): a package able to simulate realistic science exposures of a given science target, by taking into account the CUBES instrumental effects, and allowing early testing of the data reduction pipelines, as well as early validation of design decisions.
The above-mentioned packages are developed according to the recently published ELT software standards and will be based on the ELT Instrument Framework libraries and tools.
\section{Project and Management}
\label{sec:management}
The CUBES consortium is composed of institutes from five countries:
\begin{itemize}
\item INAF - Istituto Nazionale di Astrofisica, Italy, (consortium leader)
\item IAG-USP - Instituto de Astronomia, Geofísica e Ciências Atmosféricas (primary Brazil partner) and LNA - Laboratório Nacional de Astrofísica (secondary Brazil partner), Brazil
\item LSW - Landessternwarte, Zentrum für Astronomie der Universtität Heidelberg, Germany
\item NCAC - Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences, Poland
\item STFC-UKATC - UK Astronomy Technology Centre, (primary UK partner) and Durham University Centre for Advanced Instrumentation (secondary UK partner), United Kingdom
\end{itemize}
The Consortium organization is fairly standard, with a Principal Investigator (PI) who has ultimate responsibility for the project and acts as the formal contact point between ESO and the Consortium. The PI represents the leading technical institute, INAF. Each country is then represented in the managerial structure by one Co-PI; together they form the CUBES Executive Board (EB). The EB delegates managerial aspects to the Project Manager (PM) and scientific aspects to the Project Scientist (PS). The PM is supported by a System Engineer (SE) and a Software System Engineer (SSE), who are in charge of supervising the overall system design. The SE and SSE work in close contact with the Instrument Scientist (IS), who ensures that the adopted technical solutions match the instrument's scientific needs.
CUBES follows the standard project phasing for ESO instruments which is based on the stage-gate paradigm. Important decision points are project milestones (gates of the project) which mark the transition into a new stage when successfully completed. In the current plan CUBES will be available to the ESO user community in 2028.
\subsection{Public Engagement}
CUBES is an ambitious research program, and some of its scientific topics are related to the hottest open questions in modern astrophysics. Considering the vast discovery potential of the project and its remarkable technological research and development activities, we consider good public communication to be particularly important. Dissemination of science and technology is a fundamental part of our project. We have prepared a web page that is also a useful tool for the project as a whole\footnote{\url{https://cubes.inaf.it/home}}, as well as profiles on the main social media (e.g. Facebook and Twitter).
\section{Conclusions}
\label{sec:conclusions}
The Cassegrain U-Band Efficient Spectrograph (CUBES) for the ESO VLT has been presented. Analysis of the design shows that it will deliver outstanding ($>40$\%) throughput across its bandpass, at a mean $R > 20$\,K (HR mode) and $R \sim 7$\,K (LR mode). A fiber link from CUBES to UVES is being studied, which would provide the capability of simultaneous high-resolution spectroscopy at $\lambda > 420$\,nm. The CUBES design is able to address a large variety of scientific cases, from Solar System science to Cosmology, with no obvious technical showstopper. With contributions from institutes in five countries, the CUBES design is well placed to become the most efficient ground-based spectrograph at near-UV wavelengths, with science operations anticipated for 2028, opening a unique discovery space for the VLT for years to come.
\noindent {\bf{Affiliations}}\par
$^{6}$Ioffe Institute, HSE University, Saint-Petersburg, Russia\par
$^{7}$Universidade de São Paulo, IAG, Rua do Matão 1226, São Paulo, Brazil\par
$^{8}$Donostia International Physics Center (DIPC), Guipuzkoa, Spain\par
$^{9}$IKERBASQUE Basque Foundation for Science, Bilbao, Spain\par
$^{10}$University of Hull, E.A. Milne Centre for Astrophysics, Hull, UK\par
$^{11}$STFC - United Kingdom Astronomy Technology Centre (UK ATC), Edinburgh, UK\par
$^{12}$European Southern Observatory (ESO), ESO Headquarters
Karl-Schwarzschild-Str. 2 85748 Garching bei M\"unchen
Germany\par
$^{13}$Durham University, Department of Physics, Centre for Advanced Instrumentation, Durham, UK\par
$^{14}$INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 3, Padova, Italy\par
$^{15}$INAF - Osservatorio Astronomico di Roma, Via Frascati 33, Monte Porzio Catone, Italy\par
$^{16}$Centre for Astrophysics, University of Southern Queensland, Toowoomba 4350, Australia\par
$^{17}$INAF - Osservatorio Astronomico di Abruzzo, Via M. Maggini, I-64100 Teramo, Italy\par
$^{18}$INFN - Sezione di Pisa, Largo Pontecorvo 3, I-56127 Pisa, Italy\par
$^{19}$LNA/MCTI, Laboratorio Nacional De Astrofisica, Itajubá, Brazil\par
$^{20}$Dipartimento di Fisica, Sezione di Astronomia, Università di Trieste, Italy\par
$^{21}$Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Heidelberg, Germany\par
$^{22}$Centre for Extragalactic Astronomy, Durham University, Durham DH1 3LE, UK\par
$^{23}$Observat\'orio Nacional, Rua Gen. Jos\'e Cristino 77, S\~ao Crist\'ov\~ao, Rio de Janeiro, Brazil\par
$^{24}$Steward Observatory, University of Arizona, 950 N. Cherry Ave., Tucson, AZ, 85719\par
$^{25}$Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw, Poland\par
$^{26}$Department of Astronomy, University of Geneva, Chemin Pegasi 51, Versoix, Switzerland\par
$^{27}$Consultant Astronomical Instrumentation, Alpenrosenstr.15, 85521 Ottobrunn, Germany\par
$^{28}$Italian Space Agency - Space Science Data Centre, via del Politecnico snc, Rome, Italy\par
$^{29}$Australian Astronomical Optics, Macquarie University, North Ryde, NSW 2113, Australia\par
$^{30}$European Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA\par
$^{31}$Astrophysics Research Centre, Queen's University, Belfast, UK\par
$^{32}$University of Warwick, Department of Physics, Gibbet Hill Road, Coventry, CV7 4AL, UK\par
$^{33}$Goethe University Frankfurt, Institute for Applied Physics, Frankfurt am Main, Germany\par
$^{34}$Dipartimento di Fisica e Astronomia dell'Università, Vicolo dell'Osservatorio 3, Padova, Italy\par
$^{35}$Observat\'orio do Valongo, Universidade Federal do Rio de Janeiro, Brazil\par
$^{36}$INAF - Institute of Space Astrophysics and Planetology, Roma, Italy\par
$^{37}$Franco-Chilean Laboratory for Astronomy, Las Condes, Santiago, Chile\par
$^{38}$Institut d’Astrophysique de Paris, CNRS-SU, UMR 7095, 98bis bd Arago, Paris, France\par
$^{39}$Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK\par
$^{40}$INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy\par
$^{41}$INAF - Osservatorio di Astrofisica e Scienza dello Spazio, Bologna, Italy\\
\begin{acknowledgements}
R.S. acknowledges support by the Polish National Science Centre through project 2018/31/B/ST9/01469.
We gratefully acknowledge support from the German Federal Ministry of Education and Research (BMBF) through project 05A20VHA.
B.B. acknowledges the FAPESP grant 2014/18100-4.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
The four 8.2m telescopes of the Very Large Telescope (VLT) at the European Southern Observatory (ESO)
form the world’s most scientifically productive ground-based observatory in the visible and infrared. However, looking at the future of the VLT, there is a long-standing need for an optimised ultraviolet (UV) spectrograph \citep{Barbuy2014} with a large increase in efficiency with respect to existing instruments (UVES and X-Shooter).
The European Extremely Large Telescope (ELT), under construction in northern Chile by ESO, with a primary aperture of 39m will be unprecedented in its light-gathering power, coupled with exquisite angular resolution via correction for atmospheric turbulence by adaptive optics (AO). Unlike current large telescopes such as the VLT, the ELT has AO as an integral part of its design, with five mirrors including a large adaptive mirror (M4) and a fast tip-tilt mirror (M5). The choice of protected silver (Ag+Al) for the ELT mirror coatings (apart from M4) ensures a durable, proven surface with excellent performance across a wide wavelength range. However, the performance drops significantly in the blue-UV part of the spectrum compared to bare aluminium. ESO is actively researching alternative coatings, but in the short-medium term we can assume that the performance of the ELT in the blue-UV will be limited. Indeed, during the Phase A study of the MOSAIC multi-object spectrograph \citep{Evans2016} it was concluded that a blue-optimised instrument on the VLT could potentially be competitive with the ELT at wavelengths shorter than 400 nm (Fig.\,\ref{fig:cubelt}). In addition, this spectral range is complementary to the ELT and JWST. Motivated by this, in 2018 we revisited \citep{Evans2018} the Phase A study undertaken in 2012 of the Cassegrain U-band Brazilian-ESO Spectrograph. That study investigated a $R\sim 20$k spectrograph operating at ‘ground UV’ wavelengths (spanning 300-400 nm) to open up exciting new scientific opportunities compared to the (then) planned instrumentation suite for Paranal Observatory \citep{Barbuy2014,Bristow2014}.
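The competitiveness argument can be illustrated with simple collecting-power arithmetic: the ELT's roughly 22 times larger collecting area is offset if its end-to-end blue-UV efficiency is correspondingly lower. The efficiencies below are illustrative assumptions, not measured values:

```python
def relative_speed(d1_m, eff1, d2_m, eff2):
    """Photon-collection rate ratio of telescope 1 to telescope 2,
    assuming rate ~ (aperture diameter)^2 * end-to-end efficiency."""
    return (d1_m ** 2 * eff1) / (d2_m ** 2 * eff2)

# Illustrative: a 40%-efficient UV spectrograph on the 8.2 m VLT versus a
# hypothetical 2%-efficient blue channel on the 39 m ELT
ratio = relative_speed(8.2, 0.40, 39.0, 0.02)
```

Under these assumed efficiencies the two facilities collect blue-UV photons at a comparable rate, which is the essence of the Phase A conclusion quoted above.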
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{CUBES_Fig3B.jpg}}
\caption{\footnotesize
{\bf Comparison of the total (instrument+telescope+sky) CUBES efficiency in the $R>20$\,K mode with respect to UVES.}}
\label{fig:cubelt}
\end{figure*}
In January 2020 ESO issued a Call for Proposals for a Phase A study of a UV spectrograph to be installed at a Cassegrain focus of the VLT, with the goals of high efficiency ($>40$\%) and intermediate resolving power ($\sim$20\,K) in the ground-UV domain (305-400 nm requirement, 300-420 nm goal).
In May 2020 the CUBES (Cassegrain U-Band Efficient Spectrograph) Consortium, led by INAF\footnote{Istituto Nazionale di AstroFisica}, was selected to carry out the study.
The CUBES project completed a Phase A conceptual design study in June 2021.
After the endorsement by the ESO Council at the end of 2021,
Phase B started in February 2022 with the signature of the Construction Agreement between ESO and the leading institute of the CUBES Consortium, opening the detailed design and construction phase. Here we report the present status of the project, which will provide a world-leading UV capability for ESO from 2028 well into the ELT era. More detailed information about the project is reported in \citet{Cristiani2022SPIE}.
\section{Science with CUBES}
\label{sec:science}
The CUBES science case covers a wide range of astrophysical topics. We propose below a brief highlight of the main key cases across the Solar System, Galactic, and extra-galactic fields \citep[see also ][]{Evansetal2022}.
\subsection{Searching for water in the asteroid belt}
The search for water in our solar system is a long-standing problem \citep{Opitom2022}. It is a difficult task with ground-based facilities, given the water content of Earth's atmosphere. The typical diagnostics of water outgassing from small bodies is the OH emission at 308 nm. Observation of the OH line has been possible so far for a few active comets while they are near the Sun and Earth, with severe limitations. We still miss knowledge of water production around their orbits and the role of seasonal effects that the Rosetta mission revealed to be important. In general, most comets are simply too faint to be studied with current facilities. Main-belt comets, bodies in asteroidal orbits, can undergo activity thought to arise from sublimation. Constraining the OH emission of these objects is well beyond our current capabilities. Since main-belt comets show a size distribution similar to the general population in the asteroid belt, the detection of outgassing water with CUBES would point to a potentially large population of icy bodies. This could imply a large reservoir of water, a parameter of considerable interest in models of the formation and evolution of the inner solar system.
\subsection{Accretion, winds \& outflows in YSOs}
The evolution of circumstellar disks, mass accretion around young stars, and outflow and winds are fundamental aspects of the formation of protoplanets. Observations about these phenomena require multi-wavelengths studies of stars during the first 10\,Myr of their evolution and in particular of Classical T Tauri stars (CTTS). These young, low- to solar-mass stars are actively accreting mass from planet-forming disks. Spectroscopic surveys of CTTS in nearby star-forming regions have been carried out to study the often complex relationships between accretion, jets and disk structure. CUBES, both due to its increased UV sensitivity and coverage of a critical wavelength range, will enable more detailed studies of the accretion processes/wind-outflows than currently possible as well as studies of CCTS in star-forming region at larger distances.
\subsection{Bulk composition of exo-planets}
In the past few decades, we have learned that it is normal for stars to have planets, and the study of exoplanet formation and evolution has become a major astrophysical topic. The best approach available at present for estimating the bulk composition of exo-planet systems is based on spectroscopic analysis of evolved white dwarf (WD) stars accreting debris from tidally-disrupted planetesimals. WDs are hot, so most of their abundance diagnostics lie in the near-UV (e.g. Sc, Ti, V, Cr, Mn, Fe, Ni). However, WDs are also intrinsically faint, and only about twenty systems have precise abundances so far. CUBES will transform this exciting area of exo-planet research by increasing the sample of known exo-planetesimal compositions, providing precise constraints on the next generation of planet-formation models.
\subsection{Stellar nucleosynthesis}
The spectral features of more than a quarter of the chemical elements are only observable in the near UV, but the low efficiency of instruments in this domain severely restricted previous studies. Advancements in the field require high-resolution, near-UV spectroscopy of a large number and diversity of stars. Three main CUBES science cases deal with this topic:
i) {\it Metal-poor stars and light elements}. A key case is to probe the early chemical evolution of the Galaxy, via chemical abundance patterns in the oldest, low-mass stars that formed from material enriched by the first supernovae. The so-called Carbon-enhanced metal-poor (CEMP) stars are the perfect probes to investigate nucleosynthesis by the first stars. CUBES will enable quantitative spectroscopy for large samples of metal-poor stars, providing direct estimates for a broad range of heavy elements, as well as valuable constraints on CNO elements.
ii) {\it Heavy-element nucleosynthesis}. Stellar abundances from CUBES will provide critical tests of the various production channels of heavy elements, for both r- and s-process elements. Determining the abundances of neutron-capture elements in metal-poor stars is fundamental to understanding the physics of these processes, the chemical evolution of the Galaxy, and the origin of the Galactic halo. Since lines of many of these elements lie in the UV domain (e.g. Hf, Sn, Ag, Bi) and have only been measured for a very restricted number of stars, CUBES will play a critical role in filling this gap.
iii) {\it Beryllium}. Although Be is one of the lightest and simplest elements, questions remain about its production in the early Universe. Recent results are consistent with no primordial production, but larger samples are required to investigate this further. Only $\sim 200$ stars have Be abundances so far \citep[limited to $V$ $\sim$ 12 mag in a few hrs with UVES, ][]{UVES2000}. CUBES will provide large homogeneous samples of Be abundances in stars belonging to different populations up to three magnitudes deeper, providing new insights into its production and tracing the star-formation history of the Galaxy (see Fig.\,\ref{fig:cubesbe}).
\begin{figure*}[t!]
\resizebox{6.9cm}{!}{\includegraphics[clip=true]{be3.png}} \resizebox{6.9cm}{!}{\includegraphics[clip=true]{be2.png}} \\
\center{ \resizebox{7.4cm}{!}{\includegraphics[clip=true]{be1.png}}}
\caption{\footnotesize
{\bf Results of fitting four simulated observations of a bright (V = 12.5\,mag) subgiant star (T$_{\rm eff}$ = 5600\,K and $\log$ g = 3.4 dex) with [Fe/H] = -3.5 and $\log$(Be/H) = -13.5. The final simulated observations have
different realizations of SNR = 340 and were computed to have R = 23\,K
and sampling of 2.35\,px.
{\it Top Left:} One of the best-fit synthetic spectra is shown in the plot; it has the same Be abundance as the simulated observation. {\it Top Right:} The best fit is chosen using the coefficient of determination, $R^2$, which involves the ratio between the residual sum of squares and the total sum of squares. {\it Bottom:} The boxplot displays the best-fitting Be abundances for each of the four simulated observations (with results deviating by at most 0.05 dex from the input Be abundance).}}
\label{fig:cubesbe}
\end{figure*}
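The selection of the best-fitting abundance via the coefficient of determination, as described in the caption of Fig.\,\ref{fig:cubesbe}, can be sketched as follows. The arrays and the two-point abundance grid below are toy stand-ins for real CUBES spectra and a real synthetic grid, used only to illustrate the $R^2$-based selection:

```python
import numpy as np

def r_squared(observed, model):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - model) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

def best_fit_abundance(observed, grid):
    """Return the log(Be/H) whose synthetic spectrum maximises R^2.

    grid: dict mapping log(Be/H) -> synthetic flux array sampled on the
    same wavelength grid as the observed spectrum.
    """
    return max(grid, key=lambda ab: r_squared(observed, grid[ab]))

# Toy example: two Gaussian Be-line models of different depth
x = np.linspace(0.0, 1.0, 50)
grid = {-13.0: 1.0 - 0.20 * np.exp(-(x - 0.5) ** 2 / 0.01),
        -13.5: 1.0 - 0.10 * np.exp(-(x - 0.5) ** 2 / 0.01)}
obs = grid[-13.5].copy()
print(best_fit_abundance(obs, grid))  # -13.5
```

In practice the observed spectrum carries noise at the simulated SNR, and the grid is finely sampled in abundance; the selection logic is unchanged.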
\subsection{Primordial deuterium abundance}
Within the Standard Model of particle physics and cosmology there is still no accepted model for dark energy and dark matter, nor an explanation of why the Universe contains baryons instead of antibaryons, or even why the Universe contains baryons at all. We are also missing crucial properties of neutrinos (e.g. their mass hierarchy, why they change flavour, and how many species existed during the earliest phases of the Universe).
Some of these questions can be investigated by measuring the nuclides produced a few minutes after the Big Bang. The primordial deuterium (D/H) abundance is currently our most reliable probe of Big Bang Nucleosynthesis \citep{Cooketal2014}. CUBES will provide a large, reliable sample of D/H estimates from quasar absorption spectra. Its significant gain at the shortest wavelengths compared to existing facilities will enable observations at lower redshifts (with less contamination by the Lyman-$\alpha$ forest), giving more absorption-line systems from which to estimate D/H, with smaller uncertainties.
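The gain at short wavelengths translates directly into a lower accessible redshift for Lyman-$\alpha$ absorbers. A simple sketch (using 121.567\,nm for the rest wavelength of H\,I Lyman-$\alpha$; the 370\,nm comparison cutoff is a hypothetical value for existing ground-based spectrographs):

```python
LYA_REST_NM = 121.567  # rest wavelength of H I Lyman-alpha

def min_redshift(blue_cutoff_nm, rest_nm=LYA_REST_NM):
    """Lowest redshift at which a rest-frame UV line enters the bandpass."""
    return blue_cutoff_nm / rest_nm - 1.0

# CUBES blue limit (~305 nm) vs a hypothetical ~370 nm cutoff
print(round(min_redshift(305.0), 2))  # 1.51
print(round(min_redshift(370.0), 2))  # 2.04
```

A bandpass reaching 305\,nm thus opens up D/H measurements in absorbers at $z \gtrsim 1.5$ rather than $z \gtrsim 2$.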
\subsection{The missing baryonic mass at the cosmic noon}
Remarkable progress on the missing baryon problem has recently been made at low redshifts by studying the dispersion measure of Fast Radio Bursts (FRBs) \citep{Macquart2020} and at $z > 1.5$ by observations and simulations of the Lyman forest \citep[e.g., ][]{Weinberg1997}. Still, we have insufficient knowledge of how baryonic matter is distributed among the different gaseous components, and we need to better constrain the mechanisms (stellar and AGN feedback, accretion, etc.) that determine the observed distribution. An efficient UV spectrograph with relatively high resolution offers the possibility to dig into the complex nature of the inter- and circum-galactic gas at $z \sim$ 1.5 to 3, via two experiments with quasar absorption lines:
i) The baryons in the diffuse IGM are studied through the detection and analysis of Lyman-$\alpha$ lines at $z \simeq 1.5$ to 2.3. This redshift range, immediately after the era of peak star formation in the Universe, is poorly investigated due to observational difficulties, such as the low efficiency of ground-based spectrographs in the UV, but is critical to connect the low- and high-redshift results.
ii) Observing the O VI absorption lines at $1.9 < z < 2.9$ to trace the warm-hot gas at $T > 10^5$ K, associated with the IGM or with the CGM \citep{Lehner2014}. In all cases spectroscopy in the wavelength range at $\lambda > 400$ nm is necessary to complement the information on the neutral IGM component derived from the
Lyman forest, checking the associated metal absorption lines (in particular those due to C IV and Si IV) and deriving the contribution of the ionised gas. It is also needed to complete the coverage of the associated H I and metal transitions. To this aim, spectra of the same targets obtained at higher resolution with, e.g., UVES/VLT (simultaneously via a fiber link, or retrieved from the archives) will be needed.
\subsection{Cosmic UV background}
Galaxies are likely able to produce most of the UV emissivity needed for cosmic reionisation at high redshift, but quasars possibly also contribute. Estimates of the escape fraction (f$_{\rm esc}$) of hydrogen-ionising photons able to escape a galaxy are close to 100\% for quasars. However, the volume density of low- and intermediate-luminosity quasars at $z$ $>$ 4 is still uncertain, so it is unclear whether they are the dominant source of ionisation. In contrast, star-forming galaxies are more numerous, but estimates of f$_{\rm esc}$ from observations of the Lyman continuum ($\lambda_{\rm rest}$ $<$ 91.2 nm) have uncertainties of tens of percent and are limited to a handful of systems at $z$ = 2.5 to 4. To be detectable from Earth, escaping photons have to survive absorption along the line of sight by the intergalactic medium, which becomes stronger with redshift and varies significantly between sightlines. Given these competing factors, the ideal redshift range for ground-based observations of the Lyman continuum of a galaxy is $z$ = 2.4 to 3.5, i.e. from about 410\,nm down to the atmospheric cut-off. For this reason CUBES, thanks to its high throughput, could be an asset for this science case. Furthermore, since the galaxies to be observed are extremely faint, this science case is also one of the main drivers of the low-resolution mode.
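The quoted redshift window follows directly from where the redshifted Lyman limit lands relative to the bandpass; a quick bookkeeping check (using 91.2\,nm for the rest-frame Lyman limit):

```python
LYMAN_LIMIT_NM = 91.2  # rest-frame wavelength of the H I Lyman limit

def observed_lyman_limit(z):
    """Observed wavelength (nm) of the Lyman limit at redshift z."""
    return LYMAN_LIMIT_NM * (1.0 + z)

for z in (2.4, 3.5):
    print(z, round(observed_lyman_limit(z), 1))
# z = 2.4 -> 310.1 nm (near the atmospheric cut-off)
# z = 3.5 -> 410.4 nm
```

At $z < 2.4$ the Lyman continuum falls below the atmospheric cut-off, while at $z > 3.5$ the increasing line-of-sight IGM absorption dominates.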
\subsection{Transient astronomy}
Time-domain astronomy is one of the most active branches of modern astrophysics. In a few years, new observational facilities, specifically designed with the goal of securing high-cadence observations of large fractions of the nightly sky, will become operational. Equally important, ``big-data'' algorithms are increasingly being developed and applied to manage the large amount of data provided by these facilities. The discovery space opened by rare or peculiar transients is very large, involving all categories of sources. For both low- and high-redshift objects, a highly efficient UV spectrograph can shed light on a variety of physical ingredients and, in this context, the possible synergy between the CUBES and UVES spectrographs could open the exciting prospect of continuous UV-blue spectral coverage.
\section{From Science to Requirements}
\label{sec:requirements}
The science cases of interest for the CUBES community defined the reference for the development of the Top-Level Requirements (TLRs):
\begin{itemize}
\item Spectral range: CUBES must provide a spectrum of the target over the entire wavelength range of 305 – 400 nm in a single exposure (goal: 300 – 420 nm).
\item Efficiency: The efficiency of the spectrograph, from slit to detector (included), has to be $>40$\% for 305 – 360 nm (goal $>45$\%, with $>50$\% at 313 nm), and $>37$\% (goal $40$\%) between 360 and 400 nm.
\item Resolving power ($R$): In any part of the spectrum, $R$ needs to be $>19$\,K, with an average value $>20$\,K.
\item Signal-to-noise (S/N) ratio: In a 1 hr exposure the spectrograph must be able to obtain, for an A0-type star of $U$ = 17.5 mag (goal $U \ge 18$ mag), a S/N = 20 at 313\,nm for a 0.007\,nm wavelength pixel.
\end{itemize}
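Requirements like the last one can be checked to first order with the standard CCD signal-to-noise equation. The sketch below is purely illustrative: the count rates and detector parameters are hypothetical placeholders, not official CUBES ETC numbers.

```python
import math

def snr(source_rate, sky_rate, exptime, n_pix, read_noise, dark_rate=0.0):
    """Point-source S/N per wavelength pixel from the standard CCD equation.

    source_rate and sky_rate are detected e-/s summed over the extraction
    aperture; dark_rate is e-/s per detector pixel; read_noise is e- RMS
    per pixel; n_pix is the number of detector pixels in the aperture.
    """
    signal = source_rate * exptime
    variance = (signal + sky_rate * exptime
                + n_pix * (dark_rate * exptime + read_noise ** 2))
    return signal / math.sqrt(variance)

# Hypothetical illustration: ~0.17 e-/s from the source in a 1 h exposure
print(round(snr(0.17, 0.02, 3600.0, 30, 3.0), 1))  # 19.8
```

With these (assumed) rates, a 1\,h exposure reaches S/N $\approx$ 20, i.e. the order of magnitude set by the TLR above.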
During the Phase A study, an important addition was identified: the provision of a second (lower) resolving power ($R \sim 7$\,K), to enable background-limited observations of faint sources where spectral resolution is less critical.
There are also studies for a fiber link to UVES, in order to provide simultaneous observations at longer wavelengths. This would considerably broaden the scientific capabilities of the project.
\section{Instrument Design Overview}
\label{sec:design}
The instrument is designed to be used alone or combined with the fiber feed to UVES. Two resolution modes (HR, $R > 20$\,K; LR, $R \sim 7$\,K) are enabled by the exchange of two independent image slicers. An Active Flexure Compensation (AFC) system is part of the baseline. All the top-level requirements (TLRs), in particular those related to efficiency, are met (see Fig.\,\ref{fig:whcubes} for a view of the whole instrument).
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{cubes_scheme_PDR_v2.png}}
\caption{\footnotesize
{\bf Functional design of CUBES. The light path runs from top to bottom. The acronyms shown are: FL cam (fibre-link A\&G camera),
ND filters (Neutral Density filters), WL filter (Wavelength filter),
ADC (Atmospheric Dispersion Corrector), P (single Pin-hole mask), Ps (multi pin-holes mask), A\&G (Acquisition and Guiding), Cal mirror (Calibration mirror), DC (Dichroic), AFC (Active Flexure Compensation),
LR (Low Resolution), HR (High Resolution), Cryo (Cryocooler),
FF LED (Flat-Field LED).}}
\label{fig:whcubes}
\end{figure*}
\subsection{Instrument sub-systems and Operations}
The current baseline design of CUBES includes a
calibration unit that provides the light sources necessary to register frames for flat fielding, wavelength calibration, alignment, and the AFC. A foreoptics sub-system includes an atmospheric dispersion corrector (ADC) and acquisition and guiding functionalities. There are two user-selectable image slicers (to enable the different spectral resolutions), followed by a dichroic beamsplitter that feeds two spectrograph arms, each consisting of a fused-silica single-lens collimator, a first-order transmission grating with a groove density of up to 3600 l/mm, and a three-lens all-silica camera. Each arm has its own detector cryostat, with a 9k or 10k CCD detector (there is also an option to increase the sampling by 11\% by using the STA 10k 9\,$\mu$m detector instead of the E2V 9k 10\,$\mu$m one), read-out electronics, and cryo-vacuum components (both hardware and specific control electronics). The Instrument Control Electronics are based on PLCs compliant with the latest ELT design standards, and control all optomechanical and cryo/vacuum functions. The scientific detector system is controlled by ESO's new NGC2 standard. The Instrument Software package comprises instrument control software, data-reduction and simulation tools (see Sect.\,\ref{sec:Software} for details). Finally, a Fiber Link unit provides the option of simultaneous observations with UVES by injecting red light into optical fibers (1 object, 3 sky) that subtend 1 arcsec on the sky and run approximately 40\,m to UVES on the Nasmyth platform \citep{UVES2000}.
\subsection{Optics and Dispersing Elements}
Using two lens doublets and a number of folding prisms, the foreoptics relays a FoV of 6”x10” (LR) or 1.5”x10” (HR) at the telescope focus to the entrance plane of the spectrograph. Zero-deviation ADC prisms in the parallel beam between the doublets provide atmospheric dispersion correction over the range 300-405\,nm for zenith angles of 0-60$^\circ$. By inserting a dichroic just below the telescope focal plane, light redward of 420\,nm may be directed to the UVES fiber feed.
At the magnified telescope focal plane produced by the foreoptics (0.5\,arcsec/mm plate scale), two user-selectable reflective image slicers are used to reformat the field of view. The first component, an array of six spherical slicer mirrors, decomposes the rectangular FoV into six slices, which are reimaged by six spherical camera mirrors onto the spectrograph entrance slit, defined by a slit mask. The slicer efficiency is expected to be $>90\%$ (goal 94\%). The output slit mask has six slitlets, corresponding to the six slices, each one measuring 0.25”x10” on the sky for the HR slicer ($R \sim 20$\,K) and 1”x10” for the LR slicer ($R \sim 7$\,K). Further slit-mask apertures are illuminated by a ThAr fiber source for use by the AFC system.
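Assuming the 0.5\,arcsec/mm plate scale quoted above also applies at the slit mask, the physical slitlet dimensions follow from simple geometry (an illustrative sketch, not an engineering specification):

```python
PLATE_SCALE = 0.5  # arcsec per mm at the magnified focal plane

def slit_size_mm(width_arcsec, length_arcsec, scale=PLATE_SCALE):
    """Physical (width, length) in mm of a slitlet of given angular size."""
    return width_arcsec / scale, length_arcsec / scale

print(slit_size_mm(0.25, 10.0))  # HR slitlet: (0.5, 20.0) mm
print(slit_size_mm(1.0, 10.0))   # LR slitlet: (2.0, 20.0) mm
```

The factor-of-four difference in slit width between the two slicers mirrors the roughly factor-of-three difference between the HR and LR resolving powers.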
The light coming from the slit mask is folded by a Total Internal Reflection (TIR) prism and then reaches a dichroic which splits the light by reflecting the Blue-Arm passband (300–352.3\,nm) and transmitting the Red-Arm passband (346.3–405\,nm).
In order to achieve high ($>20$\,K) resolution without the efficiency losses associated with cross-dispersed echelles, CUBES uses state-of-the-art first-order dispersing elements. Binary transmission gratings produced by E-beam microlithography with an Atomic Layer Deposition (ALD) overcoat have been identified as a suitable technology \citep[see ][]{Zeitner2022}. Their theoretical average (RCWA) diffraction efficiency is $>90\%$, and studies carried out through simulations and prototyping show that the measured efficiency is compliant with expectations, demonstrating the feasibility of the instrument in terms of light throughput.
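The steep grating angles implied by such groove densities follow from the grating equation, $\sin\theta_i + \sin\theta_d = m\lambda g$. The sketch below assumes a symmetric (near-Littrow) transmission configuration purely for illustration; it is not necessarily the as-built CUBES geometry:

```python
import math

def symmetric_angle_deg(wavelength_nm, grooves_per_mm, order=1):
    """Incidence (= diffraction) angle for a symmetric transmission-grating
    layout, from 2 * sin(theta) = m * lambda * g."""
    s = order * wavelength_nm * 1e-6 * grooves_per_mm / 2.0
    return math.degrees(math.asin(s))

# First order, 3600 l/mm, at a mid-band wavelength of 350 nm
print(round(symmetric_angle_deg(350.0, 3600.0)))  # ~39 deg
```

A single first-order pass at $\sim$39$^\circ$ delivers the dispersion in one diffraction event, avoiding the multiple reflective surfaces (and associated UV losses) of a cross-dispersed echelle.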
\subsection{Mechanics}
To achieve high resolution, CUBES requires a fairly large beam diameter of 160\,mm and a collimator focal length of 3\,m. In order to limit the total mass, a light-weight construction principle has been adopted. The optical layout contains several folds and was optimized such that all optical elements of the spectrograph, from slit to detector, lie in a single plane, so that all spectrograph optics can be mounted on a single optical bench. This is arguably the most stable configuration, since the dispersion direction of CUBES is parallel to the stiff surface plane of the optical bench. A general focus of the mechanical design is, in fact, to minimize the effects of gravitational bending of the instrument.
In the current design, the CUBES main mechanical structure is divided into three main components: 1. A telescope adapter, that provides a stiff connection between the Cassegrain telescope flange and the optical bench assembly; 2. An optical bench that provides a stable platform for the spectrograph optics as well as for the foreoptics; 3. An assembly to provide support for auxiliary equipment such as electronic racks, the calibration unit and vacuum equipment. In Fig.\,\ref{fig:cubesmech} the mechanical concept of CUBES is shown.
\begin{figure*}[t!]
\resizebox{6.9cm}{!}{\includegraphics[clip=true]{mech1.png}} \resizebox{6.9cm}{!}{\includegraphics[clip=true]{mech2.png}} \\
\center{ \resizebox{7.4cm}{!}{\includegraphics[clip=true]{structure.png}}}
\caption{\footnotesize
{\bf Mechanical concept for CUBES: The general layout of the main mechanical components is shown. For reference, the Telescope Adapter has a diameter of about 1.5 m. The Fore-Optics Assembly is located near the center of the bench, while the spectrograph opto-mechanics are ‘hanging’ on the bottom side of the same bench.}}
\label{fig:cubesmech}
\end{figure*}
\subsection{Software}
\label{sec:Software}
The CUBES instrument is designed including a {\it ``software ecosystem''}, whose individual packages cooperate to support the users from the proposal preparation to the science-grade spectra: i) the Exposure Time Calculator (ETC), a web application used to predict the CUBES
exposure time required to achieve a given SNR;
ii) the Observation Preparation Software (OPS): a set of tools that help users identify the best instrument settings to achieve a given scientific goal;
iii) the Instrument Control Software (ICS) and the Detector Control Software (DCS): devoted to the instrument devices and detectors control;
iv) the Data Reduction Software (DRS): a collection of recipes that remove the instrument signature from science exposures and produce the final calibrated 2D and 1D spectra;
v) the End-to-end Simulator (E2E): a package able to simulate realistic science exposures of a given science target, by taking into account the CUBES instrumental effects, and allowing early testing of the data reduction pipelines, as well as early validation of design decisions.
The above-mentioned packages are developed according to the recently published ELT software standards, and will be based on the ELT Instrument Framework libraries and tools.
\section{Project and Management}
\label{sec:management}
The CUBES consortium is composed of institutes from five countries:
\begin{itemize}
\item INAF - Istituto Nazionale di Astrofisica, Italy, (consortium leader)
\item IAG-USP - Instituto de Astronomia, Geofísica e Ciências Atmosféricas (primary Brazil partner) and LNA - Laboratório Nacional de Astrofísica (secondary Brazil partner), Brazil
\item LSW - Landessternwarte, Zentrum für Astronomie der Universtität Heidelberg, Germany
\item NCAC - Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences, Poland
\item STFC-UKATC - UK Astronomy Technology Centre, (primary UK partner) and Durham University Centre for Advanced Instrumentation (secondary UK partner), United Kingdom
\end{itemize}
The Consortium organization is fairly standard, with a Principal Investigator (PI) who has ultimate responsibility for the project and acts as the formal contact point between ESO and the Consortium. The PI represents the leading technical institute, INAF. Each country is then represented in the managerial structure by one Co-PI; together they form the CUBES Executive Board (EB). Managerial aspects are delegated by the EB to the Project Manager (PM), and scientific aspects to the Project Scientist (PS). The PM is supported by a System Engineer (SE) and by a Software System Engineer (SSE), who are in charge of supervising the overall system design. The SE and SSE work in close contact with the Instrument Scientist (IS), who makes sure that the adopted technical solutions match the instrument's scientific needs.
CUBES follows the standard project phasing for ESO instruments, which is based on the stage-gate paradigm. Important decision points are project milestones (the gates of the project), which, when successfully completed, mark the transition into a new stage. In the current plan, CUBES will be available to the ESO user community in 2028.
\subsection{Public Engagement}
CUBES is an ambitious research program, and some of its scientific topics touch on the hottest open questions in modern astrophysics. Considering the vast discovery potential of the project and its remarkable research and development activities, we consider good public communication particularly important. Dissemination of science and technology is a fundamental part of our project. We have prepared a web page, which is also a useful tool for the project as a whole\footnote{\url{https://cubes.inaf.it/home}}, and profiles on the main social media, e.g. Facebook and Twitter.
\section{Conclusions}
\label{sec:conclusions}
The Cassegrain U-Band Efficient Spectrograph (CUBES) for the ESO VLT has been presented. Analysis of the design shows that it will deliver outstanding ($>40$\%) throughput across its bandpass, at a mean $R > 20$\,K (HR mode) or $R \sim 7$\,K (LR mode). A fiber link from CUBES to UVES is being studied, which would provide the capability of simultaneous high-resolution spectroscopy at $\lambda > 420$\,nm. The CUBES design is able to address a large variety of science cases, from Solar System science to cosmology, with no obvious technical showstopper. With contributions from institutes in five countries, the CUBES design is well placed to become the most efficient ground-based spectrograph at near-UV wavelengths, with science operations anticipated for 2028, opening a unique discovery space for the VLT for years to come.
\noindent {\bf{Affiliations}}\par
$^{6}$Ioffe Institute, HSE University, Saint-Petersburg, Russia\par
$^{7}$Universidade de São Paulo, IAG, Rua do Matão 1226, São Paulo, Brazil\par
$^{8}$Donostia International Physics Center (DIPC), Guipuzkoa, Spain\par
$^{9}$IKERBASQUE Basque Foundation for Science, Bilbao, Spain\par
$^{10}$University of Hull, E.A. Milne Centre for Astrophysics, Hull, UK\par
$^{11}$STFC - United Kingdom Astronomy Technology Centre (UK ATC), Edinburgh, UK\par
$^{12}$European Southern Observatory (ESO), ESO Headquarters
Karl-Schwarzschild-Str. 2 85748 Garching bei M\"unchen
Germany\par
$^{13}$Durham University, Department of Physics, Centre for Advanced Instrumentation, Durham, UK\par
$^{14}$INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 3, Padova, Italy\par
$^{15}$INAF - Osservatorio Astronomico di Roma, Via Frascati 33, Monte Porzio Catone, Italy\par
$^{16}$Centre for Astrophysics, University of Southern Queensland, Toowoomba 4350, Australia\par
$^{17}$INAF - Osservatorio Astronomico di Abruzzo, Via M. Maggini, I-64100 Teramo, Italy\par
$^{18}$INFN - Sezione di Pisa, Largo Pontecorvo 3, I-56127 Pisa, Italy\par
$^{19}$LNA/MCTI, Laboratorio Nacional De Astrofisica, Itajubá, Brazil\par
$^{20}$Dipartimento di Fisica, Sezione di Astronomia, Università di Trieste, Italy\par
$^{21}$Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Heidelberg, Germany\par
$^{22}$Centre for Extragalactic Astronomy, Durham University, Durham DH1 3LE, UK\par
$^{23}$Observat\'orio Nacional, Rua Gen. Jos\'e Cristino 77, S\~ao Crist\'ov\~ao, Rio de Janeiro, Brazil\par
$^{24}$Steward Observatory, University of Arizona, 950 N. Cherry Ave., Tucson, AZ, 85719\par
$^{25}$Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw, Poland\par
$^{26}$Department of Astronomy, University of Geneva, Chemin Pegasi 51, Versoix, Switzerland\par
$^{27}$Consultant Astronomical Instrumentation, Alpenrosenstr.15, 85521 Ottobrunn, Germany\par
$^{28}$Italian Space Agency - Space Science Data Centre, via del Politecnico snc, Rome, Italy\par
$^{29}$Australian Astronomical Optics, Macquarie University, North Ryde, NSW 2113, Australia\par
$^{30}$European Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA\par
$^{31}$Astrophysics Research Centre, Queen's University, Belfast, UK\par
$^{32}$University of Warwick, Department of Physics, Gibbet Hill Road, Coventry, CV7 4AL, UK\par
$^{33}$Goethe University Frankfurt, Institute for Applied Physics, Frankfurt am Main, Germany\par
$^{34}$Dipartimento di Fisica e Astronomia dell'Università, Vicolo dell'Osservatorio 3, Padova, Italy\par
$^{35}$Observat\'orio do Valongo, Universidade Federal do Rio de Janeiro, Brazil\par
$^{36}$INAF - Institute of Space Astrophysics and Planetology, Roma, Italy\par
$^{37}$Franco-Chilean Laboratory for Astronomy, Las Condes, Santiago, Chile\par
$^{38}$Institut d’Astrophysique de Paris, CNRS-SU, UMR 7095, 98bis bd Arago, Paris, France\par
$^{39}$Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK\par
$^{40}$INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy\par
$^{41}$INAF - Osservatorio di Astrofisica e Scienza dello Spazio, Bologna, Italy\\
\begin{acknowledgements}
R.S. acknowledges support by the Polish National Science Centre through project 2018/31/B/ST9/01469.
We gratefully acknowledge support from the German Federal Ministry of Education and Research (BMBF) through project 05A20VHA.
B.B. acknowledges the FAPESP grant 2014/18100-4.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
\end{acknowledgements}
\bibliographystyle{aa}